100s of Free Microsoft eBooks

Microsoft Director of Sales Excellence, Eric Ligman, writes a post each year giving away hundreds of free Microsoft eBooks.

These are legal DRM-free copies (PDF/EPUB/MOBI) on various Microsoft technologies like Windows, SQL Server, Azure, SharePoint, PowerShell, Visual Studio, Office, Dynamics etc.

The titles repeat across the links below, so I’d suggest you begin browsing at the latest year’s URL and work backwards from there.

2017

https://blogs.msdn.microsoft.com/mssmallbiz/2017/07/11/largest-free-microsoft-ebook-giveaway-im-giving-away-millions-of-free-microsoft-ebooks-again-including-windows-10-office-365-office-2016-power-bi-azure-windows-8-1-office-2013-sharepo/

2016

https://blogs.msdn.microsoft.com/mssmallbiz/2016/07/10/free-thats-right-im-giving-away-millions-of-free-microsoft-ebooks-again-including-windows-10-office-365-office-2016-power-bi-azure-windows-8-1-office-2013-sharepoint-2016-sha/

2015

https://blogs.msdn.microsoft.com/mssmallbiz/2015/07/07/im-giving-away-millions-of-free-microsoft-ebooks-again-including-windows-10-windows-8-1-windows-8-windows-7-office-2013-office-365-sharepoint-2013-dynamics-crm-powershell-exchange-se/

2014

https://blogs.msdn.microsoft.com/mssmallbiz/2014/07/07/largest-collection-of-free-microsoft-ebooks-ever-including-windows-8-1-windows-8-windows-7-office-2013-office-365-office-2010-sharepoint-2013-dynamics-crm-powershell-exchange-server-lync-2/

2013

https://blogs.msdn.microsoft.com/mssmallbiz/2013/06/18/huge-collection-of-free-microsoft-ebooks-for-you-including-office-office-365-sharepoint-sql-server-system-center-visual-studio-web-development-windows-windows-azure-and-windows-server/

2012

https://blogs.msdn.microsoft.com/mssmallbiz/2012/07/27/large-collection-of-free-microsoft-ebooks-for-you-including-sharepoint-visual-studio-windows-phone-windows-8-office-365-office-2010-sql-server-2012-azure-and-more/

Download Free SQL Server 2016 Developer Edition

Microsoft announced the general availability of SQL Server 2016 today. This version has many groundbreaking updates for performance, security and business intelligence. On top of that, the Developer Editions of SQL Server 2016 and 2014 are now completely free! There is no restriction on features, capabilities or how long you can use them. Read on to find out how to download the free SQL Server 2016 Developer Edition and other goodies.

In these Developer Editions, the databases are not capped at some size limit, no features are disabled, usage is not restricted to a trial period (such as 180 days), and you don’t have to pay anything. These Developer Editions have all the features of the Enterprise Edition, but they must not be used for production.

Download Free SQL Server Developer Edition

To download the Developer Edition, all you need is a free membership in Visual Studio Dev Essentials; then go to the following URL to download the installation file.
Download Free SQL Server 2014 and 2016 Developer Edition

Download Free SQL Server Tools

If you don’t plan on downloading the full install of SQL Server yet, you can still get the latest tools.

New 2016 Version of SQL Server Management Studio (SSMS)

SSMS is a free client tool for general SQL Server management and TSQL development. It is compatible with prior versions of SQL Server too.
https://msdn.microsoft.com/library/mt238290.aspx

New 2016 Version of SQL Server Data Tools (SSDT)

SSDT is a free development tool that is a subset of Visual Studio IDE. It is focused on database and business intelligence related development i.e. TSQL, SSIS, SSRS and SSAS.
https://msdn.microsoft.com/en-us/library/mt204009.aspx

Download Free eBook

While you’re on your downloading spree, also get the final version of the free MS Press eBook that gives an overview of the latest features in SQL Server 2016.

Introducing Microsoft SQL Server 2016
Mission-Critical Applications, Deeper Insights, Hyperscale Cloud

Authors: Stacia Varga (t), Denny Cherry (t), Joseph D’Antoni (t)
Pages: 215
ISBN: 978-1-5093-0195-9
https://info.microsoft.com/Introducing-SQL-Server-2016-eBook.html

Although the title says that it is an introductory book, it is certainly not a beginner-level text. In its nine chapters, it introduces the new features of SQL Server 2016. So it will be more useful if you have already worked with SQL Server for a while and are aware of the features (or possibilities for enhancements) in the previous versions.

Free SQL Server eBook - Introducing SQL Server 2016

Simple Fix to a Backup Restore Error Due to Disk or Cluster Resource Issue on SQL Server

One of our database restore attempts failed with an error message that mentioned cluster resources. At least the error message indicated that the issue was not related to backward compatibility but rather to a physical resource or cluster setting.

Error Details

The Error Message Window –

SQL Restore Error – sqlerudition.wordpress.com

The Error Message –

TITLE: Microsoft SQL Server Management Studio
------------------------------

Restore failed for Server 'MYDEVSQLSERVER'.  (Microsoft.SqlServer.SmoExtended)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Restore+Server&LinkId=20476

------------------------------
ADDITIONAL INFORMATION:

System.Data.SqlClient.SqlError: Cannot use file 'J:\MSSQL10_50\MSSQL\DATA\MyDatabaseName.mdf' for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it. (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&LinkId=20476

------------------------------
BUTTONS:

OK
------------------------------

Both the links in the error message above pointed to a missing-information page on the Microsoft website –

No information on the restore error – sqlerudition.wordpress.com
Details
ID: Restore Server
Source:
We’re sorry
There is no additional information about this issue in the Error and Event Log Messages or Knowledge Base databases at this time. You can use the links in the Support area to determine whether any additional information might be available elsewhere.
Thank you for searching on this message; your search helps us identify those areas for which we need to provide more information.

Cause and Resolution

We determined the cause rather quickly. The source system of the backup file had a drive letter layout that was different from the destination server’s. The restore process was trying to create the data files on a drive that didn’t exist on the destination! So the location of the files was changed in the restore dialog to a drive letter that exists on the destination server, and after that the restore progressed normally.
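
The same relocation can also be done in TSQL with the MOVE clause of RESTORE DATABASE. Here is a minimal sketch of the idea – the backup path, logical file names and target folders below are hypothetical, so take the logical names from RESTORE FILELISTONLY against your own backup file –

-- Inspect the logical file names stored in the backup
RESTORE FILELISTONLY
FROM DISK = N'K:\Backups\MyDatabaseName.bak';
GO

-- Restore, relocating each file to a drive that exists on the destination server
RESTORE DATABASE [MyDatabaseName]
FROM DISK = N'K:\Backups\MyDatabaseName.bak'
WITH MOVE N'MyDatabaseName_Data' TO N'E:\MSSQL\DATA\MyDatabaseName.mdf',
     MOVE N'MyDatabaseName_Log'  TO N'E:\MSSQL\LOG\MyDatabaseName_log.ldf',
     STATS = 10;
GO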

Resolve Error: 102 while creating Full-Text Index Stoplist in SQL Server

One of my SQL Server databases was returning error 102 while creating a full-text stoplist. We were trying to create a stoplist based on the system stoplist and later also tried to create a blank stoplist. The error happened both via SSMS and via the equivalent TSQL commands.

The Error, 102

The following TSQL gave the error –


USE [DEMO_Database]
GO
CREATE FULLTEXT STOPLIST [DemoStopList]
AUTHORIZATION [dbo];
GO

CREATE FULLTEXT STOPLIST [DemoStopList]
FROM SYSTEM STOPLIST
AUTHORIZATION [dbo];
GO

The error dialog box –

Full-text index stoplist error 102
Image 1 (click to enlarge)

The text in the error message –

TITLE: Microsoft SQL Server Management Studio
------------------------------

Cannot execute changes.

------------------------------
ADDITIONAL INFORMATION:

Create failed for FullTextStopList 'Demo'.  (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.5058.0+((SQL11_PCU_Main).140514-1820+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Create+FullTextStopList&LinkId=20476

------------------------------

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

------------------------------

Incorrect syntax near 'STOPLIST'. (Microsoft SQL Server, Error: 102)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=102&LinkId=20476

------------------------------
BUTTONS:

OK
------------------------------

The Cause

I looked at the MSDN page related to the TSQL command to check if I was using the right syntax.

REFERENCE:

  • CREATE FULLTEXT STOPLIST (Transact-SQL)
    https://msdn.microsoft.com/en-us/library/Cc280405(v=sql.105).aspx

My syntax was correct but there was something else on the page that looked relevant. Right at the top of the documentation page is the following message –

Important
CREATE FULLTEXT STOPLIST, ALTER FULLTEXT STOPLIST, and DROP FULLTEXT STOPLIST are supported only under compatibility level 100. Under compatibility levels 80 and 90, these statements are not supported. However, under all compatibility levels the system stoplist is automatically associated with new full-text indexes.

To verify if the compatibility level of my database could indeed be an issue, I checked the properties of the database by –

SELECT
is_fulltext_enabled,
compatibility_level
FROM
sys.databases
WHERE [name] = 'DEMO_Database' -- the database in question
GO

/* Results:
is_fulltext_enabled    compatibility_level
0                      90
*/

There you have it! My database was originally created on a SQL Server 2005 installation, so its compatibility level was 90, and that was why the CREATE/ALTER/DROP STOPLIST commands were unavailable. The current server I was working on was SQL Server 2008 R2, which can be checked by –

SELECT @@VERSION
GO

/* Result:
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1
*/

So the resolution to the error lies in changing the compatibility level. As per the documentation, the highest compatibility level I could go to on a SQL Server 2008 R2 installation was 100.

REFERENCE:

  • View or Change the Compatibility Level of a Database
    https://msdn.microsoft.com/en-us/subscriptions/index/bb933794
  • ALTER DATABASE Compatibility Level (Transact-SQL)
    https://msdn.microsoft.com/en-us/subscriptions/index/bb510680

Changing the Compatibility Level

I checked that no other users were connected to the database and then issued this command to change the compatibility level.

USE [master]
GO
ALTER DATABASE [DEMO_database]
SET COMPATIBILITY_LEVEL = 100;
GO

It ran successfully and I could verify in the sys.databases catalog view that the compatibility level had changed to 100.
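
The verification itself is a one-liner against sys.databases –

SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'DEMO_database';

/* Results:
name            compatibility_level
DEMO_database   100
*/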

Now I was able to create a Stop List, Full-text Catalog and a Full-text Index on my table, and was able to run queries using the CONTAINS and CONTAINSTABLE keywords.
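
For illustration, those subsequent steps looked roughly like the sketch below. The stoplist name is from this demo, but the table, column, key index and search term are hypothetical placeholders for my actual objects –

USE [DEMO_Database]
GO

-- The stoplist statement from the beginning of the post now succeeds
CREATE FULLTEXT STOPLIST [DemoStopList]
FROM SYSTEM STOPLIST
AUTHORIZATION [dbo];
GO

-- A full-text catalog and index (object names are placeholders)
CREATE FULLTEXT CATALOG [DemoCatalog] AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON [dbo].[DemoTable] ([DemoTextColumn])
KEY INDEX [PK_DemoTable]
ON [DemoCatalog]
WITH STOPLIST = [DemoStopList];
GO

-- A sample query using CONTAINS
SELECT *
FROM [dbo].[DemoTable]
WHERE CONTAINS([DemoTextColumn], N'demo');
GO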

Fixed? Not so fast!

Interestingly, even though I could use the Full-text features now, the is_fulltext_enabled property still showed up as 0 (i.e. Disabled).

That was fixed by running the following –

EXEC [DEMO_Database].[dbo].[sp_fulltext_database]
@action = 'enable'
GO

REFERENCE:

  • sp_fulltext_database (Transact-SQL)
    https://msdn.microsoft.com/en-us/library/ms190321(v=sql.105).aspx

Gotcha – SSIS Import/Export Wizard Can Kill Your Diagrams

Some things are meant to be learnt the hard way. And that is how I learnt about today’s gotcha.

Overview of [sysdiagrams]

I have been working on an ERD (Entity Relationship Diagram) recently and used the Database Diagram feature in SSMS for this purpose. When you try to create a diagram for the first time in a database, a message box asks you if you would like to create diagramming objects.

Confirmation dialog
Image 1 (Click to enlarge)

On clicking Yes, a system table by the name of [sysdiagrams] is created in the same database that you are creating the diagram in. The diagrams are stored in this table. SSMS shows the diagram in the Database Diagrams node of the Object Explorer tree view.

Diagram and its table
Image 2 (Click to enlarge)

The Scenario

I was creating my diagram in a test environment. At some point I had to refresh all data from the production environment. The easiest way for me to do a full refresh is to use the Import and Export Wizard. The wizard can be launched either via the SSMS context menu or in Business Intelligence Development Studio as an SSIS project. As usual for quick data refreshes, I selected all tables using the top-left Source checkbox in the wizard, and chose the options to delete and reinsert all rows with identity values.

Select all tables and reinsert rows
Image 3 (Click to enlarge)

When the wizard ran successfully, my database diagram at the destination test system was missing!

Gotcha

Upon some research and trials I found that if the diagramming capabilities are enabled on both the source and destination servers, the wizard automatically includes the [sysdiagrams] table in the list of tables to refresh. As you can see, there are no other system tables in the wizard except [sysdiagrams], so it is easy to miss in a long list.

sysdiagrams is included
Image 4 (Click to enlarge)

So in my case, all data in the destination [sysdiagrams] table was deleted. There were no diagrams at the source so nothing was imported for this table. This outcome would have been the same with the drop and recreate option too because the destination table would have been recreated.

Conclusion

One needs to be careful while using the Import and Export Wizard. Uncheck this table in the selection list to preserve the diagrams at the destination.
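
If you want a safety net before running the wizard, the diagram rows can be copied into a scratch table first and copied back afterwards. A minimal sketch, with a hypothetical database name and backup table name –

USE [MyTestDatabase]  -- the destination/test database
GO

-- Before the refresh: keep a copy of the diagram rows
SELECT *
INTO [dbo].[sysdiagrams_backup]
FROM [dbo].[sysdiagrams];
GO

-- After the refresh: put the rows back if they were wiped out
SET IDENTITY_INSERT [dbo].[sysdiagrams] ON;
INSERT INTO [dbo].[sysdiagrams] ([name], [principal_id], [diagram_id], [version], [definition])
SELECT [name], [principal_id], [diagram_id], [version], [definition]
FROM [dbo].[sysdiagrams_backup];
SET IDENTITY_INSERT [dbo].[sysdiagrams] OFF;
GO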

How to Have Standard Event Logging in SSIS and Avoid Traps

Event logging in SSIS gives a lot of valuable information about the run-time behavior and execution status of an SSIS package. Having a common minimum set of events is good for consistency in reports and general analysis. Let’s say your team wants to ensure that all packages log at least the OnError, OnWarning, OnPreExecute and OnPostExecute events. If the package has a Data Flow Task then the BufferSizeTuning event should also be logged. The developer can include more events to log as required, but the ones mentioned above are the minimum that must be included.

You can create pre-deployment checklists or documentation to ensure this minimum logging. Probably you already have that. But as the number of developers and/or packages increases, it becomes difficult to ensure consistency in anything, not just event logging. Documentation, checklists and training help only to an extent. Your requirement could be more complex than my five-event example above and thus even more prone to oversight.

The easiest way to ensure a common logging implementation is a logging template that has all the minimum events pre-selected. The developer then just needs to apply it to the package.

Event Logging in SSIS with a Template

I assume that you are already familiar with the concepts of event logging in SSIS, so this post is not going to be a beginner-level introduction to event logging. I will rather discuss options for having minimum standard event logging across SSIS packages and teams with minimal effort. I’ll also mention some traps to avoid.

I am using a demo SSIS package with two Data Flow Tasks and an Execute SQL Task. I have enabled event logging in SSIS for the first few events at the package level for the sake of demonstration. The logging configuration options for the package node (which is the top node) are shown in the first image.

Image 1 – Event logging configuration window for the package node

The logging options at the child container node Data Flow Task 1 are shown in the second image. The configuration for the other Data Flow Task and the Execute SQL Task looks the same.

Image 2 – Event logging configuration window at the child container node

The check marks for the tasks are grayed out, which means they are inheriting the logging options from their parent, i.e. the package. To disable logging for a task, remove its check mark in the left tree-view window.

TIP: Logging can also be disabled by going to the Control Flow canvas and changing the LoggingMode property of the task to Disabled.

The Trick

Now look at the bottom of the images again. Notice the Load… and Save… buttons? They do exactly what they say. You can set your logging options and save them as an XML template. Later, this XML template can be loaded into other packages to enable the same logging options.

The XML template file has a node for each event. For example, the logging options for the OnError event are saved like this –

<EventsFilter Name="OnError">
  <Filter>
    <Computer>true</Computer>
    <Operator>true</Operator>
    <SourceName>true</SourceName>
    <SourceID>true</SourceID>
    <ExecutionID>true</ExecutionID>
    <MessageText>true</MessageText>
    <DataBytes>true</DataBytes>
  </Filter>
</EventsFilter>

Notice that the XML just mentions the event name, not the name of any task. This means that when the template file is loaded, this logging option will be set for any task where the event is applicable. More on this later.

The Traps

The OnError event is a generic event applicable to all tasks. Let’s talk about events that are specific to particular tasks. For example, the BufferSizeTuning event is applicable only to Data Flow Tasks, not Execute SQL Tasks.

When I proceed to set logging for the BufferSizeTuning event, I have to set it individually on the Data Flow Task tree node. Notice the message at the bottom of the second image that says –

To enable unique logging options for this container, enable logging for it in the tree view.

This message is important in the context of saving and loading a template file too. When I save a template file, the logging options of just that tree view node are saved. For example, the BufferSizeTuning event will be saved in the template only if I am at the Data Flow task in the tree view. It will not be saved if I am at the Package or the Execute SQL task in the tree view.

The reverse is also true. When I load a template, its logging options are applied to just that node which I select in the tree view. For example, if I load a template at the Data Flow Task 1, the options will not be applied to the Data Flow Task 2 or the Execute SQL Task. If the template has an event that is not applicable to the task then that event’s settings will be ignored. For example, the BufferSizeTuning event logging option is meant for Data Flow Tasks so it will be ignored for the Execute SQL Task. The fact that non-relevant options are ignored can be helpful for us to consolidate all logging options in a single template file.

Conclusion

A package-level Save and Load of a logging template is straightforward. But if you need logging for events that are specific to a task type, then consider creating a logging template for each type of task. Also, if your logging configuration requires anything beyond the package-level settings, remember to load the template for each task in the tree view.

Option 1 – Individual File per Task
How: Create one template file for each type of task. The file has only the events applicable to that task.
Pros: Easier to know which types of task have a template and which ones do not.
Cons: More files to manage.

Option 2 – Single File for All Tasks
How: Create a template file for each task, then copy all event options into a single XML file.
Pros: One file is easier to manage.
Cons: Not obvious which tasks are included; comments need to be added in the XML file.

How To Manage SQL Server Job Failure Emails with MS Outlook Conditional Formatting

Some scheduled jobs send failure emails upon an error and DBAs need to remember to fix the job and re-execute it. I use filter rules in MS Outlook to organize my inbox by redirecting all the scheduled job emails to a separate folder. In addition to that, I use conditional formatting rules to keep track of action items as pending or done. The conditional formatting rules highlight the job failure emails with the red color and turn them green when marked complete as shown in the following image.

JobEmailsAsTasksExpected

Create a Conditional Formatting Rule to Highlight a Failed Job Email in Red:

Navigate to the folder where you would like to modify the view. You can even customize your main Inbox folder view, if that is where your emails are. Right-click on the header row and go to View Settings.

JobEmailsAsTasks1

Click on Conditional Formatting.

JobEmailsAsTasks2

In the Conditional Formatting window, click on Add and rename the new untitled rule. Then change the font color to red and click on Condition.

JobEmailsAsTasks3

In the Filter window, put a sufficiently unique text phrase from the error email that can distinguish it from good emails. Also specify in the drop-down where exactly the filter should look for the text phrase, i.e. in the email subject only, in the email body, or in both.

My emails have the following text in the email body that I can use for identification –

STATUS:  Failed

Now this is a simple example with a single text phrase for filtering, but you can use other criteria in the More Choices and Advanced tabs for more complex scenarios as necessary. If you get emails with different text patterns then you can always create a new rule for each phrase.

JobEmailsAsTasks4

That is it. Click on OK to come back to your inbox and verify that the error emails have a red font.

Create a Conditional Formatting Rule to Highlight a Failed Job Email in Green After Resolution:

Now we want another rule that will change the red email to a green one when it is marked as complete. Add another Conditional Formatting rule, rename it, change the font color to green, and click on Condition.

JobEmailsAsTasks5

In the Filter window, follow the same steps as in the previous red rule. Then go to the More Choices tab.

JobEmailsAsTasks6

In the More Choices tab, check Only items which: and select the drop-down value are marked complete.

JobEmailsAsTasks7

Back in the Conditional Formatting window, move the green rule up. This is important; otherwise the red rule will override the effects of the green rule.

JobEmailsAsTasks8

The final window will look like this.

JobEmailsAsTasks9Final

Now whenever the issue from the failure email is resolved, just right-click on the email, go to Follow Up, and select Mark Complete. The email will turn green.

JobEmailsAsTasks10

How to Avoid Orphan Users in SQL Server AlwaysOn, Create Logins Correctly

SQL Server logins are stored in the [master] database. System databases (master, model, msdb and tempdb) cannot be included in an availability group so a login created on the PRIMARY replica will not show up on the SECONDARY replica automatically. It has to be created manually on the SECONDARY replica.

Logins have an SID associated with them. When a Windows authentication login is created in SQL Server, it uses the SID from the Windows Active Directory. So the SID for such a login will be the same on every SQL Server in the network domain.

When a SQL authentication login is created, it gets a new auto-generated SID assigned by the SQL Server. This auto-generated SID will be different in each SQL Server even if the login name and the password combination are the same.

The database users are mapped to logins internally using the SID, not the login name/user name. There are some situations where the SIDs may not match, e.g. when a database is restored to a different server where the supposedly matching login already exists, when a login is dropped and recreated without consideration for the mapped users, or when a login is recreated between a database detach and reattach. Users that do not have a login with a matching SID are known as orphan users. This SID mismatch means that although an application or a user can connect to the server using the login, it cannot access the database.
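
A quick way to list orphan users is to compare the database users with the server logins on their SIDs. A sketch (run it in the affected user database) –

USE [MyUserDBName]
GO

-- SQL authentication users whose SID has no matching server login
SELECT dp.name AS orphan_user, dp.sid
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type = 'S'
  AND dp.name NOT IN ('guest', 'dbo', 'INFORMATION_SCHEMA', 'sys');
GO

-- Or the deprecated, but still common, report
EXEC sp_change_users_login 'Report';
GO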

The following options with varying degree of effectiveness can be used to fix the SID mapping between a login and an orphan user –

  • Drop and recreate the user in the restored database. Of course the user permissions will get deleted too and have to be granted again.
  • Drop and recreate the login with the same SID as in the restored database. This is a definite no-no if there are other databases on the server linked to that login; it will only complicate matters.
  • Run the system stored procedure sp_change_users_login. It has parameters to just report, fix one or fix all orphans. But the stored procedure is now marked as deprecated so there are no guarantees of future availability.
  • ALTER the user (there are some restrictions, like there cannot be a one-to-many mapping, etc.) –
    USE [MyUserDBName]
    GO
    ALTER USER someuser WITH LOGIN = somelogin
    

    As the user databases on the SECONDARY replica are read-only, the role of the SECONDARY server has to be changed to PRIMARY by doing a failover before the above ALTER command can be executed.

All of this can be avoided if the login is created correctly on the SECONDARY replica. We just have to make sure that the SID for the login on the SECONDARY replica matches the PRIMARY replica.

Let us begin with creating a new login on the PRIMARY replica of an existing Availability Group –

/* On the PRIMARY replica */

-- Create the Login
USE [master]
GO
CREATE LOGIN [TestLogin]
WITH PASSWORD=N'abc123#'
, DEFAULT_DATABASE=[master]
GO

Grant privileges to this login if necessary. They will NOT automatically replicate to the SECONDARY replica. Now find the SID of this new login.

-- Get the SID for the new Login
SELECT name, sid 
FROM sys.server_principals 
WHERE name = 'TestLogin'

/* Results:
name          sid
TestLogin     0x8EA0E033BD83524180CF813A20C5265B
*/

On the SECONDARY replica, create the login with the same SID. The GUI wizard to create logins does not have an option to specify the SID, so the login has to be created using TSQL with an additional parameter.

/* On the SECONDARY replica */
-- Create the Login with the same SID as
-- the PRIMARY replica
CREATE LOGIN [TestLogin]
WITH PASSWORD=N'abc123#'
-- use the SID retrieved above
, SID = 0x8EA0E033BD83524180CF813A20C5265B
, DEFAULT_DATABASE=[master]
GO

Grant the same privileges to this login as done on the PRIMARY replica.

Now go back to the PRIMARY replica, create a database user mapped to the login, and grant the required permissions at the database level. This new database user will be automatically replicated on the SECONDARY replica with its permissions and will correctly map to the login there. No action is required on the SECONDARY because the user database is in an Availability Group that is synced across replicas.
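
For example, on the PRIMARY replica the database user can be created and given permissions like this (the user database name and the role are just placeholders) –

/* On the PRIMARY replica */
USE [MyUserDBName]
GO

-- Create the database user mapped to the login
CREATE USER [TestLogin] FOR LOGIN [TestLogin];
GO

-- Grant database-level permissions, e.g. membership in a fixed database role
ALTER ROLE [db_datareader] ADD MEMBER [TestLogin];
GO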

Further reading:
For the sake of completeness, I must mention the widely cited KB 918992 article (How to transfer logins and passwords between instances of SQL Server), which provides a stored procedure [sp_help_revlogin] to move logins from one server to another. This stored procedure generates the CREATE LOGIN script with the password hash and the SID. You would need that stored procedure only if you don’t have access to the clear-text passwords, or if you want to include the script in a scheduled job without hardcoding the password in the job. If you do not have those constraints then you can simply use the steps described in my blog post here.

How to Log SSIS Variable Values During Execution in the Event Log

Log entries created during the execution of an SSIS package help in monitoring, analysis, and issue resolution. In addition to logging events, you might want to capture the run-time values of variables in the package. I couldn’t find a native feature to log SSIS variable values so the following post shows how I did it.

The Setup

For this demo, I’ll do an INSERT operation on a table named TableA and use variables to save the row counts before and after the INSERT. TableA is always blank at the beginning of the package, i.e. the row count is zero. The Execute SQL Task named ESQLT-InsertRowsInTableA inserts 502 rows into TableA. The simple package looks like the image below. If you are wondering about the funny prefixes in the object names, you can read about my naming conventions in my other blog post.

Log SSIS Variable Value 1 - Package

I have two integer variables named rcTableA_PreRefresh and rcTableA_PostRefresh scoped at the package level.

Log SSIS Variable Value 2 - Variables Init Zero
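
The Execute SQL Tasks populate these variables with a plain row count – something along the lines of the statement below, with the Result Set option set to Single row and the result column mapped to the respective variable –

SELECT COUNT(*) AS RowCountTableA
FROM [dbo].[TableA];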

Enable the Event

A log entry is generated when an event is triggered. Each object in SSIS has its own events that can be logged. I’ll use the event called OnVariableValueChanged, which, as the name denotes, is triggered whenever the value of a variable changes. This event is disabled by default. To enable it, go to the Properties window of the variable and set the RaiseChangedEvent property to True. It must be enabled for each variable individually.

Log SSIS Variable Value 3 - Variable Properties

Next, I include the OnVariableValueChanged event in the logging configuration. It has to be included at the container level that the variables are scoped to – in my case, the package level. I’m using the SSIS Log Provider for SQL Server in this package.

Log SSIS Variable Value 4 - Configure Log

Then I execute the package and look at the [dbo].[sysssislog] table for the log entries.

Log SSIS Variable Value 5 - Log Entries 1

There are some log entries but something is missing. I see the OnVariableValueChanged event logged for the Post Refresh variable but not the Pre Refresh variable.

The reason is that the initial value of the variable is set to zero in the package. The row count of a brand new empty table is also zero. So there was no change in variable value. The OnVariableValueChanged event fires only when the value actually changes! Overwriting with the same value doesn’t fulfill this condition.

To resolve that, I change the initial values in the package to -1. Now even if the row count turns out to be zero, the variable value will still change from -1 to zero. The COUNT function can’t count below zero, can it?

Log SSIS Variable Value 6 - Variables Init Minus One

I run the package again and check out the [dbo].[sysssislog] table.

Log SSIS Variable Value 7 - Log Entries 2

Things are better. The OnVariableValueChanged events for both variables now show up in the log. But the variable values are still not there.

The reason for the missing values is that event logging only captures the fact that the variable value changed; it doesn’t capture the value itself. I’ll make an addition to the event handler to get the variable values too. I add an Execute SQL Task to the package-level event handler for the OnVariableValueChanged event.

Log SSIS Variable Value 8 - Event Handler

The General tab of the Execute SQL Task has the following properties and SQL command –

Log SSIS Variable Value 8 - Event Handler - ESQLT 1

INSERT INTO [dbo].[sysssislog]
([event]
,[computer]
,[operator]
,[source]
,[sourceid]
,[executionid]
,[starttime]
,[endtime]
,[datacode]
,[databytes]
,[message])
VALUES
('*SSIS-OnVariableValueChanged' -- Custom event name
,? -- param 0
,? -- param 1
,? -- param 2
,? -- param 3
,? -- param 4
,? -- param 5
,? -- param 6
,0 -- Zero
,'' -- Blank string
,?) -- param 7

Notice that I precede the custom event name with an asterisk to differentiate it from the log entries created by the system.

The Parameter Mapping tab of the Execute SQL Task has the following properties –

Log SSIS Variable Value 9 - Event Handler - ESQLT 2

Pay attention to the System::VariableValue entry (the last variable) in this screen. Its data type is LONG, which is appropriate for the numeric row counts in my example. You may have different data types for your variables, so adjust the data type and length appropriately. Using a wrong type could lead to no value being logged at all.

I run the package again and this time, the variable values are also logged in the table.

Log SSIS Variable Value 10 - Log Entries 2
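
To pull just these custom entries back out of the log table, filter on the asterisk-prefixed event name used in the INSERT above –

SELECT [event], [source], [message], [starttime]
FROM [dbo].[sysssislog]
WHERE [event] = '*SSIS-OnVariableValueChanged'
ORDER BY [starttime];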

Summary

A single event handler will take care of all variables in that scope. In my case, two package-scoped variables are handled by a single package-level event handler.

The variable value really has to change to fire the event.

The OnVariableValueChanged event is triggered for the container that has the variable in its scope. The container triggering the change in variable value could be different from the container that has the variable in its scope. In my demo, the variables were scoped to the package; even though an Execute SQL Task changes the variable values, I still put the event handler at the package level. As another example, assume a variable is declared in the scope of a ForEachLoop container and a Script Task inside the ForEachLoop changes the variable value. The OnVariableValueChanged event will then be triggered for the ForEachLoop container.

I have used the default [sysssislog] logging table to log my variable values. You can easily use a different custom table by changing the OLEDB connection and making appropriate changes to the INSERT statement.
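
If you go the custom-table route, a table along the following lines would keep the same Execute SQL Task working, assuming you keep the same column list as in the INSERT statement above. The table name is just an example, and the column types simply mirror [dbo].[sysssislog] –

CREATE TABLE [dbo].[ssis_variable_log] (
    [id]          INT IDENTITY(1,1) PRIMARY KEY,
    [event]       SYSNAME          NOT NULL,
    [computer]    NVARCHAR(128)    NULL,
    [operator]    NVARCHAR(128)    NULL,
    [source]      NVARCHAR(1024)   NULL,
    [sourceid]    UNIQUEIDENTIFIER NULL,
    [executionid] UNIQUEIDENTIFIER NULL,
    [starttime]   DATETIME         NULL,
    [endtime]     DATETIME         NULL,
    [datacode]    INT              NULL,
    [databytes]   IMAGE            NULL,  -- matches the built-in table; deprecated type
    [message]     NVARCHAR(2048)   NULL
);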

Find Top 25 Inefficient Query Plans by CPU, IO, Recompiles, Execution Count

Microsoft TechNet Gallery is a treasure trove of scripts that can save you a lot of coding time or sometimes introduce creative ways of solving a challenge.

I came across this collection of scripts that will show you the top 25 inefficient query plans (in XML format) sorted by CPU, IO, recompiles, execution counts etc.

Download: http://gallery.technet.microsoft.com/Find-inefficient-query-88f4611f