Free Developer Editions of SQL Server 2014 and 2016

Microsoft is releasing its Developer Editions of SQL Server 2014 and 2016 for free!

Developer Edition is the same as the Enterprise Edition but it is licensed for development and testing purposes only.

The Developer license for the previous versions (2008 R2, 2012) used to go for about $50 and up. Another option was to download the fully functional evaluation edition, valid for 180 days, and later buy a license.

More information about Microsoft’s new approach to developer licensing is in our Facebook post.

Use 130 Characters to Store an Object Name


Some index maintenance scripts or dynamic SQL scripts use SQL object names as values for variables or columns, e.g. database names, table names, index names, etc.

SQL Server object names can be at most 128 characters long, so common wisdom is to declare the holding variable or column as the SYSNAME data type, or as one of the character data types with a width of 128.

Example –

DECLARE @dbname SYSNAME;
DECLARE @tablename NVARCHAR(128);
DECLARE @indexname VARCHAR(128);

This is technically correct.

Another technically correct thing to do is to use the QUOTENAME function to wrap the object name in [ and ] brackets. This handles those cases where there are special characters in the name or the name is a reserved keyword.

Example –

SET @dbname = QUOTENAME(DB_NAME(DB_ID()));
SET @tablename = QUOTENAME(OBJECT_NAME(object_id));
SET @indexname = QUOTENAME(OBJECT_NAME(object_id));

The Issue

The brackets will add two more characters to the value.

So for really long object names that are 128 characters in length, the value would be 128 + 2 = 130 characters.

The two extra characters will cause variable assignments to be silently truncated, and row insert statements to fail with a truncation error –

String or binary data would be truncated. [SQLSTATE 22001] (Error 8152).

In my shop we do have some long index names, and I have had to debug scripts for this issue more than once!

The Solution

I suggest that you use 130 as the width for variables or columns that will store an object name. SQL Server object names can be at most 128 characters long, so a width of 130 accommodates the two extra characters added by QUOTENAME.
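Here is a minimal sketch of both the failure and the fix; the REPLICATE call simply builds a name of the maximum legal length –

/* A 128-character object name, the maximum SQL Server allows */
DECLARE @longname SYSNAME = REPLICATE(N'x', 128);

/* SYSNAME is nvarchar(128), so the quoted value is silently cut short */
DECLARE @quoted128 SYSNAME = QUOTENAME(@longname);
SELECT LEN(@quoted128) AS TruncatedLength; -- 128, the closing bracket is lost

/* A width of 130 holds the full name plus the two brackets */
DECLARE @quoted130 NVARCHAR(130) = QUOTENAME(@longname);
SELECT LEN(@quoted130) AS FullLength; -- 130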

Simple Fix to a Backup Restore Error Due to Disk or Cluster Resource Issue on SQL Server

One of our database restore attempts failed with an error message that mentioned cluster resources. At least the error message indicated that the issue was not related to backward compatibility, but rather to a physical resource or cluster settings.

Error Details

The Error Message Window –

SQL Restore Error – sqlerudition.wordpress.com

The Error Message –

TITLE: Microsoft SQL Server Management Studio
------------------------------

Restore failed for Server 'MYDEVSQLSERVER'.  (Microsoft.SqlServer.SmoExtended)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Restore+Server&LinkId=20476

------------------------------
ADDITIONAL INFORMATION:

System.Data.SqlClient.SqlError: Cannot use file 'J:\MSSQL10_50\MSSQL\DATA\MyDatabaseName.mdf' for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it. (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&LinkId=20476

------------------------------
BUTTONS:

OK
------------------------------

Both the links in the error message above pointed to a missing-information message on the Microsoft website –

No information on the restore error – sqlerudition.wordpress.com
Details
ID: Restore Server
Source:
We’re sorry
There is no additional information about this issue in the Error and Event Log Messages or Knowledge Base databases at this time. You can use the links in the Support area to determine whether any additional information might be available elsewhere.
Thank you for searching on this message; your search helps us identify those areas for which we need to provide more information.

Cause and Resolution

We determined the cause rather quickly. The source system of the backup file had a drive-letter layout different from that of the destination server, so the restore process was trying to create the data files on a drive that didn’t exist on the destination! We changed the file locations in the restore dialog to a drive letter that exists on the destination server, and the restore then progressed normally.
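The same relocation can be done in TSQL with the MOVE option of RESTORE DATABASE. A sketch, with hypothetical backup path and logical file names –

/* List the logical file names recorded in the backup */
RESTORE FILELISTONLY
FROM DISK = N'D:\Backups\MyDatabaseName.bak';

/* Restore the files to drives that actually exist on the destination */
RESTORE DATABASE [MyDatabaseName]
FROM DISK = N'D:\Backups\MyDatabaseName.bak'
WITH MOVE N'MyDatabaseName_Data' TO N'E:\MSSQL\DATA\MyDatabaseName.mdf',
     MOVE N'MyDatabaseName_Log' TO N'E:\MSSQL\LOG\MyDatabaseName.ldf',
     STATS = 10;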

Embed Facebook Posts in WordPress. But Why?


Posts on a WordPress blog can be shared on social media sites like Facebook, Twitter etc. You can do the reverse too and share Facebook page posts in a WordPress blog post which will include the Facebook comments, likes and shares. The Facebook page timeline has to be public.

So you first post something on your blog. Then you share it on Facebook. Then you share the Facebook post back to your blog! Why would someone do this kind of cross-posting? The answer is: to promote your Facebook page with your blog readers, and vice versa. Even if you are not promoting a page (because it is not YOUR page), Facebook posts from others’ pages can still be useful to your blog readers or relevant to what you already write about.

Now, why Facebook? A Facebook user is most likely just browsing, not actively looking for solutions to a technical issue. But they sure love to share and read interesting stuff. So by posting to a Facebook page, there is a higher chance of someone liking it or sharing it with their friends, and as a result it catches the attention of another casual browser. Oh yes, there are social-sharing buttons on a WordPress post too. But those buttons will be used by a reader who is already on your blog. Also, in some situations the sharing buttons may not even work due to restrictions like a work network, device issues, etc. The point here is to engage a casual reader on Facebook who is not actively looking for content on a search engine or a forum.

On the other hand, most blog readers are actively looking for a solution to an issue. There is a high chance that they have been directed to the blog via a search engine result or a technical forum thread about a specific issue. (Check your stats, duh!) Very few are here for general reading. As a blogger, I would like to capture the interest of this “accidental” reader and hope that they return. One way for me to stay connected with them is to have them subscribe to or follow my blog through (at least one of, or preferably) various channels like email, RSS, WordPress, Twitter, Facebook, etc. Some people may be reluctant to share their email address but may be willing to Like the Facebook page. Sending a new reader to your Facebook page and getting them to Like it is equivalent to getting a new subscriber. Of course, the restrictions mentioned above will come into play here too, but the opportunity can still be used to make the reader aware of a Facebook page. With its huge active user count, Facebook is a good medium for outreach and engagement. Major websites report that the bulk of their views are driven by social media sites. Consider the Facebook cross-post a banner ad for yourself!

To promote your blog, you can either use your personal Facebook profile page or create a new page for the blog. I would suggest the latter so that you don’t spam your non-tech friends with technical rants.

I plan to experiment with various blog promotion strategies, especially in the technical niche, and write in more detail about them. I have started a Facebook page for this blog and will share my adventures periodically. If you are interested in following my learning path, you can subscribe to this blog or like my Facebook page. (Notice what I did right there?!)

For now, here is an embedded Facebook post –

Resolve Error: 102 while creating Full-Text Index Stoplist in SQL Server

One of my SQL Server databases was returning error 102 while creating a full-text stoplist. We were trying to create a stoplist based on the system stoplist, and later also tried to create a blank stoplist. The error happened both via SSMS and via the equivalent TSQL commands.

The Error, 102

The following TSQL gave the error –


USE [DEMO_Database]
GO

/* Attempt 1: create a blank stoplist */
CREATE FULLTEXT STOPLIST [DemoStopList]
AUTHORIZATION [dbo];
GO

/* Attempt 2: create a stoplist based on the system stoplist */
CREATE FULLTEXT STOPLIST [DemoStopList]
FROM SYSTEM STOPLIST
AUTHORIZATION [dbo];
GO

The error dialog box –

Image 1 – Full-text index stoplist error 102 (click to enlarge)

The text in the error message –

TITLE: Microsoft SQL Server Management Studio
------------------------------

Cannot execute changes.

------------------------------
ADDITIONAL INFORMATION:

Create failed for FullTextStopList 'Demo'.  (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.5058.0+((SQL11_PCU_Main).140514-1820+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Create+FullTextStopList&LinkId=20476

------------------------------

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

------------------------------

Incorrect syntax near 'STOPLIST'. (Microsoft SQL Server, Error: 102)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=102&LinkId=20476

------------------------------
BUTTONS:

OK
------------------------------

The Cause

I looked at the MSDN page related to the TSQL command to check if I was using the right syntax.

REFERENCE:

  • CREATE FULLTEXT STOPLIST (Transact-SQL)
    https://msdn.microsoft.com/en-us/library/Cc280405(v=sql.105).aspx

My syntax was correct but there was something else on the page that looked relevant. Right at the top of the documentation page is the following message –

Important
CREATE FULLTEXT STOPLIST, ALTER FULLTEXT STOPLIST, and DROP FULLTEXT STOPLIST are supported only under compatibility level 100. Under compatibility levels 80 and 90, these statements are not supported. However, under all compatibility levels the system stoplist is automatically associated with new full-text indexes.

To verify whether the compatibility level of my database could indeed be the issue, I checked the database properties –

SELECT
    is_fulltext_enabled,
    compatibility_level
FROM sys.databases
WHERE [name] = 'DEMO_Database'; /* filter to the database in question */

is_fulltext_enabled    compatibility_level
0                      90

There you have it! My database was originally on a SQL Server 2005 installation so its compatibility level was 90, and that was the reason the CREATE/ALTER/DROP STOPLIST commands were unavailable. The current server that I was working on was SQL Server 2008 R2, which could be checked by –

SELECT @@VERSION;
GO

-- Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1

So the resolution to the error lies in changing the compatibility level. As per the documentation, the highest compatibility level I could go to on a SQL Server 2008 R2 installation was 100.

REFERENCE:

  • View or Change the Compatibility Level of a Database
    https://msdn.microsoft.com/en-us/subscriptions/index/bb933794
  • ALTER DATABASE Compatibility Level (Transact-SQL)
    https://msdn.microsoft.com/en-us/subscriptions/index/bb510680

Changing the Compatibility Level

I checked that no other users were connected to the database and then issued this command to change the compatibility level.

USE [master]
GO
ALTER DATABASE [DEMO_database]
SET COMPATIBILITY_LEVEL = 100;
GO

It ran successfully, and I verified in the sys.databases catalog view that the compatibility level had changed to 100.

Now I was able to create a Stop List, Full-text Catalog and a Full-text Index on my table, and was able to run queries using the CONTAINS and CONTAINSTABLE keywords.
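For completeness, here is a rough sketch of that sequence. The table [dbo].[Documents], its [DocText] column, and the unique key index [PK_Documents] are hypothetical –

/* The stoplist that failed earlier now works */
CREATE FULLTEXT STOPLIST [DemoStopList]
FROM SYSTEM STOPLIST
AUTHORIZATION [dbo];
GO

/* A full-text catalog, and an index on a hypothetical table */
CREATE FULLTEXT CATALOG [DemoCatalog] AS DEFAULT;
GO

CREATE FULLTEXT INDEX ON [dbo].[Documents] ([DocText])
KEY INDEX [PK_Documents]
WITH STOPLIST = [DemoStopList];
GO

/* Query using CONTAINS */
SELECT id, [DocText]
FROM [dbo].[Documents]
WHERE CONTAINS([DocText], N'erudition');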

Fixed? Not so fast!

Interestingly, even though I could use the Full-text features now, the is_fulltext_enabled property still showed up as 0 (i.e. Disabled).

That was fixed by running the following –

EXEC [DEMO_Database].[dbo].[sp_fulltext_database]
@action = 'enable'
GO
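After that, the flag can be re-checked with the same catalog view query as before –

SELECT is_fulltext_enabled, compatibility_level
FROM sys.databases
WHERE [name] = 'DEMO_Database';

-- is_fulltext_enabled = 1, compatibility_level = 100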

REFERENCE:

  • sp_fulltext_database (Transact-SQL)
    https://msdn.microsoft.com/en-us/library/ms190321(v=sql.105).aspx

Gotcha – SSIS Import/Export Wizard Can Kill Your Diagrams

Some things are meant to be learnt the hard way. And that is how I learnt about today’s gotcha.

Overview of [sysdiagrams]

I have been working on an ERD (Entity Relationship Diagram) recently and used the Database Diagram feature in SSMS for this purpose. When you try to create a diagram for the first time in a database, a message box asks you if you would like to create diagramming objects.

Confirmation dialog
Image 1 (Click to enlarge)

On clicking Yes, a system table by the name of [sysdiagrams] is created in the same database that you are creating the diagram in. The diagrams are stored in this table. SSMS shows the diagram in the Database Diagrams node of the Object Explorer tree view.

Diagram and its table
Image 2 (Click to enlarge)
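You can peek at the stored diagrams with a simple query against the standard [sysdiagrams] schema –

/* One row per saved diagram; the drawing itself is a varbinary blob */
SELECT [name], [diagram_id], [version], DATALENGTH([definition]) AS DefinitionBytes
FROM [dbo].[sysdiagrams];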

The Scenario

I was creating my diagram in a test environment. At some point I had to refresh all data from the production environment. The easiest way for me to do a full refresh is to use the Import and Export Wizard. The wizard can be launched either via the SSMS context menu or in Business Intelligence Development Studio as an SSIS project. As usual for quick data refreshes, I selected all tables using the top-left Source checkbox in the wizard, and chose the options to delete and reinsert all rows with identity values.

Select all tables and reinsert rows
Image 3 (Click to enlarge)

When the wizard ran successfully, my database diagram at the destination test system was missing!

Gotcha

Upon some research and trials, I found that if the diagramming capabilities of the source and destination servers are enabled, the wizard automatically includes the [sysdiagrams] table in the list of tables to refresh. As you can see, there are no other system tables in the wizard except [sysdiagrams], so it is easy to miss in a long list.

sysdiagrams is included
Image 4 (Click to enlarge)

So in my case, all data in the destination [sysdiagrams] table was deleted. There were no diagrams at the source so nothing was imported for this table. This outcome would have been the same with the drop and recreate option too because the destination table would have been recreated.

Conclusion

One needs to be careful while using the Import and Export Wizard. Uncheck this table in the selection list to preserve the diagrams at the destination.
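If unchecking it is easy to forget, another option is to snapshot the table before the refresh. A sketch, assuming the standard [sysdiagrams] schema; the backup table name is my own –

/* Before the refresh: copy the diagrams to a safe table */
SELECT * INTO [dbo].[sysdiagrams_backup]
FROM [dbo].[sysdiagrams];

/* After the refresh: put back any diagrams that were lost */
SET IDENTITY_INSERT [dbo].[sysdiagrams] ON;

INSERT INTO [dbo].[sysdiagrams]
    ([name], [principal_id], [diagram_id], [version], [definition])
SELECT [name], [principal_id], [diagram_id], [version], [definition]
FROM [dbo].[sysdiagrams_backup] AS b
WHERE b.[diagram_id] NOT IN (SELECT [diagram_id] FROM [dbo].[sysdiagrams]);

SET IDENTITY_INSERT [dbo].[sysdiagrams] OFF;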

What is the RetainSameConnection Property of OLEDB Connection in SSIS?

I recently wrote about How to Use Temp Table in SSIS. One of the requirements to successfully reuse a temporary table across multiple tasks in SSIS is to set the RetainSameConnection property of the OLEDB Connection to TRUE. In this post, I will discuss the property and also use a Profiler Trace to find out its behavior.

The RetainSameConnection Property

RetainSameConnection is a property of an OLEDB Connection Manager. The default value of this property is FALSE. This default makes the SSIS execution engine open a new OLEDB connection for each task that uses the connection manager, and close that connection when the task is complete. I believe the idea behind this is to not block a connection to a server unnecessarily, and to acquire one only when it is needed. That makes sense too, because some packages can run for an extended duration and may not need to be connected to an OLEDB server the whole time. For example, an OLEDB connection is not required while the package is parsing text files, sending emails, or doing ETL operations that do not involve the OLEDB server in question. Releasing connections unless really required can certainly be helpful on busy servers, because SQL Server needs some memory for each open connection.

On the other hand, some scenarios require a persistent connection, e.g. temporary table reuse across multiple tasks. If we set the property value to TRUE, SSIS will open just one OLEDB connection with the server and keep it alive until the end of the package execution. The property can be set via the Properties window of the OLEDB Connection Manager.


The Temporary Table Scenario

Local temporary tables (with a # in front of their name) in SQL Server are scoped to a session. SQL Server drops them when the session is closed. This means that local temporary tables created in one session are not available in another session. In SSIS, with RetainSameConnection set to FALSE (the default), a new session is opened for each task. Therefore, temporary tables created by one task are not available to another task.
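You can see this scoping in SSMS itself, where each query window is its own session –

/* Query window 1 (session 1) */
CREATE TABLE #LocalTable (id INT);
INSERT INTO #LocalTable VALUES (1);
SELECT * FROM #LocalTable; -- returns the row

/* Query window 2 (a different session) */
SELECT * FROM #LocalTable; -- fails: Invalid object name '#LocalTable'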

Demo

I have a demo package with two Execute SQL Tasks and one OLEDB Connection Manager. The Execute SQL Tasks have a simple SELECT statement and they both use the same connection manager.


I have a Profiler Trace to monitor the number of connections created by the SSIS package.

The first execution of the package is with the RetainSameConnection set to the default value of FALSE. The trace captures two pairs of login/logout events, one for each task. The second execution is with the property value set to TRUE. This time, the trace captures only one pair of login/logout events.
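If you prefer a DMV over a Profiler trace, a rough alternative is to poll the sessions view while the package runs. A sketch; the program_name filter is an assumption and may differ in your environment –

/* Sessions opened by the package's connection manager */
SELECT [session_id], [program_name], [login_time]
FROM sys.dm_exec_sessions
WHERE [program_name] LIKE N'%SSIS%';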


Conclusion

In most cases, the default value of RetainSameConnection=FALSE will be suitable. A developer should enable it only when the package tasks really need a persistent connection. In addition to temporary table reuse, a TRUE value for this property can also be useful for managing transactions and for reducing the number of recurring connection requests to a server.

How to Have Standard Event Logging in SSIS and Avoid Traps

Event logging in SSIS gives a lot of valuable information about the run-time behavior and execution status of an SSIS package. Having a common minimum set of events is good for consistency in reports and general analysis. Let’s say your team wants to ensure that all packages log at least the OnError, OnWarning, OnPreExecute and OnPostExecute events. If the package has a Data Flow Task, then the BufferSizeTuning event should also be logged. The developer can include more events as required, but the ones mentioned previously are the minimum that must be included.

You can create pre-deployment checklists or documentation to ensure this minimum logging. You probably already have that. As the number of developers and/or packages increases, it becomes difficult to ensure consistency in anything, not just event logging. Documentation, checklists and training are therefore helpful only to an extent. Your requirement could be more complex than my five-event example above, and thus more prone to oversight.

The easiest way to ensure a common logging implementation is a logging template that has all the minimum events pre-selected. The developer just needs to apply it to the package.

Event Logging in SSIS with a Template

I assume that you are already familiar with the concepts of event logging in SSIS, so this post is not a beginner-level introduction to event logging. I will rather discuss options for having a minimum standard of event logging across SSIS packages and teams with minimal effort. I’ll also mention some traps to avoid.

I am using a demo SSIS package with two Data Flow Tasks and an Execute SQL Task. I have enabled event logging in SSIS for the first few events at the package level for the sake of demonstration. The logging configuration options for the package node (which is the top node) are shown in the first image.

Image 1 – Event logging configuration window for the package node

The logging options at the child container node Data Flow Task 1 are shown in the second image. The configuration for the other Data Flow Task and the Execute SQL Task looks the same.

Image 2 – Event logging configuration window at the child container node

The check marks for the tasks are grayed out, which means they inherit the logging options from their parent, i.e. the package. To disable logging for a task, remove its check mark in the left tree view window.

TIP: Logging can also be disabled by going to the Control Flow canvas and changing the LoggingMode property of the task to Disabled.

The Trick

Now look at the bottom of the images again. Notice the Load… and Save… buttons? They do exactly what they say. You can set your logging options and save them as an XML template. Later, this XML template can be loaded into other packages to enable the same logging options.

The XML template file has a node for each event. For example, the logging options for the OnError event are saved like this –


<EventsFilter Name="OnError">
  <Filter>
    <Computer>true</Computer>
    <Operator>true</Operator>
    <SourceName>true</SourceName>
    <SourceID>true</SourceID>
    <ExecutionID>true</ExecutionID>
    <MessageText>true</MessageText>
    <DataBytes>true</DataBytes>
  </Filter>
</EventsFilter>

Notice that the XML just mentions the event name, not the name of any task. This means that when the template file is loaded, this logging option will be set for any task where the event is applicable. More on this later.

The Traps

The OnError event is a generic event applicable to all tasks. Let’s talk about events that are specific to certain tasks. For example, the BufferSizeTuning event is applicable just to Data Flow Tasks, not Execute SQL Tasks.

When I proceed to set logging for the BufferSizeTuning event, I have to set it individually on the Data Flow Task tree node. Notice the message at the bottom of the second image that says –

To enable unique logging options for this container, enable logging for it in the tree view.

This message is important in the context of saving and loading a template file too. When I save a template file, only the logging options of the currently selected tree view node are saved. For example, the BufferSizeTuning event will be saved in the template only if I am on the Data Flow Task in the tree view. It will not be saved if I am on the Package or the Execute SQL Task node.

The reverse is also true. When I load a template, its logging options are applied to just the node that I select in the tree view. For example, if I load a template at Data Flow Task 1, the options will not be applied to Data Flow Task 2 or the Execute SQL Task. If the template has an event that is not applicable to the task, then that event’s settings are ignored. For example, the BufferSizeTuning event logging option is meant for Data Flow Tasks, so it will be ignored for the Execute SQL Task. The fact that non-relevant options are ignored is helpful because it lets us consolidate all logging options in a single template file.

Conclusion

A package-level Save and Load of a logging template is straightforward. But if you need logging for events that are specific to a task type, then consider creating a logging template for each type of task. Also, if your logging configuration requires anything other than the package-level settings, remember to load the template for each task in the tree view. The two approaches are compared below.

Option 1 – Individual file per task
How: Create one template file for each type of task. The file will have the events applicable to that task.
Pros: Easier to know which types of task have a template and which do not.
Cons: More files to manage.

Option 2 – Single file for all tasks
How: Create a template file for each task, then copy all the event options into a single XML file.
Pros: One file is easier to manage.
Cons: Not obvious which tasks are included; comments need to be added in the XML file.

How to Use Temp Table in SSIS

Using a temporary table in SSIS, especially in a Data Flow Task, can be challenging. SSIS tries to validate tables and their column metadata at design time. As the temp table does not exist at design time, SSIS cannot validate its metadata and throws an error. I will present a pretty straightforward solution here to trick SSIS into believing that the temp table actually exists and proceed as normal.

Temporary Table Reference Across Two Tasks

To begin with, I will demonstrate that a temp table can be referenced across two tasks. Add two Execute SQL Tasks to your package, both using the same OLEDB connection. The first task creates a Local Temp table and inserts one row into it. The second task tries to insert one more row into the same table.


TSQL script in the first task –

/* Drop the LOCAL temp table if it already exists */
IF OBJECT_ID('[tempdb].[dbo].[#LocalTable]') IS NOT NULL
    DROP TABLE [tempdb].[dbo].[#LocalTable];
GO

/* Create a LOCAL temp table */
CREATE TABLE [#LocalTable]
(
    id INT IDENTITY,
    label VARCHAR(128)
);
GO

/* Insert one row */
INSERT INTO [#LocalTable] (label)
VALUES ('First row');
GO

TSQL script in the second task –

/* Insert second row */
INSERT INTO [#LocalTable] (label)
VALUES ('Second row');
GO

Invalid Object Name Error

When executed, the SSIS package gives the following error because the second task cannot see the temp table created by the first task. Local Temp tables are specific to a connection. When SSIS switches from one task to another, it resets the connection, so the Local Temp table is dropped.

Error: 0xC002F210 at ESQLT-InsertSecondRow, Execute SQL Task:
Executing the query "/* Insert second row */
INSERT INTO [#LocalTable]..."
failed with the following error:
"Invalid object name '#LocalTable'.".
Possible failure reasons: Problems with the query,
"ResultSet" property not set correctly, parameters
not set correctly, or connection not established correctly.
Task failed: ESQLT-InsertSecondRow
Warning: 0x80019002 at SSIS-DemoTempTable:
SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED.
The Execution method succeeded, but the number of errors
raised (1) reached the maximum allowed (1); resulting in
failure. This occurs when the number of errors reaches the
number specified in MaximumErrorCount. Change the
MaximumErrorCount or fix the errors.
SSIS package "SSIS-DemoTempTable.dtsx" finished: Failure.

The Fix

The fix is pretty simple. Right-click the OLEDB connection manager and open the Properties window. Change the RetainSameConnection property to True. This forces the connection manager to keep the same connection open across tasks. I have another post with more details about the RetainSameConnection property of OLEDB connection managers.


This fixes the error and the package executes successfully.

Temporary Table in a Data Flow Task

Now let me demonstrate that a Temp table can be used in a Data Flow Task.

Add a Data Flow Task to the package.


In the Data Flow Task, add an OLEDB source that uses the same OLEDB connection as the Execute SQL Tasks earlier. In the OLEDB Source Editor window, there is no way to find our Local Temp table in the list, so close the Editor window.

The Development Workaround

Open an SSMS query window and connect to the SQL Server used in the OLEDB connection. Now create a Global Temp table with the same column definition. You can just copy the CREATE TABLE script and add one more # symbol to the table name.

The Global Temp table is just a development workaround for the restriction imposed by the volatility of the Local Temp table. You can even use an actual physical table instead of the Global Temp table. We will switch back to the Local Temp table at the end of this post, and then the Global Temp table (or the physical table) can be dropped.

Script –

/* Create a GLOBAL temp table
   with the same schema as the
   earlier LOCAL temp table.
   Note the ## in the table name */

CREATE TABLE [##LocalTable]
(
    id INT IDENTITY,
    label VARCHAR(128)
);
GO

Come back to the SSIS Control Flow and create a new package-scoped variable of the String data type. Give it the name TableName and put the Global Temp table name as its value.


Go to the Data Flow > OLEDB Source and double-click to open the OLEDB Source Editor window. Choose the Data Access Mode Table name or view name variable. In the Variable name drop-down, choose the new variable that we created. This means that the OLEDB Source is now going to use the GLOBAL temp table. Of course, it is not the same as the LOCAL temp table, but we will get to that in a minute. Click on the Columns tab to load the table metadata, then click OK to close the OLEDB Source Editor.


Now add a Flat File Destination and configure its properties. I’ll not go into those details here; please let me know in the comments or via email if you need information on how to configure a Flat File Destination.

The final Data Flow Task looks like this.


You can execute the package now to verify that it runs successfully. Although it will run fine, the flat file will not have any rows, because the source of the data is the Global Temp table, not the Local Temp table populated by the Execute SQL Tasks.

A Global Temp table (or a physical table) is common to all users, so it could cause issues in multi-user environments. Local Temp tables are specific to a connection, and hence more scalable. All that is needed now is to remove one # from the variable value, and the OLEDB Source will point to the correct Local Temp table. To clean up, you can drop the Global Temp table.


The flat file will have the rows inserted by the Execute SQL Tasks.


Avoid Validation

Subsequent runs of the package will show validation errors because the Local Temp table is not available when the package starts. To get around this, you can set the DelayValidation property of the package to TRUE. As the package is the parent container of all other tasks, this property will be applied to all tasks in the package. If you do not wish to delay validation for all tasks, you can set it on individual tasks instead, i.e. the first Execute SQL Task and the Data Flow Task. Again, the Data Flow Task may contain multiple sources, destinations and transformations, and you may not want to skip validation for all of them. In that case you can be more granular and set just the ValidateExternalMetadata property of the OLEDB Source to FALSE.


Hi there!

Photo: Looking through binoculars / En spanare, by Kristina Alexanderson – https://www.flickr.com/photos/kalexanderson/5696097036/

Hi there, fellow learner!

Welcome to this blog with articles about SQL Server, TSQL, SSIS, SSRS, SSAS, Power BI, Databases, Data Visualization, Tips, Administration, and Productivity.

Mostly.

Check out the post categories and the tag cloud in the sidebar to get a feel for what else you can expect to read here. I have contributed scripts and tools to MSDN, TechNet, and Wikis; check out the Download section for a list.

Do you like awesome learning opportunities and resources? Me too! I often share that kind of stuff on Twitter.

Are you facing a challenge in learning about SQL Server? Tell me about it. I love to listen and might have some input. Let me know via comments, email or a tweet.

Do subscribe via email or connect on social media so that we can keep in touch. Scroll to find the links at the bottom of the page.

By the way, if you were wondering about the name of the blog –

erudition
[er-yoo-dish-uhn, er-oo-]
noun
knowledge acquired by study, research, etc.; learning; scholarship.