Thursday, March 26, 2015

System backup software triggers unwanted SQL backups

Problem

Backup software that is used to back up the Windows servers on which SQL Server is running will sometimes trigger a SQL Server backup automatically. Even if you have set up a neat SQL backup schedule in SQL Server Agent with full, differential and transaction log backups, every time this server backup software kicks in it takes another full backup of your databases. The backup files you created may then be the wrong ones if you ever need to restore from your full, differential and transaction log backups, because the backup chain you set up is broken by the unwanted backup running in the background.

You do not see the unwanted backups in the SQL Server Agent log, but you will find them in the standard SQL Server logs. The SQL Server log typically shows lines like these:

  • Database backed up. Database: master, creation date(time): 2015/02/19(16:52:39), pages dumped: 1356, first LSN: 388:415:117, last LSN: 388:465:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{9603DF6C-C7E9-4A19-BAAF-C827EFE22023}7'}).
  • I/O was resumed on database master. No user action is required.
  • I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

The same messages are repeated for all the other system and user databases on that particular server.
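To confirm that these backups were taken through the VSS interface rather than by your own Agent jobs, the msdb backup history can help. A minimal sketch of such a check (device_type 7 is the virtual device used for VSS/VDI backups):

---- full backups taken through a virtual device (typical for VSS/snapshot backups)
SELECT bs.database_name,
       bs.backup_start_date,
       bs.is_snapshot,                -- 1 = snapshot backup
       bmf.device_type                -- 7 = virtual device
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bmf.media_set_id = bs.media_set_id
WHERE bs.type = 'D'                   -- full database backups
  AND bmf.device_type = 7
ORDER BY bs.backup_start_date DESC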

Cause

The SQL Server service called ‘SQL Server VSS Writer’ is used by the Windows backup software to trigger the SQL Server backup. From SQL Server Books Online:

‘When running, Database Engine locks and has exclusive access to the data files. When the SQL Writer Service is not running, backup programs running in Windows do not have access to the data files, and backups must be performed using SQL Server backup. Use the SQL Writer Service to permit Windows backup programs to copy SQL Server data files while SQL Server is running.’

Solution

In the Services management console, stop and disable the ‘SQL Server VSS Writer’ service.

image

Working with environments on a deployed SSIS 2012 Project

When an SSIS project is deployed from SQL Server Data Tools to the Integration Services catalog, the project and package connections and parameters are stored together with the project and the packages. Each parameter or connection property in the catalog can be reconfigured by means of environments. With environments properly set up, developers do not need to worry about changing connections or parameters when they release an upgrade to the production server, because the production environment will automatically replace the test values in the connections and parameters with the values defined for production.

Catalog folder after 1st time deployment

An example of a project in the catalog with 1 package; no environments have been created yet.

image

Create Environments

In this example we will create 2 new environments with a SQL script: Test and Production.

---- create environment
EXEC [SSISDB].[catalog].[create_environment] @environment_name=N'Production', @environment_description=N'', @folder_name=N'SSIS_2012_Test'
GO
EXEC [SSISDB].[catalog].[create_environment] @environment_name=N'Test', @environment_description=N'', @folder_name=N'SSIS_2012_Test'
GO

After the script is executed, the 2 new environments are visible in the Environments folder.


image


Create environment variables


For project parameters



There is a project parameter ‘DaysToLoad’ in the SSIS project with a value of 15. It is shown in the project configuration: right-click on the project and select ‘Configure…’.


image


Create a corresponding environment variable ‘prjDaysToLoad’ in both environments, either in the environment properties dialog or with a T-SQL script:



DECLARE @var int = 30
EXEC [SSISDB].[catalog].[create_environment_variable] @variable_name=N'prjDaysToLoad', @sensitive=False, @description=N''
    , @environment_name=N'Production', @folder_name=N'SSIS_2012_Test', @value=@var, @data_type=N'Int32'
EXEC [SSISDB].[catalog].[create_environment_variable] @variable_name=N'prjDaysToLoad', @sensitive=False, @description=N''
    , @environment_name=N'Test', @folder_name=N'SSIS_2012_Test', @value=@var, @data_type=N'Int32'
GO

Add a reference to the environments in the project


The project needs to be made aware of the available environments first.


You can do this by right-clicking on the project name, selecting ‘Configure…’ and adding the references in the References page. Or you can do it with a T-SQL script:



DECLARE @reference_id bigint
EXEC [SSISDB].[catalog].[create_environment_reference] @environment_name=N'Production', @reference_id=@reference_id OUTPUT, @project_name=N'SSIS_2012_TestOdata3', @folder_name=N'SSIS_2012_Test', @reference_type=R
EXEC [SSISDB].[catalog].[create_environment_reference] @environment_name=N'Test', @reference_id=@reference_id OUTPUT, @project_name=N'SSIS_2012_TestOdata3', @folder_name=N'SSIS_2012_Test', @reference_type=R
GO

The result in the Configure dialog window looks like this:


image
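The references can also be checked with a query against the SSISDB catalog views; a quick sketch:

---- which environments does the project reference?
SELECT p.name AS project_name,
       er.environment_name,
       er.reference_type            -- 'R' = relative (same folder), 'A' = absolute
FROM [SSISDB].[catalog].[environment_references] AS er
JOIN [SSISDB].[catalog].[projects] AS p
     ON p.project_id = er.project_id
WHERE p.name = N'SSIS_2012_TestOdata3'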



The environment variable created earlier can be seen in the ‘Production’ environment:


image



Reference the environment variable from a parameter in the Project


image


Or do it with a T-SQL script:



---- reference the environment from the project parameter
EXEC [SSISDB].[catalog].[set_object_parameter_value] @object_type=20, @parameter_name=N'DaysToLoad', @object_name=N'SSIS_2012_TestOdata3', @folder_name=N'SSIS_2012_Test', @project_name=N'SSIS_2012_TestOdata3', @value_type=R, @parameter_value=N'prjDaysToLoad'
GO


This is the result


image


Be careful: the name of the environment variable needs to be identical in both environments, otherwise the package cannot resolve the parameter when the other environment is selected.
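A quick way to verify that both environments contain the same variable names is to query the SSISDB catalog views; a minimal sketch:

---- list all variables per environment
SELECT e.name AS environment_name,
       v.name AS variable_name,
       v.type,
       v.value
FROM [SSISDB].[catalog].[environments] AS e
JOIN [SSISDB].[catalog].[environment_variables] AS v
     ON v.environment_id = e.environment_id
ORDER BY v.name, e.name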


Run the package manually from the Catalog with a reference to the environment


When a package in this project is run, we add a reference to one of the environments, ‘Production’ or ‘Test’.


Right-click on a package and select ‘Execute…’. The Execute Package dialog will warn that no environment is selected for the parameter that has a reference:


image


Check the Environment checkbox and select one of the environments


image


When OK is clicked, the package runs with the parameter value that comes from the selected environment.
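The same execution can also be started with T-SQL, passing the environment reference explicitly. A minimal sketch against the SSISDB catalog; the package name ‘Package.dtsx’ is only a placeholder for one of the packages in the project:

---- look up the reference id of the 'Test' environment for this project
DECLARE @reference_id bigint, @execution_id bigint
SELECT @reference_id = er.reference_id
FROM [SSISDB].[catalog].[environment_references] AS er
JOIN [SSISDB].[catalog].[projects] AS p ON p.project_id = er.project_id
WHERE p.name = N'SSIS_2012_TestOdata3' AND er.environment_name = N'Test'

---- create and start an execution that uses the environment
EXEC [SSISDB].[catalog].[create_execution] @folder_name=N'SSIS_2012_Test', @project_name=N'SSIS_2012_TestOdata3'
    , @package_name=N'Package.dtsx', @reference_id=@reference_id, @use32bitruntime=False, @execution_id=@execution_id OUTPUT
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO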


Run the package in a SQL Server Agent job with a reference to the environment


In a job step the procedure is similar: select the package, check the Environment checkbox and select one of the environments.


image


Conclusion


Once SSIS projects are deployed from a development environment to a production environment, you do not need to change parameter or connection settings in your Visual Studio project before the deployment. In the Integration Services catalog you can set up one or more environments with variables for parameters and connections that differ from the development environment. The developer does not have to worry about deploying to the production environment (either directly or through a deployment script), because the DBA can set up an environment in the catalog so that the packages connect to databases and use parameters adapted to that environment.


On a production server I tend to create only the ‘Production’ environment, and on a test or QA server only the ‘Test’ environment. But you can still use several different environments on one production server, for example when the packages need to address ETL operations on several production servers.

Schedule to recycle the SQL Server Logs

A new SQL Server log file is created each time the SQL Server service is restarted. On a production server that is rarely restarted, this can result in enormous log files.

Below is a script to schedule a weekly recycle of the SQL Server logs. The number of log files to keep is set to 12 before they are reused, giving roughly a 3-month history of the logs.

Other combinations are possible, e.g. a daily schedule with the number of files set to 30; this way the individual log files are smaller and the history is one month long (a sketch of the daily variant follows the script below).

---- set SQL server log recycle property to Recycle after 12 files
USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 12
GO
---- Create SQL Server Agent Job to do a weekly recycle (Saturday 12:00) of the SQL Server log
USE [msdb]
GO
DECLARE @jobId BINARY(16)
EXEC  msdb.dbo.sp_add_job @job_name=N'MSOL Recycle SQL Server Logs', 
        @enabled=1, 
        @notify_level_eventlog=0, 
        @notify_level_email=2, 
        @notify_level_netsend=2, 
        @notify_level_page=2, 
        @delete_level=0, 
        @category_name=N'Database Maintenance', 
        @owner_login_name=N'sa', 
        @notify_email_operator_name=N'ICT Helpdesk', @job_id = @jobId OUTPUT
select @jobId
GO
EXEC msdb.dbo.sp_add_jobserver @job_name=N'MSOL Recycle SQL Server Logs', @server_name = N'VMXYZ'
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_add_jobstep @job_name=N'MSOL Recycle SQL Server Logs', @step_name=N'RecycleErrorLog', 
        @step_id=1, 
        @cmdexec_success_code=0, 
        @on_success_action=1, 
        @on_fail_action=2, 
        @retry_attempts=0, 
        @retry_interval=0, 
        @os_run_priority=0, @subsystem=N'TSQL', 
        @command=N'EXEC sp_cycle_errorlog', 
        @database_name=N'master', 
        @flags=0
GO
USE [msdb]
GO
EXEC msdb.dbo.sp_update_job @job_name=N'MSOL Recycle SQL Server Logs', 
        @enabled=1, 
        @start_step_id=1, 
        @notify_level_eventlog=0, 
        @notify_level_email=2, 
        @notify_level_netsend=2, 
        @notify_level_page=2, 
        @delete_level=0, 
        @description=N'', 
        @category_name=N'Database Maintenance', 
        @owner_login_name=N'sa', 
        @notify_email_operator_name=N'ICT Helpdesk', 
        @notify_netsend_operator_name=N'', 
        @notify_page_operator_name=N''
GO
USE [msdb]
GO
DECLARE @schedule_id int
EXEC msdb.dbo.sp_add_jobschedule @job_name=N'MSOL Recycle SQL Server Logs', @name=N'Weekly Saturday 12:00', 
        @enabled=1, 
        @freq_type=8, 
        @freq_interval=64, 
        @freq_subday_type=1, 
        @freq_subday_interval=0, 
        @freq_relative_interval=0, 
        @freq_recurrence_factor=1, 
        @active_start_date=20150326, 
        @active_end_date=99991231, 
        @active_start_time=120000, 
        @active_end_time=235959, @schedule_id = @schedule_id OUTPUT
select @schedule_id
GO
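For the daily variant mentioned above, only the registry value and the schedule differ. A minimal sketch, assuming the same job created by the script above (remove or disable the weekly schedule when adding this one):

---- keep 30 log files
USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 30
GO
---- recycle every day at 12:00
USE [msdb]
GO
EXEC msdb.dbo.sp_add_jobschedule @job_name=N'MSOL Recycle SQL Server Logs', @name=N'Daily 12:00', 
        @enabled=1, 
        @freq_type=4,              -- daily
        @freq_interval=1,          -- every 1 day
        @freq_subday_type=1,       -- at the specified time
        @active_start_date=20150326, 
        @active_end_date=99991231, 
        @active_start_time=120000, 
        @active_end_time=235959
GO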



Wednesday, March 25, 2015

SSIS 2012 package with Odata source–execution failed (part 3)

In the post SSIS 2012 package with Odata source-failure (part 2) I was able to deploy my package with an OData source, but the package failed to run.

Change from Project deployment to Package Deployment

I converted my project with package connections from project deployment to package deployment. Then I deployed the package to the MSDB package store on the server: I connected with SSMS to Integration Services as an administrator, added a new folder for my package under the MSDB folder and imported the dtsx into the MSDB package store. The import worked. Still within SSMS and connected to Integration Services, I right-clicked on the package in the MSDB store and selected 'Run Package'. In the Execute Package dialog window I clicked 'Execute' and the package actually ran: the OData connector was opened, it read the data and imported it into my database.

This experience made me realize what the real culprit was: the OData connector is a 32-bit connector in SQL Server 2012.

OData source is 32-bit

To prove this, I connected with SSMS to the SQL database server and, in the Integration Services catalog, right-clicked on the package that was deployed in project deployment mode and selected 'Execute...'. In the Execute dialog window, on the 'Advanced' tab, I checked '32-bit runtime', and the package ran.

Final conclusion

  1. Project deployment will work, but not with the OData source as a project connection; the OData connection must be defined in each package as a package connection.
  2. Run the deployed packages in 32-bit mode on the server.
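When a deployed package is started with T-SQL instead of the Execute dialog, the 32-bit runtime can be requested through the @use32bitruntime argument of create_execution. A minimal sketch; the folder and project names are the ones used elsewhere on this blog and the package name ‘Package.dtsx’ is only a placeholder:

---- start the package in the 32-bit runtime
DECLARE @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution] @folder_name=N'SSIS_2012_Test', @project_name=N'SSIS_2012_TestOdata3'
    , @package_name=N'Package.dtsx', @use32bitruntime=True, @execution_id=@execution_id OUTPUT
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO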

Tuesday, March 24, 2015

SSIS 2012 package with Odata source-failure (part 2)

Deployment failure

In my previous post SSIS 2012 package with Odata source–failed to deploy project I encountered a deployment problem with an SSIS project created in project deployment mode with VS 2012 and deployed to SQL Server 2012.

In this project I have an OData connection to a SharePoint OData service. I put the connection under the project connection managers so that I could share the same connection amongst several packages in the project. The deployment failed on the system on which I was developing, as well as on another SQL Server 2012 server.

This is a project connection:

image

Maybe the OData connection should not be in the project connection managers, but defined each time as a package connection. I decided to create a new SSIS project with one package, with the OData connection defined in the package.

This is a package connection:

image

Change OData from Project to Package connection

The project with the Package connections deploys without errors in project deployment mode.

image

The project and package can be seen in the Integration Services catalog:

image

Package deploys but executes with runtime errors

When I execute the package, it fails with the following error:

‘Error: The connection type "ODATA" specified for connection manager "OData Incidents" is not recognized as a valid connection manager type. This error is returned when an attempt is made to create a connection manager for an unknown connection type. Check the spelling in the connection type name.’

I shall have to investigate further.

SSIS 2012 package with Odata source–failed to deploy project

On a test server with VS 2012 and SQL Server 2012 installed, I created an SSIS project in project deployment mode. In the packages of the project I use the OData source for SSIS. The packages run fine in VS 2012, but when I deploy the project to the Integration Services catalog on the same server where I am running VS 2012, I get a deployment error:

‘Failed to deploy project. For more information, query the operation_messages view for the operation identifier '25084'. (Microsoft SQL Server, Error: 27203)’

image

To find out the cause of the problem, you can connect with SSMS to the SSISDB database where the SSIS projects are stored, and select the records with that operation_id from catalog.operation_messages, as suggested in the above error message:

use SSISDB
go

select * from catalog.operation_messages where operation_id = 25084



The message column in the result contains the cause of the error:


‘Failed to deploy the project. Fix the problems and try again later.:Unable to create the type with the name 'ODATA'.’


In my case the deployment of SSIS packages that use the OData source failed on the same server where I had tested them successfully with VS 2012.


Now I have to find out what is wrong; see SSIS 2012 package with Odata source-failure (part 2).

Process SSAS cube in good order with SSIS package

Based on the article SSIS Package to Process SSAS Cube by Benny Austin, I have decided to process the cubes I have to maintain in the same order as suggested in his article:

  1. Process Dimensions
    1. Process default
    2. Process Update
  2. Process Facts
    1. Process default
      1. Process Partitions – default
      2. Process Measure groups – default
      3. Process Cubes – default
    2. Process Data
      1. Process Partitions – data
      2. Process Measure groups – data
      3. Process Cubes – data
  3. Process Index
    1. Process Dimensions - index
    2. Process Cube measures - index
    3. Process Partitions – index

A screenshot of what the SSIS package looks like in VS 2012:

image