Thursday, June 28, 2012

How to deadlock SQL Server?

PROBLEM:
How to deadlock SQL Server?

SOLUTION:
-- step 1: open a new query window (window 1) and run only the following 3 lines
begin transaction
update [AdventureWorks].[Person].[Address]
set AddressLine1='aaa' where AddressID=1

-- step 2: open a new query window (window 2) and run only the following 5 lines
-- (the second UPDATE will block, waiting for the lock window 1 holds on AddressID=1)
begin transaction
update [AdventureWorks].[Person].[Address]
set AddressLine1='bbb' where AddressID=2
update [AdventureWorks].[Person].[Address]
set AddressLine1='aaa' where AddressID=1

-- step 3: go back to your first query window (window 1) and run the following 2 lines
update [AdventureWorks].[Person].[Address]
set AddressLine1='bbb' where AddressID=2

-- step 4: go back to window 2; it was chosen as the deadlock victim and shows the following error
Msg 1205, Level 13, State 51, Line 4
Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
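In application code, the usual way to handle error 1205 is to catch it and rerun the transaction, as the error message suggests. A minimal sketch (the retry limit, delay, and the UPDATE itself are illustrative assumptions, and THROW requires SQL Server 2012 or later):

```sql
-- Hedged sketch: retry a transaction when it is chosen as a deadlock victim.
DECLARE @RetryCount int = 0;
WHILE @RetryCount < 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE [AdventureWorks].[Person].[Address]
        SET AddressLine1 = 'aaa' WHERE AddressID = 1;
        COMMIT TRANSACTION;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205           -- deadlock victim: wait briefly, then retry
        BEGIN
            SET @RetryCount = @RetryCount + 1;
            WAITFOR DELAY '00:00:01';
        END
        ELSE
            THROW;                         -- any other error: re-raise it
    END CATCH
END
```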


More about deadlocking here
How to track down deadlocks using SQL Server Profiler here
And how to detect and end deadlocks here

If you find this solution useful, you are welcome to click one of the ads on this page. Thanks!

Thursday, June 21, 2012

Shrink all SQL Server databases' transaction log files

PROBLEM:
You want to shrink the transaction log files of all SQL Server databases to free up some space

SOLUTION:
Use the following script to shrink the transaction log files of all databases. You can also schedule it as a maintenance job using SQL Server Agent. Note that the script temporarily switches each database to the SIMPLE recovery model, which breaks the transaction log backup chain; take a full or differential backup afterward if you rely on log backups.

DECLARE @Log_name VARCHAR(1000)
DECLARE @Db_name VARCHAR(1000)
DECLARE @Recovery_model_desc VARCHAR(1000)
DECLARE @SQL nvarchar(2000)
DECLARE @ParmDefinition nvarchar(1000)
DECLARE @SizeAfter int

DECLARE db_cursor CURSOR FOR
SELECT
       F.NAME AS [LOG_NAME],
       DB.NAME AS [DB_NAME],
       DB.RECOVERY_MODEL_DESC AS [RECOVERY_MODEL_DESC]
FROM
       MASTER.SYS.MASTER_FILES F INNER JOIN MASTER.SYS.DATABASES DB
       ON DB.DATABASE_ID = F.DATABASE_ID
WHERE F.TYPE = 1 AND DB.NAME <> 'tempdb'  -- TYPE = 1: log files (safer than FILE_ID = 2)

OPEN db_cursor 
       FETCH NEXT FROM db_cursor INTO @Log_name, @Db_name, @Recovery_model_desc
       WHILE @@FETCH_STATUS = 0  -- 0 = the last fetch succeeded
       BEGIN 
              -- log size (in 8-KB pages) before the shrink
              SET @SQL = N'
              SELECT F.SIZE AS [SIZE_BEFORE]
              FROM MASTER.SYS.MASTER_FILES F INNER JOIN MASTER.SYS.DATABASES DB
              ON DB.DATABASE_ID = F.DATABASE_ID
              WHERE F.NAME = ''' + @Log_name + N''' AND DB.NAME = ''' + @Db_name + N'''
              '

              -- the log can only be shrunk fully under the SIMPLE recovery model
              SET @SQL = @SQL + N'
              ALTER DATABASE [' + @Db_name + N'] SET RECOVERY SIMPLE WITH NO_WAIT
              USE [' + @Db_name + N']
              DBCC SHRINKFILE ([' + @Log_name + N'], 10)';

              -- log size after the shrink, returned through the OUTPUT parameter
              SET @SQL = @SQL + N'
              SELECT @Size = F.SIZE
              FROM MASTER.SYS.MASTER_FILES F INNER JOIN MASTER.SYS.DATABASES DB
              ON DB.DATABASE_ID = F.DATABASE_ID
              WHERE F.NAME = ''' + @Log_name + N''' AND DB.NAME = ''' + @Db_name + N'''
              '

              -- restore the database's original recovery model
              SET @SQL = @SQL + N'
              ALTER DATABASE [' + @Db_name + N'] SET RECOVERY ' + @Recovery_model_desc + N' WITH NO_WAIT
              '

              SET @ParmDefinition = N'@Size int OUTPUT';
              EXECUTE sp_executesql @SQL, @ParmDefinition, @Size = @SizeAfter OUTPUT;
              PRINT @Db_name + N': log size after shrink = ' + CAST(@SizeAfter * 8 / 1024 AS nvarchar(20)) + N' MB'
              FETCH NEXT FROM db_cursor INTO @Log_name, @Db_name, @Recovery_model_desc
       END 

CLOSE db_cursor
DEALLOCATE db_cursor


Wednesday, June 13, 2012

What do you need to know before starting to make query performance improvements in SQL Server?

PROBLEM:
What do you need to know before starting to make query performance improvements in SQL Server?

SOLUTION:
Before you start working on performance improvements in SQL Server, you need to understand the following three statements and their scope:
1. Use CHECKPOINT to force all dirty pages for the current database to be written to disk and clean the buffers. The database engine performs modifications to database pages in memory (in the buffer cache) and does not write these pages to disk after every change. Rather, the database engine periodically issues a CHECKPOINT on each database. A checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk and, also, records information about the transaction log.
2. Use DBCC FREEPROCCACHE to clear the procedure cache. Freeing the procedure cache would cause, for example, an ad-hoc SQL statement to be recompiled rather than reused from the cache. If observing through SQL Profiler, one can watch the cache remove events occur as DBCC FREEPROCCACHE goes to work. DBCC FREEPROCCACHE will invalidate all stored procedure plans that the optimizer has cached in memory and force SQL Server to compile new plans the next time those procedures are run.
3. Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server. DBCC DROPCLEANBUFFERS serves to empty the data cache. Any data loaded into the buffer cache due to the prior execution of a query is removed. To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache.

CHECKPOINT
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

Scope - CHECKPOINT is database-specific, but DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS are server-wide: FREEPROCCACHE clears the plan cache for the entire instance, and DROPCLEANBUFFERS drops every clean page from the entire buffer pool. Executing them affects all sessions of all users on the instance, so if you run them on a shared "performance" server, the other users that are currently connected are affected as well.
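Putting the three statements together, a typical cold-cache measurement might look like the sketch below. The SELECT is only a placeholder for the query under test; run this on a dev/test server, never in production.

```sql
-- Hedged sketch: measure a query against cold caches (dev/test servers only).
CHECKPOINT;              -- write dirty pages to disk so DROPCLEANBUFFERS can drop them
DBCC DROPCLEANBUFFERS;   -- empty the data (buffer) cache
DBCC FREEPROCCACHE;      -- empty the plan cache

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT AddressID, AddressLine1     -- placeholder query under test
FROM [AdventureWorks].[Person].[Address]
WHERE AddressID = 1;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```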


Tuesday, June 5, 2012

SQL Server - Optimizing query performance

PROBLEM:
How to optimize SQL Server query performance?

SOLUTION:
1. Turn on the execution plan, and statistics
2. Use Clustered Indexes
3. Use Indexed Views
4. Use Covering Indexes
5. Keep your clustered index small
6. Avoid cursors
7. Archive old data
8. Partition your data correctly
9. Remove user-defined inline scalar functions
10. Use APPLY
11. Use computed columns
12. Use the correct transaction isolation level
More details..

1. Aim for the ideal of only one single SQL call per page request
2. Turn on SQL Profiler
3. Preprocess expensive queries
4. Review query plans to ensure optimally performing SQL statements
5. Identify table scans or index scans
6. Identify key lookups
7. Check the order of columns in your where clause
8. Ensure the where clause is ordered most restrictive to least restrictive
9. Remove CTEs
10. Remove Temp Tables
11. Remove Cursors
12. Reduce the number of joins in your queries
13. Remove all declared variables
More details..
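To illustrate items 4 and 6 from the second list (covering indexes and key lookups): an index that INCLUDEs the selected columns lets the query be answered from the index alone, so the plan shows no key lookups. The table and columns below are made up for illustration.

```sql
-- Hedged sketch of a covering index; dbo.Orders and its columns are illustrative.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
ON dbo.Orders (CustomerID)        -- key column used in the WHERE clause
INCLUDE (OrderDate, TotalDue);    -- non-key columns the query selects

-- This query can now be satisfied entirely from the index (no key lookups):
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;
```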



Azure database on-going maintenance

PROBLEM:
Once your SQL Azure database is in production, what are the tasks required for ongoing maintenance?

SOLUTION:
1. Backing up SQL Azure data

For the latest information on SQL Azure backup and restore strategy and disaster recovery techniques, see the Business Continuity in SQL Azure article in the SQL Azure MSDN library.
Native backup and restore as provided by on-premise SQL Server is not currently supported by SQL Azure.

SQL Azure natively supports a Database Copy capability. This creates a new database in SQL Azure which is a transactionally consistent copy of an existing database. This is accomplished using special syntax within the CREATE DATABASE command:

CREATE DATABASE destination_database_name
AS COPY OF [source_server_name.]source_database_name

For full discussion of this feature, see Copying Databases in SQL Azure.
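The copy runs asynchronously, so the new database may not be ready immediately. If I recall correctly, SQL Azure exposes a DMV in the master database for monitoring in-progress copies; treat the view name below as an assumption to verify against the documentation.

```sql
-- Run while connected to master; lists database copies still in progress
-- (sys.dm_database_copies is assumed here, not taken from the text above).
SELECT * FROM sys.dm_database_copies;
```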

The data in a SQL Azure database can be backed up on-premise, or to another SQL Azure database, using BCP, SSIS, or the .NET SqlBulkCopy API.
These approaches do not back up the structure and objects in the database, so you will need to generate build scripts to accompany the data if you wish to bootstrap the database into a new environment.
For a thorough discussion of the options for moving data in bulk out of (or into) a SQL Azure database, see this blog post and this discussion of BCP.

2. Defragmenting Indexes and other routine maintenance

In SQL Azure, as with on-premise SQL Server, indexes can become fragmented over time. They should be periodically checked and, if necessary, rebuilt. The syntax for the ALTER INDEX statement is restricted in SQL Azure, as compared to on-premise SQL Server, but the syntax for rebuilding indexes is identical. Reorganizing indexes is not supported by SQL Azure.
Both online and offline index rebuilds are supported by SQL Azure.
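For example, rebuilding all indexes on a table uses the same syntax as on-premise SQL Server; the table name below is illustrative.

```sql
-- Rebuild every index on a table; ONLINE = ON avoids long blocking on busy tables.
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (ONLINE = ON);
```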

3. Keeping statistics current
There are no significant differences in statistics management between SQL Azure and on-premise SQL Server. The following are supported:
  • UPDATE STATISTICS
  • sp_updatestats
  • sp_autostats
The ALTER DATABASE statement, however, cannot be used to change the database-level settings for automatic statistics creation and updating. These settings both default to True (ON) for SQL Azure databases and cannot be changed. If you truly find it necessary to suppress auto stats updates, use the sp_autostats stored procedure to control this behavior at the table/index level. In the vast majority of circumstances, you can just leave the auto update behavior unchanged.
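A short sketch of controlling auto-update at the table level with sp_autostats (the table name is illustrative):

```sql
-- Show the current auto-update setting for a table's statistics:
EXEC sp_autostats 'dbo.Orders';

-- Suppress automatic statistics updates on that table:
EXEC sp_autostats 'dbo.Orders', 'OFF';
```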

4. Tracking Usage and Billing

Depending on the subscription model used by your customer, SQL Azure may be billed on a consumption basis. This makes it valuable to track consumption of bandwidth and space on an ongoing basis. Although there are reports within the Azure billing portal which help track this, customers often wish to do some kind of customized tracking of usage expense. SQL Azure provides DMVs to enable this.

Bandwidth usage from your SQL Azure account is available in a DMV called sys.bandwidth_usage. This DMV is only available in the master database. Since three-part names are invalid in SQL Azure, you will need to be connected to the master database to query the sys.bandwidth_usage view.

Database usage is available in a DMV called sys.database_usage. This DMV is also available only in the master database.
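For example, while connected to the master database (the column comments are from memory and may vary):

```sql
-- Both views are only visible from the master database.
SELECT * FROM sys.bandwidth_usage;   -- ingress/egress per database over time
SELECT * FROM sys.database_usage;    -- database counts by SKU over time
```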

These bandwidth_usage and database_usage DMVs are helpful, but they only report past usage. You may want to use T-SQL to check the size of a SQL Azure database as it is right now. Unfortunately, the sp_spaceused stored procedure is not available in SQL Azure. Also, sys.database_files is among those DMVs not exposed by SQL Azure. Another DMV, sys.dm_db_partition_stats, can help here though. The following T-SQL query will report the approximate size of the database.

SELECT (8.0 * SUM(reserved_page_count)) / 1024 AS 'Database Size (MB)'
FROM sys.dm_db_partition_stats

This query reports on the current database only, so you will need to run it while connected to the specific database you are interested in. Note that permission on sys.dm_db_partition_stats is denied in the master database, even for administrators.

5. Scheduling Maintenance Tasks
Scheduling maintenance (e.g., index rebuilds) will sometimes be required, especially for large or heavily accessed SQL Azure databases. Unfortunately, SQL Server Agent, the on-premise SQL Server scheduling engine, is not part of SQL Azure. This means there is no Azure-hosted method for scheduling tasks to occur.
For the time being, the only way to address this requirement is with an on-premise or non-Azure scheduling tool. A natural choice is, of course, an on-premise instance of SQL Server Agent. This works perfectly well and will make SQL Server DBA’s comfortable.
Using on-premise SQL Agent is fine for customers with existing SQL Server infrastructure. Those with no existing SQL licenses, however, may wish to avoid buying database software just to get a scheduling engine. In this case, Windows Task Scheduler can be used to invoke sqlcmd scripts, PowerShell scripts, or custom executables which reach out to SQL Azure databases to perform maintenance tasks.
