
Sunday, February 9, 2014

Find TOP 20 problems in your SQL Server

PROBLEM:
You need to investigate problems with your SQL Server database and gather useful information about its performance.

SOLUTION:
First, do not lock anything, and do not get held up by any locks
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

Show the version of SQL Server
SELECT @@VERSION 

Get property info about the server instance. IsIntegratedSecurityOnly: 1 = Windows authentication only. ProductLevel contains the RTM or service-pack level - is the SP the expected one?
SELECT 
    SERVERPROPERTY('ServerName') AS [ServerName]
    ,SERVERPROPERTY('InstanceName') AS [InstanceName]
    ,SERVERPROPERTY('MachineName') AS [MachineName]
    ,SERVERPROPERTY('Edition') AS [Edition]
    ,SERVERPROPERTY('ProductVersion') AS [ProductVersion]
    ,SERVERPROPERTY('ProductLevel') AS [ProductLevel]
    ,SERVERPROPERTY('IsIntegratedSecurityOnly') AS [IsIntegratedSecurityOnly]
    ,SERVERPROPERTY('IsClustered') AS [IsClustered]

Get OS information: logical/physical CPU counts, memory, and last start time are useful
SELECT cpu_count AS [Logical CPUs]
, cpu_count / hyperthread_ratio AS [Physical CPUs]
, CAST(physical_memory_in_bytes / 1024.0 / 1024.0 /1024.0 AS DECIMAL(28,2)) AS [Memory (GB)]
, DATEADD(ss, -(ms_ticks / 1000), GetDate()) AS [Start DateTime]
--  , sqlserver_start_time AS [Start DateTime]  -- In 2008+
FROM sys.dm_os_sys_info

Show SQL Server database info. Check compatibility level. is_read_committed_snapshot_on (1 is good for concurrency). recovery_model (want simple on non-prod boxes. Bulk_logged for prod). page_verify - want CHECKSUM
SELECT name, compatibility_level, recovery_model_desc, page_verify_option_desc, is_read_committed_snapshot_on
FROM sys.databases ORDER BY name

Show SQL Server configuration info. Priority boost - should be off (0). Cost threshold for parallelism (evaluate with MAXDOP). Max degree of parallelism (1 or 8?). Max server memory (MB) - evaluate in context of server memory. CLR enabled - generally disable, unless needed. Optimize for ad hoc workloads - often recommended to have on
SELECT name, description, value_in_use 
FROM sys.configurations 
WHERE NAME IN(
  'clr enabled'
, 'max degree of parallelism'   
, 'cost threshold for parallelism'  
, 'max server memory (MB)'          -- Set appropriately
, 'optimize for ad hoc workloads'   -- should be 1.
, 'priority boost'                  -- should be 0
)
ORDER BY name

Identify what is causing waits
WITH Waits AS
(SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK'
,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR', 'LOGMGR_QUEUE','CHECKPOINT_QUEUE'
,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP','CLR_MANUAL_EVENT'
,'CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT'
,'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP','ONDEMAND_TASK_QUEUE'))

SELECT W1.wait_type, 
CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
INNER JOIN Waits AS W2
ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 99; -- percentage threshold

How much wait time is spent on signal (CPU) waits rather than resource waits - above 20% signal waits suggests CPU pressure
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits],
CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits]
FROM sys.dm_os_wait_stats WITH (NOLOCK) OPTION (RECOMPILE);

CPU usage per database. Is a single DB hogging CPU? Maybe it needs a separate server
--USE [YourDatabaseName]
WITH DB_CPU_Stats
AS
(SELECT DatabaseID, DB_Name(DatabaseID) AS [DatabaseName], 
 SUM(total_worker_time) AS [CPU_Time_Ms]
 FROM sys.dm_exec_query_stats AS qs
 CROSS APPLY (SELECT CONVERT(int, value) AS [DatabaseID] 
              FROM sys.dm_exec_plan_attributes(qs.plan_handle)
              WHERE attribute = N'dbid') AS F_DB
 GROUP BY DatabaseID)
SELECT ROW_NUMBER() OVER(ORDER BY [CPU_Time_Ms] DESC) AS [row_num],
       DatabaseName, [CPU_Time_Ms], 
       CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) 
       OVER() * 100.0 AS DECIMAL(5, 2)) AS [CPUPercent]
FROM DB_CPU_Stats
WHERE DatabaseID > 4 -- system databases
AND DatabaseID <> 32767 -- ResourceDB
ORDER BY [CPUPercent] DESC OPTION (RECOMPILE);

Memory usage per database. Is a single DB hogging memory? Maybe needs a separate server
--USE [YourDatabaseName]
SELECT 
    ISNULL(DB_NAME(database_id), 'ResourceDb') AS DatabaseName
    , CAST(COUNT(row_count) * 8.0 / (1024.0) AS DECIMAL(28,2))  AS [Size (MB)] 
    , CAST(COUNT(row_count) * 8.0 / (1024.0) / 1024.0 AS DECIMAL(28,2)) AS [Size (GB)] 
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY [Size (MB)] DESC

Check physical disk latency per database file. Average latency under 20ms is OK; anything above that needs investigating
--USE [YourDatabaseName]
SELECT DB_NAME(database_id) AS DatabaseName
 , file_id
 , io_stall_read_ms / num_of_reads AS 'Average read time'
 , io_stall_write_ms / num_of_writes AS 'Average write time'
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
WHERE num_of_reads > 0 
AND num_of_writes > 0
ORDER BY DatabaseName

Check reads per write, by database. Should a DB be separated out, or tuned for OLAP vs. OLTP? OLAP often needs more indexes. A NULL DatabaseName means the query was run ad hoc or as a prepared statement
--USE [YourDatabaseName]
SELECT TOP 10 
        DB_NAME(qt.dbid) AS DatabaseName
        , SUM(total_logical_reads) AS [Total Reads]
        , SUM(total_logical_writes) AS [Total Writes]
        , SUM(total_logical_reads) / CASE WHEN SUM(total_logical_writes) = 0 THEN 1 ELSE SUM(total_logical_writes) END AS [Reads Per Write]
        , SUM(qs.execution_count) AS [Execution count]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
GROUP BY DB_NAME(qt.dbid)
ORDER BY [Reads Per Write] DESC;

Check space used. What is DB size, and how much is unused?
--USE [YourDatabaseName]
EXEC sp_spaceused

-- Database statistics settings
-- is_auto_create_stats_on should be on (1)
-- is_auto_update_stats_on should be on (1)
-- is_auto_update_stats_async_on should be off (0)
SELECT name
, is_auto_create_stats_on AS [AutoCreateStatistics]
, is_auto_update_stats_on AS [AutoUpdateStatistics]
, is_auto_update_stats_async_on AS [AutoUpdateStatisticsAsync]
FROM master.sys.databases
ORDER BY name

Check the state of your statistics. When were they last updated? Consider a smarter statistics-maintenance job
--USE [YourDatabaseName]
SELECT 
 ss.name AS SchemaName
 , st.name AS TableName
 , s.name AS IndexName 
 , STATS_DATE(s.id,s.indid) AS 'Statistics Last Updated'
 , s.rowcnt AS 'Row Count'    
 , s.rowmodctr AS 'Number Of Changes' 
 , CAST((CAST(s.rowmodctr AS DECIMAL(28,8))/CAST(s.rowcnt AS DECIMAL(28,2)) * 100.0)
     AS DECIMAL(28,2)) AS '% Rows Changed'
FROM sys.sysindexes s 
INNER JOIN sys.tables st ON st.[object_id] = s.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE s.id > 100 -- user table/index
AND s.indid > 0  -- Clustered and non-clustered indexes
AND s.rowcnt >= 500 -- want at least 500 rows
ORDER BY [Statistics Last Updated] DESC -- a quoted string here is treated as a constant and raises an error

Missing indexes can be indicative of other bad practices in a given DB
--USE [YourDatabaseName]
SELECT 
 DB_NAME(database_id) AS DatabaseName
 , COUNT(*) AS [Missing Index Count]
FROM sys.dm_db_missing_index_details
GROUP BY DB_NAME(database_id)
ORDER BY [Missing Index Count] DESC

What are the most costly missing indexes?
--USE [YourDatabaseName]
SELECT TOP 40 -- Too many INCLUDE cols in top entries...  
 ROUND(avg_total_user_cost * avg_user_impact   
  * (user_seeks + user_scans),0) AS [Total Cost]  
 , [statement] AS [Table Name]
 , equality_columns
 , inequality_columns
 , included_columns
FROM  sys.dm_db_missing_index_groups g 
INNER JOIN sys.dm_db_missing_index_group_stats s 
ON s.group_handle = g.index_group_handle 
INNER JOIN sys.dm_db_missing_index_details d 
ON d.index_handle = g.index_handle
ORDER BY [Total Cost] DESC

Identify missing PKs. Most tables should have a primary key (for referential integrity and foreign keys), unless there is a good reason not to
--USE [YourDatabaseName]
SELECT DB_NAME(DB_ID()) AS DatabaseName, SCHEMA_NAME(schema_id) AS SchemaName,name AS TableName
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID,'TableHasPrimaryKey') = 0
ORDER BY SchemaName, TableName;

Identify heaps that have non-clustered indexes. A heap with non-clustered indexes is often better clustered on one of them. First create an empty temp table to hold the results (WHERE 1=2 copies only the structure)
--USE [YourDatabaseName]
SELECT 
 DB_NAME(DB_ID()) AS DatabaseName
 , ss.name AS SchemaName
 , st.NAME AS TableName
 , i.NAME AS IndexName
 , i.type_desc
 , si.rowcnt
INTO #HeapWithIndexes2
FROM sys.indexes I 
LEFT JOIN sys.sysindexes SI ON SI.indid = I.index_Id AND i.object_id = si.id
INNER JOIN sys.tables st ON st.[object_id] = si.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE 1=2

Find tables that have both a heap and at least one non-clustered index...
--USE [YourDatabaseName]
SELECT DISTINCT ss.name AS schemaName, st.name
INTO #Heaps
FROM sys.indexes I 
LEFT JOIN sys.sysindexes SI ON SI.indid = I.index_Id 
       AND i.object_id = si.id
INNER JOIN sys.tables st ON st.[object_id] = si.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE si.id > 100 
AND i.type_desc = 'HEAP'
INTERSECT
SELECT DISTINCT ss.name AS schemaName, st.name
FROM sys.indexes I 
LEFT JOIN sys.sysindexes SI ON SI.indid = I.index_Id 
       AND i.object_id = si.id
INNER JOIN sys.tables st ON st.[object_id] = si.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
WHERE si.id > 100 
AND i.type_desc != 'HEAP'

Now collect all indexes of those heap tables...
--USE [YourDatabaseName]
INSERT INTO #HeapWithIndexes2 
SELECT
 DB_NAME(DB_ID()) AS DatabaseName
 , ss.name AS SchemaName
 , st.NAME AS TableName
 , i.NAME AS IndexName
 , i.type_desc
 , si.rowcnt
FROM sys.indexes I 
LEFT JOIN sys.sysindexes SI ON SI.indid = I.index_Id AND i.object_id = si.id
INNER JOIN sys.tables st ON st.[object_id] = si.[id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = st.[schema_id]
INNER JOIN #Heaps h ON st.name = h.name AND ss.name = h.schemaName
WHERE si.id > 100 
 
SELECT * FROM #HeapWithIndexes2 ORDER BY DatabaseName, SchemaName, TableName, IndexName
 
DROP TABLE #Heaps
DROP TABLE #HeapWithIndexes2

Find the longest-running queries, allowing targeted improvements. The query plan may offer hints for improvements
SELECT TOP 20 
 CAST(total_elapsed_time / 1000000.0 AS DECIMAL(28, 2)) AS [Total Duration (s)]
 , CAST(total_worker_time * 100.0 / total_elapsed_time AS DECIMAL(28, 2)) AS [% CPU]
 , CAST((total_elapsed_time - total_worker_time)* 100.0 / total_elapsed_time AS DECIMAL(28, 2)) AS [% Waiting] 
 , execution_count
 , CAST(total_elapsed_time / 1000000.0 / execution_count AS DECIMAL(28, 2)) AS [Average Duration (s)]
 , SUBSTRING (qt.text,(qs.statement_start_offset/2) + 1, 
    ((CASE WHEN qs.statement_end_offset = -1 
    THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2 
    ELSE qs.statement_end_offset
    END - qs.statement_start_offset)/2) + 1) AS [Individual Query] 
 , SUBSTRING(qt.text,1,100) AS [Parent Query]
 , DB_NAME(qt.dbid) AS DatabaseName 
 , qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
INNER JOIN sys.dm_exec_cached_plans as cp 
      ON qs.plan_handle=cp.plan_handle
WHERE total_elapsed_time > 0
ORDER BY total_elapsed_time DESC

If you find this useful, you are welcome to press one of the ads in this page.. Thanks!

Show execution plan for all sessions in SQL Server

PROBLEM:
How to show execution plan for all sessions?

SOLUTION:
Run this before starting (warning: DBCC FREEPROCCACHE clears the entire plan cache for the instance, so avoid it on production servers)
DBCC FREEPROCCACHE

Show execution plan for all sessions
SELECT TOP 1000 qs.execution_count, 
      StoredProcedure = Db_name(qp.dbid) + '..' + Object_name(qp.objectid, qp.dbid), 
      StatementDefinition = Substring (st.text,
       ( qs.statement_start_offset / 2 ) + 1, 
         ( 
           ( CASE qs.statement_end_offset 
             WHEN -1 THEN Datalength(st.text) 
             ELSE qs.statement_end_offset 
             END - qs.statement_start_offset
           ) / 2
         ) + 1
      ), 
      query_plan, 
      st.text, 
      total_elapsed_time 
FROM
   sys.dm_exec_query_stats AS qs 
   CROSS apply sys.Dm_exec_sql_text(qs.sql_handle) AS st 
   CROSS apply sys.Dm_exec_query_plan (qs.plan_handle) qp 
WHERE
   st.encrypted = 0 
ORDER BY
   total_elapsed_time / QS.execution_count DESC, 
   qs.execution_count, 
   3 


Tuesday, August 6, 2013

Performance best practices for very large tables

PROBLEM:
What are the performance best practices for very large tables?

SOLUTION:
Here is a summary of best practices for querying, monitoring, and performance tuning for your very large tables:
  • Pay attention to both the results of a query and the execution plan designed to return those results. Make it a routine exercise to scan query plans to become more familiar with how SQL Server resolves queries.
  • De-normalize your schema where possible to reduce the need to join very large tables.
  • Use table partitioning for your largest tables, and be sure when querying that the optimizer uses partition elimination whenever possible.
  • Align secondary indexes with the same table partition function to reduce partitioned table maintenance.
  • Consider using a heap instead of a clustered index, especially with a table that partially mimics a clustered index via its partition function.
  • Design indexes to be covering for larger recurring queries whenever possible, especially when creating a filtered (i.e., partial) index.
  • Use an indexed view as an alternative to a denormalized table or to materialize an aggregated dataset. The more static the underlying data, the less overhead will be required to maintain the indexed views.
  • Use summary tables with automatic (scheduled) updates as an alternative to indexed views, especially when there is less significance in the currency of the data.
  • For query monitoring, focus on the quantitative measures of Disk I/O, CPU Time, and Query Duration.
  • Use the features of the Management Data Warehouse for overall server monitoring, and use the SQL Trace data collection set for query monitoring.
  • Focus your tuning efforts on SQL code that is repeatedly run, either as a stored procedure or a SQL script, and that consumes the most server resources. Data warehouse applications tend to place the greatest pressure on Disk I/O.
  • For tuning and management of large queries, consider saving historical copies of XML execution plans to disk for later reference.
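
As a sketch of the partitioning and index-alignment advice above (the table name, columns, and yearly boundary values are illustrative, not from the original post):

```sql
-- Hypothetical example: partition a large Orders table by year.
-- The partition function maps OrderDate values to partitions.
CREATE PARTITION FUNCTION pf_OrderYear (DATETIME)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01', '2014-01-01');

-- The partition scheme maps partitions to filegroups
-- (all to PRIMARY here, for simplicity).
CREATE PARTITION SCHEME ps_OrderYear
AS PARTITION pf_OrderYear ALL TO ([PRIMARY]);

-- Create the table on the partition scheme; the clustered key
-- must include the partitioning column.
CREATE TABLE dbo.Orders
(
    OrderId   INT      NOT NULL,
    OrderDate DATETIME NOT NULL,
    Amount    MONEY    NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderDate, OrderId)
) ON ps_OrderYear (OrderDate);

-- Align secondary indexes with the same partition scheme
-- to reduce partitioned-table maintenance.
CREATE NONCLUSTERED INDEX IX_Orders_Amount
ON dbo.Orders (Amount) ON ps_OrderYear (OrderDate);

-- A query that filters on OrderDate lets the optimizer
-- use partition elimination.
SELECT SUM(Amount)
FROM dbo.Orders
WHERE OrderDate >= '2013-01-01' AND OrderDate < '2014-01-01';
```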



Wednesday, February 27, 2013

Union vs. Union All

PROBLEM:
What happens when you combine the same SELECT statements with UNION (which removes duplicates) and UNION ALL (which keeps them)?

SOLUTION:
Let's create some test data:
CREATE TABLE #T (N INT);
GO
INSERT INTO #T
SELECT 1;
INSERT INTO #T
SELECT 2;
INSERT INTO #T
SELECT 2;
INSERT INTO #T
SELECT 3;
INSERT INTO #T
SELECT 3;
INSERT INTO #T
SELECT 3;
GO
SELECT *
FROM #T;


Now, let's query it:
SELECT * FROM #T
UNION ALL
SELECT * FROM #T
UNION
SELECT * FROM #T

SELECT * FROM #T
UNION
SELECT * FROM #T
UNION ALL
SELECT * FROM #T;



The operators evaluate left to right. In the first query the final UNION deduplicates the whole result, returning only the three distinct rows (1, 2, 3). In the second query the UNION first reduces the result to those three distinct rows, and the trailing UNION ALL then appends all six rows of #T, returning nine rows.


Wednesday, November 7, 2012

Find and drop duplicate indexes

PROBLEM:
You want to find and drop duplicate indexes

SOLUTION:
Find duplicate indexes with same key columns in the same order, and the same included columns but in any order
WITH IX AS
  ( SELECT OBJECT_ID AS ID,
           INDEX_ID AS INDID,
           NAME,

     (SELECT CASE KEYNO WHEN 0 THEN NULL ELSE COLID END AS [data()]
      FROM SYS.SYSINDEXKEYS AS K
      WHERE K.ID = I.OBJECT_ID
        AND K.INDID = I.INDEX_ID
      ORDER BY KEYNO,
               COLID
      FOR XML PATH('')) AS COLS,

     (SELECT CASE KEYNO WHEN 0 THEN COLID ELSE NULL END AS [data()]
      FROM SYS.SYSINDEXKEYS AS K
      WHERE K.ID = I.OBJECT_ID
        AND K.INDID = I.INDEX_ID
      ORDER BY COLID
      FOR XML PATH('')) AS INC
   FROM SYS.INDEXES AS I)
SELECT OBJECT_SCHEMA_NAME(C1.ID) + '.' + OBJECT_NAME(C1.ID) AS TABLE_NAME,
       C1.NAME AS INDEX_NAME,
       C2.NAME AS DUPLICATE_INDEX,
       'DROP INDEX [' + C2.NAME + '] ON [' + OBJECT_SCHEMA_NAME(C1.ID) + '].[' + OBJECT_NAME(C1.ID) + ']' AS DROP_STATEMENT
FROM IX AS C1
JOIN IX AS C2 ON C1.ID = C2.ID
AND C1.INDID < C2.INDID
AND C1.COLS = C2.COLS
AND C1.INC = C2.INC;


Wednesday, September 12, 2012

Tables row count, total space, used space and unused space


PROBLEM:
You want to find tables row count, total space (KB), used space (KB) and unused space (KB)

SOLUTION:
The following query returns the row count and size of tables in your database:
SELECT T.NAME AS TableName, P.ROWS AS RowCounts, SUM(A.TOTAL_PAGES) * 8 AS TotalSpaceKB, SUM(A.USED_PAGES) * 8 AS UsedSpaceKB, (SUM(A.TOTAL_PAGES) - SUM(A.USED_PAGES)) * 8 AS UnusedSpaceKB
FROM SYS.TABLES T
INNER JOIN SYS.INDEXES I ON T.OBJECT_ID = I.OBJECT_ID
INNER JOIN SYS.PARTITIONS P ON I.OBJECT_ID = P.OBJECT_ID
AND I.INDEX_ID = P.INDEX_ID
INNER JOIN SYS.ALLOCATION_UNITS A ON P.PARTITION_ID = A.CONTAINER_ID
GROUP BY T.NAME,
         P.ROWS
ORDER BY T.NAME

The following query returns the size of your database (in megabytes):
SELECT sum(reserved_page_count) * 8.0 / 1024
FROM sys.dm_db_partition_stats


Tuesday, September 11, 2012

Find all foreign keys that have cascade delete or update

PROBLEM:
How do you find all foreign keys that have cascade delete or update?

SOLUTION:
SELECT B.NAME AS TableName,
       A.NAME AS FKname,
       A.DELETE_REFERENTIAL_ACTION_DESC AS DeleteAction,
       A.UPDATE_REFERENTIAL_ACTION_DESC AS UpdateAction
FROM SYS.FOREIGN_KEYS A
JOIN SYS.TABLES B ON A.PARENT_OBJECT_ID = B.OBJECT_ID
WHERE A.DELETE_REFERENTIAL_ACTION = 1 -- 1 = CASCADE (2 = SET NULL, 3 = SET DEFAULT)
  OR A.UPDATE_REFERENTIAL_ACTION = 1
ORDER BY 1,
         2


Monday, September 10, 2012

View database roles, logins, users and permissions

PROBLEM:
How do I list all my database roles, logins, users and permissions?

SOLUTION:

Who has access to my Databases?
SELECT DP.NAME AS UserName,
       DP.TYPE_DESC AS UserType,
       SP.NAME AS LoginName,
       SP.TYPE_DESC AS LoginType
FROM SYS.DATABASE_PRINCIPALS DP
JOIN SYS.SERVER_PRINCIPALS SP ON DP.SID = SP.SID -- match on SID; principal IDs differ between server and database

Server Roles (view which logins are tied to which server roles)
SELECT P1.NAME AS LoginName,
       P1.TYPE_DESC AS PrincipalType,
       P2.NAME AS RoleName,
       P2.TYPE_DESC AS RoleType
FROM SYS.SERVER_ROLE_MEMBERS ROLES
JOIN SYS.SERVER_PRINCIPALS P1 ON ROLES.MEMBER_PRINCIPAL_ID = P1.PRINCIPAL_ID
JOIN SYS.SERVER_PRINCIPALS P2 ON ROLES.ROLE_PRINCIPAL_ID = P2.PRINCIPAL_ID

Database Roles (determine which users are assigned to database roles)
SELECT P1.NAME AS UserName,
       P1.TYPE_DESC AS UserType,
       P2.NAME AS RoleName,
       P2.TYPE_DESC AS RoleType,
       P2.IS_FIXED_ROLE AS IsFixedRole
FROM SYS.DATABASE_ROLE_MEMBERS ROLES
JOIN SYS.DATABASE_PRINCIPALS P1 ON ROLES.MEMBER_PRINCIPAL_ID = P1.PRINCIPAL_ID
JOIN SYS.DATABASE_PRINCIPALS P2 ON ROLES.ROLE_PRINCIPAL_ID = P2.PRINCIPAL_ID

What can these users do? (which users have specific permissions inside the current database)
SELECT DP.CLASS_DESC AS ClassDesc,
       DP.PERMISSION_NAME AS PermissionName,
       DP.STATE_DESC AS StateDesc,
       OBJECT_NAME(MAJOR_ID) AS ObjectName,
       GRANTEE.NAME AS GranteeName,
       GRANTOR.NAME AS GrantorName
FROM SYS.DATABASE_PERMISSIONS DP
JOIN SYS.DATABASE_PRINCIPALS GRANTEE ON DP.GRANTEE_PRINCIPAL_ID = GRANTEE.PRINCIPAL_ID
JOIN SYS.DATABASE_PRINCIPALS GRANTOR ON DP.GRANTOR_PRINCIPAL_ID = GRANTOR.PRINCIPAL_ID;

-- More detailed query:
WITH P AS
  ( SELECT USER_NAME(P.GRANTEE_PRINCIPAL_ID) AS PRINCIPAL_NAME,
           DP.PRINCIPAL_ID,
           DP.TYPE_DESC AS PRINCIPAL_TYPE_DESC,
           P.CLASS_DESC,
           OBJECT_NAME(P.MAJOR_ID) AS OBJECT_NAME,
           P.PERMISSION_NAME,
           P.STATE_DESC AS PERMISSION_STATE_DESC
   FROM SYS.DATABASE_PERMISSIONS P
   INNER JOIN SYS.DATABASE_PRINCIPALS DP ON P.GRANTEE_PRINCIPAL_ID = DP.PRINCIPAL_ID)
SELECT P.PRINCIPAL_NAME,
       P.PRINCIPAL_TYPE_DESC,
       P.CLASS_DESC,
       P.[OBJECT_NAME],
       P.PERMISSION_NAME,
       P.PERMISSION_STATE_DESC,
       CAST(NULL AS SYSNAME) AS ROLE_NAME
FROM P
WHERE PRINCIPAL_TYPE_DESC <> 'DATABASE_ROLE'
UNION
SELECT R.MEMBER_PRINCIPAL_NAME,
       R.PRINCIPAL_TYPE_DESC,
       P.CLASS_DESC,
       P.OBJECT_NAME,
       P.PERMISSION_NAME,
       P.PERMISSION_STATE_DESC,
       R.ROLE_NAME
FROM P
RIGHT OUTER JOIN
  ( SELECT ROLE_PRINCIPAL_ID,
           DP.TYPE_DESC AS PRINCIPAL_TYPE_DESC,
           MEMBER_PRINCIPAL_ID,
           USER_NAME(MEMBER_PRINCIPAL_ID) AS MEMBER_PRINCIPAL_NAME,
           USER_NAME(ROLE_PRINCIPAL_ID) AS ROLE_NAME
   FROM SYS.DATABASE_ROLE_MEMBERS R
   INNER JOIN SYS.DATABASE_PRINCIPALS DP ON R.MEMBER_PRINCIPAL_ID = DP.PRINCIPAL_ID) R ON R.ROLE_PRINCIPAL_ID = P.PRINCIPAL_ID
ORDER BY 1


Sunday, September 2, 2012

The most useful shortcut key in SQL Server Management Studio

PROBLEM:
What is the most useful shortcut key in SQL Server Management Studio?

SOLUTION:
In a new query window, highlight your table name and press ALT+F1
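
ALT+F1 simply runs sp_help against the highlighted object, so the keyboard shortcut is equivalent to the following (the table name is a placeholder):

```sql
-- Equivalent of highlighting a table name and pressing ALT+F1:
-- shows columns, indexes, constraints and more for the object
EXEC sp_help 'dbo.MyTable'
```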


Database Engine Tuning Advisor Error

PROBLEM:
When trying to open SQL Server Database Engine Tuning Advisor you get an error: "Failed to connect to an IPC Port: The system cannot find the file specified.(mscorlib)"

SOLUTION:
Kill DTASHELL.exe in the task manager


Thursday, August 30, 2012

Permission to see one table, view or other object

PROBLEM:
You want to allow a user to read only one database table, view or other object

SOLUTION:

Use the following script:
-- replace 'MY_DOMAIN\MyLogin' with your login name
-- you can skip this create login if you have already a domain user
-- alternatively, you can create login using SQL Server authentication
USE [master]
GO
CREATE LOGIN [MY_DOMAIN\MyLogin] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GO

-- replace 'MyUser' with your user name
-- replace 'MyDatabase' with your database name that contains the table
-- replace 'MyTable' with your table name
USE [MyDatabase]

--create a role to wrap up the permission
IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'MyObjectAccess' AND type = 'R')
BEGIN
       EXEC SP_DROPROLEMEMBER 'MyObjectAccess', 'MyUser'
       DROP ROLE [MyObjectAccess]
END
GO

CREATE ROLE MyObjectAccess
GRANT SELECT ON [dbo].[MyTable] TO MyObjectAccess; -- two-part name; we are already in [MyDatabase]
GO

IF EXISTS (SELECT * FROM sys.database_principals WHERE name = N'MyUser')
DROP USER [MyUser]
GO
CREATE USER [MyUser] FOR LOGIN [MY_DOMAIN\MyLogin]
GO

EXEC sp_addrolemember N'MyObjectAccess', N'MyUser'
GO

-- Now, login to SQL Server as MyLogin
-- You should see all databases that exist in the instance, but you will not see their objects.
-- You can read MyDatabase.MyTableName


Wednesday, August 29, 2012

Transaction log cannot be truncated

PROBLEM:
You cannot truncate SQL Server transaction log

SOLUTION:

If you are wondering why a transaction log cannot be truncated, executing the following query can help
select name, log_reuse_wait_desc from sys.databases

The following briefly describes the values of the log_reuse_wait_desc column:
  • NOTHING - Currently there are one or more reusable log files. This is the normal state that means you can do a log truncation
    • Wait 15 min, try again. If you still cannot truncate, restart the server
  • CHECKPOINT - No checkpoint has occurred since the last log truncation, or the head of the log has not yet moved beyond a virtual log file (all recovery models). This is a routine reason for delaying log truncation
    • Do a manual checkpoint
  • LOG_BACKUP - A log backup is required to move the head of the log forward (full or bulk-logged recovery models only)
    • You need to make two backups to actually free the space
  • ACTIVE_BACKUP_OR_RESTORE - A data backup or a restore is in progress (all recovery models). A data backup works like an active transaction, and, when running, the backup prevents truncation
    • Wait until it ends
  • ACTIVE_TRANSACTION - A transaction is active (all recovery models). A long-running transaction might exist at the start of the log backup
    • Do a log backup
  • DATABASE_MIRRORING - Database mirroring is paused, or under high-performance mode, the mirror database is significantly behind the principal database (full recovery model only)
    • Configure the mirroring
  • REPLICATION - During transactional replications, transactions relevant to the publications are still undelivered to the distribution database (full recovery model only)
    • Wait until it ends
  • DATABASE_SNAPSHOT_CREATION - A database snapshot is being created (all recovery models). This is routine and typically brief
    • Wait until it ends
  • LOG_SCAN - A log scan is occurring (all recovery models). This is routine and typically brief
    • Wait until it ends
  • AVAILABILITY_REPLICA - To identify which secondary database is delaying log truncation, see the truncation_lsn column of the sys.dm_hadr_database_replica_states dynamic management view
  • OTHER_TRANSIENT - This value is currently not used
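
For the two most common values above, the corresponding fix can be sketched as follows (database name and backup path are placeholders):

```sql
-- CHECKPOINT: issue a manual checkpoint in the affected database
USE [db_name]
CHECKPOINT;

-- LOG_BACKUP: take a log backup to move the head of the log forward
-- (you may need two log backups before the space is actually reusable)
BACKUP LOG [db_name] TO DISK = N'C:\Backup\db_name_log.trn';
```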

After resolving the reason shown, if you are using the full recovery model, perform a log backup to truncate the log, and then use DBCC SHRINKFILE to reduce the size of the log file.
USE [db_name]
GO
DBCC SHRINKFILE(N'db_name_log', 1)
BACKUP LOG [db_name] TO DISK = N'C:\Backup\db_name_log.trn' -- the path is a placeholder
DBCC SHRINKFILE(N'db_name_log', 1)
GO

If the log file does not reduce in size when using DBCC SHRINKFILE as part of the steps above, the active part of the log file must have been at the end of the log file at that point in time. You can correct that situation by generating enough dummy transactions to cause SQL Server to switch back to the beginning of the log file. You should then attempt the truncation again.
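
Generating those dummy transactions can be sketched like this (the table name and loop count are illustrative, not a tuned recipe):

```sql
-- Hypothetical sketch: generate enough log records that the active
-- part of the log wraps back to the start of the file, then retry the shrink.
CREATE TABLE dbo.LogFiller (Filler CHAR(8000));

DECLARE @i INT = 0;
WHILE @i < 10000
BEGIN
    INSERT INTO dbo.LogFiller VALUES (REPLICATE('x', 8000));
    DELETE FROM dbo.LogFiller;  -- each insert/delete pair writes log records
    SET @i += 1;
END

DROP TABLE dbo.LogFiller;
DBCC SHRINKFILE(N'db_name_log', 1);  -- retry the shrink
```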


Tuesday, August 28, 2012

Find why your SQL Server query is stuck


PROBLEM:
What should I do when SQL Server query is stuck?

SOLUTION:
First, open SQL Server Activity Monitor; allow some time for the data to load



Activity Monitor, available in SQL Server Management Studio, is a great tool for getting a quick overview of system performance. In previous versions of SQL Server it displayed information about processes, locks by object, and locks by process. SQL Server 2008 added many enhancements: a graphical display of processor time, waiting tasks, database I/O, batch requests, processes, resource waits, data file I/O, and the most expensive queries.
To view Activity Monitor in SQL Server 2005 and SQL Server 2008, a user must have the VIEW SERVER STATE permission
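
Granting that permission is a one-liner (the login name is a placeholder):

```sql
-- Grant the permission required to open Activity Monitor
-- ('MY_DOMAIN\MyLogin' stands in for the real login)
USE [master]
GRANT VIEW SERVER STATE TO [MY_DOMAIN\MyLogin];
```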

One of the first lines of defense in determining why a query is stuck is sp_who2. It shows all sessions currently established in the database, denoted as SPIDs (server process IDs).
sp_who2
The first 50 SPIDs are reserved for system processes. Generally these do not cause slowdowns; they include the checkpoint writer, log writers, and task schedulers. User processes have SPID numbers above 50, and in diagnosing slowdowns it is these SPIDs that are the potential resource hogs.

There are four main things to look for when diagnosing slowdowns:
  • Blocking
  • High CPU usage
  • High IO usage
  • Multiple entries for the same SPID (representing parallelism)
When a number is shown in the BlkBy column, it is the SPID currently blocking the SPID in that row. Sometimes many rows show SPID numbers in the BlkBy column because there is a chain of blockers: one "lead" blocker blocks another process, which in turn blocks others, and so on down the chain. It can be a messy situation. To rectify it, you may have to kill the lead blocker; if this happens often, research why that particular process is blocking. Before you kill any process, find out what statement it is running by executing DBCC INPUTBUFFER
DBCC INPUTBUFFER(500) -- 500 represents a SPID from the sp_who2 results
Once you find the blocker, you may need to kill it. To do so use the kill command (replace the number below with the blocking SPID)
IMPORTANT: before the kill, keep a copy of the SQL statement being run for later investigation
KILL 500
sp_who2 provides only limited information about slowdowns.
For more detail, use the following script. For each session with a currently executing request it shows the SQL statement being run, reads and writes for the current command, total reads and writes for the whole SPID, and the protocol in use (TCP, Named Pipes, or Shared Memory)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT SPID = er.session_id ,
       BlkBy = er.blocking_session_id ,
       ElapsedMS = er.total_elapsed_time ,
       CPU = er.cpu_time ,
       IOReads = er.logical_reads + er.reads ,
       IOWrites = er.writes ,
       Executions = ec.execution_count ,
       CommandType = er.command ,
       ObjectName = OBJECT_SCHEMA_NAME(qt.objectid,dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid) ,
       SQLStatement = SUBSTRING ( qt.text, er.statement_start_offset/2, (CASE WHEN er.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(MAX), qt.text)) * 2 ELSE er.statement_end_offset END - er.statement_start_offset)/2 ) ,
       Status = ses.status ,
       [Login] = ses.login_name ,
       HOST = ses.host_name ,
       DBName = DB_Name(er.database_id) ,
       LastWaitType = er.last_wait_type ,
       StartTime = er.start_time ,
       Protocol = con.net_transport ,
       transaction_isolation = CASE ses.transaction_isolation_level
            WHEN 0 THEN 'Unspecified'
            WHEN 1 THEN 'Read Uncommitted'
            WHEN 2 THEN 'Read Committed'
            WHEN 3 THEN 'Repeatable'
            WHEN 4 THEN 'Serializable'
            WHEN 5 THEN 'Snapshot'
        END ,
        ConnectionWrites = con.num_writes ,
        ConnectionReads = con.num_reads ,
        ClientAddress = con.client_net_address ,
        Authentication = con.auth_scheme
FROM sys.dm_exec_requests er
LEFT JOIN sys.dm_exec_sessions ses ON ses.session_id = er.session_id
LEFT JOIN sys.dm_exec_connections con ON con.session_id = ses.session_id CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt OUTER APPLY
( SELECT execution_count = MAX(cp.usecounts)
 FROM sys.dm_exec_cached_plans cp
 WHERE cp.plan_handle = er.plan_handle) ec
ORDER BY er.blocking_session_id DESC,
         er.logical_reads + er.reads DESC,
         er.session_id

The following script shows the blocking processes (lead blockers)
SELECT spid ,
       sp.status ,
       loginame = SUBSTRING(loginame, 1, 12) ,
       hostname = SUBSTRING(hostname, 1, 12) ,
       blk = CONVERT(char(3), blocked) ,
       open_tran ,
       dbname = SUBSTRING(DB_NAME(sp.dbid),1,10) ,
       cmd ,
       waittype ,
       waittime ,
       last_batch ,
       SQLStatement = SUBSTRING ( qt.text, er.statement_start_offset/2 + 1, (CASE WHEN er.statement_end_offset = -1 THEN DATALENGTH(qt.text) ELSE er.statement_end_offset END - er.statement_start_offset)/2 + 1 )
FROM master.dbo.sysprocesses sp
LEFT JOIN sys.dm_exec_requests er ON er.session_id = sp.spid
OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
WHERE spid IN
    (SELECT blocked
     FROM master.dbo.sysprocesses)
  AND blocked = 0

The following script finds the most expensive queries (uncomment the ORDER BY you need)
SELECT TOP 10 SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1, ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1),
              qs.execution_count,
              qs.total_logical_reads,
              qs.last_logical_reads,
              qs.total_logical_writes,
              qs.last_logical_writes,
              qs.total_worker_time,
              qs.last_worker_time,
              qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
              qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
              qs.last_execution_time,
              qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_logical_reads DESC -- logical reads
-- ORDER BY qs.total_logical_writes DESC -- logical writes
-- ORDER BY qs.total_worker_time DESC -- CPU time

The following script finds the longest-running queries
SELECT DISTINCT TOP 10 t.TEXT QueryName,
   s.execution_count AS ExecutionCount,
   s.max_elapsed_time AS MaxElapsedTime,
   ISNULL(s.total_elapsed_time / s.execution_count, 0) AS AvgElapsedTime,
   s.creation_time AS LogCreatedOn,
   ISNULL(s.execution_count / NULLIF(DATEDIFF(s, s.creation_time, GETDATE()), 0), 0) AS FrequencyPerSec -- NULLIF guards against divide-by-zero for plans cached less than a second ago
FROM sys.dm_exec_query_stats s CROSS APPLY sys.dm_exec_sql_text(s.sql_handle) t
ORDER BY s.max_elapsed_time DESC

LET'S CHECK: perform the following steps, running the scripts above where indicated
-- open new query window (SESSION A) and run the following 3 lines
USE [AdventureWorks]
BEGIN TRANSACTION
UPDATE Sales.SalesOrderDetail
SET [UnitPrice] = 1 -- this update takes ~30sec

-- while the above update is in progress, open a new query window (SESSION B) and run sp_who2
-- open another query window (SESSION C) and run the following lines; they will be blocked by SESSION A because its transaction is not committed yet
UPDATE Sales.SalesOrderDetail
SET [UnitPrice] = 2

-- while the above update is in progress, go back to the SESSION B query window and run sp_who2 again; you will see the blocking
-- return to the SESSION A query window and roll back the transaction. This frees the block on SESSION C (after the update finishes, ~30 sec)
ROLLBACK

If you find this solution useful, you are welcome to press one of the ads on this page. Thanks!

Shrink SQL Server log file

PROBLEM:
My SQL Server database log files are too big

SOLUTION:
If this is a development or testing environment and you don't need log backups, you can change the recovery model to SIMPLE and then use the DBCC SHRINKFILE command.

USE [db_name]
GO
ALTER DATABASE [db_name] SET RECOVERY SIMPLE WITH NO_WAIT
DBCC SHRINKFILE(N'db_name_log', 1)

The following code shrinks the log for databases in FULL recovery mode.
In a production environment this breaks the log backup chain, and you will no longer be able to restore to a point in time, so you must take a log backup first.

USE [db_name]
GO
DBCC SHRINKFILE(N'db_name_log', 1)
BACKUP LOG [db_name] WITH TRUNCATE_ONLY -- TRUNCATE_ONLY was removed in SQL Server 2008; on 2008+ switch the database to SIMPLE recovery instead
DBCC SHRINKFILE(N'db_name_log', 1)
GO

Problems when trying to shrink log

1. Long-running transaction - can prevent transaction log truncation. Such transactions range from transactions blocked from completing to open transactions waiting for user input. The longer the transaction remains open, the larger the transaction log can grow. To see the oldest active transaction in the current database, run the following statement

DBCC OPENTRAN

If there are open transactions, DBCC OPENTRAN provides the session_id (SPID) of the connection that has the transaction open. You can pass this SPID to sp_who2 to determine which user has the connection open

sp_who2 SPID

You can determine the SQL statement being executed inside the transaction using DBCC INPUTBUFFER, which returns the first part of the statement

DBCC INPUTBUFFER(SPID) --from DBCC OPENTRAN

2. Backup or restore operation - In SQL Server 2005 and later, you can create a transaction log backup while a full or differential backup is running, but the log backup will not truncate the log, because the entire transaction log must remain available to the backup operation. If a database backup is keeping your log from being truncated, consider cancelling the backup to relieve the immediate problem.
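For any of the causes in this list, a quick first check is the log_reuse_wait_desc column in sys.databases; values such as ACTIVE_BACKUP_OR_RESTORE, ACTIVE_TRANSACTION, REPLICATION, and DATABASE_MIRRORING point directly at the culprit:

```sql
-- Why is each database's log not being truncated? (SQL Server 2005+)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
ORDER BY name
```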

3. Replication - The inactive portion of the transaction log is not truncated until transactions have been replicated to the distributor. This may be because the distributor is overloaded and having problems accepting these transactions, or because the Log Reader agent should be run more often. If DBCC OPENTRAN indicates that your oldest active transaction is a replicated one and it has been open for a significant amount of time, this may be your problem.

4. Mirroring - Similar to transactional replication in that it requires that the transactions remain in the log until the record has been written to disk on the mirror server. If the mirror server instance falls behind the principal server instance, the amount of active log space will grow. In this case, you may need to stop database mirroring, take a log backup that truncates the log, apply that log backup to the mirror database and restart mirroring.
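The mirroring workaround above can be sketched as follows. This is only a sketch; YourDatabaseName, the backup path, and the partner address are placeholders you must replace:

```sql
-- 1. Stop mirroring (run on the principal)
ALTER DATABASE YourDatabaseName SET PARTNER OFF
-- 2. Take a log backup, which allows the log to truncate (placeholder path)
BACKUP LOG YourDatabaseName TO DISK = N'X:\Backup\YourDatabaseName.trn'
-- 3. Restore that log backup on the mirror server WITH NORECOVERY, then
--    re-establish mirroring on the mirror and principal, e.g.
--    ALTER DATABASE YourDatabaseName SET PARTNER = 'TCP://principal_server:5022'
```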

5. Disk Space - It is possible that you are simply running out of disk space, which causes the transaction log to error. You might be able to free disk space on the drive that contains the transaction log file by deleting or moving other files; the freed space will allow the log file to grow. If you cannot free enough space on the current drive, you may need to move the file to a drive with enough space to handle the log. If your log file is not set to grow automatically, consider changing that or adding additional space to the file. Another option is to create a new log file for the database on a different disk that has enough space, using the ALTER DATABASE YourDatabaseName ADD LOG FILE syntax.
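That last option can be sketched like this; the database name, logical file name, path, and sizes are all placeholders to adapt:

```sql
-- Add a second log file on a drive that has free space (placeholder names/path)
ALTER DATABASE YourDatabaseName
ADD LOG FILE ( NAME = N'YourDatabaseName_log2',
               FILENAME = N'E:\SQLLogs\YourDatabaseName_log2.ldf',
               SIZE = 1GB, FILEGROWTH = 512MB )
```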


Monday, August 27, 2012

Generate scripts that create and drop foreign keys referencing a specific table

PROBLEM:
You want to generate scripts that create and drop the foreign keys that reference a specific table

SOLUTION:

To generate a script that creates the foreign keys referencing a specific table (change 'MY_REFERENCED_TABLE' to your table name)
WITH TABLESWITHFOREIGNKEY AS
  ( SELECT DISTINCT T.NAME AS TABLEWITHFOREIGNKEYNAME,
                    O.NAME AS FOREIGNKEYNAME,
--sub query with a comma separated resultset that returns the foreign key columns
   SUBSTRING (
     ( SELECT ',' + A.NAME
       FROM SYS.FOREIGN_KEY_COLUMNS
       INNER JOIN SYS.COLUMNS AS A ON PARENT_OBJECT_ID = A.OBJECT_ID
       AND PARENT_COLUMN_ID = A.COLUMN_ID
       WHERE PARENT_OBJECT_ID = T.OBJECT_ID
       AND CONSTRAINT_OBJECT_ID = O.OBJECT_ID
       ORDER BY CONSTRAINT_COLUMN_ID
       FOR XML PATH('') ), 2, 1000000) AS FOREIGNKEYCOLUMNS,
--sub query with a comma separated resultset that returns the referenced columns
   SUBSTRING (
      ( SELECT ',' + A.NAME
        FROM SYS.FOREIGN_KEY_COLUMNS
        INNER JOIN SYS.COLUMNS AS A ON REFERENCED_OBJECT_ID = A.OBJECT_ID
        AND REFERENCED_COLUMN_ID = A.COLUMN_ID
        WHERE PARENT_OBJECT_ID = T.OBJECT_ID
        AND CONSTRAINT_OBJECT_ID = O.OBJECT_ID
        ORDER BY CONSTRAINT_COLUMN_ID
        FOR XML PATH('') ), 2, 1000000) AS REFERENCESCOLUMNS
   FROM SYS.FOREIGN_KEY_COLUMNS AS FK
   INNER JOIN SYS.TABLES AS T ON FK.PARENT_OBJECT_ID = T.OBJECT_ID
   INNER JOIN SYS.OBJECTS AS O ON FK.CONSTRAINT_OBJECT_ID = O.OBJECT_ID
   INNER JOIN SYS.COLUMNS AS C ON FK.PARENT_OBJECT_ID = C.OBJECT_ID
   AND FK.PARENT_COLUMN_ID = C.COLUMN_ID
   WHERE FK.REFERENCED_OBJECT_ID =
       (SELECT OBJECT_ID
        FROM SYS.TABLES
        WHERE NAME = 'MY_REFERENCED_TABLE'))
SELECT ' IF NOT EXISTS
  (SELECT *
   FROM sys.foreign_keys
   WHERE object_id = OBJECT_ID(N''[' + FOREIGNKEYNAME + ']'') AND parent_object_id = OBJECT_ID(N''[' + TABLEWITHFOREIGNKEYNAME + ']''))
ALTER TABLE [' + TABLEWITHFOREIGNKEYNAME + '] WITH CHECK ADD CONSTRAINT [' + FOREIGNKEYNAME + '] FOREIGN KEY([' + FOREIGNKEYCOLUMNS + ']) REFERENCES [MY_REFERENCED_TABLE] ([' + REFERENCESCOLUMNS + ']) '
FROM TABLESWITHFOREIGNKEY
ORDER BY TABLEWITHFOREIGNKEYNAME

To generate a script that drops the foreign keys referencing a specific table (change 'MY_REFERENCED_TABLE' to your table name)
WITH TABLESWITHFOREIGNKEY AS
  ( SELECT T.NAME AS TABLEWITHFOREIGNKEYNAME,
           O.NAME AS FOREIGNKEYNAME
   FROM SYS.FOREIGN_KEY_COLUMNS AS FK
   INNER JOIN SYS.TABLES AS T ON FK.PARENT_OBJECT_ID = T.OBJECT_ID
   INNER JOIN SYS.OBJECTS AS O ON FK.CONSTRAINT_OBJECT_ID = O.OBJECT_ID
   WHERE FK.REFERENCED_OBJECT_ID =
       (SELECT OBJECT_ID
        FROM SYS.TABLES
        WHERE NAME = 'MY_REFERENCED_TABLE'))
SELECT ' IF EXISTS
  (SELECT *
   FROM sys.foreign_keys
   WHERE object_id = OBJECT_ID(N''[' + FOREIGNKEYNAME + ']'') AND parent_object_id = OBJECT_ID(N''[' + TABLEWITHFOREIGNKEYNAME + ']''))
ALTER TABLE [' + TABLEWITHFOREIGNKEYNAME + '] DROP CONSTRAINT [' + FOREIGNKEYNAME + '] '
FROM TABLESWITHFOREIGNKEY
ORDER BY TABLEWITHFOREIGNKEYNAME

Check the scripts: create two tables with a foreign key and run the scripts above. In this case, change MY_REFERENCED_TABLE to AAA
CREATE TABLE AAA ( ID_AAA1 INT, ID_AAA2 INT, NAME NVARCHAR(50), CONSTRAINT PK_1 PRIMARY KEY CLUSTERED (ID_AAA1,ID_AAA2))

CREATE TABLE BBB ( ID_BBB1 INT, ID_BBB2 INT, NAME NVARCHAR(50), CONSTRAINT PK_2 PRIMARY KEY CLUSTERED (ID_BBB1,ID_BBB2))

ALTER TABLE [dbo].[BBB]  WITH CHECK ADD  CONSTRAINT [FK_1] FOREIGN KEY([ID_BBB1],[ID_BBB2]) REFERENCES [dbo].[AAA] ([ID_AAA1],[ID_AAA2])


Thursday, August 23, 2012

ALTER TABLE CHECK CONSTRAINT after ALTER TABLE WITH CHECK ADD CONSTRAINT

PROBLEM:
Do you need to add an ALTER TABLE CHECK CONSTRAINT statement after an ALTER TABLE WITH CHECK ADD CONSTRAINT statement?

SOLUTION:
Let's check:
CREATE TABLE AAA
(
       ID_AAA1 INT,
       ID_AAA2 INT,
       NAME NVARCHAR(50),
       CONSTRAINT PK_1 PRIMARY KEY CLUSTERED (ID_AAA1,ID_AAA2)
)

CREATE TABLE BBB
(
       ID_BBB1 INT,
       ID_BBB2 INT,
       NAME NVARCHAR(50),
       CONSTRAINT PK_2 PRIMARY KEY CLUSTERED (ID_BBB1,ID_BBB2)
)

ALTER TABLE [dbo].[BBB]  WITH CHECK ADD  CONSTRAINT [FK_1] FOREIGN KEY([ID_BBB1],[ID_BBB2])
REFERENCES [dbo].[AAA] ([ID_AAA1],[ID_AAA2])
GO

INSERT AAA(ID_AAA1, ID_AAA2) VALUES (1,1);
INSERT AAA(ID_AAA1, ID_AAA2) VALUES (2,2);

-- The following INSERT statement is OK
INSERT BBB(ID_BBB1, ID_BBB2) VALUES (1,1);
-- The following INSERT statement conflicted with the FOREIGN KEY constraint "FK_1"
INSERT BBB(ID_BBB1, ID_BBB2) VALUES (3,3);

ALTER TABLE {CHECK | NOCHECK} CONSTRAINT 
Specifies that constraint_name is enabled or disabled. When NOCHECK is specified, the constraint is disabled and future inserts or updates to the column are not validated against the constraint conditions.

ALTER TABLE {WITH CHECK | WITH NOCHECK} ADD CONSTRAINT
If you want to verify new CHECK or FOREIGN KEY constraints against existing data, use WITH CHECK.
Using WITH NOCHECK is not recommended, except in rare cases. The new constraint is evaluated in all later data updates, and any constraint violations suppressed by WITH NOCHECK when the constraint was added may cause future updates to fail if they touch rows that do not comply with the constraint.
By default, a constraint is enabled when you add it, so you do not need an ALTER TABLE CHECK CONSTRAINT statement after it.
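You can verify this against the FK_1 constraint created above; is_disabled should be 0, and is_not_trusted should also be 0 because WITH CHECK validated the existing rows:

```sql
SELECT name, is_disabled, is_not_trusted
FROM sys.foreign_keys
WHERE name = 'FK_1'
```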



Wednesday, August 22, 2012

GO statement inside a transaction


PROBLEM:
Can I use a GO statement inside a transaction?

SOLUTION:
GO is not a Transact-SQL statement; it is a command recognized by client tools (Query Analyzer, SSMS, sqlcmd, etc.) that terminates the current batch and sends it to the server for execution. GO does not terminate the current session or process, and transactions are session-scoped entities. So, a transaction is neither committed nor rolled back when a GO is encountered
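A quick way to convince yourself: @@TRANCOUNT stays at 1 across the batch boundary.

```sql
BEGIN TRANSACTION
GO
SELECT @@TRANCOUNT AS OpenTransactions  -- still 1: the transaction survived the GO
GO
ROLLBACK
```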


Wednesday, August 15, 2012

Grant a SQL Server user the permissions needed to back up and restore his database

PROBLEM:
You want to grant a SQL Server user the permissions needed to back up and restore his own database, but not other databases

SOLUTION:
This script gives 'UserName' the ability to back up and restore only 'TheDatabase' (or other databases he creates).
When restoring, the user must supply the full path to the BAK file. SSMS will not open the default backup location and will show the following error:

C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup
Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists..
USE [TheDatabase]
GO
CREATE USER [UserName] FOR LOGIN [UserName]
GO
EXEC sp_addrolemember N'db_backupoperator', N'UserName'
GO
USE [master]
EXEC master..sp_addsrvrolemember @loginame = N'UserName', @rolename = N'dbcreator'
GO


Wednesday, July 11, 2012

Rebuild a full-text catalog

PROBLEM:
How do I rebuild a full-text catalog using SQL Server Management Studio?

SOLUTION:
To rebuild a full-text catalog:
  1. In Object Explorer, expand the server, expand Databases, and then expand the database that contains the full-text catalog that you want to rebuild.
  2. Expand Storage, and then expand Full Text Catalogs.
  3. Right-click the name of the full-text catalog that you want to rebuild, and select Rebuild.
  4. To the question Do you want to delete the full-text catalog and rebuild it?, click OK.
  5. In the Rebuild Full-Text Catalog dialog box, click Close.
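The same rebuild can be done in T-SQL; the database and catalog names below are placeholders for your own:

```sql
USE [YourDatabase]
GO
ALTER FULLTEXT CATALOG YourCatalogName REBUILD
GO
```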