Archive for the category ‘sql’

… when You can’t use InMemory

Much has been written about temporary tables versus table variables, the pros and cons of one over the other, and some myths have been debunked ( …both live in tempdb ); to make it short, a couple of points remain in favor of temp tables:

  1. They have statistics, while for table variables (InMemory ones included) SQL Server always estimates 1 row.
  2. They are visible from stored procedures called by the one that created the temp table.

Point 1 becomes negligible for objects with few rows, while with a large number of rows we can use:

  • OPTION (RECOMPILE), but particular attention may be needed when the statement is called frequently (even every second), because you may spend more CPU continuously recompiling the statements than running them.
  • TRACE FLAG 2453, but the setting is server wide, so the risk is falling back into the situation above for every stored procedure that uses table variables.
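As a sketch of the first workaround, adding OPTION (RECOMPILE) to the statement that reads the table variable lets the optimizer see the actual cardinality at compile time (table and column names here are made up for illustration):

```sql
DECLARE @Rows TABLE (Id int, Amount decimal(9,2));

INSERT INTO @Rows VALUES (1, 10.00), (2, 20.00), (3, 30.00);

-- Without the hint SQL Server estimates 1 row for @Rows;
-- with it, the plan is built on the real row count (3),
-- at the cost of a compilation on every execution.
SELECT r.Id, r.Amount
FROM @Rows r
WHERE r.Amount > 15.00
OPTION (RECOMPILE);
```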

Given this, let’s take a more “practical” approach to the comparison, focusing on two fundamental key points for tempdb performance when dealing with “high workload scenarios”:

  • PAGELATCH contention, EX and SH, on PFS pages ( page 1 and then every 8088 pages, about 64 MB ) and on GAM and SGAM pages ( pages 2 and 3, and then every 511230 pages, about 4 GB ).
  • Tempdb TLog traffic, created by the log records, which translates into MB/s.
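To get a feel for where those bitmap pages fall in a data file, the intervals above can be turned into a simple classifier; this is only an illustration of the quoted intervals (page 1 plus every 8088 pages for PFS, pages 2 and 3 plus every 511230 pages for GAM/SGAM), not an exact map of SQL Server internals:

```sql
-- Classify a few sample page ids according to the intervals quoted above.
SELECT v.page_id,
       CASE WHEN v.page_id = 1 OR v.page_id % 8088 = 0   THEN 'PFS'
            WHEN v.page_id = 2 OR v.page_id % 511230 = 0 THEN 'GAM'
            WHEN v.page_id = 3 OR v.page_id % 511230 = 1 THEN 'SGAM'
            ELSE 'data' END AS bitmap_page
FROM (VALUES (1),(2),(3),(10),(8088),(511230),(511231)) AS v(page_id);
```

Since every allocation has to touch these few fixed pages, many concurrent sessions end up latching the same pages, which is why multiple tempdb data files (each with its own PFS/GAM/SGAM) mitigate the contention.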

To show the differences I’ll use two stored procedures, one creating a temporary table and the other a table variable, each inserting 5 rows; their structure and average row count are very similar to those of our environments. An Extended Events session and a query over the tempdb TLog will show the differences.


CREATE PROCEDURE [dbo].[proc_TestTemp]
AS
BEGIN
SET NOCOUNT ON
CREATE TABLE #Table1(
[Fld1] [bigint] NULL,
[Fld2] [int] NULL,
[Fld3] [int] NULL,
[Fld4] [tinyint] NULL,
[Fld5] [decimal](9, 2) NULL,
[Fld6] [decimal](9, 2) NULL,
[Fld7] [tinyint] NULL,
[Fld8] [varchar](15),
INDEX [IX_Fld1] CLUSTERED ([Fld1] ASC))

INSERT INTO #Table1 VALUES (1,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO #Table1 VALUES (10,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO #Table1 VALUES (100,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO #Table1 VALUES (1000,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO #Table1 VALUES (1000,100000,200000,35,9.20,9.20, 3, 'Test')
END
GO

CREATE PROCEDURE [dbo].[proc_TestVar]
AS
BEGIN
SET NOCOUNT ON
DECLARE @Table1 TABLE (
[Fld1] [bigint] NULL,
[Fld2] [int] NULL,
[Fld3] [int] NULL,
[Fld4] [tinyint] NULL,
[Fld5] [decimal](9, 2) NULL,
[Fld6] [decimal](9, 2) NULL,
[Fld7] [tinyint] NULL,
[Fld8] [varchar](15),
INDEX [IX_Fld1] CLUSTERED ([Fld1] ASC))

INSERT INTO @Table1 VALUES (1,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO @Table1 VALUES (10,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO @Table1 VALUES (100,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO @Table1 VALUES (1000,100000,200000,35,9.20,9.20, 3, 'Test')
INSERT INTO @Table1 VALUES (1000,100000,200000,35,9.20,9.20, 3, 'Test')
END
GO

The Extended Events session uses the sqlserver.latch_acquired event, filtered on dbid 2 and on the session from which I’m running the stored procedures.


CREATE EVENT SESSION [Latch]
ON SERVER
ADD EVENT sqlserver.latch_acquired(
ACTION(sqlserver.session_id,sqlserver.sql_text)
WHERE ([package0].[equal_uint64]([database_id],(2))
AND [sqlserver].[session_id]=(57)))
ADD TARGET package0.event_file(SET filename=N'Latch'),
ADD TARGET package0.ring_buffer(SET max_memory=(40960))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=1 SECONDS, MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
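To count the captured latches without opening the .xel file, the ring_buffer target can be read through sys.dm_xe_session_targets while the session is running; a sketch (the XPath may need adjusting to your event payload):

```sql
SELECT CAST(t.target_data AS xml).value('count(/RingBufferTarget/event)', 'int') AS acquired_latches
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address
WHERE s.name = 'Latch'
  AND t.target_name = 'ring_buffer';
```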

The TLog query is something like this:


SELECT
    fd.[Current LSN],
    fd.Operation,
    fd.AllocUnitName,
    fd.[Transaction Name],
    fd.[Transaction ID]
FROM sys.fn_dblog(NULL, NULL) AS fd

Putting it all together and issuing a checkpoint before starting, let’s check the results with the temp table…


USE tempdb
GO
CHECKPOINT
GO
ALTER EVENT SESSION Latch ON SERVER STATE = start
GO
exec [TestDb].[dbo].[proc_TestTemp]
GO
ALTER EVENT SESSION Latch ON SERVER STATE = stop
GO
SELECT
    fd.[Current LSN],
    fd.Operation,
    fd.AllocUnitName,
    fd.[Transaction Name],
    fd.[Transaction ID]
FROM sys.fn_dblog(NULL, NULL) AS fd

The TLog query shows 38 log records (138 at the first execution).


The Extended Events session shows a total of 51 acquired latches.

Let’s delete the session files just created and repeat the same test with the table variable.

This time the TLog query reports 23 records (240 at the first execution, meaning that caching a table variable creates more records).


The session now shows a total of 11 acquired latches.


So table variables require far fewer latches and TLog records than temp tables, a condition that, with thousands of calls per minute, can make a great difference. If we change the TLog query to extract SUM([Log Record Length]), we find that the sp using the temp table writes 5552 bytes, while the other writes 2632 bytes.
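The byte totals come from a small variant of the same fn_dblog query; note that the column name is [Log Record Length]:

```sql
SELECT COUNT(*)                    AS log_records,
       SUM(fd.[Log Record Length]) AS log_bytes
FROM sys.fn_dblog(NULL, NULL) AS fd
```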

Let’s now check what happens to the performance counters “Log Flushes/sec” and “Log Bytes Flushed/sec” of the “Databases” object, tempdb instance; for this purpose I used SQL Load Generator to generate (only) around 130 batches/sec with the two stored procedures.

This is the result with the temp tables:


16 log flushes per second and around 650 KB/sec of TLog traffic.

With the table variables, instead, we have:


Almost half the log flushes/sec ( 8 ), and Log Bytes Flushed/sec more than halved at 300 KB/sec.

So, from a practical point of view, when we can’t enable the InMemory feature it is more useful to start with table variables, particularly when we work with a small number of rows and many calls per minute (or second); and when performance favors temp tables and the call rate is not so high, a RECOMPILE in the statement should eliminate any difference.

 


In this article I’ll write nothing new about monitoring, but rather show how to use what SQL Server already provides, and how to make it as light as possible; when working in “high OLTP workload” environments monitoring should be almost “invisible”, and by “high OLTP workload” I mean a server running at 50,000-60,000 batches/sec


with the load made up almost entirely of stored procedures whose average CPU time ranges from a few ms to a few tens of ms. As an example, the image below shows a 10-minute sample taken at about 1/4 of the maximum load of one customer, with the first column showing the CPU time of each stored procedure over the total SQL Server CPU usage; so if, for example, SQL Server was at 40%, the first sp was responsible for 19.8% of that load, and so on.


The above report is obtained from a table populated every minute with a simple query over sys.dm_exec_procedure_stats:

SELECT dateadd(mi, datediff(mi, 0, getdate()), 0), sql_handle,
plan_handle, total_elapsed_time, total_worker_time,
total_logical_reads, total_logical_writes, execution_count
FROM sys.dm_exec_procedure_stats
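From those snapshots, the percentage in the first column can be derived by comparing each procedure’s worker time with the total; here is a simplified sketch that reads the DMV directly instead of computing the per-minute deltas the real table would use:

```sql
SELECT DB_NAME(ps.database_id) AS db_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.total_worker_time,
       -- share of this procedure over the total worker time in the snapshot
       CAST(100.0 * ps.total_worker_time
            / NULLIF(SUM(ps.total_worker_time) OVER (), 0) AS decimal(5,2)) AS cpu_pct
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY cpu_pct DESC
```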

In addition, on our servers we usually capture a snapshot of sys.dm_exec_requests every minute to get a picture of the running statements, and we also use it in real time when something “smells” strange; for this purpose we initially used the famous sp_WhoIsActive, very powerful, but sometimes, with several processes running or some blocking conditions, it took too long or never finished.

After digging into the issue, the cause turned out to be the heavy use of string manipulation functions, which caused heavy tempdb and CPU usage; hence the idea of something “lighter”, with only the functionality needed for our purpose.

Basically we use a statement like this

SELECT s.host_name, r.session_id as 'session', r.blocking_session_id as [blocked by],
CAST  (SUBSTRING(st.text, (r.statement_start_offset/2)+1,
    ((CASE r.statement_end_offset
      WHEN -1 THEN DATALENGTH(st.text)
     ELSE r.statement_end_offset
     END - r.statement_start_offset)/2) + 1) as text) AS statement_text,  ISNULL(cast(OBJECT_SCHEMA_NAME(st.objectid, st.dbid) + '.' + OBJECT_NAME(st.objectid, st.dbid) as varchar(100)), 'AdHoc Statement') as object_name,
     s.login_name,
     CASE  WHEN LEFT(s.program_name, 8) = 'SQLAgent' then
                (SELECT 'SQLAgent Job: ' + b.name from msdb.dbo.sysjobs b WHERE (SUBSTRING(MASTER.dbo.FN_VARBINTOHEXSTR(CONVERT(VARBINARY(16), b.JOB_ID)),1,10)) = SUBSTRING(s.PROGRAM_NAME,30,10))
        ELSE s.program_name
    END AS [program_name],
r.start_time as request_start_time,r.last_wait_type,r.wait_time,
r.cpu_time, r.total_elapsed_time,
r.logical_reads, r.reads as physical_reads, r.status as request_status
FROM sys.dm_exec_sessions s left join sys.dm_exec_requests r on s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id > 50 and r.session_id <> @@SPID
AND last_wait_type <> 'SP_SERVER_DIAGNOSTICS_SLEEP'
ORDER BY cpu_time DESC

for checking in real time what is currently running, with an added

OUTER APPLY sys.dm_exec_text_query_plan( r.plan_handle,
r.statement_start_offset, r.statement_end_offset) tp

when we want to look at the execution plan (adding CAST(tp.query_plan as xml) AS plan_xml to the field list).

Second, we check for blocking processes with something like this

SELECT    s.session_id, r.blocking_session_id AS [blocked by], CAST(SUBSTRING(st.text, (r.statement_start_offset / 2) + 1,
                ((CASE r.statement_end_offset
                    WHEN - 1 THEN DATALENGTH(st.text)
                    ELSE r.statement_end_offset
                END - r.statement_start_offset) / 2) + 1) AS text) AS statement_text,
                ISNULL(cast(OBJECT_SCHEMA_NAME(st.objectid, st.dbid) + '.' + OBJECT_NAME(st.objectid, st.dbid)
                         AS varchar(100)), 'AdHoc Statement') AS object_name,
                s.login_name,
                CASE
                WHEN LEFT(s.program_name, 8) = 'SQLAgent' then
                        (SELECT 'SQLAgent Job: ' + b.name from msdb.dbo.sysjobs b WHERE (SUBSTRING(MASTER.dbo.FN_VARBINTOHEXSTR(CONVERT(VARBINARY(16), b.JOB_ID)),1,10)) = SUBSTRING(s.PROGRAM_NAME,30,10))
                ELSE s.program_name
                END AS [program_name], r.start_time AS request_start_time, r.last_wait_type, r.cpu_time, r.total_elapsed_time, r.logical_reads,
                r.reads AS physical_reads, r.status AS request_status, r.wait_time, r.command, master.dbo.fn_varbintohexstr(r.plan_handle) AS plan_handle
        FROM sys.dm_exec_sessions s
            LEFT JOIN sys.dm_exec_requests r
            ON r.session_id = s.session_id
            OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
        Where  S.Session_ID In
            (
                 Select Blocking_Session_ID
                 From Sys.DM_Exec_Requests (READUNCOMMITTED)
                 Where Blocking_Session_ID <> 0
            )

Finally, because tempdb is a precious resource, we introduced a check for long-running transactions, which can block the checkpoint process, cause a huge increase of the TLog size and, why not, exhaust the disk space. Just remember that on tempdb the checkpoint happens when the TLog is at 70% usage, and any open (long) transaction delays the process and could start an “avalanche effect”; often the problem does not depend on how much data you write in tempdb but rather on how long that transaction lasts.
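A quick way to keep an eye on the tempdb TLog as it approaches that 70% threshold is DBCC SQLPERF(LOGSPACE), which returns the log size and the percentage in use for every database:

```sql
-- One row per database: Database Name, Log Size (MB), Log Space Used (%), Status
DBCC SQLPERF(LOGSPACE)
```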

This script can also be changed to check a different database name.

SELECT  ISNULL(cast(tst.session_id as varchar),'Internal') as SessionID,
    tdt.database_transaction_log_bytes_reserved/1024 AS [TlogKB],
    tat.transaction_id AS [Transaction ID],
    tat.name      AS [TRANSACTION Name],
    tat.transaction_begin_time AS [TRANSACTION BEGIN TIME],
    DATEDIFF(mi, tat.transaction_begin_time, GETDATE()) AS [Elapsed TIME (in MIN)],
    CASE tat.transaction_type
        WHEN 1 THEN 'Read/write'
        WHEN 2 THEN 'Read-only'
        WHEN 3 THEN 'System'
        WHEN 4 THEN 'Distributed'
        END AS [TRANSACTION Type],
    CASE tat.transaction_state
        WHEN 0 THEN 'The transaction has not been completely initialized yet.'
        WHEN 1 THEN 'The transaction has been initialized but has not started.'
        WHEN 2 THEN 'The transaction is active.'
        WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
        WHEN 4 THEN 'The commit process has been initiated on the distributed transaction. This is for distributed transactions only. The distributed transaction is still active but further processing cannot take place.'
        WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
        WHEN 6 THEN 'The transaction has been committed.'
        WHEN 7 THEN 'The transaction is being rolled back.'
        WHEN 8 THEN 'The transaction has been rolled back.'
        END AS [TRANSACTION Description],
    CAST  (SUBSTRING(t.text, (r.statement_start_offset/2)+1,
    ((CASE r.statement_end_offset
        WHEN -1 THEN DATALENGTH(t.text)
        ELSE r.statement_end_offset
    END - r.statement_start_offset)/2) + 1) as text) AS statement_text
FROM sys.dm_tran_active_transactions tat
    INNER JOIN sys.dm_tran_database_transactions tdt
        on tat.transaction_id=tdt.transaction_id
    LEFT JOIN sys.dm_tran_session_transactions AS tst
        ON tdt.transaction_id = tst.transaction_id
    LEFT OUTER JOIN sys.dm_exec_requests AS r
         ON tst.session_id = r.session_id
    OUTER APPLY sys.dm_exec_sql_text(r.plan_handle) AS t
WHERE tdt.database_id=DB_ID('tempdb') and tdt.database_transaction_log_bytes_reserved > 0
ORDER BY 2 DESC     

Putting it all together, I wrote an sp with 3 input parameters and named it sp_WhatIsRunning:

USE master
GO
CREATE PROC [dbo].[sp_WhatIsRunning]
(
    @ShowPlan Bit = 0,
    @ShowLock Bit = 0,
    @ShowTempLog Bit = 0
)
AS
BEGIN
    SET NOCOUNT ON
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    IF (@ShowPlan = 0)
    BEGIN
        SELECT s.host_name, r.session_id as 'session', r.blocking_session_id as [blocked by],
        CAST  (SUBSTRING(st.text, (r.statement_start_offset/2)+1,
            ((CASE r.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
            ELSE r.statement_end_offset
        END - r.statement_start_offset)/2) + 1) as text) AS statement_text,  ISNULL(cast(OBJECT_SCHEMA_NAME(st.objectid, st.dbid) + '.' + OBJECT_NAME(st.objectid, st.dbid) as varchar(100)), 'AdHoc Statement') as object_name,
        s.login_name,
        CASE  WHEN LEFT(s.program_name, 8) = 'SQLAgent' then
                    (SELECT 'SQLAgent Job: ' + b.name from msdb.dbo.sysjobs b WHERE (SUBSTRING(MASTER.dbo.FN_VARBINTOHEXSTR(CONVERT(VARBINARY(16), b.JOB_ID)),1,10)) = SUBSTRING(s.PROGRAM_NAME,30,10))
            ELSE s.program_name
        END AS [program_name],
        r.start_time as request_start_time,r.last_wait_type,r.wait_time,
        r.cpu_time, r.total_elapsed_time,
        r.logical_reads, r.reads as physical_reads, r.status as request_status
        FROM sys.dm_exec_sessions s left JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
        WHERE r.session_id > 50 AND r.session_id <> @@SPID
        AND last_wait_type <> 'SP_SERVER_DIAGNOSTICS_SLEEP'
        ORDER BY cpu_time DESC
    END
    ELSE
    BEGIN
    SELECT s.host_name, r.session_id as 'session', r.blocking_session_id as [blocked by],
    CAST  (SUBSTRING(st.text, (r.statement_start_offset/2)+1,
        ((CASE r.statement_end_offset
         WHEN -1 THEN DATALENGTH(st.text)
        ELSE r.statement_end_offset
        END - r.statement_start_offset)/2) + 1) as text) AS statement_text,  ISNULL(cast(OBJECT_SCHEMA_NAME(st.objectid, st.dbid) + '.' + OBJECT_NAME(st.objectid, st.dbid) as varchar(100)), 'AdHoc Statement') as object_name,
        s.login_name,
             CASE  WHEN LEFT(s.program_name, 8) = 'SQLAgent' then
                        (SELECT 'SQLAgent Job: ' + b.name from msdb.dbo.sysjobs b WHERE (SUBSTRING(MASTER.dbo.FN_VARBINTOHEXSTR(CONVERT(VARBINARY(16), b.JOB_ID)),1,10)) = SUBSTRING(s.PROGRAM_NAME,30,10))
                ELSE s.program_name
            END AS [program_name],         r.start_time as request_start_time,r.last_wait_type,r.wait_time,
        r.cpu_time, r.total_elapsed_time,
        r.logical_reads, r.reads as physical_reads, r.status as request_status,
        CAST (tp.query_plan as xml) AS plan_xml
        from sys.dm_exec_sessions s left join sys.dm_exec_requests r on s.session_id = r.session_id
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
        OUTER APPLY sys.dm_exec_text_query_plan( r.plan_handle, r.statement_start_offset, r.statement_end_offset) tp
        WHERE r.session_id > 50 and r.session_id <> @@SPID
        AND last_wait_type <> 'SP_SERVER_DIAGNOSTICS_SLEEP'
        ORDER BY cpu_time DESC
    END

    IF (@ShowLock=1)
    BEGIN
        SELECT        s.session_id, r.blocking_session_id AS [blocked by], CAST(SUBSTRING(st.text, (r.statement_start_offset / 2) + 1, ((CASE r.statement_end_offset WHEN - 1 THEN DATALENGTH(st.text)
                         ELSE r.statement_end_offset END - r.statement_start_offset) / 2) + 1) AS text) AS statement_text, ISNULL(cast(OBJECT_SCHEMA_NAME(st.objectid, st.dbid) + '.' + OBJECT_NAME(st.objectid, st.dbid)
                         AS varchar(100)), 'AdHoc Statement') AS object_name, s.login_name,
             CASE  WHEN LEFT(s.program_name, 8) = 'SQLAgent' then
                        (SELECT 'SQLAgent Job: ' + b.name from msdb.dbo.sysjobs b WHERE (SUBSTRING(MASTER.dbo.FN_VARBINTOHEXSTR(CONVERT(VARBINARY(16), b.JOB_ID)),1,10)) = SUBSTRING(s.PROGRAM_NAME,30,10))
                ELSE s.program_name
            END AS [program_name]                         , r.start_time AS request_start_time, r.last_wait_type, r.cpu_time, r.total_elapsed_time, r.logical_reads,
                         r.reads AS physical_reads, r.status AS request_status, r.wait_time, r.command, master.dbo.fn_varbintohexstr(r.plan_handle) AS plan_handle
                        FROM sys.dm_exec_sessions s
                            LEFT JOIN sys.dm_exec_requests r
                            ON r.session_id = s.session_id
                         OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
        Where  S.Session_ID In
            (
                 SELECT Blocking_Session_ID
                    FROM Sys.DM_Exec_Requests (READUNCOMMITTED)
                 WHERE Blocking_Session_ID <> 0
            )
    END
    IF (@ShowTempLog =1)
            BEGIN
            SELECT  ISNULL(cast(tst.session_id as varchar),'Internal') as SessionID,
            tdt.database_transaction_log_bytes_reserved/1024 AS [TlogKB],
            tat.transaction_id AS [Transaction ID],
            tat.name      AS [TRANSACTION Name],
            tat.transaction_begin_time AS [TRANSACTION BEGIN TIME],
            DATEDIFF(mi, tat.transaction_begin_time, GETDATE()) AS [Elapsed TIME (in MIN)],
            CASE tat.transaction_type
                WHEN 1 THEN 'Read/write'
                WHEN 2 THEN 'Read-only'
                WHEN 3 THEN 'System'
                WHEN 4 THEN 'Distributed'
                END AS [TRANSACTION Type],
            CASE tat.transaction_state
                WHEN 0 THEN 'The transaction has not been completely initialized yet.'
                WHEN 1 THEN 'The transaction has been initialized but has not started.'
                WHEN 2 THEN 'The transaction is active.'
                WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
                WHEN 4 THEN 'The commit process has been initiated on the distributed transaction. This is for distributed transactions only. The distributed transaction is still active but further processing cannot take place.'
                WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
                WHEN 6 THEN 'The transaction has been committed.'
                WHEN 7 THEN 'The transaction is being rolled back.'
                WHEN 8 THEN 'The transaction has been rolled back.'
                END AS [TRANSACTION Description],
            CAST  (SUBSTRING(t.text, (r.statement_start_offset/2)+1,
            ((CASE r.statement_end_offset
                WHEN -1 THEN DATALENGTH(t.text)
                ELSE r.statement_end_offset
            END - r.statement_start_offset)/2) + 1) as text) AS statement_text
        FROM sys.dm_tran_active_transactions tat
            INNER JOIN sys.dm_tran_database_transactions tdt
                on tat.transaction_id=tdt.transaction_id
            LEFT JOIN sys.dm_tran_session_transactions AS tst
                ON tdt.transaction_id = tst.transaction_id
            LEFT OUTER JOIN sys.dm_exec_requests AS r
                 ON tst.session_id = r.session_id
            OUTER APPLY sys.dm_exec_sql_text(r.plan_handle) AS t
        WHERE tdt.database_id=DB_ID('tempdb') and tdt.database_transaction_log_bytes_reserved > 0
        ORDER BY 2 DESC
    END
END     

You can call it simply with:

  • exec sp_WhatIsRunning to get a list of running processes
  • exec sp_WhatIsRunning 1 to add the execution plan to the output
  • exec sp_WhatIsRunning 0,1 to show the list of running processes together with blocking/blocked processes
  • exec sp_WhatIsRunning 0,0,1 to show the list of running processes together with the open transactions on tempdb
  • exec sp_WhatIsRunning 1,1,1 to show everything: running processes, blocked/blocking processes and tempdb active transactions

This is just a starting point; we created the sp this way because this is what we need for real-time monitoring, but it can be expanded with additional fields and/or visualization options.

SQL Server 2014, and Cumulative Update 2 of Service Pack 1 of SQL Server 2012, introduced the “TO URL” option of the BACKUP command, which allows a backup file to be written directly to Azure blob storage. The operation is also possible with earlier versions (2005, 2008, 2008R2) through the “SQL Server Backup to Windows Azure” tool, made available in early 2014.

(more…)

Starting with SQL Server 2005, the SQL Server plan cache is exposed through the Dynamic Management View (DMV) sys.dm_exec_cached_plans, which returns one row for each cached execution plan and which, in its simplest form (SELECT * FROM…), tells us fairly little, apart from letting us aggregate on some columns, such as size_in_bytes, to get an idea of the distribution of objects inside the plan cache itself:

SELECT SUM(size_in_bytes)/1024 AS 'SizeInKB', objtype
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY 1 DESC
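A typical next step on the same DMV is measuring how much of the cache is taken by single-use ad hoc plans, a figure that can suggest enabling the “optimize for ad hoc workloads” option:

```sql
SELECT COUNT(*) AS single_use_plans,
       SUM(size_in_bytes)/1024/1024 AS size_MB
FROM sys.dm_exec_cached_plans
WHERE objtype = 'Adhoc' AND usecounts = 1
```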

(more…)