Monday, November 14, 2022

Improved and updated, round 2: performance tuning a stored procedure with user defined function calls in SQL Server

Recently I was reviewing performance on a troublesome stored procedure. I used the method detailed in my recent blog post: UPDATED! Performance tuning a stored procedure with user defined function calls in SQL Server

But in looking at the output, one of the functions just never showed up, as though it were not taking any resources. Why was that? After some research, I found that only scalar functions show up in the DMV sys.dm_exec_function_stats; the function that was missing was a table-valued function.
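
For context, that earlier approach boils down to something like this sketch against sys.dm_exec_function_stats (the database name filter is a placeholder); since the DMV only tracks scalar functions, the table-valued function never appears here:

 Select  
   FunctionName = Object_Name(FuncStats.object_id, FuncStats.database_id)  
   ,Executions = FuncStats.execution_count  
   ,CPUTime = FuncStats.total_worker_time  
   ,LogicalReads = FuncStats.total_logical_reads  
   ,Duration = FuncStats.total_elapsed_time  
 From sys.dm_exec_function_stats AS FuncStats  
 Where DB_NAME(FuncStats.database_id) = 'MyDatabaseName'  
 Order by CPUTime desc  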

So, back to the drawing board. It turns out that another way to get all the performance-related stats for stored procedures and all types of functions is to use the DMV sys.dm_exec_query_stats.

The steps are very similar: clear the plan cache with DBCC FREEPROCCACHE, run the stored procedure, and then check the query statistics. When I checked performance this way, the table-valued function showed up properly.

This is what I ran:

 -- Clear the plan cache so only this run's statistics show up (avoid on busy production servers)  
 DBCC FREEPROCCACHE   
 -- Run the stored procedure being tuned  
 Exec API_REPORT_Transcript   
 -- Then pull per-statement stats from the plan cache  
 Select top 20  
   ObjectName = Object_Name(QueryText.objectid, QueryText.dbid)   
   ,Executions = QueryStats.execution_count   
   ,CPUTime = QueryStats.total_worker_time   
   ,LogicalReads = QueryStats.total_logical_reads   
   ,LogicalWrites = QueryStats.total_logical_writes   
   ,Duration = QueryStats.total_elapsed_time   
   ,QueryText =   
     convert(  
       varchar(200), SUBSTRING(QueryText.text,QueryStats.statement_start_offset/2 +1,   
       (CASE WHEN QueryStats.statement_end_offset = -1   
       THEN LEN(CONVERT(NVARCHAR(MAX), QueryText.text)) * 2   
       ELSE QueryStats.statement_end_offset END -  
       QueryStats.statement_start_offset  
       )/2  
       )   
     )   
 From sys.dm_exec_query_stats AS QueryStats   
      CROSS APPLY sys.dm_exec_sql_text(QueryStats.sql_handle) AS QueryText   
 Where   
   DB_NAME(QueryText.dbid) = 'MyDatabaseName'  
 Order by CPUTime desc     
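
If you also want to see the execution plan for each statement, you can add a CROSS APPLY to sys.dm_exec_query_plan. This is a sketch of that variation (not part of the original query, and it reuses the same placeholder database name); the plan column can be clicked open in SSMS:

 Select top 20  
   ObjectName = Object_Name(QueryText.objectid, QueryText.dbid)  
   ,CPUTime = QueryStats.total_worker_time  
   ,Duration = QueryStats.total_elapsed_time  
   ,ExecutionPlan = QueryPlan.query_plan  
 From sys.dm_exec_query_stats AS QueryStats  
      CROSS APPLY sys.dm_exec_sql_text(QueryStats.sql_handle) AS QueryText  
      CROSS APPLY sys.dm_exec_query_plan(QueryStats.plan_handle) AS QueryPlan  
 Where DB_NAME(QueryText.dbid) = 'MyDatabaseName'  
 Order by CPUTime desc  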

Tuesday, July 26, 2022

The magic of Query Store for tracking application timeouts

If I were writing a click-baity title for an article on this, it would be "One Crazy Trick To Find Application Timeouts That Will Blow Your Mind!!!" It really is that useful. 

The bottom line is that Query Store is useful for far more than you might think, and specifically for researching application timeouts.

A system I've been working on lately has had many application timeouts, caused by the default command timeout setting (timeout after 30 seconds). The tool we were using for error tracking had some flaws, and it was very difficult to actually pin down the name of the stored procedure that had timed out.

I had set up the Extended Event session for timeouts (on sqlserver.attention). However, potentially due to certain issues with the front-end setup, the sqlserver.sql_text action was almost always blank when there was a timeout. I didn't investigate in depth, and moved on to other tasks.
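
For reference, that kind of session looks roughly like this sketch (the session name and file path are placeholders, not from my actual setup); it captures the attention event with the sql_text action, which in my case came back blank more often than not:

 CREATE EVENT SESSION [Attention_Tracking] ON SERVER  
 ADD EVENT sqlserver.attention(  
 ACTION(sqlserver.client_app_name,  
 sqlserver.client_hostname,  
 sqlserver.database_name,  
 sqlserver.session_id,  
 sqlserver.sql_text))  
 ADD TARGET package0.event_file(SET filename=N'D:\ExtendedEvents\Attention_Tracking.xel')  
 WITH (STARTUP_STATE=ON)  
 GO  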

But once I started spending more time on Query Store and using it more, I discovered the execution_type_desc field in query_store_runtime_stats. What does it contain?

Well, 99.9% of the time it contains the string "regular". But occasionally it contains "aborted", and those are your timeouts (client-initiated aborts). It's so easy to find timeouts with this method that I'm surprised it's not better known.

By using the execution_type_desc field in query_store_runtime_stats, I was able to find clusters of linked timeouts, and then dig deeper (also using query_store_runtime_stats, but that's another story) to find the root cause. 

Here's the query I frequently use, commenting lines in or out as necessary, to get the details I need. It returns one row for each QueryID and PlanID (per runtime stats interval). If it's a stored procedure with many separate statements, you may want to filter on one particular QueryID or on the SQL text.


 SELECT   
   ObjectName = Object_Name(q.object_id)  
   ,RunTimeFirst = convert(varchar(16), rs.first_execution_time,121)  
   ,RunTimeLast = convert(varchar(16), rs.last_execution_time,121)  
   ,TotalExecutions = rs.count_executions  
   ,AverageCPU = round(rs.avg_cpu_time , 0)  
   ,AverageLogicalReads = round(rs.avg_logical_io_reads ,0)  
   ,AverageDuration = round(rs.avg_duration ,0)  
   ,q.query_id   
   ,p.plan_id   
   ,p.Is_Forced_Plan  
   ,rs.runtime_stats_id  
   ,execution_type_desc  
   ,qt.query_sql_text  
   -- Uncomment only when necessary...casting the plan takes a long time.   
   -- ,ExecutionPlan = CAST(p.query_plan as XML)  
 FROM sys.query_store_runtime_stats rs  
 JOIN sys.query_store_plan p   
   ON p.plan_id = rs.plan_id  
 JOIN sys.query_store_query q   
   ON q.query_id = p.query_id  
 JOIN sys.query_store_query_text qt   
   ON q.query_text_id = qt.query_text_id  
 WHERE   
   1=1 -- make it easier to comment/uncomment
   -- just show last 2 days by default  
   and first_execution_time >= dateadd(d, -2, getdate())  
   and Object_Name(q.object_id) = 'API_GetUserLearningPlanObjects'  
   and execution_type_desc = 'Aborted'  
   -- and q.query_id in (4013401)  
   -- and qt.query_sql_text like '%c.CourseID as ID%'  
 Order by RunTimeFirst desc  
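
As a quick variation (a sketch, not the exact query I use), you can aggregate the aborted executions per object per day to spot clusters of timeouts; the 14-day window is arbitrary:

 Select  
   ObjectName = Object_Name(q.object_id)  
   ,TimeoutDate = convert(date, rs.first_execution_time)  
   ,TimeoutCount = sum(rs.count_executions)  
 From sys.query_store_runtime_stats rs  
 Join sys.query_store_plan p on p.plan_id = rs.plan_id  
 Join sys.query_store_query q on q.query_id = p.query_id  
 Where rs.execution_type_desc = 'Aborted'  
   and rs.first_execution_time >= dateadd(d, -14, getdate())  
 Group by Object_Name(q.object_id), convert(date, rs.first_execution_time)  
 Order by TimeoutDate desc, TimeoutCount desc  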

Tuesday, May 17, 2022

Why is my transaction log growing?

I advise on the management of multiple large databases. One of them had a steadily growing transaction log. Figuring out exactly why the log was growing was not a simple process. Here are the steps I took to narrow down which processes were causing the growth.

First, the most precise way to figure out what, specifically, is causing log growth is through extended events. The database_file_size_change event will log all available data about the change (size, time, the specific SQL that caused the size change, etc.).

If you've never used extended events before, do a little reading to learn about them first. They're insanely handy and highly recommended, but not necessarily straightforward. The SQL below creates an extended event session to track the details.

 -- Capture every database file size change, along with who/what caused it  
 CREATE EVENT SESSION [DB_Size_Tracking] ON SERVER   
 ADD EVENT sqlserver.database_file_size_change(  
 ACTION(sqlserver.client_app_name,  
 sqlserver.client_hostname,  
 sqlserver.database_name,  
 sqlserver.nt_username,  
 sqlserver.plan_handle,  
 sqlserver.query_hash,  
 sqlserver.query_plan_hash,  
 sqlserver.server_principal_name,  
 sqlserver.session_id,  
 sqlserver.session_nt_username,  
 sqlserver.sql_text,  
 sqlserver.username))  
 -- File target: adjust this path for your server  
 ADD TARGET package0.event_file(SET filename=N'D:\ExtendedEvents\DB_Size_Tracking.xel')  
 WITH (MAX_MEMORY=4096 KB,  
 EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,  
 MAX_DISPATCH_LATENCY=30 SECONDS,  
 MAX_EVENT_SIZE=0 KB,  
 MEMORY_PARTITION_MODE=NONE,  
 TRACK_CAUSALITY=OFF,  
 STARTUP_STATE=ON)  
 GO  
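
Note that creating the session doesn't start it (STARTUP_STATE=ON only starts it automatically after a server restart). To start collecting right away, start the session from SSMS or run:

 ALTER EVENT SESSION [DB_Size_Tracking] ON SERVER STATE = START  
 GO  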

Once this extended event session has been started, you can begin to see what, specifically, is causing the size changes. Here's what I use to return data from the extended event session.

 If object_id('tempdb..#SizeChanges') is not null Drop Table #SizeChanges  
 Select  
   -- Object_Name = n.value ('(data[@name="object_name"]/value)[1]', 'nvarchar(256)')   
   -- ,Duration = n.value ('(data[@name="duration"]/value)[1]', 'int')   
   SQLText = n.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)')      
   ,Size_Change_KB = n.value ('(data[@name="size_change_kb"]/value)[1]', 'int')   
   ,Timestamp = convert(smalldatetime, n.value('(@timestamp)[1]', 'datetime2'))  
   ,database_name = n.value ('(data[@name="database_name"]/value)[1]', 'nvarchar(256)')   
   ,database_name_Action = n.value('(action[@name="database_name"]/value)[1]', 'nvarchar(256)')      
   ,file_name = n.value ('(data[@name="file_name"]/value)[1]', 'nvarchar(256)')   
 Into #SizeChanges  
 From (  
   Select cast(event_data as XML) as event_data  
   -- edit this path to match the event_file target location used in the session  
   From sys.fn_xe_file_target_read_file('D:\ExtendedEvents\DB_Size_Tracking*.xel', null, null, null)  
   ) ed  
 Cross apply ed.event_data.nodes('event') as q(n)  
 Select * From #SizeChanges   

Once you have the temp table #SizeChanges created, you can do all kinds of queries and analysis.
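
For example, here's the kind of follow-up analysis I mean (a sketch; truncating the SQL text to 200 characters is just to make grouping easy): total the growth per statement so the biggest offenders float to the top.

 Select top 20  
   SQLText = convert(varchar(200), SQLText)  
   ,TotalSizeChangeKB = sum(Size_Change_KB)  
   ,Occurrences = count(*)  
   ,FirstSeen = min(Timestamp)  
   ,LastSeen = max(Timestamp)  
 From #SizeChanges  
 Group by convert(varchar(200), SQLText)  
 Order by TotalSizeChangeKB desc  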

In my situation, I found that a regular index maintenance job was causing the problem. Specifically, a large table with a clustered index built on a uniqueidentifier column (never a good idea) was causing transaction log growth whenever the index was maintained. Once this was fixed, the transaction log growth was no longer a problem.
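
If you suspect the same pattern in your own database, a catalog query along these lines (a sketch using the standard system views, not something from the original troubleshooting) will list clustered indexes whose leading key column is a uniqueidentifier:

 Select  
   TableName = Object_Name(i.object_id)  
   ,IndexName = i.name  
   ,LeadingColumn = c.name  
 From sys.indexes i  
   Join sys.index_columns ic  
     on ic.object_id = i.object_id and ic.index_id = i.index_id and ic.key_ordinal = 1  
   Join sys.columns c  
     on c.object_id = ic.object_id and c.column_id = ic.column_id  
   Join sys.types t  
     on t.user_type_id = c.user_type_id  
 Where i.type_desc = 'CLUSTERED'  
   and t.name = 'uniqueidentifier'  
 Order by TableName  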