SQL Server Performance Tuning: The Complete DBA and Developer Guide
Performance tuning is the skill that separates good DBAs from great ones. It is part science, part detective work, and part knowing where to look first. This guide covers everything from the fundamentals a junior DBA needs to understand on day one, through the advanced techniques an experienced DBA uses to solve production emergencies — with real scripts, real before/after examples, and a structured methodology you can follow on any SQL Server environment.
Every section is labeled by level so you can jump to where you are and work forward. Come back to this page — it is designed as a reference you will use repeatedly, not something you read once and put down.
Compatibility: Scripts in this guide work on SQL Server 2012 and later unless noted. SQL Server 2019 and 2022 features — including Parameter Sensitive Plan optimization, Memory Grant Feedback, and DOP Feedback — are called out inline.
- How SQL Server Processes a Query
- What “Performance” Actually Means — Metrics That Matter
- The Performance Tuning Methodology — Always Start Here
- Naming Conventions — Why They Matter for Performance
- Wait Statistics — SQL Server’s Own Diagnosis
- Finding Your Worst Queries — DMVs and Query Store
- Reading Execution Plans — A Practical Field Guide
- SET STATISTICS IO and TIME — Your First Measurement Tool
- Non-SARGable Queries — The Silent Index Killer
- Implicit Data Type Conversion — The Hidden Scan
- What Not to Do — Common Anti-Patterns
- Writing Queries That SQL Server Can Optimize
- Index Fundamentals — Seeks, Scans, Lookups
- Covering Indexes and Included Columns
- Index Design Strategy — The Right Index for the Right Query
- Indexes That Hurt — Redundant, Unused, and Write-Heavy
- Parameter Sniffing — What It Is and How to Fix It
- Statistics — Why They Go Wrong and How to Fix Them
- Blocking, Deadlocks, and Lock Management
- TempDB Spills and Memory Grants
- Intelligent Query Processing — SQL Server 2019 and 2022
1. How SQL Server Processes a Query (Beginner)
Understanding what happens between submitting a query and getting results is the foundation of all performance tuning. If you do not understand this pipeline, you are guessing. If you do, you can look at any slow query and immediately reason about which step is the bottleneck.
The Query Processing Pipeline
Every query that SQL Server executes goes through these stages in order:
- Parsing — SQL Server checks the T-SQL syntax and converts it to an internal parse tree. Errors here are syntax errors.
- Binding (Algebrizer) — Object names are resolved against the system catalog. Column names, table names, and data types are validated. This is where “invalid object name” errors come from.
- Plan Cache Check — SQL Server checks the plan cache for a matching compiled plan. If a suitable plan exists, it is reused and the optimization step is skipped. This is why the first execution of a query is typically slower than subsequent ones.
- Optimization — On a cache miss, the query optimizer generates one or more candidate execution plans and estimates the cost of each. It selects the plan with the lowest estimated cost. This is the most important stage for performance — the optimizer’s choices determine everything.
- Execution — The storage engine executes the chosen plan, reading data from the buffer pool (RAM) where possible, and from disk for any pages not currently in memory.
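The plan-cache step is easy to observe from T-SQL. A minimal sketch — the filter fragment is a placeholder you would replace with text from a query you have just run twice; usecounts climbing above 1 shows the plan being reused rather than recompiled:

```sql
-- Observe plan reuse: usecounts increments each time a cached plan is
-- reused instead of recompiled from scratch.
SELECT TOP (10)
    cp.usecounts,   -- times this compiled plan has been reused
    cp.objtype,     -- Adhoc, Prepared, Proc, ...
    t.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS t
WHERE t.[text] LIKE N'%YourQueryFragment%'            -- placeholder
  AND t.[text] NOT LIKE N'%dm_exec_cached_plans%'     -- exclude this query
ORDER BY cp.usecounts DESC;
```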
The Buffer Pool — Why Memory Is Everything
SQL Server does not read data directly from disk during query execution. All data pages must first be loaded into the buffer pool — a portion of RAM managed by SQL Server. A page read from the buffer pool is called a logical read. A page that had to be fetched from disk first is a physical read. Physical reads are orders of magnitude slower than logical reads.
This is why the most important metric when evaluating a query is not elapsed time — it is logical reads. A query that does a million logical reads is doing a million page lookups in RAM. Reducing logical reads is the primary goal of query tuning.
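You can see how the buffer pool is currently allocated per database. A quick sketch against the DMV that tracks every in-memory page (each row is one 8 KB page):

```sql
-- Buffer pool occupancy by database
SELECT
    CASE WHEN database_id = 32767 THEN 'ResourceDb'
         ELSE DB_NAME(database_id) END AS [Database],
    COUNT(*) * 8 / 1024 AS [Buffer Pool MB]   -- pages are 8 KB each
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY [Buffer Pool MB] DESC;
```

On a large buffer pool this query itself takes a moment, since it touches one row per cached page.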
Key insight: The query optimizer is not perfect. It makes decisions based on statistics — estimates of how many rows a condition will return. When statistics are out of date or the data distribution is unusual, the optimizer chooses a suboptimal plan. Most performance problems in the real world are caused by the optimizer making a bad choice, not by SQL Server being fundamentally slow.
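You can inspect the statistics the optimizer relies on directly. A sketch — the table and index names here are hypothetical, substitute your own:

```sql
-- Header: rows, rows sampled, and (critically) the last update date
DBCC SHOW_STATISTICS ('dbo.SalesOrder', 'IX_SalesOrder_OrderDate')
    WITH STAT_HEADER;

-- Histogram: the per-range row counts the optimizer uses to estimate
-- how many rows a predicate will return
DBCC SHOW_STATISTICS ('dbo.SalesOrder', 'IX_SalesOrder_OrderDate')
    WITH HISTOGRAM;
```

When the histogram's row counts no longer reflect the real data distribution, estimates drift and plan quality degrades.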
2. What “Performance” Actually Means — Metrics That Matter (Beginner)
When someone says a query is “slow,” they are describing a symptom. Before tuning anything, you need to know which metric is actually the problem — because the fix is different depending on the answer.
| Metric | What It Measures | High Value Suggests |
|---|---|---|
| Logical Reads | Pages read from buffer pool (RAM) | Missing or inefficient index; too many rows examined |
| Physical Reads | Pages fetched from disk | Buffer pool too small; data not in cache; cold start |
| CPU Time | Processor time consumed | Complex calculation, sort, hash operation, or parallelism overhead |
| Elapsed Time | Wall-clock time from start to end | Waits (blocking, I/O, network) — not just CPU work |
| Rows Read vs Rows Returned | How many rows examined vs returned | Ratio near 1 = efficient; high ratio = far more data examined than the result needed |
| Memory Grant | Memory allocated for sort/hash operations | Spill to TempDB if grant too small; waste if grant too large |
Elapsed time is the least useful metric for query tuning. A query that takes 10 seconds might be spending 9 of those seconds waiting for a lock held by another session. Fixing the query changes nothing. Always correlate elapsed time with CPU time and logical reads before assuming the query itself is the problem.
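For currently executing requests, this correlation is one query away. A sketch against sys.dm_exec_requests — a large gap between elapsed time and CPU time means the session is mostly waiting, not working:

```sql
-- Running requests where elapsed time far exceeds CPU time are waiting,
-- not computing; wait_type shows what they are waiting on right now
SELECT
    r.session_id,
    r.total_elapsed_time AS [Elapsed ms],
    r.cpu_time           AS [CPU ms],
    r.total_elapsed_time - r.cpu_time AS [Waiting ms],
    r.wait_type,          -- NULL means actively running on a scheduler
    r.logical_reads,
    t.[text]
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID       -- exclude this diagnostic query
ORDER BY [Waiting ms] DESC;
```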
3. The Performance Tuning Methodology — Always Start Here (Beginner)
The single biggest mistake in performance tuning is jumping to a fix before understanding the problem. Randomly adding indexes, rewriting queries, or changing server settings without a diagnostic baseline almost always makes things worse, or solves a symptom while leaving the root cause untouched.
Paul Randal of SQLskills puts it clearly: determine what the scope of the problem is before gathering metrics and deciding what to change — and always consider the wider effect of any change before making it.
The Structured Approach
- Measure first. Capture the current state with numbers — logical reads, CPU time, elapsed time, wait stats. You cannot confirm an improvement without a baseline.
- Identify the scope. Is this one slow query, one slow stored procedure, a slow server, or slow for only one application? The diagnostic path is different for each.
- Find the highest-value target. Do not tune the query you are looking at just because someone complained about it. Find the query that consumes the most total resources on the instance — that is where the most improvement lives.
- Understand the cause. Is the issue a bad execution plan? Missing index? Out-of-date statistics? Blocking? TempDB spill? Each has a different fix.
- Make one change at a time. If you change three things simultaneously and performance improves, you do not know which one fixed it. If it gets worse, you do not know which one broke it.
- Measure again. Confirm the change produced the expected improvement with the same metrics you captured in step 1.
- Document everything. What you changed, why, when, and what the before/after metrics were. Future you (and your team) will thank you.
Never tune in production without a rollback plan. Adding an index, changing MAXDOP, flushing the plan cache, or updating statistics can all cause plan regressions elsewhere. Always test changes in a non-production environment first where possible, and have a documented rollback procedure before touching production.
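Step 1 (measure first) can be as simple as snapshotting wait statistics into a table before a change, then diffing afterward. A sketch — the table name dbo.WaitStatsBaseline is arbitrary and assumed not to exist yet:

```sql
-- Capture the baseline BEFORE making any change
SELECT GETDATE() AS capture_time, wait_type, waiting_tasks_count, wait_time_ms
INTO dbo.WaitStatsBaseline       -- hypothetical snapshot table
FROM sys.dm_os_wait_stats;

-- ... make ONE change, let the workload run for a representative period ...

-- Diff current values against the baseline to see what the change shifted
SELECT w.wait_type,
       w.wait_time_ms - b.wait_time_ms AS [Delta Wait ms]
FROM sys.dm_os_wait_stats AS w
JOIN dbo.WaitStatsBaseline AS b ON b.wait_type = w.wait_type
WHERE w.wait_time_ms > b.wait_time_ms
ORDER BY [Delta Wait ms] DESC;
```

The same before/after pattern works with STATISTICS IO numbers for a single query, or Query Store aggregates for a whole database.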
4. Naming Conventions — Why They Matter for Performance (Beginner)
Naming conventions are not just about aesthetics or team consistency — several naming choices in SQL Server have direct, measurable performance consequences. Understanding these will save you hours of troubleshooting later.
The sp_ Prefix on Stored Procedures — Never Use It
This is the most consequential naming mistake in SQL Server. When you create a stored procedure with the sp_ prefix, SQL Server assumes it might be a system procedure and looks for it in the master database before searching your user database. Every execution pays that extra lookup, and the repeated cache misses can surface as SP:CacheMiss events and brief compile-lock contention — a serialization point that degrades concurrency. On a busy server with many stored procedure calls, this becomes a measurable bottleneck.
❌ Never Do This
CREATE PROCEDURE sp_GetCustomer
@CustomerID INT
AS ...
✅ Do This Instead
CREATE PROCEDURE dbo.GetCustomer
@CustomerID INT
AS ...
Always Use a Schema — Never Rely on the Default
When you reference an object without a schema prefix — SELECT * FROM Orders instead of SELECT * FROM dbo.Orders — SQL Server must resolve the schema at runtime for every execution. This prevents plan sharing between users with different default schemas, bloats the plan cache with redundant plans, and causes unnecessary recompiles.
❌ Schema Missing — Forces Runtime Resolution
SELECT * FROM Orders
JOIN OrderDetails ON ...
EXEC GetCustomer 123
✅ Always Qualify with Schema
SELECT * FROM dbo.Orders
JOIN dbo.OrderDetails ON ...
EXEC dbo.GetCustomer 123
SQL Server Naming Standards Reference
| Object Type | Convention | Example | Avoid |
|---|---|---|---|
| Table | PascalCase, singular noun | Customer, SalesOrder | tbl_Customers, TBL_sales_order |
| Column | PascalCase, descriptive | CustomerId, OrderDate | id, dt, col1 |
| Primary Key | PK_TableName | PK_Customer | PrimaryKey, pk1 |
| Foreign Key | FK_ChildTable_ParentTable | FK_Order_Customer | fk1, FK_ref |
| Nonclustered Index | IX_TableName_Columns | IX_Order_CustomerId_OrderDate | index1, MyIndex |
| Unique Index | UX_TableName_Columns | UX_Customer_Email | idx_unique |
| Stored Procedure | dbo.VerbNoun or dbo.ObjectAction | dbo.GetCustomerById, dbo.OrderInsert | sp_GetCustomer, proc_get |
| Function (Scalar) | dbo.fn_Description | dbo.fn_GetFullName | fn_1, function1 |
| View | dbo.vw_Description | dbo.vw_ActiveCustomers | view1, V_Customers |
| Trigger | trg_Action_TableName | trg_Ins_Order, trg_Upd_Customer | trigger1, t_order |
| Temp Table | #PascalCase | #OrderStaging, #Results | #temp, #t1 |
| Variable | @PascalCase, matching column name | @CustomerId, @OrderDate | @x, @var1, @@MyVar |
| Boolean Column | Affirmative: IsX, HasX | IsActive, HasPermission | NotDeleted, Active (ambiguous) |
Do not use the tbl_ prefix on tables. This “tibbling” convention is a holdover popularized by Microsoft Access imports in the early SQL Server era. The object type is always known from context and the system catalog, so the prefix adds noise and length with no benefit. Opinions are more divided on the vw_ prefix for views shown in the table above — it is a defensible consistency choice, but by the same logic it is unnecessary: if your code cannot tell the difference between a table and a view from context, that is an architecture problem, not a naming problem.
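Auditing an existing database for these prefixes is a two-query job against the catalog views. A sketch — nothing here is assumed beyond standard catalog views:

```sql
-- User stored procedures carrying the sp_ prefix
-- ([_] escapes the LIKE wildcard so 'spXYZ' is not matched)
SELECT s.name AS [Schema], p.name AS [Procedure]
FROM sys.procedures AS p
JOIN sys.schemas AS s ON s.schema_id = p.schema_id
WHERE p.name LIKE N'sp[_]%'
  AND p.is_ms_shipped = 0;

-- Tables carrying the tbl_ prefix
SELECT s.name AS [Schema], t.name AS [Table]
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE N'tbl[_]%';
```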
5. Wait Statistics — SQL Server’s Own Diagnosis (Beginner)
Wait statistics are the most important diagnostic signal in SQL Server. Every time a SQL Server worker thread has to stop and wait for something — a lock, a disk read, a memory allocation, a network flush — it records that wait. The accumulation of these waits, available in sys.dm_os_wait_stats, tells you exactly where time is being lost across the entire instance.
The key insight from Paul Randal, who wrote extensively on this topic: rather than guessing what is wrong with a server, read the waits. The dominant wait type points directly to the category of problem.
The Most Common Wait Types and What They Mean
| Wait Type | What Is Waiting | Most Likely Root Cause |
|---|---|---|
| PAGEIOLATCH_SH | Data page being read from disk (shared latch) | Missing index causing scans; buffer pool too small; slow storage |
| PAGEIOLATCH_EX | Data page being read from disk for modification (exclusive latch) | Heavy writes against pages not in cache; slow storage |
| WRITELOG | Transaction log flush | Log I/O bottleneck; many small, frequent commits; log on slow storage |
| LCK_M_* | Lock acquisition | Blocking; long-running transactions; missing indexes causing wide locks |
| CXPACKET | Parallel query coordination | Cost threshold for parallelism too low; MAXDOP too high; skewed data distribution |
| CXCONSUMER | Consumer side of a parallel exchange | Usually benign coordination; sustained high values suggest imbalanced parallelism |
| SOS_SCHEDULER_YIELD | CPU yielding to other threads | CPU pressure; queries consuming too many CPU cycles |
| RESOURCE_SEMAPHORE | Memory grant queue | Memory pressure; queries requesting larger grants than available |
| PAGELATCH_UP on TempDB | TempDB allocation page latch | TempDB allocation contention (PFS/GAM/SGAM pages); too few data files |
| ASYNC_NETWORK_IO | Client consuming results | Application not consuming result sets fast enough; SSRS/Excel pulling large sets |
-- Top waits since last restart or stats clear
-- This is the single most useful starting point on any server
WITH [Waits] AS (
SELECT
wait_type,
wait_time_ms / 1000.0 AS [WaitS],
(wait_time_ms - signal_wait_time_ms) / 1000.0 AS [ResourceS],
signal_wait_time_ms / 1000.0 AS [SignalS],
waiting_tasks_count AS [WaitCount],
100.0 * wait_time_ms / SUM(wait_time_ms) OVER() AS [Percentage],
ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [RowNum]
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
N'SLEEP_TASK',N'SLEEP_SYSTEMTASK',N'SLEEP_DBSTARTUP',N'SLEEP_DCOMSTARTUP',
N'SLEEP_MASTERDBREADY',N'SLEEP_MASTERMDREADY',N'SLEEP_MASTERUPGRADED',
N'WAITFOR',N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG',N'DISPATCHER_QUEUE_SEMAPHORE',
N'XE_DISPATCHER_WAIT',N'XE_TIMER_EVENT',N'BROKER_TO_FLUSH',
N'BROKER_TASK_STOP',N'CLR_AUTO_EVENT',N'REQUEST_FOR_DEADLOCK_SEARCH',
N'RESOURCE_QUEUE',N'SERVER_IDLE_CHECK',N'SLEEP_TEMPDBSTARTUP',
N'SNI_HTTP_ACCEPT',N'SQLTRACE_BUFFER_FLUSH',N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
N'CHECKPOINT_QUEUE',N'DBMIRROR_EVENTS_QUEUE',N'SQLTRACE_WAIT_ENTRIES',
N'LAZYWRITER_SLEEP',N'LOGMGR_QUEUE',N'ONDEMAND_TASK_QUEUE',
N'FT_IFTS_SCHEDULER_IDLE_WAIT',N'BROKER_EVENTHANDLER',
N'DIRTY_PAGE_POLL',N'HADR_WORK_QUEUE',N'HADR_LOGCAPTURE_WAIT',
N'HADR_TIMER_TASK',N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP',
N'QDS_ASYNC_QUEUE',N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP')
AND waiting_tasks_count > 0
)
SELECT
MAX(W1.wait_type) AS [WaitType],
CAST(MAX(W1.Percentage) AS DECIMAL(5,2)) AS [Wait %],
CAST(MAX(W1.WaitS)/MAX(W1.WaitCount) AS DECIMAL(16,4)) AS [AvgWait_Sec],
CAST(MAX(W1.WaitS) AS DECIMAL(16,2)) AS [Wait_Sec],
MAX(W1.WaitCount) AS [Wait Count],
CAST(N'https://www.sqlskills.com/help/waits/'+W1.wait_type AS XML) AS [Help URL]
FROM Waits AS W1
INNER JOIN Waits AS W2 ON W2.RowNum <= W1.RowNum
GROUP BY W1.RowNum, W1.wait_type
HAVING SUM(W2.Percentage) - MAX(W1.Percentage) < 95
OPTION (RECOMPILE);
-- Clear wait stats for a fresh baseline measurement window:
-- DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
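Because sys.dm_os_wait_stats accumulates since the last restart or clear, old waits can drown out what is happening right now. An alternative to clearing server-wide stats is differential sampling: snapshot, wait a fixed window, snapshot again, and diff. A sketch — the 60-second window is arbitrary:

```sql
-- Snapshot current cumulative waits into a temp table
IF OBJECT_ID('tempdb..#w1') IS NOT NULL DROP TABLE #w1;
SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #w1
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:01:00';   -- sampling window; adjust to taste

-- Only waits that actually occurred during the window survive the diff
SELECT w2.wait_type,
       w2.wait_time_ms        - w1.wait_time_ms        AS [Wait ms in window],
       w2.waiting_tasks_count - w1.waiting_tasks_count AS [Waits in window]
FROM sys.dm_os_wait_stats AS w2
JOIN #w1 AS w1 ON w1.wait_type = w2.wait_type
WHERE w2.wait_time_ms > w1.wait_time_ms
ORDER BY [Wait ms in window] DESC;
```

This answers "what is the server waiting on right now" without destroying the long-term baseline.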
6. Finding Your Worst Queries — DMVs and Query Store (Beginner)
The most important rule in query tuning: do not tune randomly. Find the queries that cost the most total resources on the instance and start there. A query that takes 5 seconds but only runs once a day costs far less than a query that takes 100 ms but runs 100,000 times a day. Total resource consumption is the metric that matters — not individual execution time.
Finding Top Queries via Plan Cache DMVs
-- Top 25 queries by total logical reads (most impactful for I/O and memory)
SELECT TOP(25)
DB_NAME(t.dbid) AS [Database],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text],
qs.total_logical_reads AS [Total Logical Reads],
qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads],
qs.execution_count AS [Executions],
qs.total_worker_time / qs.execution_count AS [Avg CPU (us)],
qs.total_elapsed_time / qs.execution_count AS [Avg Elapsed (us)],
CASE WHEN CONVERT(nvarchar(max),qp.query_plan) COLLATE Latin1_General_BIN2
LIKE N'%<MissingIndexes>%' THEN 'YES' ELSE 'no' END AS [Missing Index],
qs.creation_time AS [Plan Cached]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
ORDER BY qs.total_logical_reads DESC
OPTION (RECOMPILE);
-- Top 25 by total CPU time
SELECT TOP(25)
DB_NAME(t.dbid) AS [Database],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text],
qs.total_worker_time AS [Total CPU (us)],
qs.total_worker_time / qs.execution_count AS [Avg CPU (us)],
qs.execution_count AS [Executions],
qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads],
CASE WHEN CONVERT(nvarchar(max),qp.query_plan) COLLATE Latin1_General_BIN2
LIKE N'%<MissingIndexes>%' THEN 'YES' ELSE 'no' END AS [Missing Index],
qs.creation_time AS [Plan Cached]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
ORDER BY qs.total_worker_time DESC
OPTION (RECOMPILE);
Finding Top Queries via Query Store (Recommended for SQL Server 2016+)
Query Store is more powerful than the plan cache for finding expensive queries because it retains history even after a plan cache flush or server restart. It is on by default for new databases in SQL Server 2022; on SQL Server 2016 through 2019 you must enable it per database.
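Enabling Query Store is a per-database setting. A sketch — the database name is a placeholder and the values are reasonable starting points, not universal recommendations:

```sql
-- Enable Query Store (SQL Server 2016+)
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;

ALTER DATABASE [YourDatabase] SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,       -- cap on-disk footprint
    QUERY_CAPTURE_MODE = AUTO,        -- skip trivial, infrequent ad hoc queries
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);
```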
-- Top 25 queries by average logical reads via Query Store
-- Run in the context of the database you want to analyze
SELECT TOP(25)
qsqt.query_sql_text,
qsp.plan_id,
qsrs.avg_logical_io_reads AS [Avg Logical Reads],
qsrs.avg_cpu_time AS [Avg CPU (us)],
qsrs.avg_duration AS [Avg Duration (us)],
qsrs.count_executions AS [Execution Count],
qsp.last_execution_time
FROM sys.query_store_runtime_stats qsrs
JOIN sys.query_store_plan qsp ON qsp.plan_id = qsrs.plan_id
JOIN sys.query_store_query qsq ON qsq.query_id = qsp.query_id
JOIN sys.query_store_query_text qsqt ON qsqt.query_text_id = qsq.query_text_id
WHERE qsp.last_execution_time > DATEADD(day, -7, GETDATE())
ORDER BY qsrs.avg_logical_io_reads DESC;
-- Find queries with plan regressions (same query, multiple plans, performance gap)
SELECT
qsq.query_id,
qsqt.query_sql_text,
COUNT(DISTINCT qsp.plan_id) AS [Plan Count],
MIN(qsrs.avg_duration) AS [Best Avg Duration (us)],
MAX(qsrs.avg_duration) AS [Worst Avg Duration (us)],
MAX(qsrs.avg_duration) - MIN(qsrs.avg_duration) AS [Duration Variance (us)]
FROM sys.query_store_query qsq
JOIN sys.query_store_query_text qsqt ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan qsp ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats qsrs ON qsrs.plan_id = qsp.plan_id
WHERE qsp.last_execution_time > DATEADD(day, -7, GETDATE())
GROUP BY qsq.query_id, qsqt.query_sql_text
HAVING COUNT(DISTINCT qsp.plan_id) > 1
AND MAX(qsrs.avg_duration) > MIN(qsrs.avg_duration) * 2 -- worst plan 2x slower than best
ORDER BY [Duration Variance (us)] DESC;
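Once the regression query identifies a query with a known-good plan and a slow plan, Query Store lets you pin the good one. A sketch — the query_id and plan_id values are placeholders you would take from your own results:

```sql
-- Force the fast plan for a regressed query (IDs from the query above)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Undo later if the forced plan stops being appropriate:
-- EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;

-- Review everything currently forced on this database
SELECT query_id, plan_id, is_forced_plan
FROM sys.query_store_plan
WHERE is_forced_plan = 1;
```

Plan forcing is a stopgap, not a fix — it buys time while you address the underlying cause (usually statistics or parameter sniffing).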
7. Reading Execution Plans — A Practical Field Guide (Intermediate)
The execution plan is SQL Server's record of exactly how it processed your query. It shows every operation — which indexes it used, how it joined tables, where it sorted data, and how many rows it estimated vs actually processed. Learning to read execution plans is the single most valuable skill in SQL Server performance tuning.
How to Capture an Execution Plan
-- Option 1: Include Actual Execution Plan in SSMS (Ctrl+M before running)
-- This shows the actual row counts vs estimates -- always use this for tuning
-- Option 2: Capture plan without running the query (estimated plan)
-- In SSMS: Query → Display Estimated Execution Plan (Ctrl+L)
-- Option 3: Capture from plan cache for a running or recent query
SELECT TOP(1)
qs.total_logical_reads,
qs.execution_count,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qt.[text] LIKE '%YourTableName%' -- replace with something unique in your query
ORDER BY qs.total_logical_reads DESC;
-- Click the XML in the query_plan column to open the graphical plan in SSMS
What to Look at First
- Read the plan right to left, bottom to top. The rightmost, bottommost operations are where data originates. The leftmost operation is the final result delivery. Width of arrows between operators shows relative row volume — thick arrows mean many rows.
- Operator cost percentages. The percentage shown under each operator is its share of the estimated total cost. Start investigating at the most expensive operator.
- Estimated vs Actual rows. A large discrepancy — the optimizer estimated 1 row but got 500,000 — is the most reliable indicator of a statistics problem.
- Yellow warnings. A yellow exclamation on any operator means SQL Server detected a problem — most commonly a missing index, an implicit conversion, or a memory spill.
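The estimated-vs-actual comparison can also be captured entirely in T-SQL, without the SSMS graphical plan. A sketch, using a hypothetical query:

```sql
-- Returns the actual plan (with runtime row counts) as an extra XML
-- result set alongside the query results
SET STATISTICS XML ON;

SELECT COUNT(*) FROM dbo.SalesOrder WHERE StatusId = 1;  -- your query here

SET STATISTICS XML OFF;
-- In the plan XML, compare EstimateRows on each RelOp element with
-- ActualRows under RunTimeInformation/RunTimeCountersPerThread --
-- a large gap is the statistics-problem signal described above
```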
The Most Important Operators to Recognize
| Operator | What It Does | Good or Bad? |
|---|---|---|
| Index Seek | Navigates directly to matching rows using B-tree index | Usually good — targeted access |
| Index Scan | Reads entire index from start to end | Often bad on large tables — may indicate missing index or non-SARGable predicate |
| Table Scan / Clustered Index Scan | Reads every row in the table | Almost always bad on large tables — no usable index |
| Key Lookup (RID Lookup) | Fetches columns not in the nonclustered index from the clustered index | Bad at scale — each row requires a second B-tree traversal; consider covering index |
| Nested Loops | For each row in outer input, searches inner input | Good for small outer inputs with indexed inner; bad for large outer inputs |
| Hash Match (Join/Aggregate) | Builds an in-memory hash table from one input, probes with the other | Fine for large datasets; causes memory grant; can spill to TempDB |
| Sort | Sorts rows in memory | Needs a memory grant; can spill to TempDB; consider index to eliminate sort |
| Parallelism (Exchange) | Distributes or gathers rows across parallel threads | Good if the parallelism is actually helping; bad if overhead exceeds benefit |
-- Find queries with Key Lookups in cached plans (candidates for covering index)
SELECT TOP(20)
DB_NAME(t.dbid) AS [Database],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text],
qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads],
qs.execution_count AS [Executions]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CONVERT(nvarchar(max), qp.query_plan) LIKE N'%Lookup="1"%' -- key/RID lookups are flagged Lookup="1" in plan XML
ORDER BY qs.total_logical_reads DESC
OPTION (RECOMPILE);
8. SET STATISTICS IO and TIME — Your First Measurement Tool (Beginner)
Before you add an index, rewrite a query, or look at a plan, run the query with STATISTICS IO and TIME enabled. These two settings give you the baseline numbers that tell you whether your change actually helped — and by how much.
-- Enable statistics output for your session
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- Run your query here
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.SalesOrder o
JOIN dbo.Customer c ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '2025-01-01'
AND o.StatusId = 1;
-- Turn off when done
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
The output in the Messages tab will show:
- Logical reads — pages read from buffer pool. This is your primary tuning metric.
- Physical reads — pages fetched from disk. High on cold runs; should be near zero on warm runs.
- Read-ahead reads — pages SQL Server pre-fetched speculatively (normal background activity).
- CPU time — actual CPU consumed in milliseconds.
- Elapsed time — wall-clock time including all waits.
Always compare logical reads, not elapsed time, when evaluating tuning changes. Elapsed time varies with server load, cache state, and blocking. Logical reads are deterministic — the same query against the same data with the same plan returns the same logical read count every time. A tuning change that reduces logical reads from 50,000 to 500 is a genuine improvement regardless of what elapsed time shows.
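A typical before/after measurement looks like this. A sketch using the hypothetical SalesOrder query from above — the index name and column choices are illustrative, not prescriptive:

```sql
SET STATISTICS IO ON;

-- BEFORE: run the query and note 'logical reads' in the Messages tab
SELECT o.OrderId, o.OrderDate
FROM dbo.SalesOrder AS o
WHERE o.OrderDate >= '2025-01-01'
  AND o.StatusId = 1;

-- Candidate index: equality column first, range column second
-- (INCLUDE is redundant if OrderId is already the clustering key)
CREATE NONCLUSTERED INDEX IX_SalesOrder_StatusId_OrderDate
ON dbo.SalesOrder (StatusId, OrderDate)
INCLUDE (OrderId);

-- AFTER: same query, compare the new 'logical reads' figure
SELECT o.OrderId, o.OrderDate
FROM dbo.SalesOrder AS o
WHERE o.OrderDate >= '2025-01-01'
  AND o.StatusId = 1;

SET STATISTICS IO OFF;
```

A drop in logical reads here is a genuine improvement even if elapsed time is noisy between runs.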
9. Non-SARGable Queries — The Silent Index Killer (Intermediate)
SARGable stands for Search ARGument Able. A query predicate is SARGable when SQL Server can use an index to seek directly to matching rows. A non-SARGable predicate forces SQL Server to examine every row in the table or index — a scan — even when a perfect index exists. This is one of the most common performance problems in SQL Server, and one of the easiest to introduce accidentally.
The Core Rule
A predicate is non-SARGable when a function or calculation is applied to the indexed column in the WHERE clause. Once you wrap a column in a function, SQL Server cannot look up the result in the index — it has to evaluate the function for every row first.
Non-SARGable Pattern 1 — Function on a Column
❌ Non-SARGable — Forces Full Scan
-- YEAR() applied to the column
-- SQL Server cannot seek on YEAR(OrderDate)
SELECT * FROM dbo.SalesOrder
WHERE YEAR(OrderDate) = 2025;
-- LEFT() applied to the column
SELECT * FROM dbo.Customer
WHERE LEFT(LastName, 1) = 'S';
-- DATEDIFF applied to the column
SELECT * FROM dbo.Employee
WHERE DATEDIFF(DAY, HireDate, GETDATE()) > 365;
✅ SARGable — Enables Index Seek
-- Range predicate on the column directly
SELECT * FROM dbo.SalesOrder
WHERE OrderDate >= '2025-01-01'
AND OrderDate < '2026-01-01';
-- LIKE with leading literal (no leading wildcard)
SELECT * FROM dbo.Customer
WHERE LastName LIKE 'S%';
-- Arithmetic on the value, not the column
SELECT * FROM dbo.Employee
WHERE HireDate <= DATEADD(DAY, -365, GETDATE());
Non-SARGable Pattern 2 — Leading Wildcard in LIKE
❌ Leading Wildcard — Cannot Seek
-- % at the start means SQL Server must scan every row
-- to check whether the string ends with 'smith'
SELECT * FROM dbo.Customer
WHERE LastName LIKE '%smith';
SELECT * FROM dbo.Product
WHERE ProductCode LIKE '%ABC%';
✅ Trailing Wildcard — Can Seek
-- No leading % means SQL Server can seek to the first
-- matching row in the index and read forward
SELECT * FROM dbo.Customer
WHERE LastName LIKE 'Sm%';
-- If you need contains-style search, consider Full-Text
-- or an indexed computed column (e.g., REVERSE(LastName) for suffix-only matches)
Non-SARGable Pattern 3 — Negation and NOT IN
❌ Negation — Forces Scan
-- SQL Server cannot seek to "rows where X is NOT this value"
-- It must examine all rows
SELECT * FROM dbo.SalesOrder
WHERE StatusId != 5;
SELECT * FROM dbo.Customer
WHERE CustomerId NOT IN (
SELECT CustomerId FROM dbo.BlacklistCustomer
);
✅ Positive Predicate — Enables Seek
-- If you know the domain of StatusId, rewrite positively
SELECT * FROM dbo.SalesOrder
WHERE StatusId IN (1, 2, 3, 4);
-- NOT EXISTS is safer and often faster than NOT IN
-- (NOT IN returns no rows if subquery has NULLs)
SELECT * FROM dbo.Customer c
WHERE NOT EXISTS (
SELECT 1 FROM dbo.BlacklistCustomer b
WHERE b.CustomerId = c.CustomerId
);
Non-SARGable Pattern 4 — OR on Different Columns
❌ OR Across Columns — May Cause Scan
-- Optimizer often cannot use a single index seek
-- for OR predicates on different columns
SELECT * FROM dbo.Customer
WHERE FirstName = 'John'
OR LastName = 'Smith';
✅ UNION ALL — Two Seeks
-- Each branch can independently use its own index seek
SELECT * FROM dbo.Customer
WHERE FirstName = 'John'
UNION ALL
SELECT * FROM dbo.Customer
WHERE LastName = 'Smith'
AND FirstName != 'John'; -- avoid duplicates
How to Find Non-SARGable Queries in Production
-- Find cached plans with table or clustered index scans on large tables
-- These are your non-SARGable candidates
SELECT TOP(20)
DB_NAME(t.dbid) AS [Database],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text],
qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads],
qs.execution_count AS [Executions],
qs.total_logical_reads AS [Total Logical Reads]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CONVERT(nvarchar(max), qp.query_plan) LIKE '%PhysicalOp="Index Scan"%'
OR CONVERT(nvarchar(max), qp.query_plan) LIKE '%PhysicalOp="Table Scan"%'
ORDER BY qs.total_logical_reads DESC
OPTION (RECOMPILE);
10. Implicit Data Type Conversion — The Hidden Scan (Intermediate)
Implicit conversion is non-SARGability in disguise. When SQL Server has to convert one data type to another to compare a column value against a parameter or literal, and the conversion lands on the column side, the predicate becomes non-SARGable. The index cannot be used for a seek, so the query scans instead.
As Erik Darling explains, implicit conversion and non-SARGable predicates produce identical symptoms — forced scans, inflated logical reads, excess CPU — because under the covers they are the same thing. The conversion function wrapping the column is just an automatically generated wrapper that the optimizer inserted.
The Most Common Implicit Conversion Scenario
❌ Parameter Type Mismatch — Causes Implicit Conversion
-- Column is VARCHAR, parameter is NVARCHAR (the default for .NET strings)
-- NVARCHAR has higher data type precedence, so SQL Server must convert
-- the COLUMN for every row -- full index scan, yellow warning in the plan
DECLARE @Email NVARCHAR(100) = N'user@example.com';
SELECT * FROM dbo.Customer
WHERE Email = @Email;
-- Email column is VARCHAR -- the mismatch puts CONVERT_IMPLICIT on every row
✅ Matching Types — Clean Seek
-- Match the data type of the parameter to the column definition
-- Check with: SELECT DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
-- WHERE TABLE_NAME='Customer' AND COLUMN_NAME='Email'
DECLARE @Email VARCHAR(100) = 'user@example.com';
SELECT * FROM dbo.Customer
WHERE Email = @Email;
-- If a column IS nvarchar, use the N'' prefix so string literals match it
How to Detect Implicit Conversions Across the Instance
-- Find queries with implicit conversions in their cached execution plans
-- The CONVERT_IMPLICIT warning in plan XML is the signal
SELECT TOP(25)
DB_NAME(t.dbid) AS [Database],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text],
qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads],
qs.execution_count AS [Executions]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CONVERT(nvarchar(max), qp.query_plan) LIKE '%CONVERT_IMPLICIT%'
ORDER BY qs.total_logical_reads DESC
OPTION (RECOMPILE);
-- Alternatively, use the plan cache directly
SELECT TOP(25)
cp.usecounts,
cp.size_in_bytes / 1024 AS [Plan Size KB],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CONVERT(nvarchar(max), qp.query_plan) LIKE '%CONVERT_IMPLICIT%'
ORDER BY cp.usecounts DESC;
ORMs are the biggest source of implicit conversions. Entity Framework, Hibernate, and other ORMs often pass parameters without explicit type information, letting the driver infer the type — .NET strings, for example, default to NVARCHAR, which forces a conversion onto every VARCHAR column they are compared against. The same happens whenever the column's type has lower precedence than the parameter's, such as a VARCHAR column compared against an INT parameter. The fix is to use strongly typed parameters in your application code, or configure the ORM to match column data types explicitly.
11. What Not to Do — Common Anti-Patterns (Intermediate)
Some query patterns are so consistently problematic that they deserve their own section. These are the things that experienced DBAs spot in code reviews immediately and ask to have changed before anything else.
SELECT * — Never in Production Code
❌ Retrieves Everything Including What You Do Not Need
-- Returns all columns across both tables
-- Prevents covering index usage
-- Returns data the application discards immediately
-- Breaks when table schema changes
SELECT * FROM dbo.Customer c
JOIN dbo.Address a ON a.CustomerId = c.CustomerId;
✅ Select Only What You Need
SELECT
c.CustomerId,
c.FirstName,
c.LastName,
a.City,
a.PostalCode
FROM dbo.Customer c
JOIN dbo.Address a ON a.CustomerId = c.CustomerId;
Scalar UDFs in WHERE Clauses — Row-by-Row Execution
Scalar user-defined functions in SQL Server (before SQL Server 2019 Scalar UDF Inlining) execute once per row. A scalar UDF in a WHERE clause on a million-row table executes a million times — each invocation is a separate function call with its own context switch overhead. The result is orders of magnitude slower than equivalent inline code.
❌ Scalar UDF in WHERE — Executes Per Row
-- dbo.fn_GetAge executes once for every row in Employee
-- Even if the table has 10M rows
SELECT EmployeeId, FirstName, LastName
FROM dbo.Employee
WHERE dbo.fn_GetAge(BirthDate) >= 30;
✅ Inline Calculation — Single Pass
-- Move the arithmetic to the right side of the predicate
-- The column (BirthDate) is now directly seekable
SELECT EmployeeId, FirstName, LastName
FROM dbo.Employee
WHERE BirthDate <= DATEADD(YEAR, -30, GETDATE());
Cursors — Almost Always Wrong
Cursors process rows one at a time in a loop. SQL Server is a set-based engine designed to process thousands of rows in a single operation. A cursor that processes 100,000 rows with a 1 ms operation per row takes at least 100 seconds. The same logic expressed as a single set-based UPDATE or INSERT typically takes under a second.
❌ Cursor — Row-by-Row Processing
DECLARE @OrderId INT;
DECLARE cur CURSOR FOR
SELECT OrderId FROM dbo.SalesOrder WHERE StatusId = 1;
OPEN cur;
FETCH NEXT FROM cur INTO @OrderId;
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE dbo.SalesOrder
SET StatusId = 2, ProcessedDate = GETDATE()
WHERE OrderId = @OrderId;
FETCH NEXT FROM cur INTO @OrderId;
END
CLOSE cur; DEALLOCATE cur;
✅ Set-Based — Single Operation
-- Same result, executed as one operation
-- SQL Server can optimize the entire set at once
UPDATE dbo.SalesOrder
SET StatusId = 2,
ProcessedDate = GETDATE()
WHERE StatusId = 1;
NOLOCK — Not a Performance Optimization
WITH (NOLOCK) is one of the most misused hints in SQL Server. It is commonly added to queries to "speed them up" when they are slow due to blocking — but it does not fix blocking. It trades blocking for incorrect data. A query running under NOLOCK can read uncommitted data (dirty reads), count the same row twice, or miss rows entirely when pages split during a scan. The data it returns may not be consistent — a reporting query that sums financials under NOLOCK can return incorrect totals.
❌ NOLOCK — Dirty Reads and Incorrect Data
-- "Solution" to blocking that returns wrong answers
-- Never acceptable for financial or transactional data
SELECT SUM(Amount)
FROM dbo.Invoice WITH (NOLOCK)
WHERE InvoiceDate >= '2025-01-01';
✅ RCSI — Correct Data, No Blocking
-- Enable Read Committed Snapshot Isolation on the database
-- (see the Setup and Best Practices guide)
-- Then read at READ COMMITTED (the default) -- no hints needed
-- Readers and writers do not block each other
-- Data is always consistent and committed
SELECT SUM(Amount)
FROM dbo.Invoice
WHERE InvoiceDate >= '2025-01-01';
Large IN Lists and OR Chains
❌ Massive IN List — Optimizer Struggles
-- 500+ values in an IN list causes optimizer overhead
-- Can exceed plan cache parameterization thresholds
SELECT * FROM dbo.Product
WHERE ProductId IN (1,2,3,4, /* ...500 values... */ ,500);
✅ Table-Valued Parameter or Temp Table
-- Load IDs into a temp table and JOIN
CREATE TABLE #ProductFilter (ProductId INT PRIMARY KEY);
INSERT INTO #ProductFilter VALUES (1),(2),(3); -- etc.
SELECT p.*
FROM dbo.Product p
JOIN #ProductFilter f ON f.ProductId = p.ProductId;
DROP TABLE #ProductFilter;
12 Writing Queries That SQL Server Can Optimize Intermediate
Beyond avoiding anti-patterns, there are positive techniques that consistently help the optimizer make better choices.
Filter Early — Push Predicates as Far Down as Possible
SQL Server can often push filter predicates into joins and subqueries, but it is not guaranteed for complex queries. Writing filters as close to the base table access as possible makes the optimizer's job easier and makes your intent explicit.
-- Let the optimizer filter before aggregating
-- WHERE eliminates rows before GROUP BY processes them
-- HAVING filters after aggregation -- more work
SELECT CustomerId, SUM(Amount) AS TotalAmount
FROM dbo.SalesOrder
WHERE OrderDate >= '2025-01-01' -- filter rows first
AND StatusId = 1
GROUP BY CustomerId
HAVING SUM(Amount) > 1000; -- HAVING only for post-aggregate conditions
Use EXISTS Instead of COUNT for Existence Checks
❌ COUNT — Processes All Matching Rows
-- Counts every qualifying row even though you only need to know if one exists
IF (SELECT COUNT(*) FROM dbo.[Order] WHERE CustomerId = @CustomerId) > 0
PRINT 'Has orders';
✅ EXISTS — Stops at First Match
-- Stops as soon as one qualifying row is found
-- Does not need to count the rest
IF EXISTS (SELECT 1 FROM dbo.[Order] WHERE CustomerId = @CustomerId)
PRINT 'Has orders';
CTEs vs Temp Tables vs Table Variables
Understanding when to use each of these is important for performance at scale.
- CTEs are syntactic sugar — they are expanded inline into the query. They do not materialize data and do not have statistics. They are excellent for readability but do not improve performance by themselves. Do not use a CTE and expect it to "cache" results.
- Temp Tables (#table) are real tables in TempDB with their own statistics. For intermediate result sets above a few thousand rows that are referenced multiple times, a temp table almost always outperforms a CTE because the optimizer has real statistics to work with.
- Table Variables (@table) have no statistics, and before SQL Server 2019 the optimizer always estimated 1 row regardless of actual row count (2019's Table Variable Deferred Compilation uses the real count instead). For small sets (under a few hundred rows) they are fine. For large sets they lead to terrible cardinality estimates and bad plans.
-- When to use a temp table instead of a CTE:
-- Large intermediate result set referenced more than once
-- Complex query where the optimizer is making bad estimates
-- Temp table with statistics -- optimizer sees real row counts
SELECT CustomerId, SUM(Amount) AS TotalAmount
INTO #CustomerTotals
FROM dbo.SalesOrder
WHERE OrderDate >= '2025-01-01'
GROUP BY CustomerId;
-- Create an index if you will filter or join on specific columns
CREATE INDEX IX_CustomerTotals_CustomerId
ON #CustomerTotals (CustomerId);
-- Now the optimizer has accurate row count and index statistics
SELECT c.CustomerName, ct.TotalAmount
FROM dbo.Customer c
JOIN #CustomerTotals ct ON ct.CustomerId = c.CustomerId
WHERE ct.TotalAmount > 5000;
DROP TABLE #CustomerTotals;
13 Index Fundamentals — Seeks, Scans, and Lookups Beginner
Indexes are the single most impactful tool for query performance. Every performance tuning engagement ultimately comes back to: does the right index exist, is it being used, and is it well-maintained?
Clustered Index — The Table Itself
A clustered index defines the order in which the table's rows are stored. Every table should have one, ideally on a narrow, unique, ever-increasing, rarely updated column — typically the primary key. A table without a clustered index is a heap: rows are stored in no particular order, every query that does not use a nonclustered index must scan the whole thing, and update operations generate forwarded records that further degrade performance.
Nonclustered Index — A Separate Lookup Structure
A nonclustered index is a separate B-tree structure that stores a subset of columns in a sorted order, with a pointer back to the full row in the clustered index. Multiple nonclustered indexes can exist on a single table — but each one adds write overhead for every INSERT, UPDATE, and DELETE. Index design is always a tradeoff between read performance and write cost.
The Seek/Scan/Lookup Spectrum
-- Visualize the difference with SET STATISTICS IO
-- Setup example table
CREATE TABLE dbo.SalesOrder (
OrderId INT IDENTITY PRIMARY KEY, -- clustered index
CustomerId INT NOT NULL,
OrderDate DATE NOT NULL,
StatusId TINYINT NOT NULL,
Amount DECIMAL(10,2) NOT NULL
);
CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerId
ON dbo.SalesOrder (CustomerId);
SET STATISTICS IO ON;
-- Scenario 1: Index SEEK -- goes directly to matching rows
-- Low logical reads regardless of table size
SELECT OrderId, CustomerId, OrderDate
FROM dbo.SalesOrder
WHERE CustomerId = 42;
-- Uses IX_SalesOrder_CustomerId -- seek to CustomerId = 42
-- Scenario 2: Index SCAN -- reads entire index
-- Logical reads proportional to table size
SELECT OrderId, CustomerId, OrderDate
FROM dbo.SalesOrder
WHERE CustomerId > 0; -- too many rows -- optimizer may choose scan
-- Scenario 3: Key LOOKUP -- seek + second B-tree traversal per row
-- Nonclustered index found the rows but doesn't have OrderDate
SELECT CustomerId, OrderDate -- OrderDate not in IX_SalesOrder_CustomerId
FROM dbo.SalesOrder
WHERE CustomerId = 42;
-- Each matching row requires a lookup into the clustered index for OrderDate
-- Fix: add OrderDate as an INCLUDE column (see Section 14)
SET STATISTICS IO OFF;
14 Covering Indexes and Included Columns Intermediate
A covering index is a nonclustered index that contains all the columns a query needs — so SQL Server can answer the query entirely from the index without having to go back to the clustered index for additional columns. Eliminating Key Lookups with covering indexes is one of the highest-impact tuning changes available.
Key Columns vs INCLUDE Columns
- Key columns — columns the index is sorted by. Use for filtering (WHERE) and joining (ON).
- INCLUDE columns — columns stored at the leaf level only, not sorted. Use for columns the query retrieves (SELECT) but does not filter on. They do not increase index depth or key size.
❌ Without Covering Index — Key Lookup Per Row
-- This query causes a Key Lookup because CustomerName
-- is not in IX_SalesOrder_CustomerId
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.SalesOrder o
JOIN dbo.Customer c ON c.CustomerId = o.CustomerId
WHERE o.CustomerId = 42;
✅ Covering Index — No Lookup Needed
-- Add OrderDate and Amount as INCLUDE columns
-- SQL Server can satisfy the full query from the index alone
CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerId_Cover
ON dbo.SalesOrder (CustomerId)
INCLUDE (OrderDate, Amount);
-- Key column: CustomerId (for seeking)
-- Include: OrderDate, Amount (retrieved but not filtered)
-- Find queries with Key Lookups that are candidates for a covering index
-- Look at the Avg Logical Reads -- that is the cost of the lookups
SELECT TOP(20)
DB_NAME(t.dbid) AS [Database],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text],
qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads],
qs.execution_count AS [Executions],
qs.total_logical_reads AS [Total Logical Reads]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CONVERT(nvarchar(max), qp.query_plan) LIKE '%Lookup="1"%' -- Key Lookups appear as Lookup="1" in plan XML
ORDER BY qs.total_logical_reads DESC;
15 Index Design Strategy — The Right Index for the Right Query Intermediate
There is a reliable process for designing an index for a specific query. It works for most OLTP scenarios and gives you a logical starting point that you can then validate with execution plan analysis.
The Index Design Process
- Equality columns first. Any column in the WHERE clause with an equality predicate (= value) goes in the key columns first. These narrow the seek range to an exact value.
- Inequality columns next. Columns with range predicates (>, <, BETWEEN, LIKE 'x%') go after equality columns in the key. SQL Server can seek to the start of the range but must scan to the end — so only one inequality column in the key is usually useful.
- SELECT columns in INCLUDE. Any columns in the SELECT list that are not in the key go in INCLUDE. This covers the query without adding unnecessary sort keys.
- Consider JOIN columns as equality columns. JOIN predicates (ON a.X = b.X) are effectively equality predicates from the index design perspective.
-- Example: design an index for this query
SELECT o.OrderId, o.OrderDate, o.Amount
FROM dbo.SalesOrder o
WHERE o.CustomerId = @CustomerId -- equality
AND o.StatusId = @StatusId -- equality
AND o.OrderDate >= @StartDate; -- inequality (range)
-- Index design:
-- Key columns: CustomerId, StatusId (equalities first), then OrderDate (inequality)
-- Include columns: Amount (in SELECT, not in WHERE)
CREATE NONCLUSTERED INDEX IX_SalesOrder_Cust_Status_Date
ON dbo.SalesOrder (CustomerId, StatusId, OrderDate)
INCLUDE (Amount);
-- Verify with STATISTICS IO before and after
-- Expect: seek instead of scan, dramatic logical read reduction
Do not blindly create every index SQL Server suggests. The missing index DMVs and execution plan warnings show what would help individual queries — they do not consider the write cost of the indexes, whether a slightly modified existing index already covers the case, or whether the query runs rarely enough that the index maintenance overhead is not worth it. Always review suggestions in the context of the full workload.
16 Indexes That Hurt — Redundant, Unused, and Write-Heavy Intermediate
Over-indexing is as damaging as under-indexing. Every nonclustered index on a table must be maintained on every INSERT, UPDATE, and DELETE. A table with 20 indexes on a write-heavy OLTP workload can spend more time maintaining indexes than it does processing business logic. Finding and removing unnecessary indexes is often the fastest path to improving write performance.
-- Find unused nonclustered indexes (no reads since last SQL Server restart)
-- CAUTION: index usage stats reset on restart -- collect over a representative period (weeks, not hours)
SELECT
SCHEMA_NAME(o.schema_id) AS [Schema],
OBJECT_NAME(i.object_id) AS [Table],
i.name AS [Index Name],
i.index_id,
s.user_seeks + s.user_scans + s.user_lookups AS [Total Reads],
s.user_updates AS [Total Writes],
i.fill_factor
FROM sys.indexes i
JOIN sys.objects o ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats s
ON s.object_id = i.object_id
AND s.index_id = i.index_id
AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
AND i.index_id > 1 -- skip clustered index
AND i.is_primary_key = 0
AND i.is_unique_constraint = 0
AND (s.user_seeks IS NULL -- never used
OR s.user_seeks + s.user_scans + s.user_lookups = 0)
ORDER BY s.user_updates DESC; -- most write-costly unused at top
-- Find write-heavy indexes where writes significantly exceed reads
-- These are candidates for removal if the read benefit does not justify the write cost
SELECT
SCHEMA_NAME(o.schema_id) AS [Schema],
OBJECT_NAME(s.object_id) AS [Table],
i.name AS [Index Name],
s.user_updates AS [Total Writes],
s.user_seeks + s.user_scans + s.user_lookups AS [Total Reads],
s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) AS [Write Excess]
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id
JOIN sys.objects o ON o.object_id = i.object_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
AND s.database_id = DB_ID()
AND i.index_id > 1
AND s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups)
ORDER BY [Write Excess] DESC;
17 Parameter Sniffing — What It Is and How to Fix It Intermediate
Parameter sniffing is when SQL Server compiles a stored procedure or parameterized query using the specific parameter values from the first execution and caches that plan for all future executions. This is usually good — plan compilation is expensive and reuse speeds things up. It becomes a problem when the first execution uses parameter values that produce a very different plan from what most other executions need.
The classic scenario: a stored procedure is first executed with a parameter value that returns 5 rows (the optimizer chooses a Nested Loops plan with index seeks). Later, the same procedure is called with a parameter value that returns 5 million rows. The cached seek plan is catastrophically wrong for the large result set — it performs 5 million Key Lookups instead of a single scan feeding a Hash Join.
Detecting Parameter Sniffing
-- A query that is "fast in SSMS but slow in the application" is the classic symptom
-- SSMS typically uses different SET options (ARITHABORT ON) than the application driver,
-- so it compiles its own plan with your test values while the application keeps hitting the cached sniffed plan
-- To see the sniffed parameter values, open the cached plan XML (ParameterList element)
-- or use sp_BlitzCache from Brent Ozar's First Responder Kit
SELECT
qs.execution_count,
qs.total_elapsed_time / qs.execution_count AS [Avg Elapsed (us)],
qs.total_logical_reads / qs.execution_count AS [Avg Logical Reads],
qs.creation_time AS [Plan Compiled At],
SUBSTRING(t.text, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(t.text)
ELSE qs.statement_end_offset END
- qs.statement_start_offset)/2)+1) AS [Statement Text]
FROM sys.dm_exec_procedure_stats ps
CROSS APPLY sys.dm_exec_sql_text(ps.plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) qp
JOIN sys.dm_exec_query_stats qs ON qs.plan_handle = ps.plan_handle
WHERE OBJECT_NAME(ps.object_id) = 'YourStoredProcedureName';
The Fix Toolkit — Ordered by Preference
| Fix | When to Use | Trade-off |
|---|---|---|
| SQL Server 2022 PSP Optimization (automatic) | Compat level 160 + Query Store enabled — engine manages multiple plans automatically | No code change; best starting point on 2022 |
| OPTION (RECOMPILE) on the statement | Query runs infrequently; data distribution varies widely; CPU cost of recompile is acceptable | No plan caching benefit; loses Memory Grant Feedback |
| OPTION (OPTIMIZE FOR (@param = value)) | You know the most common or most important parameter value | May produce suboptimal plans for other values |
| OPTION (OPTIMIZE FOR UNKNOWN) | No dominant value; want a stable "average" plan | Often produces mediocre plans; not great, not terrible |
| Force good plan via Query Store | You have identified the optimal plan and want to lock it in | Must monitor for plan invalidity as data changes |
| IF/ELSE branching in procedure | You know the specific parameter ranges that need different plans | Most flexible; maintenance overhead |
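As a sketch of how the hint-based fixes from the table look in practice — dbo.GetOrdersByCustomer is a hypothetical procedure, and the commented alternatives are mutually exclusive choices, not a stack:

```sql
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Fix: recompile on every execution -- the plan always matches this @CustomerId
    SELECT OrderId, OrderDate, Amount
    FROM dbo.SalesOrder
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Alternative: always optimize for a known representative value
    -- OPTION (OPTIMIZE FOR (@CustomerId = 42));

    -- Alternative: optimize for the statistical average (density vector)
    -- OPTION (OPTIMIZE FOR UNKNOWN);
END;
```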
18 Statistics — Why They Go Wrong and How to Fix Them Intermediate
The query optimizer makes every decision — which index to use, which join strategy to choose, whether to go parallel — based on statistics. Statistics are histograms that describe the distribution of data in columns and indexes. When statistics are out of date, the optimizer's estimates are wrong and it makes bad choices. A query that suddenly gets slow after a large data load, a batch import, or a table truncate-and-reload is almost always a statistics problem.
-- Check statistics age and modification count across a database
SELECT
SCHEMA_NAME(o.schema_id) + '.' + o.[name] AS [Object],
s.[name] AS [Statistic],
STATS_DATE(s.object_id, s.stats_id) AS [Last Updated],
sp.modification_counter AS [Row Changes Since Last Update],
sp.[rows] AS [Total Rows],
sp.rows_sampled,
CAST(sp.rows_sampled * 100.0 / NULLIF(sp.[rows],0) AS DECIMAL(5,1)) AS [Sample %]
FROM sys.stats s
JOIN sys.objects o ON s.object_id = o.object_id
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE o.type = 'U'
AND sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
-- Update statistics for a single table
UPDATE STATISTICS dbo.SalesOrder;
-- Update all statistics in the database that have row changes (fast, adequate for most cases)
EXEC sp_updatestats;
-- Full scan of every statistic (slower but more accurate -- use sparingly on large databases)
-- Use after a large data load or truncate-and-reload
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';
-- Force the optimizer to recompile a specific procedure after statistics update
EXEC sys.sp_recompile N'dbo.YourStoredProcedure';
Low sample rate = bad cardinality estimates. By default, SQL Server uses an automatic sample percentage that decreases as table size grows. A 500 million row table might get a 1% sample — meaning the histogram is built from 5 million rows and misses distribution patterns visible only at full scan. After a major data load on a large table, always update statistics with WITH FULLSCAN or WITH SAMPLE 30 PERCENT at minimum.
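Targeted updates are usually better than instance-wide sweeps after a load. A sketch using the dbo.SalesOrder table and index from earlier sections:

```sql
-- All statistics on one table, full scan
UPDATE STATISTICS dbo.SalesOrder WITH FULLSCAN;

-- A single index's statistic at a fixed sample rate
UPDATE STATISTICS dbo.SalesOrder IX_SalesOrder_CustomerId WITH SAMPLE 30 PERCENT;

-- Pin the sample rate so automatic updates reuse it (SQL Server 2016 SP1 CU4+)
UPDATE STATISTICS dbo.SalesOrder
WITH SAMPLE 30 PERCENT, PERSIST_SAMPLE_PERCENT = ON;
```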
19 Blocking, Deadlocks, and Lock Management Advanced
Blocking occurs when one session holds a lock that another session needs. The second session waits until the lock is released. At low concurrency this is invisible. At high concurrency it cascades — one blocked session blocks two others, those block four more, and suddenly you have dozens of sessions waiting and applications timing out.
Finding Active Blocking Right Now
-- Who is blocking whom right now
SELECT
r.session_id AS [Blocked SPID],
r.blocking_session_id AS [Blocking SPID],
r.wait_type,
r.wait_time / 1000.0 AS [Wait Sec],
r.status,
SUBSTRING(t.text, (r.statement_start_offset/2)+1,
((CASE r.statement_end_offset WHEN -1 THEN DATALENGTH(t.text)
ELSE r.statement_end_offset END
- r.statement_start_offset)/2)+1) AS [Blocked Statement],
DB_NAME(r.database_id) AS [Database]
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id != 0
ORDER BY r.wait_time DESC;
-- Find the head blocker (the session causing the chain)
SELECT
s.session_id,
s.login_name,
s.host_name,
s.program_name,
s.status,
at.transaction_begin_time,
DATEDIFF(MINUTE, at.transaction_begin_time, GETDATE()) AS [Tran Age (min)],
t2.text AS [Last SQL]
FROM sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
LEFT JOIN sys.dm_tran_session_transactions st ON s.session_id = st.session_id
LEFT JOIN sys.dm_tran_active_transactions at ON st.transaction_id = at.transaction_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t2
WHERE s.session_id IN (
-- Find SPIDs that are blocking others but not blocked themselves
SELECT blocking_session_id FROM sys.dm_exec_requests
WHERE blocking_session_id != 0
)
ORDER BY [Tran Age (min)] DESC;
Common Causes and Fixes
- Long-running transactions — a transaction open for minutes holds locks for minutes. Fix: keep transactions as short as possible, commit early, never wait for user input inside a transaction.
- Missing indexes on join/filter columns — SQL Server takes escalated table locks when it scans large tables, blocking everyone else. Fix: add the missing index so the scan becomes a seek touching fewer pages.
- Read/write blocking — readers blocking writers and vice versa. Fix: enable Read Committed Snapshot Isolation (RCSI) — see the Setup guide.
- Deadlocks — two sessions each holding a lock the other needs, creating a circular dependency. SQL Server detects this and kills one session (the deadlock victim). Fix: ensure all code accesses tables in the same order, reduce transaction scope, add appropriate indexes.
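To make the lock-ordering fix concrete, a minimal sketch (the LastOrderDate column is hypothetical) in which every transaction touches Customer before SalesOrder — any code path that took the locks in the reverse order could deadlock against this one:

```sql
BEGIN TRANSACTION;

    -- Lock 1: always take Customer first...
    UPDATE dbo.Customer
    SET LastOrderDate = GETDATE()   -- hypothetical column
    WHERE CustomerId = @CustomerId;

    -- Lock 2: ...then SalesOrder, in every procedure that touches both tables
    UPDATE dbo.SalesOrder
    SET StatusId = 2
    WHERE OrderId = @OrderId;

COMMIT TRANSACTION;
```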
Idle Sessions Holding Open Transactions — The Invisible Blocker
This is one of the most common blocking scenarios and one of the hardest to spot because the head blocker is not doing anything. The session appears idle — no active request, no CPU, no I/O — but it opened a transaction and never committed or rolled it back. Every other session waiting on a lock that transaction holds just sits there, and the standard blocking query returns nothing useful because the blocker has no active SQL text.
Common causes: a missing COMMIT or ROLLBACK in application code, a user who opened a transaction in SSMS and walked away from their desk, connection pooling returning a connection to the pool with an open transaction, or an application that wraps multiple operations in an explicit transaction and then waits for user input before committing.
The key diagnostic signal: DBCC OPENTRAN shows an open transaction, but sys.dm_exec_requests shows the session as sleeping with no active request. The transaction age from sys.dm_tran_active_transactions tells you how long the locks have been held.
-- Step 1: Confirm open transactions exist on the instance
DBCC OPENTRAN;
-- Step 2: Find idle sessions holding open transactions
-- The key columns: status = 'sleeping' + tran_age_min > 0
-- Any sleeping session with an open transaction that is minutes old is suspect
SELECT
s.session_id,
s.login_name,
s.host_name,
s.program_name,
s.status, -- 'sleeping' = idle but transaction still open
r.blocking_session_id,
at.transaction_begin_time,
DATEDIFF(MINUTE, at.transaction_begin_time, GETDATE()) AS tran_age_min,
t2.text AS last_sql_text
FROM sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
LEFT JOIN sys.dm_tran_session_transactions st ON s.session_id = st.session_id
LEFT JOIN sys.dm_tran_active_transactions at ON st.transaction_id = at.transaction_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) t2
WHERE s.is_user_process = 1
AND at.transaction_begin_time IS NOT NULL -- only sessions with open transactions
ORDER BY tran_age_min DESC;
------
-- A session with status = 'sleeping' and tran_age_min of 5, 10, or 30+ minutes
-- is almost certainly the idle blocker causing the blocking chain
-- The last_sql_text shows the last statement it ran before going idle
-- That tells you which application and which code path left the transaction open
-- Step 3: Confirm it is actually blocking other sessions
SELECT
r.session_id AS [Blocked SPID],
r.blocking_session_id AS [Blocking SPID], -- this should match the idle session above
r.wait_type,
r.wait_time / 1000.0 AS [Wait Sec],
r.status,
t.text AS [Blocked SQL]
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id != 0
ORDER BY r.wait_time DESC;
The fix is always application-side. Killing the idle session with KILL session_id releases the locks immediately — but the problem will recur on the next execution unless the application code is fixed. Every transaction must have a COMMIT or ROLLBACK on every code path including error paths. Never hold a transaction open while waiting for user input, making a network call, or doing any operation that could take unpredictable time. If connection pooling is the cause, check whether the pool is returning connections with uncommitted transactions — most modern pooling libraries have a reset_on_return or equivalent setting that should always be enabled.
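When you do have to intervene before the code fix ships, killing the session is the only immediate remedy. Session_id 87 below is a placeholder for the sleeping SPID identified above:

```sql
-- Rolls back the open transaction and releases its locks immediately
KILL 87;

-- For a large transaction the rollback itself can take time -- check progress
KILL 87 WITH STATUSONLY;
```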
-- Capture deadlock information from system_health Extended Events
-- This works on SQL Server 2012+ with no setup required
SELECT
xdr.value('@timestamp','datetime2') AS [Deadlock Time],
xdr.query('.') AS [Deadlock Graph XML]
FROM (
SELECT CAST(target_data AS XML) AS target_data
FROM sys.dm_xe_session_targets t
JOIN sys.dm_xe_sessions s ON s.address = t.event_session_address
WHERE s.name = 'system_health'
AND t.target_name = 'ring_buffer'
) AS data
CROSS APPLY target_data.nodes('//RingBufferTarget/event[@name="xml_deadlock_report"]') AS xdt(xdr)
ORDER BY [Deadlock Time] DESC;
20 TempDB Spills and Memory Grants Advanced
When SQL Server performs sort operations or hash joins, it requests a memory grant — a reservation of query workspace memory for the operation. If the optimizer underestimates the row count (a statistics problem), the grant is too small. When the data being sorted or hashed does not fit in the granted memory, SQL Server spills the overflow to TempDB. TempDB spills are one of the most common causes of unexplained query slowness — the query looks reasonable and the indexes look fine, but it is writing gigabytes of intermediate data to disk on every execution.
-- Find queries with sort or hash spills via Query Store
SELECT TOP(25)
qsqt.query_sql_text,
qsp.last_execution_time,
qsrs.avg_duration AS [Avg Duration (us)],
qsrs.avg_cpu_time AS [Avg CPU (us)],
qsrs.avg_logical_io_reads AS [Avg Logical Reads],
qsrs.count_executions AS [Executions],
TRY_CONVERT(XML, qsp.query_plan).value(
'(//*:SpillToTempDb/@SpillLevel)[1]','INT') AS [Spill Level] -- *: skips the showplan XML namespace
FROM sys.query_store_plan qsp
JOIN sys.query_store_query qsq ON qsq.query_id = qsp.query_id
JOIN sys.query_store_query_text qsqt ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_runtime_stats qsrs ON qsrs.plan_id = qsp.plan_id
WHERE TRY_CONVERT(XML, qsp.query_plan).exist('//*:SpillToTempDb') = 1
ORDER BY qsrs.avg_cpu_time DESC;
-- Fallback if Query Store is not available: spill counters in the plan cache
-- (spill columns added in SQL Server 2016 SP2 / 2017 CU3)
SELECT TOP(25)
qs.total_spills AS [Total Spilled Pages],
qs.max_spills AS [Max Spilled Pages],
qs.execution_count AS [Executions],
REPLACE(REPLACE(LEFT(t.[text],200),CHAR(10),''),CHAR(13),'') AS [Query Text]
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) t
WHERE qs.total_spills > 0
ORDER BY qs.total_spills DESC;
Fixing TempDB Spills
- Update statistics with higher sample rate — most spills are caused by row count underestimates. Better statistics give the optimizer a better grant estimate.
- Add or improve covering indexes — an index that provides data in the sort order the query needs can eliminate the sort operation entirely.
- Use OPTION (RECOMPILE) — forces the optimizer to see actual parameter values at compile time, producing a more accurate grant. Note that a plan recompiled on every execution cannot benefit from SQL Server 2019/2022 Memory Grant Feedback, which adjusts grants across repeated executions of a cached plan.
- MIN_GRANT_PERCENT hint — explicitly request a minimum memory grant if you know the operation will be large.
-- Force a recompile to get accurate grant size
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.SalesOrder o
JOIN dbo.Customer c ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '2025-01-01'
ORDER BY o.Amount DESC
OPTION (RECOMPILE);
-- Explicit minimum grant hint (use sparingly -- reserves workspace memory other queries could use)
SELECT o.OrderId, o.OrderDate
FROM dbo.SalesOrder o
ORDER BY o.Amount DESC
OPTION (MIN_GRANT_PERCENT = 10); -- request at least 10% of workspace memory
21 Intelligent Query Processing — SQL Server 2019 and 2022 Advanced
SQL Server 2019 and 2022 introduced a family of features under the Intelligent Query Processing (IQP) umbrella that automatically fix common performance problems without requiring code changes. These features are activated by setting database compatibility level to 150 (SQL Server 2019) or 160 (SQL Server 2022) with Query Store enabled.
| Feature | What It Fixes | Available From |
|---|---|---|
| Memory Grant Feedback (batch mode) | Adjusts memory grants for subsequent executions when a grant was too large or too small | SQL Server 2017 |
| Memory Grant Feedback (row mode) | Same, but for row mode operators — covers most OLTP queries | SQL Server 2019 |
| Adaptive Joins | Defers choice between Nested Loops and Hash Join until runtime row count is known | SQL Server 2017 |
| Table Variable Deferred Compilation | Uses actual row count of table variable at first execution rather than always assuming 1 row | SQL Server 2019 |
| Scalar UDF Inlining | Transforms eligible scalar UDFs into equivalent relational expressions — eliminates row-by-row execution | SQL Server 2019 |
| Parameter Sensitive Plan (PSP) Optimization | Automatically creates multiple plans for different parameter value "buckets" — reduces parameter sniffing impact | SQL Server 2022 |
| DOP Feedback | Automatically lowers the degree of parallelism for queries where parallelism overhead exceeds benefit | SQL Server 2022 |
| Memory Grant Feedback Persistence | Stores grant feedback in Query Store so it survives restarts | SQL Server 2022 |
| Cardinality Estimation (CE) Feedback | Adjusts cardinality estimates for queries where the optimizer's model consistently underestimates | SQL Server 2022 |
-- Verify IQP features are enabled for your database
SELECT name, compatibility_level,
is_query_store_on AS [Query Store On],
is_read_committed_snapshot_on AS [RCSI On]
FROM sys.databases
WHERE name = DB_NAME();
-- Check database-scoped configurations related to IQP
SELECT [name], [value]
FROM sys.database_scoped_configurations
WHERE name IN (
'PARAMETER_SENSITIVE_PLAN_OPTIMIZATION',
'DOP_FEEDBACK',
'CE_FEEDBACK',
'MEMORY_GRANT_FEEDBACK_PERSISTENCE',
'MEMORY_GRANT_FEEDBACK_PERCENTILE_GRANT',
'TSQL_SCALAR_UDF_INLINING' -- Scalar UDF Inlining on/off
);
-- Check if Scalar UDF Inlining is applying to your functions
SELECT
OBJECT_NAME(m.object_id) AS [Function Name],
m.is_inlineable,
m.inline_type
FROM sys.sql_modules m
JOIN sys.objects o ON o.object_id = m.object_id
WHERE o.type = 'FN' -- scalar functions only; inlining applies to scalar UDFs
AND m.is_inlineable = 1;
22 The Essential Free Tools Every DBA Should Know Beginner
The SQL Server community has produced several outstanding free tools that every DBA and developer should have in their toolkit. These are not replacements for understanding the fundamentals — they are accelerators that help you apply the fundamentals faster.
Brent Ozar's First Responder Kit
A set of free stored procedures that wrap the most common diagnostic queries into a simple, opinionated interface. Available at brentozar.com/first-aid.
- sp_BlitzFirst — runs a 5-second wait stats capture and tells you what is hurting performance right now. The best starting point for a server under active load.
- sp_BlitzCache — analyzes the plan cache and ranks queries by resource consumption with warnings about missing indexes, implicit conversions, parameter sniffing, and more. The fastest way to find your worst queries.
- sp_BlitzIndex — comprehensive index analysis: missing indexes, duplicate indexes, unused indexes, overlapping indexes, heaps, and fragmentation.
- sp_WhoIsActive — real-time view of what SQL Server is executing right now, sorted by duration or resource consumption. The first thing to run when the server is under stress. Not part of the First Responder Kit — written by Adam Machanic and available separately at whoisactive.com.
Using These Tools
-- After installing the First Responder Kit:

-- What is hurting performance right now (5-second sample)?
EXEC sp_BlitzFirst @Seconds = 5, @ExpertMode = 1;

-- What are my worst queries? (plan cache analysis)
EXEC sp_BlitzCache @SortOrder = 'reads';      -- by logical reads
EXEC sp_BlitzCache @SortOrder = 'cpu';        -- by total CPU
EXEC sp_BlitzCache @SortOrder = 'executions'; -- by execution frequency

-- What is running right now?
EXEC sp_WhoIsActive @get_outer_command = 1, @get_plans = 1;

-- Index analysis for a specific database
EXEC sp_BlitzIndex @DatabaseName = 'YourDatabase',
                   @Mode = 4; -- Mode 4 = detailed diagnostics for every index
Erik Darling's sp_HumanEvents
A powerful free tool for capturing Extended Events data in a simplified way — blocking, query performance, parameter sniffing, recompiles, and more. Available at github.com/erikdarlingdata/DarlingData.
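A couple of usage sketches — parameter names are taken from the DarlingData README as of this writing, so verify them against the version you install:

-- Sample blocking for 30 seconds using Extended Events
EXEC dbo.sp_HumanEvents
    @event_type = N'blocking',
    @seconds_sample = 30;

-- Capture queries running longer than 5 seconds, sampled for one minute
EXEC dbo.sp_HumanEvents
    @event_type = N'query',
    @query_duration_ms = 5000,
    @seconds_sample = 60;

The big win over raw Extended Events is that sp_HumanEvents creates, reads, and drops the session for you, returning results as ordinary result sets.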
23 Using Query Store for Plan Management Intermediate
Query Store is built into SQL Server and is the right way to manage plan regressions without relying on trace flags or sp_recompile. It stores a history of query plans and runtime statistics, allowing you to see when a plan changed and what the performance impact was — and to force a known-good plan when needed.
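Query Store must be enabled per user database on SQL Server 2016 through 2019 (it is on by default for new databases starting with SQL Server 2022). A minimal setup sketch — the storage size and cleanup values are assumptions to tune for your workload:

-- Enable Query Store with sensible starting settings
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     QUERY_CAPTURE_MODE = AUTO,  -- skip trivial, rarely-executed queries
     MAX_STORAGE_SIZE_MB = 1024, -- assumption: size for your workload
     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30));

QUERY_CAPTURE_MODE = AUTO keeps Query Store from filling up with one-off ad hoc statements, which matters on busy servers.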
-- Find the top resource-consuming queries in Query Store for the last 7 days
-- (Query Store timestamps are stored in UTC; avg_duration is in microseconds)
SELECT TOP(25)
    qsqt.query_sql_text,
    qsp.plan_id,
    ROUND(qsrs.avg_duration / 1000.0, 2) AS [Avg Duration (ms)],
    ROUND(qsrs.avg_cpu_time / 1000.0, 2) AS [Avg CPU (ms)],
    qsrs.avg_logical_io_reads AS [Avg Logical Reads],
    qsrs.count_executions,
    qsp.last_execution_time
FROM sys.query_store_runtime_stats qsrs
JOIN sys.query_store_plan qsp ON qsp.plan_id = qsrs.plan_id
JOIN sys.query_store_query qsq ON qsq.query_id = qsp.query_id
JOIN sys.query_store_query_text qsqt ON qsqt.query_text_id = qsq.query_text_id
WHERE qsp.last_execution_time > DATEADD(day, -7, GETUTCDATE())
ORDER BY qsrs.avg_duration DESC;
-- Force a known-good plan (get query_id and plan_id from query above)
EXEC sys.sp_query_store_force_plan
@query_id = 123, -- replace with actual query_id
@plan_id = 456; -- replace with actual plan_id of the good plan
-- Check currently forced plans
SELECT
    qsq.query_id,
    qsp.plan_id,
    qsp.is_forced_plan,
    qsp.force_failure_count,
    qsp.last_force_failure_reason_desc,
    qsqt.query_sql_text
FROM sys.query_store_plan qsp
JOIN sys.query_store_query qsq ON qsq.query_id = qsp.query_id
JOIN sys.query_store_query_text qsqt ON qsqt.query_text_id = qsq.query_text_id
WHERE qsp.is_forced_plan = 1;
-- Unforce a plan when it is no longer needed
EXEC sys.sp_query_store_unforce_plan
@query_id = 123,
@plan_id = 456;
24 A Repeatable Tuning Process — From Alert to Fix Intermediate
When you get paged because the server is slow, or a ticket comes in saying "queries are taking forever," this is the process that gets you from zero to root cause in the shortest time. It works for a single slow query and for a server-wide performance event.
Print this section out and keep it next to your desk. The instinct under pressure is to start changing things; this process keeps you in diagnostic mode until you actually know the cause.
Step 1 — Is It a Server-Wide Problem or a Single Query?
-- Run sp_WhoIsActive, or this query, to see what is happening right now
SELECT
    r.session_id,
    r.blocking_session_id,
    r.status,
    r.wait_type,
    r.wait_time / 1000.0 AS [Wait Sec],
    r.cpu_time / 1000.0 AS [CPU Sec],
    r.total_elapsed_time / 1000.0 AS [Elapsed Sec],
    r.logical_reads,
    DB_NAME(r.database_id) AS [Database],
    SUBSTRING(t.text, (r.statement_start_offset / 2) + 1,
        ((CASE r.statement_end_offset
              WHEN -1 THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset
          END - r.statement_start_offset) / 2) + 1) AS [SQL]
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50 -- filter out most system sessions
ORDER BY r.total_elapsed_time DESC;
Step 2 — What Is the Server Waiting On?
-- Quick 10-second wait stats delta (captures current activity, not historical
-- totals). The batch snapshots waits, pauses 10 seconds, then reports only
-- the waits that accumulated during that window.
SELECT wait_type, waiting_tasks_count, wait_time_ms
INTO #waits_before
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'SLEEP_SYSTEMTASK', 'WAITFOR',
                        'BROKER_TO_FLUSH', 'DISPATCHER_QUEUE_SEMAPHORE',
                        'XE_DISPATCHER_WAIT', 'XE_TIMER_EVENT');

WAITFOR DELAY '00:00:10';

SELECT
    w2.wait_type,
    w2.waiting_tasks_count - w1.waiting_tasks_count AS [New Waits],
    w2.wait_time_ms - w1.wait_time_ms AS [New Wait Time (ms)]
FROM sys.dm_os_wait_stats w2
JOIN #waits_before w1 ON w1.wait_type = w2.wait_type
WHERE w2.wait_time_ms - w1.wait_time_ms > 0
ORDER BY [New Wait Time (ms)] DESC;

DROP TABLE #waits_before;
Step 3 — Identify the Specific Query
Use sp_BlitzCache @SortOrder = 'reads', or the top logical reads DMV query from Section 6, or Query Store if available.
Step 4 — Diagnose the Query
-- Capture baseline metrics before making any change
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- Run the slow query here
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- Capture the actual execution plan (Ctrl+M in SSMS)
-- Look for: Index Scan on large tables, Key Lookups, thick arrows on unexpected operators,
-- yellow warnings, large discrepancies between estimated and actual rows
Step 5 — Apply One Fix and Measure Again
Based on what you found in Step 4, apply the appropriate fix from this guide. The most common findings and their fixes:
| What You See in the Plan | Most Likely Fix | Section |
|---|---|---|
| Index Scan on a large table | Non-SARGable predicate or missing index | 9, 15 |
| Key Lookup on many rows | Add INCLUDE columns to covering index | 14 |
| Yellow warning on any operator | Missing index, implicit conversion, or memory spill | 10, 20 |
| Estimated 1 row, actual 500,000 | Out-of-date statistics | 18 |
| Fast in SSMS, slow from application | Parameter sniffing | 17 |
| Many sessions blocked, one head blocker | Long transaction; missing index causing table lock | 19 |
| Head blocker is sleeping with no active SQL | Idle session holding open transaction — application did not COMMIT | 19 |
| Sort or Hash spill warning | Statistics update; covering index to eliminate sort | 20 |
| CONVERT_IMPLICIT warning | Data type mismatch between parameter and column | 10 |
References
- Paul Randal (SQLskills) — Top Ten SQL Server Performance Tuning Best Practices
- Paul Randal (SQLskills) — SQL Server Wait Types Library
- Erik Darling — Implicit Conversion Is a SARGability Problem
- Erik Darling — Why Implicit Conversions Aren't SARGable
- Brent Ozar — SQL Server Performance Tuning
- Brent Ozar — Parameter Sniffing
- Brent Ozar — First Responder Kit (sp_Blitz, sp_BlitzCache, sp_BlitzIndex) — Free Download
- Adam Machanic — sp_WhoIsActive — Free Download
- Microsoft Docs — Query Processing Architecture Guide
- Microsoft Docs — Intelligent Query Processing Feature Catalog
- Microsoft Docs — Monitoring Performance Using Query Store
- Microsoft Docs — SQL Server Index Architecture and Design Guide
- Pinal Dave (SQLAuthority) — Catching Non-SARGable Queries in Action
- sp_Develop — SQL Server Naming Conventions and Standards
- SQLYARD — Parameter Sniffing: A Field Guide for DBAs and Developers
- SQLYARD — SQL Server DBA Health Check Toolkit
- SQLYARD — SQL Server Instance Setup and Best Practices
- Erik Darling — DarlingData (sp_HumanEvents and more) — Free Tools on GitHub