When you start learning SQL Server internals, one concept that often feels hidden in the background but can cause very real problems is Virtual Log Files (VLFs). Even though Microsoft doesn’t expose them directly in Management Studio, they play a huge role in how your transaction log performs. Too many VLFs can seriously hurt performance, especially with large or busy databases.
This post will break down what VLFs are, why they occur, how they can hurt performance, and most importantly, how you can stay proactive to prevent them from becoming a bottleneck.
1. What is a Virtual Log File (VLF)?
SQL Server transaction logs are stored in .ldf files. But internally, SQL Server doesn’t use one continuous structure—it breaks the log file into chunks called Virtual Log Files (VLFs).
- Each .ldf file is divided into several VLFs.
- A VLF is the smallest unit of the log that SQL Server tracks for things like recovery, truncation, and reuse.
- You can’t control the exact VLF size manually—SQL Server decides it based on how the log file grows (initial creation and auto-growth).
Think of it like this:
- .ldf file = a book
- VLFs = chapters inside the book
- When SQL Server needs to read or write transactions, it navigates through these chapters.
2. Why do VLFs occur?
When you create a database, SQL Server divides the log file into a set number of VLFs depending on the initial size. Every time the log file grows (auto-growth event or manual growth), SQL Server adds more VLFs. The number and size of VLFs created depends on the growth size:
According to Microsoft's documented formula:
- Growth of 64 MB or less → 4 new VLFs
- Growth of more than 64 MB, up to 1 GB → 8 new VLFs
- Growth of more than 1 GB → 16 new VLFs
(On SQL Server 2014 and later there is one refinement: a growth event smaller than 1/8 of the current log size adds just 1 VLF. Many tiny growth events still leave you with many tiny VLFs.)
That means:
- If your log file grows in tiny increments (say, 1 MB at a time), SQL Server creates thousands of tiny VLFs.
- If your log file grows by 8 GB at once, you’ll still only get 16 VLFs, but each is huge.
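If you want to see how your own log files are set to grow, a quick catalog query shows the current size and autogrowth setting per database. This is just a sketch; size and growth are stored in 8 KB pages, so the values are converted to MB here:
-- Log file size and autogrowth settings for every database
-- (size and growth are stored in 8 KB pages unless growth is a percentage)
SELECT
    DB_NAME(database_id) AS DatabaseName,
    name AS LogicalFileName,
    size * 8 / 1024 AS CurrentSizeMB,
    CASE WHEN is_percent_growth = 1
         THEN CAST(growth AS VARCHAR(10)) + ' %'
         ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
    END AS GrowthSetting
FROM sys.master_files
WHERE type_desc = 'LOG'
ORDER BY DatabaseName;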
3. How too many VLFs hurt performance
At first, VLFs sound harmless—just segments of the log. But problems creep in when you have too many (hundreds, thousands, or even tens of thousands).
Here’s why excessive VLFs hurt:
- Slow recovery/startup times: During crash recovery, SQL Server has to scan each VLF to determine which transactions need to be redone or undone. Thousands of VLFs mean thousands of small scans.
- Slower log backups and restores: Each VLF is processed individually. A log backup that should take seconds can take minutes if SQL Server must churn through thousands of tiny VLFs.
- Replication and mirroring latency: Features like Always On Availability Groups, log shipping, and replication rely on reading from the transaction log. Too many VLFs add overhead for the log reader agents.
- General log management overhead: SQL Server marks VLFs as active or inactive. With too many VLFs, managing these transitions adds extra work.
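You can see this bookkeeping for yourself. The sketch below jumps ahead to the sys.dm_db_log_info DMV covered in the next section (so it assumes SQL Server 2016 SP2 or later) and counts active versus inactive VLFs in the current database:
-- Active vs. inactive VLFs in the current database (SQL Server 2016 SP2+)
-- vlf_active = 1 means the VLF still holds log records SQL Server needs
SELECT
    vlf_active,
    COUNT(*) AS VLFCount,
    SUM(vlf_size_mb) AS TotalSizeMB
FROM sys.dm_db_log_info(DB_ID())
GROUP BY vlf_active;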
4. How to check VLF count in your database
Use the undocumented command DBCC LOGINFO (older versions) or sys.dm_db_log_info (SQL Server 2016 SP2+):
-- Modern DMV (SQL Server 2016 SP2+): VLF count per database
SELECT
    d.name AS DatabaseName,
    COUNT(li.vlf_sequence_number) AS VLFCount
FROM sys.databases d
CROSS APPLY sys.dm_db_log_info(d.database_id) li
GROUP BY d.name
ORDER BY VLFCount DESC;
-- Legacy (pre-2016 SP2)
DBCC LOGINFO;
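The raw count doesn't tell the whole story; VLF sizes matter too. A small variation on the query above (again SQL Server 2016 SP2+) shows the size distribution for the current database:
-- VLF size distribution for the current database (SQL Server 2016 SP2+)
SELECT
    COUNT(*) AS VLFCount,
    MIN(vlf_size_mb) AS MinVLFSizeMB,
    MAX(vlf_size_mb) AS MaxVLFSizeMB,
    AVG(vlf_size_mb) AS AvgVLFSizeMB
FROM sys.dm_db_log_info(DB_ID());
Thousands of VLFs that are only a few MB each are the classic sign of tiny autogrowth increments.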
5. Best practices to prevent excessive VLFs
The root cause of excessive VLFs is usually poor autogrowth settings. Many DBAs leave the defaults or use tiny fixed increments (such as 1 MB), which quickly creates thousands of tiny VLFs.
Be proactive:
- Pre-size your transaction log file
  - Don't leave it at 8 MB (the default).
  - Set it large enough to handle your normal workload without constant growth.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 1GB);
- Use sensible autogrowth settings
  - Avoid small growth increments (like 1 MB).
  - Avoid percentage growth, especially on large databases (e.g., 10% of 500 GB = 50 GB growth = 16 huge VLFs).
  - Instead, use a fixed growth size that makes sense for your workload.
  - Microsoft recommends increments of 512 MB to 1 GB for large logs.
Example:
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);
- Monitor regularly
  - Add VLF count checks to your health scripts (see the sketch below).
  - Use alerts to catch logs with thousands of VLFs before they become a problem.
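Here's a minimal health-check sketch along those lines; the 1,000 threshold is just the rule of thumb from section 7, so adjust it to taste, and it assumes SQL Server 2016 SP2 or later:
-- Flag databases whose VLF count exceeds a threshold (SQL Server 2016 SP2+)
DECLARE @Threshold INT = 1000;

SELECT
    d.name AS DatabaseName,
    COUNT(li.vlf_sequence_number) AS VLFCount
FROM sys.databases d
CROSS APPLY sys.dm_db_log_info(d.database_id) li
WHERE d.state_desc = 'ONLINE'
GROUP BY d.name
HAVING COUNT(li.vlf_sequence_number) > @Threshold
ORDER BY VLFCount DESC;
Schedule it as a SQL Server Agent job or fold it into whatever monitoring tool you already run.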
6. Fixing excessive VLFs
If you already have thousands of VLFs, here’s the remediation process:
1. Back up the log (if needed).
2. Shrink the log file to a small size (if the file won't shrink, see the check after step 4):
DBCC SHRINKFILE (MyDB_log, 1);
3. Grow it back manually in large chunks:
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 4GB);
4. Set appropriate autogrowth for the future.
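If the shrink in step 2 doesn't actually reduce the file, the active part of the log is probably sitting at its tail. A quick diagnostic (an extra check, not part of the steps above) is to see what is holding up log reuse:
-- What is preventing log truncation? LOG_BACKUP means you need a log backup first.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDB';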
⚠️ Warning: Don’t shrink/grow on a regular basis. That’s only for cleanup when you already have excessive VLFs. Normal proactive management avoids this problem.
7. Rule of thumb: what’s “too many” VLFs?
- < 1,000 → Usually fine.
- 1,000–10,000 → Monitor closely, may cause issues.
- 10,000+ → Definitely problematic, fix ASAP.
These aren’t hard limits, but widely accepted guidelines from Microsoft and community experts.
8. Summary
- VLFs are internal chunks of the transaction log file.
- They are created every time the log grows.
- Too many VLFs (from small growth increments) = slower recovery, backups, replication, and log-related operations.
- Best practices:
  - Pre-size log files properly.
  - Use fixed, reasonable autogrowth settings (hundreds of MBs, not tiny MBs or large percentages).
  - Monitor VLF count with sys.dm_db_log_info.
- If you already have excessive VLFs, shrink and regrow the log with better growth settings.
By being proactive with sizing and monitoring, you can avoid one of the sneaky but impactful performance issues in SQL Server.
9. Hands-On Lab: Creating and Fixing VLF Bloat
Here’s a safe exercise you can run in a test environment to see VLF bloat in action.
Step 1: Create a test database with bad settings
CREATE DATABASE VLFTest
ON PRIMARY (NAME = VLFTest_data, FILENAME = 'C:\SQLData\VLFTest_data.mdf', SIZE = 10MB)
LOG ON (NAME = VLFTest_log, FILENAME = 'C:\SQLData\VLFTest_log.ldf', SIZE = 1MB, FILEGROWTH = 1MB);
GO
This creates a log file that grows 1 MB at a time—the classic cause of thousands of VLFs.
Step 2: Force lots of autogrowth
USE VLFTest;
GO
-- This loop will insert rows and force the log to grow many times
SET NOCOUNT ON;
CREATE TABLE dbo.BigTable (ID INT IDENTITY, SomeData CHAR(8000) DEFAULT 'X');
DECLARE @i INT = 0;
WHILE @i < 5000
BEGIN
INSERT INTO dbo.BigTable DEFAULT VALUES;
SET @i += 1;
END
GO
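If you're curious how many times the log actually grew during that load, the default trace (enabled by default on most servers) records each auto-grow event. This is an optional check, not part of the lab itself:
-- Count log auto-grow events for VLFTest recorded in the default trace
DECLARE @TracePath NVARCHAR(260);
SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

SELECT COUNT(*) AS LogAutoGrowEvents
FROM sys.fn_trace_gettable(@TracePath, DEFAULT)
WHERE EventClass = 93            -- 93 = Log File Auto Grow
  AND DatabaseName = 'VLFTest';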
Step 3: Check VLF count
DBCC LOGINFO; -- pre-SQL 2016 SP2
-- or
SELECT COUNT(*) AS VLFCount
FROM sys.dm_db_log_info(DB_ID('VLFTest'));
You’ll likely see hundreds or thousands of VLFs.
Step 4: Fix the problem
1. Back up the log if needed.
2. Shrink the log down:
DBCC SHRINKFILE (VLFTest_log, 1);
3. Regrow it in a single chunk (say 512 MB):
ALTER DATABASE VLFTest
MODIFY FILE (NAME = VLFTest_log, SIZE = 512MB, FILEGROWTH = 128MB);
You should now see only a handful of VLFs, and they’ll be much larger.
✅ This exercise lets you see how bad settings create bloat, and how proactive sizing immediately fixes it.
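When you're finished, drop the test database (a simple cleanup step, not part of the exercise itself):
-- Remove the lab database
USE master;
GO
DROP DATABASE VLFTest;
GO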