10 August 2011, by Jack Vamvas
Like anyone, I have plenty of lessons to share, particularly from working in a busy production environment. Deadlines are tight and business users want everything yesterday, which all amounts to decreasing standards of quality control. "You live and learn" … here is one recent example.
A nightly ETL process included the following steps:
1) Full Backup of a database
2) Copy the database to another drive
3) Restore the database onto another SQL Server instance and run transformations
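The backup and restore portion of those steps can be sketched in T-SQL. Database names, logical file names, and paths below are illustrative, not the actual ones from this system:

```sql
-- Step 1: full backup of the source database
BACKUP DATABASE [MyDB]
TO DISK = N'E:\Backups\MyDB_Full.bak'
WITH INIT, CHECKSUM;

-- Step 3: restore onto the second instance, relocating the files
-- (use RESTORE FILELISTONLY to confirm the logical file names)
RESTORE DATABASE [MyDB]
FROM DISK = N'F:\Staging\MyDB_Full.bak'
WITH MOVE N'MyDB'     TO N'D:\Data\MyDB.mdf',
     MOVE N'MyDB_log' TO N'L:\Logs\MyDB_log.ldf',
     REPLACE, RECOVERY;
```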
After a large maintenance job on Sundays, the physical size of the transaction log file of the database in Step 1 would be large.
Regular BACKUP LOG commands during the day managed the inactive part of the log. To decrease the transaction log's physical size, a DBA applied DBCC SHRINKFILE to the log file every night, prior to Step 1. Auto-grow on the log file was set to 1 MB.
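The log maintenance described above would look roughly like this (the logical log file name `MyDB_log` is an assumption):

```sql
-- Daytime log backups keep the inactive portion of the log reusable
BACKUP LOG [MyDB]
TO DISK = N'E:\Backups\MyDB_Log.trn';

-- The nightly shrink the DBA had scheduled, run before the full backup;
-- the second argument is the target size in MB
USE [MyDB];
DBCC SHRINKFILE (N'MyDB_log', 1);
```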
The subtle problem started appearing at Step 3: I noticed the recovery process was taking longer than expected. I also noticed the DBCC SHRINKFILE job had stopped running a couple of weeks earlier.
Using DBCC LOGINFO(mydb), I found there were thousands of Virtual Log Files (VLFs). A large number of VLFs can slow down the discovery phase of the database recovery process.
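DBCC LOGINFO returns one row per VLF, so capturing its output makes the count easy to see. A minimal sketch, assuming the seven-column result set of SQL Server 2008-era builds (the column layout varies between versions, so check it on your instance first):

```sql
-- One row per VLF; capture the output to count them
CREATE TABLE #loginfo (
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    [Status]    INT,
    Parity      TINYINT,
    CreateLSN   NUMERIC(25,0)
);

INSERT INTO #loginfo
EXEC ('DBCC LOGINFO (''MyDB'')');

SELECT COUNT(*) AS vlf_count FROM #loginfo;

DROP TABLE #loginfo;
```

A healthy log typically has tens to low hundreds of VLFs; thousands, as seen here, is a warning sign.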
A transaction log file is composed of VLFs; when a transaction log file is truncated, the VLF is the basic unit.
The 1 MB auto-grow mentioned earlier was causing an excessive number of VLFs. I changed the auto-grow increment to 100 MB, and this resolved the situation. I forced a recovery process with the new auto-grow setting, and the recovery was significantly quicker.
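Changing the auto-grow increment is a single ALTER DATABASE statement. The logical file name here is an example; the real name can be read from sys.database_files:

```sql
-- Raise the log file's auto-grow increment from 1 MB to 100 MB
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = N'MyDB_log', FILEGROWTH = 100MB);
```

Note this only affects future growth events; the existing VLFs remain until the log is shrunk and regrown (or rebuilt) at the new increment.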
From the account above, I'd highlight two mistakes:
1) Auto-grow set at a very low increment. Size the log files appropriately, and try to minimise the number of times auto-grow occurs.
2) Shrinking and regrowing the transaction log causes file-system fragmentation, which decreases performance. Minimise shrinking of transaction logs.
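Following point 1, the log can be pre-sized in one operation instead of being left to creep up 1 MB at a time. A sketch, with an assumed target size of 4 GB:

```sql
-- Pre-size the log to its expected working size in one step;
-- growing in one large chunk keeps the VLF count low
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = N'MyDB_log', SIZE = 4096MB);
```

Pick the target size from the log's observed peak after the Sunday maintenance job, so that auto-grow becomes a safety net rather than a routine event.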