31 August 2014 by Jack Vamvas
Data volumes are growing, and with them the need for faster DELETE operations. How can a DELETE be made faster on a table with a very large amount of data?
The DELETE DML operation does not support minimal (bulk) logging, so the following pattern exploits the minimal logging available to SELECT … INTO and INSERT INTO … SELECT.
Step 1 – Identify the data which won’t be deleted and isolate it into a new table with SELECT … INTO
Step 2 – Use TRUNCATE to empty the original table. TRUNCATE removes all rows by deallocating data pages rather than logging each row delete individually
Step 3 – Complete the operation by reloading the retained rows with INSERT INTO … SELECT and the TABLOCK table hint
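The three steps above can be sketched as follows. This is a minimal illustration, not a production script: dbo.BigTable, dbo.BigTable_Keep and the KeepFlag predicate are hypothetical placeholders, so substitute your own table name and the condition that identifies the rows you want to keep.

```sql
-- Step 1: isolate the rows that will survive.
-- SELECT ... INTO can be minimally logged under the SIMPLE
-- or BULK_LOGGED recovery model.
SELECT *
INTO dbo.BigTable_Keep
FROM dbo.BigTable
WHERE KeepFlag = 1;   -- hypothetical predicate: rows to KEEP

-- Step 2: empty the original table.
-- TRUNCATE logs page deallocations, not individual row deletes.
TRUNCATE TABLE dbo.BigTable;

-- Step 3: reload the retained rows.
-- The TABLOCK hint allows the INSERT to qualify for minimal logging.
INSERT INTO dbo.BigTable WITH (TABLOCK)
SELECT *
FROM dbo.BigTable_Keep;

-- Optional clean-up of the staging table.
DROP TABLE dbo.BigTable_Keep;
```

Note that TRUNCATE TABLE cannot be used while the table is referenced by a foreign key, and between steps 2 and 3 the table is empty, so run the sequence in a maintenance window or wrap it appropriately for your concurrency requirements.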
For further tips on optimising DELETE in different scenarios, read SQL Server – Optimize delete from table - delete in sql