09 January, 2013 by Jack Vamvas
Deleting rows from a very large table can be a challenge. A DBA must deal with different scenarios for deleting rows, such as deleting during business hours, with multiple concurrent users, or with limited disk space.
One solution doesn’t always solve every problem.
I’ve used a number of different tactics over the last few months - I’m including them in the list below.
DELETE is a fully logged operation, and can be rolled back if something goes wrong.
TRUNCATE removes all rows from a table without logging the individual row deletions.
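As a quick illustration of the logging difference, a DELETE wrapped in an explicit transaction can be undone in full (mytable and id are placeholder names):

BEGIN TRANSACTION;
-- every deleted row is written to the transaction log
DELETE FROM mytable WHERE id < 100;
-- all deleted rows are restored
ROLLBACK TRANSACTION;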
1) Issue a table lock to avoid taking loads of individual row locks, using a table hint such as TABLOCK or TABLOCKX. Keep in mind there may already be other transactions in progress, which may delay the DELETE request.
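A minimal sketch of a DELETE with an exclusive table lock hint - mytable and dt are placeholder names:

-- take one exclusive table lock instead of many row locks
DELETE FROM mytable WITH (TABLOCKX)
WHERE dt < '20120101';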
2) Make sure statistics are up to date. See: SQL Server - Find last time STATISTICS updated ... - SQL Server DBA
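A quick way to check when statistics were last updated, and to refresh them, is sketched below (dbo.mytable is a placeholder name):

-- when were the statistics on the table last updated?
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.mytable');

-- refresh all statistics on the table
UPDATE STATISTICS dbo.mytable;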
3) Check for a lack of indexes on the columns in the DELETE predicate. Also, the wider the index, the slower the performance.
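If the DELETE filters on a column with no index, SQL Server must scan the whole table to find the rows. A minimal sketch, assuming the delete filters on a hypothetical dt column:

-- a narrow index on the DELETE predicate allows a seek instead of a scan
CREATE NONCLUSTERED INDEX ix_mytable_dt ON dbo.mytable (dt);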
4) Incorrect data types – for example, why use a bigint when an int will do? Oversized columns all add to the IO.
5) Lock contention. Are there other processes locking rows? Are there any deadlocks?
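One way to spot blocking while the DELETE runs is the sys.dm_exec_requests DMV:

-- sessions currently blocked, and who is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;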
6) Poor hardware
7) Investigate expensive foreign keys, constraints, triggers and cascade deletes. Use the Execution Plan to investigate.
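The system catalog views can flag cascading foreign keys and triggers before you run the DELETE - dbo.mytable is a placeholder name:

-- foreign keys that cascade deletes to other tables
SELECT name, delete_referential_action_desc
FROM sys.foreign_keys
WHERE delete_referential_action_desc = 'CASCADE';

-- triggers defined on the table
SELECT name FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.mytable');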
8) If you are DELETING most of the table data, consider saving the rows you want to keep into a temp table, then TRUNCATE and re-insert. TRUNCATE removes all records from the table without logging each delete separately. Note that SELECT ... INTO does not copy indexes or constraints, and TRUNCATE will fail if the table is referenced by a foreign key.
SELECT * INTO #data FROM mytable WHERE id > 2;
TRUNCATE TABLE mytable;
INSERT INTO mytable SELECT * FROM #data;
9) Use a VIEW with ORDER BY and TOP. Read more on the fast ordered delete technique.
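A sketch of the fast ordered delete pattern - the view, table and column names are placeholders. Deleting through a view defined with TOP and ORDER BY removes the oldest N rows:

CREATE VIEW dbo.v_mytable_oldest
AS
SELECT TOP (10000) *
FROM dbo.mytable
ORDER BY dt;
GO

-- deletes the 10000 oldest rows through the view
DELETE FROM dbo.v_mytable_oldest;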
10) If you are applying optimized methods, check the IO performance. Use Perfmon and the DMVs. See: SQL Server Storage and IO performance - SQL Server DBA
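On the DMV side, sys.dm_io_virtual_file_stats exposes IO stalls per database file:

-- cumulative IO and stall figures per database file
SELECT DB_NAME(database_id) AS database_name, file_id,
       num_of_reads, io_stall_read_ms,
       num_of_writes, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL);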
11) A DELETE is fully logged. Are the transaction logs growing too quickly during the delete? Deleting in smaller batches, as sketched below, keeps log growth in check.
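A common way to keep log growth under control is to delete in batches, giving the log a chance to be reused between batches (table and column names are placeholders):

DECLARE @batch INT = 5000;
WHILE 1 = 1
BEGIN
    -- delete one small, fully logged batch at a time
    DELETE TOP (@batch) FROM dbo.mytable WHERE dt < '20120101';
    IF @@ROWCOUNT = 0 BREAK;
    -- in FULL recovery, take log backups between batches to allow log reuse
END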
12) Drop and recreate indexes – easier done out of hours.
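A sketch, assuming a hypothetical non-clustered index that is not used by the DELETE predicate - maintaining it for every deleted row is pure overhead:

-- drop the index, run the delete, then rebuild the index
DROP INDEX ix_mytable_customer ON dbo.mytable;
DELETE FROM dbo.mytable WHERE dt < '20120101';
CREATE NONCLUSTERED INDEX ix_mytable_customer ON dbo.mytable (customer_id);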
13) Run the process out of hours, therefore minimising the production load.