81

I have statements like this that are timing out:

DELETE FROM [table] WHERE [COL] IN ( '1', '2', '6', '12', '24', '7', '3', '5')

I tried doing one at a time like this:

DELETE FROM [table] WHERE [COL] IN ( '1' )

and so far it's at 22 minutes and still going.

The table has 260,000 rows in it and four columns.

Does anyone have any ideas why this would be so slow and how to speed it up? I do have a non-unique, non-clustered index on the [COL] that I'm filtering on in the WHERE clause. I'm using SQL Server 2008 R2.

Update: I have no triggers on the table.

Kyle
  • what happens if you do `where [col] = '1'`? – Alex Gitelman Jun 05 '12 at 16:42
  • have you checked the execution plan on the query to see if it shows anything? – Taryn Jun 05 '12 at 16:42
  • DELETE statements are generally slow because of the log. TRUNCATE is faster, but you cannot use TRUNCATE in this situation. I don't have any further clue. – codingbiz Jun 05 '12 at 16:44
  • possible duplicate of [Is IN with the multiple of the same terms slower?](http://stackoverflow.com/questions/10901238/is-in-with-the-multiple-of-the-same-terms-slower) – Taryn Jun 05 '12 at 16:44
  • There must be a trigger or something that deadlocks. – Sergey Kalinichenko Jun 05 '12 at 16:44
  • how is this question any different from the one you posted a few minutes ago? http://stackoverflow.com/questions/10901238/is-in-with-the-multiple-of-the-same-terms-slower – Taryn Jun 05 '12 at 16:45
  • @bluefeet Because they are asking very different things? One question asks whether including the same term in an IN statement multiple times is slower than including it once. In this question I ask for ideas on why deleting in general might be so slow, even when I'm only using one term. – Kyle Jun 05 '12 at 16:47
  • @AlexGitelman I did try that and it is just as slow. – Kyle Jun 05 '12 at 17:02
  • If your table is referenced by FKs from other tables, your QP will reveal the cost of checking those constraints. If you don't have indexes on the FK columns, you will get a table scan when the constraint is checked. Not a big deal if the tables are small, but if you have a big one (or a couple) in there, that could be what is causing your delete to be slow. Check the QP (query plan). – Mikael Eriksson Jun 05 '12 at 17:54
  • I had the same issue. I deleted the first couple of rows (using http://stackoverflow.com/a/8956164/379279) to get a query execution plan, and it suggested creating 2 non-clustered indexes, which fixed my issue. – xhafan Nov 03 '14 at 14:02
  • Possible duplicate of [Optimizing Delete on SQL Server](https://stackoverflow.com/questions/955435/optimizing-delete-on-sql-server) – pcdro Jun 27 '17 at 20:49
  • I tried to delete 200 rows from total of 300 rows and it was taking too long. I found that another table with 70,000,000 rows referenced table with 300 rows with a foreign key. Once I deleted that foreign key delete was instant. – Joe Schmoe Nov 07 '19 at 20:49

15 Answers

92

Things that can cause a delete to be slow:

  • deleting a lot of records
  • many indexes
  • missing indexes on foreign keys in child tables (thank you to @CesarAlvaradoDiaz for mentioning this in the comments)
  • deadlocks and blocking
  • triggers
  • cascade deletes (those ten parent records you are deleting could mean millions of child records getting deleted)
  • the transaction log needing to grow
  • many foreign keys to check

So your choices are to find out what is blocking and fix it, or to run the deletes in off hours when they won't interfere with the normal production load. You can run the delete in batches (useful if you have triggers, cascade deletes, or a large number of records). You can also drop and recreate the indexes (best done in off hours too). A sketch for the blocking check follows.
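
If blocking is the suspect, a quick first check is to look for sessions that are waiting on another session. A minimal sketch, assuming you can run it from a second connection while the delete is stuck (the DMV exists in SQL Server 2005 and later):

-- Sessions with a non-zero blocking_session_id are being blocked;
-- that id identifies the session holding the locks.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;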

HLGEM
  • I had a case when I was trying to delete from a table containing ~1 million rows, but it took forever. Querying the rows with `select *` was fast, but the delete was insanely slow. Then I realized there was a foreign key to the table from another one with 2 billion(!) rows. Of course, the FK column was not indexed. Solution: dropped the FK, deleted the rows, recreated the FK. Recreating the FK still took some time, but it was much faster. – Daniel Dinnyes Jan 22 '14 at 18:40
  • Important: if you have foreign keys, the FK columns in your tables should be indexed. – Cesar Alvarado Diaz May 02 '17 at 14:34
  • Is it a good practice to disable triggers while deleting a huge record list from a table? – aagjalpankaj May 30 '18 at 09:03
  • @Aviator, only if your delete script handles whatever the trigger handled. Otherwise you are likely to cause a massive data integrity problem. – HLGEM May 30 '18 at 13:33
  • A little more detail on the foreign key issue, because I missed it at first. Don't just think about the foreign key constraints that this table has on others. Think about the foreign keys that other tables have on this one. So, if you run `\d my table`, look under `Referenced by`. My issue was that I had a huge table that was referencing the one I was trying to delete, and the column the other table was using needed an index. – nbrustein Jan 11 '21 at 14:24
66
  1. Disable CONSTRAINT

    ALTER TABLE [TableName] NOCHECK CONSTRAINT ALL;

  2. Disable Index

    ALTER INDEX ALL ON [TableName] DISABLE;

  3. Rebuild Index

    ALTER INDEX ALL ON [TableName] REBUILD;

  4. Enable CONSTRAINT

    ALTER TABLE [TableName] CHECK CONSTRAINT ALL;

  5. Delete again
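
One caveat with step 2 (hinted at in the comments below): ALTER INDEX ALL ... DISABLE also disables the clustered index if there is one, which makes the table unreadable until the rebuild. A hedged variation, a sketch that disables only the nonclustered indexes ([TableName] is a placeholder):

-- Build and run DISABLE statements for nonclustered indexes only,
-- leaving the clustered index (and thus the data) accessible.
DECLARE @sql nvarchar(max) = N'';
SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(name)
            + N' ON [TableName] DISABLE; '
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'TableName')
  AND type_desc = N'NONCLUSTERED';
EXEC sp_executesql @sql;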

Shahab J
  • This worked, but an explanation would have been nice. – cdonner Jun 01 '16 at 20:30
  • One drawback is that after `alter index all on mytable disable` most queries stop working ("The query processor is unable to produce a plan"). They seem to work again once the indexes have been rebuilt and the constraints re-enabled. – Ed Avis Sep 09 '16 at 09:30
  • When disabling the indexes on the table, all foreign keys pointing to the table will be disabled too. This could be the explanation for the speedup. Afterwards, you can `select referrer, fk from foreign_keys where is_disabled = 1`, or check the warning messages which were produced when disabling the indexes. Then for each affected child table, `alter table mychild with check check constraint all`. This might find some child rows that now need deleting! – Ed Avis Sep 09 '16 at 09:38
  • You may also end up with FKs marked 'not trusted', so `select referrer, fk from foreign_keys where is_not_trusted = 1` and fix those too. – Ed Avis Sep 09 '16 at 09:52
  • For me, rebuilding the indexes on the Primary Keys that were being used in clustered index searches gave me a massive speed improvement. I had a tonne of fragmentation on the PKs. – Kris Nov 29 '16 at 02:52
  • WARNING: rebuilding indexes on a BIG table (2 million+ rows) takes a LOT of time (hours...). In my case it was better just to do as in @Andomar's answer. – マルちゃん だよ Mar 28 '18 at 06:29
  • You made my day. Kudos. – Abhishek_Itachi Sep 12 '19 at 17:44
  • Good one. Thank you. – Paul Nov 01 '19 at 12:57
22

Deleting a lot of rows can be very slow. Try deleting a few at a time, like:

-- Delete in batches: each DELETE commits on its own, holds locks only
-- briefly, and gives the transaction log a chance to clear in between.
DELETE TOP (10) FROM YourTable WHERE col IN ('1','2','3','4')
WHILE @@ROWCOUNT > 0
BEGIN
    DELETE TOP (10) FROM YourTable WHERE col IN ('1','2','3','4')
END
Andomar
4

Preventive Action

Check with the help of SQL Profiler for the root cause of this issue. There may be triggers causing the delay in execution; it can be anything. Don't forget to select the database name and object name while starting the trace, to exclude scanning unnecessary queries:

Database Name Filtering

Table/Stored Procedure/Trigger Name Filtering

Corrective Action

As you said, your table contains 260,000 records and the IN predicate contains eight values, each of which has to be matched against the table's rows. Instead, it should be an inner join, like below:

DELETE K
FROM YourTable1 K
INNER JOIN YourTable2 T ON T.id = K.id

First insert the IN predicate values into a temporary table or table variable, then delete via the join, as sketched below.
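
A minimal sketch of that rewrite, using the values from the question ([table] and [COL] are the poster's placeholders):

-- Stage the values once, then delete via a join instead of IN.
CREATE TABLE #vals (val varchar(10) PRIMARY KEY);
INSERT INTO #vals (val)
VALUES ('1'), ('2'), ('6'), ('12'), ('24'), ('7'), ('3'), ('5');

DELETE K
FROM [table] AS K
INNER JOIN #vals AS V ON V.val = K.[COL];

DROP TABLE #vals;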

4

In my case the database statistics had become corrupt. The statement

delete from tablename where col1 = 'v1' 

was taking 30 seconds even though there were no matching records, but

delete from tablename where col1 = 'rubbish'

ran instantly.

Running

update statistics tablename

fixed the issue.
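
If you suspect the same problem, a quick sketch for checking how stale the statistics are before rebuilding them (tablename is a placeholder):

-- Shows each statistics object on the table and when it was last updated.
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('tablename');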

Mike
3

If the table you are deleting from has DELETE triggers (INSTEAD OF or AFTER; SQL Server has no BEFORE triggers), something in there could be causing your delay.

Additionally, if you have foreign keys referencing that table, additional UPDATEs or DELETEs may be occurring.
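
A small sketch for ruling out the trigger case ([table] is a placeholder):

-- Lists any triggers defined on the table, and whether they are disabled.
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('[table]');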

sybkar
2

It's possible that other tables have FK constraints pointing to your [table], so the database needs to check those tables to maintain referential integrity. Even if you have all the indexes those FKs need, check how many of them there are (see the sketch below).

I had a situation where NHibernate incorrectly created duplicate FKs on the same columns, but with different names (which SQL Server allows). That drastically slowed down the DELETE statement.
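
A sketch for counting those constraints ([table] is a placeholder); duplicated FKs on the same columns show up as multiple rows with different names:

-- Every foreign key in other tables that references [table], with the
-- referencing column, so duplicates and missing indexes are easy to spot.
SELECT fk.name AS fk_name,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS referencing_column
FROM sys.foreign_keys AS fk
INNER JOIN sys.foreign_key_columns AS fkc
    ON fkc.constraint_object_id = fk.object_id
WHERE fk.referenced_object_id = OBJECT_ID('[table]');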

flam3
  • This should have been a comment, not an answer. With a bit more rep, [you will be able to post comments](http://stackoverflow.com/privileges/comment). – IKavanagh Dec 07 '15 at 09:27
1

Check the execution plan of this delete statement. Have a look at whether an index seek is used. Also, what is the data type of col?

If you are using the wrong data type, change the delete statement to match (e.g. from '1' to 1, or to N'1').

If an index scan is used, consider a query hint (see the sketch below).
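
For example, one hint to consider, a sketch only (FORCESEEK requires SQL Server 2008+, and it helps only when the optimizer is wrongly choosing a scan over a usable index):

DELETE FROM [table] WITH (FORCESEEK)
WHERE [COL] = '1'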

Jānis
0

Is [COL] really a character field that's holding numbers, or can you get rid of the single-quotes around the values? @Alex is right that IN is slower than =, so if you can do this, you'll be better off:

DELETE FROM [table] WHERE [COL] = '1'

But better still is using numbers rather than strings to find the rows (SQL compares numbers faster than strings):

DELETE FROM [table] WHERE [COL] = 1

Maybe try:

DELETE FROM [table] WHERE CAST([COL] AS INT) = 1

In either event, make sure you have an index on column [COL] so the delete can seek instead of scanning the table.

Russell Fox
  • Unfortunately the column holds letters and numbers, and I do have an index =/. – Kyle Jun 05 '12 at 19:06
  • I'm not sure if it will help, but one thing we do here is put an "IsActive" column into all of our important tables, allowing us to update a field rather than delete rows. Handy for auditing and less messy in terms of rebuilding indexes on deletion. Of course, all subsequent views/queries/procs/functions have to include "WHERE ISACTIVE = 1". – Russell Fox Jun 05 '12 at 22:19
  • Casting all values in the database to an int may actually hurt performance significantly. It may also prevent SQL Server from using the index. So if your proposal has any effect, it's most probably for the worse. – Stefan Steinegger Sep 10 '13 at 11:03
0

I read this article and it was really helpful for troubleshooting this kind of problem:

https://support.microsoft.com/en-us/kb/224453

This is a case of waitresource KEY: 16:72057595075231744 (ab74b4daaf17):

-- First, find the SPID (session ID) of the blocker, then check its
-- status, open_tran, lastwaittype, waittype, waittime and, most
-- importantly, its waitresource.
select * from sys.sysprocesses where spid = 57

-- The first number in the waitresource KEY is the database id.
select * from sys.databases where database_id = 16

-- The second number is the hobt_id; use it to find the partition...
select * from sys.partitions where hobt_id = 72057595075231744

-- ...and look up the partition's object_id to identify the locked object.
select * from sys.objects where object_id = 2105058535
Eduardo
0

If you're deleting all the records in the table rather than a select few, it may be much faster to just drop and recreate the table.
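
In the same spirit, if nothing references the table with foreign keys, TRUNCATE TABLE deallocates all pages in one minimally logged operation (as codingbiz noted in the question comments, it is much faster than DELETE but can't always be used):

TRUNCATE TABLE [table]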

lastlink
0

After inspecting an SSIS package (because a SQL Server was executing commands really slowly) that was set up at a client of ours about 4-5 years before the time of writing, I found it performed these tasks:

1) insert data from an XML file into a table called [ImportBarcodes],

2) run a MERGE command on another target table, using the above table as its source,

3) run DELETE FROM [ImportBarcodes] to clear the table of the rows that were inserted after the XML file was read by the package.

After a quick inspection, all statements (SELECT, UPDATE, DELETE etc.) on the ImportBarcodes table, which held only 1 row, took about 2 minutes to execute.

Extended Events showed a whole lot of PAGEIOLATCH_EX waits.

No indexes were present on the table and no triggers were registered.

On closer inspection of the table's properties, in the Storage tab under the General section, the Data Space field showed more than 6 gigabytes of space allocated in pages.

What happened:

The package had run for a good portion of each day for the last 4 years, inserting and deleting data in the table and leaving unused pages behind without freeing them up.

So that was the main reason for the wait events captured by the Extended Events session and the slowly executing commands against the table.

Running ALTER TABLE ImportBarcodes REBUILD fixed the issue by freeing up all the unused space. TRUNCATE TABLE ImportBarcodes does a similar thing, with the difference that it deletes all pages and data.
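
A quick sketch for spotting this kind of mismatch: compare the reserved space with the actual data size for the table.

-- 'reserved' far larger than 'data' for a near-empty table hints at
-- unreclaimed pages like the ones described above.
EXEC sp_spaceused 'ImportBarcodes';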

jimas13
0

Older topic, but one still relevant. Another issue occurs when an index has become fragmented to the point of being more of a problem than a help. In such a case, the answer is to rebuild or drop and recreate the index, then issue the delete statement again. A sketch for checking fragmentation follows.
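
A sketch for measuring fragmentation before deciding to rebuild ([table] is a placeholder; the DMV exists in SQL Server 2005+):

-- avg_fragmentation_in_percent above roughly 30% is the usual
-- rule-of-thumb threshold for a rebuild rather than a reorganize.
SELECT i.name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('[table]'), NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id;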

0

As an extension to Andomar's answer above, I had a scenario where the first 700,000,000 records (of ~1.2 billion) processed very quickly, with chunks of 25,000 records deleting per second (roughly). But then it started taking 15 minutes to do a batch of 25,000. I reduced the chunk size to 5,000 records and it went back to its previous speed. I'm not certain what internal threshold I hit, but the fix was to reduce the number of records per batch further to regain the speed.

Cryptc
-7

Open CMD and run these commands:

NET STOP MSSQLSERVER
NET START MSSQLSERVER

This will restart the SQL Server instance. Try your delete command again after that.

I have these commands in a batch script and run them from time to time when I encounter problems like this. A normal PC restart is not the same, so restarting the instance is the most effective way if you are having issues like this with your SQL Server.

Mhelboy
  • This doesn't answer the original question, or if it does, you're not explaining why this would help. – mjuarez Feb 05 '18 at 04:43