I would totally give credit to whoever did this, but I can't figure out who it is. (Otherwise I would have gone to their site, stolen their code, run it in my environment, and gone off on my merry way... and my spouse wouldn't yell at me for coming home late because I blogged it.)
ALSO: There's almost assuredly a better way to do this, but it's already 5:30 so I rushed it.
When you delete from a large table, you delete in batches: no blowing out the transaction log, less contention, etc., etc.
DELETE TOP (5000) FROM Table_X
But it gets slower as it progresses, because each batch has to scan from the start of the table until it finds rows to delete. So your IO goes up each time it runs.
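For reference, the basic batch-delete loop looks something like this. It's a sketch only: `Table_X`, `inserted_datetime`, and the purge cutoff are placeholder names for illustration.

```sql
-- Naive batched delete: keep looping until a batch deletes nothing.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM Table_X
    WHERE inserted_datetime < '2014-02-01';  -- whatever your purge criteria are

    SET @rows = @@ROWCOUNT;
END
```

Each pass deletes at most 5000 rows, but every pass re-scans from the beginning, which is exactly the slowdown described above.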
One potential way around that is to use Simon Sabin's trick...
delete t1
from (select top (10000) *
      from t1 order by a) t1
Someone (please tell me who and I'll happily change this!) had a really clever idea: each time, figure out what the most recent record deleted was, and use that to modify the WHERE clause so the next batch only looks at more recent rows.
E.g., your table is indexed (clustered, whatever) on inserted_datetime. You DELETE TOP (5000) ... ORDER BY (which is why I have the CTE; you could probably do the derived-table trick above), and the most recent row deleted was 1/1/2014 12:23:34.567. You know that because you saved the deleted values to a table (via the OUTPUT clause), then grabbed the most recent into a variable, which goes in the WHERE clause so the next delete starts looking at that time (1/1/2014 12:23:34.567).
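Put together, the whole thing looks roughly like this. This is a sketch, not the original author's exact code: `Table_X`, `inserted_datetime`, and the cutoff date are hypothetical names; adapt them to your own clustered key.

```sql
-- Batched delete that remembers where it left off.
-- Assumes Table_X is clustered (or at least indexed) on inserted_datetime.
DECLARE @last datetime = '1900-01-01';              -- start of the range
DECLARE @deleted table (inserted_datetime datetime);
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE FROM @deleted;                           -- clear the holding table each pass

    ;WITH batch AS
    (
        SELECT TOP (5000) *
        FROM Table_X
        WHERE inserted_datetime >= @last            -- skip everything already deleted
          AND inserted_datetime < '2014-02-01'      -- your purge cutoff
        ORDER BY inserted_datetime
    )
    DELETE FROM batch
    OUTPUT deleted.inserted_datetime INTO @deleted; -- remember what we deleted

    SET @rows = @@ROWCOUNT;

    -- Grab the most recent value deleted; the next batch starts looking there.
    SELECT @last = MAX(inserted_datetime) FROM @deleted;
END
```

The `>=` (rather than `>`) matters if inserted_datetime isn't unique: rows sharing the last deleted timestamp get another look on the next pass instead of being skipped.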
Does it work? Like a champ. My "stupid" delete was taking 30 seconds when I stopped it an hour in.
This one is still doing batches every 2-5 seconds. Of course, for this post I had to stop it... and so when it started back up it needed 8 minutes to figure out where it was (and then each subsequent batch took 2-5 seconds).
Enjoy, and thanks again to whoever had the great idea!!