Friday, January 16, 2015

[T-SQL] A more efficient DELETE using TOP, CTEs, OUTPUT, and somebody else's clever idea.

I would totally give credit to whoever did this, but I can't figure out who it is (otherwise I would have gone to their site, stolen their code, run it in my environment, and then gone off on my merry way... and my spouse wouldn't yell at me for coming home late because I blogged it).

ALSO: There's almost assuredly a better way to do this, but it's already 5:30 so I rushed it.


When you delete from a large table, you delete in batches: no blowing out the log, less contention, etc., etc.

DELETE TOP (5000) FROM Table_X
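
In practice you wrap that in a loop and let @@ROWCOUNT tell you when you're done - a minimal sketch, with the table name, batch size, and delete criteria all illustrative:

WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM Table_X
    WHERE is_purgeable = 1   --whatever your delete criteria are

    IF @@ROWCOUNT < 5000 BREAK   --a short batch means nothing's left
END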

But it gets slower as it progresses, because each time it has to scan from the beginning of the table until it finds rows that qualify. And so your IO goes up each time it runs.
One potential way around that is to use Simon Sabin's trick...
(http://sqlblogcasts.com/blogs/simons/archive/2009/05/22/DELETE-TOP-x-rows-avoiding-a-table-scan.aspx)

delete t1
from (select top (10000) *
        from t1 order by a) t1

This uses a derived table to force the delete to follow an indexed order. But the problem I ran into is that it still gets slower as it progresses - both because there are other rows in the table that I'm not deleting, and because it has a LOT of rows to delete.
 
Someone (please tell me who and I'll happily change it!) had a really clever idea: each time, figure out the most recent record deleted, and use that in the WHERE clause so the next batch only looks at more recent rows.

E.g. your table is indexed (clustered, whatever) on inserted_datetime. So you DELETE TOP 5000 ... ORDER BY (which is why I have the CTE; you could probably use the derived-table trick above), and the most recent row deleted was 1/1/2014 12:23:34.567. You know that because you saved the deleted values to a table (via OUTPUT), then grabbed the most recent into a variable, which goes in the WHERE clause so the next delete starts looking at that time (1/1/2014 12:23:34.567).
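
I don't have the original author's code handy, so here's a minimal sketch of the pattern as I ended up running it - table, column, and delete criteria are all illustrative:

DECLARE @watermark DATETIME = '19000101'   --anything earlier than the oldest row
DECLARE @deleted TABLE (inserted_datetime DATETIME)

WHILE 1 = 1
BEGIN
    DELETE FROM @deleted   --clear out the previous batch

    ;WITH batch AS
    (
        SELECT TOP (5000) inserted_datetime
        FROM dbo.Table_X
        WHERE inserted_datetime >= @watermark   --seek straight past everything already deleted
          AND is_purgeable = 1                  --whatever your delete criteria are
        ORDER BY inserted_datetime
    )
    DELETE FROM batch
    OUTPUT deleted.inserted_datetime INTO @deleted

    IF @@ROWCOUNT = 0 BREAK

    --the newest row we just deleted is where the next batch starts looking
    SELECT @watermark = MAX(inserted_datetime) FROM @deleted
END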

Does it work? Like a champ. Each batch of my "stupid" delete was taking 30 seconds by the time I stopped it, an hour in.
This one is still doing batches every 2-5 seconds. Of course, for this post I had to stop it... and when it started back up it needed 8 minutes to figure out where it was (and then each subsequent batch took 2-5 seconds).

Enjoy, and thanks again to whoever had the great idea!!

[SQLSaturday] Presenting at #SQLSatAustin - January 31st 2015! (two weeks away!)

Quick note:

I will be presenting SQLWatchdog (Event Notifications) at SQL Saturday Austin. It looks like it's going to have a ton of good presentations - heck, there's a Service Broker presentation up against mine! And there's some recently-under-NDA stuff that Conor Cunningham will be presenting on, too! And a wish-I-could-go-but-my-family-is-coming precon on Friday, as well.

All around, a good selection of speakers and good precons - a good time to be had by all. Come learn some new SQL tricks - I've been to a ton of these now, and I still learn something new. It costs $10 for a day of training (!!!!!), and that's basically covering lunch. I'm psyched. Maybe I can even eat at Franklin's!

https://www.sqlsaturday.com/362/eventhome.aspx

 

This will probably be one of my last presentations of SQLWatchdog, unless I make PASS 2015.  I've pretty much hit all the SQLSats that are within decent driving distance (and a couple that aren't), so it's time to come up with a new presentation.  Mixed feelings about it - still ultra-psyched about EN (and I still use the everliving daylights out of it), but I don't/can't just be "that guy" with one presentation.  Well, not if I want to go back to these places.  : )

Wednesday, January 7, 2015

[BULK INSERT] Importing a fixed-width file that has header rows à la SQL, but without a format file.

Needed this today. Basically, someone saved out a query as fixed-width text, but we weren't able to import it (Excel, BIDS & the Import/Export Wizard all failed). So I decided to build something that could do it.

A basic file would look like this. Note that the dashes and field names line up. So what we do is really basic but still effective - look for the singleton spaces in the dashes row, then use them to pull each field's name and offsets out of the header row.

STORE_CITY                          STORE_STATE STORE_ZIP
----------------------------------- ----------- ---------
RICHMOND                            VA          23229
SPRINGFIELD                         MO          65807

The one downside is that I didn't have time to tweak it to work with sp_executesql (my particular query is over 8000 characters), so I print out each line, toss them into my SELECT, strip out the last comma, and add the table name. (See the example after the script.)

/*
CREATE TABLE bulk_insert_raw
    (id INT IDENTITY PRIMARY KEY, resultant VARCHAR(8000))
GO
--view is needed in order to do a BULK INSERT with an inline IDENTITY - using a staging table would NOT guarantee order.
CREATE VIEW bulk_insert_view
AS
SELECT resultant FROM bulk_insert_raw
GO
BULK INSERT bulk_insert_view
   FROM '\\myserver\myfile.csv'
   WITH
      (
         FIELDTERMINATOR = '|',  --a character that never occurs in the file, so each whole line lands in one column
         ROWTERMINATOR = '0x0a'  --if a UNIX-type file
      )
*/
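
Once it's loaded, a quick sanity check that the lines landed in order - the first two rows should be the header and the dashes:

SELECT TOP (5) id, resultant
FROM bulk_insert_raw
ORDER BY id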


DECLARE @min INT, @max INT, @header VARCHAR(8000), @dashes VARCHAR(8000),
        @last_dash INT, @last_field VARCHAR(200)
SET @min = 1

--We know lines 1 and 2 are the header/dashes.
--The offset of 4 is because our particular data file had a bad first row; use 1 if yours is clean.
SET @header = (SELECT SUBSTRING(resultant,4,8000) FROM bulk_insert_raw WHERE id = 1)
SET @dashes = (SELECT resultant FROM bulk_insert_raw WHERE id = 2)

SET @max = LEN(@dashes) + 2 --go one past the end so we also catch the last field.
SET @last_dash = 1

WHILE @min < @max
BEGIN
    --a space (or running off the end of the string) marks the end of a field
    IF SUBSTRING(@dashes,@min,1) IN (' ', '')
        BEGIN
            SELECT @last_field = QUOTENAME(RTRIM(SUBSTRING(@header, @last_dash, @min - @last_dash)))
            PRINT @last_field + CONCAT(' = substring(resultant,', @last_dash, ',', @min - @last_dash, '),')
            SET @last_dash = @min + 1
        END

    SET @min = @min + 1
END
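
Run against the sample file above, the PRINTed lines come out something like this (exact offsets depend on your file):

[STORE_CITY] = substring(resultant,1,35),
[STORE_STATE] = substring(resultant,37,11),
[STORE_ZIP] = substring(resultant,49,9),

Paste those into a SELECT, strip the last comma, and point it at the staging table:

SELECT [STORE_CITY]  = substring(resultant,1,35),
       [STORE_STATE] = substring(resultant,37,11),
       [STORE_ZIP]   = substring(resultant,49,9)
FROM bulk_insert_raw
WHERE id > 2   --skip the header and dashes rows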

Tuesday, January 6, 2015

[Event Notifications] Changing security on the endpoints.

(Note: I still owe Justin some help on EN)


Recently I had an issue trying to remove my login from a server (I've been transitioning from Dev & Ops to more Dev & Arch), and found I couldn't, because my login still owned something: EN's (and thus Service Broker's) endpoint.
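
If you want to see up front what's blocking the drop, something like this lists any endpoints the login still owns (the login name is illustrative):

USE master;
SELECT e.name, e.type_desc, SUSER_NAME(e.principal_id) AS endpoint_owner
FROM sys.endpoints AS e
WHERE e.principal_id = SUSER_ID(N'DOMAIN\my_old_login');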

A former coworker running "SQLWatchdog" (aka my EN solution) came up with this code to fix the issue. The second block of code fixes the Always On (database mirroring) endpoint the same way.

And here's someone else who ran into the same problem. Code courtesy of both of them:


http://akawn.com/blog/2014/06/identify-and-change-a-sql-server-endpoint-owner/


--Service Broker (Event Notifications) endpoint: check the current owner, hand it to sa, verify.
USE master;
SELECT SUSER_NAME(principal_id) AS endpoint_owner
      ,name AS endpoint_name
FROM sys.service_broker_endpoints;

ALTER AUTHORIZATION ON ENDPOINT::ENAudit_Endpoint TO sa;

SELECT SUSER_NAME(principal_id) AS endpoint_owner
      ,name AS endpoint_name
FROM sys.service_broker_endpoints;

--------------------------------------------------------
--Same dance for the Always On (database mirroring) endpoint.
USE master;
SELECT SUSER_NAME(principal_id) AS endpoint_owner
      ,name AS endpoint_name
FROM sys.database_mirroring_endpoints;

ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa;

SELECT SUSER_NAME(principal_id) AS endpoint_owner
      ,name AS endpoint_name
FROM sys.database_mirroring_endpoints;
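
With the endpoints re-owned by sa, the original DROP LOGIN (name illustrative, as above) should now go through:

USE master;
DROP LOGIN [DOMAIN\my_old_login];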