Sunday, March 15, 2015

[Event Notifications] and impersonation

Had this question pop up at #SQLSatAustin last month.  Finally had some time to futz with it.

Naomi Williams (@naomithesqldba) asked a security question I hadn't considered before.  "How does Event Notification deal with impersonation?"

Hmmm.  Good question.

Well, there are two different ways to do impersonation: SETUSER and EXECUTE AS.
Testing both, they appear to act the same.

Our examples:

-- Who are we right now?
SELECT SUSER_SNAME()
GO
-- Impersonate a coworker, the old way...
SETUSER 'mydomain\mycoworkersname'
GO
SELECT SUSER_SNAME()
GO
CREATE TABLE test_using_setuser (id INT IDENTITY)
GO
DROP TABLE test_using_setuser
GO
-- SETUSER with no argument reverts back to ourselves.
SETUSER
GO
SELECT SUSER_SNAME()
GO

-- ...and the new(er) way.
EXECUTE AS LOGIN = 'mydomain\mycoworkersname'
CREATE TABLE test_using_executeaslogin (id INT IDENTITY)
REVERT


Let's look at the obvious event, the CREATE_TABLE.

(screenshot: the CREATE_TABLE event notification - the LoginName shown is my coworker's)
Hmmm, looks like he did it, right?

Well, fortunately for him, I also capture trace events, via TRC_OBJECTS - which carries extra information that isn't normally needed.  But it's saved my bacon before, because it includes the App Name and Hostname.

And what does that one look like?

(screenshot: the TRC_OBJECTS trace event for the same CREATE TABLE - note the HostName and SessionLoginName columns)
Busted!  Two different ways, in fact.  First of all, look at the HostName, which is my desktop.  Though that can be worked around (most easily by connecting to the box itself - how long, after all, do your systems people store login info?).

However, the other is much more damning.  SessionLoginName, which clearly shows it was me.
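
If you're capturing those trace events into a table, pulling out the tell-tale columns is one quick shred of the event XML.  A minimal sketch - the table and column names here (dbo.ENAudit_TraceEvents, event_xml) are hypothetical, so point it at wherever your EN solution lands its events:

SELECT
    event_xml.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname')        AS LoginName,        --who it LOOKS like
    event_xml.value('(/EVENT_INSTANCE/SessionLoginName)[1]', 'sysname') AS SessionLoginName, --who it actually was
    event_xml.value('(/EVENT_INSTANCE/HostName)[1]', 'sysname')         AS HostName,
    event_xml.value('(/EVENT_INSTANCE/ApplicationName)[1]', 'sysname')  AS AppName
FROM dbo.ENAudit_TraceEvents
WHERE event_xml.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname') LIKE 'test_using%'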

Thanks, Naomi!

Thursday, March 12, 2015

[Extended Events] Dynamically Shredding XML and PIVOTing the results for easy reading

I love Extended Events. I hate XML.  I love the dynamic shred for XML.  I hate having to write XML code when I can have it write itself.

And so....
Here's an easi(er) way to deal with XE XML.
This XE is specifically to find queries with a duration longer than 1 second.
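
The session itself is the easy part.  A minimal sketch (the session name is made up, and note that duration is measured in microseconds):

CREATE EVENT SESSION [long_queries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text)
     WHERE duration > 1000000)  --1,000,000 microseconds = 1 second
ADD TARGET package0.ring_buffer
GO
ALTER EVENT SESSION [long_queries] ON SERVER STATE = START
GO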


Results:

(screenshot: one row per captured statement, one column per event field)
How, you ask?
We create an Extended Events session for statements of long-running duration.
We then feed the results into a dynamic XML shredder (local-name(.), that kind of thing).
We then take THOSE results and feed them into a dynamic PIVOT.
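
Roughly, steps two and three look like this.  A sketch, assuming the ring_buffer session above (swap in your own session and target names):

--Step 2: shred every data/action node dynamically - no hardcoded ordinals.
IF OBJECT_ID('tempdb..#shredded') IS NOT NULL DROP TABLE #shredded
;WITH xe_cte AS
(
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions s
    JOIN sys.dm_xe_session_targets st ON s.address = st.event_session_address
    WHERE s.name = 'long_queries' AND st.target_name = 'ring_buffer'
)
SELECT
    ev.evt.value('@timestamp', 'datetime2')              AS event_time,
    fld.nd.value('local-name(.)', 'sysname')             AS node_type,   --'data' or 'action'
    fld.nd.value('@name', 'sysname')                     AS field_name,
    fld.nd.value('(value/text())[1]', 'nvarchar(4000)')  AS field_value
INTO #shredded
FROM xe_cte
CROSS APPLY target_data.nodes('/RingBufferTarget/event') AS ev(evt)
CROSS APPLY ev.evt.nodes('*[local-name(.) = "data" or local-name(.) = "action"]') AS fld(nd)

--Step 3: build the PIVOT column list from whatever fields actually showed up.
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX)
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(field_name)
                      FROM #shredded GROUP BY field_name
                      ORDER BY field_name
                      FOR XML PATH('')), 1, 1, '')
SET @sql = N'SELECT event_time, ' + @cols + N'
FROM (SELECT event_time, field_name, field_value FROM #shredded) AS src
PIVOT (MAX(field_value) FOR field_name IN (' + @cols + N')) AS p
ORDER BY event_time'
EXEC sp_executesql @sql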

End result:
no crappy code that looks like:
event_data_XML.value('(event/data[3])[1]','INT') AS object_type,
event_data_XML.value('(event/data[4])[1]','INT') AS cpu,
event_data_XML.value('(event/data[5])[1]','INT') AS duration,

Enjoy!


Wednesday, March 11, 2015

[WAT] Fun length issue with REPLACE in SSMS "Results to"

Just had this burn me, and don't remember having seen this issue before.

REPLACE changes the length of your columns, in Results to Text/Results to File.

Run this piece of code in SSMS. Query->Results To->Grid.
SELECT TOP 100 REPLACE(REPLACE(name,'&',';'),'/','#'), name FROM sysdatabases



Copy and paste the results on successive lines.  The same length.
Now do the same thing in Results to Text...
And now the same in Results to File...

Note that they're no longer the same length.  This seems to have been the case since at least 2005.
The header of the REPLACEd column, which dictates the column width (if you're doing fixed-width stuff), is twice as long.  That happens even when the replacement characters are the same length as the originals.

Now let's see what our 2012+ friend sys.dm_exec_describe_first_result_set, which gives us the metadata for a query, has to say.

SELECT * FROM sys.dm_exec_describe_first_result_set('
SELECT TOP 100 replace(replace(name,''&'','';''),''/'',''$'') as rname, name
FROM sysdatabases', NULL, 0)

Which gets you this fun nugget...
(screenshot: the DMV's metadata for the two columns)
Hey look!  It went from nvarchar(128) (aka sysname) to nvarchar(4000).  Interesting!
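
If the width matters (fixed-width output, ETL, and so on), one workaround is to CAST the REPLACE back to the type you wanted - the reported metadata follows suit:

SELECT * FROM sys.dm_exec_describe_first_result_set('
SELECT TOP 100 CAST(replace(replace(name,''&'','';''),''/'',''$'') AS sysname) as rname, name
FROM sysdatabases', NULL, 0)
--rname now comes back as nvarchar(128) again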

Maybe this is normal/expected behavior.  But it made life difficult today, and experience is gained from doing it wrong, so here's your experience for the day.

Tuesday, February 10, 2015

SQLSentry - a possible gotcha for high-transaction boxes

Blogging it here for my own edification.

We ran into a problem last year where our performance on a box, monitored by SQLSentry, sucked. 

With the help of SQLSentry Support, we were able to find the problem.  Our code uses lots of functions, and their default statement-level trace slowed things down quite a bit - on one box, parsing was cut by 50%.

Note that this is NOT an inherent problem with SQLSentry, but with the level of detail they were collecting via the trace.

(More here: http://sqlblog.com/blogs/linchi_shea/comments/14682.aspx)

From their support (paraphrased and edited for clarity):

The easiest way to test if this helps in your case is just to turn it off and observe.

To disable it, you can go to View -> Settings, then change "Computer Settings" to "SQL Server Settings", then change "SQL Server Connection" to "Top SQL Settings". [on SQLSentry 8, it's "Top SQL Source"]

Now, set "Collect Statement Events" to False.  You will probably need to change "Inherit From Parent (Global)" to False in "Collection Settings", otherwise it won't let you change the value.  Finally, click on another server in the left-side drop down (which changes context) to make sure it's saved.

" What you will be losing here is really just the statement level detail for cached plans that SQL Sentry is collecting. It is nice to have that level of detail, but also not an absolute necessity."

Friday, January 16, 2015

[T-SQL] A more efficient DELETE using TOP, CTEs, OUTPUT, and somebody else's clever idea.

I would totally give credit to whoever did this, but I can't figure out who it was (otherwise I would have gone to their site, stolen their code, run it in my environment, and then gone off on my merry way... and my spouse wouldn't yell at me for coming home late because I blogged it).

ALSO: There's almost assuredly a better way to do this, but it's already 5:30 so I rushed it.


When you delete from a large table, you delete in batches.  No blowing out logs, less contention, etc, etc.

DELETE TOP (5000) FROM Table_X

But it gets slower as it progresses, because each pass has to scan past everything already deleted before it finds rows to remove.  And so your IO goes up each time it runs.
One potential way around that is to use Simon Sabin's trick...
(http://sqlblogcasts.com/blogs/simons/archive/2009/05/22/DELETE-TOP-x-rows-avoiding-a-table-scan.aspx)

delete t1
from (select top (10000) *
        from t1 order by a) t1

This uses a derived table to force the delete to use an indexed order.  But the problem I ran into is that it still gets slower as it progresses - both because there are other rows in the table that I'm not deleting, and because it has a LOT of rows to delete.
 
Someone (please tell me who and I'll happily change this!) had a really clever idea - figure out, each time, what the most recent record deleted was, and use that to modify the WHERE clause so the next batch only looks at more recent rows.

E.g. your table is indexed (clustered, whatever) on inserted_datetime.  You DELETE TOP 5000 ... ORDER BY (which is why I have the CTE; you could probably use the derived-table trick above), and the most recent row deleted was 1/1/2014 12:23:34.567.  You know that because you saved the deleted values to a table (via OUTPUT), then grabbed the most recent into a variable, which goes into the WHERE clause so the next delete starts looking at that time (1/1/2014 12:23:34.567).  A sketch of the whole loop is below.
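
A minimal sketch of the loop (table name, batch size, and cutoff date are all made up; assumes the clustered index is on inserted_datetime):

DECLARE @watermark DATETIME = '19000101', @rows INT = 1
DECLARE @deleted TABLE (inserted_datetime DATETIME)

WHILE @rows > 0
BEGIN
    ;WITH doomed AS
    (
        SELECT TOP (5000) inserted_datetime
        FROM dbo.Table_X
        WHERE inserted_datetime >= @watermark   --seek straight past the already-deleted range
          AND inserted_datetime <  '20140601'   --whatever you're purging
        ORDER BY inserted_datetime
    )
    DELETE FROM doomed
    OUTPUT deleted.inserted_datetime INTO @deleted

    SET @rows = @@ROWCOUNT

    IF @rows > 0
    BEGIN
        --remember the newest row we just removed; the next batch starts there
        SELECT @watermark = MAX(inserted_datetime) FROM @deleted
        DELETE FROM @deleted   --clear it for the next pass
    END
END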

Does it work? Like a champ.  My "stupid" delete was taking 30 seconds per batch when I stopped it, an hour in.
This one is still doing batches every 2-5 seconds.  Of course, for this post I had to stop it... and so when it started back up it needed 8 minutes to figure out where it was (and then each subsequent batch took 2-5 seconds).

Enjoy, and thanks again to whomever had the great idea!!

[SQLSaturday] Presenting at #SQLSatAustin - January 31st 2015! (two weeks away!)

Quick note:

I will be presenting SQLWatchdog (Event Notifications) at SQL Saturday Austin.  It looks like it's going to have a ton of good presentations - heck, there's a service broker presentation up against mine!  And there's some recently-under-NDA stuff that Conor Cunningham will be presenting on, too!  And a wish-I-could-go-but-my-family-is-coming Precon on Friday, as well.

All around, good selection of speakers, good precons - a good time had by all.  Come learn some new SQL tricks - I've been at a ton of these now, and I still learn something new.  It costs $10 for a day of training(!!!!!), and that's basically covering lunch.  I'm psyched.  Maybe I can even eat at Franklin's!

https://www.sqlsaturday.com/362/eventhome.aspx


This will probably be one of my last presentations of SQLWatchdog, unless I make PASS 2015.  I've pretty much hit all the SQLSats that are within decent driving distance (and a couple that aren't), so it's time to come up with a new presentation.  Mixed feelings about it - still ultra-psyched about EN (and I still use the everliving daylights out of it), but I don't/can't just be "that guy" with one presentation.  Well, not if I want to go back to these places.  : )

Wednesday, January 7, 2015

[BULK INSERT] importing a fixed-width file that has header rows ala SQL, but without a format file.

Needed this for today.  Basically someone saved out a query as fixed-width, but we weren't able to import it (Excel, BIDS & the Import/Export Wizard all failed).  So I decided to build something that could.

A basic file would look like this.  Note that the dashes and field names line up.  So what we do is really basic but still effective - look for the singleton spaces in the dash row, then use those positions to pull the field details out of the header row.

STORE_CITY                          STORE_STATE STORE_ZIP
----------------------------------- ----------- ---------
RICHMOND                            VA          23229
SPRINGFIELD                         MO          65807

The one downside is that I didn't have time to tweak it to work with sp_executesql (my particular query is over 8000 characters), so I PRINT out each line, toss that into my SELECT, strip out the last comma, and add the table name.  (There's an assembled example after the script.)

/*
CREATE TABLE bulk_insert_raw
    (id INT IDENTITY PRIMARY KEY, resultant VARCHAR(8000))
GO
--view is needed in order to do a BULK INSERT with an inline IDENTITY - using a staging table would NOT guarantee order.
CREATE VIEW bulk_insert_view
AS
SELECT resultant FROM bulk_insert_raw
GO
BULK INSERT bulk_insert_view
   FROM '\\myserver\myfile.csv'
   WITH
      (
         FIELDTERMINATOR = '|',  --a character that never appears in the file, so each whole line lands in one column
         ROWTERMINATOR = '0x0a'  --if a UNIX-type file
      )
*/


DECLARE @min INT, @max INT, @header VARCHAR(8000), @dashes VARCHAR(8000), @last_dash INT, @last_field VARCHAR(200), @sql VARCHAR(max)
SET @min = 1

--The offset (the SUBSTRING starting at character 4) is because our particular data file had junk at the start of its first row.

--We know line 1 and 2 are the header/dashes.
SET @header = (SELECT SUBSTRING(resultant,4,8000) FROM bulk_insert_raw WHERE id = 1)
SET @dashes = (SELECT resultant FROM bulk_insert_raw WHERE id = 2)

SET @max = LEN(@dashes) + 2 --one past the end of the dash row, so the loop flushes the last field too.
SET @last_dash = 1

SET @sql = 'select ' --left over from the sp_executesql attempt; we PRINT the lines instead (see above).
WHILE @min < @max
BEGIN
   
    IF (SELECT SUBSTRING(@dashes,@min,1)) IN (' ', '') --a space (or running past the end) in the dash row marks a field boundary
        BEGIN
            SELECT @last_field = QUOTENAME(RTRIM(SUBSTRING(@header, @last_dash, @min - @last_dash)))
            PRINT @last_field + CONCAT(' = substring(resultant,', @last_dash, ',', @min - @last_dash, ')', ',')
            SET @last_dash = @min + 1
        END

    SET @min = @min + 1
END
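
For the sample file above (and assuming a clean header row, so the header SUBSTRING starts at character 1 rather than 4), the PRINTed lines assemble into something like this - strip the final comma, add the FROM, and skip the two header rows:

select
[STORE_CITY] = substring(resultant,1,35),
[STORE_STATE] = substring(resultant,37,11),
[STORE_ZIP] = substring(resultant,49,9)
from bulk_insert_raw
where id > 2  --rows 1 and 2 are the header and dashes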

Tuesday, January 6, 2015

[Event Notifications] Changing security on the endpoints.

(Note: I still owe Justin some help on EN)


Recently had an issue trying to remove my login from a server (I've been transitioning from Dev & Ops to more Dev & Arch), and found I couldn't, because of some of the permissions I held.  Namely: ownership of EN's (and thus Service Broker's) endpoint.

A former coworker running "SQLWatchdog" (aka my EN solution) came up with this code to fix the issue.  The second block of code fixes the endpoint for Always On.

And here's someone else who had the problem.  Code courtesy of both of them.


http://akawn.com/blog/2014/06/identify-and-change-a-sql-server-endpoint-owner/


USE master;

--Who owns the Service Broker (EN) endpoint?
SELECT SUSER_NAME(principal_id) AS endpoint_owner
  ,name AS endpoint_name
FROM sys.service_broker_endpoints;

--Hand it over to sa...
ALTER AUTHORIZATION ON ENDPOINT::ENAudit_Endpoint TO sa;

--...and verify.
SELECT SUSER_NAME(principal_id) AS endpoint_owner
  ,name AS endpoint_name
FROM sys.service_broker_endpoints;

--------------------------------------------------------
--Same dance for the database mirroring (AlwaysOn) endpoint.
SELECT SUSER_NAME(principal_id) AS endpoint_owner
  ,name AS endpoint_name
FROM sys.database_mirroring_endpoints;

ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa;

SELECT SUSER_NAME(principal_id) AS endpoint_owner
  ,name AS endpoint_name
FROM sys.database_mirroring_endpoints;