Friday, August 31, 2012

Remember, when scripting out jobs for reuse...

That they all use the same schedule ID, and thus the same schedule.  Modify the schedule in one job and it's silently changed in the others.
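A quick way to see which jobs share a schedule before you script them out - a rough sketch against msdb's job tables:

```sql
-- Find schedules attached to more than one job (i.e. shared schedules)
SELECT s.schedule_id,
       s.name AS schedule_name,
       j.name AS job_name
FROM msdb.dbo.sysschedules s
JOIN msdb.dbo.sysjobschedules js ON js.schedule_id = s.schedule_id
JOIN msdb.dbo.sysjobs j ON j.job_id = js.job_id
WHERE s.schedule_id IN (SELECT schedule_id
                        FROM msdb.dbo.sysjobschedules
                        GROUP BY schedule_id
                        HAVING COUNT(*) > 1)
ORDER BY s.schedule_id, j.name;
```

If a schedule shows up under multiple jobs there, give the job its own copy of the schedule before you modify it.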

Wednesday, August 1, 2012

[Automation] importing trace files via Cleartrace for several servers

(update 2013/09/19 minor bug to deal with a staging table that had non-sequential IDs)

I just posted this on the Cleartrace forums, but wanted a copy here.

Basically, Cleartrace takes trace files, imports the captured calls, and turns them into information: here's the code that runs most often, here's the code that takes the longest to run, here's your I/O hog, and so on.

Everybody I ask about Profiler typically talks about running it then manually going through the results.  While that's fine for small stuff, I'm more interested in aggregations and baselines.  So were the people behind Cleartrace, fortunately.  But I need it automated, and on several machines. Hence, this code.  

Once cleartrace is running and set up to use a server, run this code to add a config table, then set up some servers.  

I assume you're running with adequate permissions; if not, set up the folders, put the files in a share, and set that as your trace_folder.  Set up a specific name for the trace.

Once it's imported some, bring up Cleartrace (the real version) and choose an import group, then start looking at data.  Slice and dice as necessary.  

Note that this does NOT keep the details - it's all aggregates, even in the CT DB.  Which is why we save the trace files for posterity.

This is not any big deal - it's a rough first pass, and I'm sure I'll have to fix it later.  But it works, and it lets me check out servers for user requests.  (I didn't use XE or EN for several reasons, not the least of which is that XE doesn't have all the capabilities I need under 2008.)


USE Cleartrace
go
if object_id('Cleartrace..CT_AutoServerList') is not null
DROP TABLE CT_AutoServerList
CREATE TABLE CT_AutoServerList 
(id INT IDENTITY,
server_name sysname,
trace_folder VARCHAR(200),
trace_name VARCHAR(50),
is_active BIT,
insert_datetime DATETIME DEFAULT(GETDATE()),
update_datetime datetime 
)

SELECT * FROM CT_AutoServerList

INSERT INTO CT_AutoServerList
  ( server_name ,
  trace_folder,
  trace_name,
    is_active ,
    insert_datetime 
  )
VALUES ( 'yourservernamehere', -- server_name - sysname
'e:\foldertoholdtraces',
'tracenamegoeshere',
 1 , -- is_active - bit
'2012-07-31 10:02:00' 
)


-----------------------------
--Cleartrace automated load--
--mdb 2012/08/01 1.00      --
-----------------------------
CREATE PROCEDURE ReadTrace.usp_AutomateTraceLoad
as
/*
--mdb 20120801 first version!

Purpose: Pull trace files from a variety of servers and use Cleartracecmd (Cleartrace Command Line) to 
automatically load into tables for processing, using a particular trace name and saving to a folder
on the "processing" server.  

Cleartrace can be found at www.scalesql.com/cleartrace/
and is written(?) & maintained by Bill Graziano.  It's his tool and I take no credit for it.  
It's indispensable for reading traces.  My code simply tries to automate it.

Instructions:
Download Cleartrace (full version, since we use Cleartracecmd.exe).
Run Cleartrace and choose a server/database.  It will create the DB and objects as needed.  Import a file to make sure it's all created.
Run this script in that database.
Create a folder to hold the trace files
Add rows to the CT_AutoServerList table
Set up traces on the servers you want to manage.  Set a beginning/end time so that this has time to run at night and import.
Run traces.
While the traces are stopped, run this code

Goal: to run an import for each particular trace for a server, saving locally, and making available via cleartrace
Most of the code here is to guarantee that it gets processed once.
*/
DECLARE 
 @cleartrace_server sysname ,
 @cleartrace_database sysname,
 @archive_folder varchar(300),
 @executable_folder VARCHAR(300),
 @min INT, @max INT,
 @full_archive_folder_name varchar(300),
 @error INT,
 @cmd NVARCHAR(4000),
 @msg VARCHAR(1000)

if object_id('tempdb..#Error_Finder') is not null
    drop table #Error_Finder
create table #Error_Finder (listing nvarchar (255))

if object_id('tempdb..#File_Results') is not null
    drop table #File_Results
CREATE TABLE #File_Results (
File_Exists int,
File_is_a_Directory int,
Parent_Directory_Exists int
)

SET @cleartrace_server = 'cleartraceservernamehere'
SET @cleartrace_database = 'ClearTrace' 
SET @archive_folder = 'e:\trace_file_archive' --where to save trace files.
SET @executable_folder = 'e:\trace_file_archive' --where cleartracecmd.exe is.
SELECT @min = 1, @max = 0 --if no records at all, skip

--get rid of trailing slashes on the two fields, just in case
IF RIGHT(@archive_folder,1)='\' SET @archive_folder = LEFT(@archive_folder,LEN(@archive_folder)-1)
IF RIGHT(@executable_folder,1)='\' SET @executable_folder = LEFT(@executable_folder,LEN(@executable_folder)-1)

--Get a list of just the active servers/traces
--we could replace this with a ROW_NUMBER'd table, but we call it multiple places and I don't want more variables
--doing this so we only run against valid servers
DECLARE @server_list TABLE 
 (id INT IDENTITY,
 server_name sysname,
 trace_folder VARCHAR(200),
 trace_name VARCHAR(50))
INSERT INTO @server_list 
 SELECT server_name, 
   trace_folder, 
   trace_name 
 FROM CT_AutoServerList 
 WHERE is_active = 1

--loop through valid servers and grab trace files 
SELECT @min = MIN(id), @max = MAX(id) FROM @server_list
SELECT @min, @max
WHILE @min <= @max
BEGIN
 TRUNCATE TABLE #File_Results
 TRUNCATE TABLE #Error_Finder
 SELECT @full_archive_folder_name = NULL,
   @error = 0,
   @cmd = NULL 

 ------------------------------------------------------
 --Step 1: verify archive folder exists for that server
 ------------------------------------------------------
 SELECT @full_archive_folder_name = @archive_folder + '\' + server_name 
 FROM @server_list WHERE id = @min

    --See if folder for server exists.  If not, try and create the folder.  If that fails, throw an error.
 INSERT INTO #File_Results
  (File_Exists, file_is_a_directory, parent_directory_exists)
 --verify it exists
 EXEC Master.dbo.xp_fileexist @full_archive_folder_name

 IF (SELECT TOP 1 File_is_a_Directory FROM #File_Results) = 0
 --if it does not, create it.  
 BEGIN
  SELECT @cmd = 'mkdir ' + @full_archive_folder_name
  INSERT #Error_Finder EXEC xp_cmdshell @cmd
  --throw errors if the create failed, as it'll import but not archive
  SET @error = @@ERROR
  IF @error <> 0 OR 
   (SELECT COUNT(*) FROM #Error_Finder where 
    listing like '%error%' 
    OR listing LIKE '%not ready%' 
    OR listing LIKE '%not found%') > 0
  BEGIN
   SELECT @msg = '[Cleartrace] Error ' + CONVERT(VARCHAR,@error) + ' occurred during folder creation.'
    RAISERROR (@msg,16,1)
  END
 END 

 --------------------------------------
 --Step 2: import using ClearTraceCmd--
 --------------------------------------
 TRUNCATE TABLE #Error_Finder
 SET @cmd = NULL 
 SET @error = 0
 SELECT @cmd = 
  @executable_folder + '\ClearTraceCmd.exe' 
  + ' /s ' + @cleartrace_server
  + ' /d ' + @cleartrace_database
  + ' /f ' + '\\' + server_name + '\' + REPLACE(trace_folder,':','$') + '\' --change e: to e$, if not using shares
  + trace_name + '_*' --_* allows web1 and web2
  + ' /group ' + server_name + '_' + trace_name 
  + ' /archivedir ' + @full_archive_folder_name
 FROM @server_list WHERE id = @min

 INSERT #Error_Finder EXEC xp_cmdshell @cmd

 SET @error = @@ERROR
 IF @error <> 0 OR 
  (SELECT COUNT(*) FROM #Error_Finder where 
   listing like '%error%' 
   OR listing LIKE '%not ready%' 
   OR listing LIKE '%not found%') > 0
 BEGIN
  SELECT @msg = '[Cleartrace] Error ' + CONVERT(VARCHAR,@error) + ' occurred during cmdline import.'
   RAISERROR (@msg,16,1)
  SELECT * FROM #Error_Finder
 END

 SET @min = @min + 1
end



DROP TABLE #Error_Finder
DROP TABLE #File_Results
go

Wednesday, July 25, 2012

Event Notifications - another curve

Another learning curve, after getting 98% there with Extended Events (see previous post).  For many things XE is a better tool, but for my purposes (DDL tracking over time) EN is better (again, thanks Jonathan for a great post on it: http://sqlskills.com/blogs/jonathan/post/Event-Notifications-vs-Extended-Events.aspx).  


Note that EN is NOT the same as SQL Server Notification Services, a separate subscription/notification platform that MS has since dropped.

Event Notifications started with SQL Server 2005, and are woefully underutilized.
Here's my annotated basic attempt at tracking all DDL changes on a server.  Next up (probably next post) is routing to a Service Broker queue on a secondary server, with an activated SP that writes records to a table.  As is, this is a proof of concept that writes to a queue.  DO NOT JUST LEAVE IT RUNNING: it'll fill up your queue and grow your database, and you'll never figure out where the space went (queues are sorta-hidden when it comes to space used).

TL;DR: Event Notifications track events and route them through Service Broker.  That gives you both power and danger.

Code cribbed from both Jonathan Kehayias (http://www.sqlservercentral.com/articles/Event+Notifications/68831/) and Arshad Ali (http://www.mssqltips.com/sqlservertip/2121/event-notifications-in-sql-server-for-tracking-changes/).


ALTER DATABASE db_stuff SET ENABLE_BROKER;

go
USE db_stuff
go
CREATE QUEUE EventNotificationQueue   -- the reason you frequently see \\servername\queuename is because it needs to be unique
GO
create SERVICE EventNotificationService ON QUEUE EventNotificationQueue ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])
GO
use db_stuff
go
CREATE EVENT NOTIFICATION CaptureErrorLogEvents
 ON SERVER
 WITH FAN_IN
 FOR DDL_EVENTS, ALTER_SERVER_CONFIGURATION
 TO SERVICE 'EventNotificationService', 'current database';
GO
create table blah (id int identity)
go
drop table blah
go
SELECT * FROM db_stuff..EventNotificationQueue

SELECT CAST(message_body AS XML) AS message_body_xml
FROM db_stuff.dbo.EventNotificationQueue

with MESSAGE_DATA as (SELECT queuing_order, CAST(message_body AS XML) AS message_body_xml
FROM db_stuff.dbo.EventNotificationQueue
)
SELECT
 message_body_xml.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(128)' ) as EventType,
 message_body_xml.value('(/EVENT_INSTANCE/PostTime)[1]', 'varchar(128)') AS PostTime,
 message_body_xml.value('(/EVENT_INSTANCE/SPID)[1]', 'varchar(128)') AS SPID,
 message_body_xml.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(128)' ) AS LoginName,
 message_body_xml.value('(/EVENT_INSTANCE/UserName)[1]', 'varchar(128)' ) AS UserName,
 message_body_xml.value('(/EVENT_INSTANCE/ServerName)[1]', 'varchar(128)' ) AS ServerName,
 message_body_xml.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(128)' ) AS DatabaseName,
 message_body_xml.value('(/EVENT_INSTANCE/SchemaName)[1]', 'varchar(128)' ) AS SchemaName,
 message_body_xml.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(128)' ) AS ObjectName,
 message_body_xml.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(128)' ) AS ObjectType,
 message_body_xml.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'varchar(max)' ) AS CommandText,
 *
from message_data order by queuing_order
go

DROP EVENT NOTIFICATION CaptureErrorLogEvents on server
go
DROP SERVICE EventNotificationService
GO
DROP QUEUE EventNotificationQueue
go


Annotated:

ALTER DATABASE db_stuff SET ENABLE_BROKER;

turn on service broker on the database.  It's on by default in MSDB, but let's not play god in system databases today.

USE db_stuff
CREATE QUEUE EventNotificationQueue
create the service broker queue in "db_stuff".  EN uses service broker.

create SERVICE EventNotificationService ON QUEUE EventNotificationQueue ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])
The notification sends to the service which sends to the queue. It needs a schema, MS has provided that (the http).

CREATE EVENT NOTIFICATION CaptureErrorLogEvents
 ON SERVER
can be server-wide, database-wide, etc.  I'm tracking all DDL changes on the server, so server it is.

 WITH FAN_IN
prevents multiple inserts from the same event into the same queue.

 FOR DDL_EVENTS, ALTER_SERVER_CONFIGURATION
the things I'm actually tracking.  You could list events individually, but there are over 1,000, so I'm using "event groups".  Here, I'm looking for server-level config changes and DDL events (the latter includes UPDATE STATISTICS, which means it can fire a LOT).

 TO SERVICE 'EventNotificationService', 'current database';
send it to the service.  And yes, 'current database' is the proper format.  Do Not Like that name.

SELECT * FROM db_stuff..EventNotificationQueue
As of the CREATE EVENT NOTIFICATION, the queue is on and catching data.  Let's see what it's caught.  Not really legible, is it?  Let's change that.

SELECT CAST(message_body AS XML) AS message_body_xml
FROM db_stuff.dbo.EventNotificationQueue



So at this point, we'll use a CTE (or you could use a derived table; whatever) to provide a useful query that pulls out most of the provided fields.
with MESSAGE_DATA as (SELECT queuing_order, CAST(message_body AS XML) AS message_body_xml
FROM db_stuff.dbo.EventNotificationQueue
)
SELECT
 message_body_xml.value('(/EVENT_INSTANCE/EventType)[1]', 'varchar(128)' ) as EventType,
 message_body_xml.value('(/EVENT_INSTANCE/PostTime)[1]', 'varchar(128)') AS PostTime,
 message_body_xml.value('(/EVENT_INSTANCE/SPID)[1]', 'varchar(128)') AS SPID,
 message_body_xml.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(128)' ) AS LoginName,
 message_body_xml.value('(/EVENT_INSTANCE/UserName)[1]', 'varchar(128)' ) AS UserName,
 message_body_xml.value('(/EVENT_INSTANCE/ServerName)[1]', 'varchar(128)' ) AS ServerName,
 message_body_xml.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(128)' ) AS DatabaseName,
 message_body_xml.value('(/EVENT_INSTANCE/SchemaName)[1]', 'varchar(128)' ) AS SchemaName,
 message_body_xml.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(128)' ) AS ObjectName,
 message_body_xml.value('(/EVENT_INSTANCE/ObjectType)[1]', 'varchar(128)' ) AS ObjectType,
 message_body_xml.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'varchar(max)' ) AS CommandText,
 *
from message_data order by queuing_order



And finally, a list of all the events, courtesy BOL.

WITH DirectReports(name, parent_type, type, level, sort) AS
(
    SELECT CONVERT(varchar(255),type_name), parent_type, type, 1, CONVERT(varchar(255),type_name)
    FROM sys.trigger_event_types
    WHERE parent_type IS NULL
    UNION ALL
    SELECT  CONVERT(varchar(255), REPLICATE ('|   ' , level) + e.type_name),
        e.parent_type, e.type, level + 1,
    CONVERT (varchar(255), RTRIM(sort) + '|   ' + e.type_name)
    FROM sys.trigger_event_types AS e
        INNER JOIN DirectReports AS d
        ON e.parent_type = d.type
)
SELECT parent_type, type, name
FROM DirectReports
ORDER BY sort;

Extended Events - climbing the learning curve

I recently spent an evening climbing the learning curve on XE (Extended Events), not the least of which because it acted slightly differently between SQL Server 2008 and SQL Server 2012.  I'd initially got it working, but couldn't reproduce that success for another couple of hours because I was running the code on 2008 (which lacks certain events that 2012 has - doh!).


Here's what I've learned, mostly here as a basic HOWTO, but also to remind me later.


1) It's actually pretty easy!  
2) It's the future - Profiler as we know it is going away (but you have a couple of years).  This is the replacement.
3) Jonathan Kehayias is awesome. He's gone through a bunch of pain - use his lessons learned.
4) SSMS 2012 offers it natively.  There's a plugin for SSMS 2008/R2 (Extended Event Session Explorer) that adds an option under the "View" menu.  Yes it works, but use it to assist what you're doing, don't just be the "GUI guy".
5) BOL is pretty good, but some of the obvious pages are hidden.  http://msdn.microsoft.com/en-us/library/bb630284(v=sql.105).aspx is a good example. (Somehow hadn't seen it before today)
6) Once you go through an example, all the giant blocks of code on the web make perfect sense.  That alone is a good reason to go through this exercise.


Here's an example I'm using now.  You can run just this first block of code, pop open a new window, run a command, come back and see what happened.  Go on, do it - I'll wait.  This works in 2008 and 2012.


CREATE EVENT SESSION [web_users_XE] ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION(sqlserver.sql_text,sqlserver.username)
    WHERE ([sqlserver].[username]<>'web_users')
)
ADD TARGET package0.asynchronous_file_target(SET filename=N'C:\web_users_XE.xel'
, metadatafile='c:\web_users_XE.xem')
WITH (STARTUP_STATE=OFF)
GO
ALTER EVENT SESSION [web_users_XE] ON SERVER STATE = START
go
--at this point open a new window and run a command or two, hit an SP if you can...
sp_help
go
waitfor delay '00:00:30'
go
ALTER EVENT SESSION [web_users_XE] ON SERVER STATE = STOP
go
DROP EVENT SESSION [web_users_XE] ON SERVER;





So what's that mean and do?  Let's cover one part at a time.


CREATE EVENT SESSION [web_users_XE] ON SERVER 

pretty self-explanatory, and mandatory: this creates the actual session.


ADD EVENT sqlserver.rpc_completed(
Now let's see: there are sessions, events, actions, and targets.  The SESSION is like the full sql trace.  The EVENTs are just like Trace Events (RPC:Completed, etc). There's even a table in 2012 that gives you a "this in trace is this in XE".  This is RPC:Completed.


    ACTION(sqlserver.sql_text,sqlserver.username)

what are we going to save?  ACTIONs are "columns" in the trace (TextData, etc).  So here we save the SQL text and the username of the user running the query.  Note that this is for this particular event - which means you can say "for event A save these fields, for event B save these other fields", and so on.


    WHERE ([sqlserver].[username]<>'web_users')
our filter - for this EVENT, skip anything run by user "web_users".  As with the ACTION, it's per event.


ADD TARGET package0.event_file(SET filename=N'C:\SQL_Log\web_users_XE.xel')
OR

ADD TARGET package0.asynchronous_file_target(SET filename=N'C:\web_users_XE.xel'
, metadatafile='c:\web_users_XE.xem')


targets are "where the data goes".
Wait, why are there 2 versions?  The one in the block of code runs in both 2008/2012.  But the new name is "event_file".  Also, in 2008 you needed a metadata file to be able to parse it; they got rid of that in 2012.  It WILL NOT CREATE ONE, even if specified.  Nor do you need it to parse.
Targets: The 3 most-common are ring, file, and histogram.  
* File is a file. It's XML so it needs to be parsed, but easy enough.
* Ring is a first-in-first-out set of memory.  As you save to it, older things get kicked out.
* Histogram is a series of buckets, grouped by whatever you choose.  Why is that useful?  Kehayias does a clever example with object_id and page splits.  Every time a split occurs, it either adds or increments a bucket for that object_id.  So rather than parsing a list of events and grouping to see which objects are affected, the histogram has already done it - just see which buckets have the highest counts, and that gives you the object IDs.
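As a sketch of what that histogram setup might look like - my own guess at the shape, not Kehayias's actual code, and note that 2008 calls this target the "bucketizer" rather than "histogram":

```sql
-- Sketch: bucket page_split events by database_id, so counts are pre-aggregated.
-- (2012 syntax; session name here is made up for illustration)
CREATE EVENT SESSION [page_splits_hist] ON SERVER
ADD EVENT sqlserver.page_split
ADD TARGET package0.histogram
    (SET filtering_event_name = N'sqlserver.page_split',
         source = N'database_id',  -- the "bucket" column
         source_type = 0)          -- 0 = event column, 1 = action
GO
ALTER EVENT SESSION [page_splits_hist] ON SERVER STATE = START
GO
-- the buckets live in memory; read them back as XML from the DMVs
SELECT CAST(t.target_data AS XML) AS buckets
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t ON t.event_session_address = s.address
WHERE s.name = 'page_splits_hist';
```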


WITH (STARTUP_STATE=OFF)
should this automatically start when the server starts?


ALTER EVENT SESSION [web_users_XE] ON SERVER STATE = START
actually start the session.

sp_help
Run something so we have an event.


waitfor delay '00:00:30'
There can be a 30 second delay before things are written to the file (to be lower-impact, it waits until a buffer fills).  There's a setting for this, but the default is 30 seconds.


ALTER EVENT SESSION [web_users_XE] ON SERVER STATE = STOP
stop the session.  It still exists, it's just not running.

DROP EVENT SESSION [web_users_XE] ON SERVER;
drop the session entirely.


Now, how do we parse it?  Parts cribbed from Kehayias again, notably getting the sql_text.
SELECT 
event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
DATEADD(hh, DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP), 
event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
event_data.value('(event/data[@name="cpu"]/value)[1]', 'int') AS [cpu],
        event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS [duration],
        event_data.value('(event/data[@name="reads"]/value)[1]', 'bigint') AS [reads],
        event_data.value('(event/data[@name="writes"]/value)[1]', 'bigint') AS [writes],
event_data.value('(event/action[@name="username"]/value)[1]', 'varchar(50)') AS username,
event_data.value('(event/action[@name="client_app_name"]/value)[1]', 'varchar(50)') AS application_name,
event_data.value('(event/action[@name="attach_activity_id"]/value)[1]', 'varchar(50)') AS attach_activity_id,
        REPLACE(event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)'), CHAR(10), CHAR(13)+CHAR(10)) AS [sql_text],
        event_data
FROM 
 (
 SELECT CAST(event_data AS xml)  AS 'event_data'
FROM sys.fn_xe_file_target_read_file('c:\web_users_XE*.xel', 'c:\web_users_XE*.xem', NULL, NULL)
)a ORDER BY DATEADD(hh, DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP), 
event_data.value('(event/@timestamp)[1]', 'datetime2')) 

Since it's XML, you need to parse it.  We convert it to XML from binary data, then use "value" to extract info.  One change between 2012 and 2008 is fn_xe_file_target_read_file.  On 2008 you need the metadatafile location.  On 2012 you don't need it, but it won't complain if it's there.

THAT'S IT!
Man, didn't that look more difficult?

Friday, June 29, 2012

[Baking] Olga's 11-layer Cake (aka 7-layer Chocolate Cake) or Torte

(pictures forthcoming)
This is my Baba's famed Eleven Layer Cake.  Also known as the Seven Layer Cake (we'll get into that)

I have no idea where she found this.  She says she found it in a cookbook, probably in the 50s.  She said it was a German Chocolate Torte.  What it is, to be honest, is divine.  A royal pain to make, but worth it.

It's an all-afternoon affair.  Since I hadn't made it by myself, I split it into 3 nights.


On the first night, I baked the 7 layers.  As I got better at it, I figured out why it used to be an 11-layer cake and is now a 7-layer cake: even with just 7 layers it's an amazing-looking cake, and you probably won't miss the other four.  If you practice on the first couple until you get it right, the batter will easily spread into 7 layers.  Once I'd gotten the hang of it, I could see how you could stretch it and wind up with 11 - but it'd be a beating.  Next time I do want to try 11, just to see if I can.

On the second night, I made and applied the filling.

On the third night, I "glazed" (iced/frosted/coated/etc) the cake.

On the fourth night, I rested.

On the fifth night, PARTY!

Here's the recipe.  Originally in high-altitude, low-humidity.  I didn't have to change anything.
The ONE tip I would give: weigh the intermediate steps.  If I knew exactly how much batter I had to work with on each layer, it would have gone MUCH more smoothly.


 
1 cup sugar
3 jumbo eggs
2 sticks butter
1 teaspoon baking powder
1 teaspoon (mexican) vanilla
1/2 cup cornstarch
1 1/2 cup flour
 
In mixer, mix butter and sugar until smooth.  Add eggs and mix well.  Add baking powder, vanilla and cornstarch and mix well.  Add flour and mix well (getting the picture?  Mix well!).  This will make 6 layers in a 9-inch cake round.  Spread the batter in a thin layer (with a super old blunt butter knife) in the bottom of the cake pan and bake at 350 degrees for about 7 minutes.  It should be brown all over when you pull it out.  Let it cool in the pan a few minutes, then put it on a cooling rack to finish cooling.
 
Filling:
3 egg yolks
3 cups milk
1/2 cup Nestle Quick (Nesquik) chocolate drink mix powder 
3 tablespoon Hershey's syrup
1/3 cup sugar
8 tablespoon flour
 
Mix yolk and sugar until smooth.  Add Nestle Quick and Hershey's syrup and mix well.  Add flour and 1/2 cup milk.  Mix well.  In a 3 quart pot, boil the remaining milk and add the chocolate mix.  Blend well and stir until it starts to boil.  Simmer for 1 minute.  Shut off stove and let cool.
 
Soften 2 sticks unsalted butter.  Beat well with mixer and add to cooled chocolate mixture.  Mix well.
 
Take 1 layer and put it on the cake dish.  Frost the top portion and add layers until done.  Leave the top portion unfrosted.  You will use the "Glazing".
 
Glazing:
1 1/2 cups powdered sugar
3/4 cup chocolate chips
OR  1 1/2 squares semi-sweet chocolate
1/2 tablespoon butter
1/4 cup hot water
 
In small pot, add butter and chocolate together and melt on stove.  Add sugar and water.  Mix well.  It should be thick like syrup.  If it is too thick, add water.  If it is too thin, add sugar.  Frost cake.

Extended Events - an idiot's primer

Me being the idiot for having taken this long to finally tackle it, and for all the mistakes I made tonight.

Here are the basics on Extended Events:
It's fast, lightweight, and will replace Profiler traces.  You need to learn this, and now.  Spend 30 minutes here and run this code, modify it, play with it.

Fortunately, there's a GUI for SSMS 2008 (the Extended Event Session Explorer), and SSMS 2012 supports XE natively - but ONLY IF you're hitting a 2012 server.  EESE comes with no documentation, so: install it, bring up Object Explorer, select a server, then go to the VIEW menu in SSMS - the first option is Show Extended Event Session Explorer.  It pretty much only creates a SQL script, which is perfect.  Read below, then install and play.

Okay, first hurdle done.  XE (Extended Events, get used to the term) is all command line, until you move to SSMS 2012 & SQL Server 2012.  All that EESE tool does is make it easier to create your script.  A LOT easier.

Here's my basic script.
* It creates an SESSION (a "trace", using the old terminology),
* tracking 1 or many EVENTs (pretty much the same as in Profiler; each event is separate and has separate filters).
* In order to get more data from the EVENTs, you add ACTIONs (the columns in Profiler).
* You can also add PREDICATEs - filters.  A predicate acts like a Profiler column filter and even uses a WHERE clause.
* Finally, you add a TARGET, which is where this goes.
** The two main ones to start with are the asynchronous_file_target (2012 renames it event_file; the name I use works in both versions), which is basically a file written to disk.  Note that a METADATAFILE is created on 2008.  YOU NEED THIS - you can't query the data without it.  Under 2012, there IS NO METADATAFILE, even if you explicitly ask for one; it just won't create it.  Turns out that's a change, and for the better.  Fortunately, in the code used to parse the file (sys.fn_xe_file_target_read_file), all you have to do is leave the second parameter as NULL and it'll work.
** There is also the RING_BUFFER, which is a fancy way of saying "save a few megabytes for it in RAM; no need to save this to disk or anything". Server reboots, it went bye-bye - not saved.  And as you save new "rows" to it, it can push out old rows.
* There is a WITH clause, which can do fancy things like STARTUP_STATE = ON, so when the server bounces the session starts back up.  There are others, but to be honest most of the defaults seem quite reasonable.

You then have to START the session.  Non-intuitive, and I don't think either the XE wizard in 2012 or the SSMS 2008 Codeplex tool does that unless you choose Start Immediately - which you honestly should stop and think about before doing.  You just wrote this new piece of code; find a quiet server to run it on.

So, you've started it.  Congrats!  Now that it's going, you can query it!  Whether you use the ring buffer or the file, it's live - but you may not see data for up to 30 seconds.  There's a thing called dispatch latency, which is a fancy way of saying "events can sit for up to X seconds before being written to the buffer/disk".  The default is 30 seconds.  The default size for the file is 1 GB, and I suspect it defaults to 5 rollover files.

But the query works.  Shamelessly copied from Jonathan Kehayias, who is an MVP and, I believe, one of The Guys when it comes to XE.  As well he should be.  This is me condensing the last 5 hours into 5 minutes; I had to skip some parts.

ONE OTHER VERY IMPORTANT THING:
this code isn't working quite right for me, but it could be because I've stared at it for 5 hours now (there's your learning curve; probably closer to 2-3 before it clicks).  For some reason, it's filtering my RPC calls.  I'll try and figure it out in the morning.



--Drop old session if it exists
IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='web_user_XEv')
    DROP EVENT SESSION [web_user_XEv] ON SERVER;
--Create the new session
CREATE EVENT SESSION [web_user_XEv]
ON SERVER
--One "add event" per type of event you want to capture; filters MUST be applied to each!
ADD EVENT sqlserver.rpc_completed
(
--this part is optional; add other things you want returned in the XML
     ACTION (sqlserver.sql_text,sqlserver.username,sqlserver.client_app_name)
--a predicate - this limits things. No examples on whether it's unicode or not; this sort of works.
-- you could also filter on the app name, etc, etc, etc.
     WHERE (((sqlserver.username<>'my.name')) and ((sqlserver.username<>N'the.other.guys.name')))
)
--How we actually save the data.  This saves to a file.
--This particular code is 2008/2012; it becomes "event_file" in 2012, but this works in both
ADD TARGET package0.asynchronous_file_target(
--the metadatafile is NOT CREATED in 2012, but you don't need it; just change the call that reads the log to NULL
-- And without the metadata file in 2008, the data file is useless; you can't query it.
-- However, even in 2008 you don't actually NEED to create the metadatafile beforehand; it will do it for you.  So rerun your CREATE script and use that file.
-- So I guess the metadatafile parameter is sort of like the appendix?
     SET filename='c:\sql_log\web_user.xel', metadatafile='c:\sql_log\web_user.xem'
)
--Lots of options here, but the defaults are fine.  You don't even need the latency.
-- However, one you probably want is STARTUP_STATE = ON; that autorestarts the session when the box comes up.
-- Dispatch latency says events can sit in RAM for up to 30 seconds before being written.
WITH (max_dispatch_latency = 30 SECONDS)
go

--Actually STARTS the trace
ALTER EVENT SESSION [web_user_XEv] ON SERVER STATE = START
GO

-------do whatever you want here.  You're tracing!  Congrats!--------------

--Now stop the trace.
ALTER EVENT SESSION [web_user_XEv] ON SERVER STATE = STOP
go

--Basic query to pull out data. You CAN (and should) run this while the XE is running.  Clever, eh?
SELECT
--          event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,  --"rpc_completed", etc
            DATEADD(hh, DATEDIFF(hh, GETUTCDATE(), CURRENT_TIMESTAMP),
            event_data.value('(event/@timestamp)[1]', 'datetime2')) AS [timestamp],
            event_data.value('(event/data[@name="cpu"]/value)[1]', 'int') AS [cpu],
event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS [duration],
event_data.value('(event/data[@name="reads"]/value)[1]', 'bigint') AS [reads],
event_data.value('(event/data[@name="writes"]/value)[1]', 'bigint') AS [writes],
            event_data.value('(event/action[@name="username"]/value)[1]', 'varchar(50)') AS username,
            event_data.value('(event/action[@name="client_app_name"]/value)[1]', 'varchar(50)') AS application_name,
REPLACE(event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)'), CHAR(10), CHAR(13)+CHAR(10)) AS [sql_text],
        event_data
FROM
 (
 SELECT CAST(event_data AS xml)  AS 'event_data'
FROM sys.fn_xe_file_target_read_file('c:\sql_log\web_user*.xel', 'c:\sql_log\web_user*.xem', NULL, NULL)
)a

Friday, June 15, 2012

Columnstore - errors if you run queries on SSMS 2008

Still trying to find the exact behavior, but running queries in SSMS 2008R2 threw a couple of weird errors when hitting a table with Columnstore.  First, the estimated execution plan wouldn't use the CS, even when explicitly called.  (That may be due to a cross-table view.)

Then, I managed to get this error:
Instance validation error: 'Batch Hash Table Build' is not a valid value for PhysicalOpType.
which doesn't really come up in Google.  And THAT occurs because (yup) the XML parser for the execution plan in SSMS 2008 doesn't know about "Batch Hash Table Build", because it's pre-columnstore.

Minor issue, but annoying.  If only SSMS 2012 didn't keep locking up on me.  : (

Tuesday, May 8, 2012

[trick] use compression SP to find partition size

I _know_ I'm missing a simple query here, probably using sys.partitions (actually, just found it.  Doh!  sys.dm_db_partition_stats), but I needed to know the size of a partition before and after adding several columns.  So...


EXEC sp_estimate_data_compression_savings
       @schema_name = 'dbo'
     , @object_name = 'my_staging' --then again after adding fields
     , @index_id = NULL
     , @partition_number = 290
     , @data_compression = 'row'

It gave me the size of that particular partition.  In retrospect the DMV is probably faster/stronger/better, but this worked in a pinch.
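
For the record, the DMV route looks something like this (a sketch - dbo.my_staging and partition 290 come from the example above; page counts are 8KB pages, hence the * 8 for KB):

```sql
--Size of one partition via sys.dm_db_partition_stats, no compression-estimate detour.
SELECT
    partition_number,
    row_count,
    used_page_count     * 8 AS used_kb,
    reserved_page_count * 8 AS reserved_kb
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.my_staging')
  AND partition_number = 290
```

Run it before and after the column additions and diff the used_kb.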

Monday, April 30, 2012

[traces] Killing orphaned trace

Swore I had this already.



SELECT * FROM sys.traces

sp_trace_setstatus 2, 0 --stop trace 2
go
sp_trace_setstatus 2, 2 --close/remove trace 2
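
If you've got more than one orphan, a rough loop over sys.traces takes care of the lot (a sketch; the is_default filter keeps the default trace alive, which you almost certainly want):

```sql
--Stop and remove every user trace, leaving the default trace alone.
DECLARE @traceid int

DECLARE trace_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT id FROM sys.traces WHERE is_default = 0

OPEN trace_cur
FETCH NEXT FROM trace_cur INTO @traceid
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_trace_setstatus @traceid, 0  --stop
    EXEC sp_trace_setstatus @traceid, 2  --close/remove
    FETCH NEXT FROM trace_cur INTO @traceid
END
CLOSE trace_cur
DEALLOCATE trace_cur
```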

Thursday, March 8, 2012

[Powershell] Script out each job to a separate file

Code blatantly stolen from both John Sansom and Jan Goyvaerts (mostly John; read the credits), but I modified it a bit.
I needed to be able to script out each of my jobs to separate files.  And I needed to trap for characters that can't be used in a file name (why, oh why, did I ever put colons in my job names?).

(updated on Pi Day - forgot numbers, which caused some jobs to not script out)

# Date:     23/02/12 updated 2012/03/14
# Author:   John Sansom
# Description:  PS script to generate all SQL Server Agent jobs on the given instance.
#       The script accepts an input file of server names.
# Version:  1.1 mbourgon modified to save each job to a separate file; excludes non-filename characters
# 1.11 added numbers.
# Example Execution: .\Create_SQLAgentJobScripts.ps1 .\ServerNameList.txt

param([String]$ServerListPath)

#Load the input file into an Object array
$ServerNameList = get-content -path $ServerListPath

#Load the SQL Server SMO Assembly
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

#Create a new SqlConnection object
$objSQLConnection = New-Object System.Data.SqlClient.SqlConnection

#For each server in the array do the following..
foreach($ServerName in $ServerNameList)
{
    Try
    {
        $objSQLConnection.ConnectionString = "Server=$ServerName;Integrated Security=SSPI;"
        Write-Host "Trying to connect to SQL Server instance on $ServerName..." -NoNewline
        $objSQLConnection.Open() | Out-Null
        Write-Host "Success."
        $objSQLConnection.Close()
    }
    Catch
    {
        Write-Host -BackgroundColor Red -ForegroundColor White "Fail"
        $errText =  $Error[0].ToString()
            if ($errText.Contains("network-related"))
        {Write-Host "Connection Error. Check server name, port, firewall."}

        Write-Host $errText
        continue
    }

    #IF the output folder does not exist then create it
    $OutputFolder = ".\$ServerName"
    $DoesFolderExist = Test-Path $OutputFolder
    $null = if (!$DoesFolderExist){MKDIR "$OutputFolder"}

    #Create a new SMO instance for this $ServerName
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $ServerName

    #Script out each SQL Server Agent Job for the server

    $jobs = $srv.JobServer.Jobs
    #Using regex invocation courtesy of Jan Goyvaerts at regular-expressions.info
    $regex = [regex] '[^a-zA-Z0-9-\ ]'

    foreach ($job in $jobs)
    {
        #$OutputFolder already starts with .\ - don't prepend it again
        $jobname = "$OutputFolder\" + $regex.Replace($job.Name,'_') + ".job.sql"
        $job.Script() | Out-File $jobname
    }
}