Wednesday, October 23, 2013

#SQLSaturday255 - I'm presenting in Dallas on November 2nd!

Howdy, all.  I'm proud to announce that I'll be presenting on Event Notifications (aka "Master of All I Survey") at SQL Saturday 255 in Arlington (Texas) on November 2nd.  I'll be presenting at 2:45 in Room 109, but there are almost 50 different sessions, featuring such luminaries as Joe Celko and Grant Fritchey, just to name a couple.  So I'm definitely ramping up my presentation in response!

http://www.sqlsaturday.com/255/eventhome.aspx


What we'll be covering:
  • What is Service Broker and why is it useful?
  • What are Event Notifications and what good are they?
  • How do we use Event Notifications to track code changes (DDL)?
  • How do we centralize that for many machines?
  • Reporting?
  • EN Tricks: emailing errors, watching for Bad Things, and checking into TFS automagically.

I think I've got a good mix of presentation and demos, and my goal is to send you home with code that you want to use immediately - and CAN use immediately!

See you there!

Monday, October 7, 2013

[Event Notifications] Automatically check in code changes to TFS


I'll be presenting this at SQLSaturday 255 in Dallas, on November 2nd 2013.

Two notes to begin, because this is a lengthy post: there are two parts to this, and two scripts.
* The PowerShell script will script out objects and check them into TFS.  No prereqs other than TFS and PowerShell.  You could just run this on servers daily and it would give you a basic "here's what the code looks like".  Basic but functional.
* The SQL script will look for new changes using an existing Event Notifications setup, then call the PoSH script for those objects.  More work, but IMHO more reward: you get real-time monitoring, information such as the LoginName of the user who changed the object, and better comments.

So, I've been trying to figure out better ways to present code to our developers, as well as giving our group the ability to look back at code changes in production.  Our devs have source control for code that's deployed, but our Operations team has to look at objects that have changed over time, be it index changes, emergency tweaks made in production, permission changes, etc.

There is one 3rd party app I know of that does this, but we had issues getting it to work.
So I decided to write my own.

And for me it seemed natural to tie it in with Event Notifications.  My original idea was to just use EN to check the changes in, but that's not always feasible - if a table gets a new column, how do you represent that?  So I moved to a different model - EN detects the changes, and then a separate SP kicks off a PowerShell script that deals with TFS, by scripting out a new copy of the affected object(s) and checking it in.  And because we're using EN, we get information like the username of the person who changed it, what exactly they did, etc.


While this method can be used with any source control server, our company uses TFS to track changes.
However, I was trying to set it up on an existing server and didn't want a "heavy" client on that server.  Fortunately, I found a post on Stack Overflow (http://stackoverflow.com/questions/5503858/how-to-get-tf-exe-tfs-command-line-client, answer by ijprest) describing how to do this.  Alternatively, you can install Team Explorer (standalone, or as part of Visual Studio) or use the Java TFS client.  The way licensing works, as long as you have a CAL, they don't care which client you use.

The next step was the scripting-to-disk.  I settled on Phil Factor's code, which uses PowerShell & SMO, then modified it for my use, including the ability to feed it a server/database and have it script out the entire database and check it into TFS.  All the credit goes to him; all the blame goes to me.  Right now it scripts "under" the database level - you get all the changes within the database, but not changes ON the database itself, like changes to DB settings, files, and logins/permissions.  Version 1.1 will script at the server level and below.


So, at this point we have TFS and an SMO object scripter; next up is code that runs, looks at our Event Notification table, and calls the PowerShell script.  Our TFS does not allow every user to CHECKIN code, so rather than risk the check-in failing I add their name to the COMMENT instead.  We are calling the PowerShell script via XP_CMDSHELL; there's probably a better way, and I'll look at it for a future version once PASS 2013 is over.
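To make that concrete, here's a minimal sketch of the XP_CMDSHELL call - the parameter values are the same placeholders used later in this post, and the real procedure builds the command string dynamically from the EN table:

-- Minimal sketch: shell out to the PowerShell script for one changed object.
-- Requires xp_cmdshell to be enabled; all names here are placeholders.
DECLARE @cmd VARCHAR(4000);
SET @cmd = 'powershell.exe -ExecutionPolicy RemoteSigned '
         + '-File C:\TFS_CLI\sql_script_to_TFS.ps1 '
         + '-Server my_personal_server -Database somedatabase '
         + '-objecttoscript blah -Author mydomain\validTFSuser';

DECLARE @output TABLE (resultant VARCHAR(8000));
INSERT INTO @output (resultant)
EXEC master.dbo.xp_cmdshell @cmd;

-- any non-null rows are script output, useful for troubleshooting
SELECT resultant FROM @output WHERE resultant IS NOT NULL;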

Overall, it seems to work well.  I need to work on some sort of dynamic exclusion list and a couple other tweaks, but overall I'm very happy with it.  As are our auditors.

Now, the install, which looks intimidating.  But think of it the other way: create 3 directories, copy over a total of 11 files, run one script, create two tables, and add one job.  The hardest part, personally, was getting the TFS workspace working.



First, create 3 folders:
C:\TFS_CLI
C:\TFS_CLI\App
C:\TFS_CLI\TF_Workspace

Next, copy the following 10 files from a valid TFS install into the \App folder.
 (Alternatively, install Team Explorer, which includes TF.EXE; you’ll need to change the commands as I’ve hardcoded the location of TF.EXE)

Microsoft.TeamFoundation.Build.Controls.dll
Microsoft.TeamFoundation.Client.dll
Microsoft.TeamFoundation.Common.dll
Microsoft.TeamFoundation.Common.Library.dll
Microsoft.TeamFoundation.dll
Microsoft.TeamFoundation.VersionControl.Client.dll
Microsoft.TeamFoundation.VersionControl.Common.dll
Microsoft.TeamFoundation.VersionControl.Controls.dll
TF.exe
TF.exe.config

At this point, set up your TFS workspace for this account/server.  These commands assume your TFS project is named $/YourENproject (I created a separate project just for this, so you don't risk overwriting new code) and the server is named your-tf-server-name.  The workspace will be called TF_Workspace and will use C:\TFS_CLI\TF_Workspace.


--Unsure if this first command is needed or not (note the error it returned).  Run these from the command line in that folder.

c:\TFS_CLI\app\>tf.exe workspace /new tf_workspace /location:c:\TFS_CLI\TF_Workspace
Unable to determine the source control server.

c:\TFS_CLI\App>tf workspace /new /noprompt /s:http://your-tf-server-name:8080 TF_Workspace

c:\TFS_CLI\App>tf workfold $/YourENproject c:\TFS_CLI\EN_Workspace /workspace:TF_Workspace /s:http://your-tf-server-name:8080

c:\TFS_CLI\App>tf workfold /unmap $/

Now, try to check out an object - I put a test object in mine to make sure this worked.


c:\TFS_CLI\App> tf get $/Randolph/ftw-sv-db-03/assetexplorer/Tables/dbo.AaaAce.sql

Now, save the following script as sql_script_to_TFS.ps1 and place it in C:\TFS_CLI.
Since it's PowerShell, you'll either need to sign it or run with an execution policy of Unrestricted.
Personally, I'd sign it.

https://gist.github.com/mbourgon/6873071 (in case it isn't directly below)

Once you have it saved, try adding an object!
--SSMS, on my_personal_server
use  somedatabase
create table blah (id int identity)

Now from powershell:
C:\TFS_CLI\sql_script_to_TFS.ps1 -Server my_personal_server -Database somedatabase -objecttoscript blah -Author mydomain\validTFSuser

On mine, I get:
item exists; get, check out, script to override, check in
C:\TFS_CLI\YourENproject\my_personal_server\somedatabase\Table:
Checking in edit: dbo.blah.sql
(followed by a confirmation that it checked in)

All good?  Good.  Now for the final step - using EN to kick it off.

Create a couple of tables to hold the details.  Mine are in my EN database, since that's where the data is.


CREATE TABLE TFS_Checkin_Errors (id BIGINT IDENTITY PRIMARY KEY, time_job_run DATETIME, original_id INT, resultant VARCHAR(8000))
CREATE TABLE TFS_Last_Time_Run (last_time_run DATETIME PRIMARY KEY)


Save out this script, then run it on the server.  Run it once to make sure it's acting as expected, then set up a job to run it every 5 minutes.
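If you don't build many jobs by hand, the schedule piece looks something like this (a sketch using the msdb job procedures; the job, database, and procedure names are placeholders):

-- Sketch of the every-5-minutes job; swap in your own job/proc names.
USE msdb;
EXEC dbo.sp_add_job @job_name = N'EN - check code changes into TFS';
EXEC dbo.sp_add_jobstep @job_name = N'EN - check code changes into TFS',
    @step_name = N'Call the EN-to-TFS script',
    @subsystem = N'TSQL',
    @database_name = N'EventNotificationRec',         -- placeholder
    @command = N'EXEC dbo.your_EN_to_TFS_proc;';      -- placeholder
EXEC dbo.sp_add_jobschedule @job_name = N'EN - check code changes into TFS',
    @name = N'Every 5 minutes',
    @freq_type = 4,               -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,        -- subday units = minutes
    @freq_subday_interval = 5;
EXEC dbo.sp_add_jobserver @job_name = N'EN - check code changes into TFS';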

....and..... you're done!
Check the error tables for issues; right now it doesn't handle spaces in object names, but I'm fixing that this week.


Saturday, August 10, 2013

[Replication] a Replication Delay Monitor for 2012

This is an updated version of my replication delay monitor.  It looks at the most recent time a subscriber has updated and throws an email if it's over an hour old.  This should catch the oddballs that my other monitors don't.
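The core of such a check is shaped something like this (a sketch against the distributor, not the exact script; the mail profile and recipient are placeholders, and your threshold may differ):

-- Find distribution agents whose most recent history entry is over an hour old.
USE distribution;
IF EXISTS (SELECT da.id
           FROM dbo.MSdistribution_agents da
           JOIN dbo.MSdistribution_history dh ON dh.agent_id = da.id
           GROUP BY da.id
           HAVING MAX(dh.[time]) < DATEADD(HOUR, -1, GETDATE()))
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA_Mail',                -- placeholder
        @recipients   = 'dba@yourcompany.com',     -- placeholder
        @subject      = 'Replication delay: subscriber over an hour behind',
        @body         = 'Check distribution.dbo.MSdistribution_history for details.';
END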

Monday, July 29, 2013

[Event Notifications] Emailed report based off collected ERRORLOG messages

One of the cool side effects of Event Notifications is easy collection of the SQL Server ERRORLOG.  While it doesn't collect all messages, it collects enough for me to be aware of problems.  Alas, it does not collect enough for me to get an email when a server decides to shut down... the skipped messages are usually around startup/shutdown and error dumps.  See https://connect.microsoft.com/SQLServer/feedback/details/782901/event-notification-not-all-errorlog-notices-are-sent-to-service-or-queue#details for more info.


BUT... since it's EN, you get them in real-time, so you could theoretically do other things with them.  This report runs once a day, but I'm sure someone in the comments will come up with a new use I hadn't considered.  I use Alerts on the SQL Server, but I could think of other triggers (what if you suddenly started getting failed logins?)

Another advantage: this has more than the SQL ERRORLOG file does - it's closer to what the Event Logs have.  Which means you don't just get "query ran out of memory", you get the name of the login as well.  REALLY handy, that.

For this report, I've just built a simple 3-way email.  If there are severity errors, you get a block of those.  If there are non-severity errors, they go in a different block.  And finally, failed logins get their own.

What to do:
1) Set up EN to capture ERRORLOG.  See my "Master of All I Survey" post for more.  If you already have EN set up, you'll need to DROP/CREATE a new EN that includes it.  (No alters, sorry.)
2) Create the exclusion table.  This allows you to skip records you know about, without changing code.



CREATE TABLE [dbo].[EN_sql_errors_exclusion_email]
(
    [id] [int] NOT NULL IDENTITY(1, 1),
    [Exclusions] [varchar] (250) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[EN_sql_errors_exclusion_email] ADD CONSTRAINT [PK__EN_sql_errors__ID] PRIMARY KEY CLUSTERED ([id]) ON [PRIMARY]
GO


3) Create a job.  Here's my description:
This job uses EventNotificationRec..ENAudit_Events and looks for all SQL errors since 1 day ago, split up by those that have 'severity' in the text data, those that don't, and failed logins.  You can exclude data by adding rows to the table EN_sql_errors_exclusion_email (uses LIKE, so 'blah%blah' will work; the code already includes % at beginning & end).
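The heart of the job is the exclusion logic, shaped roughly like this (a sketch - the ENAudit_Events column names here are assumptions, so match them to your own setup):

-- Pull the last day's ERRORLOG events, skipping anything on the exclusion list.
SELECT ev.*
FROM EventNotificationRec..ENAudit_Events ev
WHERE ev.EventType = 'ERRORLOG'                    -- assumed column & value
  AND ev.PostTime > DATEADD(DAY, -1, GETDATE())    -- assumed column
  AND NOT EXISTS (SELECT 1
                  FROM dbo.EN_sql_errors_exclusion_email ex
                  WHERE ev.TextData LIKE '%' + ex.Exclusions + '%');  -- code adds the wildcards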

Enjoy.  Let me know if you have any questions or concerns.

Thursday, July 18, 2013

[SSIS] Finding your connection strings in SQL Server 2012

(update 2013/07/18: hey, grabbing the data source doesn't actually get the default value.  That's stored in the same tables, but I need to figure out how to pull those out (ObjectType, maybe).  Give me a few days.)

(update 2013/07/22: here's what I've cobbled together, though it needs to be integrated with the below

SELECT  projects.name, ConnStr.object_name, ConnStr.parameter_name, ConnStr.description,
        ConnStr.design_default_value, ConnStr.default_value,
        SrvName.parameter_name, SrvName.description, SrvName.design_default_value,
        SrvName.default_value,
        --the catalog name
        DBName.parameter_name, DBName.design_default_value, DBName.default_value
    FROM ssisdb.catalog.projects
    INNER JOIN     [SSISDB].[catalog].[object_parameters] ConnStr
    ON SSISDB.catalog.projects.project_id = ConnStr.project_id
--get the server name.  Left outer join as it doesn't appear in the table otherwise
    LEFT OUTER JOIN [SSISDB].[catalog].[object_parameters] SrvName
    ON connstr.[object_name] = SrvName.[object_name]
    AND connstr.parameter_name LIKE 'CM%ConnectionString'
    AND srvname.parameter_name LIKE 'CM%ServerName'
--get the database name as well
    LEFT OUTER JOIN [SSISDB].[catalog].[object_parameters] DBName
    ON connstr.[object_name] = DBName.[object_name]
    AND connstr.parameter_name LIKE 'CM%ConnectionString'
    AND DBName.parameter_name LIKE 'CM%InitialCatalog'
WHERE ConnStr.parameter_name LIKE 'CM%ConnectionString'
    AND LEFT(connstr.parameter_name, LEN(connstr.parameter_name) - CHARINDEX('.',REVERSE(connstr.parameter_name)))
     = LEFT(SrvName.parameter_name, LEN(SrvName.parameter_name) - CHARINDEX('.',REVERSE(SrvName.parameter_name)))
    AND LEFT(connstr.parameter_name, LEN(connstr.parameter_name) - CHARINDEX('.',REVERSE(connstr.parameter_name)))
     = LEFT(DBName.parameter_name, LEN(DBName.parameter_name) - CHARINDEX('.',REVERSE(DBName.parameter_name)))
ORDER BY projects.name, ConnStr.object_name, connstr.parameter_name
)


Part of the "fun" with SSIS has always been the myriad ways a parameter can be set: Variables, Configuration Files, Properties, Jobs... and probably 1 or 2 others, just to confuse things.

I've already posted (http://thebakingdba.blogspot.com/2013/05/ssis-finding-your-connection-strings-in.html) about the connection strings in 2008/R2.  This is the version for 2012... 

In theory, it should be simple - if you're using the new methodology, you're developing with SSISDB, and using it to hold your configuration values.   Thus, it has all the connection string details within.

However, it can be overridden at the job level... so you potentially need both.  What I do is grab the most recent run of the package, as well as what SSISDB says it has for the package, and present both.
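The "most recent run" half boils down to something like this sketch (not the exact script; connection manager values surface in execution_parameter_values with CM-prefixed names):

-- Most recent execution per package, with the parameter values used for that run.
WITH last_run AS (SELECT package_name, MAX(execution_id) AS execution_id
                  FROM SSISDB.catalog.executions
                  GROUP BY package_name)
SELECT e.package_name, e.project_name, e.start_time,
       epv.parameter_name, epv.parameter_value
FROM last_run lr
JOIN SSISDB.catalog.executions e ON e.execution_id = lr.execution_id
JOIN SSISDB.catalog.execution_parameter_values epv ON epv.execution_id = lr.execution_id
WHERE epv.parameter_name LIKE 'CM%'    -- connection manager properties
ORDER BY e.package_name, epv.parameter_name;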

Let me know if there are any questions.    I see on mine, for instance, dupes where the connection string is the same, but the parameter name is different.  Not sure why yet.

Let me know if you see any problems or places to improve, or heck, maybe I'm just doing it wrong.


Thursday, July 11, 2013

[Replication] Fun little gotcha with tables that have a UNIQUE index/constraint.

So I've been bitching to one of our devs about how his updates to a replicated table suck.  Today I went looking.  What the.... it issues a delete, then inserts the row?  Why are you doing it that way?  Well, he's not.  SQL is.  : - \

Turns out, if you have a unique index or constraint (as opposed to the default PK index, I suppose...), then replication delivers an update as a delete of the row followed by an insert.  Which sucks when you have a massive, wide table with a ton of records.
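You can watch it happen: update a row in such a table, then peek at the pending commands on the distributor (a sketch; sp_browsereplcmds accepts filtering parameters, but bare works for a small test):

-- On the distributor: show the replication commands queued for delivery.
EXEC distribution.dbo.sp_browsereplcmds;
-- Instead of a single sp_MSupd_... call, the updated row shows up as an
-- sp_MSdel_... / sp_MSins_... pair - the "bounded update" described in the link below.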

Live and learn.  Hopefully this helps somebody else.

https://blogs.msdn.com/b/repltalk/archive/2011/10/04/sql-server-transaction-replication-bounded-update-or-why-my-update-was-transformed-into-an-delete-insert-pair.aspx?Redirected=true

Thursday, June 27, 2013

[SSRS] Get the SQL Query from your RDL via T-SQL

Coworker asked for this.

Basically, we wanted to get the queries out of the RDLs on our Reporting Services server.  I originally saw a post that discussed using a tool to pull all the RDLs off the machine, then parsing them.  Hey, waitasecond....

I immediately found the blog post that showed how to use the double-convert to get the XML back.  Once I had that, I started querying the XML to pull out what I wanted.  Now why am I getting NULL.....

So, after spending an hour banging my head against the XML and getting back NULL, I went to StackOverflow, whereupon a nice person named Mikael gave me the syntax I was looking for (thanks, Mikael!).  Turns out that you have to ignore the namespace, which means your syntax can get... weird (at least compared to what I normally use).
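Here's the shape of it (a sketch, assuming the default ReportServer database name; the embedded script below has the full version):

-- Double-convert the Content column back to XML, then grab every CommandText
-- node, ignoring the RDL namespace via local-name().
WITH rdl AS (SELECT c.Name,
                    CONVERT(XML, CONVERT(VARBINARY(MAX), c.Content)) AS report_xml
             FROM ReportServer.dbo.[Catalog] c
             WHERE c.[Type] = 2)   -- 2 = report
SELECT rdl.Name,
       q.node.value('.', 'NVARCHAR(MAX)') AS command_text
FROM rdl
CROSS APPLY rdl.report_xml.nodes('//*[local-name()="CommandText"]') AS q(node);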

Overall, though, I'm pleased with how it turned out.  Please let me know if you have any questions or comments, or if there's more stuff I should show.  DataSource coming in a couple weeks; I have the code, just need to integrate it.

(Click on the "View Raw" to get the SQL in a new window.  Thanks!)


Thursday, May 30, 2013

[SSIS] Finding your connection strings in SQL Server 2008/R2

"Bad artists imitate - great artists steal" Picasso Banksy

I initially wasn't sure if this was worth posting - if you have the same problem, you've probably come across the same links.  But since I don't see a soup-to-nuts version, here goes.

Problem: you're migrating servers, and need to see what SSIS packages might be affected, so you need the connection strings for all of your SSIS packages.  You could open up each package and check the connections - but that's... inefficient.

If you're storing the packages on disk, you could write a short (power)shell script that would scan the SSIS package files.  But if it's in MSDB, you'd need to pull the package XML out of the system tables, then parse the XML.  This is for 2008/R2 (and if you're not using SSISDB in 2012).  In 2012, SSISDB holds the files differently, and I'm awaiting an answer from one of the MS people on the SSIS team (thanks to Jamie Thomson, sqlblog.com/blogs/jamie_thomson/, for forwarding my tweet to him).
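The bones of the MSDB approach look like this - a sketch of the double-convert plus the DTS-namespace parse for the 2008/R2 package format (see the credited links below for the fuller versions):

-- Pull each package's XML out of msdb and read every connection manager's
-- name and connection string (properties are DTS:Property elements in this format).
WITH XMLNAMESPACES ('www.microsoft.com/SqlServer/Dts' AS DTS)
SELECT p.name AS package_name,
       cm.node.value('(DTS:Property[@DTS:Name="ObjectName"])[1]', 'NVARCHAR(260)') AS connection_name,
       cm.node.value('(DTS:ObjectData/DTS:ConnectionManager/DTS:Property[@DTS:Name="ConnectionString"])[1]',
                     'NVARCHAR(4000)') AS connection_string
FROM (SELECT name,
             CONVERT(XML, CONVERT(VARBINARY(MAX), packagedata)) AS package_xml
      FROM msdb.dbo.sysssispackages) AS p
CROSS APPLY p.package_xml.nodes('/DTS:Executable/DTS:ConnectionManager') AS cm(node);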

Code cribbed from somebody, possibly Feodor Georgiev (I found two links, below):
http://sqlconcept.com/2011/12/06/extracting-ssis-package-definition-from-msdb/
https://www.simple-talk.com/content/print.aspx?article=1408

and the XML parse was a question I asked on StackOverflow.  Thanks to Davmos for the help!
http://stackoverflow.com/questions/16824772/sql-server-query-xml-node-dtsconnectionmanager-dtsname-in-t-sql/16825308?noredirect=1#comment24269324_16825308

Wednesday, May 22, 2013

[Powershell] Grabbing and parsing the RSS feed for new SQL Server Releases

(Updated 2019/04/18 - MS changed the site, moved to TLS 1.2)

My boss asked me to scrape a web site for release/security info for SQL Server.  Okay.

The first attempt required a stop at Stack Overflow, where I was reminded about hashtables (thanks again to Grainer!).  And after several hours, I had found the correct Microsoft blog for security updates and had this:



cls
$posts = @() # reset $posts across multiple runs - it broke my ISE but good before

[xml]$rss = Invoke-WebRequest http://technet.microsoft.com/en-us/security/rss/comprehensive

$rss.SelectNodes('//item') | % {
    $posts += New-Object psobject -Property @{
        Title = $_.Title.Trim()
        Thedate = [datetime]$_.date.Trim()
        Guid = $_.Guid.Trim()
        Content = $_.Encoded."#cdata-section".Trim()
    }
}

$posts | select-object # | where-object {$_.Title -like '*sql*' -or $_.Guid -like '*sql*'}
#and then insert into a table


Awesome!  I feel good - mission accomplished.  Worked on it at home (yes, nerd here, big shock).
Then the boss looks and says "no, I didn't want that, I wanted the patches and CUs".

Crap.  So let's go find a different blog.  Oh yeah, I have one specifically for SP/CUs!  (In Google Reader, naturally... RIP.  OT: go to NewsBlur and CommaFeed!)

Fortunately, this blog is much easier to parse, so I don't need the hashtable.  Well, not a massive hashtable - just two calculated properties in the select below: the first reformats the date as a datetime, and the second renames the "Description" field so that I don't have a field that's a semi-reserved word.  Use Chad Miller's awesome scripts to turn it into a datatable and write it to a SQL table, and voila!  (Alternatively, you should be able to use my Extensible Powershell Repository, elsewhere on this blog, and just have a 2-line PS script that is invoked via the EPR.)

Now, I just have to use that with a MERGE statement to add only the new records, and email me when a new record comes in. 
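The MERGE piece is straightforward - something like this sketch, assuming the RSS rows land in a staging table first (the staging table name is made up):

-- Insert only the rows we haven't seen before; the OUTPUT rows could drive the email.
MERGE dbo.SQL_Server_Releases_from_RSS AS tgt
USING dbo.SQL_Server_Releases_staging AS src    -- hypothetical staging table
   ON tgt.link = src.link
WHEN NOT MATCHED BY TARGET THEN
    INSERT (title, link, pubdate, descript)
    VALUES (src.title, src.link, src.pubdate, src.descript)
OUTPUT $action, inserted.title;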



# CREATE TABLE SQL_Server_Releases_from_RSS
#(
# title varchar(200),
# link varchar(500),
# pubdate smalldatetime,
# descript varchar(MAX) --I really don't want a field named "description"
#)
# Scripts from http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/01/use-powershell-to-collect-server-data-and-write-to-sql.aspx
. C:\sql_tools\out-datatable.ps1  #Chad Miller rocks!
. C:\sql_tools\write-datatable.ps1
. C:\sql_tools\add-sqltable.ps1


#the next line, fixing the TLS ("could not create SSL/TLS secure channel") fixed by Chandan Rai on https://stackoverflow.com/questions/41618766/powershell-invoke-webrequest-fails-with-ssl-tls-secure-channel
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
#web page changed by MS, yet again, around 2019/04/15. Also moved to TLS 1.2.

[xml]$rss = Invoke-WebRequest https://techcommunity.microsoft.com/gxcuf89792/rss/board?board.id=SQLServer -UseBasicParsing  #was http://blogs.msdn.com/sqlreleaseservices/rss.xml, changed by mdb on 2018/03/19

$datatable = $rss.rss.channel.item | select title, link, @{Name="pubdate";Expression={get-date ($_.date) -format "yyyy/MM/dd hh:mm"}}, @{Name="descript";Expression={$_."Description"}} |out-datatable

write-datatable -ServerInstance "sql_repository" -Database myrepository -TableName SQL_Server_Releases_from_RSS -data $datatable