Tuesday, December 17, 2013

#SQLSat271 - Jan 25, 2014, Albuquerque - Woot!

I'm proud to announce that I will again be presenting at a SQL Saturday, this time at SQL Saturday 271 in Albuquerque, New Mexico, on January 25th 2014. 

I'll again be presenting on using Event Notifications to track code (aka DDL) changes.  I'll be showing a soup-to-nuts solution you can use, including Errorlog tracking, automated emails, SSRS reports, multi-server setups, SSMS plugins, and (what I've been working on lately) a fully automated TFS checkin process that checks in any change within 5 minutes of it occurring.  You can baseline your servers and see what changes get made over time.  Your devs will love and hate you. 

I'm psyched, honestly - two coworkers expressed interest in attending another SQL Saturday, and that gave me the push to submit.  It looks like there will be a couple of great pre-cons, and I'm looking forward to catching up with some acquaintances and making new friends.  Hope to see y'all there!

Thursday, November 14, 2013

#SQLSat255 - recap, thoughts from a first-time SQLSat presenter

>verbosity on 
VERBOSITY IS SET TO HIGH

First, this is a long blog post.  My wife said I'd given up any chance of being "The Terse DBA". Ha. :-\

TL;DR: SQLSaturdays are awesome.  Even if you're not an Old Grizzled Veteran (or maybe especially if you're not one), you should find a topic and look at speaking - the experience was wonderful.

And here's my slide deck + exact scripts used during my presentation:  https://drive.google.com/file/d/0B3a287PS_UJIRDJBNWEtNVBJY0E/edit?usp=sharing

So, here's my full story of SQL Saturday 255 (Dallas TX 2013).  Yes, it's long.  There's a line of dashes below when I get to SQL Sat 255 itself.

First, let's roll back a couple years to April 2011.  I'd been to PASS once, and then heard about this SQL Saturday thing... some sort of PASS event (denoting awesome quality), and the opportunity to get a full day of training was too good to pass up.  And, hey, it's like $10, right?  Slam-dunk.  It was fantastic, and I saw several really interesting sessions, learned how to muck with Kerberos, verified my two-tier replication was As Good As You Were Going To Get, and spoke with a couple people from Microsoft at length.  I loved it.

Then 2012 rolled around and with it, SQL Rally.  Again a blast (Fritchey's talk on parameter sniffing was a thing of beauty), but I saw a couple presentations and thought "well, I could do that - I know it at least as well as they do".  So, with a bunch of hubris and zero humility, I decided it was time to present.

Naturally, the first stumbling block came immediately - what do I present on?  The easy ones were already "taken", in my mind, by people who already had presentations built and had presented multiple times.  Yes, I know, everybody brings something different to the party, but theirs is already there - and says pretty much everything I could say.

At the same time, I was trying to track down some code changes.  It was aggravating.  Standards for deploys were coming, but I was trying to find a happy medium - something I could use immediately to more easily find changes when they occurred (lots of DBAs where I work).  I'd looked at setting up Traces (a pain to collect), DDL Triggers (broke replication in fun new ways, and prevented me from applying an SP to our cluster), and a third-party tool (slow to scan, and it wouldn't reliably work with our TFS server - its main interface).  So I'd gone towards Extended Events, which was the Next Big Thing, learning a lot (but not finding a solution to my problem).  Hip deep in it, asking questions, and Jonathan Kehayias (who I'd seen present on XE at PASS) chimed in and offhandedly mentioned that XE could be used for it - but why wasn't I using Event Notifications...

Several hours digging in BOL, and I was intrigued.  A couple days/weeks later I had code running, but with some significant holes in my knowledge.  More coding, more reading, more time spent, more homework, and more Stack Overflow.  At some point, I realized that this was my presentation - I knew the subject really well, as I'd been bitten by all the major bugs.  Almost nobody had presented on it (only 2 sessions at PASS in 7 years), and even though I'd been at BOTH summits it was presented at, and was the target audience, the title/abstract must not have made an impression on me (sorry, guys).  And finally - and this really was the key - I was (heck, still am) jazzed about this.  I use it pretty much daily, and it's an awesome tool.

More work, more slide decks, more demos... and finally, when I thought I was ready, I spoke to the local user group - and they had a spot I could present at.  Easy as pie, right?

Yeah, not so much.  I had a work laptop with my demos on it.  Which was running Windows Server, and didn't have the drivers to actually WORK with the projector.  Fortunately, I knew the material really well, the audience was both responsive and forgiving (and suggested narrating my code instead of skipping an un-doable demo), and it all worked out in the end.  A bit frustrating, but it worked!  Ask about my newly-discovered interpretive dance-around-the-code skills!

So, one down, what's next?  Well, a couple months later (and yet more servers deployed), I asked the DBA Virtual Chapter of PASS if they had an opening.  This time I was on a laptop I knew well, had my VMs all set up (3 of em!), and was presenting virtually.  This time a larger audience - 90 people watching was a great feeling, and (aside from not knowing if the laptop's cam was broadcasting) I thought it went really well, got some twitter followers, and some questions that showed they were interested.  

So the next step was PASS Summit, right?  I took my overly-grandiose title and abstract (which I'd written so that my past self would've been interested), and then changed them a bunch on advice from others, trying to make it more accessible and get my golden PASS ticket.  Weeks passed and... nope.  Insert sad trombone here.  Honestly, the most frustrating part was the silence.  No formal rejection letter, just seeing the accepted sessions and my name wasn't on there.  No feedback on why I'd been rejected - was it a topic nobody cared about?  Was there some other session on it by a Big Name?  Was it a crappy title?

-------------------------------------

So, fast forward a couple months.  On a lark, I was showing one of my new coworkers this SQL Saturday thing, there was one in 2011 and 2012, maybe there'd be one in Austin or someth.... wait, what?  This fall? Here? Schwing!

So I dusted off my original abstract, kept the overly-grandiose title ("Master of All I Survey" - I still like it), and tweaked my abstract a little (but only a little), hit submit, and waited.

A few weeks later, my coworkers heard a "whoop" from my desk and wondered what the hell I'd done now.  I'd been accepted!  Which meant... more work.  Working on the slide deck, working through my presentation, making sure the damn laptop would connect to a projector... then presenting to coworkers a couple weeks in advance.  It's never a good sign, by the way, when 3 of the 5 people in a small room fall asleep (it was warm! Seriously!).  Not my shining-est moment.  However, it did make me re-evaluate my presentation, so I changed the whole thing around - added slides, added demos, moved parts around, and overall refactored (heh, code term) a whole bunch of things.  Somewhere in there I also went to PASS, which worked on many levels - I'd been taking notes not just on the subjects, but on the presentation process itself.  Also, I saw a great presentation by Mark Vaillancourt and picked up a bunch of tips.

Fast forward a couple weeks and a whole lot of practicing.

I'd been getting emails from Ryan, Sri, and Ganesh about the details of the SQL Saturday, and was pleasantly surprised to get an email about a Speakers Dinner - how cool is that?!  Honestly, I hadn't considered that there'd be one.  And it was awesome - we met at one of the local Mercado Juarez restaurants, went to one of the rooms, and lo and behold!  A bunch of people there that I'd either seen present (Grant, Dave, Mike), read articles by (Kathi), or was meeting for the first time after seeing their abstracts on the schedule.  One thing that did stand out - I asked about my PASS submission, and one of the long-time speakers said (paraphrased), "no, you haven't presented enough yet.  For Summit, the quality has to be through the roof, and a first/second-time presenter isn't at that level when attendees are paying that kind of money".  Put in that context, yeah, she was definitely right.  Overall, though, the night was fantastic - a bunch of people who were all there for a common cause, having a lot of fun hanging out and telling stories.  The term #SQLFamily fit perfectly here, even more so than at PASS Summit, which I didn't think was possible.  And to finish the awesome evening, our hosts handed out shirts along with our speaker badges, gave us a sincere thank you, and made us feel utterly and completely like rock stars.  I was humbled - but had a gigantic grin on my face the rest of the night.

Fast forward 12 hours, getting up far too early so that I'd be there at 7:30 and could get into the room early (and make sure my laptop worked - I'd had some oddness presenting at work in full-screen mode).  Ran into Glenda, a new friend from the previous night's dinner, and walked into the UTA hall.  First order of business?  Checking in, handled flawlessly by the tireless volunteers.  Next?  Double-checking the laptop.  I had a couple different possible concerns, but had 3 different contingency plans, none of which wound up being necessary.  That's why you have them, though, right?  Next up - grabbed breakfast and listened to the welcome, then found my list of presentations-I-really-want-to-see.  I didn't have to speak until 2:45, which was a mixed blessing.  On the plus side, it gave me all day to prep/practice should I need it.  On the minus side, I was up against Joe Celko.  Yup, Celko's from Austin and he was presenting.  No pressure, right?

So on to the presentations!

First up, Ami Levin - physical join operators.  Enjoyed this thoroughly - Ami used two gigantic decks of cards as metaphors, and Grant Fritchey was kind enough to volunteer to be the magician's assistant.  Mostly stuff I knew (always nice), but a couple tricks/pitfalls I hadn't considered (even nicer!).

Next?  Khaladkar talking about SSAS, Tabular, and DirectQuery.  We're looking at implementing Tabular SSAS, but were concerned since our compressed data set is over a Terabyte - which means we'll probably have to use Direct Query.  The details were there, but I flinched when he said he was going to VPN in to show some demos - I'd seen enough issues in the past with dodgy wifi that I was pleasantly surprised when it (mostly) worked.  Some interesting details in his presentation, with a whole lot of lessons learned, and I'd love to see it again once he's had a chance to further polish it.

For the 11am session, there were a couple I wanted to see, and wound up going to see Steven Ormrod (who just posted his blog entry on SQLSat - http://stevenormrod.com/2013/11/sql-saturday-255-dallas-revisited/) talk about memory.  Honestly, I know more-or-less how it works, but the details have always hurt my brain.  Fortunately, he did a great job putting it all together and I got some good takeaways.

Next up - lunch!  Lots of choices, and a cookie.  Mmmm... cookie.  And a quick marketing spiel by Microsoft that actually showed off a couple cool Cloud-type things that even I (as a HIPAA-bound person) could use.  Then it was time to take advantage of the awesome sponsors, as well as check my VMs prior to my presentation.  Which, naturally, weren't talking to each other (and they were about 25% of my presentation).  Now I see the advantages of the Speaker Room - casual conversation with a bunch of people while working on getting my VMs up.  (30 minutes later, I finally ran a Profiler trace and it became obvious what was going on - the VMs had shifted their times outside the Kerberos-allowed skew.)

A couple cool conversations occurred - several stories from Joe Celko, who had come in and was definitely The Best Dressed DBA there - three-piece suit, Superman cufflinks, and all.  The man has Stories, people, mad Stories!  I learned interesting Joe Celko facts - for instance, he's known in India as SQL Satan, and in South America as Ming the Merciless.  Makes me wish I'd gone with a Flash shirt for my presentation.  One of the presenters' daughters was in the speaker room and chatted with Joe, which resulted in some other cool stories coming out.

Another interesting conversation was about SQL Saturday itself, and whether it was sustainable or would succumb to fatigue by the various parties (vendors, volunteers, speakers).  Tim Mitchell opined that the first time he went, he didn't think it'd be sustainable - then mentioned that this was at SQL Saturday #3.  As a first-time speaker, personally, I think it's going to continue to do well.  While I did recognize a couple of the vendors from PASS and SQL Saturday #237 a couple weeks before, all the rest were new.  And equally importantly, they're hitting a different group of people.  Aside from a handful of presenters who'd been to several SQL Saturdays, most of the people there hadn't been to other SQL Saturdays or even PASS, so it was a whole new tank of fish for the vendors to try and "catch".  And I bet that's the way with all the SQL Saturdays - brand new groups of locals.  It also was a good opportunity, IMHO, for a lot of the local firms to get their name out in front of DBAs.  And as a first-time speaker to SQL Saturday, I think there are two keys.  The first is making sure speakers are made welcome - and they did a marvelous job of that.

The second key will be encouraging and cultivating new speakers.

I've been doing SQL Server since 7.0 came out, and while I can be an extrovert, it's not my natural state.  I hadn't presented before because I figured it was all Big Names, albeit names I didn't know, that were presenting, and that their experience and knowledge far outstripped mine.  Obviously it's not - it's just a bunch of members of the SQLFam who have done the hard work and spent the effort.  I think it's just a matter of getting more people exposed to the idea of speaking earlier on - while I still don't know everything, SQL Server is big enough that you're never going to know it all, so find something you know well, put in the effort, help out, and present... or at least volunteer to help.  Thomas Larock said at PASS that he viewed everyone there as a future volunteer, and he's right.

Just my 2 cents.

What's left?  My presentation!  It went well - I had probably about 20 people in there, and there was definitely some interest (which means I'd better finish working on my new set of scripts!).  And as it turned out, the room I was in had all its sessions recorded by Usergroup.Tv, another community outreach by some awesome people, so my presentation should be online in a couple weeks.  Also, the previous month I'd been looking for a "wow" to bring to my presentation, and so put some extra effort in - which ended up with a SQL job and Powershell script that checks changed items into TFS automatically!  It works well (my boss loves the thing) and got a lot of interest from the audience.  I've just been querying the table directly or using an SSRS report, but I think I'm going to spend more time on the TFS part, especially after seeing how many questions were about it.  ("you mean we don't have to give our devs VIEW DEFINITION?!" Ayup.)  I was also really surprised to get my feedback papers immediately - I'd figured that they'd use it for future SQLSaturdays, and even figured they were using this as a "Farm League" for PASS Summit.  I was thrilled with the results - probably 80% 5s, 15% 4s, a single 3, and one guy who said he wanted to see more presentations by me and loved this one - but gave me all 1s.

After my presentation?  Floating on a cloud, that was me... so I was late to the final presentation, a one-hour distillation by Greg Galloway of a 350-page whitepaper on tuning Tabular SSAS.  Thin attendance, probably because everyone's brain was full, but I got a ton out of it - and it was a precursor to a full-day seminar in Las Colinas, which we made sure to send somebody to.

Finally, the end-of-day kum-ba-ya (standing room only!), and all the awesome giveaways.  A coworker had a name very similar to one of the winners', so there was some confusion - after that, everyone made sure to check name tags.  : )

Then, the after-party, which was a blast.  It was at Dave & Buster's, and we had a "room" with a couple pool tables, grub (pub food, but pretty good), and a whole lot of people to talk to.  So I spent the night talking to various people (both SQL and otherwise), hearing more amazing Joe Celko stories, networking, making friends, and even beating a coworker at a game of pool.

Then Monday rolled around, I got the skinny on the pre-cons, found out what all my coworkers saw and learned (almost everyone from my group went to one of the pre-cons), and then got the biggest surprise of all - my coworkers were so stoked from this one, they were ready to go see another!  So, I've submitted to SQL Saturday Albuquerque 2014 - we'll see what happens...

Wednesday, October 23, 2013

#SQLSaturday255 - I'm presenting in Dallas on November 2nd!

Howdy, all.  I'm proud to announce that I'll be presenting on Event Notifications (aka "Master of All I Survey") at SQL Saturday 255 in Arlington (Texas) on November 2nd.  I personally will be presenting at 2:45, in Room 109, but there are almost 50 different sessions, including such luminaries as Joe Celko and Grant Fritchey, just to name a couple.  So I'm definitely ramping up my presentation in response!

http://www.sqlsaturday.com/255/eventhome.aspx


What we'll be covering:
  • What is Service Broker and why is it useful?
  • What are Event Notifications and what good are they?
  • How do we use Event Notifications to track code changes (DDL)?
  • How do we centralize that for many machines?
  • Reporting?
  • EN Tricks: emailing errors, watching for Bad Things, and checking into TFS automagically.

I think I've got a good mix of Presentation and Demos, and my goal is to send you home with code that you want to use immediately, and CAN use immediately!

See you there!

Monday, October 7, 2013

[Event Notifications] Automatically check in code changes to TFS


I'll be presenting this at SQLSaturday 255 in Dallas, on November 2nd 2013.

Two notes to begin, because this is a lengthy post: there are two parts to this, and two scripts.
* The Powershell script will script out objects and check them into TFS.  No prereqs other than TFS and Powershell.  You could just run this on servers daily and it would give you a basic "here's what the code is like" - basic but functional (see the example invocation right after this list).
* The SQL script will look for new changes using an existing Event Notifications setup, then call the PoSH script for those objects.  More work, but IMHO more reward; you get real-time monitoring, information such as the LoginName of the user who changed it, and better comments.
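For that basic daily-baseline mode, the call looks something like this (parameter names come from the script further down; the server/database names are placeholders):

powershell -file C:\TFS_CLI\sql_script_to_TFS.ps1 -Server yourservername -Database yourdatabasename -ScriptType Full -Author yourTFSname -Comment "daily baseline"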

So, I've been trying to figure out better ways to present code to our developers, as well as giving our group the ability to look back at code changes in production.  Our devs have Source Control for code that's deployed, but our Operations team has to look at objects that have been changed over time, be it via indexes, tweaks made in production for emergencies, permission changes, etc. 

There is one 3rd party app I know of that does this, but we had issues getting it to work.
So I decided to write my own.

And for me it seemed natural to tie it in with Event Notifications.  My original idea was to just use EN to check the changes in, but that's not always feasible - if a table gets a new column, how do you represent that?  So I moved to a different model - EN detects the changes, and then a separate SP kicks off a Powershell script that deals with TFS, scripting out a new copy of the affected object(s) and checking it in.  And because we're using EN, we get information like the username of the person who changed it, what exactly they did, etc.
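If you haven't set up Event Notifications before, the server-side plumbing looks roughly like this - a minimal sketch with illustrative names, not my production setup (my real setup is multi-server, with a reader that shreds the queued XML into the ENAudit_View table you'll see referenced in the SP below):

-- a queue to hold the notifications, and a service bound to the EN contract
CREATE QUEUE ENAudit_Queue;
CREATE SERVICE ENAudit_Service ON QUEUE ENAudit_Queue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
-- fire a notification into that service for every DDL event on the server
CREATE EVENT NOTIFICATION ENAudit_AllDDL
    ON SERVER
    FOR DDL_EVENTS
    TO SERVICE 'ENAudit_Service', 'current database';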


While this method can be used with any source control server, our company uses TFS to track changes.
However, I was trying to set it up on an existing server and didn't want a "heavy" client on that server.  Fortunately, I found a Stack Overflow post (http://stackoverflow.com/questions/5503858/how-to-get-tf-exe-tfs-command-line-client, answer by ijprest) describing how to do this.  Alternatively, you can install Team Explorer (standalone, or as part of Visual Studio), or use the Java TFS client.  The way licensing works, as long as you have a CAL, they don't care which client you use.

The next step was the scripting-to-disk.  I settled on Phil Factor's code, which uses Powershell & SMO, then modified it for my use, including the ability to feed it a server/database and have it script out the entire database and check it into TFS.  All the credit goes to him, all the blame goes to me.  Right now it scripts "under" the database level - you get all the changes within the database, but not changes ON the database itself, like DB settings, files, and logins/permissions.  Version 1.1 will script at the server level and below.
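If you want to see the SMO mechanics in isolation before wading into the full script, a stripped-down sketch looks something like this (paths and object names are placeholders, not part of the real script):

# minimal SMO scripting sketch - load SMO, connect, script one table to disk
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server 'my_personal_server'
$scripter = New-Object Microsoft.SqlServer.Management.Smo.Scripter $srv
$scripter.Options.ToFileOnly = $true
$scripter.Options.FileName = 'C:\temp\dbo.blah.sql'
$scripter.Script($srv.Databases['somedatabase'].Tables['blah', 'dbo'])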


So at this point we have TFS and an SMO object scripter; next up is code that runs, looks at our Event Notification table, and calls the Powershell script.  Our TFS does not allow every user to CHECKIN code, so rather than risk it failing I add their name to the COMMENT instead.  We are calling the powershell script via XP_CMDSHELL; there's probably a better way and I'll look at it for a future version, once PASS 2013 is over.
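One prereq worth calling out: xp_cmdshell is disabled by default, so if it's off on your box you'll need something like this first (the usual security caveats apply):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;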

Overall, it seems to work well.  I need to work on some sort of dynamic exclusion list and a couple other tweaks, but overall I'm very happy with it.  As are our auditors.

Now, the install.  It looks intimidating, but break it down: create 3 directories, copy over a total of 11 files, run one script, create two tables, and add one job.  The hardest part, personally, was getting the TFS workspace working.



First, create 3 folders:
C:\TFS_CLI
C:\TFS_CLI\App
C:\TFS_CLI\TF_Workspace

Next, copy the following 10 files from a valid TFS install into the \App folder.
 (Alternatively, install Team Explorer, which includes TF.EXE; you'll need to change the commands, as I've hardcoded the location of TF.EXE.  Note that later revisions of the script below reference C:\TFS_CLI\App_2013 - adjust either the folder name here or the paths in the script so they match.)

Microsoft.TeamFoundation.Build.Controls.dll
Microsoft.TeamFoundation.Client.dll
Microsoft.TeamFoundation.Common.dll
Microsoft.TeamFoundation.Common.Library.dll
Microsoft.TeamFoundation.dll
Microsoft.TeamFoundation.VersionControl.Client.dll
Microsoft.TeamFoundation.VersionControl.Common.dll
Microsoft.TeamFoundation.VersionControl.Controls.dll
TF.exe
TF.exe.config
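
Before going further, it's worth a quick sanity check that the client actually runs from its new home - run it with no arguments from that folder; you should get the TF usage text rather than a missing-DLL error:

c:\TFS_CLI\App>tf.exe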

At this point, set up your TFS workspace for this account/server.  It assumes your TFS project is named $/YourENproject (I created a separate project just for this, so you don't risk overwriting new code) and the server is named your-tf-server-name.  The workspace will be called TF_Workspace, and the project folder gets mapped to c:\TFS_CLI\EN_Workspace (via the workfold command below).


Run these from a command prompt in C:\TFS_CLI\App.  The first command below failed for me ("Unable to determine the source control server"), which is why the second one passes the server explicitly with /s: - I've left the failure in for reference.

c:\TFS_CLI\App>tf.exe workspace /new tf_workspace /location:c:\TFS_CLI\TF_Workspace
Unable to determine the source control server.

c:\TFS_CLI\App>tf workspace /new /noprompt /s:http://your-tf-server-name:8080 TF_Workspace

c:\TFS_CLI\App>tf workfold $/YourENproject c:\TFS_CLI\EN_Workspace /workspace:TF_Workspace /s:http://your-tf-server-name:8080

Then unmap the root, so only the project-folder mapping remains:

c:\TFS_CLI\App>tf workfold /unmap $/

 Now, try and check out an object - I put a test object in mine to make sure this worked.


c:\TFS_CLI\App> tf get $/Randolph/ftw-sv-db-03/assetexplorer/Tables/dbo.AaaAce.sql

Now, save the following script as sql_script_to_TFS.ps1 and place it in C:\TFS_CLI.
Since it's Powershell, you'll either need to sign it or set the execution policy to something that allows unsigned local scripts.
Personally, I'd sign it.
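If you go the signing route, it looks something like this - a minimal sketch, assuming you already have a code-signing certificate in your CurrentUser store:

# assumes a code-signing cert is already in the CurrentUser store
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath C:\TFS_CLI\sql_script_to_TFS.ps1 -Certificate $cert
# or, if you accept the risk, loosen the policy instead:
# Set-ExecutionPolicy RemoteSigned -Scope LocalMachine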

https://gist.github.com/mbourgon/6873071 (in case it isn't directly below).


<#
.SYNOPSIS
Will script an object from SQL Server and CHECKIN/ADD to TFS.
.EXAMPLE
sql_script_to_TFS.ps1 -server yourservername -Database yourdatabasename -ScriptType "FULL" -Author yourTFSname -Comment "full checkin of database"
#>
# Purpose - given parameters, script out an object from a SQL Server, using SMO, then check it into TFS.
#Scripting code gleefully copied from Phil Factor - the great parts that work are his, the errors are mine
# https://www.simple-talk.com/sql/database-administration/automated-script-generation-with-powershell-and-smo/
#Other code from others copied and attributed below.
#mdb 2013/12/03 adding ScriptType so that we can add "create database"-specific changes ONLY.
# To do FULLs you will have to have -ScriptType = 'Full' parameter. By default it will do objects.
#mdb 2013/12/05 trying to better bullet-proof the app against TF failures, which appear to occur randomly
#mdb 2013/12/07 works great, though occasionally it cant script out an item. Trying to bulletproof that
#mdb 2013/12/16 broke the individual-item portion, fixed.
#mdb 2013/12/17 more cleanup and dealing with no objects found.
#mdb 2014/09/08 removing .exe references, and changing folder name, so 2013 works.
#mdb 2018/03/06 the -notmatch doesn't include usp_M* under certain circumstances. Changing to notlike
#param has to be the first line of the script.
param(
[Parameter(Mandatory=$true,Position=1)]
[string]$Server,
[Parameter(Mandatory=$true,Position=2)]
[string]$Database,
[string]$ScriptType ='Object',
[string]$SchemaToScript,
[string]$ObjectToScript,
[string]$Author,
[string]$Comment
)
#make sure ObjectToScript is left blank if scripting out the entire DB; makes the checkin better
#these are the parameters for testing
#$Server='sql_repository'# the server it is on
#$Database='model' # the name of the database you want to script as objects
#$SchemaToScript = 'dbo'
#$ObjectToScript = 'spy_tempdef'
#$Author = 'michael.bourgon'
#$ScriptType ='Object'
#$Comment = 'bourgon_test'
#setting up an error code for later
$myerror = 0
cd c:\tfs_cli\en_workspace
if ($comment -eq '')
{ $comment = "generic EN checkin"}
if ($author -eq '')
{ $author = "erxnetwork\sqlservice"}
#field is mandatory; if we don't know the author, use a known-valid login.
$ServerNameClean = "$($Server -replace '[\\\/]','__')"
clear
#writing this out for logging and troubleshooting. These are all the parameters except ScriptType
write-host $Server, $Database, $SchemaToScript, $ObjectToScript, $Author, $Comment
#TFS workspace folder - whatever you set it up as on your server
$DirectoryToSaveTo='C:\TFS_CLI\EN_Workspace' # the directory where you want to store them
# Load SMO assembly, and if we are running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
$v = [System.Reflection.Assembly]::LoadWithPartialName( 'Microsoft.SqlServer.SMO')
if ((($v.FullName.Split(','))[1].Split('='))[1].Split('.')[0] -ne '9') {
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null
}
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SmoEnum') | out-null
set-psdebug -strict # catch a few extra bugs
$ErrorActionPreference = "stop"
$My='Microsoft.SqlServer.Management.Smo'
$srv = new-object ("$My.Server") $Server # attach to the server
if ($srv.ServerType -eq $null) # ServerType is null if it could not connect to the server
{
Write-Error "Sorry, but I could not find Server '$Server' "
return
}
$scripter = new-object ("$My.Scripter") $srv # create the scripter
#Add the various options we care about
$scripter.Options.ToFileOnly = $true
$scripter.Options.ExtendedProperties= $true # yes, we want these
$scripter.Options.DRIAll= $true # and all the constraints
$scripter.Options.Indexes= $true # Yup, these would be nice
$scripter.Options.Triggers= $true # This should be included
$scripter.Options.AppendToFile = $False
$scripter.Options.AllowSystemObjects = $False
$scripter.Options.ClusteredIndexes = $True
$scripter.Options.DriAll = $True
$scripter.Options.ScriptDrops = $False
$scripter.Options.IncludeHeaders = $False #so you do not get the one line "scripted at..." which would be NOW.
#$scripter.Options.ToFileOnly = $True
#$scripter.Options.Indexes = $True
$scripter.Options.Permissions = $True
$scripter.Options.WithDependencies = $False
$scripter.Options.Bindings = $true
$scripter.Options.IncludeDatabaseRoleMemberships = $true
#################################################
#First, script out the database "create" itself.#
#################################################
if (($Database) -and $ObjectToScript -eq '') #if database has a value but there is no object, script out the DB.
{
$db_scripter = new-object ("$My.Scripter") $srv # script out the database creation
$db_scripter.options=$scripter.options # with the same options
$db_scripter.options.filename="$($DirectoryToSaveTo)\$($ServerNameClean)\$($Database)\$($Database)_database_create.sql" # with the same options
#explicitly creating the path for the DB script here. We still need to do it for all the sub-types.
# Could probably move the $d declaration up here, but leaving it here for readability.
$SavePath="$($DirectoryToSaveTo)\$($ServerNameClean)\$($Database)"
# create the directory if necessary (SMO does not).
if (!( Test-Path -path $SavePath )) # create it if not existing
{Try { New-Item $SavePath -type directory | out-null }
Catch [system.exception]{
Write-Error "error while creating '$SavePath' $_"
return
}
}
#Use TF to see if the object exists on our TFS server.
# Optimization idea: DIR the entire subfolder on a FULL and compare all at once.
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF dir $/Randolph/$ServerNameClean/$Database/$($Database)_database_create.sql 2>&1"
# Running the TF calls this way as per Simon Ejsing to ignore the error state and capture the actual error message
# http://stackoverflow.com/questions/2095088/error-when-calling-3rd-party-executable-from-powershell-when-using-an-ide
# However, that also means we need to check the captured output text for errors ourselves.
#Note that if the database create has not changed, it will still attempt to CHECKIN, but TFS will ignore as it is the same.
if ($tf -like "No items match*" -or $tf -like "*is not found or not supported*")
{
"database script does not exist; scripting out and ADDing to TFS"
if(Test-Path -Path $db_scripter.options.filename)
{
#delete the file manually, since we have seen permission issues where the $script cannot overwrite.
$deleteme = "$SavePath\$($Database)_database_create.sql"
$deleteme
try
{
remove-item "$SavePath\$($Database)_database_create.sql" -force
}
catch
{
$error[0].Exception
}
}
#putting in a try/catch so we get error messages if it breaks, and it can continue.
try
{
$db_scripter.Script($srv.Databases[$Database]) # do it
}
Catch
{
"Error Message trying to script out $SavePath\$Filename"
$error[0].Exception
}
"database create script done"
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF add $/Randolph/$ServerNameClean/$Database/$($Database)_database_create.sql 2>&1"
$tf
#use mass checkin at the end
}
else
{
"database script exists; get, check out, script to override, check in"
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF get ""$/Randolph/$ServerNameClean/$Database/$($Database)_database_create.sql"" /noprompt 2>&1"
"database script GET results"
$tf
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF checkout ""$/Randolph/$ServerNameClean/$Database/$($Database)_database_create.sql"" 2>&1"
"database script CHECKOUT results"
$tf
"database checkout done"
#If the file exists, manually delete; we have seen permission issues where $script cannot overwrite.
if(Test-Path -Path $db_scripter.options.filename)
{
$deleteme = "$SavePath\$($Database)_database_create.sql"
$deleteme
try
{
#bug exists with standard remove - if there are read-only items in the same folder, -force is required
remove-item "$SavePath\$($Database)_database_create.sql" -force
}
catch
{
$error[0].Exception
}
}
#putting in a try/catch so we get error messages if it breaks, and it can continue.
try
{
$db_scripter.Script($srv.Databases[$Database]) # do it
}
Catch
{
"Error Message trying to script out $SavePath\$Filename"
$error[0].Exception
}
"database script out done"
#use mass checkin at the end
}
}
###########################
## Scripting out Objects ##
###########################
# we now get all the object types except extended stored procedures
# first we get the bitmap of all the object types we want
$all =[long] [Microsoft.SqlServer.Management.Smo.DatabaseObjectTypes]::all `
-bxor [Microsoft.SqlServer.Management.Smo.DatabaseObjectTypes]::ExtendedStoredProcedure
# and we store them in a datatable
$d = new-object System.Data.Datatable
# get almost everything; skipping most service broker, information_schema, system_views, certificates (cannot be scripted)
# there are other items that may need to be skipped, like SymmetricKeys in SSISDB
# Yes, I realize the irony in skipping SB given that it is powering this.
#putting in a try/catch so we get error messages if it breaks, and it can continue.
try
{
$d=$srv.databases[$Database].EnumObjects([long]0x1FFFFFFF -band $all) | `
Where-Object {$_.Schema -ne 'sys'-and $_.Schema -ne "information_schema" -and $_.DatabaseObjectTypes -ne 'ServiceBroker' `
-and $_.DatabaseObjectTypes -ne 'Certificate' `
-and $_.DatabaseObjectTypes -ne 'MessageType' `
-and $_.DatabaseObjectTypes -ne 'ServiceContract' `
-and $_.DatabaseObjectTypes -ne 'ServiceQueue' `
-and $_.DatabaseObjectTypes -ne 'ServiceRoute' `
-and ($SchemaToScript -eq '' -or $_.Schema -eq $SchemaToScript) `
-and (($ObjectToScript -eq '' -and $ScriptType -eq 'Full') -or $_.Name -eq $ObjectToScript) `
-and ($_.Name -notlike 'sp_MS*') }
# mdb 2013/11/07 previous line skips replication objects. (Comment sits below the code because comment lines break the extended one-liner.)
}
Catch
{
"Error Message trying to enumerate the database - may be logshipped or being restored"
$myerror = 1
$error[0].Exception
}
# List every item that we are going to do
$d = $d | sort -Property DatabaseObjectTypes,Schema,Name
$d | select databaseobjecttypes, schema, name
if ($d.Count -gt 10000)
{
"skipping the database objects - more than 10000"
}
# Now write out each scriptable object as a file in the directory you specify
#it appears that an empty array never actually enters the FOREACH, leaving variables unset
# -and -$d.Count -ne 0
if ($myerror -eq 0 -and $d.Count -lt 10001) #20k of objects takes up 5gb of RAM in the PS script and causes problems
{
$d| FOREACH-OBJECT { # for every object we have in the datatable.
"" #blank line so each block of error messages is separated out
$SavePath="$($DirectoryToSaveTo)\$ServerNameClean\$Database\$($_.DatabaseObjectTypes)"
# create the directory if necessary (SMO does not).
if (!( Test-Path -path $SavePath )) # create it if not existing
{Try { New-Item $SavePath -type directory | out-null }
Catch [system.exception]{
Write-Error "error while creating '$SavePath' $_"
return
}
}
# tell the scripter object where to write it, and make sure it is actually writeable
if ($_.schema)
{
$Filename = "$($_.schema -replace '[\\\/\:\.]','-').$($_.name -replace '[\\\/\:\.\ ]','-').sql";
}
else
{
$Filename = "$($_.name -replace '[\\\/\:\.]','-').sql";
}
$scripter.Options.FileName = "$SavePath\$Filename"
$scripter.Options.FileName #print it out so we know what is being done
$UrnCollection = new-object ('Microsoft.SqlServer.Management.Smo.urnCollection')
$URNCollection.add($_.urn)
############################
# TFS code for each object #
############################
#Use TF to see if the object exists on our TFS server
"checking to see if object exists"
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF dir $/Randolph/$ServerNameClean/$Database/$($_.DatabaseObjectTypes)/$Filename 2>&1"
# Running all the TF commands this way as per Simon Ejsing to ignore the error state and capture the actual error message.
# http://stackoverflow.com/questions/2095088/error-when-calling-3rd-party-executable-from-powershell-when-using-an-ide
if ($tf -like "No items match*" -or $tf -like "*is not found or not supported*")
{
"no items match; scripting out and ADDing to TFS"
try
{
$scripter.script($URNCollection)
}
Catch
{
"Error Message trying to script out $SavePath\$Filename"
$error[0].Exception
}
"script done"
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF add /noprompt ""$/Randolph/$ServerNameClean/$Database/$($_.DatabaseObjectTypes)/$Filename"" 2>&1"
$tf
#mdb 2013/11/07 only do ONE checkin at the end if we are doing an entire database; all will have the same comment
if ($ObjectToScript -ne '')
{
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF checkin /author:$author /comment:""$comment"" /noprompt 2>&1"
$tf
}
}
else
{
"item exists; get, check out, script to override, check in"
#noprompt causes it to crash, virtually every time
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF get ""$/Randolph/$ServerNameClean/$Database/$($_.DatabaseObjectTypes)/$Filename"" 2>&1"
$tf
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF checkout ""$/Randolph/$ServerNameClean/$Database/$($_.DatabaseObjectTypes)/$Filename"" 2>&1"
$tf
#Delete file before scripting; we have seen permission issues.
if(Test-Path -Path $scripter.options.filename)
{
try
{
remove-item "$SavePath\$Filename" -force
}
catch
{
$error[0].Exception
}
}
try
{
$scripter.script($URNCollection)
}
Catch
{
"Error Message trying to script out $SavePath\$Filename"
$error[0].Exception
}
#mdb 2013/12/03 making this part only run if it is a specific object; that way we can rerun an entire database
if ($ObjectToScript -ne '')
{
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF checkin /author:$author /noprompt /comment:""$comment"" ""$/Randolph/$ServerNameClean/$Database/$($_.DatabaseObjectTypes)/$Filename"" 2>&1"
$tf
}
}
}
}
#If it is a mass add or a database-specific, CHECKIN now.
if ($ObjectToScript -eq '')
{
"final mass checkin"
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF checkin /author:$author /comment:""$comment"" /noprompt 2>&1"
#$tf we do not need this one here because it will be shown below
#$tf
"mass checkin done"
}
"--------------------------------------------------------------------------"
#if the checkin failed, UNDO so the next one does not make it worse.
#The next checkin would probably fix it, but I have had to go back and manually undo. Not fun.
#mdb 20131107 If there were any items to check in, get the results. Throws an error otherwise.
#using Stej code to verify the variable exists; if no objects, $tf is never set, so it bombs here
# http://stackoverflow.com/questions/3159949/in-powershell-how-do-i-test-whether-or-not-a-specific-variable-exists-in-global
if (Test-Path variable:local:tf)
{
if ($tf -like "Changeset * checked in." -or $tf -like "There are no remaining changes to check in.")
{
#mdb 20131107 If there were any items to check in, get the results. Throws an error otherwise.
if ((Test-Path variable:local:d) -and $ObjectToScript -eq '')
{
$tf
}
}
else
{
"changes not made - errorlog follows"
$tf
"================================UNDO BEGINS=================================="
$tf = &cmd /c "c:\TFS_CLI\App_2013\TF undo ""$/Randolph/$ServerNameClean/$Database/*.*"" /recursive 2>&1"
$tf
"=================================UNDO ENDS==================================="
}
}
else
{
"No objects found, nothing done"
}
#TFS rollback code, should look something like this
#c:\TFS_CLI\App_2013\TF undo $/Randolph/$ServerNameClean/VOAgent/DatabaseRole/*.*
#No files checked in due to conflicting changes. These conflicting changes have been automatically resolved. Please try the check-in again.
#rolling back code: c:\TFS_CLI\App_2013\TF undo $/Randolph/cm-01/VOAgent/DatabaseRole/*.*
#for some reason "override:" did not work.
#C:\TFS_CLI\EN_Workspace\server\file.sql
#item exists; get, check out, script to override, check in
#All files are up to date.



 Once you have it saved, try adding an object!
--SSMS, on my_personal_server
use  somedatabase
create table blah (id int identity)

Now from powershell:
C:\TFS_CLI\sql_script_to_TFS.ps1 -Server my_personal_server -Database somedatabase -objecttoscript blah -Author mydomain\validTFSuser

On mine, I get:
item exists; get, check out, script to override, check in
C:\TFS_CLI\YourENproject\my_personal_server\somedatabase\Table:
Checking in edit: dbo.blah.sql
(followed by a confirmation that it checked in)

All good?  Good.  Now for the final step - using EN to kick it off.

Create a couple tables to hold the details.  Mine are in my EN database, since that's where the data is.


CREATE TABLE TFS_Checkin_Errors (id BIGINT IDENTITY PRIMARY key, time_job_run DATETIME, original_id INT, resultant VARCHAR(8000))
CREATE TABLE TFS_Last_Time_Run (last_time_run DATETIME PRIMARY KEY )


Save out this script, then run it on the server.  

USE eventnotificationrec
GO
CREATE PROCEDURE [dbo].[EN_TFS_Checkin]
AS
/*
Purpose - using Event Notifications, determines which objects have changed (sub-database-level, for now).
Once list has been generated, repeatedly call a custom powershell script that will
"get" and "checkout" the object from TFS if it exists, overwrite it with a current copy (scripted via SMO),
then "checkin" the new object with a comment based on the loginname
mdb 2013/10/04 1.00 Works well (albeit a little slowly) and handled multiple days' worth of checkins.
Error items were due to short-lived objects that are created and then dropped.
mdb 2013/10/11 1.10 Using TFSsecurity to find valid users. If user is valid, use their name to checkin.
mdb 2013/10/22 1.11 Changing INDEX to include table. Makes it easier to read in history.
mdb 2013/11/08 1.12 Let's handle extended properties!
mdb 2013/12/03 1.13 the PS code can now script out database creation. Adding things like ALTER_DATABASE,
and dealing with full-database checkins.
mdb 2013/12/04 1.14 fixing author bug
To be done still:
* Script server-level
* Database-level: perms and ALTER/CREATE_SCHEMA
* Logic for Drop and Renames. How do we want to handle that? (TF.exe RENAME, either to new name or "DROPPED - oldname")
* Better logic to skip events? We would conceivably need more table(s) to hold the exception types.
sample of items not yet done
add_role_member --must be at the database level(?)
ADD_SERVER_ROLE_MEMBER
ALTER_ASSEMBLY
ALTER_CREDENTIAL
*/
SET NOCOUNT ON
DECLARE @cmd VARCHAR(8000), @min INT, @max INT, @EventType VARCHAR(128), @Time_of_last_run datetime, @now DATETIME
, @loginname VARCHAR(128), @loginname_temp VARCHAR(128), @TargetObjectType VARCHAR(128)
--mdb 2013/11/08 we didn't add ObjectType as it would screw up our dupe-elimination, but I need TargetObjectType for ExtendedProperties
DECLARE @change_list TABLE (id INT IDENTITY, ServerName VARCHAR(128), EventType VARCHAR(128), LoginName VARCHAR(128)
, DatabaseName VARCHAR(128), SchemaName VARCHAR(128), ObjectName VARCHAR(128)
, TargetObjectName VARCHAR(128), TargetObjectType VARCHAR(128))
DECLARE @valid_tfs_logins TABLE (id INT IDENTITY, loginname VARCHAR(128))
IF OBJECT_ID('tempdb..#holding_tank_for_errors') IS NOT NULL
DROP TABLE #holding_tank_for_errors
CREATE TABLE #holding_tank_for_errors (id INT IDENTITY, resultant VARCHAR(800))
if object_id('tempdb..#tfssecurity') is not null
DROP TABLE #tfssecurity
CREATE TABLE #tfssecurity (id INT IDENTITY, resultant nVARCHAR(800))
--TFSSecurity. Ensures the user is valid by querying for "Team Foundation Valid Users" (though you may have other groups).
-- If valid, uses their name for checkin. If not, uses a default (set in PS) and adds it to the Comment
INSERT INTO #tfssecurity
(resultant)
EXEC master..xp_cmdshell 'C:\tfs_cli\app_2013\tfssecurity /imx "Team Foundation Valid Users" /collection:http://yourtfserver:8080/tfs/DefaultCollection'
--find only users, parsing into a comparable field.
INSERT INTO @valid_tfs_logins (loginname)
SELECT
SUBSTRING(resultant, CHARINDEX(']',resultant)+2, CASE CHARINDEX('(', resultant) WHEN 0 THEN 200 ELSE CHARINDEX('(', resultant) - CHARINDEX(']',resultant)-3 end)
FROM #tfssecurity
WHERE resultant LIKE '%[[]U]%'
--go back 1 day, the first time you run.
IF (SELECT COUNT(*) FROM eventnotificationrec.dbo.TFS_Last_Time_Run) = 0
BEGIN
INSERT INTO eventnotificationrec.dbo.TFS_Last_Time_Run VALUES (GETDATE()-1)
END
--find the last time we ran, so we're only getting in that range
SET @time_of_last_run = (select MAX(last_time_run)
FROM eventnotificationrec.dbo.TFS_Last_Time_Run)
--located here, as soon as possible before the query to get the list of events. We could change it to use the ID field instead.
SELECT @now = GETDATE()
SELECT @time_of_last_run AS last_run_time, @now AS now_run_time
-- can simplify this further.
-- Multiple changes to the same object don't need multiple checkins. Indexes, for instance, only need 1 checkin of the table.
-- will probably need a case statement since (for indexes, in this case) each would be a different object w/same targetobject
-- as of 1.1, each index change requires a separate checkin.
INSERT INTO @change_list
SELECT ServerName, EventType, LoginName, DatabaseName, SchemaName, ObjectName, TargetObjectName, TargetObjectType
FROM eventnotificationrec.dbo.ENAudit_View
WHERE EventType IN
(
--these lines are blocked out by the groupings below, use them to make it easier to read this.
'ALTER_FUNCTION','ALTER_PROCEDURE','ALTER_TABLE','ALTER_VIEW','CREATE_FUNCTION',
'CREATE_PARTITION_FUNCTION', 'CREATE_PARTITION_SCHEME', 'CREATE_PROCEDURE', 'CREATE_TABLE', 'CREATE_VIEW'
,'ALTER_TABLE'
,'Create_Type', 'Alter_Type'
,'ALTER_TRIGGER', 'ALTER_INDEX', 'ALTER_SPATIAL_INDEX'
,'CREATE_TRIGGER', 'CREATE_INDEX', 'CREATE_SPATIAL_INDEX'
,'ALTER_EXTENDED_PROPERTY','CREATE_EXTENDED_PROPERTY','DROP_EXTENDED_PROPERTY'
,'ALTER_AUTHORIZATION_DATABASE','ALTER_DATABASE'
)
AND NOT (EventType = 'ALTER_INDEX' AND commandtext LIKE '%reorg%') --ignore reorgs
AND NOT (EventType = 'ALTER_INDEX' AND commandtext LIKE '%REBUILD%' AND commandtext NOT LIKE '%fillfactor%')
AND insert_datetime > @time_of_last_run
AND insert_datetime <= @now
GROUP BY ServerName, EventType, LoginName, DatabaseName, SchemaName, ObjectName, TargetObjectName, TargetObjectType
ORDER BY MAX(insert_datetime)
SELECT COUNT(*) AS number_of_items_to_do FROM @change_list
--now that we have a list to process, invoke the powershell script for each.
--The powershell script does the work; this just tells it what's changed and needs to be done.
SELECT @min = MIN(id), @max = MAX(id) FROM @change_list
WHILE @min <= @max
BEGIN
SET @EventType = NULL
SET @loginname_temp = NULL
SET @loginname = NULL
SET @TargetObjectType = NULL
--Using TFSSecurity block above, change the loginname to be the actual loginname if they're a valid user
SELECT @loginname_temp = loginname FROM @change_list WHERE id = @min
IF EXISTS (SELECT * FROM @valid_tfs_logins WHERE loginname = @loginname_temp)
BEGIN
SET @loginname = ' -author "' + @loginname_temp + '" -comment "'
END
ELSE
BEGIN
SET @loginname = ' -comment "' + ISNULL(@Loginname_Temp,'blank.username') + ' ----- '
END
--clear the error trap - we have 2, one transient and one permanent
TRUNCATE TABLE #holding_tank_for_errors
--easier to select this once; makes below code more readable.
SELECT @EventType = EventType, @TargetObjectType = TargetObjectType FROM @change_list WHERE id = @min
--basic call, comment has loginname & eventtype
IF @EventType IN ('ALTER_FUNCTION','ALTER_PROCEDURE','ALTER_VIEW','CREATE_FUNCTION',
'CREATE_PARTITION_FUNCTION', 'CREATE_PARTITION_SCHEME', 'CREATE_PROCEDURE', 'CREATE_TABLE', 'CREATE_VIEW')
or (@EventType IN ('ALTER_EXTENDED_PROPERTY','CREATE_EXTENDED_PROPERTY','DROP_EXTENDED_PROPERTY') AND @TargetObjectType = '')
BEGIN
SELECT @cmd = 'powershell -file c:\tfs_cli\sql_script_to_tfs.ps1 -server ' + servername
+ ' -database "' + databasename + '" -schematoscript "' + SchemaName + '" -objecttoscript "' + ObjectName + '"'
+ @LoginName + EventType + ' ' + ObjectName
+ CASE
WHEN TargetObjectName = '' THEN ''
WHEN TargetObjectName IS NOT NULL THEN ' on ' + TargetObjectName
ELSE '' END + '"'
FROM @change_List WHERE id = @min
END
--when we want to include the actual change in the comment, to get specifics like ALTER TABLE ADD COLUMN
ELSE
IF @EventType IN ('ALTER_TABLE')
BEGIN
SELECT @cmd = 'powershell -file c:\tfs_cli\sql_script_to_tfs.ps1 -server ' + servername
+ ' -database "' + databasename + '" -schematoscript "' + SchemaName + '" -objecttoscript "' + ObjectName + '"'
+ @LoginName + EventType + ' ' + ObjectName + '"'
FROM @change_List WHERE id = @min
END
--objects without schemas, so searching the SMO to match SchemaName won't find anything
ELSE
IF @EventType IN ('CREATE_TYPE', 'ALTER_TYPE')
BEGIN
SELECT @cmd = 'powershell -file c:\tfs_cli\sql_script_to_tfs.ps1 -server ' + servername
+ ' -database ' + databasename + ' -objecttoscript ' + ObjectName
+ @LoginName + EventType + ' ' + ObjectName + '"'
FROM @change_List WHERE id = @min --when scripted, they have no schema
END
--objects that are actually "part" of another object, and thus use the TargetObject to determine what to script out.
ELSE
IF @EventType IN ('ALTER_TRIGGER', 'ALTER_INDEX', 'ALTER_SPATIAL_INDEX', 'CREATE_TRIGGER', 'CREATE_INDEX', 'CREATE_SPATIAL_INDEX')
or (@EventType IN ('ALTER_EXTENDED_PROPERTY','CREATE_EXTENDED_PROPERTY','DROP_EXTENDED_PROPERTY') AND @TargetObjectType <> '')
BEGIN
SELECT @cmd = 'powershell -file c:\tfs_cli\sql_script_to_tfs.ps1 -server ' + servername
+ ' -database "' + databasename + '" -schematoscript "' + SchemaName + '" -objecttoscript "' + TargetObjectName + '"'
+ @Loginname + EventType + ' ' + ISNULL(ObjectName,'') + '.' + TargetObjectName + '"'
FROM @change_List WHERE id = @min
END
--database-level objects, such as CREATE or changes to a database
ELSE
IF @EventType IN ('ALTER_AUTHORIZATION_DATABASE','ALTER_DATABASE','CREATE_DATABASE')
BEGIN
SELECT @cmd = 'powershell -file c:\tfs_cli\sql_script_to_tfs.ps1 -server ' + servername
+ ' -database "' + databasename + '"'
+ @LoginName + EventType + ' ' + databasename + '"'
FROM @change_List WHERE id = @min
END
--run the powershell command, saving errors to a temp table
RAISERROR (@cmd, 0, 1) WITH NOWAIT
INSERT INTO #holding_tank_for_errors
EXEC master..xp_cmdshell @cmd
--if any errors during execution, save to a permanent table to look at later.
IF EXISTS
(
SELECT *
FROM #holding_tank_for_errors
WHERE resultant LIKE '%sql_script_to_tfs.ps1%'
)
BEGIN
INSERT INTO eventnotificationrec.dbo.TFS_Checkin_Errors (time_job_run, original_id, resultant)
SELECT GETDATE(), id, resultant
FROM #holding_tank_for_errors ORDER BY id
END
SET @min = @min + 1
END
--and now that we're successful, add a record so we know when we ran.
INSERT INTO eventnotificationrec.dbo.TFS_Last_Time_Run (last_time_run) VALUES (@now)
GO


Run once to make sure it's acting as expected, then set up a job to run it every 5 minutes.
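If you want a starting point for that job, here's a minimal sketch using the msdb procs (the job name is illustrative; the database matches the proc above):

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'EN - TFS checkin';
EXEC dbo.sp_add_jobstep @job_name = N'EN - TFS checkin',
    @step_name = N'run EN_TFS_Checkin',
    @subsystem = N'TSQL',
    @database_name = N'eventnotificationrec',
    @command = N'EXEC dbo.EN_TFS_Checkin;';
EXEC dbo.sp_add_jobschedule @job_name = N'EN - TFS checkin',
    @name = N'every 5 minutes',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,       -- subday units = minutes
    @freq_subday_interval = 5;
EXEC dbo.sp_add_jobserver @job_name = N'EN - TFS checkin';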

....and..... you're done!
Check the error tables for issues; right now it doesn't handle spaces in the object name - I'm fixing that this week.
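
A quick way to eyeball recent failures, using the table we created earlier:

SELECT time_job_run, original_id, resultant
FROM eventnotificationrec.dbo.TFS_Checkin_Errors
WHERE time_job_run > GETDATE() - 1
ORDER BY time_job_run DESC, original_id;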