Tuesday, January 16, 2018

[WMI] permissions for Win32_PerfFormattedData_*

Note: if you're trying to use WMI to query any of the performance counters (Win32_PerfFormattedData_*), you don't just need the normal WMI permissions; the account running the query also has to be added to "Performance Monitor Users" (or, if using a GPO, "BUILTIN\Performance Monitor Users").  In my case, when I ran the query I'd get nothing.  No errors, no data, nothing.  Adding the account to that group (in addition to all the usual WMI permissions) fixed it.

In this case, I found it (and more) at: http://vniklas.djungeln.se/2012/08/22/set-up-non-admin-account-to-access-wmi-and-performance-data-remotely-with-powershell/
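For reference, a quick sketch of both halves: the group add is the actual fix, and the query is a smoke test.  The server name, account, and counter class are placeholders, and this assumes you run the group add on the target box as an admin.

net localgroup "Performance Monitor Users" "mydomain\myWMIacct" /add

Get-WmiObject -ComputerName myservername -Namespace "root\cimv2" `
    -Class Win32_PerfFormattedData_PerfOS_Processor `
    -Credential (Get-Credential "mydomain\myWMIacct") |
    Select-Object Name, PercentProcessorTime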

Friday, January 12, 2018

[Powershell] Credentials/Credential/Credentialing



Had to dig through email to find this, so obviously it needs to go into my aux brain - this blog.
I had a special account set up with WMI permissions.  But that's not the same as the SQL Server account running the script.

  • To get credentialing working the first time:
    • Log onto the box as the service account for SQL Server
    • Run this in Powershell: read-host -assecurestring | convertfrom-securestring | out-file c:\scripts\securestring_theADaccountName.txt # it will ask for the password – provide it.
      • (that will read the password then save it, encrypted and readable only by that logged-in user, to the named file.  Name it so that you know which account you have to log in as in order for it to work.)
  • To use from then on:
$username = "mydomain\myWMIacct"
$password = cat c:\scripts\securestring_theADaccountName.txt | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential `
         -argumentlist $username, $password
Get-WmiObject -ComputerName myservername -Class Win32_operatingsystem -Namespace "root\cimv2" -Credential $Cred


 Now, when you run the job, it'll run the query as that account.

And if you're feeling like trouble, here's a HORRIBLE alternative (the password sits right there in plain text in your script):

$credential = New-Object System.Management.Automation.PsCredential("yourdomain\youruser", (ConvertTo-SecureString "yourpassword" -AsPlainText -Force))

Tuesday, January 9, 2018

[IMOLTP] Detaching/Attaching from one server to another.

So, I was trying to get an in-memory database moved from one server to another (long story, involving IMOLTP melting down on me and resulting in a 2 terabyte log file).

I thought it'd be simple: copy the files over, along with the Filestream folders, and attach.  Nope!  Various errors, including "5(Access is denied.)".

Thanks to Simon


Test:

Create DATABASE demo
go
ALTER DATABASE demo ADD FILEGROUP demo_mod CONTAINS MEMORY_OPTIMIZED_DATA
go
ALTER DATABASE demo ADD FILE (name='demo_mod1', filename='M:\temp\demo_mod1') TO FILEGROUP demo_mod
go
Use Demo
go
  CREATE TABLE dbo.ShoppingCart (   
    ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,  
    UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),   
    CreatedDate DATETIME2 NOT NULL,   
    TotalPrice MONEY  
    ) WITH (MEMORY_OPTIMIZED=ON)   
  GO  



Now take the database offline, then copy the data file, the log file, and the in-memory folder (demo_mod1).

On the target server, right-click each of the copied files and folders, go to Security, and add the SQL Server service account, granting it FULL CONTROL.
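If you'd rather script that than click through the GUI, icacls should do it.  This sketch assumes a default instance (per-service account NT SERVICE\MSSQLSERVER; adjust for a named instance) and the paths from the attach statement below:

icacls "M:\Data\demo.mdf" /grant "NT SERVICE\MSSQLSERVER:F"
icacls "L:\Logs\demo_log.ldf" /grant "NT SERVICE\MSSQLSERVER:F"
icacls "M:\Data\demo_mod1" /grant "NT SERVICE\MSSQLSERVER:(OI)(CI)F" /T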

Now, issue this to mount it.  I just tested, and the commented-out line doesn't APPEAR to be needed, but that's because my new locations matched the old ones; if all the locations match, SQL Server grabs the demo_mod1 location on its own, probably from the MDF.  So in theory, if all your drive letters match, you only need the two FILENAMEs.

CREATE DATABASE [demo] ON 
( FILENAME = N'M:\Data\demo.mdf' ),
( FILENAME = N'L:\Logs\demo_log.ldf' ),
--FILEGROUP [demo_mod] CONTAINS FILESTREAM DEFAULT
(NAME = N'demo_mod1',FILENAME='M:\Data\demo_mod1') -- this is actually a folder...
FOR ATTACH 

Friday, September 1, 2017

[AWS] Installing powershell, basics, and adding multiple tags to RDS instances with Powershell

The better-formatted parts are courtesy of a coworker.  : )

In the AWS Console: Go to IAM, create a new user, set yourself up for programmatic access with an access key.  Download the credentials in the CSV.

Then download and install the AWSPowerShell cmdlets:
  1. Download from https://aws.amazon.com/powershell/.  
    1. If you already have the AWS Powershell cmdlets installed and you need to install a later version, UNINSTALL the existing version before installing the new version. 
  2. Run the msi to install the cmdlets.
  3. Start PowerShell ISE as Administrator.  Determine which version of PowerShell is installed by running $PSVersionTable.PSVersion.
  4. If you have version 3 or later, type "import-module awspowershell" at the ISE command prompt.  For earlier versions, use:  Import-Module -Name "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.dll".  
    1. If you don't want to run this manually each time you load it, you can add it to your powershell profile, but it takes 20 seconds or so to load.  
    2. Alternatively, you can create a startup script to run each time you need to use the cmdlets.
    3. To verify the modules were loaded, run the command "Get-Module".  You should see the line "Binary     AWSPowerShell".  To determine the version of the tools, run the command "Get-AWSPowerShellVersion" or "Get-AWSPowerShellVersion -ListServiceVersionInfo", which includes the AWS services that are supported.
  5. See http://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html#pstools-config-ps-window for more information on how to get started with the AWS cmdlets.


For a session, you'll have to import the module.  You can do this by adding it to your profile (though it takes 20 seconds or so), or just run it by hand.

(powershell 3+)
import-module awspowershell

or

(powershell 2?)
Import-Module -Name "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.dll"


  • Set up your credentials and save them in a local (and encrypted, I believe) store on your desktop.  


set-awscredential  -AccessKey your_access_key -SecretKey your_secret_key -StoreAs default    
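Optionally, set that profile and a default region for the session so you don't have to pass -Region to every cmdlet.  (In newer versions of the tools this cmdlet is named Initialize-AWSDefaultConfiguration.)

Initialize-AWSDefaults -ProfileName default -Region us-west-1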



Adding tags to my clusters
(The biggest trick here was the array of hashtables for the key/value pairs.  Normal powershell splatting is name/value, which chokes when trying to insert into AWS' key/value convention.  I tried to find other ways to do it, but this works and is relatively easy; if you have a better way, let me know!  See https://aws.amazon.com/blogs/developer/tagging-amazon-ec2-instances-at-launch/.)

Grab each matching instance, then give them these four tags:

$Tags = @( @{key="tagname1";value="1"}, `
           @{key="tagname2";value="narf"}, `
           @{key="Description";value="yet more info"}, `
           @{key="Servertype";value="testAurora"} )
get-rdsdbinstance -Region us-west-1 | where {$_.DBClusterIdentifier -like '*aurora*'} | %{
add-rdstagstoresource -resourcename $_.DBInstanceArn -Tag $Tags -region us-west-1}
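To spot-check that the tags actually landed, the read side of the same API should work; something like:

get-rdsdbinstance -Region us-west-1 | where {$_.DBClusterIdentifier -like '*aurora*'} | %{
get-rdstagforresource -resourcename $_.DBInstanceArn -region us-west-1}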

[AWS] Series of small posts coming. "Yeah, I didn't think I was going to AWS either"

New tag notification: AWS.

Lessons learned, etc.  Decent bit of powershell in here too.

Monday, August 28, 2017

[CTE] fun with naming of fields CTE

Well, it's been a busy month, and I have mostly been working with AWS Aurora, but I came upon this tidbit courtesy of a coworker, who was trying to figure out why a query was returning bad data.

WITH cte (wt, cmd, lt ) AS
(SELECT lastwaittype lt, waittime wt, cmd cd FROM sysprocesses)
SELECT * FROM cte

So, we have a CTE with a named column list, but the expression inside it has its own (different) aliases.  So what happens?



Yup!  The CTE's column list overrides the aliases inside the expression, and the mapping is purely positional: wt = lastwaittype, cmd = waittime, lt = cmd.  It makes sense, certainly, but at the same time not really what I expected.
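A stripped-down version makes the positional mapping obvious:

WITH cte (a, b) AS
(SELECT 1 AS x, 2 AS y)
SELECT * FROM cte
-- returns columns named a and b (values 1 and 2); the x and y aliases are simply discarded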

Thursday, July 20, 2017

[CLR] Soup-to-nuts, making a generic CLR to perform Web API POST calls, via SQL Server

(this is the part I'm both looking forward & dreading, where people see my CLR code and find it ...lacking)

We are trying to use New Relic, a cloud-based monitoring company, to build alerting off of metrics inside our SQL Server database.  Not data ABOUT our SQL Servers, but data stored IN SQL Server.  We have a bunch of front-end servers that handle transaction processing.  Once a transaction is done, we send a copy of it, including all the metadata around the transaction (how long did it take, where did it come from, where did it go, etc.), to SQL Server, which then parses it.

So, how do we get this data into our cloud-based monitoring?

What we settled on was building a CLR that would make the web calls, feeding it our data via a FOR JSON query.  We would then log the results into a separate table to make sure everything worked as expected.  I made this as generic as possible so that others could use it.

So let's go through the steps.

  1. Create the .Net code necessary
  2. Create a CLR script for compilation
  3. Compile the CLR
  4. CREATE the ASSEMBLY
  5. CREATE the PROCEDURE
  6. Call the procedure
  7. Run it automatically

Step 1: Create the .Net code for the CLR.  

Originally pilfered from this Stack Overflow item: https://stackoverflow.com/questions/28435637/sending-http-post-request-from-sql-server-2012-or-sql-clr-c-sharp.  Many thanks to @Dennis!  There are a lot of examples out there, all of which do it slightly differently.  And one comment here (https://sqlsunday.com/2013/03/03/web-requests-using-clr-proc/) made me super paranoid, since it spoke of a potential long-term memory leak in case of connection issues.  So I brought it to one of our developers, who proceeded to slightly change it so that it looked like...

Step 2: CLR script to compile.

The script I was given was pretty much complete, but I wound up having to add some headers ("using" directives; note there are two different kinds of "using" in the code, the directives at the top and the using blocks that dispose of resources) so it would compile.  I also added the X-Insert-Key header here.

 Save this script as webPOST.cs
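The exact script isn't reproduced here, but a minimal sketch of the shape, reconstructed from the procedure signature in Step 5, looks roughly like this (an approximation, not our exact production code):

using System;
using System.Data.SqlTypes;
using System.IO;
using System.Net;
using System.Text;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void POSTWebRequest(
        SqlString urlToSendTo, SqlString appHeader, SqlString xInsertKey,
        SqlString dataToSend, out SqlString responseFromWeb, out SqlString error)
    {
        responseFromWeb = SqlString.Null;
        error = SqlString.Null;
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(urlToSendTo.Value);
            request.Method = "POST";
            request.ContentType = "application/" + appHeader.Value; // e.g. "json"
            request.Headers.Add("X-Insert-Key", xInsertKey.Value);

            byte[] payload = Encoding.UTF8.GetBytes(dataToSend.Value);
            request.ContentLength = payload.Length;

            // the second kind of "using": these blocks dispose the streams
            // promptly, which guards against the leak mentioned above
            using (Stream requestStream = request.GetRequestStream())
            {
                requestStream.Write(payload, 0, payload.Length);
            }
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                responseFromWeb = new SqlString(reader.ReadToEnd());
            }
        }
        catch (Exception ex)
        {
            error = new SqlString(ex.ToString());
        }
    }
}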



Step 3: Compiling the CLR.  

We're cheating, somewhat.  We will come back and properly sign and certificate-ize this, but we wanted to get it up and testing.  So, open a command line, navigate to the folder where you saved the script from Step 2, and run the following statement:

"C:\Program Files (x86)\MSBuild\14.0\Bin\amd64\csc.exe" /target:library webPOST.cs

That worked on mine, and left a "webPOST.dll" file alongside my webPOST.cs.  Copy that DLL over to a folder on your SQL Server.

Step 4: Creating the Assembly.

Like I said at the very beginning, we're cheating here.  There are two ways to create the assembly (aka putting the DLL inside SQL Server).  The first involves creating a key (inside Visual Studio; no good way to do it via code), signing the assembly, and creating the logins & users necessary to execute the assembly.

The second is to create a separate database with virtually nothing in it and mark it TRUSTWORTHY, which in general is NOT a good idea.  Here be DRAGONS, people!  Big nasty sticky security issues.  Google "TRUSTWORTHY SQL SERVER" and READ.

CREATE DATABASE webposttest
go
ALTER DATABASE webposttest SET TRUSTWORTHY ON
go
CREATE ASSEMBLY webPOST FROM 'c:\temp\webPOST.dll' WITH PERMISSION_SET = EXTERNAL_ACCESS;

What's EXTERNAL_ACCESS?  There's more info about CLR permission sets online, but it comes down to "give this code access to resources outside the SQL Server, but still keep it in the protected memory space", which means it _shouldn't_ be able to take your SQL Server down.  There are several examples out there using UNSAFE.  DON'T USE UNSAFE unless you have a really excellent reason, especially since we're able to do all this with EXTERNAL_ACCESS.

Step 5: Creating the stored procedure.

This is your T-SQL interface into the CLR.  Note all the NVARCHARs, since .Net strings are all Unicode.

CREATE PROCEDURE webPost
(@urlToSendTo     nvarchar(400)
,@appHeader       nvarchar(40)
,@xInsertKey      nvarchar(40)
,@dataToSend      nvarchar(MAX)
,@responseFromWeb nvarchar(MAX) OUTPUT
,@error           nvarchar(MAX) OUTPUT)
AS

EXTERNAL NAME webPost.StoredProcedures.POSTWebRequest

GO


Step 6: Call the procedure.

Let's do a simple test.  I'm converting login_time because New Relic wants UNIX "epoch" time.  So long as everything's after 1970, you're good (well, until the epochalypse occurs; see "Year 2038 problem").  You need the eventType to tell New Relic what "set" this is part of, and they say they'll only accept 1000 events at once (it does seem to vary; I'm sending more than that).

DECLARE @responsefromweb NVARCHAR(MAX), @error NVARCHAR(MAX), @JSONset NVARCHAR(MAX)


SELECT @JSONset = (SELECT top 1000
        'SQLSysProc' AS eventType,
        kpid,
        lastwaittype,
        DATEDIFF(SECOND, {D '1970-01-01'}, login_time) AS login_time
FROM    sys.sysprocesses
FOR JSON PATH);
 

EXEC webpost @urltosendto = 'https://insights-collector.newrelic.com/v1/accounts/SOMEBIGNUMBERHERE/events',
@appheader = 'json',
@xinsertkey = 'yourAPInameGOEShere',
@datatosend = @JSONset,
@responsefromweb = @responsefromweb OUTPUT,
@error = @error OUTPUT


SELECT @responsefromweb, @error


...and what do we get back?
{"success":true} in @responsefromweb.

Step 7: Run it Automatically


That's my next step!  Combine all of it, set up logging, deal with errors, retry logic, etc.

Friday, April 21, 2017

[Powershell] Run powershell code 30 seconds after the job it's in finishes(!)

We have a process with a tracking table; as each step runs, it updates its row in the tracking table.  If the job fails, we know in which step it failed.

I also have a process that looks at failed jobs every minute and sends out the tracking-table info when an SSISDB job fails, so we get a useful message.  (See SSISDB in this blog.)

However, the tracking table means a job will wait until it's fixed.  I don't want to update it as part of the job, since then I couldn't get at the state of the tracking table.  So what I need is either to kick off another job that waits 2 minutes... or use powershell.  :)

Next up is feeding it a parameter and having the background script run an UPDATE statement; there's a rough sketch of that below the job step.

Note: works in powershell 2 or greater.  This seems much easier to do in newer versions.

First script "runinforeground.ps1":
start-process -filepath "powershell" -argumentlist "c:\temp\runinbackground.ps1"

Second Script "RunInBackground.ps1":
$now = get-date
start-sleep -s 10
$now | out-file c:\temp\whatisnow.txt #here is where I'd do the invoke-sqlcmd2 and the update



SQL Server Agent Job Step (CmdExec):
powershell.exe "c:\temp\runinforeground.ps1"
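
And here's the rough sketch mentioned above for the parameterized version.  The dbo.JobTracking table and its columns are hypothetical stand-ins, and it assumes invoke-sqlcmd (or the invoke-sqlcmd2 mentioned above) is available:

First script "runinforeground.ps1":
param([string]$JobName)
start-process -filepath "powershell" -argumentlist "c:\temp\runinbackground.ps1 -JobName '$JobName'"

Second Script "RunInBackground.ps1":
param([string]$JobName)
start-sleep -s 30
# hypothetical table and columns; swap in your real tracking-table update
invoke-sqlcmd -serverinstance "myservername" -query "UPDATE dbo.JobTracking SET job_state = 'ready' WHERE job_name = N'$JobName';"

SQL Server Agent Job Step (CmdExec):
powershell.exe "c:\temp\runinforeground.ps1 -JobName 'MyJobName'"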

Wednesday, March 22, 2017

[Event Notifications] CONNECT and SQL permissions between boxes.

Just ran into this and didn't have it blogged for some reason.

For Event Notifications, you need two permissions on the monitorING server for the service account running on the monitorED server: CONNECT SQL and CONNECT on the endpoint.  It must have both.

If you just have CONNECT SQL enabled, and not the endpoint, then what I saw in sys.transmission_queue was:


An error occurred while receiving data: '24(The program issued a command but the command length is incorrect.)'.

Here's the command to grant the endpoint permission.  Remember that the authorization will be the name you're logged in as, so you might change that or ALTER the AUTHORIZATION afterwards (see http://thebakingdba.blogspot.com/2015/01/event-notifications-changing-security.html)

use [master]
GO
GRANT CONNECT ON ENDPOINT::[ENAudit_Endpoint] TO [yourdomain\monitorEDserviceacct]
GO
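
For completeness, the server-level half is plain old CONNECT SQL (it's normally granted automatically when the login is created, but in case it's been revoked):

use [master]
GO
GRANT CONNECT SQL TO [yourdomain\monitorEDserviceacct]
GO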

Wednesday, March 1, 2017

[Event Notifications] SQL 2016 and a new exclusion

We just rolled a bunch of servers to SQL Server 2016, and we're now getting event notifications for the built-in telemetry_xevents extended event session in our event log, once every 5 minutes, for each server.  On the plus side, I didn't know about telemetry_xevents before, and it looks like a nice new tool to query.  However, I don't need these.


Here's how to prevent them from showing up.  Super-simple exclusion.


INSERT INTO ENAudit_Exclusion_List
        (exclusion_set,
         exclusion_type,
         excluded_value,
         insert_datetime,
         update_datetime,
         active
        )
VALUES  ('aw', -- exclusion_set - char(2)
         'ObjectName', -- exclusion_type - varchar(128)
         'telemetry_xevents', -- excluded_value - varchar(128)
         GETDATE(), -- insert_datetime - datetime
         null, -- update_datetime - datetime
         1  -- active - bit
        )