Monday, May 7, 2018

[AWS] Reading Aurora audit logs in Cloudwatch for DDL changes

Another form of log-watching, this time the audit logs that Aurora can push to CloudWatch.  It doesn't capture everything by any means; heck, the example AWS uses is just about failed logins.

In this case, we're reading the CloudWatch logs for our RDS clusters.
Since CloudWatch offers filters, in this example we're looking for QUERY commands: we've turned Server Audit on and are uploading connects + queries to CloudWatch.  (More information on that in: )  Remember to set a cap on it, although interesting things could then be buried.

Why are we doing this?  In our case, this gives us a few bits of cool info.  Specifically, we can log all the DDL commands that come across our cluster, making sure everything is behaving as expected.
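As a rough sketch of what that looks like (the log group name and region below are placeholders; Get-CWLFilteredLogEvent narrows to the audited QUERY events, and then we pattern-match for DDL keywords):

```powershell
# Hypothetical log group name -- substitute your cluster's audit log group.
$results = Get-CWLFilteredLogEvent -LogGroupName '/aws/rds/cluster/my-aurora-cluster/audit' `
    -FilterPattern 'QUERY' -Region us-west-1
# Keep only DDL statements (CREATE/ALTER/DROP) out of the audited queries.
$results.Events | Where-Object { $_.Message -match '(?i)\b(CREATE|ALTER|DROP)\b' } |
    Select-Object Timestamp, Message | Out-GridView
```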

[AWS] Aurora - reading the errorlog files with powershell

Been working on monitoring.  For some reason, when you tell Aurora to send error logs to CloudWatch, all it sends are the Audit Logs, which will tell you that code has changed, etc, but doesn't (!?!?!??!!!) actually put your error logs in CloudWatch.  I don't understand it, so I built this process to look through the logs and return the data.  The next step would be to format it and either upload it to CloudWatch manually, or log it, or send email.  Whatever works for you.  You be you.  :D
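The gist of the process, as a minimal sketch (the instance name is a placeholder; note that Get-RDSDBLogFile's LastWritten is epoch milliseconds, and depending on your module version Get-RDSDBLogFilePortion may return the text directly rather than a response object):

```powershell
# List this instance's log files changed in the last day, then pull each one down.
$cutoff = [DateTimeOffset]::UtcNow.AddDays(-1).ToUnixTimeMilliseconds()
Get-RDSDBLogFile -DBInstanceIdentifier 'my-aurora-instance' -Region us-west-1 |
    Where-Object { $_.LastWritten -gt $cutoff } |
    ForEach-Object {
        (Get-RDSDBLogFilePortion -DBInstanceIdentifier 'my-aurora-instance' `
            -LogFileName $_.LogFileName -Region us-west-1).LogFileData
    }
```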

Thursday, May 3, 2018

[AWS] powershell to patch all Aurora clusters

Pretty basic, but took longer than I figured it would.  The catch was figuring out how to look inside the results.

set-awscredentials -accesskey youraccesskey -secretkey yoursecretkey

get-rdspendingmaintenanceaction | foreach-object {
Submit-RDSPendingMaintenanceAction -ResourceIdentifier $_.ResourceIdentifier -applyaction $_.PendingMaintenanceActionDetails.action -OptInType immediate }

So when you get the results back, it looks like:

PendingMaintenanceActionDetails               ResourceIdentifier                                         
-------------------------------               ------------------                                         
{Amazon.RDS.Model.PendingMaintenanceAction}   arn:aws:rds:us-west-1:xxxxxxxx:cluster:xxxxxx

How do you view what's in that Amazon.RDS object?  I have no doubt there's some way to unpack it with powershell, but I'm not sure what that is.

What I did:

Looked at the PoSH module documentation for this cmdlet (Get-RDSPendingMaintenanceAction) to see what it returned: a list of Amazon.RDS.Model.ResourcePendingMaintenanceActions objects, which, indeed, is what it returned to us.

Clicking through to the object documentation for the PendingMaintenanceAction class, THAT page says it has Action, AutoAppliedAfterDate, etc.
When I run:

$DataSet = Get-RDSPendingMaintenanceAction
$DataSet.PendingMaintenanceActionDetails

here's what I get:

Action               : system-update
AutoAppliedAfterDate : 1/1/0001 12:00:00 AM
CurrentApplyDate     : 5/2/2018 4:41:00 PM
Description          : Aurora 1.17.2 release
ForcedApplyDate      : 1/1/0001 12:00:00 AM
OptInStatus          : immediate

So, now we have the fields we need: the action to apply (not optional; for Aurora it can be db-upgrade or system-update), and the ResourceIdentifier for the cluster.

Friday, April 13, 2018

[Statistics] Last 4 stats redux - which stats did I update just now that all recent maintenance didn't do?

This is an addendum to my other "last 4 stats" article.  We had a query that had a bad plan, either due to bad statistics or sniffing or whatnot.  Normally rerunning a stats update would fix it, but not today.

Here's our normal stats maint (separate from index maint).

EXECUTE dba_utils.dbo.IndexOptimize
@Databases = 'eif_cde',
@FragmentationLow = NULL,
@FragmentationMedium = NULL,
@FragmentationHigh = NULL,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y'  

The last time I looked, Ola's code uses the built-in rule for stats maintenance (same as sp_updatestats) to determine whether a stat should be updated, so this time I ran a full update (OnlyModifiedStatistics = 'N').  But I was unsure what that had updated that hadn't been updated before.  Fortunately, due to some other code I had, I was able to get the last 4 stats updates.  So, let's figure out which ones were JUST updated, and hadn't been since our last update window at 11pm.  Then we get the columns involved in each of those stats, in hopes that we can look at our "hung" query and see that one of the fields in the WHERE clause hadn't been updated.  That gives us an idea where to tweak, or possibly just to force a rebuild next time.

Step 1: run the code from the previous blog post, making sure it doesn't drop the table when done.
Step 2: run the below code, substituting your own windows.  In my case, I wanted to see which newly updated stats may have fixed the issue.

--and now to return which particular stats hadn't been updated before.
SELECT stats.the_schema_name,
       stats.table_name,
       stats.stat_name,
       columns.name AS column_name,
       stats.updated
--       ,stats.[inserts since last update]
--       ,stats.[deletes since last update]
FROM #stats_info2 stats
INNER JOIN sys.stats ss ON ss.name = stats.stat_name
INNER JOIN sys.stats_columns sstatc ON ss.object_id = sstatc.object_id AND sstatc.stats_id = ss.stats_id
INNER JOIN sys.columns ON columns.column_id = sstatc.column_id AND columns.object_id = ss.object_id
WHERE stats.updated >= '20180413 14:00'
AND NOT EXISTS
( SELECT 1 FROM #stats_info2 stats2 WHERE
stats2.updated < '20180413 14:00' --AND stats2.updated > '20180412 22:59:00' but gives the same results
AND stats.the_schema_name = stats2.the_schema_name
AND stats.table_name = stats2.table_name
AND stats.stat_name = stats2.stat_name )
ORDER BY the_schema_name, table_name, stat_name

Tuesday, March 13, 2018

[AWS] Querying CloudWatch logs using powershell to find DDL changes in your Aurora instance.

Did I get enough buzzwords in that post title? 

So, I've been working on an AWS Aurora MySQL project lately (yes, you need to qualify Aurora, because these days there's a Postgres variant).

One problem we had was getting info out of the logs.  Querying the logs is possible from within MySQL, but time-consuming.  Ditto querying the logs that are being saved to disk.  We used the AWS directions to get our logs to dump into CloudWatch, which offers a Powershell cmdlet to filter them (although it limits to 1MB at a time, hence the DO WHILE with a low rowcount).  Unfortunately, I didn't find any actual working examples.

So while this may not be the best, it does work. 
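Here's roughly what it looks like, as a sketch (the log group name and region are placeholders; -Limit keeps each call small, and NextToken pages through the results until CloudWatch is done):

```powershell
$LogGroup = '/aws/rds/cluster/my-aurora-cluster/audit'   # placeholder name
$NextToken = $null
DO {
    $results = Get-CWLFilteredLogEvent -LogGroupName $LogGroup -Region us-west-1 `
        -FilterPattern 'QUERY' -Limit 50 -NextToken $NextToken
    Write-Host "Got $($results.Events.Count) events"
    $results.Events | Select-Object Timestamp, Message | Out-GridView
    $NextToken = $results.NextToken
} WHILE ($NextToken)
```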

NOTE: results currently go to an out-gridview window.  Obviously, change that to what you need, or:
* remove the write-host
* remove the ogv line
* wrap the entire DO WHILE's output in a variable, then return that at the end. BE CAREFUL DOING THIS. I tried it with a decent-sized result set (30k rows) and got a weird error back:

Get-CWLFilteredLogEvent : Error unmarshalling response back from AWS. Request ID:

Monday, February 12, 2018

[Replication] The row was not found at the Subscriber when applying the replicated (null) command for Table '(null)' with Primary Key(s): (null)

Had this come in courtesy of one of my replication monitors (see other posts).

DESCRIPTION:   Replication-Replication Distribution Subsystem: agent my-ser-ver-here-main-Main_big-my-other-server-here-35 failed. The row was not found at the Subscriber when applying the replicated (null) command for Table '(null)' with Primary Key(s): (null)

Yes, it actually says "(null)".

What was it?  We run multi-tier replication.  That way each datacenter only gets one stream of changes, and we don't run into problems due to desktop heap (see other posts, again, on this blog).

After running a trace, we finally figured it out - the second tier had a foreign key where it shouldn't have had one.  The FKs are handled on the "front" server, the one that everything replicates from.  Because the FK was there, the rows were being deleted out of order, so it had a fit and gave us "(null)" instead of a valid name.  

Note that this is only a problem with multi-tier replication; normal single-tier repl handles it fine, but the second tier somehow choked on this.  

In other news: I've been doing a lot of AWS Aurora I need to post about here, including both MySql SQL, and Powershell, but I don't have enough usable yet.  Coming Soon!

Tuesday, January 16, 2018

[WMI] permissions for Win32_PerfFormattedData_*

Note: if you're trying to use WMI to query any of the performance counters, you don't just need the normal WMI permissions; you also need the account running the query to be added to "Performance Monitor Users" (or, if using a GPO, "BUILTIN\Performance Monitor Users").  In my case, when I ran the query I'd get nothing.  No errors, no data, nothing.  Adding that group (in addition to all the others like WMI perms) fixed it.
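For example, a query like this (server name and $cred are placeholders; see the credentialing post below for building $cred) silently returns nothing at all, rather than erroring, until the account is in that group:

```powershell
# Returns empty -- no error -- if the account lacks "Performance Monitor Users".
Get-WmiObject -ComputerName myservername -Namespace root\cimv2 `
    -Class Win32_PerfFormattedData_PerfOS_Processor -Credential $cred
```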

in this case, found it (and more) at:

Friday, January 12, 2018

[Powershell] Credentials/Credential/Credentialing

Had to dig through email to find this, so obviously it needs to go into my aux brain - this blog.
I had a special account set up with WMI permissions.  But that's not the same as the SQL Server account running the script.

  • To get credentialing working the first time:
    • Log onto the box as the service account for SQL Server
    • Run this in Powershell: read-host -assecurestring | convertfrom-securestring |out-file c:\scripts\securestring_theADaccountName.txt # it will ask for the password – provide it.
      • (that will read the password then save it, encrypted and readable only by that logged-in user, to the named file.  Name it so that you know which account you have to log in as in order for it to work.)
  • To use from then on:
$username = "mydomain\myWMIacct"
$password = cat c:\scripts\securestring_theADaccountName.txt | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential `
         -argumentlist $username, $password
Get-WmiObject -ComputerName myservername -Class Win32_operatingsystem -Namespace "root\cimv2" -Credential $Cred

 Now, when you run the job, it'll run the query as that account.

And if you're feeling like trouble, here's a HORRIBLE alternative:

$credential = New-Object System.Management.Automation.PsCredential("yourdomain\youruser", (ConvertTo-SecureString "yourpassword" -AsPlainText -Force))

Tuesday, January 9, 2018

[IMOLTP] Detaching/Attaching from one server to another.

So, I was trying to get an in-memory database moved from one server to another (long story, involving IMOLTP melting down on me and resulting in a 2 terabyte log file).

I thought it'd be simple: copy the files over, along with the Filestream folders, and attach.  Nope!  Various errors, including "5(Access is denied.)".

Thanks to Simon


Create DATABASE demo
GO
ALTER DATABASE demo ADD FILEGROUP demo_mod CONTAINS MEMORY_OPTIMIZED_DATA -- filegroup must exist first
ALTER DATABASE demo ADD FILE (name='demo_mod1', filename='M:\temp\demo_mod1') TO FILEGROUP demo_mod
GO
Use Demo
GO
CREATE TABLE dbo.ShoppingCart (
    ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED, -- in-memory tables require a PK
    CreatedDate DATETIME2 NOT NULL,
    TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED = ON)

Now take offline, copy the Data file, Log file, and the in-memory folder (demo_mod1).

On the target server, right-click each copied file/folder, go to Security, and add the SQL Server service account, granting FULL CONTROL.

Now, issue this to mount it.  I just tested, and the commented-out line doesn't APPEAR to be needed, but that's because my new location matched; if all the locations match, it will grab the demo_mod1 location, probably from the MDF.  So in theory, if all your drive letters match, you only need the two FILENAMEs. 

CREATE DATABASE demo ON
( FILENAME = N'M:\Data\demo.mdf' ),
( FILENAME = N'L:\Logs\demo_log.ldf' )
--, (NAME = N'demo_mod1', FILENAME='M:\Data\demo_mod1') -- this is actually a folder...
FOR ATTACH

Friday, September 1, 2017

[AWS] Installing powershell, basics, and adding multiple tags to RDS instances with Powershell

The better-formatted parts are courtesy of coworker.  : )

In the AWS Console: Go to IAM, create a new user, set yourself up for programmatic access with an access key.  Download the credentials in the CSV.

AWS Powershell: download and install the AWSPowershell cmdlets.
  1. Download from  
    1. If you already have the AWS Powershell cmdlets installed and you need to install a later version, UNINSTALL the existing version before installing the new version. 
  2. Run the msi to install the cmdlets.
  3. Start PowerShell ISE as Administrator.  Determine which version of PowerShell is installed by running $PSVersionTable.PSVersion.
  4. If you have version 3 or later, type "import-module awspowershell" at the ISE command prompt.  For earlier versions, use:  Import-Module -Name "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.dll".  
    1. If you don't want to run this manually each time you load it, you can add it to your powershell profile, but it takes 20 seconds or so to load.  
    2. Alternatively, you can create a startup script to run each time you need to use the cmdlets.
    3. To verify the modules were loaded, run the command "Get-Module".  You should see the line "Binary     AWSPowerShell".  To determine the version of the tools, run the command "Get-AWSPowerShellVersion" or "Get-AWSPowerShellVersion -ListServiceVersionInfo", which includes the AWS services that are supported.
  5. See for more information on how to get started with the AWS cmdlets.

For a session, you'll have to import the module.  You can do this by adding it to your profile (though it takes 20 seconds or so), or just run it by hand.

(powershell 3+)
import-module awspowershell
(powershell 2?) 
Import-Module -Name "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.dll".  

  • Set up your credentials and save them in a local (and encrypted, I believe) store on your desktop.  

set-awscredential  -AccessKey your_access_key -SecretKey your_secret_key -StoreAs default    

Adding tags to my clusters
(The biggest trick here was the "array of arrays" for the key/value pairs.  Normal powershell splatting is name/value, which chokes when trying to insert into AWS' key/value convention.  Tried to find other ways to do it, but this works and is relatively easy.  If you have a better way, let me know!)

Grab each matching instance, then give them those 4 tags

$Tags = @( @{key="tagname1";value="1"}, `
           @{key="tagname2";value="narf"}, `
           @{key="Description";value="yet more info"}, `
           @{key="Servertype";value="testAurora"} )
get-rdsdbinstance -Region us-west-1 | where {$_.DBClusterIdentifier -like '*aurora*'} | %{
add-rdstagstoresource -resourcename $_.DBInstanceArn -Tag $Tags -region us-west-1}