TL;DR – if the SA account is not named SA (because, as per MS and Corporate, we rename it), the 2016 SP2 CU3 patch fails during the post-patch upgrade scripts and the server shuts down. The fix is to start the service with the T902 trace flag, rename the formerly-known-as-SA account back to SA, then stop the service and start it back up via SERVICES.
Worth reading:
https://blog.sqlauthority.com/2018/04/04/sql-server-upgrade-error-alter-database-statement-not-allowed-within-multi-statement-transaction/
DESCRIBES THE PROBLEM AND SOLUTION:
https://feedback.azure.com/forums/908035-sql-server/suggestions/35805787-sql-server-2016-sp2-cu3-patch-breaks-if-replicatio
We just spent a good 30+ minutes trying to solve this. Fortunately, after reading Pinal Dave's post about it (first link), I was able to figure out that I needed to google different terms, and found the second link, which is far more applicable.
PREVENT:
Rename the renamed not-really-SA account back to SA before patching, at least for this specific patch (2016 SP2 CU3).
THE FIX, if you patched and it doesn’t start back up:
1) From a command window, running AS ADMINISTRATOR, run this:
NET START MSSQLSERVER /T902
2) Connect via SSMS. Security->Logins, find the long-random-string-of-letters. (Alternatively, find it in syslogins, but this appears simpler)
3) Right-click on the former-SA-account, and choose “Rename”
4) Rename to SA.
5) Go back to the command window, and stop the service:
NET STOP MSSQLSERVER
(the trace flag is only needed on the start)
6) Go back to the SERVICES, and start the SQL Service. It should come up.
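If you'd rather skip the SSMS steps, the rename can also be done from that same admin command window while the service is up under T902. A minimal sketch, assuming sqlcmd is installed and using a placeholder for the renamed login you found in step 2:
# placeholder login name - substitute your long-random-string-of-letters login
sqlcmd -S . -E -Q "ALTER LOGIN [long-random-string-of-letters] WITH NAME = [sa];"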
Sunday, December 16, 2018
Thursday, August 16, 2018
[Query Plan] Getting the query execution plan when sp_WhoIsActive gives you a big fat NULL.
Ran into a problem where I couldn't get the execution plan for a query that was taking forever.
Found a smarter man than me who'd built a powershell cmdlet to capture it and save it to disk.
http://www.patrickkeisler.com/2013/09/the-case-of-null-queryplan.html
Here's my wrapper for it:
$SPname = "yourspnamehere"
$servername = "yourserverhere"
. c:\powershell_scripts\invoke-sqlcmd2.ps1
$query = @"
SELECT master.dbo.fn_varbintohexstr(plan_handle) as plan_handle
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle)
WHERE
text LIKE '%$SPname%'
AND objtype = 'Proc'
ORDER BY usecounts DESC;
"@
$queryresults = invoke-sqlcmd2 -serverinstance $servername -query $query
# if several plans match, this grabs all of them; the ORDER BY means the first is the most-used
$planhandle = $queryresults.plan_handle
. z:\mydocs\get-queryplan.ps1 -SqlInstance $servername -planhandle "$planhandle"
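As a side note, once you have the plan handle you can also pull the plan XML yourself and save it as a .sqlplan file that SSMS can open, skipping the second script. A rough sketch under the same assumptions ($servername, $SPname, and $planhandle from above; c:\temp is a placeholder path):
# grab the showplan XML for that handle and write it where SSMS can open it
$planquery = "SELECT query_plan FROM sys.dm_exec_query_plan($planhandle);"
$plan = invoke-sqlcmd2 -serverinstance $servername -query $planquery
$plan.query_plan | out-file -filepath "c:\temp\$SPname.sqlplan"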
Tuesday, June 26, 2018
[AWS] Powershell to figure out the encryption status of your EC2 drives.
Since this turned out to be more difficult than I'd thought, I'm posting it here.
$dbserver_instance = (Get-EC2Instance -Filter @{Name="tag:Name";Value="yourservernamehere"} -Region us-east-1).Instances
get-ec2volume -filter @{Name="attachment.instance-id";value="$($dbserver_instance.instanceid)"} -region us-east-1
The tricky part is the tag:Name filter, which isn't obvious. Also odd: if you just run Get-EC2Volume with the tag:Name and value, you get the exact same output as if you ran Get-EC2Instance! I don't pretend to understand that.
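Since the point is the encryption status, here's a hedged variant of the same two lines that surfaces just the relevant columns (Encrypted, KmsKeyId, and Size are properties on the volume objects that come back):
$dbserver_instance = (Get-EC2Instance -Filter @{Name="tag:Name";Value="yourservernamehere"} -Region us-east-1).Instances
get-ec2volume -filter @{Name="attachment.instance-id";value="$($dbserver_instance.instanceid)"} -region us-east-1 |
    select-object VolumeId, Encrypted, KmsKeyId, Size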
Wednesday, June 6, 2018
[AWS RDS] Sending an email when there's a new version of RDS available
Something I needed recently. Difficulty level: Get-RDSPendingMaintenanceAction ONLY returns info when there's something to be done. If you're current, there's no way to find out what it looks like.
This will email you an HTML-formatted table with the details of each pending maintenance action for every instance in a region.
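Because the cmdlet returns nothing at all when you're current, the script below only sends mail when $upgrade_info is non-empty. A quick hand-check, assuming credentials are already set and using a placeholder region:
if (-not (Get-RDSPendingMaintenanceAction -Region us-east-1)) { "Nothing pending - you're current." }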
Remove-Variable -Name * -ErrorAction SilentlyContinue
import-module awspowershell
$myaccesskey = "youraccesskey"
$mysecretkey = "yoursecretkey"
$myregion = "yourregion"
$myemail = "youremailaddress"
$mysmtp = "smtp.yourcompany.com"
set-awscredentials -accesskey $myaccesskey -secretkey $mysecretkey
$now = get-date
$upgrade_info = Get-RDSPendingMaintenanceAction -Region $myregion
# Taking the results and parsing out the info for each instance
$Parsed_Results = $upgrade_info | select @{Label='ResourceIdentifier';Expression={$_.ResourceIdentifier}},
    @{Label='Action';Expression={$_.PendingMaintenanceActionDetails.Action}},
    @{Label='Description';Expression={$_.PendingMaintenanceActionDetails.Description}},
    @{Label='AutoAppliedAfterDate';Expression={$_.PendingMaintenanceActionDetails.AutoAppliedAfterDate}},
    @{Label='CurrentApplyDate';Expression={$_.PendingMaintenanceActionDetails.CurrentApplyDate}},
    @{Label='ForcedApplyDate';Expression={$_.PendingMaintenanceActionDetails.ForcedApplyDate}},
    @{Label='OptInStatus';Expression={$_.PendingMaintenanceActionDetails.OptInStatus}}
# Making an HTML table
# https://jamesdatatechq.wordpress.com/2014/12/23/how-to-create-an-html-table-from-powershell/
$Style = "
<style>
TABLE{border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}
TH{border-width: 1px;padding: 3px;border-style: solid;border-color: black;}
TD{border-width: 1px;padding: 3px;border-style: solid;border-color: black;}
</style>
"
#Convert the parsed info to a table
$error_table = $Parsed_Results | ConvertTo-HTML -As Table -Fragment -PreContent "<h2>Aurora Updates as of $now</h2>" | Out-String
#Save the HTML Web Page with the table style
[string]$email_table = (ConvertTo-HTML -Head $Style -PostContent $error_table)
# Now send the mail
if ($upgrade_info){
    #Send email to $myemail. SMTP is a CNAME to the corporate SMTP server we use.
    Send-MailMessage -smtpserver $mysmtp `
        -To $myemail `
        -From "My_AWS_Account_Do_Not_Reply <Dev@Null.Com>" `
        -Subject "[AWS] Aurora updates available - time to test and mark for upgrades if good. $now" `
        -Body $email_table -BodyAsHtml
}
Monday, May 7, 2018
[AWS] Reading Aurora audit logs in Cloudwatch for DDL changes
Another form of log-watching, this time the Audit logs that Aurora can push to Cloudwatch. It doesn't capture everything by any means; heck, the example AWS uses is just about failed logins.
In this case, we're reading the Cloudwatch logs for our RDS clusters.
Since Cloudwatch offers filters, in this example we're looking for QUERY commands, since we've turned Server Audit on and are uploading connects + queries to Cloudwatch. (More information on that in: https://aws.amazon.com/blogs/database/monitoring-amazon-aurora-audit-events-with-amazon-cloudwatch/). Remember to set a cap on it, though be aware that interesting things could then be buried.
Why are we doing this? In our case, this gives us a few bits of cool info. Specifically, we can log all the DDL commands that come across our cluster, making sure everything is behaving as expected.
import-module awspowershell
# removing variables. Note that this will wipe out default region, credentials, etc, if set before this line.
Remove-Variable -Name * -ErrorAction SilentlyContinue
Set-DefaultAWSRegion -Region us-west-1
$now = get-date
# For conversion from Unix Epoch
$origin = New-Object -Type DateTime -ArgumentList 1970, 1, 1, 0, 0, 0, 0
# here, we set the keys inline, so there's no $credentials profile
set-awscredentials -accesskey youraccesskey -secretkey yoursecretkey
$myemail = "dev@null.com"
# Get the current timestamp
$Current_Unix_Timestamp = [Math]::Floor([decimal](Get-Date(Get-Date).ToUniversalTime() -uformat "%s")) * 1000
# Get the timestamp to start at.
$hours_to_look_back = 72
$Past_Unix_Timestamp = [Math]::Floor([decimal](Get-Date((Get-Date).AddHours(-1*$hours_to_look_back)).ToUniversalTime() -uformat "%s")) * 1000
# Get a list of all of our CloudWatch log groups
$All_CW_RDS_Logs = get-CWLLogGroup -LogGroupNamePrefix "/aws/rds" | where {$_.LogGroupName -like "*/myprefix-*"}
$CW_RDS = @() # accumulates every page of results, across all log groups
foreach ($Specific_CW_Log_Group in $All_CW_RDS_Logs) {
    write-host $Specific_CW_Log_Group.LogGroupName
    $CW_NextToken = $null # reset for each log group. Required for NextToken to work
    #Using $null for NextToken means we can use the same pattern as for regular logs
    #NOTE: this hangs if the FilterPattern is invalid. Which apparently includes commas, backslashes, etc.
    DO {
        #write-host "CWToken $CW_NextToken"
        $CW_Page = Get-CWLFilteredLogEvent `
            -LogGroupName $Specific_CW_Log_Group.LogGroupName `
            -StartTime $Past_Unix_Timestamp `
            -EndTime $Current_Unix_Timestamp `
            -FilterPattern "QUERY" `
            -Limit 2000 `
            -NextToken $CW_NextToken
        #FilterPattern can't use commas. ",QUERY," should show all create/truncate/drop, but we use QUERY instead
        #unlike the regular logs, this one returns a normal powershell dataset - no need to parse it out, just query events
        $CW_RDS += $CW_Page
        # take the token from this page, not the accumulated array, or the loop never ends
        $CW_NextToken = $CW_Page.NextToken
    } WHILE ($CW_NextToken -ne $null)
}
# Regex-based filtering
$Log_Filters = "temporary table","truncate "
#We use this line to take our list of log filters and generate a regex that is then compared against
# https://blogs.technet.microsoft.com/heyscriptingguy/2011/02/18/speed-up-array-comparisons-in-powershell-with-a-runtime-regex/
# the replace is needed because I want an exact match of those commas, and the escape($_) does weird stuff to it
# 2018/05/09 removed [regex] at beginning as per https://stackoverflow.com/questions/50261383/powershell-3-regex-why-is-case-insensitive-not-working#50261506
$Log_Filter_regex = (($Log_Filters | foreach {[regex]::escape($_) -replace ",","\,"}) -join "|")
# Applying the regex filter
$final_log = $CW_RDS.Events | where {$_.Message -inotmatch $Log_Filter_regex} |
    select @{Label='TimeStampHR';Expression={$origin.AddSeconds($_.TimeStamp/1000).ToLocalTime()}}, LogStreamName, Message
# Making an HTML table
# https://jamesdatatechq.wordpress.com/2014/12/23/how-to-create-an-html-table-from-powershell/
$Style = "
<style>
TABLE{border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}
TH{border-width: 1px;padding: 3px;border-style: solid;border-color: black;}
TD{border-width: 1px;padding: 3px;border-style: solid;border-color: black;}
</style>
"
$error_table = $final_log | ConvertTo-HTML -As Table -Fragment -PreContent "<h2>Cloudwatch Alerts as of $now</h2>" | Out-String
#Save the HTML Web Page as a string so we can email it
[string]$email_table = (ConvertTo-HTML -Head $Style -PostContent $error_table)
# Now send the mail
if ($final_log){
    #Send email to $myemail. SMTP is a CNAME to the corporate SMTP server we use.
    Send-MailMessage -smtpserver smtp.mycompany.com `
        -To $myemail `
        -From "Do_Not_Reply_PoSH <dev@null.com>" `
        -Subject "[AWS] Cloudwatch errors in Prod as of $now" `
        -Body $email_table -BodyAsHtml
}
[AWS] Aurora - reading the errorlog files with powershell
Been working on monitoring. For some reason, when you tell Aurora to send errorlogs to Cloudwatch, all it sends are the Audit Logs, which will tell you that code had changed, etc, but doesn't (!?!?!??!!!) actually put your logs in Cloudwatch. I don't understand it, so I built this process to look through logs and return the data. The next step would be to format it and either upload to Cloudwatch manually, or log it, or send email. Whatever works for you. You be you. :D
###################################################
## Reading error logs for Aurora with Powershell ##
###################################################
# 1.01 - MDB 2018/05/30 Better trapping of time; there's a lot of rows that have no datetime that I want. Also added HTML email.
# 1.02 - mdb 2018/05/30 also dealing better with the regex filter. Recommend you convert it to a NOTMATCH; too many different error types
clear
Remove-Variable -Name * -ErrorAction SilentlyContinue
$myemail = "dev@null.com"
# Note that this one does NOT read the audit log, which means we can filter the log based on a normal timestamp.
# But it does filter messages added after the time given.
import-module awspowershell
$credentials = get-awscredentials -ProfileName mysavedprofilename
Set-DefaultAWSRegion -Region us-west-1
$now = (get-date).ToString('yyyy/MM/dd HH:mm:ss')
$final_log = @() #setting the array to prevent type errors when adding to the array
# Figure out what files we need to look at, since they have a "last written to" field. Despite the FLOOR, the Past_Unix_Timestamp is X hours ago, down to the second, from when it is run.
$hours_to_look_back = 1.25
$Past_Unix_Timestamp = [Math]::Floor([decimal](Get-Date((Get-Date).AddHours(-1*$hours_to_look_back)).ToUniversalTime() -uformat "%s")) * 1000
$Past_Timestamp = (Get-Date((Get-Date).AddHours(-1*$hours_to_look_back)).ToUniversalTime())
#See far below for more on filters; be careful on what you filter. I'd probably use a NotMatch to ignore certain things
$Log_Filters = "[ERROR]","failed","Note","[Warning]"
#We use this line to take our list of log filters and generate a regex that is then compared against
# https://blogs.technet.microsoft.com/heyscriptingguy/2011/02/18/speed-up-array-comparisons-in-powershell-with-a-runtime-regex/
# [regex] $a_regex = '(?i)^(' + (($a | foreach {[regex]::escape($_)}) -join "|") + ')$'
# the replace is needed because I want an exact match of those commas, and the escape($_) does weird stuff to it
# 2018/05/09 removed [regex] at beginning as per https://stackoverflow.com/questions/50261383/powershell-3-regex-why-is-case-insensitive-not-working#50261506
$Log_Filter_regex = (($Log_Filters | foreach {[regex]::escape($_) -replace ",","\,"}) -join "|")
#removed this from the WHERE line below, though useful depending on what you're looking for: -and ($_.LogFileName -notlike '*audit*')
get-rdsdbinstance -Credential $credentials | `
    where {$_.DBClusterIdentifier -like '*'} | % {
    $instancename = $_.DBInstanceIdentifier
    $instancename
    get-rdsdblogfile -dbinstanceidentifier $instancename -Credential $credentials | `
        where {($_.LastWritten -gt $Past_Unix_Timestamp) -and ($_.LogFileName -notlike '*slow*') `
            -and ($_.LogFileName -notlike 'audit/*')
        } |
        % {
        $_.LogFileName
        $LogFileName = $_.LogFileName
        $Marker = 0
        DO
        {
            $AWSLogFile = Get-RDSDBLogFilePortion -DBInstanceIdentifier $instancename -LogFileName $_.LogFileName `
                -Credential $credentials -Marker $Marker -NumberOfLines 2000
            if ($AWSLogFile.LogFileData)
            {
                $parsed_log = $AWSLogFile.LogFileData.split("`n`r") #splitting it to multiple lines using the CRLF in the file
                #Here, I try and parse the first 19 characters, which in the normal log, should be a timedatestamp.
                # If it's either parseable and recent, or unparseable, I then filter and toss into a master array of data.
                #https://social.technet.microsoft.com/Forums/ie/en-US/0a7cbba0-87fb-450d-b8e8-29aed0947fd7/cant-get-datetimetryparseexact-to-work-using-powershell?forum=winserverpowershell
                #https://stackoverflow.com/questions/2988880/extricate-a-substring-using-powershell
                #https://blogs.technet.microsoft.com/heyscriptingguy/2014/12/19/powertip-convert-string-into-datetime-object/
                #parsedDat is what we get back from TryParseExact - either a valid time or a big nothing.
                $parsedDat = 0
                $dateformat = 'yyyy-MM-dd HH:mm:ss'
                $final_log += foreach ($line in $parsed_log) {
                    #reset this each time so we don't get bad data
                    $attemptedtime = $null
                    #The actual substring, but unlike substring it won't throw an error if the line is empty or too short.
                    [string]$attemptedtime = $line[0..18] -join ''
                    #Trying to convert it to an actual date.
                    $linetime = if ([DateTime]::TryParseExact($attemptedtime, $dateformat, $null, 'None', [ref]$parsedDat)) {
                        $parseddat
                    } else {
                        get-date #this way, anything that can't be parsed will be within our window
                    }
                    #Now we get recent lines, run it through the REGEX filter, and save it. Use {1 -eq 1} to return all.
                    # There are a LOT of messages that don't have the right tags, so you might leave it off.
                    if ($linetime -gt $Past_Timestamp)
                    {
                        $line | where {$_ -imatch $Log_Filter_regex} | `
                            select @{LABEL="Instance"; EXPRESSION = {$instancename}}, `
                                @{LABEL="LogFileName"; EXPRESSION = {$LogFileName}}, `
                                @{LABEL="Message"; EXPRESSION = {$_}}
                    }
                }
            }
            $Marker = $AWSLogFile.Marker
        } WHILE ($AWSLogFile.AdditionalDataPending -eq $true)
    }
}
$final_log #you still need to get it into a format you can send; I'm using an HTML table
#echoing it out here so that the SQLAgent log has a record of what happened, if everything worked.
# Making an HTML table
# https://jamesdatatechq.wordpress.com/2014/12/23/how-to-create-an-html-table-from-powershell/
$Style = "
<style>
TABLE{border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}
TH{border-width: 1px;padding: 3px;border-style: solid;border-color: black;}
TD{border-width: 1px;padding: 3px;border-style: solid;border-color: black;}
</style>
"
$error_table = $final_log | ConvertTo-HTML -As Table -Fragment -PreContent "<h2>ErrorLog messages as of $now, starting at $Past_TimeStamp</h2>" | Out-String
#Save the HTML Web Page
[string]$email_table = (ConvertTo-HTML -Head $Style -PostContent $error_table)
# Now send the mail
if ($final_log){
    #Send email to $myemail. SMTP is a CNAME to the corporate SMTP server we use.
    Send-MailMessage -smtpserver smtp.mycompany.com `
        -To $myemail `
        -From "Do_Not_Reply_PoSH <do_not_reply@dev.null>" `
        -Subject "[AWS] RDSDB Errors in Prod as of $now" `
        -Body $email_table -BodyAsHtml
    "mail sent"
}
Thursday, May 3, 2018
[AWS] powershell to patch all Aurora clusters
Pretty basic, but took longer than I figured it would. The catch was figuring out how to look inside the results.
set-awscredentials -accesskey youraccesskey -secretkey yoursecretkey
Get-RDSPendingMaintenanceAction|%{
Submit-RDSPendingMaintenanceAction -ResourceIdentifier $_.ResourceIdentifier -applyaction $_.PendingMaintenanceActionDetails.action -OptInType immediate }
So when you get the results back, it looks like:
PendingMaintenanceActionDetails ResourceIdentifier
------------------------------- ------------------
{Amazon.RDS.Model.PendingMaintenanceAction} arn:aws:rds:us-west-1:xxxxxxxx:cluster:xxxxxx
How do you view what's in that Amazon.RDS object? I have no doubt there's some way to unpack it with powershell, but I'm not sure what that is.
What I did:
Looked at the PoSH module documentation for this cmdlet (Get-RDSPendingMaintenanceAction) to see what it returned:
https://docs.aws.amazon.com/powershell/latest/reference/Index.html
Which says:
PendingMaintenanceActionDetails
ResourceIdentifier
Which, indeed, is what it returned to us.
Now, clicking on the object info from the documentation:
System.Collections.Generic.List<Amazon.RDS.Model.PendingMaintenanceAction>
takes us to:
https://docs.aws.amazon.com/sdkfornet/v3/apidocs/index.html?page=RDS/TRDSResourcePendingMaintenanceActions.html&tocid=Amazon_RDS_Model_ResourcePendingMaintenanceActions)
And THAT page says it has Action, AutoAppliedAfterDate, etc.
When I run
$DataSet = Get-RDSPendingMaintenanceAction
$DataSet.PendingMaintenanceActionDetails
Here's what I get:
Action : system-update
AutoAppliedAfterDate : 1/1/0001 12:00:00 AM
CurrentApplyDate : 5/2/2018 4:41:00 PM
Description : Aurora 1.17.2 release
ForcedApplyDate : 1/1/0001 12:00:00 AM
OptInStatus : immediate
So, now we have the fields we need: what kind of action to take (not optional, and it can be db-update or system-update), and the ResourceIdentifier for the cluster.
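For the record, the generic PowerShell answer to "how do I unpack that object" is dot-notation (as above) or Select-Object -ExpandProperty. A small sketch that puts the details next to the ARN:
Get-RDSPendingMaintenanceAction | ForEach-Object {
    $arn = $_.ResourceIdentifier
    $_.PendingMaintenanceActionDetails |
        Select-Object @{Label='ResourceIdentifier';Expression={$arn}}, Action, Description, CurrentApplyDate
}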
Friday, April 13, 2018
[Statistics] Last 4 stats redux - which stats did I update just now that all recent maintenance didn't do?
This is an addendum to my other "last 4 stats" article. We had a query that had a bad plan, either due to bad statistics or sniffing or whatnot. Normally rerunning a stats update would fix it, but not today.
The last time I looked, Ola's code uses the built-in rule for stats maintenance to determine whether a stat should be updated, same as sp_updatestats. So I ran a full update (OnlyModifiedStatistics = 'N'), but I was unsure what I had just updated that hadn't been updated before. Fortunately, thanks to some other code I had (https://thebakingdba.blogspot.com/2016/08/statistics-get-last-4-stat-update-time.html), I was able to get the last 4 stats updates for each stat. So: let's figure out which ones were JUST updated and hadn't been since our last update window at 11pm, then get the columns involved in each, in hopes that we can look at our "hung" query and see that one of the fields in its WHERE clause hadn't been updated until now. That gives us an idea where to tweak, or possibly just force a rebuild next time.
Here's our normal stats maint (separate from index maint).
EXECUTE dba_utils.dbo.IndexOptimize
    @Databases = 'eif_cde',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y'
Step 1: run the code from the previous blog post, making sure it doesn't drop the table when done.
Step 2: run the below code, substituting your own windows. In my case, I wanted to see which newly updated stats may have fixed the issue.
--and now to return which particular stats hadn't been updated.
SELECT stats.the_schema_name,
stats.table_name,
stats.stat_name,
stats.updated,
-- stats.[inserts since last update],
-- stats.[deletes since last update],
columns.name FROM #stats_info2 stats
INNER JOIN sys.stats ss ON ss.name = stats.stat_name
    AND ss.object_id = OBJECT_ID(QUOTENAME(stats.the_schema_name) + '.' + QUOTENAME(stats.table_name)) --stat names can repeat across tables, so match the table too
INNER JOIN sys.stats_columns sstatc ON ss.object_id = sstatc.object_id AND sstatc.stats_id = ss.stats_id
INNER JOIN sys.columns ON columns.column_id = sstatc.column_id AND columns.object_id = ss.object_id
WHERE stats.updated >='20180413 14:00'
AND NOT EXISTS
( SELECT 1 FROM #stats_info2 stats2 WHERE
stats2.updated < '20180413 14:00' --AND stats2.UPDATEd >'20180412 22:59:00' but gives the same results
AND STATS.the_schema_name = stats2.the_schema_name
AND stats.table_name =stats2.table_name
AND stats.stat_name = stats2.stat_name
)
ORDER BY the_schema_name, table_name, stat_name
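And once you know which stat is the likely culprit, a hedged one-liner for forcing it by hand next time (table and stat names are placeholders):
sqlcmd -S yourserverhere -E -d eif_cde -Q "UPDATE STATISTICS dbo.YourTable YourStatName WITH FULLSCAN;"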
Tuesday, March 13, 2018
[AWS] Querying CloudWatch logs using powershell to find DDL changes in your Aurora instance.
Did I get enough buzzwords in that post title?
So, I've been working on an AWS Aurora MySQL project lately (yes, you need to qualify Aurora, because these days there's a Postgres variant).
One problem we had was getting info out of the logs. Querying the logs is possible from within MySQL, but time-consuming. Ditto querying the logs that are being saved to disk. We used the AWS directions (https://aws.amazon.com/blogs/database/monitoring-amazon-aurora-audit-events-with-amazon-cloudwatch/) to get our logs dumping into CloudWatch, which offers a Powershell cmdlet to filter them (although it limits you to 1MB at a time, hence the DO WHILE with a low rowcount). Unfortunately, I didn't find any actual working examples.
So while this may not be the best, it does work.
NOTE: results currently go to an out-gridview window. Obviously, change that to what you need, or:
* remove the write-host
* remove the ogv line
* wrap the entire DO WHILE's output into a variable, then return that at the end. BE CAREFUL DOING THIS. I tried it with a decent-sized results set (30k) and got a weird error back:
Get-CWLFilteredLogEvent : Error unmarshalling response back from AWS. Request ID:
import-module awspowershell
#You should have already saved your credentials locally using:
# set-awscredentials -accesskey myaccesskey -secretkey mysecretkey -StoreAs MyDeveloper
$credential = get-awscredentials -ProfileName MyDeveloper
# Get the current timestamp
$Current_Unix_Timestamp = [Math]::Floor([decimal](Get-Date(Get-Date).ToUniversalTime() -uformat "%s")) * 1000
# Get the timestamp to start at.
$hours_to_look_back = 48
$Past_Unix_Timestamp = [Math]::Floor([decimal](Get-Date((Get-Date).AddHours(-1*$hours_to_look_back)).ToUniversalTime() -uformat "%s")) * 1000
# Get a list of all of our CloudWatch log groups
$All_CW_RDS_Logs = get-CWLLogGroup -LogGroupNamePrefix "/aws/rds" | where {$_.LogGroupName -like "*/myproject*"}
foreach ($Specific_CW_Log_Group in $All_CW_RDS_Logs) {
    write-host $Specific_CW_Log_Group.LogGroupName
    $CW_NextToken = $null # reset for each log group. Required for NextToken to work
    #Using $null for NextToken means we can use the same pattern as for regular logs
    #NOTE: this hangs if the FilterPattern is invalid. Which apparently includes commas, backslashes, etc.
    DO {
        $CW_RDS =
        Get-CWLFilteredLogEvent `
            -LogGroupName $Specific_CW_Log_Group.LogGroupName `
            -StartTime $Past_Unix_Timestamp `
            -EndTime $Current_Unix_Timestamp `
            -FilterPattern "QUERY" `
            -Limit 500 `
            -NextToken $CW_NextToken
        #FilterPattern can't use commas. ",QUERY," should show all create/truncate/drop, but we must use QUERY instead
        #unlike the regular logs, this one returns a normal powershell dataset - nothing left to do
        $CW_RDS.Events | ogv
        $CW_NextToken = $CW_RDS.NextToken
    } WHILE ($CW_NextToken -ne $null)
}
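And if you do want everything in one variable instead of a pile of out-gridview windows, here's a sketch of the inner loop that accumulates just the events page by page (the same shape the audit-log script above uses); no promises it avoids the unmarshalling error, but smaller responses can't hurt:
$AllEvents = @()
$CW_NextToken = $null
DO {
    $page = Get-CWLFilteredLogEvent -LogGroupName $Specific_CW_Log_Group.LogGroupName `
        -StartTime $Past_Unix_Timestamp -EndTime $Current_Unix_Timestamp `
        -FilterPattern "QUERY" -Limit 500 -NextToken $CW_NextToken
    $AllEvents += $page.Events
    $CW_NextToken = $page.NextToken
} WHILE ($CW_NextToken -ne $null)
$AllEvents | Out-GridView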
Monday, February 12, 2018
[Replication] The row was not found at the Subscriber when applying the replicated (null) command for Table '(null)' with Primary Key(s): (null)
Had this come in courtesy of one of my replication monitors (see other posts).
DESCRIPTION: Replication-Replication
Distribution Subsystem: agent my-ser-ver-here-main-Main_big-my-other-server-here-35
failed. The row was not found at the Subscriber when applying the replicated
(null) command for Table '(null)' with Primary Key(s): (null)
Yes, it actually says "(null)".
What was it? We run multi-tier replication. That way each datacenter only gets one stream of changes, and we don't run into problems due to desktop heap (see other posts, again, on this blog).
After running a trace, we finally figured it out - the second tier had a foreign key where it shouldn't have had one. The FKs are handled on the "front" server, the one that everything replicates from. Because the FK was there, the rows were being deleted out of order, so it had a fit and gave us "(null)" instead of a valid name.
Note that this is only a problem with multi-tier replication; normal repl handles it, but it somehow choked on this.
In other news: I've been doing a lot of AWS Aurora I need to post about here, including both MySql SQL, and Powershell, but I don't have enough usable yet. Coming Soon!
Tuesday, January 16, 2018
[WMI] permissions for Win32_PerfFormattedData_*
Note: if you're trying to use WMI to query any of the performance counters, you don't just need the normal WMI permissions; you also need the account running the query to be added to "Performance Monitor Users" (or, if using a GPO, "BUILTIN\Performance Monitor Users"). In my case, when I ran the query I'd get nothing. No errors, no data, nothing. Adding the account to that group (in addition to all the other permissions like WMI) fixed it.
In this case, I found it (and more) at: http://vniklas.djungeln.se/2012/08/22/set-up-non-admin-account-to-access-wmi-and-performance-data-remotely-with-powershell/
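For the grant itself, a hedged sketch: run on the target box, with a placeholder account name (Add-LocalGroupMember assumes the LocalAccounts module on newer Windows; build $cred as in the Credentials post below):
# add the query account to the perf-counter group, then verify remotely
Add-LocalGroupMember -Group "Performance Monitor Users" -Member "MYDOMAIN\myWMIacct"
Get-WmiObject -ComputerName myservername -Namespace "root\cimv2" `
    -Class Win32_PerfFormattedData_PerfOS_Processor -Credential $cred |
    select-object Name, PercentProcessorTime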
Friday, January 12, 2018
[Powershell] Credentials/Credential/Credentialing
Had to dig through email to find this, so obviously it needs to go into my aux brain - this blog.
I had a special account set up with WMI permissions. But that's not the same as the SQL Server account running the script, so the script has to supply credentials itself.
- To get credentialing working the first time:
- Log onto the box as the service account for SQL Server
- Run this in Powershell: read-host -assecurestring | convertfrom-securestring | out-file c:\scripts\securestring_theADaccountName.txt # it will ask for the password – provide it.
- (That will read the password, then save it, encrypted and readable only by that logged-in user, to the named file. Name it so that you know which account you have to log in as in order for it to work.)
- To use from then on:
$username = "mydomain\myWMIacct"
$password = cat c:\scripts\securestring_theADaccountName.txt | convertto-securestring
$cred = new-object -typename System.Management.Automation.PSCredential `
    -argumentlist $username, $password
Get-WmiObject -ComputerName myservername -Class Win32_operatingsystem -Namespace "root\cimv2" -Credential $cred
Or, if you don't mind the password sitting in the script in plain text, the one-liner:
$credential = New-Object System.Management.Automation.PsCredential("yourdomain\youruser", (ConvertTo-SecureString "yourpassword" -AsPlainText -Force))
Tuesday, January 9, 2018
[IMOLTP] Detaching/Attaching from one server to another.
So, I was trying to get an in-memory database moved from one server to another (long story, involving IMOLTP melting down on me and resulting in a 2 terabyte log file).
I thought it'd be simple: copy the files over, along with the Filestream folders, and attach. Nope! Various errors, including "5(Access is denied.)".
Thanks to Simon
Test:
Create DATABASE demo
go
ALTER DATABASE demo ADD FILEGROUP demo_mod CONTAINS MEMORY_OPTIMIZED_DATA
go
ALTER DATABASE demo ADD FILE (name='demo_mod1', filename='M:\temp\demo_mod1') TO FILEGROUP demo_mod
go
Use Demo
go
CREATE TABLE dbo.ShoppingCart (
ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
CreatedDate DATETIME2 NOT NULL,
TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED=ON)
GO
Now take offline, copy the Data file, Log file, and the in-memory folder (demo_mod1).
On the target server, right-click each copied file and the in-memory folder, go to Security, add the SQL Server service account, and grant FULL CONTROL.
Now, issue this to mount it. I just tested, and the commented-out line doesn't APPEAR to be needed, but that's because my new location matched; if all the locations match, it will grab the demo_mod1 location, probably from the MDF. So in theory, if all your drive letters match, you only need the two FILENAMEs.
CREATE DATABASE [demo] ON
( FILENAME = N'M:\Data\demo.mdf' ),
( FILENAME = N'L:\Logs\demo_log.ldf' ),
--FILEGROUP [demo_mod] CONTAINS FILESTREAM DEFAULT
(NAME = N'demo_mod1',FILENAME='M:\Data\demo_mod1') -- this is actually a folder...
FOR ATTACH