Friday, April 19, 2013

[WAT] bizarreness with ISNULL

A coworker came across this, and even found an article by Aaron Bertrand about it.



But the practical takeaway we had today, from Aaron's code - ISNULL can cause truncation:

DECLARE @c5 VARCHAR(5);
SELECT 'COALESCE', COALESCE(@c5, 'longer name')  -- returns 'longer name'
UNION ALL
SELECT 'ISNULL',   ISNULL(@c5,   'longer name'); -- returns 'longe' - ISNULL types its result after the FIRST argument (VARCHAR(5)), so the replacement gets truncated
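If you still want ISNULL's behavior but not the truncation, one workaround (a sketch of my own, not from Aaron's article) is to widen the first argument before ISNULL sees it:

```sql
DECLARE @c5 VARCHAR(5);
-- ISNULL takes the data type of its first argument, so widen it explicitly
-- to at least the length of the replacement value:
SELECT ISNULL(CONVERT(VARCHAR(11), @c5), 'longer name');  -- 'longer name', no truncation
```

COALESCE avoids the problem entirely because it uses data type precedence across all its arguments, which is usually the safer default.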



Wednesday, April 17, 2013

[Extended Events] What servers are connecting to my SQL Server 2012 box?

(code is complete, soup-to-nuts, but only runs on 2012; I'll modify it for 2008/R2 in another post.)

(update 2013/04/18 Jonathan Kehayias helpfully provided a way to do it by adding the existing_connection event - but there appears to be a bug with the histogram, and it doesn't return the right data.  Connect item if you want to vote on it: https://connect.microsoft.com/SQLServer/feedback/details/785042/extended-events-histogram-does-not-properly-save-existing-connection-results)


First of all, MANY thanks to Jonathan Kehayias for all the XE wisdom and code - I modified the snot out of his http://www.sqlskills.com/blogs/jonathan/tracking-sql-server-database-usage/ lock-tracking-using-a-histogram to build this...


Say I want to figure out what servers are connecting to my SQL Server.  Traditionally, you'd either poll sysprocesses periodically or run a trace.  Both have issues, though, especially on busy servers.  So, how can we get that?  Extended Events.

The easiest way is to create a "bucketizer" (now called the histogram) and watch for new logins, saving the servername and incrementing a counter each time one occurs.  As a bonus, we can filter it - exclude particular applications, servers, etc. (see the commented-out code).  And since it's a pretty basic XE session, overhead is very low.  The one caveat is that the histogram can only bucket on one piece of information - here, servername.  I'd LOVE to get the App Name & DB Name as well, but one column is all you get.  Downer.  : - (

Another caveat: servers that stay connected and don't open new connections won't show up here.  The obvious example is replication, but maybe your app holds its connection for a long time (we've seen that with third-party tools).  The easy answer for those is SP_WHOISACTIVE.
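For those long-lived connections, a point-in-time snapshot of the sessions DMV fills the gap (a minimal sketch; sp_whoisactive gives you far more detail):

```sql
-- Point-in-time look at who is ALREADY connected;
-- the histogram session only sees NEW logins.
SELECT s.host_name, s.program_name, s.login_name, COUNT(*) AS connection_count
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY s.host_name, s.program_name, s.login_name
ORDER BY connection_count DESC;
```

Run it a few times during the day and you'll catch most of the stragglers the login event misses.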

Any questions?


-- If the Event Session Exists, drop it first
IF EXISTS (SELECT 1
             FROM sys.server_event_sessions
            WHERE name = 'UnknownAppHosts')
    DROP EVENT SESSION UnknownAppHosts
    ON SERVER;
-- Create the Event Session
CREATE EVENT SESSION UnknownAppHosts
ON SERVER
ADD EVENT sqlserver.login(
    ACTION(sqlserver.client_hostname)
    --add filters on the login here. This would only watch anonymous .Net apps
    -- WHERE ([sqlserver].[client_app_name] LIKE '.Net SQLClient Data Provider%')
    WHERE ([sqlserver].client_hostname NOT LIKE '%-MQ-%')
)
ADD TARGET package0.histogram
(   SET slots = 50, -- Adjust based on expected number of "buckets" needed
    filtering_event_name = 'sqlserver.login',
    source_type = 1,
    source = 'sqlserver.client_hostname'
)
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
GO
-- Start the Event Session
ALTER EVENT SESSION UnknownAppHosts
ON SERVER
STATE = START;
GO
-- Parse the session data to determine the hosts connecting.
-- When you stop the session, the records go AWAY, so make sure to run this before that!
SELECT  slot.value('./@count', 'int') AS [Count],
        slot.query('./value').value('.', 'varchar(20)') AS [Client_Hostname]
FROM
(
    SELECT CAST(target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets AS t
        INNER JOIN sys.dm_xe_sessions AS s
            ON t.event_session_address = s.address
    WHERE s.name = 'UnknownAppHosts'
      AND t.target_name = 'Histogram'
) AS tgt(target_data)
CROSS APPLY target_data.nodes('/HistogramTarget/Slot') AS bucket(slot)
ORDER BY slot.value('./@count', 'int') DESC;
GO
----------------------------------
--WHEN AND ONLY WHEN YOU'RE DONE--
----------------------------------
-- Stop the Event Session
ALTER EVENT SESSION UnknownAppHosts
ON SERVER
STATE = STOP;
GO
-- Drop the Event Session
DROP EVENT SESSION UnknownAppHosts
ON SERVER;

Tuesday, April 16, 2013

[Servers] Extensible (Powershell) Repository - just add scripts!

(Version 1.1. I so should've posted this earlier - looks like everybody has been releasing their version of this idea over the past week).


Over the past few months, we've been working on knowing more about our servers.  Besides sp_blitz, there's a lot of data we want: DMVs, WMI info, etc.  So a coworker & I had a challenge going - which would be a faster way to query our 80+ production servers, SSIS or PowerShell?  Well, he had it running faster, but then I asked him to up the number of simultaneous threads, and that was a chore for him to change.  For me, it was altering one line of code in a text file.  And it seemed easier for me to add more scripts.

So I decided to make mine, while not (necessarily) the fastest, the easiest to use.  It uses Powershell and some PS scripts originally written by Chad Miller, Arnoud Jansveld, and several other people smarter than me.  I just put it all together.

TL;DR: Drop a query in a folder, and it runs several threads in parallel against all your servers, saving the details out to a table, overwriting the old data if you want.

Pros:

  • Trivial install - 6 scripts in a folder, a table with a list of servers, 1 or 2 empty subfolders, and 1 job.
  • Low overhead - a sample (simple) script ran on 80+ servers in under 6 seconds.
  • Easy to add a new collection - just drop a SQL/PS script in the folder with the name for the table.
  • Need more servers done faster?  Up the threads.
  • Skips servers it can't connect to.
  • No powershell extensions needed - just those 6 scripts.  PS3 below, modify the one line for PS2.


Cons:

  • Does not deploy code; just runs scripts. (so sp_blitz, for instance, needs to be installed separately)
  • No failure info. That's on my to-do list, but it's not nearly as easy as I'd like.
  • Datatypes for new tables need to be tweaked if the script creates the table - text fields default to varchar(1000)



To install:
  • Create a table with a list of servers to monitor.
  • Create 3 folders: c:\sql_tools (or wherever; change the actual script's foldername), and underneath it one for scripts where you want to keep the data, and one where you don't.
  • Grab add-sqltable.ps1, write-datatable.ps1, and invoke-sqlcmd2.ps1 from Hey Scripting Guy or poshcode. Put in c:\sql_tools.
  • Save the below script to a file called "repository_extensible.ps1", in c:\sql_tools.
  • Create a job with 2 job steps, both as Type: "Operating System (CmdExec)":
    • powershell "& c:\sql_tools\repository_extensible.ps1 c:\sql_tools\repository_scripts_keep 0"
    • powershell "& c:\sql_tools\repository_extensible.ps1 c:\sql_tools\repository_scripts_delete 1"
    • The first job step runs scripts whose records accumulate in the table each run.  The second deletes each server's old records, replacing them with the new ones.
  • Toss a couple sample scripts in the folders 
  • Run the job once.
  • Go into your repository database and modify the table; by default strings automatically become varchar(1000).  Yes, it's not great, but will work for now.
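For reference, here's a minimal sketch of the server-list table (the column names Server and Connect come from the SELECT inside repository_extensible.ps1 below; the server names are just placeholders):

```sql
-- Minimal server-list table; the PowerShell script runs:
--   SELECT server FROM dbo.ServerList WHERE Connect = 1 ORDER BY server
CREATE TABLE dbo.ServerList
(
    Server  sysname NOT NULL PRIMARY KEY,  -- instance name to connect to
    Connect bit     NOT NULL DEFAULT 1     -- 1 = include in collection runs
);
INSERT INTO dbo.ServerList (Server, Connect)
VALUES ('SQLPROD01', 1), ('SQLPROD02', 1);
```

Flip Connect to 0 to skip a server without losing its row.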

#######################
<#
Purpose: Take a list of servers from a table. Run a series of scripts in a folder against each server, saving to a table.
Requirements: invoke-sqlcmd2, split-job (1.2.1), write-datatable, and add-sqltable.
See link below for those scripts; split-job is from Poshcode
Installation:
Create a folder called c:\sql_tools.
Put each of the 4 scripts in it.
Create a folder called c:\sql_tools\repository_scripts.
Put whatever SQL code you want to run against each server in the repository_scripts folder.
Each resulting table will also have two fields: ServerName, and Insert_Datetime
Change the sql_repository to be whatever your repository server is called.
It will create a table for each script, named whatever the script is. If the table exists it should skip it, but
even if it throws an error because it exists, it worked.
For new scripts: if you don't have a table already set up for a script, run this then go back and shrink fields.
Since we're using powershell "string" data types, the servername gets set as a varchar(1000).
Alternatively, create a proper table to begin with.  Trying to make this simple.
NOTE: ServerName & Insert_Datetime are added by this code; don't include in yours, or don't name them that.
Invoking: currently two parameters: script path, and do you delete old records or not (default, aka 0 is to not delete)
powershell "& c:\sql_tools\repository_extensible.ps1" c:\sql_tools\repository_scripts 0
Thanks: Chad Miller and Hey Scripting Guy, along with Arnoud Jansveld (and the people on PoshCode) for split-job
http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/01/use-powershell-to-collect-server-data-and-write-to-sql.aspx
1.0 - mdb - initial release.
1.1 - mdb - Powershell script support added, tweaked so that it runs in PS3
1.11 - mdb 2013/04/16 - dumb bug that only ran last script instead of each script.
1.2 - mdb 2013/05/22 - adding parameter to do ALL connectable servers.
1.21 - mdb 2015/03/06 - writing out the current time when a script runs, so we know how long everything is taking.
code found at http://techibee.com/powershell/powershell-create-a-variable-to-always-show-current-time/1132
1.22 - mdb 2015/03/09 - adding some TRY CATCH, though this means you need to add a job to parse the log, since
all the scripts will continue to run even if one breaks.
Future enhancements: A way to better parse the filename so that it knows whether or not to delete, rather than a parameter.
???
#>
clear
. C:\sql_tools\invoke-sqlcmd2.ps1;
. C:\sql_tools\split-job.ps1;
. C:\sql_tools\write-datatable.ps1;
. C:\sql_tools\add-sqltable.ps1;
$script_path=$args[0] #Where the scripts sit that you want run. Done this way so you can have two folders, 1 that deletes, 1 that doesn't.
$do_we_delete=$args[1] #1 means yes, delete old. 0 means no, keep old
$include_non_prod=$args[2] #1 means all servers, anything else (including blank) means prod-only
$server_repository = 'sql_repo'
$database_repository = 'EPR_repo'
#get list of servers that meet our criteria; our code will run against these
if ($include_non_prod -eq 1)
{
$serverlist = invoke-sqlcmd2 -serverinstance $server_repository -database $database_repository `
-query "SELECT server FROM dbo.ServerList WHERE Connect = 1 order by server"
}
else
{
#NOTE: this branch is meant to be prod-only, but currently runs the same query as above;
# add your production-only filter to the WHERE clause here.
$serverlist = invoke-sqlcmd2 -serverinstance $server_repository -database $database_repository `
-query "SELECT server FROM dbo.ServerList WHERE Connect = 1 order by server"
}
#list out the servers we'll be doing; this way the errorlog gets the list.
$serverlist
#now use Split-Job (from POSHcode) to split into a number of Runspaces ("hosted runspaces")
#variables need to be inside the split-job due to scope issues, though there's a parameter (see end) that will import them.
$serverlist|Split-Job {%{
#because the scope is separate in here, you need to re-dot-source and reinit your parameters.
. C:\sql_tools\invoke-sqlcmd2.ps1;
. C:\sql_tools\write-datatable.ps1;
. C:\sql_tools\add-sqltable.ps1;
. C:\sql_tools\out-datatable.ps1;
$server2 = $_.server
#adding a variable so we can easily write out the time in the log
$global:currenttime= Set-PSBreakpoint -Variable currenttime -Mode Read -Action { $global:currenttime= Get-Date }
write-host $server2 #for the logs so we know where we are thus far. Not 100% since we run in parallel.
#Run each SQL script in our folder
if (!$script_path.EndsWith("\"))
{
$script_path = $script_path + "\"
}
#get list of files
$Dir = get-childitem $script_path -recurse
$List = $Dir | where {$_.extension -eq ".sql" -or $_.extension -eq ".ps1"}
#Now run each script
foreach ($file in $List)
{
$tablename = $file.BaseName
write-host $tablename ' ' $currenttime
#Doing all of the script running (including table create) as part of the try..catch. But it means that
# you need to scrape the logfile to make sure everything ran; it won't choke if a file is bad.
try
{
#SQL Scripts
# Nothing special needed; uses invoke-sqlcmd2 to run same script on each server
if ($file.extension -eq ".sql")
{
#$server2$tablename #list the scripts being run
#run the actual query
$quer = invoke-sqlcmd2 -serverinstance $server2 -InputFile $script_path$file -As 'DataTable'
}
#Powershell Scripts
# In the script, you need to tell it which computer to query, like: -computername $args[0]
if ($file.extension -eq ".ps1")
{
$command = ". '$script_path$file'" + " " + $server2
$command
$quer = invoke-expression $command | out-datatable
}
#add the ServerName to the results and populate it.
$quer.Columns.Add("ServerName") |out-null
$quer | %{$_.ServerName = $server2}
#add the Insert_Datetime field and populate it.
$quer.columns.add("Insert_Datetime",[DateTime]) |out-null
$quer | %{$_.Insert_Datetime = [datetime](Get-Date)} #Better version by Nick on Stackoverflow
#Now that we have our table, complete with ServerName, create the table, delete existing rows, and add new rows
#create the table if it doesn't exist (can throw error the first time per script, because multiple servers hit it at once)
$tablecount = invoke-sqlcmd2 -serverinstance $server_repository -database $database_repository `
-query "select count(*) as tablecount from information_schema.tables where table_name = '$tablename'"
if ($tablecount.tablecount -eq 0)
{
add-sqltable -serverinstance $server_repository -Database $database_repository -Tablename $tablename -DataTable $quer
}
#We do this here so that, if a server goes away or becomes unreachable, we still have the last set of data on it.
if ($do_we_delete -eq 1)
{
invoke-sqlcmd2 -serverinstance $server_repository -database $database_repository -query "delete from $tablename where servername = '$server2'"
}
Write-DataTable -ServerInstance $server_repository -Database $database_repository -TableName $tablename -Data $quer
}
catch
{
#throw $_.Exception.Message #doesn't remove enough crap
Write-Error $_.Exception.Message #-ErrorAction Stop I dont want to stop it, just throw an error.
}
}
}} -MaxDuration 300 -MaxPipelines 6 -Variable script_path, server_repository, do_we_delete, database_repository -NoProgress
#the variable above allows the split-job to read from outside the scope and put it inside

Monday, April 8, 2013

[Replication Monitor] YA monitor - check delay times with last_distsync and sp_replmonitorhelpsubscription

Got hosed because I accidentally added a clause to our "how backed up are we" check from http://thebakingdba.blogspot.com/2010/10/replication-better-alternative-to.html , set it up wrong, and wound up filtering stuff I didn't mean to.

Lessons learned, and got me thinking ... One thing I haven't had on my replication monitor was a reliable way of detecting if any of my subscriptions are expired or about to expire.  The dread 72 hours.  So, several hours later, a profile trace and a question on StackOverflow, and I got a way to do it.

Basically, you're running the system SP sp_replmonitorhelpsubscription, once for each server that the distributor handles.  You then filter that out (looking for warnings and ignoring recent records) and send an email if there's anything left.

One downside: because we're trying to avoid the NESTED EXEC issue (can't have an INSERT INTO EXEC where the code you're running has an INSERT INTO EXEC), the most reliable way to avoid it is by using OPENROWSET (thanks to Paul Ibison for that).  However, that requires that Ad Hoc Distributed Queries be enabled via sp_configure.  Yes, it can be a security hole.

An alternative way to do it, according to replication expert Hilary Cotter, is to run the SP twice, and only INSERT INTO on the second EXEC.  In my testing it's not as reliable (fails occasionally, and doesn't lend itself to automation quite as well), but it may be a better option for you.
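For reference, a rough sketch of that run-it-twice approach (this is my reading of Hilary's suggestion, not his exact code; the publisher name is a placeholder):

```sql
-- Sketch of the run-twice workaround: the first EXEC lets the proc do its own
-- internal INSERT...EXEC work; the second call can then be captured without
-- hitting the "INSERT EXEC statement cannot be nested" error.
EXEC distribution..sp_replmonitorhelpsubscription
     @publisher = 'YOURPUBLISHER', @publication_type = 0;

INSERT INTO #tmp_subscriptiondata
EXEC distribution..sp_replmonitorhelpsubscription
     @publisher = 'YOURPUBLISHER', @publication_type = 0;
```

As noted above, this avoids the sp_configure change that OPENROWSET needs, at the cost of occasional flakiness.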

Hope this helps (and hopefully the blogspot sourcecode formatter I'm using is good)!


--2013/06/17 1.10 mdb thebakingdba.blogspot.com
-- added filter for "last_distsync is null", which should only be running or never-run.
DECLARE @min INT, @max INT, @sql NVARCHAR(4000)
DECLARE @repl_server_list TABLE(id INT IDENTITY, srvname sysname)
SET NOCOUNT ON
--build a stripped down temp table; the OPENROWSET allows us to skip fields we don't care about.
IF OBJECT_ID('tempdb..#tmp_subscriptiondata') IS NOT NULL
DROP TABLE #tmp_subscriptiondata
create table #tmp_subscriptiondata (
[status] int null,
warning int null ,
subscriber sysname null ,
subscriber_db sysname null ,
publisher_db sysname null ,
publication sysname null ,
publication_type int null ,
subtype int null ,
latency int null ,
latencythreshold int null ,
agentnotrunning int null ,
agentnotrunningthreshold int null ,
timetoexpiration int null ,
expirationthreshold int null ,
last_distsync datetime null ,
distribution_agentname sysname null ,
mergeagentname sysname null ,
mergesubscriptionfriendlyname sysname null ,
mergeagentlocation sysname null ,
mergeconnectiontype int null ,
mergePerformance int null ,
mergerunspeed float null ,
mergerunduration int null ,
monitorranking int null ,
distributionagentjobid binary(30) null ,
mergeagentjobid binary(30) null ,
distributionagentid int null ,
distributionagentprofileid int null ,
mergeagentid int null ,
mergeagentprofileid int null ,
logreaderagentname sysname null
)
--list every server that our current server is handling distribution duties for.
-- we do this since you can tell a different server to be the distributor.
INSERT INTO @repl_server_list
SELECT DISTINCT srvname --b.srvname,a.publisher_db,a.publication
FROM distribution.dbo.MSpublications a, master.dbo.sysservers b
WHERE a.publisher_id=b.srvid
--------------------------------
--Get list of all replications--
--------------------------------
SELECT @min = MIN(id), @max = MAX(id) FROM @repl_server_list
WHILE @min <= @max
BEGIN
--Transactional Replication
select @sql = 'SELECT * FROM OPENROWSET(''SQLNCLI'', ''Server=' + @@servername + ';Trusted_Connection=yes'','
+ ' ''set fmtonly off; exec distribution..sp_replmonitorhelpsubscription @Publisher="' + srvname + '",@publication_type=0'')a'
FROM @repl_server_list WHERE id = @min
Insert Into #tmp_subscriptiondata
EXEC sp_executesql @sql
--Snapshot Replication
select @sql = 'SELECT * FROM OPENROWSET(''SQLNCLI'', ''Server=' + @@servername + ';Trusted_Connection=yes'','
+ ' ''set fmtonly off; exec distribution..sp_replmonitorhelpsubscription @Publisher="' + srvname + '",@publication_type=1'')a'
FROM @repl_server_list WHERE id = @min
Insert Into #tmp_subscriptiondata
EXEC sp_executesql @sql
--Merge Replication
select @sql = 'SELECT * FROM OPENROWSET(''SQLNCLI'', ''Server=' + @@servername + ';Trusted_Connection=yes'','
+ ' ''set fmtonly off; exec distribution..sp_replmonitorhelpsubscription @Publisher="' + srvname + '",@publication_type=2'')a'
FROM @repl_server_list WHERE id = @min
Insert Into #tmp_subscriptiondata
EXEC sp_executesql @sql
SET @min = @min + 1
END
-------------------
--Reporting Email--
-------------------
--variables and tablevar defined here to more easily add/modify/test rules
DECLARE @tableHTML NVARCHAR(MAX)
,@MailSubject VARCHAR(100)
, @rowcount INT
DECLARE @final_error_list TABLE (
subscriber sysname null ,
subscriber_db sysname null ,
publisher_db sysname null ,
publication sysname null ,
warning int null ,
last_distsync datetime null ,
hours_delayed INT,
distribution_agentname sysname null
)
SELECT @MailSubject = '[Replication] Delays/Errors on ' + @@servername
--using an interim table so that we can query it to see how many they are;
-- we could use a CTE but then we have no easy way, short of checking the HTML length,
-- of verifying there are records that need to be emailed.
INSERT INTO @final_error_list
SELECT subscriber, subscriber_db, publisher_db, publication, warning, last_distsync,
DATEDIFF(hh,last_distsync, GETDATE()) AS Hours_Delayed,distribution_agentname
FROM #tmp_subscriptiondata WHERE warning > 0
UNION ALL
SELECT subscriber, subscriber_db, publisher_db, publication, warning, last_distsync,
DATEDIFF(hh,last_distsync, GETDATE()) AS Hours_Delayed, distribution_agentname
FROM #tmp_subscriptiondata
--rule 1 - ignore publications that are current in the last hour or currently running.
DELETE FROM @final_error_list
WHERE last_distsync > DATEADD(mi,-60, GETDATE()) OR last_distsync IS null
--rule 2 - ignore subscriptions that only run once a day, after midnight
DELETE FROM @final_error_list
WHERE (publication = 'dailypub' AND last_distsync > CONVERT(CHAR(8),GETDATE(),112))
--SELECT * FROM @final_error_list
IF (SELECT COUNT(*) FROM @final_error_list)>0
BEGIN
select @tableHTML = N'<H3>Replication Delays and Errors</H3>'
+ N'<table border="1">' + N'<tr>'
+ N'<th>Subscriber</th>'
+ N'<th> Subscriber_DB </th>'
+ N'<th> Publisher_DB </th>'
+ N'<th> Publication </th>'
+ N'<th>Warning</th>'
+ N'<th> Last_Distsync </th>'
+ N'<th> Hours</th>'
+ N'<th> Distribution_AgentName</th>'
+ N'</tr>' + CAST((SELECT td = RTRIM(LTRIM(T.Subscriber))
,''
,td = RTRIM(LTRIM(T.Subscriber_DB))
,''
,td = RTRIM(LTRIM(T.Publisher_DB))
,''
,td = RTRIM(LTRIM(T.Publication))
,''
,td = RTRIM(LTRIM(T.Warning))
,''
,td = CONVERT(VARCHAR(16), T.Last_Distsync, 120)
,''
,td = CONVERT(VARCHAR(3), T.Hours_Delayed)
,''
,td = RTRIM(LTRIM(T.Distribution_AgentName))
FROM @final_error_list T
ORDER BY T.[Warning] DESC, T.last_distsync ASC
FOR
XML PATH('tr')
,TYPE
) AS NVARCHAR(MAX)) + N'</table>';
--PRINT @tableHTML
EXEC msdb.dbo.sp_send_dbmail @profile_name = 'yourprofile', @recipients = 'dev@null.com',
@subject = @MailSubject, @body = @tableHTML, @body_format = 'HTML';
END
DROP TABLE #tmp_subscriptiondata

Friday, April 5, 2013

[tips] using OPENROWSET to avoid "An INSERT EXEC statement cannot be nested."

Found this; it looks like it's from Paul Ibison (replicationanswers.com).  In this case I _did_ need it for replication, but this could be useful whenever you run into the error "An INSERT EXEC statement cannot be nested.".

One note - depending on how the original piece of code works, you may or may not need the SET FMTONLY OFF;.  (In this case I do).  If you do, be aware that the code is run TWICE.  For this, it works fine, but keep it in mind.

USE eif_workspace
GO
create table dbo.repmonitor (
[status] int null,
warning int null ,
subscriber sysname null ,
subscriber_db sysname null ,
publisher_db sysname null ,
publication sysname null ,
publication_type int null ,
subtype int null ,
latency int null ,
latencythreshold int null ,
agentnotrunning int null ,
agentnotrunningthreshold  int null ,
timetoexpiration  int null ,
expirationthreshold  int null ,
last_distsync  datetime null ,
distribution_agentname  sysname null ,
mergeagentname  sysname null ,
mergesubscriptionfriendlyname  sysname null ,
mergeagentlocation  sysname null ,
mergeconnectiontype  int null ,
mergePerformance  int null ,
mergerunspeed float null ,
mergerunduration int null ,
monitorranking  int null ,
distributionagentjobid  binary(30) null ,
mergeagentjobid binary(30) null ,
distributionagentid  int null ,
distributionagentprofileid int null ,
mergeagentid int null ,
mergeagentprofileid int null ,
logreaderagentname sysname null
)
go

Insert Into dbo.repmonitor -- the table created above in eif_workspace

SELECT *

FROM OPENROWSET('SQLNCLI', 'Server=yourservername;Trusted_Connection=yes',  'set fmtonly off;  exec distribution..sp_replmonitorhelpsubscription @Publisher=@@servername,@publication_type=0')

Wednesday, April 3, 2013

[Extended Events] Finding new connections and saving to an asynchronous bucketizer/histogram

We're trying to get rid of our .Net SQLClient Data Provider apps.  Well, trying to get rid of the useless name.  How do we do that?  By figuring out which servers they're coming from and which databases they're hitting, and giving that to our systems folk so they can find the connection strings and add Application Name.

My first thought was EN...  ha!  No, I don't think it'll work for that.
My second thought was traces.  Better, but we'd need to grab audit_login, then parse it out, etc.
So my third thought was to use XE.  Lo and behold, it works!  Get the client host names and save them to a bucket.  Then, as clients connect, either add a new bucket or increment the count for the existing one.






-- If the Event Session Exists, drop it first
IF EXISTS (SELECT 1
FROM sys.server_event_sessions
WHERE name = 'UnknownAppHosts')
DROP EVENT SESSION UnknownAppHosts
ON SERVER;

-- Create the Event Session
CREATE EVENT SESSION UnknownAppHosts
ON SERVER
ADD EVENT sqlserver.login(
    ACTION(sqlserver.client_hostname)
WHERE ([sqlserver].[client_app_name] LIKE 'Microsoft SQL Server Management%')
)
ADD TARGET package0.histogram
( SET slots = 50, -- Adjust based on expected number of distinct client hostnames
 filtering_event_name='sqlserver.login',
 source_type=1,
 source='sqlserver.client_hostname'
)
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
GO

-- Start the Event Session
ALTER EVENT SESSION UnknownAppHosts
ON SERVER
STATE = start ;
GO

-- Parse the session data to determine the client host names connecting.
SELECT  slot.value('./@count', 'int') AS [Count] ,
        slot.query('./value').value('.', 'varchar(20)')
FROM
(
SELECT CAST(target_data AS XML) AS target_data
FROM sys.dm_xe_session_targets AS t
    INNER JOIN sys.dm_xe_sessions AS s
ON t.event_session_address = s.address
WHERE   s.name = 'UnknownAppHosts'
 AND t.target_name = 'Histogram') AS tgt(target_data)
CROSS APPLY target_data.nodes('/HistogramTarget/Slot') AS bucket(slot)
ORDER BY slot.value('./@count', 'int') DESC

GO


-- Stop the Event Session
ALTER EVENT SESSION UnknownAppHosts
ON SERVER
STATE = STOP ;
GO