Showing posts with label Azure. Show all posts

Wednesday, April 9, 2025

Did I just find SQL Injection within Azure? No, but...

I run an Extended Event (XEvent) session that collects errors with severity above 10. I've written about it elsewhere on the blog.

But one thing that's been bugging me is errors like the following in my logs, courtesy of "DMVCollector". 




And what causes it? Well, thanks to the XEvent, I have the code. I've shared it with MS and they're working on a fix, but a couple of things stand out.

1. Yup, Dynamic SQL is tough. Someone missed a quote. 

2. They're using EXEC, not sp_executesql. And they're not escaping/quoting the database name.

3. I'm the first to report this. Really? 

4. How long has this been running without actually working?


But that EXEC made me wonder if you can abuse it. The equivalent of Little Bobby Tables. There's a dependency on sys.dm_hadr_fabric_partition_states, and I simply don't know what determines whether it returns a "real" database name (rather than the GUID that they use everywhere internally). So it looks like we're safe, since they use the GUID. But still, crazy, right?


```
IF OBJECT_ID('tempdb.dbo.#memory_optimized_tables_internal_attributes ') IS NOT NULL
    DROP TABLE #memory_optimized_tables_internal_attributes

-- Create the table with the proper schema
SELECT TOP 0
    newid() as logical_database_guid,
    db_name() as logical_database_name,
    db_name() as db_name,
    object_id,
    xtp_object_id,
    type,
    type_desc
INTO
    #memory_optimized_tables_internal_attributes
FROM
    sys.memory_optimized_tables_internal_attributes

-- This is a DB-scoped DMV. Iterate over all user databases (id > 4) and collect the data.
DECLARE @name sysname;
DECLARE @logical_database_guid UNIQUEIDENTIFIER;
DECLARE @logical_database_name sysname;
DECLARE @database_id int;
DECLARE @SQLString nvarchar(max);
DECLARE DbCursor CURSOR LOCAL FORWARD_ONLY
FOR
SELECT
    PS.logical_database_id [logical_database_guid], D.name [name], D.database_id [database_id], DB_NAME(D.database_id) [logical_database_name]
FROM
    sys.databases D join sys.dm_hadr_fabric_partition_states PS ON PS.database_name = D.name

OPEN DbCursor
FETCH NEXT FROM DbCursor INTO @logical_database_guid, @name, @database_id, @logical_database_name

WHILE @@fetch_status = 0
BEGIN
    SET @SQLString = N'INSERT INTO #memory_optimized_tables_internal_attributes
        SELECT '''
        + convert(nvarchar(100), @logical_database_guid) + ''', '''
        + QUOTENAME(@logical_database_name, '''') + ', '
        + QUOTENAME(@name, '''') + ',
        object_id,
        xtp_object_id,
        type,
        type_desc
    FROM '
        + QUOTENAME(@name) + '.sys.memory_optimized_tables_internal_attributes'

    BEGIN TRY
        EXEC (@SQLString)
    END TRY
    BEGIN CATCH
        -- Swallow the exception.
    END CATCH

    -- Move to the next database.
    FETCH NEXT FROM DbCursor INTO @logical_database_guid, @name, @database_id, @logical_database_name
END

CLOSE DbCursor
DEALLOCATE DbCursor
```
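For what it's worth, a safer shape for that dynamic statement (my sketch, not Microsoft's actual fix) would pass the scalar values as sp_executesql parameters and only concatenate the one thing that can't be parameterized, the database name, wrapped in QUOTENAME:

```
-- Identifiers can't be parameterized, so QUOTENAME the database name;
-- everything else goes in as real parameters instead of string concatenation.
SET @SQLString = N'INSERT INTO #memory_optimized_tables_internal_attributes
    SELECT @guid, @logical_name, @db, object_id, xtp_object_id, type, type_desc
    FROM ' + QUOTENAME(@name) + N'.sys.memory_optimized_tables_internal_attributes';

EXEC sp_executesql
    @SQLString,
    N'@guid uniqueidentifier, @logical_name sysname, @db sysname',
    @guid         = @logical_database_guid,
    @logical_name = @logical_database_name,
    @db           = @name;
```

No quotes to miss, and a database with a hostile name just gets bracketed by QUOTENAME instead of spliced into the statement.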

Wednesday, February 26, 2025

Azure - CETAS with OPENQUERY works!

 Just putting it here since I'm thrilled this weird way of doing it worked. 

I was trying to use a linked server from a Managed Instance to an Azure SQL DB, so that I could CETAS (Create External Table As Select) from my Azure SQL DB to blob storage (since the only way to do it from Azure SQL DB itself is either ADF or some sort of preview feature). It didn't work. The closest I got was XML errors on distributed queries. Then I switched to OPENQUERY. Woot! And I even got the xml field copied over!




```
CREATE EXTERNAL TABLE mytablename
WITH (
    LOCATION = 'capture/20250226/01',
    DATA_SOURCE = [my_blob_data_source],
    FILE_FORMAT = CETAS_ParquetSN_FileFormat)
AS
SELECT fielda, fieldb, fieldc
FROM OPENQUERY(my_ro_linked_server, 'SELECT fielda, fieldb, convert(nvarchar(max), fieldc) as fieldc FROM capture.dbo.mytable')
```




Thursday, February 13, 2025

Resizing operations - getting notifications on Managed Instance & regular SQL DB

How to monitor the progress of a resizing operation (see the previous post for Elastic Pool changes).

Managed Instance:

  •  Get-AzSqlInstanceOperation -ResourceGroupName "myrgnamehere" -ManagedInstanceName "myservernamehere"

Azure SQL Database (not elastic pool):

  •  Get-AzSqlDatabaseActivity -ResourceGroupName "myrgnamehere" -ServerName "myservernamehere" -DatabaseName "dbnamehere"

Note that instead of IN_PROGRESS, for both of these the state is "InProgress". Sloppy, Microsoft, sloppy.

Oh, but at least they start reporting info shortly after it starts!


```
$activity = (the above)   # i.e. the Get-AzSqlInstanceOperation / Get-AzSqlDatabaseActivity call
$latest_activity = $activity | sort -Property StartTime -Descending | select -First 1 # unsure if I need this, but what the heck, still works
if ($latest_activity.State -ne "InProgress" -and $latest_activity.StartTime -gt [datetime]"2025/02/13 19:00:00")
{
    # notification here
}
```

Tuesday, February 4, 2025

Azure SQL Database Elastic Pools - fun monitoring the resize.

Turns out the Cloud is, as previously surmised, other people's computers. And they want their two dollars.


So let's resize! And then get a notification when it's done!


The annoying part, though, is that for the FIRST change, it doesn't tell you ANYthing until you're done. Run the command, get back an empty prompt. Seriously it was a "WTF". 

AFTER it's done, the SECOND change you can monitor. 


This code will do both. Warning that it will Just Keep Notifying you because I am lazy and just wanted SOMETHING to work. 


And the part with write-host? Here's what it looks like in my job history:


Oh, and percent done appears to START at 50%. Like I started a resize on a >1 TB SQL DB, and within 2 minutes it said 50%. Sure, sure. *pat DB on head* who's a good little monitor?


```
Connect-AzAccount

$activity = @()
$activity = Get-AzSqlElasticPoolActivity -ResourceGroupName "yourgroupnamehere" -ServerName "yourservernamehere" -ElasticPoolName "yourelasticpoolnamehere"
$latest_activity = $activity | sort -Property StartTime -Descending | select -First 1
write-host "$($latest_activity.State) is the state of the operation started at $($latest_activity.StartTime) and pct is $($latest_activity.PercentComplete)"
# that is so that when I run it with sql server (VM), I can get back the results of the powershell script.

if ($latest_activity.State -ne "IN_PROGRESS" -and $latest_activity.StartTime -gt [datetime]"2025/02/04 21:13:00")
{
    # notification code here
}
```

Friday, October 20, 2023

Azure SQL Database - ports you didn't realize you needed for your firewall request.

 Azure is fun. FUN, I say!


Ports: 

1433 - standard SQL port

11000-11999 - ports if you're using Redirect instead of Proxy

14000-14999 (!) - ports for the DAC (Dedicated Admin Connection)
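If you want to sanity-check which port your current connection actually landed on, this should work from the database side (assuming you can see sys.dm_exec_connections):

```
-- Show the server-side address/port the current session connected to
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

Under Redirect you should see something in the 11000-11999 range; under Proxy, 1433.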

Thursday, April 27, 2023

Azure Managed Instance to Parquet using CETAS and t-sql

And we're back! Not that anybody noticed.  'Scool. IP agreements and the like. 


I'll make it brief since I logged on at 11pm to get this working.

TL;DR - the below lines will allow you to query a table on your MI, creating Parquet files in Azure blob storage. And you can query it! Next up is partitioning over time, etc, etc. But this is freaking fantastic. I have a python script I wrote that does it, but it's nowhere near as nice/easy as this.

Why do you care? Because it's a fantastically easy way to archive older data to blob storage, and I suspect (need to test) that if you do it right, you can then have it go to cool/archive storage via a lifecycle setup, so that if you need it much later, you can. Parquet is a columnar compressed format that can then feed other things like Snowflake, Hadoop, Redshift Spectrum, AWS Athena, etc. For me it's a platform-agnostic, portable, highly-compressed data format that I can use to better archive older data. 

CETAS: Create External Table As Select


1) Walking the dog and reading my RSS feeds on Feedly, I see this:
https://azure.microsoft.com/en-us/updates/azure-sql-general-availability-updates-for-lateapril-2023/

2) reading through, go to this:
SQL Managed Instance now supports CETAS (“Create External Table As Select”) functionality. This feature allows users to export data from local database tables into parquet and CSV files in Azure storage and creates external tables for easy querying of the exported data through data virtualization.

3) Run back home and log in. My poor dog was out of breath.

4) After dinking around for 30 minutes trying to get my local copy of the Az powershell module to work with the command (still isn't working, seriously, wtaf), and being unable to use Cloud Shell since it requires cloud storage and permissions are locked down too much, I finally use -UseDeviceAuthentication and run these commands on my instance:

```
# Enable ServerConfigurationOption with name "allowPolybaseExport"
Set-AzSqlServerConfigurationOption -ResourceGroupName "myresource_group_name" -InstanceName "myManagedInstanceName" `
    -Name "allowPolybaseExport" -Value 1
```


5) Verify it works.

SELECT [name], [value] FROM sys.configurations WHERE name ='allow polybase export';

Returns 1!


6) now, I use the following t-sql, pretty much verbatim from their example and a SAS token. The only magic I've added is the 10 minutes on how to use a SAS token. 

```
USE my
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password would go here';
GO

CREATE DATABASE SCOPED CREDENTIAL [CETASCredential]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = 'sp=rwl&st=2023-04-27T04:37:23Z&se=2023-05-26T12:37:23Z&spr=https&sv=2021-12-02&sr=c&sig=stringofstuffremovedhahahahahthisisfromthesastokenA%3D'; -- Removing leading '?'
GO

CREATE EXTERNAL DATA SOURCE [CETASExternalDataSource]
WITH (
    LOCATION = 'abs://mycontainer@mystorageacct.blob.core.windows.net',
    CREDENTIAL = [CETASCredential] );
GO

CREATE EXTERNAL FILE FORMAT [CETASFileFormat]
WITH (
    FORMAT_TYPE = PARQUET,
    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
    );
GO

-- Count how many rows we plan to offload
SELECT COUNT(*) FROM my.dbo.table

CREATE EXTERNAL TABLE mytable_polybase_example
WITH (
    LOCATION = 'I/get/multiple/paths/for/multiple/tables!/',
    DATA_SOURCE = [CETASExternalDataSource],
    FILE_FORMAT = [CETASFileFormat])
AS
    SELECT
        *
    FROM
        my.dbo.table

-- you can query the newly created external table
SELECT COUNT(*) FROM mytable_polybase_example;
```


Friday, October 28, 2022

Azure SQL Database - something like sp_send_dbmail

 I needed the ability to send emails from Azure SQL DB, which doesn't support it. The long-term goal is to use LogicApps or SendGrid or something like that, but while I wait for permissions and the like, let me share this instead. It acts like sp_send_dbmail, and will even run queries in the database (or other databases on that server!) and send them from your on-prem server.

This is based on others' work. They had the awesome parts; my parts are the crappy glue making it work. : )


The emailqueue table looks like this:

```
CREATE TABLE [dbo].[EmailQueue](
    [Id] [INT] IDENTITY(1,1) NOT NULL,
    [Recipients] [VARCHAR](250) NOT NULL,
    [Cc_recipients] [VARCHAR](250) NULL,
    [Email_Subject] [VARCHAR](250) NOT NULL,
    [Email_body] [VARCHAR](MAX) NULL,
    [Email_body_format] [VARCHAR](10) NULL,
    [Query] [NVARCHAR](MAX) NULL,
    [profile_name] [VARCHAR](250) NULL,
    [QueueTime] [DATETIME2](7) NOT NULL,
    [SentTime] [DATETIME2](7) NULL,
    [Importance] [VARCHAR](10) NULL,
    [attach_query_result_as_file] [BIT] NULL,
    [query_attachment_filename] [VARCHAR](200) NULL,
    [query_result_header] [BIT] NULL,
    [query_result_width] [INT] NULL,
    [query_result_separator] [VARCHAR](2) NULL,
    [query_result_no_padding] [BIT] NULL,
    [bcc_recipients] [VARCHAR](255) NULL,
    [execute_query_database] [VARCHAR](255) NULL,
    [from_address] [VARCHAR](MAX) NULL,
    PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[EmailQueue] ADD CONSTRAINT [Co_Importance] DEFAULT ('Normal') FOR [Importance]
GO
ALTER TABLE [dbo].[EmailQueue] ADD DEFAULT ((0)) FOR [attach_query_result_as_file]
GO
ALTER TABLE [dbo].[EmailQueue] ADD DEFAULT ((1)) FOR [query_result_header]
GO
ALTER TABLE [dbo].[EmailQueue] ADD DEFAULT ((0)) FOR [query_result_no_padding]
GO
```


And the SP: 

```
CREATE PROCEDURE [dbo].[log_dbmail]
    @profile_name                sysname       = NULL,
    @recipients                  VARCHAR(MAX)  = NULL,
    @copy_recipients             VARCHAR(MAX)  = NULL,
    @blind_copy_recipients       VARCHAR(MAX)  = NULL,
    @subject                     NVARCHAR(255) = NULL,
    @body                        NVARCHAR(MAX) = NULL,
    @body_format                 VARCHAR(20)   = NULL,
    @importance                  VARCHAR(6)    = 'NORMAL',
    @sensitivity                 VARCHAR(12)   = 'NORMAL',
    @file_attachments            NVARCHAR(MAX) = NULL,
    @query                       NVARCHAR(MAX) = NULL,
    @execute_query_database      sysname       = NULL,
    @attach_query_result_as_file BIT           = 0,
    @query_attachment_filename   NVARCHAR(260) = NULL,
    @query_result_header         BIT           = 1,
    @query_result_width          INT           = 256,
    @query_result_separator      VARCHAR(2)    = ' ',
    @exclude_query_output        BIT           = 0,
    @append_query_error          BIT           = 0,
    @query_no_truncate           BIT           = 0,
    @query_result_no_padding     BIT           = 0,
    @mailitem_id                 INT           = NULL OUTPUT,
    @from_address                VARCHAR(MAX)  = NULL,
    @reply_to                    VARCHAR(MAX)  = NULL
AS

INSERT INTO dbo.EmailQueue
(
    Recipients,
    Cc_recipients,
    Bcc_recipients,
    Email_Subject,
    Email_body,
    Email_body_format,
    Query,
    Execute_Query_Database,
    attach_query_result_as_file,
    query_attachment_filename,
    query_result_header,
    query_result_width,
    query_result_separator,
    query_result_no_padding,
    profile_name,
    QueueTime,
    SentTime,
    Importance
)
VALUES
(
    @recipients,             -- Recipients - varchar(250)
    @copy_recipients,        -- Cc_recipients - varchar(250)
    @blind_copy_recipients,
    @subject,                -- Email_Subject - varchar(250)
    @body,                   -- Email_body - varchar(max)
    @body_format,            -- Email_body_format - varchar(10)
    @query,
    @execute_query_database,
    @attach_query_result_as_file,
    @query_attachment_filename,
    @query_result_header,
    @query_result_width,
    @query_result_separator,
    @query_result_no_padding,
    @profile_name,           -- profile_name - varchar(250)
    CAST(SYSDATETIMEOFFSET() AT TIME ZONE 'Central Standard Time' AS DATETIME), -- QueueTime - datetime2
    NULL,                    -- SentTime - datetime2
    @importance
)
```
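Calling it looks just like sp_send_dbmail. For example (the profile, addresses, and table below are made up):

```
-- Queue an email with an attached query result, to be picked up by the on-prem sender
EXEC dbo.log_dbmail
    @profile_name = 'default',
    @recipients = 'me@example.com',
    @subject = 'Nightly row counts',
    @query = N'SELECT COUNT(*) FROM dbo.mytable',
    @execute_query_database = 'mydb',
    @attach_query_result_as_file = 1,
    @query_attachment_filename = 'counts.txt';
```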



The powershell that pulls it all together: 

Wednesday, April 20, 2022

Xquery - a surprising difference between Azure SQL Database and Azure Managed Instance with Quotes

I'm moving processes into Azure.  Given that it's me, probably poorly. But I'm using this as an opportunity to improve things where possible, specifically looking at moving some processes to Azure SQL Database. 
Why? I love them for several reasons: 

  • Simpler
  • Cheaper
  • Quick to create
  • Quick to modify the instance (seconds, not hours)


Yes, there are still gotchas (Elastic Jobs, no cross-db queries, no CLR, can't replicate from it), but overall? Soooo nice.

---

I was trying to get some Xquery parsing working, and ran into a weird error:
"CONDITIONAL failed because the following SET options have incorrect settings: 'QUOTED_IDENTIFIER'"

Weird.  Doubly so because this code has worked on-prem for a decade, and the Managed Instance is running it now.  And because the SP specifically has a SET QUOTED_IDENTIFIER ON; 

Okay, let's take a look online.


Yeah, that's the problem. And indeed, my code has XQuery similar to the example:
set @myval = @xml.[value]('(*/A[B="C"]/D[E="F" or G="H"]/I)[1]', 'varchar (35)');

What's the fix? Use two single-quotes instead of the double quote.
set @myval = @xml.[value]('(*/A[B=''C'']/D[E=''F'' or G=''H'']/I)[1]', 'varchar (35)');
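Here's a self-contained repro of the fix (made-up XML, same shape as the path above):

```
SET QUOTED_IDENTIFIER ON;
GO
DECLARE @xml XML = N'<root><A><B>C</B><D><E>F</E><I>works</I></D></A></root>';
DECLARE @myval VARCHAR(35);
-- Doubled single-quotes inside the XQuery string literals; no double quotes needed
SET @myval = @xml.[value]('(*/A[B=''C'']/D[E=''F'' or G=''H'']/I)[1]', 'varchar(35)');
SELECT @myval;
```

Same predicate logic, but now it runs regardless of the QUOTED_IDENTIFIER weirdness.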