Thursday, July 31, 2008

[Tip] SELECT INTO with an IDENTITY column

Really basic, but I always forget the syntax.

select identity(int, 1,1) as ID [...] into [nonexistent table] from [other tables]
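A concrete sketch of the template above, using hypothetical table names (Customers and #CustomerList are placeholders):

```sql
-- Hypothetical example: copy rows from an existing Customers table into a
-- new #CustomerList table, generating a fresh IDENTITY column on the way.
-- The destination table must not already exist - SELECT INTO creates it.
SELECT IDENTITY(int, 1, 1) AS ID,
       CustomerName,
       City
INTO   #CustomerList
FROM   Customers
```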

Wednesday, July 9, 2008

[Indexes] Making sure you CAN reorg/rebuild/defrag indexes

In SSMS prior to SP2, the default when creating indexes was to create them with page locking disabled, which means you can't reorganize (defragment) them - ALTER INDEX ... REORGANIZE requires page locks.

Here's a script, courtesy of "TheSQLGuru", which will script out the corrections.
http://www.eggheadcafe.com/software/aspnet/29953067/finding-all-the-indexes-o.aspx

set quoted_identifier off
go
SELECT "alter index [" + i.name + "] ON [" + s.name + "].[" + t.name + "]
SET (ALLOW_PAGE_LOCKS = ON)
go"
-- s.name, t.name, i.name
FROM sys.schemas s
JOIN sys.tables t ON
t.schema_id = s.schema_id
JOIN sys.indexes i ON
i.object_id = t.object_id
WHERE
i.index_id > 0
AND INDEXPROPERTY(i.object_id, i.name, 'IsPageLockDisallowed') = 1
AND INDEXPROPERTY(t.object_id, i.name, 'IsStatistics') = 0
AND NOT EXISTS
(
SELECT *
FROM sys.objects keys
WHERE
keys.parent_object_id = i.object_id AND
keys.name = i.name AND
keys.type IN('PK', 'UQ')
)

Thursday, June 26, 2008

[Index] DMV to create your missing indexes

This is pure genius.

http://blogs.msdn.com/bartd/archive/2007/07/19/are-you-using-sql-s-missing-index-dmvs.aspx

It uses the missing-index DMVs to generate CREATE INDEX statements for you.

(Update 2013/05/10: something, in retrospect, you need to be careful about - don't just add everything it shows.  I've read that MS actually had this in a CTP, where it would tell you the indexes to add, like Bart's script below.  People wrote code that automatically added them, without thinking about what other similar indexes were already there, or what the impact would be on inserts and updates, and so caused massive performance problems.  Practical upshot: be careful.)


SELECT
mig.index_group_handle, mid.index_handle,
CONVERT (decimal (28,1),
migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)
) AS improvement_measure,
'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
+ ' ON ' + mid.statement
+ ' (' + ISNULL (mid.equality_columns,'')
+ CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
+ ISNULL (mid.inequality_columns, '')
+ ')'
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

Wednesday, June 25, 2008

[Documentation] Now where are my databases again?

Everyone probably already has a version of this, but I saw some horrid code for it the other day. The below is probably 2005-specific (obviously, because of sys.master_files). master.sys.master_files is very nice - it keeps track of all the database files in one spot, rather than needing to query each database individually.

USE master
go
SELECT sysdatabases.NAME, mf.* FROM sys.master_files mf
INNER JOIN sysdatabases ON mf.database_id = sysdatabases.dbid
ORDER BY physical_name

Monday, June 23, 2008

[Shopping] Results of my pressure cooker expedition

So, I got one. Fagor Duo. In retrospect, the Splendid would work fine, since very few recipes call for 7psi - everyone uses 15. And while some things work wonderfully well, it's not a panacea.

For one, most of the recipes are things you could've done in a crockpot. (Although maybe that's a plus...)

Secondly, while some things work wonderfully, others don't. I have a beef stew (I'll put the recipe up, even though it's not baking) that normally needs to cook for 40 minutes. With the pressure cooker, it cooks for 20... but requires 5 minutes to get up to pressure, and 15 to cool down (for better results). So it's the same total time, and I have to clean the pressure cooker. Some things work well... but is cooking something in 10 minutes that much better than 20?

However, the beef stew _does_ gain the following:
1) The meat is even more tender.
2) The flavor is better.
3) The potatoes get cooked with the meat, so I don't have to deal with boiling potatoes or scrubbing that pot to remove the starch.

[Documentation] List your backups in Wiki format

Something simple I cooked up for documentation. Basically, it looks at the backups made in the past 2 weeks and creates a table (in Wiki format). Cut and paste it into your wiki page, and you're done.

BTW - if anyone has code to easily POST this to a web site, I'd be grateful. Ideally I'd set this up to run weekly and show changes in the environment.

Cut and paste the code - Wiki tables require that spacing. Make sure to run it in TEXT mode.

And yes, I know. The code is crude.

CREATE TABLE #wikilist (id INT IDENTITY, formatting VARCHAR(300))

INSERT INTO #wikilist (formatting)
SELECT distinct 'Server: ' + server_name + '{BR}{| border="1"
! Database !! Type !! Location'
FROM msdb.dbo.backupmediafamily backupmediafamily
inner join msdb.dbo.backupset backupset
on backupset.media_set_id = backupmediafamily.media_set_id
where backup_start_date > getdate()-14

INSERT INTO #wikilist (formatting)
--SELECT * FROM
select distinct '|-
| ' +
database_name + ' || ' +
CASE backupset.[type]
WHEN 'D' then 'Database'
WHEN 'I' THEN 'Differential database'
WHEN 'L' THEN 'TLog'
WHEN 'F' THEN 'File or filegroup'
WHEN 'G' THEN 'Differential file'
WHEN 'P' THEN 'Partial'
WHEN 'Q' THEN 'Differential partial'
END + ' || ' +
lower(left(physical_device_name,(len(rtrim(physical_device_name)) - charindex('\',reverse(rtrim(physical_device_name)))))) AS Location
from msdb.dbo.backupmediafamily backupmediafamily
inner join msdb.dbo.backupset backupset
on backupset.media_set_id = backupmediafamily.media_set_id
where backup_start_date > getdate()-14
and left(physical_device_name,(len(rtrim(physical_device_name)) - charindex('\',reverse(rtrim(physical_device_name))))) not like '%:'
GROUP BY '|-
| ' +
database_name + ' || ' +
CASE backupset.[type]
WHEN 'D' then 'Database'
WHEN 'I' THEN 'Differential database'
WHEN 'L' THEN 'TLog'
WHEN 'F' THEN 'File or filegroup'
WHEN 'G' THEN 'Differential file'
WHEN 'P' THEN 'Partial'
WHEN 'Q' THEN 'Differential partial'
END + ' || ' +
lower(left(physical_device_name,(len(rtrim(physical_device_name)) - charindex('\',reverse(rtrim(physical_device_name))))))
ORDER BY '|-
| ' +
database_name + ' || ' +
CASE backupset.[type]
WHEN 'D' then 'Database'
WHEN 'I' THEN 'Differential database'
WHEN 'L' THEN 'TLog'
WHEN 'F' THEN 'File or filegroup'
WHEN 'G' THEN 'Differential file'
WHEN 'P' THEN 'Partial'
WHEN 'Q' THEN 'Differential partial'
END + ' || ' +
lower(left(physical_device_name,(len(rtrim(physical_device_name)) - charindex('\',reverse(rtrim(physical_device_name))))))

INSERT INTO #wikilist (formatting) VALUES ('|}')
SELECT formatting FROM #wikilist ORDER BY id

Wednesday, June 11, 2008

[Linked Servers] Trick to making cross-server queries faster

Learned this years ago, and it's one of those nice tricks to keep up your sleeve.

So, what collation are you running? Do you know? Do you always use the default? Do you use linked servers and need a little more performance?

If you do, then you're in luck. If you have a linked server between two servers running the same collation, enable "collation compatible" and your queries will run faster.

Why? As I remember it, if you don't have it enabled, the query is sent to the remote server without the WHERE clause; once the results come back, the filtering is evaluated locally, ensuring that collation differences are properly dealt with. With collation compatible = true, the whole query is sent over, WHERE clause included. So: fewer rows returned, lower I/O on the far side, and no filtering required locally.

One thing, though - make sure you're on the same collation. On 2005, the default is still the same (IIRC), but what it's called has changed.
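A minimal sketch of checking and enabling this (RemoteServer is a placeholder for your linked server's name):

```sql
-- Run on BOTH servers first - the collations must match
-- before enabling collation compatibility.
SELECT SERVERPROPERTY('Collation')

-- If they match, enable the option on the linked server definition.
EXEC sp_serveroption 'RemoteServer', 'collation compatible', 'true'
```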

Friday, May 30, 2008

[Replication] Document what tables are replicated

I needed this, and rather than do it the easy/long way once, I wrote a piece of code to do it from now on. Feel free to modify it, and if you can get that inner loop to create the #Articles table dynamically, PLEASE let me know. I also decided to use the SPs to try and keep it somewhat portable, and because I wanted to play with the prior post (saving SPs to tables).

Oh yeah, what this does: it finds each replicated database, then walks each publication and gets a list of the articles in that publication. Then it formats everything for a wiki, using bullets
(aka
* item 1
* item 2).

You'll have to enable data access on your local machine:
EXEC sp_serveroption [your_server_name] , 'data access' , 'true'

CREATE TABLE #Articles
(
[article id] INT,
[article name] sysname,
[base object] nvarchar(257),
[destination object] sysname null,
[synchronization object] nvarchar(257),
[type] SMALLINT,
[status] TINYINT,
filter nvarchar(257),
[description] nvarchar(255),
insert_command nvarchar(255),
update_command nvarchar(255),
delete_command nvarchar(255),
[creation script path] nvarchar(255),
[vertical partition] BIT,
pre_creation_cmd TINYINT,
filter_clause NTEXT,
schema_option binary(8),
dest_owner sysname null,
source_owner sysname null,
unqua_source_object sysname null,
sync_object_owner sysname null,
unqualified_sync_object sysname null,
filter_owner sysname null,
unqua_filter sysname null,
auto_identity_range INT,
publisher_identity_range INT,
identity_range BIGINT,
threshold BIGINT,
identityrangemanagementoption INT,
fire_triggers_on_snapshot BIT
)

CREATE TABLE #finished_list (id INT IDENTITY(1,1), names VARCHAR(100))

DECLARE
@minid SMALLINT,
@maxid SMALLINT,
@min_database SMALLINT,
@max_database SMALLINT,
@min_publication SMALLINT,
@max_publication SMALLINT,
@sqlstatement VARCHAR(1000),
@db_name sysname,
@publication_name sysname

SELECT identity(int, 1,1) as ID, name INTO #Databases FROM MASTER.dbo.sysdatabases WHERE category > 0 AND NAME <> 'distribution'
SELECT @min_database = MIN(id), @max_database = MAX(id) FROM #Databases

--outer loop - each database that has publications
WHILE @min_database <= @max_database
BEGIN
SET @db_name = null
SELECT @db_name = NAME FROM #Databases WHERE id = @min_database
IF @min_database = 1
SELECT @sqlstatement = 'SELECT identity(int, 1,1) as ID, * INTO ##Publications FROM OPENQUERY( [your_server],''SET FMTONLY OFF {call ' + @db_name + '..sp_helppublication}'')'
ELSE SELECT @sqlstatement = 'insert INTO ##Publications exec ' + @db_name + '..sp_helppublication'
EXEC(@sqlstatement)
INSERT INTO #finished_list (names) select '* ' + @db_name + ''
SELECT @min_publication = MIN(id), @max_publication = MAX(id) FROM ##Publications
--inner loop - each particular article
WHILE @min_publication <= @max_publication
BEGIN
SET @publication_name = NULL
SELECT @publication_name = NAME FROM ##publications WHERE id = @min_publication
INSERT INTO #finished_list (names) SELECT '** ' + @publication_name + ''
SELECT @sqlstatement = 'INSERT INTO #Articles EXEC ' + @db_name + '..sp_helparticle @publication = ''' + @publication_name + ''''
--I can't get this to work - if you can, please share. Probably something involving sp_executesql
-- IF @min_publication = 1
-- SELECT @sqlstatement = 'SELECT identity(int, 1,1) as ID, * INTO ##Articles FROM OPENQUERY( [your_server],''SET FMTONLY OFF {call ' + @db_name + '..sp_helparticle @publication = N''''' + @publication_name +'''''}'')'
-- ELSE SELECT @sqlstatement = 'insert into ##articles EXEC ' + @db_name + '..sp_helparticle @publication = N'''+ @publication_name +''' '
EXEC(@sqlstatement)
INSERT INTO #finished_list (names) SELECT '*** ' + [article name]+ '' FROM #articles ORDER BY [article name]
truncate table #articles
SET @min_publication = @min_publication + 1
END

TRUNCATE TABLE ##publications
SET @min_database = @min_database+1
END

SELECT * FROM #finished_list ORDER BY id
DROP TABLE #Databases
DROP TABLE ##Publications
DROP TABLE #finished_list
DROP TABLE #articles

Thursday, May 29, 2008

[Tip] SELECT INTO from SP

Found this on usenet, courtesy of BP Margolin (who pointed at a post by Umachandar Jayachandran). I've always just figured out what the columns are and built a table, but this will save me a ton of work.

-- To enable data access locally, do:
EXEC sp_serveroption localsrvr , 'data access' , 'true'
--(where localsrvr is your server's name)

-- To do SELECT...INTO from SPs results do:
SELECT * INTO SomeTbl FROM OPENQUERY(localsrvr , '{call sp_who}')

-- If the SP uses temporary tables,
-- you have to do something like the following b'cos
-- the above call will fail
SELECT * INTO SomeTbl FROM OPENQUERY( localsrvr ,'SET FMTONLY OFF {call sp_who}')

But you need to understand that this will result in the SP being executed twice: once when OPENQUERY determines the metadata for the result set, and again when it actually executes the call.
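If the double execution is a problem (say, the SP has side effects), a sketch of the traditional alternative - INSERT...EXEC runs the procedure only once, but you're back to declaring the table yourself. The column list below matches sp_who's output on SQL Server 2005; adjust it for your version:

```sql
-- Pre-create the destination table to match the SP's result set.
CREATE TABLE #who (
    spid smallint, ecid smallint, status nchar(30), loginame nchar(128),
    hostname nchar(128), blk char(5), dbname nchar(128), cmd nchar(16),
    request_id int
)

-- INSERT...EXEC executes the procedure exactly once.
INSERT INTO #who EXEC sp_who

SELECT * FROM #who
```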

Sunday, April 13, 2008

[Shopping] Buying a pressure cooker

Mostly for my benefit, but maybe it'll help someone else.

Why a pressure cooker?


Easy - time. It cooks things much faster. The boiling point of water in a pressure cooker (at 15 psi) is roughly 250°F, versus 212°F at sea level, which means you can boil things in about two-thirds the time. And some other impressive things - a 6-hour stock in an hour.

Rules for pressure cookers


  1. Don't low-ball the price. You're talking about something that will hold boiling liquids/food at pressure. As in bike-tire pressures. The last thing you want is for it to give - it can hurt you, maim you, and at the very least make a mess of the kitchen that will be Epic.
  2. Same goes for used. Be safe on something like this.
  3. If it feels cheap, it probably is. You want heft.
  4. You want Stainless Steel, not aluminum. Aluminum will pit and hold gunk within - not a happy thing.
  5. A 3-layer bottom - it makes the cooker heat more evenly.
  6. Bigger is better - you need room for the steam to build pressure. Supposedly 6 quarts is the magic number.
  7. Get a modified-first-gen (pressure valve) or second-gen (spring valve), not a "jiggle top".
  8. Expect to spend between $70 and $250.

Brands I've seen recommended: Fagor (budget pick), Kuhn Rikon (the "Mercedes of pressure cookers"), Magefesa, and WMF (the brand used by Alton Brown, but man, that's pricey).

references:
http://missvickie.com/workshop/buying.html
http://www.realfoodliving.com/KuhnRikon.htm
http://query.nytimes.com/gst/fullpage.html?res=980DE5D61E3CF93BA15750C0A9679C8B63&sec=&spon=&pagewanted=all

My decision:
Either the Fagor or the Kuhn Rikon. Fagor's about $50 less, but I think either will be a good choice. The way I figure it, I cheaped out once on a different piece of cookware (a cast-iron skillet), and I might as well throw it out... tried seasoning it for years, and it still has yet to taste as good as the Lodge Logic we bought.