Tuesday, December 3, 2013

SQL PASS 2014 Early Bird



Next year, the PASS Summit will be held in Seattle on November 4th through the 7th. The full price is $2295. However, if you register early, you can get it for as little as $1095. That amounts to a savings of $1200, or about a 52% discount.

This year, I am registering for PASS now to get the Early Bird Price. And, I encourage many of you to do the same thing. Act now, don't delay. The Early Bird Price is only good until December 6th. Then, the price starts creeping up every couple of months.

The primary reason I am doing this is to take control of my career and my learning. While I have had employers agree to send me to PASS in years past, it was never a sure thing. Sometimes yes, sometimes no.

And to be fair, your job doesn't really owe you that, either. However, I feel PASS is important enough to my career that I am willing to pay the costs myself to ensure that I am able to attend.

Of course, the best option is if your employer covers the costs. And what better way to convince your boss than to make it cheaper for her? By paying for the registration myself, I'm doing two things. One, I've cut the costs in half. That's enough to pay for most of my travel costs. And two, I have helped with budget planning. The training budget for 2014 may not be approved or funded yet. If your company is on a fiscal calendar, and most are, any training money for next November probably won't be available until after the summer.

Hopefully, this will demonstrate some things to your employer. You are serious about SQL Server and your career. If you are willing to spend your own money for training, then they should recognize that you will be a good steward in spending their money. You are the one keeping up with current trends in technology. When it is time to help steer the company's technology vision, you will be the one they call on.

Here are a few links to some other brave (or foolish) souls who have done the same thing:
"The only thing worse than training your employees and having them leave is not training them and having them stay." - Henry Ford

Tuesday, November 12, 2013

SQL Saturday #255 Dallas Revisited



Last week, I presented my session on Troubleshooting Memory Pressure in SQL Server at the SQL Saturday in Dallas. I have uploaded my session slides and sample code both here and at the SQL Saturday website.

This SQL Saturday was a special one for me. However, there were both good and bad sides to my experience.

The Good

At this point, I have presented at all three SQL Saturdays in Texas: Austin, Houston, and Dallas. Of course, since I've only done one per year, it's taken me three years to get this far. So I'm not exactly tearing it up. :-)

So my goal for next year is two-fold. On the one hand, I want to present at as many of the Texas SQL Saturdays as my schedule will allow. On the other, I would like to present at at least one SQL Saturday that is out of state. Of course, I have to balance this with work requirements and my family obligations.

I had a great crowd for my session. All the seats were filled and people were sitting on the floor in the aisles. The estimated crowd size was about 65 people. Not bad at all.

As always, one of the best aspects of any community event is the community itself. I really enjoyed the speakers dinner, reconnecting with old friends, and making some new ones in the process.

I got to see some great sessions in the afternoon. To start things off, Tim Mitchell had a great session full of tips to Make Your SSIS Packages Run Faster. Then, I sat in on Sri Sridharan's session on Turbocharging Your Career. He had some great ideas on how to take your career to the next level. To close out the day, I checked out Mike Hotek's session on how he designed a 10+ Exabyte Data Warehouse. Afterwards, several of us were trying to figure out who the client was. But alas, it was confidential.

The Bad

In the weeks leading up to SQL Saturday, I had been sick. So I was not exactly enthused about presenting in front of anybody. All I wanted to do was stay home and sleep. Add to this some other fairly stressful things that were going on, and I wasn't the happiest camper this particular weekend.

I signed up for Andy Leonard's pre-con on SSIS. There wasn't anything wrong with his presentation. But I felt so sick that I ended up leaving at lunchtime. Basically, I could do the pre-con or give my session the next day, but I couldn't do both.

But that's alright, because things soon took a turn for the worse, and helped me forget about any of that.

The Ugly

I had some pretty bad laptop problems during my presentation. Everything was going fine while I was getting set up, but the moment I started, my laptop display and the projector display both cut out. It took the A/V guys a full ten minutes of futzing around with it to get everything running again.

At that point, I was thrown off balance a bit and ended up rushing through my presentation. To make matters worse, it seemed like every time I switched between PowerPoint and SSMS I would lose my laptop screen again. So I would have to crane my neck to look at the projector screen while setting up each demo. Lots of fun.

At the end of the day, I have to take responsibility for my laptop problems. I have given presentations several times before, but this was a new laptop and I had never presented with it. If it were not for the Johnny-on-the-Spot A/V guy, this could have been a lot worse. Thank you, sir!

So I pushed on through my presentation, trying to make the best of a bad situation. I made some jokes about how the title should have been about Disaster Recovery. People laughed at my jokes, and no one walked out. Thank you, Dallas!

I gotta tell you, I was really dreading my evaluations. I did get dinged by a few people, fair enough. But I was pleasantly surprised to find that I overwhelmingly received fours and fives on my evaluations. Thank you again, Dallas.

Phoenix Rising

I've heard it said that whatever does not kill you, makes you stronger. I must say that I agree.

Just a week ago, I was sick, stressed out, and not a happy camper. Now, my career has taken a new turn that should be quite entertaining.

Many times, the SQL Community has given me a little boost that I needed to get myself back on track. Get involved with your local SQL User Group. Network, learn, and grow. And when you're ready, or even when you are not, sign up to give a presentation. You might be surprised how your career starts to take off.

Monday, November 11, 2013

Veterans Day

Iwo Jima

Today is Veterans Day. I like to take some time to reflect on all the men and women in service who have sacrificed for the rest of us.

I am also thankful for the things that the Marines have provided me, as well. They taught me the value of integrity and hard work. They also helped pay my way through college.

Semper Fidelis

As you are going about your day, please take a moment to think about the world around us and be thankful for what you have in this world.

Service and sacrifice are not limited to the military. Throughout human history, countless people have stood up for what they felt was right and have helped propel us along. It is that desire for improvement that makes us all human.

Wednesday, October 23, 2013

SQL Server 2014 CTP2

SQL Server 2014 CTP2 is now available. This is the last public beta before the final RTM release. So, if you like playing with new features and experimenting before everyone else, then this is the download for you.

CTP2 offers some new enhancements with In-Memory OLTP, aka Hekaton. I'm interested to see how the new range indexes work with memory-optimized tables. In CTP1 we only had hash indexes.

Kalen Delaney has an updated white paper available. If you want to read more about Hekaton under the covers, then be sure to check it out.

Tuesday, October 1, 2013

I'm Speaking at SQL Saturday #255 Dallas

SQL Saturday is coming to town. SQL Saturday is coming to town. OK, so that's not as awesome as Santa Claus coming to town, but it's still pretty cool. You get a full day of SQL training for free. Well, almost free. There is a $10 charge for lunch.


If you haven't been to a SQL Saturday before, here is your chance. Dallas is hosting SQL Saturday #255 on Saturday, November 2nd. Saturday? Are you kidding me? I know, I thought it was a joke the first time I heard it, but this will make my 5th SQL Saturday.

SQL Saturdays are a ton of fun. You get to meet other SQL folks from all over the state. There are always a significant number of attendees who come from across the country, and even a few who come in from overseas.

Quite a few of the speakers will be top tier talent who have spoken at PASS or other national conferences. And then they let me in the club. Not sure how I made it, but I've been selected to speak, as well.

I'll be presenting a session on Troubleshooting Memory Issues in SQL Server. I'll go through the basics of Memory Pressure, and show you various tools and techniques to troubleshoot it. Be sure to bring some tomatoes and old DIMMs for the interactive portion of the show.

If you're not interested in my session, there are a total of 42 sessions being offered. Douglas Adams would be proud. Sessions are organized into several tracks including Administration, Development, Business Intelligence, and Career Development.

Additionally, on Friday there are three full-day pre-con sessions being offered. These cost $100 and you must register ahead of time. I've registered for Andy Leonard's session covering SSIS 2012. Another great one is Grant Fritchey's session on Query Tuning. I saw his session at PASS last year; it's a good one. At PASS these sessions cost about $300 - $400. So this is a huge discount for the same level of content.

So what are you waiting for? Grab a mouse, head to the website, and register. Oh, you don't live anywhere near Dallas? That's OK, because there's a SQL Saturday coming soon to a town near you.

Tuesday, September 24, 2013

An Overview of SQL Server 2014 In-Memory OLTP Hekaton

So you've heard of Hekaton, but what is it, and why do you want it?

Hekaton, or In-Memory OLTP, is an entirely new set of data structures for tables and indexes that is optimized for in-memory storage as opposed to disk-based storage.

Hekaton is the code name for In-Memory OLTP, so I will use these two terms interchangeably.

Why is Microsoft doing this? Short version: memory and servers are much cheaper now than they were when SQL Server first launched.

At this point, it is feasible to have enough memory on your server to house the entire database. Even large, one terabyte databases.

However, the query optimizer and its costing rules haven't changed along with this. So, even if you have tons of memory, SQL Server still assumes that it will be reading data off of the disk.

Basic Glossary of Terms

  • Cross-Container Transactions - transactions that use both disk-based tables and memory-optimized tables

  • Disk-Based Tables - plain old normal tables, what you have right now, 8k pages

  • Hekaton - codename for In-Memory OLTP

  • In-Memory OLTP - new architecture and data structures using memory for data storage instead of disks

  • Interop - a TSQL query written against memory-optimized tables

  • Memory-Optimized Tables - tables using new memory data structures to store their data

  • Natively Compiled Stored Procedures - compiled machine code instead of interpreted TSQL, still written in TSQL but with some restrictions


Databases

In order to make use of In-Memory OLTP, you need a database that supports it. It's fairly easy to do this. When you create the database, you need a special filegroup with the CONTAINS MEMORY_OPTIMIZED_DATA clause. Additionally, you need to use a Windows BIN2 collation. This can be done at the database, table, or query level.
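
If you want to see what that looks like, here is a minimal sketch; the database name and file paths are made up for illustration, and the memory-optimized filegroup points at a folder rather than a data file.

-- minimal sketch of a database with a memory-optimized filegroup (names and paths are placeholders)
create database HekatonTest
on primary (
 name = HekatonTest_data,
 filename = 'C:\SQLData\HekatonTest_data.mdf'
),
filegroup HekatonTest_mod contains memory_optimized_data (
 name = HekatonTest_mod,
 filename = 'C:\SQLData\HekatonTest_mod'
)
log on (
 name = HekatonTest_log,
 filename = 'C:\SQLData\HekatonTest_log.ldf'
);
go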

Tables

To create a Memory-Optimized Table, you use the MEMORY_OPTIMIZED = ON clause. There are several restrictions on column types, but in simple terms: no LOB data types, no whatever(max), and no CLR.

Rows are limited to 8060 bytes with nothing stored off row. The size limitation is enforced at creation, so all of your column sizes must fit within this limit.

DML triggers are not allowed, neither are foreign key or check constraints. Love GUIDs? I hope so, because identity columns are out, too.

There are two basic types of Memory-Optimized Tables, SCHEMA_ONLY and SCHEMA_AND_DATA.

SCHEMA_ONLY tables are non-durable. You can put data in there, but in the event of a restart or crash, the table is recreated but your data is gone. This could be useful for storing application session state or for staging tables in a data warehouse.
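
To give you a feel for the syntax, here is a sketch of a durable Memory-Optimized Table. The table and column names are made up for illustration, and the hash index shown inline is covered in the next section.

-- sketch of a durable memory-optimized table (table and column names are placeholders)
create table dbo.SessionState (

 SessionID int not null
  primary key nonclustered hash with (bucket_count = 1000000),
 SessionData varbinary(8000) not null,
 LastAccess datetime2 not null

) with (memory_optimized = on, durability = schema_and_data);
go

-- for a non-durable staging or session-state table, use durability = schema_only instead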

Indexes

Memory-Optimized Tables can have two types of indexes, Hash Indexes and Range Indexes. All tables must have at least one index, and no more than eight. Also, tables that are defined as SCHEMA_AND_DATA must have a primary key. Indexes are rebuilt each time SQL Server starts up.

A Hash Index is an array of pointers, where each element points to a linked list of rows. The number of elements in the array is controlled by the BUCKET_COUNT clause. In general, you want to set the BUCKET_COUNT to at least the number of unique key values in your table.

If you have too few buckets, then multiple key values will share the same linked list, which will mean longer scans to look for your row. If you have too many, then you will be wasting memory with empty buckets.
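
If you are not sure where to start, one rough approach is to count the distinct key values in the existing disk-based table you plan to migrate and then pad that number for expected growth. The table and column names below are placeholders.

-- rough starting point for bucket_count: unique key values today, padded for growth
-- (placeholder table and column names)
select count(distinct SessionID) as UniqueKeyValues
from dbo.CurrentSessionState; -- the existing disk-based table you plan to migrate
go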

Range Indexes are good for when you will be searching for a range of values, or if you are not able to properly estimate the BUCKET_COUNT size. However, Range Indexes are not available in CTP1, so we'll have to wait a bit to learn more about those.

Queries and Stored Procedures

There are two basic methods for querying Memory-Optimized Tables: Natively Compiled Stored Procedures, or good old-fashioned TSQL, known as Interop, which also includes regular stored procedures.

Natively Compiled Stored Procedures are going to be the fastest. However, they are only able to access Memory-Optimized Tables. If you want to be able to query regular tables along with Memory-Optimized Tables, then you will need to use TSQL Interop. There are a variety of restrictions when using TSQL Interop, such as no MERGE, cross-database queries, locking hints, or linked servers.

TSQL Interop allows you to make a gradual migration to In-Memory OLTP. This way you can slowly convert a few objects at a time, based on which ones will give you the most performance gain.
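
For a feel of the syntax, here is a sketch of a Natively Compiled Stored Procedure against the placeholder table from the earlier example. The NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and ATOMIC block options are what set it apart from a regular procedure.

-- sketch of a natively compiled stored procedure (placeholder names)
create procedure dbo.GetSessionState
 @SessionID int
with native_compilation, schemabinding, execute as owner
as begin atomic with (transaction isolation level = snapshot, language = N'us_english')

 select SessionID, SessionData, LastAccess
 from dbo.SessionState
 where SessionID = @SessionID;

end;
go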

One Big Caveat

One thing to keep in mind is that tables, indexes, and stored procedures cannot be modified in Hekaton. This means that you will need to drop and re-create these objects in order to make changes. Also, stats have to be rebuilt manually. And then to take advantage of them, the stored procedures would need to be recreated, as well.

Obviously, this is a fairly major restriction. However, I think I can live with this for a version one product. I hope that by the time SQL Server 2015 comes out, there will be an easier way to add a column to a Memory-Optimized Table.

Concurrency

Hekaton offers an improved, versioned, optimistic concurrency model for Memory-Optimized Tables that removes waiting for locks and latches. Explicit transactions are supported using Repeatable Read, Serializable, and Snapshot isolation levels. Read Committed and RCSI are only available with autocommit transactions, with RCSI only if no disk-based tables are involved.
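
In practice, this often shows up as a table hint. For example, in an explicit cross-container transaction you can ask for snapshot isolation on just the Memory-Optimized Table, something like the sketch below (placeholder table and column names again).

-- sketch: explicit transaction touching both table types (placeholder names)
begin transaction;

 select SessionID, LastAccess
 from dbo.SessionState with (snapshot) -- memory-optimized table read under snapshot isolation
 where SessionID = 42;

 update dbo.SessionAudit   -- plain old disk-based table
 set LastTouched = getdate()
 where SessionID = 42;

commit transaction;
go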

High Availability and Disaster Recovery

All the basics such as backups and restores are available. Additionally, AlwaysOn and Log Shipping are fully supported. Unfortunately, Mirroring and Transactional Replication are not. However, this isn't too much of a surprise, since Microsoft is definitely pushing AlwaysOn as the new HA/DR solution.

Migration Assistance

The AMR Tool (Analyze, Migrate, Report) will identify unsupported data types and constraints in tables. It will also recommend which tables and stored procedures should see the most performance improvement by converting to In-Memory OLTP and Memory-Optimized Tables.

Management Data Warehouse offers the Transaction Performance Collection Set, which will help you gather the necessary data in order to let the AMR Tool work its magic.

Wednesday, September 18, 2013

SQL Server 2014 AMR Tool for In-Memory OLTP

SQL Server 2014 has many new features and improvements over SQL Server 2012. One feature that a lot of people are interested in is In-Memory OLTP. Not knowing where or how to take advantage of this feature can hold people back from playing with it.

The AMR Tool (Analyze, Migrate, Report) helps you simplify migrations to SQL Server 2014 In-Memory OLTP.

SQL 2014 AMR Tool
The AMR Tool helps you identify which tables and stored procedures will benefit from In-Memory OLTP. If you already have some migration plans, then the AMR Tool can help validate your plans. It will evaluate what needs to be done to migrate your tables and stored procedures.

In order to take advantage of the AMR Tool you will need the following three items:
  • A target database that you want to migrate to SQL Server 2014. This needs to be SQL Server 2008 or higher. So no old-school databases here.

  • A copy of SQL Server 2014 CTP1 Management Studio installed. Note, you do not need a SQL Server 2014 instance or database, just the tools.

  • And last, a Management Data Warehouse with the Transaction Performance Collection Set installed.
Once you have these items set up, you are ready to begin using the AMR Tool to generate recommendations based on the access characteristics of your workload, contention statistics, and CPU usage of stored procedures.

Resources

Benjamin Nevarez has a nice tutorial on using the AMR Tool. Another good resource is the Hekaton whitepaper by Kalen Delaney. If you don't already have SQL Server 2014 CTP1, you can download it here.

Monday, September 16, 2013

Performance Tuning with Compression

One lesser known trick for performance tuning is compression. Wait, what? Isn't compression all about saving space? Yes, but it also tends to have another pleasant side effect.

SQL Server is typically an IO-bound application. That means IO is almost always your constraining factor. Whenever I am troubleshooting a system, IO is one of the areas that I always take a look at.

Enabling compression will reduce the amount of IO that SQL Server uses to satisfy SELECT queries. Since more data is stored on each page, it takes fewer pages to satisfy a query.

A Quick Demo

We'll use the following simple query as our baseline. Run the query and then take a look at the number of IOs. To see this, click on the Messages tab after you run the query.
-- example query
use AdventureWorksDW2012;

set statistics io on;

select ProductKey, 
 count(*) AS QuantitySold,
 sum(SalesAmount) AS TotalSales
from dbo.FactInternetSales
group by ProductKey
order by ProductKey;
go

Results

IO Baseline
Here's our baseline. We had 1,240 reads on the FactInternetSales table. Now, let's enable Row Compression and re-run the query.
-- turn on row compression
use AdventureWorksDW2012;

alter table dbo.FactInternetSales rebuild partition = all
 with (data_compression = row);
go

Results

Row Compression
Here you can see the IOs were cut in half, 656 reads from FactInternetSales. Last, turn on Page Compression and run the query one more time.
-- turn on page compression
use AdventureWorksDW2012;

alter table dbo.FactInternetSales rebuild partition = all
 with (data_compression = page);
go

Results

Page Compression
Now we have less than a quarter of the original IOs. Only 292 reads from FactInternetSales. Looks good to me.

There's No Such Thing as a Free Lunch

One thing to keep in mind is that compression will increase your CPU usage. In practice, I have usually found this to be in the range of one to three percent. That said, if you are currently experiencing CPU issues with your server, it would behoove you to address that first. Oddly enough, quite often I find that CPU problems are being driven by IO problems. So be sure to tune your queries and check your indexes.

Row Compression versus Page Compression

There are two types of data compression available in SQL Server: Row Compression and Page Compression.

Row Compression stores fixed data type columns using a variable length format. Page Compression adds to that by incorporating Prefix and Dictionary Compression to the mix. Page Compression works very well when you have lots of repeating values in your tables. Like a Data Warehouse...

Generally speaking, I recommend using Row Compression with OLTP databases, and using Page Compression with Data Warehouses.
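
Also keep in mind that nonclustered indexes are compressed separately from the table itself, so you can mix and match. A sketch of compressing a single index looks like this; the index name is made up for illustration.

-- compress one nonclustered index by itself (index name is made up)
use AdventureWorksDW2012;

alter index IX_FactInternetSales_ProductKey on dbo.FactInternetSales
 rebuild with (data_compression = page);
go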

Now this doesn't mean you should blindly enable compression for all tables and indexes on all of your databases. Do a little analysis first and start small.

Focus on your largest tables first; the ones that are causing you pain. Run some checks and see if those tables would benefit from having compression enabled. Pick your top ten.
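
The built-in procedure sp_estimate_data_compression_savings will give you an estimate for a single table. The full script in the next section simply wraps it in a cursor so you can review everything at once.

-- estimate page compression savings for a single table
use AdventureWorksDW2012;

exec sp_estimate_data_compression_savings
 @schema_name = 'dbo',
 @object_name = 'FactInternetSales',
 @index_id = null,    -- null means all indexes
 @partition_number = null,  -- null means all partitions
 @data_compression = 'page'; -- or 'row', or 'none'
go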

The best candidates for compression are tables that are not being updated frequently. So if you have a table that is getting 25% of its rows updated every day, that may not be the best table to compress. As always, you will need to test your servers and workload to see what combination works best for your environment.

Show me the T-SQL

The script below will check all of your tables and indexes. It will report back the current size, current compression methods being used, and an estimation of the space savings you can achieve by using either Row Compression or Page Compression. It runs on SQL 2008, SQL 2012, and SQL 2014 CTP1.
-- Steven Ormrod
-- 7/11/13
-- estimates row and page compression savings on all tables in a database
-- Version: 2.1
-- Source: http://stevenormrod.com/
-- License: Creative Commons. Attribution-NonCommercial CC BY-NC
-- http://creativecommons.org/licenses/by-nc/3.0/

set nocount on;

set transaction isolation level read uncommitted;

declare @currenttable numeric(10, 2);
declare @tablecount  numeric(10, 2);

-- var to hold cursor values
declare @table  varchar(255);
declare @schema  varchar(255);

declare @sql  nvarchar(max); -- large enough to hold long schema and table names

-- temp table for row compression information
create table #rowcompression (

 TableName   varchar(255),
 SchemaName   varchar(255),
 IndexID    int,
 PartitionNumber  int,
 CurrentSizeKB  bigint,
 RequestedSizeKB  bigint,
 CurrentSampleKB  bigint,
 RequestedSampleKB bigint

);

-- temp table for page compression information
create table #pagecompression (

 TableName   varchar(255),
 SchemaName   varchar(255),
 IndexID    int,
 PartitionNumber  int,
 CurrentSizeKB  bigint,
 RequestedSizeKB  bigint,
 CurrentSampleKB  bigint,
 RequestedSampleKB bigint

);

-- current compression information
select s.name as SchemaName,
 t.name as TableName,
 i.name as IndexName,
 i.index_id,
 p.data_compression_desc as CompressionType
into #currentcompression
from sys.schemas s
 join sys.tables t
  on s.schema_id = t.schema_id
 join sys.indexes i
  on t.object_id = i.object_id
 join sys.partitions p
  on i.object_id = p.object_id
   and i.index_id = p.index_id
order by s.name, t.name, i.name;

select @tablecount = count(*) from sys.tables;

set @currenttable = 0;

-- declare variable for the cursor
declare curTables cursor for
-- sql statement for the cursor
select t.name as TableName, 
 SCHEMA_NAME(t.schema_id) as SchemaName
from sys.tables t
order by SchemaName, TableName;

-- open 'er up
open curTables;

-- load the first row
fetch next from curTables into @table, @schema;

-- loop through the cursor
while @@fetch_status = 0 begin

 -- do some work
 print 'Estimating row and page compression for: ' + @schema + '.' + @table + '...';

 set @sql = 'exec sp_estimate_data_compression_savings ''' + @schema + ''', ''' + @table + ''', null, null, row';

-- print @sql

 insert into #rowcompression
 execute sp_executesql @sql;

 -- estimate page compression
 set @sql = 'exec sp_estimate_data_compression_savings ''' + @schema + ''', ''' + @table + ''', null, null, page';

-- print @sql

 insert into #pagecompression
 execute sp_executesql @sql;

 -- executive update
 set @currenttable = @currenttable + 1;
 print char(9) + 'Percent Complete: ' + cast(cast(@currenttable / @tablecount * 100.0 as numeric(10, 2)) as varchar(255));

 -- advance to the next row
 fetch next from curTables into @table, @schema;

end;

-- build the executive executive summary
select r.SchemaName, r.TableName, 
-- r.IndexID,
 i.name as IndexName,
 r.CurrentSizeKB, 
 c.CompressionType as CurrentCompression,
 r.RequestedSizeKB as RowCompressionKB,
 case r.CurrentSizeKB
  when 0
   then 0
  else
   cast((1.0 - r.RequestedSizeKB / (r.CurrentSizeKB * 1.0)) * 100 as numeric(10, 2))
 end as RowDecreasePercent,
 p.RequestedSizeKB as PageCompressionKB,
 case r.CurrentSizeKB
  when 0
   then 0
  else
   cast((1.0 - p.RequestedSizeKB / (r.CurrentSizeKB * 1.0)) * 100 as numeric(10, 2))
 end as PageDecreasePercent
into #executivesummary
from #currentcompression c
 join #rowcompression r
  on c.SchemaName = r.SchemaName
   and c.TableName = r.TableName
    and c.index_id = r.IndexID
 join #pagecompression p
  on r.TableName = p.TableName
   and r.SchemaName = p.SchemaName
    and r.IndexID = p.IndexID
 join sys.indexes i
  on r.TableName = OBJECT_NAME(i.object_id)
   and r.IndexID = i.index_id
order by r.SchemaName, r.TableName, IndexName;

-- show everything
select * from #executivesummary;

/*

-- if you want to change how reporting is done
-- experiment with the following queries
-- be sure to comment out the original one, above

-- focus on the largest tables
select top 10           -- focus on the biggest bang for the buck
* 
from #executivesummary
where RowDecreasePercent + PageDecreasePercent > 50  -- peel off the bottom layer
order by CurrentSizeKB desc;       -- focus on the larger tables and indexes

-- focus on the largest compression
select top 10           -- focus on the biggest bang for the buck
* 
from #executivesummary
where CurrentSizeKB > 1024
order by RowDecreasePercent + PageDecreasePercent desc;

*/

-- doing the dishes
close curTables;
deallocate curTables;

drop table #currentcompression;
drop table #rowcompression;
drop table #pagecompression;
drop table #executivesummary;

go



Results

Compression Estimates

Compression Estimates Percent Complete
The default is to display everything sorted by Schema, Table Name, and Index Name. I've included a few other queries in the comments that will let you modify the display to focus on the largest tables, or to show the tables that should receive the highest percentage of compression.

Enjoy!

Tuesday, September 3, 2013

Master of Puppets

You may have heard about the recent announcement from Microsoft to cancel the Microsoft Certified Masters program. AKA, the MCM and MCSM programs. Well, it wasn't really an announcement. An announcement is when you make a public declaration of some information or event. Think about a wedding announcement.


Instead, this was announced in an email very late on Friday night. Because of Time Zone differences, mine came in after midnight. In the email we were informed that all of the MCM exams and the entire program are being retired on October 31st.

A few news sites have picked up on this already and have made the non-announcement public.

What's so awful about this is that, within the past few weeks, there have been other, contrasting announcements about expanding the number of testing centers, the upcoming release of a new exam, etc. It's like the proverbial saying about the right hand not knowing what the left hand is doing.

Strange, to say the least.

Anyone who was part-way through the MCM program is being hung out to dry at this point. All your time and money spent for naught. I can only imagine how that must feel. Awful, truly awful.

On the one hand, I'm disappointed. I spent a lot of time, effort, and money to achieve the MCM, only to have it discontinued. Kind of a slap in the face. However, I have to admit, that on the other hand, I'm a little relieved.

Wait, let me explain. As I see it, there are some serious problems with the overall Microsoft Certification Program. Since they've decided to gut the MCM Program, this is a good opportunity to fix everything that is wrong with it.

A Looming Deadline

I finished the MCM a little over a month ago, and I couldn't be more relieved. I had put a lot of time, money, and effort into this. My Significant Other has been very patient and supportive during this journey, but it was time for it to end. Or at least be able to rest for a little while.

Nope.

You see, once you have the SQL 2008 MCM, you only have until June of 2014 to complete the SQL 2012 MCM. That's only ten months away! Also remember, there is a 90-day waiting period for retakes at this level.

If you couldn't make that deadline, then you start back at the bottom with all the MCP/MCTS/MCITP/MCSA/MCSE exams to pre-qualify you for the opportunity to try the SQL 2012 MCM exams once again.

And, guess what, they didn't even have the SQL 2012 MCM Lab Exam ready. So you have a deadline ticking away, but not much you can do about it.

I was a little frustrated by that timeline. I had just spent a considerable amount of money, my own money, to complete the MCM program, and now I had to jump right back in and start spending a bunch more money, immediately. Add to that, I've used about half of my PTO (vacation days) for my studies, travel, and test taking along this journey.

So you can see why I'm a little relieved. Now, I don't have to explain my elaborate cover story as to why I'm not going to bother with the SQL 2012 MCM. Instead, I can join the chorus of folks who are screaming bloody murder about the program being canceled.

Maybe now, I can have a little break and attempt to pay down my SQL 2008 MCM costs. Maybe tomorrow, there will be a new announcement of an entirely new certification program that has been in development for months and months. Wink, wink, nudge, nudge.

Don't Change the Brand

One of the problems with the Microsoft Certifications and the MCM Program is the names and acronyms keep changing. Take a page from other, successful companies and don't.

Since the beginning of time, it has been the MCP. My first certification was an MCP in Visual Basic Programming. That should continue to be the basis of everything. Stop changing the names of the certifications every time a new version of the product comes out.

People are just now starting to learn about the MCM Program. Most don't even know what it is, including recruiters and HR, and now you're changing it to MCSM, why? And now, before people have a chance to be confused about the MCSM, it's getting scrapped.

Keep the standard MCP/MCTS/MCITP/MCM naming scheme. Do you see Ford renaming the Mustang to Horse 2.0? No, you don't.

I can't tell you how many times I've seen a job posting or spoken to HR/Recruiting and they ask if I'm an MCDBA for SQL 2008 or even SQL 2014.

If you say 'no, I have the MCITP or MCM for SQL 2008, which is the newer version' all they hear is 'no' and move on. So, what you have to say is 'yes, I have the MCDBA for SQL 2012' or whatever stupid crap recruiters are asking for.

TLAs are better than FLAs

But if you were going to change the names of the certificates, at least choose something easy to say, easy to understand, and that is intuitive.

People love three letter acronyms. They roll off the tongue easier, and they just sound so cool.

I would propose the following nomenclature:

  • MCA, MCP, MCE, MCM. Simple, easy, TLAs.

  • Associate, Professional, Expert, Master.

Most people intuitively know how to rank those four levels. You don't need to know anything about the technology in order to understand that an Associate is lower than an Expert, or Master.

Certificates Shouldn't Expire

I'm not saying you shouldn't continue to train, get certified, learn new skills, etc. But the certs you've earned should stay with you, period. Think about how many SQL 2000 installations there are still out there.

If you are an expert on an old piece of technology, and the customer needs that, then you are still the expert.

If a certification is tied to a specific version of technology there is no need to expire it. That person is not diminishing or interfering with new technology or certifications.

If someone only has certification from ten years ago, and nothing more recent, then let the customer decide if that is what they want.

Specialists Specialize

The SQL 2008 Server Certification program had three tracks: DEV, DBA, BI. There were two levels: junior and senior. Now, you have to complete all three tracks to get the entry level certification for SQL 2012.

Think about cars for a minute. Mechanics specialize. You have transmissions, engines, fuel injection systems, etc. Someone who knows how to fix one, rarely knows the others. Or you have an oil-change technician.

Or doctors? Orthopedic surgeon; ear, nose, and throat; endocrinology. Or you have a general practitioner.

Have you perused job descriptions that require you to be an expert in all three: BI, Dev, DBA; yet pay less than just one? Me too, lots of them. Those are interesting interviews, but they are also jobs to be avoided like the plague.

The official party line seems to be that Dev, DBA, and BI are so intertwined that you have to understand all of them in order to do any of them. Well, the real world doesn't quite work that way. Knowing about other areas certainly makes you better, and should be rewarded. But for an entry level certification that is ridiculous.

And, if you truly believed that, then how come someone can upgrade to the new MCSA with only one of the old MCTS certs? If all three skills were so intertwined, then you would require someone doing the upgrade to hold all three MCTS certifications.

Cost Benefit Analysis

All this leads me to the question of whether I made the best choice in pursuing the SQL 2008 MCM. What is the cost / benefit analysis of all the time, money, effort, PTO, relationship costs, etc. for pursuing the MCM?

With the same money, you could self-fund a trip to the PASS or BA conferences. You could speak at tons of SQL Saturdays. You could take all the SQL 2012 MCSE Certifications. You could go on a SQL Cruise. And you'd still have money left over.

MCM RIP

I do hope Microsoft reconsiders canceling the MCM Program. This was the only certification that was serious and had sufficient rigor. It gave you something to strive for if you wanted to distinguish yourself from your peers.

Please take a moment to register a comment on the Connect site and let Microsoft know how you feel.

Tuesday, August 13, 2013

T-SQL Tuesday #45

T-SQL Tuesday
T-SQL Tuesday is a rotating blog series started by Adam Machanic. Each month a different person hosts the series and decides on a topic. Then, everyone writes a blog post on the same topic. This month, it is being hosted by Mickey Stuewe, and the topic is Auditing.

Auditing

In layman's terms, Auditing is keeping track of changes and who made them. There could be many reasons for wanting to do this: for example, legal requirements, change control processes, security, or troubleshooting.

Who Done It?

Once upon a time, I was a Database Administrator and I had a system that was suffering from some random errors that we were unable to pin down. Basically, at seemingly random times various parts of the application would fail. Usually this would involve alerts firing after hours and the ensuing late night fun.

The team would get called together to troubleshoot the problem, and typically a developer would put together a fix to the code to get the application working again. However, we were unable to identify the root cause.

Turns out, one of the developers was making unauthorized changes to the database. For example, modifying a table or stored procedure. This, in turn, would cause the application to break.

What the Deuce?

We did not have the best separation of duties at this organization. The developers were part of the deployment process. As such, one of the developers had gained access to the Application Service Account. So, he would use it from time to time to "fix" things, or slide in some last minute changes that got missed in the deployment steps.

We started to suspect this when it was always the same developer who would volunteer to help us troubleshoot and miraculously come up with the code fix. He got to be the hero, and we were the zeroes.

To Catch a Thief?

We added a DDL Trigger to start logging all changes to the database. In addition to the standard information returned by the EVENTDATA() function, we added a few other things. Since he was using the Application Service Account, we needed a few more details to distinguish the culprit from one of us. So, we added the IP Address and the name of the application that was issuing the DDL commands.

Show Me the Audit

One thing you'll want is a central place to store all of your Audit History. While you could create this table within the database you are auditing, I prefer using a separate database. This way, you can have several databases logging to the same database for easier reporting. Or, you could locate the audit database on a separate server for added security and redundancy.

[sql]

-- create a separate database for housing all of your audit information
create database DBAAudit;
go

use DBAAudit;
go

-- create a table to store the history of DDL changes
create table dbo.DDLHistory (

DDLID int identity(1, 1) not null,
DDLDate datetime not null default getdate(),
LoginName nvarchar(256) not null,
IPAddress nvarchar(48) not null, -- wide enough for IPv6 addresses from sys.dm_exec_connections
AppName nvarchar(100) not null,
DatabaseName nvarchar(256) not null,
SchemaName nvarchar(256) not null,
ObjectName nvarchar(256) not null,
ObjectType nvarchar(50) not null,
EventType nvarchar(50) not null,
DDLCommand nvarchar(max) not null,
DDLXML xml not null

);
go

[/sql]

Setting up a DDL Trigger is fairly straightforward. All the relevant information is returned by the EVENTDATA() function. This will return various details about the DDL event in an XML format.

You can add any other code to flesh out your trigger as you see fit.

I like to add the IP Address and the Application Name. These are easy enough with some built-in functions.

[sql]

-- replace with your own database name
create database MyTest;
go

-- replace with your own database name
use MyTest;
go
-- modify trigger to capture the information that is relevant to you
create trigger DDLTracking
on database
for create_table, alter_table, drop_table,
create_procedure, alter_procedure, drop_procedure,
create_function, alter_function, drop_function
as begin

set nocount on;

-- grab the trigger event data
declare @eventdata xml;
set @eventdata = EVENTDATA();

-- grab the ip address, sometimes people use another login, this will help trace to their machine
declare @ipaddress nvarchar(48); -- client_net_address can be up to 48 characters

select @ipaddress = client_net_address
from sys.dm_exec_connections
where session_id = @@SPID;

-- log the info in our table
insert into DBAAudit.dbo.DDLHistory
(LoginName, IPAddress, AppName, DatabaseName, SchemaName, ObjectName, ObjectType, EventType, DDLCommand, DDLXML)
values (
@eventdata.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(256)'),
@ipaddress,
APP_NAME(), -- grabs what program the user was using, e.g. management studio
@eventdata.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(256)'),
@eventdata.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(256)'),
@eventdata.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
@eventdata.value('(/EVENT_INSTANCE/ObjectType)[1]', 'nvarchar(50)'),
@eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(50)'),
@eventdata.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'), -- full command text
@eventdata
);

end;
go

[/sql]

Now, this is a fairly basic DDL Trigger. From here, you can modify it to add any additional information that you may require. As always, tailor any code to your own situation.

If you are co-mingling information from multiple servers, you may wish to add a column for that. You may also wish to look into locating the Audit table on a remote server.
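
As a sketch of that first tweak, you could add a server name column with a default; you would also add @@SERVERNAME to the INSERT inside the trigger. The constraint name here is just an example.

[sql]

-- hypothetical tweak: record which server the DDL came from
alter table DBAAudit.dbo.DDLHistory
add ServerName nvarchar(256) not null
constraint DF_DDLHistory_ServerName default @@servername;
go

[/sql]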

And There You Have It

A good Audit or Logging System can help you solve all manner of mysteries. They make troubleshooting a server much easier than trying to divine what happened in the past.

And remember, when you have ruled out the impossible, whatever remains, however improbable, is the answer.

Monday, August 5, 2013

Why is my SQL Agent Job Slowing Down?

The other day, we had a situation where a user had the perception that a particular job was taking longer than normal.

So I opened the Job History and glanced over the run times. Everything seemed fine to me. But I realized, I wanted something more precise than 'it looks good when I look at it'.

msdb to the Rescue

So I started digging into the job history tables in msdb. msdb is one of the system databases. Specifically, it keeps track of all the jobs, their run times, history, errors, outcomes, etc.

In fairly short order, I pulled together a couple of scripts to analyze the job history, and tell me exactly how run times have been changing, or not, over the past month or so.

The Persistence of Time

As it turns out, the run times were stable, the user's perception of time was not.

Overview of all jobs:

Job Run Times - All Jobs
Details on one job:

Job Run Times - One Job
A Couple of Quick Scripts

The first script gives an overview of all jobs, aggregated by month.

[sql]

-- Steven Ormrod
-- job run times - all jobs.sql

use msdb;
go

-- aggregations, omits failures as they skew the data
-- note: run_duration is stored as HHMMSS, so convert it to seconds before aggregating
select j.name, left(h.run_date, 6) as 'year/month',
avg(h.run_duration / 10000 * 3600 + h.run_duration / 100 % 100 * 60 + h.run_duration % 100) as 'avg duration sec',
max(h.run_duration / 10000 * 3600 + h.run_duration / 100 % 100 * 60 + h.run_duration % 100) as 'max sec',
min(h.run_duration / 10000 * 3600 + h.run_duration / 100 % 100 * 60 + h.run_duration % 100) as 'min sec',
cast(stdev(h.run_duration / 10000 * 3600 + h.run_duration / 100 % 100 * 60 + h.run_duration % 100) as numeric(20, 2)) as 'stdev sec'
from dbo.sysjobhistory h
join dbo.sysjobs j
on h.job_id = j.job_id
where h.step_id = 0
and h.run_status = 1 -- omits failures
group by j.name,
left(h.run_date, 6)
order by j.name,
left(h.run_date, 6) desc;
go

[/sql]

The second script gives you the details for a specific job.

[sql]

-- Steven Ormrod
-- job run times - one job.sql

use msdb;
go

declare @strJobName varchar(100);
set @strJobName = 'YOURJOBNAMEGOESHERE';



-- details
select j.name, h.run_date, h.run_time, h.run_duration,
--select 'Slow Running Job' as 'name', h.run_date, h.run_time, h.run_duration,
h.run_status, h.message
from dbo.sysjobhistory h
join dbo.sysjobs j
on h.job_id = j.job_id
where j.name = @strJobName
and h.step_id = 0
and h.run_status = 1 -- just the successful runs
order by h.run_date desc,
h.run_time,
h.step_id;

go

[/sql]

Details are Important

The next time one of your users reports that a job is taking longer than usual, gather some basic stats so you can establish the facts before going too far.

Friday, July 5, 2013

TechNet Subscriptions Retiring

You may have heard that Microsoft recently announced the retirement of the TechNet Subscriptions service.

TechNet Subscriptions are a popular method for developers and other IT professionals to get a legitimate copy of all the latest and greatest goodies from Microsoft. If you needed to set up a lab environment to try out various features or a few test servers, this was your one-stop shop. No more.

TechNet Subscription Retiring
But wait, all is not lost. You can still buy MSDN Subscriptions.

MSDN Pricing
If that's not your bag, try signing this petition to keep TechNet around, just a little while longer.

Petition To Keep TechNet
You never know, it just might work.

Monday, June 3, 2013

Remote Trace Slowing Down Server

Recently, I was troubleshooting a server where users were complaining about slow response times. Some folks had already looked over the server and didn't find anything. When looking at task manager, the CPU and memory looked fine.

However, when you tried to connect with SQL Server Management Studio, it took nearly 20 seconds to connect, open a query window, and run a simple DMV select query.

So I decided to look at the SQL Server wait stats; they showed trace-related types as the top three wait types.

Remote Trace Total Waits

This is a fairly typical query, but it focuses on the total wait time. I was curious how long each wait was taking. So I worked up a new query that gave me the average wait times.

Remote Trace Average Waits

This not only confirmed that the trace was causing things to slow down, but it also showed that the application was experiencing nearly 10 second wait times. Turns out, there was a piece of auditing software in the mix that was writing a trace to a file share on a remote virtual server.

Going one step further, I adjusted the query one more time to see which were the top resource waits. Every wait is split between signal waits and resource waits. Signal waits indicate CPU pressure. Resource waits mean SQL Server is waiting on a resource, such as a lock, latch, or IO.

Remote Trace Resource Waits

After much nail biting and cajoling we were able to redefine the trace to write to a local drive. Once we did this, the wait times dropped immediately, and the application responsiveness returned to what it had been previously.

Remote Trace Fixed Waits

When looking at SQL Server wait stats, be sure to slice and dice the numbers a few different ways to make sure you are seeing the complete picture. Don't just look at the total wait times, also check the average wait times, resource waits, and signal waits.


Here is the final query...


[sql]

-- Steven Ormrod
-- 2/25/13
-- waits - top waits
-- Version: 3.3
-- Source: http://stevenormrod.com/
-- License: Creative Commons. Attribution-NonCommercial CC BY-NC
-- http://creativecommons.org/licenses/by-nc/3.0/

/*
-- clear wait stats
dbcc sqlperf('sys.dm_os_wait_stats', clear)
*/

-- wait stats, cumulative, filtered

select
wait_type,
waiting_tasks_count as 'count',
wait_time_ms as 'total wait ms',
max_wait_time_ms as 'max wait ms',
cast(wait_time_ms * 1.0 / waiting_tasks_count as numeric(10, 2)) as 'avg wait ms',
wait_time_ms - signal_wait_time_ms as 'resource wait ms',
signal_wait_time_ms as 'signal wait ms',
cast(100.0 * wait_time_ms / sum(wait_time_ms) over() as numeric(5,2)) as 'percent wait time',
cast(100.0 * (wait_time_ms - signal_wait_time_ms) / sum(wait_time_ms - signal_wait_time_ms) over() as numeric(5,2)) as 'percent resource time',
cast(100.0 * signal_wait_time_ms / sum(signal_wait_time_ms) over() as numeric(5,2)) as 'percent signal time'
into #waits
from sys.dm_os_wait_stats
where waiting_tasks_count > 0
-- filter the 'harmless' waits
and wait_type not in (
'BAD_PAGE_PROCESS', 'BROKER_EVENTHANDLER', 'BROKER_RECEIVE_WAITFOR', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH', 'BROKER_TRANSMITTER',
'CHECKPOINT_QUEUE', 'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT', 'CLR_SEMAPHORE',
'DBMIRROR_EVENTS_QUEUE', 'DBMIRRORING_CMD', 'DISPATCHER_QUEUE_SEMAPHORE',
'FT_IFTS_SCHEDULER_IDLE_WAIT', 'FT_IFTSHC_MUTEX',
'KSOURCE_WAKEUP',
'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE',
'ONDEMAND_TASK_QUEUE',
'PREEMPTIVE_OS_AUTHENTICATIONOPS', 'PREEMPTIVE_OS_GETPROCADDRESS',
'REQUEST_FOR_DEADLOCK_SEARCH',
'RESOURCE_QUEUE',
'SLEEP_BPOOL_FLUSH', 'SLEEP_SYSTEMTASK', 'SLEEP_TASK', 'SQLTRACE_BUFFER_FLUSH', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', 'SQLTRACE_LOCK', 'SQLTRACE_WAIT_ENTRIES',
'TRACEWRITE',
'WAITFOR',
'XE_DISPATCHER_JOIN', 'XE_DISPATCHER_WAIT', 'XE_TIMER_EVENT'
)
go

select top 5
wait_type, [avg wait ms]
-- , [max wait ms]
--, *
from #waits
order by [avg wait ms] desc;
go

select top 5
wait_type, [percent resource time]
--, *
from #waits
order by [percent resource time] desc;
go

drop table #waits
go

[/sql]

Monday, May 6, 2013

Who Made the Backup

As DBAs, we often obsess over backups. When was the backup made, how big is it, how quickly can I restore, can I restore it, etc. But sometimes, we need to know who made a backup.

Shouldn't that be me? Well, of course, it should be you. But sometimes, it ain't.

A few months back, I was asked to troubleshoot a SQL Server where they were getting disk full alerts. After a few quick checks, I was able to see it was the backup LUN that was filling up.

When I looked in the folder, I noticed there was a FULL backup file with an unusual timestamp. Most of the backup files were all created around the same time of day, but there was one with a completely different time. Next, I looked in the job history and this matched what I saw at the file system level.

OK, so now I knew what caused the space alerts, but I still didn't know who had caused them. Time to dig into the msdb database. msdb is a system database that contains the backup and job history for your SQL Server. A quick query revealed who it was that ran the ad-hoc backup.

[sql]

-- look for who ran a manual backup
select top 30
s.database_name, s.user_name, s.backup_start_date,
m.physical_device_name
from msdb.dbo.backupset s
join msdb.dbo.backupmediafamily m
on s.media_set_id = m.media_set_id
where s.type = 'D'
and s.database_name = 'YOUR_DB_NAME_HERE'
order by s.backup_start_date desc
go

[/sql]

Who Made the Backup

Next, I wanted to know how the user had access to create the backup. Let's look at security. The first place I typically look is who has admin access.

[sql]

-- check server role membership
select l.name,
-- 'People with elevated access' as 'name',
l.sysadmin, l.securityadmin, l.serveradmin, l.setupadmin,
l.processadmin, l.diskadmin, l.dbcreator, l.bulkadmin,
l.createdate, l.updatedate
--, l.*
from sys.syslogins l
where l.sysadmin = 1
or l.securityadmin = 1
or l.serveradmin = 1
or l.setupadmin = 1
or l.processadmin = 1
or l.diskadmin = 1
or l.dbcreator = 1
or l.bulkadmin = 1
order by l.name;
go

[/sql]

Who has Admin Access

And there's my new best friend. Sometimes referred to as the Extra DBA, or People Who Can Get You Fired. In this case, there were several people who had been granted admin access. Time to start pruning some permissions.

The biggest hole on any system is extra people who have admin access. I see this all the time; perhaps a manager needs to run reporting queries, or developers need to maintain and troubleshoot an application.

When deadlines are looming and people are standing over your shoulder, the simplest fix often seems to be just granting the person admin access. You'll remember to remove it later, or when the app is deployed, right?
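
When you do get around to pruning, it is a one-liner. The login name below is made up, and the ALTER SERVER ROLE syntax is SQL 2012 and later; on SQL 2008 you would use sp_dropsrvrolemember instead.

[sql]

-- remove a login from the sysadmin role (login name is made up)
-- SQL 2012+ syntax; on SQL 2008 use: exec sp_dropsrvrolemember 'DOMAIN\SomeDeveloper', 'sysadmin';
alter server role sysadmin drop member [DOMAIN\SomeDeveloper];
go

[/sql]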

Tuesday, April 9, 2013

T-SQL Tuesday #41

T-SQL Tuesday

T-SQL Tuesday is a rotating blog series brought to life by Adam Machanic. Basically, a different person is chosen to host the topic each month. We all participate by writing a blog post on that topic. And slowly world peace ensues.

How I Came to Love Presenting

This month the host is Bob Pusateri and the topic is how you came to love presenting. In his invitation, Bob laments the fact that it has been almost two years since the last time he hosted T-SQL Tuesday.

Well, here I am lamenting the fact that it has been almost two years since I posted on T-SQL Tuesday. In fact, the last time I did it, he was the host. I guess there must be something about Bob.

Theatre

Drama Masks
For me, everything began with Theatre. All through middle school and high school I did Theatre; I even did a little in college. When I took my sabbatical to become a teacher for several years, one of the subjects I did was, yeah you guessed it, Theatre. I helped the Drama teacher run the department, put on plays, and did a few plays with the kids.

Tons of fun.

There is something really cool about taking on a role and letting it consume you, motivate you, and drive you to new places that you didn't realize were inside you. That, and the cast parties are lots of fun.

I also did lots of crew; set building, lights, sound, etc. For my money, the best troupes are the ones where the cast does crew as well.

SWT Computer Club

College - The best four or five years of your life!
I went to college at Southwest Texas and started off as an Anthropology Major / Music Minor. Somewhere along the way, I was seduced by the dark side of Computer Science and endless fun of flame wars on the university's mainframe message boards.

I became involved in the Computer Club and one year they made me the President. As part of this, I was the public face of the club for several years. I would organize and run the meetings, trick people into paying club dues, throw a few parties, and cajole speakers to come. One time we got Richard Garriott to come!

Really cool guy...

Teaching

Spicoli and Mr. Hand
While in college, I also worked as a Teaching Assistant. I taught the lab sections of CS1308 - Intro to Computer Science and CS3409 Intro to Computer Architecture.

At another point in my computer career, I took a sabbatical from IT and became a high school teacher. I taught Computer Science, Math, Robotics, and Theatre. I also helped found Akins New Tech High as a school within a school for AISD.

I tell ya, hanging around kids, you will never feel so old and so young all at the same time.

SQL Server

Now that I'm all grown up, I work with SQL Server; focusing on areas such as Clustering, Indexing, and Troubleshooting. One of the things that has really inspired me in my career is the SQL Community. It is so vibrant, welcoming, and supportive. Once you start getting involved and meeting people you realize you are part of a growing family who will help you anytime you need it, provide mentoring to help you grow, and an outlet for you to give back to the community that gives so much to you.

I'm talking about your local SQL Server User Group and SQL Saturday! Oh, and also that little thing known as SQL PASS.

Before I discovered these, I wasn't very happy with how my career was progressing, and not even sure I wanted to continue working with SQL Server. It was totally random that I learned about my user group, CACTUSS, and even more random when I learned about SQL PASS and SQL Saturday.

I attended a meeting, then another, then another, and pretty soon I started to realize that I was part of a SQL Family who wanted to help me fix my career and watch me grow.

After a while I took the plunge and signed up to present at one of the monthly meetings. I was a little nervous presenting to a room full of experts ready to pounce on every little mistake I made. But I soon realized that everyone there had the same questions and experiences that I had, and couldn't have been more supportive.

I was hooked.

So I reworked my presentation, adding stuff, removing stuff; and then presented it to another user group. During that time I attended my first SQL Saturday in Houston. Simply amazing. I couldn't believe all these folks were giving up their weekends to put together free training for me. Très cool.

SQL Saturday #97 - Austin
A few months later, Austin had its inaugural SQL Saturday. I immediately submitted my session, and I guess the judges were sleeping or drunk because they picked me. Now the pressure was really on. I reworked my session a few more times and presented it at a user group again.

When the big day came, I couldn't believe I had a full room and people asking questions; and no one threw tomatoes at me. After I finished, I had several people hanging around asking questions about my session and about problems they were experiencing in their environments. I traded emails and scripts with a few folks and even ended up hiring one of them the following year.

Get Out There

Get Out There
If you're looking to get more involved in the SQL Server Community there is no better way than to start attending your local SQL Server User Group and sign up to give a presentation.

Don't worry about being a beginner. I think you will find the User Groups to be appreciative and supportive environments where you can hone your presentation skills and your technical chops.

You might even give your career a boost to boot.

Monday, April 8, 2013

Looking in the Backup History

The other day I had a developer request a refresh from production from a date in the past. It seems they were testing a bug that cropped up and needed to replicate the way the system looked on a particular date.

Since this was beyond our local retention policy, we needed to restore the backup file from the virtual tape library.

Time to make friends with the Backup Administrator...

You want to tell your Backup Administrator the name of the server and the file, along with the date of the backup. This last part is slightly tricky. It is fairly typical for the file you are looking for to be on the following day's backup.

For example, if you wrote a SQL Backup file on January 1st, it probably isn't backed up until January 2nd. You may want to spend some time with your Backup Administrators to understand how your backups work and what the procedure is to get a file restored.

So how do you find out what the backup file was called two weeks ago? Query the msdb database, of course.

[sql]

-- look for a backup filename from the past...

select top 30
s.database_name, s.backup_start_date,
m.physical_device_name
--, s.*
--, m.*
from msdb.dbo.backupset s
join msdb.dbo.backupmediafamily m
on s.media_set_id = m.media_set_id
where s.database_name = 'YOUR_DATABASE_NAME_HERE'
and s.type = 'D'
order by s.backup_start_date desc
go

[/sql]

This query will return the names of the thirty most recent FULL backup files. You can modify this query as needed; one variation is shown below.
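
If you would rather zero in on a specific date instead of scanning the most recent backups, a small variation does the trick. The date below is just an example.

[sql]

-- variation: find the FULL backup taken on a specific date (example date)
select s.database_name, s.backup_start_date,
m.physical_device_name
from msdb.dbo.backupset s
join msdb.dbo.backupmediafamily m
on s.media_set_id = m.media_set_id
where s.database_name = 'YOUR_DATABASE_NAME_HERE'
and s.type = 'D'
and s.backup_start_date >= '20130315'
and s.backup_start_date < '20130316'
order by s.backup_start_date desc
go

[/sql]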

Backup History

Monday, April 1, 2013

SQL Server on Ubuntu

I'm excited by the announcement: Microsoft SQL Server is finally supported on Linux, specifically the Ubuntu distribution. Apparently, this is in response to the Chinese government's choice of Ubuntu as the state-sponsored operating system.

While the Minister of Technology initially looked at using MySQL in conjunction with Ubuntu, ultimately he wanted a database platform with the best support model, sources close to the minister are quoted as saying.

Then he tried looking at Oracle, but apparently the proper price point could not be derived. 大象爆炸式的拉肚子 (Da-shiang bao-tza shr duh lah doo-tze) which roughly translates to 'Costs too much, does too little' the minister was overheard saying during lunch recently.

Chaz Brockwell, lead user experience developer for the Silicon Valley start-up SighberCool exclaimed, 'This is the biggest development in the past eighteen months. Finally, Microsoft is opening up their flagship product to the rest of us!' His company is working on a new wearable-computing device that combines Ubuntu, SQL Server, and the iPhone.

The NASDAQ is up 17 points today in light trading, while the Hang Seng is trading down.

SQL Server on Ubuntu