Wednesday, December 21, 2011

Who Done It?

Have you ever had a production application or database go offline for no apparent reason? What would you do?

I recently encountered a situation with a client where a production application was not running and the database appeared to be inaccessible. Predictably enough, the developers were surrounding the DBA trying to get him to restart SQL services or possibly the entire server. It fixed it last time…


sp_who2 had been run, but instead of returning results, it gave output similar to the following:
Msg 952, Level 16, State 1, Procedure sp_who2, Line 89
Database 'DATABASENAME' is in transition. Try the statement later.

Then, when you tried to expand the database in SSMS, you would get the following error:

Database Not Accessible

dbcc checkdb had been run, and gave these results:
Msg 2520, Level 16, State 5, Line 1
Could not find database 'DATABASENAME'. The database either does
not exist, or was dropped before a statement tried to use it.
Verify if the database exists by querying the sys.databases
catalog view.

They were getting ready to look at restoring from a backup. Wait a second hoss…

DMVs to the Rescue

When I was asked to look at this, I went straight to the DMVs for help.  In fact, if you go back and read the error message, it tells you to do what I'm about to do next.

First, I queried sys.databases to verify the status of this database:


-- check status of databases
select name, state_desc
from sys.databases
where name = 'DATABASENAME';


This confirmed for me that the database was present, but offline.

Suspicious Minds

I have to admit, I was suspicious of the developers; they kept letting out little hints that they knew the database was offline before anyone had looked at it. So I did a little more digging…

Next, I ran another DMV query to find out what activity was happening on the server:


-- current activity on the server
select r.session_id, s.host_name, s.program_name, s.login_name,
DB_NAME(r.database_id) as 'Database', r.status,
r.command, r.percent_complete, st.text
from sys.dm_exec_requests r
join sys.dm_exec_sessions s
on r.session_id = s.session_id
cross apply sys.dm_exec_sql_text(r.sql_handle) st
where s.is_user_process = 1
and r.session_id <> @@SPID
order by r.session_id;


These results showed me that someone had just issued the following command:


-- the offending command
alter database [DATABASENAME] set offline;


Connecting the Dots

I was able to tell that a user had run this command from a desktop computer, using SSMS, and was logged in with the application’s SQL login credentials.  Suddenly I'm thinking of CLUE...

We were getting the ‘in transition’ error because several developers still had sessions connected to the database, blocking the ALTER DATABASE statement from completing.
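If you want to see who is holding things up, a quick look at the blocking chain helps. Here is a minimal sketch using sys.dm_exec_requests; the session numbers you see will, of course, vary:


-- who is blocking whom?
select r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
from sys.dm_exec_requests r
where r.blocking_session_id <> 0;


Any session that shows up in the blocking_session_id column is a candidate for a closer look.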

Once we had this basic information, we were able to have the Network and Server teams track down who the user was. At this point in time, all the developers that were surrounding the DBA disappeared and disavowed all knowledge of anyone running anything.

Kill 'Em All, Let the DBA Sort 'Em Out

Next, we killed the offending session:


-- kill the offending session
-- (52 is an example; substitute the session_id you found above)
kill 52;


After the session was killed, it was time to check the consistency of the database.

Run checkdb:


-- check database consistency
dbcc checkdb('DATABASENAME')
with all_errormsgs, no_infomsgs;


Everything looked good, so we were ready to bring the database back online:


-- bring the database back online
alter database [DATABASENAME] set online;


And of course, run a new FULL backup and save off the existing FULL and LOG backups in case they were needed later.
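Something along these lines does the trick; the backup path here is just a placeholder, so point it wherever your backups live:


-- fresh full backup after the incident (path is hypothetical)
backup database [DATABASENAME]
to disk = N'X:\Backups\DATABASENAME_PostIncident.bak'
with checksum, init;


The CHECKSUM option gives you an extra consistency check on the way out, which is a nice bit of insurance after an event like this.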

Post Mortem

In the post mortem, we discovered a few things...

The login passwords for DEV and PRD were set to the same value. This allowed the developers to obtain front door access to the production servers. Needless to say, this was immediately changed.

This also prompted some interesting discussions around security best practices and the need to verify backup files in case you ever need them.

Remember, disaster can strike at any moment. The key is to remain calm and work your way through the problem. Take every setback as a learning experience, and use it to make the customer’s environment better.


Also, resist the urge to throw someone under the bus. Rather, try to help each other understand the reasons for security and separation of duties and environments. Look at situations like this as a learning opportunity, not a burning opportunity.

Monday, December 19, 2011

Cloning a VirtualBox Guest

Cloning a Virtual Machine is a great way to leverage your virtualized environments and spin up test servers with ease.

First, create a base server to use for your clones. Try not to modify this one too much. Perhaps set the time zone and display resolution. If you change too much, you may end up modifying your clones later on.

Next, right-click on your base server and pick clone.

Clone the Base Guest VM

Give the clone a new name. Be sure to reinitialize the MAC address of your network cards.

Reinitialize the MAC Address

A full clone will make a separate copy of your base server. There will be no dependencies on the base server from this point on.

A linked clone will depend on your base, and will keep track of the differences between them. This can result in a great amount of disk space savings. However, your clone will be dependent on the base server.

Full Clone or Linked Clone

Hmmm...this is taking a while.  Let's go watch The 6th Day or perhaps The Island.

Wait of the Clones

OK, my clone is finished.  That was way easier than it is in the movies.

Your Clone is Ready

And there you have it!  Time for fun…

Wednesday, December 14, 2011

Highlight Important Emails

Do you find yourself overwhelmed by the volume of email you receive on a daily basis? Me too. Well, there’s not much I can do to help you stop it, other than say log off and stay logged off. Unfortunately, we all need to work to pay our bills, so we need another solution.

Here’s a quick tip I learned from LifeHacker that will help you filter the most important emails. This tip works on the theory that if an email is sent directly to you, then it is more important. If it was sent to a group that you are a member of, then it is less important.

While this heuristic may not cover everything, it does peel away a significant number of emails for me.

In Outlook, click the View menu, and then Click the View Settings button.

View Settings Menu

Next, click the Conditional Formatting button.

Advanced View Settings - Conditional Formatting

From there, you will be presented with the Conditional Formatting screen. Click Add to create a new Rule. I named mine ‘Highlight to me.’

Conditional Formatting

Next, click the Font button and decide what color you want to highlight your messages. I chose blue.

Choose the Font

Then, click the Condition button and set up the rule to filter your messages.

Define Your Condition

And voila, all messages that have my name listed in the To: or Cc: lines will highlight blue. Now, I can quickly scan and find all the messages that are of higher import.

Mail Highlighted Blue

I hope this tip helps you cut through the daily deluge of digital distractions that are depriving you of your destiny.  Code on!

Monday, December 12, 2011

Enabling Hardware Virtualization

Recently, I got a new server, and I've been dying to set up virtualization and create a new test lab.  I have 64-bit hardware and 64-bit Windows.  So, I downloaded and installed both VirtualBox and VMWare Server.  Next, I downloaded the Windows 2008 R2 SP1 64-bit ISO.

Imagine my surprise when I got the following messages when trying to set up the Guest OS:

Virtualization Error

And this:

VirtualBox Error

Even though my server was 64-bit all the way, I still needed to enable Hardware Virtualization at the BIOS level if I wanted to create 64-bit Guest Operating Systems.

I got the same message when I tried again with VMWare Server:

VMWare Error

And I got the same thing when I used Microsoft's Hardware-Assisted Virtualization Detection Tool.  Whew, that's a mouthful!

Hardware-Assisted Virtualization Detection Tool Error

The first thing I did was verify that my CPU actually supported Hardware Virtualization; just because you have a 64-bit machine does not mean it will handle Hardware Virtualization.

CPUID is a fantastic, free tool that serves many purposes: it will identify Hyperthreading, CPU Throttling, and Hardware Virtualization.

CPUID VT-x Error

What I'm looking for here is the presence of the VT-x Instruction Set.  Looks like I have it, so I'm golden.  Now, I just need to figure out how to enable it.

Another way to verify if your CPU supports Hardware Virtualization is to check the manufacturer's website.  Both Intel and AMD list their processors and allow you to verify this.


In order to enable Hardware Virtualization you will need to change some settings in your BIOS.  The problem here is, every computer's BIOS will be a little different.  You may have to hunt around in a non-obvious location to find it.  Mine was hidden under Security Settings.  I guess it's like they always say, it's in the last place you look...

Happiness is a Virtualized CPU

Once I took care of that, it was Blue Skies Again.

Hardware Assisted Virtualization Detection Tool Success

Now to work on creating my Virtual Sandbox...

Monday, November 28, 2011

Registered Servers 101

Registered Servers are a great tool for keeping a list of all the servers you work with. This can be useful both for the servers you use every day, and for those servers you only occasionally touch. How often have you had to check a rarely used server only to forget the exact name of the server and instance?

Registered Servers come in two varieties, Local Server Groups and Central Management Servers. For today, I will cover Local Server Groups. In a future post, I will discuss Central Management Servers.

Local Server Groups

Local Server Groups are great for keeping a list of all the servers in your environment. You can group the servers by project, OS version, SQL version, time zone, or any other grouping that makes sense to you. Additionally, a server may be registered in more than one group, and groups can be nested within one another.

Creating Local Server Groups

View - Registered Servers

The first step is to tell SQL Server Management Studio that you would like to work with Registered Servers. Go to the View menu and select Registered Servers. Alternatively, you could use the Keyboard Shortcut Ctrl+Alt+G.

Once you do that, you will see two choices: Local Server Groups and Central Management Servers. As I stated previously, for this post we will focus on Local Server Groups. The good news is, everything we learn about Local Server Groups will be applicable to Central Management Servers, as well.

If you select New Server Group, you will be presented with a simple dialog to enter your group details.

New Server Group - Step 1

New Server Group - Step 2

After you have created your Server Group, you can start registering servers into it.

New Server Registration - Step 1

New Server Registration - Step 2

Here is an example with several groups and servers...

Local Server Groups Example

Exporting and Importing Servers

Now that you've spent all this time building up your server list, wouldn't it be great if you could share it with others? Thankfully, this is very easy. To export your server list, just right-click on one of the folders or sub-folders and follow the prompts.

Export Server List - Step 1

Export Server List - Step 2

To import the server list, follow similar steps. Right-click on Local Server Groups and follow the prompts.

Import Server List - Step 1

Import Server List - Step 2

Multi-Instance Queries

Now that we've gone through the hassle of registering all our servers, it is time to put that to good use. The primary reason to create Registered Server lists is to run multi-instance queries. These are great for auditing systems or performing administrative tasks.

To do so, simply right-click on one of your groups and select New Query. Alternatively, you can select the group, then double-click on one of your query.sql files.

Multi-Instance Query - Connection

There are a few things to notice that are different from a normal query window.

First, the footer is pink; this alerts you to the fact that you are connected to multiple servers at the same time. SELECTs are generally fine, but double-check before you run any INSERTs, UPDATEs, or DELETEs.

Second, it tells you how many servers were in the group it attempted to connect to, and how many it was able to reach. In this example, I was able to connect to ten out of ten. If you see fewer than you expected, make sure you find out why those servers were not available.

Third, you will see an indicator telling you which Server Group you have connected to. This is where a good naming convention comes into play. When you are creating your groups make sure the names make sense both to you and to your coworkers.

Multi-Instance Query - Results

Here, you will notice that the results from each server have been stitched together in one results window. From here, you can save the results to Excel for better formatting or reporting.
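As a simple illustration, an inventory query along these lines returns one row per instance when run against a group (SSMS prepends a Server Name column to the stitched results for you):


-- quick version inventory across a server group
select serverproperty('ProductVersion') as product_version,
       serverproperty('ProductLevel') as product_level,
       serverproperty('Edition') as edition;


Queries like this are handy for auditing which instances are behind on service packs.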

Pros and Cons

The pros are great. You can keep track of all your servers. You can run queries or updates against multiple servers at the same time. Groups can be nested to mimic the hierarchy of your organization. Servers can even be in multiple groups at the same time.

So what's not to love? Local Server Groups are just that, local. That means all of your hard work means nothing if you happen to log on to a different server. Or your coworker's. Or...

Of course, you can export your list, import it on other servers, and share it with your coworkers. But just like anything you print these days, it is immediately obsolete. Any time you add or update a server, you will need to export that change to every other server or coworker that is using your list.

A Better Way

Never fear, there is a better way. Central Management Servers take everything good from Local Server Groups and none of the bad. In a nutshell, these are Server Lists that you can share among your organization and are always kept up to date.

Next time, we will look at Central Management Servers.

Monday, November 14, 2011


The past twelve months have been fantastic for me, both personally and professionally.  Heck, I even finished remodeling my house.  I embraced each positive experience and fed it into the next one down the line.  I would never have had such a good year if it were not for the wonderful people I met along the way.

Thanksgiving is as apropos as can be.

PASS Summit

Last year, I was fortunate enough to go to the PASS Summit.  PASS, as it is colloquially known, is a world-wide conference of SQL Professionals.  It is jam packed with speakers, technical sessions, networking, parties, and hot tubs.  Although, not necessarily all at the same time.

I attended tons of great sessions, many being given by the same bloggers I had been reading; one of whom was Trevor Barkhouse (blog | twitter).  On the meet and greet day, I met Wes Brown (blog | twitter), who runs the local CACTUSS group in Austin, and he mentioned the SQL Saturday conferences.

I left Seattle absolutely loving the city, ready to move there, and recharged about my career with SQL Server.  I have been working with technology for over fifteen years, but I have never experienced the camaraderie that I felt with the SQL community.

SQL Saturday

After learning about SQL Saturday, I started looking up when and where the next one was going to be.  SQL Saturday is an almost-free mini-PASS conference put on by your fellow DBAs in the field.  It is on a rotating schedule that moves from town to town, and even to other countries.

The first SQL Saturday I attended was in Houston, where I saw a session listed by Trevor Barkhouse.  I recognized his name from PASS, so I attended his session, asked a bunch of weird questions, and won a book at the end: SQL Server 2008 Internals and Troubleshooting; which I proceeded to read furiously...

After that, I attended the Dallas SQL Saturday and met up with Trevor, and others, again.  The SQL community was really beginning to feel like a family.


To say that it was a blessing to attend the SQLskills classes this year would be an understatement.  Paul Randal (blog | twitter), Kimberly Tripp (blog | twitter), Bob Beauchemin (blog | twitter),  and Jonathan Kehayias (blog | twitter) offer, hands down, the best training I have ever had.  Period.  Forget about SQL training, technical training, or any other training.  I have never been so challenged, felt so dumb, and felt so smart; all at the same time.

As a special treat, I was able to attend a class where Brent Ozar (blog | twitter) was one of the additional instructors.  His is one of the first blogs I started reading when I started out as a DBA.

If you can only do one thing as a DBA next year, attend a SQLskills class.  You will come away with a sore brain, a list of projects to implement in your shop, and a ton of great scripts and demos to review later.

Grandma's 99th Birthday & Summer Mummers

This summer, my grandmother turned ninety-nine.  So, my cousin organized a secret family reunion to surprise her.  This was a fantastic event and we had people come out of the woodwork for this.  It was great to see her and other family members that I haven't seen recently.  It amazes me how lucid she still is; I hope some of her genes rub off on me.

While in Midland, I also got to check out Summer Mummers.  Summer Mummers is a gin-u-wine Wild West Vaudeville Show.  It is an incredible experience and well worth the drive through West Texas to see it.  In fact, people travel from other countries every year to see it.

If you love theatre, do yourself a favor and go see this show at least once in your life.  Once you do, you will wonder how you ever lived before.

SQL Saturday Austin

With some encouragement from Wes Brown (blog | twitter) and Trevor Barkhouse (blog | twitter), I went for broke and submitted a session for SQL Saturday #97 in Austin.  I did a trial run of my presentation at the local CACTUSS user group, and even though I went way over time, I got plenty of good feedback.  So I edited and cut my way down to 60 minutes and re-worked my demos and went for it.

SQL Saturday Austin was like a class reunion for me.  I saw several of my friends from SQLskills class: Trevor Barkhouse (blog | twitter), Vicky Harp (twitter), and Rudy Rodarte (blog | twitter | youtube).  All of us as presenters!

By taking the leap of faith and becoming a speaker, I met tons of cool folks and made some good, new friends.  Doing this really charged my batteries and primed me for the next great chapter in my career development.


November brought on another change; after much consideration, I have decided to move on from Whole Foods Market.  During the past four and a half years, I have had a wonderful tenure.  I have made some great friends, worked on some great projects, and gained tons of valuable experience.  I feel very fortunate for having worked there, and am grateful for all the opportunities they have afforded me.  Between data center migrations, multi-instance clustering and consolidation, and some interesting disaster recovery situations I feel I could not have had a better training ground as a Database Administrator.  And the location's not bad, either.  It's hard to beat 6th & Lamar; downtown Austin at its finest...

In a couple of weeks, I will begin the next chapter of my career in the healthcare industry with a global provider of orthotics and prosthetics.  I am excited to be working with an organization that does so much to improve the lives of people all around the world.  I will continue my work with clustering and Denali.  While I will no longer be downtown, the Domain is a pretty cool location, as well.  I'm sure I will find some cool places to explore...

SQLskills Mentoring

Last week, I pretty much won the lottery.  Around mid-week, Jonathan Kehayias (blog | twitter) reached out to tell me about a new mentoring program that SQLskills is starting, and oh by the way, would I be interested?  They say there is no such thing as a dumb question, but I'm not so sure...  :-)

Needless to say, I was interested; and floored, honored, humbled, etc.  I am truly touched that Jonathan and Paul reached out to me and are willing to take the time to guide me along my personal and professional goals for the coming year.  My hope is that I will be able to pay this forward to someone else down the line.

So my next mission is to sort out what exactly are my goals for two thousand and twelve.  I guess I'm not gonna worry about that Mayan Calendar thing any more.  But I do think December 20th, 2012 will be a good day to buy some stocks...  :-)

Everything's Coming up Milhouse

When I look back upon this great year, it amazes me how things have developed.  If you don't think your life is going as well as you would like, you DO have the power to change things.  It takes a lot of work and is not easy, but you can create your own luck and make the life that you would like to have.  You will not always get everything you want, but as Wayne Gretzky said, 'You miss one hundred percent of the shots you don't take.'

Just go for it.

Monday, October 3, 2011

SQL Saturday #97 - The Texas Trifecta

I do it for the schwag...
This weekend, I rounded out the SQL Saturday Texas Trifecta: Houston, Dallas, and Austin.

Earlier this year, I attended the SQL Saturdays in Houston and Dallas.  These were both fantastic events; I gained some excellent technical training and I made some great connections with other DBAs in the field.

When it was announced that Austin was going to host a SQL Saturday, I immediately jumped at the chance to present.  With this move, I went from being a spectator to a spectacle.

SQL Server Amnesia

I gave my presentation on SQL Server memory usage and how to troubleshoot problems that arise from it.  I took my attendees through the basics of using Perfmon and DMVs to peek under the hood and see what SQL Server and Windows are doing with all the memory.  Thankfully, all my demos worked as planned.  When I did this presentation at the CACTUSS User Group, my laptop locked up, and I had to pop the battery out.  How's that for memory pressure?

No one threw tomatoes at me, and several people asked me some extended questions afterwards. So, I guess that means I did alright.  If you did see my session, please take a moment to give me some feedback.  If you would like the slides or sample code, they are posted on the SQL Saturday #97 website.

I was a little nervous making the jump from presentations at the CACTUSS User Group to a SQL Saturday.  In the end, it was a good decision and I feel excited about having taken the plunge.  I see this as an opportunity to step up my game and try to give a little back to the community that has helped me hone my DBA skills.

The Sessions

After presenting my session, I checked out some of the other great speakers.  Russell Johnson (blog | twitter) gave a nice overview of the various options that are available for HA and DR in our ever-futile pursuit of a No-Downtime Architecture.

SQL Saturday #97 Austin
Then it was time for lunch, so I stowed my laptop, grabbed lunch, and chatted with some of the other attendees.  Lunch is also a great time to check out the vendor and sponsor booths.

After lunch, I checked out Jim Murphy's (blog | twitter) demo of Denali's Always On Architecture.  He took us through the basics of how it works, and then showed off a demo of Always On in action.

Next, I attended Joe Celko's dive into Advanced Table Constraints.  He took us through some of the more obscure options for constraints, and discussed how these can assist the query optimizer.

To finish out the day, I learned about the potpourri of management TLAs that are available: CMS, PBM, EPM, MDW, UCP, and DAC.  Nancy Hidy-Wilson (blog) took us through the basics of each of these tools and showed us how they all fit together to help you manage your enterprise.

Vendors and Sponsors

In the foyer, there were booths from several vendors: Idera, Quest, Fusion-IO, Confio, and Texas Memory Systems.  Included with your registration were several raffle tickets that you could use at the vendor tables to try to win some cool prizes: Kindles, iPads, Gift Cards, Software Licenses, and more. Several of these vendors had free versions of their tools available to try.

I really have to commend Idera here, as they went above and beyond in their sponsorship of the event: running the registration, manning the info booth, and giving away some great prizes.

Speakers Dinner and After Party

The Speakers Dinner was a nice event where we got to meet the other speakers, finalize any last minute changes to the event, and blow off a little steam before the big event.  Wes Brown & Co. (blog | twitter) treated us to some local grub at Iron Works BBQ.  I really enjoyed chatting with some of the more experienced speakers and getting some tips.  It was also cool meeting a few of the other first timers and seeing I wasn't the only nervous one in the bunch.

We had the After Party at the same location, and this was a good opportunity to reflect on the day's events and compare notes with speakers and attendees alike.  I met some really cool people and I'm looking forward to seeing my new friends again.


I hope this is shrink to fit...
If you have never attended a SQL Saturday, I encourage you to check the schedule and see when we'll be coming to your town.  When I look back from January, when I attended my first SQL Saturday in Houston, to ten months later, presenting at my first one in Austin, it amazes me what a good year 2011 has been for me.

Between learning about some cool tools, picking up some good technical knowledge, and most importantly making some new friends, I have been very fortunate indeed.  Best of all, I finally got a coveted SQL Saturday shirt to call my own.

Wednesday, June 22, 2011

CACTUSS June 2011

Thanks for the Memory

Last night, I had the good fortune of being selected to present at the local SQL Server User Group, CACTUSS.

This was my first presentation with the user group, and my first one on SQL Server, as well.  I have given technical presentations before, but it has been a couple of years.  So I was a little rusty when it came to pacing and how much content to try to include.

SQL Server Memory

My topic for the night was how SQL Server uses memory.  As a back story, I spoke about some of my experiences supporting and troubleshooting SQL Server over the years.  From there I rolled it up into some techniques and queries to spot problems.

Bit Off More Than I Could Chew

One comment I received, and I agree with, is that while I had plenty of good information, perhaps I could break it up into multiple presentations.  Even after removing several slides and demos I still went over time.

The other thing I realize is that instead of saving all the demos for the end, it may have been more effective to intersperse the demos within the presentation.  Present Topic 1, Demo Topic 1; Present Topic 2, Demo Topic 2, etc.

My audience was most gracious and stayed the entire time.  The best compliment for me was that several people were taking notes on what I was showing them.  Nice.  :-)

Disaster Strikes

As much as I practiced my presentation and related demos, I was not able to keep Mister Murphy at bay.  Once I adjusted my Buffer Pool to introduce some Memory Pressure, I was no longer able to connect and query the database.  I had to laugh and tell the audience that I had just created some true Memory Pressure.

Slide Deck

If you are interested, the slides from my presentation, SQL Server Amnesia, are available.  Unfortunately, it doesn't look like the free, hosted version of WordPress will let me upload a .ZIP or .SQL with all of my demo scripts.  My apologies, I'm still new at blogging.  I may publish the code later as a separate post.


If you were there and would like to leave me some feedback, feel free to leave a comment. Alternatively, you can use SpeakerRate to give me some feedback.

Club News

Before the presentation, Wes Brown (blog | twitter) gave us an update about the South Austin SQL Users Group and SQL Saturday Austin.  It looks like our space for the South Austin Group fell through for this month, so now we are hoping to try again for next month.

Likewise, the struggle for a SQL Saturday Austin got another twist.  It turns out the date we were shooting for has several other regions hosting a SQL Saturday at the same time.  So now, the official rumour date is December 3rd.  Cross your fingers...

Wednesday, June 1, 2011

Lamborghinis and Pintos

After working hard all summer and saving your pennies you finally have enough money to buy a Lamborghini.  So you pedal downtown to the dealership and trade your jar of coins for a shiny new Fasterosa.  While driving home, you start to notice some bondo and shellac flaking off your pimp ride.  By the time you pull back into the driveway, you realize what you got was a Pinto.  Somebody done got swindled, and it was you!

CPU Throttling

CPU Throttling can occur when your Operating System and BIOS are set to Power Savings mode.  On certain types of computers, DEV boxes and desktops for example, this may be OK.  However, this is not something that you typically want on a Production Database Server.


CPUID Utility
The best way to verify if you have CPU Throttling is to use the CPUID utility.  CPUID is a free utility that will give you correct information about your CPU speeds, cache, and memory.  It will even tell you which memory slots are in use and the specifics for each particular stick of RAM.  You can also use it to determine if you have Hyper-threading enabled.

Performant Green

For many organizations, being green is part of their mission statement.  Perhaps it is even part of your own personal philosophy.  However, as a DBA you must consider the cost / benefit analysis of Power Savings versus Performance.

Are these DEV servers, or do they run an application critical to your business?  Would slow performance cause safety problems for anyone who depends upon your data?

Licensing Costs

Another important factor is licensing and hardware costs.  SQL Server Enterprise Edition costs about $30,000 per processor.  How much do your servers and racks cost?

Add up all of your servers and CPUs and do a little math.  For example, say you have four servers with two CPUs each in your onsite data center, and another four matching servers in your offsite data center.  That makes for a total of 16 CPUs, with a licensing cost of approximately $480,000.  Let's call this M, for money.

CPU Throttling
Next, figure out just how much throttling is going on.  Launch CPUID and look at the CPU tab.  First, look for the section named Specification; this is what you paid your hardware vendor for.  Let's call this C, for CPU speed.  Next, look for the section named Core Speed; this is what your systems guys are giving you.  Let's call this T for throttling.  Once you have those two numbers, it is time for some tricky division.

OK, let's find out what percentage of your capacity you are actually using...

True Speed = T / C * 100

Looking at my example numbers, I would have:

True Speed = 1600 / 2800 * 100

True Speed = 0.5714 * 100

True Speed = 57%

Now, let's use that number to find out how much money you are throwing out the window...

Wasted Money = (1 - T / C)  * M

Looking at my example numbers, I would have:

Wasted Money = (1 - 1600 / 2800) * 480000

Wasted Money = (1 - 0.5714) * 480000

Wasted Money = 0.4286 * 480000

Wasted Money = $205,714.29

Since we are true geeks, let's use SQL Server to do the calculations for us.


-- CPU Throttling Analysis

declare @CPUSpeed as numeric;
declare @ThrottlingSpeed as numeric;

set @CPUSpeed = 2800; -- CPUID Specification
set @ThrottlingSpeed = 1600; -- CPUID Core Speed

-- calculate how much throttling is going on
select cast(cast(@ThrottlingSpeed / @CPUSpeed * 100 as numeric(5,2)) as varchar) + '%'
as 'What your cruise control is set to...';

-- now, let's factor in your licensing costs...

declare @LicensingCost as numeric;

-- add up your total licensing costs
set @LicensingCost = 480000;

-- figure out how much money you are wasting
select '$' + cast(cast((1 - @ThrottlingSpeed / @CPUSpeed) * @LicensingCost as numeric(10,2)) as varchar)
as 'How much bread you''re wasting...';



CPU Throttling Analysis

Business Cost

How much does it cost your business when critical processes take longer to complete?  Will you lose sales?  Will patients receive the care they need?

Are there costs associated when certain processes are not able to finish in a short amount of time?

If a customer has to wait on your application longer than the competition, then you may lose that customer.

Only you can answer these questions; they will be different for every business.

This one is a little harder to quantify.  After you have gotten management's attention with the money figures from above, use this angle to tie everything back to the business.  Say it with me, you are not a DBA, you are a Business Partner.

If you can create a simple spreadsheet or PowerPoint presentation which illustrates this, then you will have a much easier time of convincing management of your proposal.

CPU Utilization versus CPU Speed

One misconception is that if you do not have high CPU utilization, then you do not need to worry about CPU throttling.  If you look at my example, you will see that we are not maxing out the CPU, so why worry about it?

CPU Utilization
CPU Utilization and CPU Throttling are two different ideas and have different effects upon the system.  CPU Utilization tells us how many of the available CPU cycles we are using.

Consider a highway.  Let's say we have a six-lane highway, but only three lanes have traffic in them.  This is 50% utilization.

Next, let's consider the Speed Limit.  If the same six-lane highway has a speed limit of 35 MPH, then it will take us a while to get out of Dodge.  Remember the Lamborghini you bought?  If the engine has a governor that keeps you legal, then you may as well have bought a Pinto.  That is throttling.

So, even though your utilization is low, i.e. lots of lanes are open, you may have the cruise control set to half of what the car is capable of.  You have lots of cycles available for processing; they're just running at half speed.
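If you want to see this in numbers, SQL Server keeps recent CPU utilization snapshots in the scheduler monitor ring buffer.  This is a common community query pattern rather than a documented interface, so verify the XML field names against your version before trusting it:

```sql
-- recent CPU utilization snapshots (sampled roughly once a minute)
select top (10)
    record.value('(./Record/@id)[1]', 'int') as record_id,
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') as sql_cpu_pct,
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') as idle_cpu_pct
from (
    select convert(xml, record) as record
    from sys.dm_os_ring_buffers
    where ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
      and record like '%<SystemHealth>%'
) as rb
order by record_id desc;
```

Low numbers here do not mean you are safe; remember, those idle lanes may still have a 35 MPH speed limit.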

Tricking Out the Camaro

The fix for this is pretty easy, but it takes two steps.  First, you will need to check your BIOS settings and see what your Power Management settings are currently set to.  Each manufacturer is different, so my screenshot may not reflect the hardware you have in your shop.  However, it should be fairly similar.  The bad option is highlighted in red, while the good option is highlighted in green.

BIOS Power Savings The Bad with the Good

Next, in the Control Panel, select Power Options and make sure High Performance is selected.

Power Savings Turned Off
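If you have a pile of servers to check, you can sketch a quick verification from SQL Server itself using xp_cmdshell and the powercfg utility.  Note this assumes xp_cmdshell is enabled on the instance, which many shops (rightly) lock down; if it is not, just run powercfg from a regular command prompt instead:

```sql
-- show the active Windows power plan from inside SQL Server
-- assumes xp_cmdshell has been enabled via sp_configure
exec xp_cmdshell 'powercfg /getactivescheme';
```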

Trust, but Verify

Once you have taken care of these two steps, run CPUID again and verify that you are getting all the CPU that you have paid for.  Have fun, and enjoy that Lamborghini; you've earned it!

Monday, May 30, 2011

Memorial Day 2011

Today is Memorial Day.  The purpose of this holiday is to commemorate the soldiers who have lost their lives while serving their country.  Please take a moment to consider and be grateful for the things you have in your life, and have a little thought in your heart for those who have been lost.

I would like to thank the Marine Corps for helping to fund my way through college and giving me the drive to achieve the things I have in my life.

Memorial Day is also used to mark the start of the Summer vacation season.  So it's time to get the Family Truckster headed to the beach.  I hope everyone is enjoying the day off with family and friends, and of course, digging into some good BBQ!

Wednesday, May 25, 2011

SQL Saturday #63 Dallas

SQL Saturday #63 Dallas Schwag

This was my second SQL Saturday after attending the one in Houston earlier in the year.  I was curious to see if this one would be as awesome as my first.  Now, I am excited about the prospect of a Texas Trifecta in Austin later in the year.

SQL Saturday is an almost-free, all-day event dedicated to all things SQL Server.  I say almost-free since you will still have travel costs: hotel, gas, rental car, meals, etc.  However, I think this is the best bang for the buck that you will find anywhere.


Dallas' SQL Saturday was held on April 2nd at the Region 10 Education Service Center.  All of the conference rooms were very nice and had adequate audiovisual equipment.  However, in between the sessions, the hallways and common areas were a little cramped.  I think it would have been better if a few more conference rooms were available.  This way, the vendor and networking areas could have been hosted there, thus alleviating some of the congestion.


Friday, before the event, Idera hosted a casual meet and greet at Wizard's Sports Cafe.  This was an excellent opportunity to meet your fellow attendees, have a cold beer, and display your lack of pool skills.

SQL Saturday itself is a great way to meet other DBAs from across the state and the country.  You can talk shop, trade horror stories, and even get feedback on your resume.

Saturday, the official networking event was held at Dave and Busters.  There was free food to munch on, other attendees to trade notes with, and presenters to pepper with more questions.


SQL Saturday #63 Schedule
There were a total of forty-two sessions covering seven different tracks, so there was a wide variety of topics to choose from.  Some sessions were of the same quality you would find at SQL PASS, while others were from beginning speakers looking to polish their craft.  I think it is wonderful that there is a hospitable venue for the aspiring speaker.


I began my morning with a deep dive on Solid State Storage with Wes Brown (blog | twitter).  He took us through the differences amongst the various vendors.  Additionally, we learned about some of the gotchas that come along with flash storage, and what the difference between enterprise and consumer grade was.

Next, I took in Kevin Kline's (blog | twitter) presentation on SQL Internals and Architecture.  This was a lively show and the room was literally standing room only.  Afterwards, Kevin was very gracious and took the time for some one-on-one questions from me.

For the afternoon, I checked out Kevin Boles' (blog | twitter) session on Parallel Query Execution.  He gave us all the basics and showed us what all the options available with sp_configure mean.  In addition to sharing his technical skills with us, he also displayed his sense of humor while dealing with a heckler.

To close out the day, I saw Suresh Kandoth (blog | twitter) give a lesson on Database Availability and Recovery.  This is an area that we all have to deal with from time to time.  Having some knowledge of the fundamentals will help make things easier when the time comes.

Deep Ellum Arts Festival

Deep Ellum Music
As luck would have it, the Deep Ellum Arts Festival was being held on the same weekend as SQL Saturday.  Before checking into the hotel, I decided to explore a little bit.  Several streets in Deep Ellum were blocked off to traffic and pedestrianized.  There were multiple stages, and several local bands were playing a variety of styles.  Additionally, there were local artists and vendors with booths set up to showcase their wares.  While the event itself was free, you did have to pay for the concessions.  Wandering the streets, listening to some local bands, drinking a cold brew...what a great way to get ready for SQL Saturday.


If you have never been to a SQL Saturday, then you are missing out.  It is a great way to get some nice training, make some new friends, and re-energize yourself about SQL Server.

Tuesday, May 24, 2011

Heaps of Heaps

Recently, I was looking into a stored procedure that was running poorly.  As part of my analysis, I examined the underlying tables and their indexes.  By using Kimberly Tripp's (blog | twitter) rewrite of  sp_help, I was able to quickly determine that one of the tables was a heap.


A Heap is a table without a Clustered Index.  This is bad for a few reasons:

  • When you update a Heap, you can get forwarded records

  • Every nonclustered index on a Heap must use an 8-byte RID (row identifier) to locate rows

  • Forwarded records and RID lookups can compromise read performance
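The usual cure for a heap is to give it a clustered index.  As a sketch (the table and column names here are made up for illustration), it is a one-liner:

```sql
-- hypothetical example: turn the heap dbo.OrderDetail into a clustered table
-- favor a narrow, unique, ever-increasing key where you can
create clustered index CIX_OrderDetail_OrderDetailID
    on dbo.OrderDetail (OrderDetailID);
```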

How Many Heaps in a Pile

Like most things in nature, when there is one, there is another, and another, and another...

Knowing this, I set about to discover how many other tables in this database were heaps.

First, let's look at our overall count of heaps in relation to the total table count.


-- percentage of heaps
declare @heapCount numeric;
declare @tableCount numeric;

select @heapCount = count(t.object_id)
from sys.tables as t
join sys.indexes as i
on t.object_id = i.object_id
where i.type_desc = 'HEAP';

select @tableCount = COUNT(*) from sys.tables;

select @heapCount as 'Heap Count',
@tableCount as 'Table Count',
CAST((@heapCount / @tableCount * 100) as numeric(5, 2)) as 'Percentage of Heaps';


Next, let's enumerate all the tables that are heaps.


-- list all the heaps in the currently selected database
select OBJECT_NAME(t.object_id) as 'Table Name', i.type_desc
from sys.tables as t
join sys.indexes as i
on t.object_id = i.object_id
where i.type_desc = 'HEAP'
order by OBJECT_NAME(t.object_id);


Time to Make the Clusters

What I found was startling to say the least.  On this particular server, all three databases contained quite a few heaps.

Heaps of Heaps

Obviously, this is going to be where I start my tuning efforts.  After seeing how many heaps existed in a database I had never seen before, I think it would be wise to make this a standard check when first looking at any unfamiliar database.
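While you are checking for heaps, it can also pay to measure how far the forwarded record problem has progressed.  Here is a sketch using sys.dm_db_index_physical_stats; be warned that the DETAILED scan mode reads every page, so run it off-hours on a large database:

```sql
-- heaps with forwarded records in the current database
-- forwarded_record_count is only populated in SAMPLED or DETAILED mode
select object_name(ps.object_id) as 'Table Name',
       ps.forwarded_record_count,
       ps.page_count
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'DETAILED') as ps
where ps.index_type_desc = 'HEAP'
  and ps.forwarded_record_count > 0
order by ps.forwarded_record_count desc;
```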

Monday, May 23, 2011

Clearing the Plan Cache

While at SQL Saturday #63 in Dallas, I got some questions about clearing the plan cache.  This evolved into an email discussion, which resulted in my writing up some example scripts.  Since I received some good feedback, I thought this would be a good opportunity to turn them into a blog post.

Warning: These scripts should almost never be run on a production system.  These are meant for debugging and testing on DEV.  If you do run these on PROD, be sure to understand the consequences.

I could see running these on PROD in a few limited cases.  For example, if you have a query that is stuck with a bad plan in cache, you can use these scripts to remove that individual plan.  Another possibility would be to clear out the adhoc plans, perhaps just after you have enabled the optimize for adhoc workloads setting.

Examining the Plan Cache

The Plan Cache is where SQL Server stores the Execution Plans for any queries or stored procedures it has run.  The purpose of this is to enable SQL Server to reuse Execution Plans for each subsequent run of the query or stored procedure.  This allows SQL Server to scale up as more requests begin to pile on.

To examine the Plan Cache you can start by querying the sys.dm_exec_cached_plans DMV.


-- look at the plan cache, list all plans
select * from sys.dm_exec_cached_plans;


This is not very useful by itself.  In order to make this query a bit more useful, we can link in either the Query Text or the Query Plan by using two other DMVs: sys.dm_exec_sql_text and sys.dm_exec_query_plan respectively.


-- link in the sql text
select decp.*, dest.text
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_sql_text(decp.plan_handle) dest;

-- link in the query plan
select decp.*, deqp.query_plan
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_query_plan(decp.plan_handle) deqp;


And if you wish to see both the Query Text and the Query Plan together, you can link multiple DMVs in the same query.


-- link in both
select decp.*, dest.text, deqp.query_plan
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_sql_text(decp.plan_handle) dest
cross apply sys.dm_exec_query_plan(decp.plan_handle) deqp;


Clearing the Plan Cache

As I work through clearing the Plan Cache, I'm going to start with the broadest example, and then get more granular as we go along.  If you need to clear the entire Plan Cache, you can do so with a DBCC FREEPROCCACHE command.

I feel the need to issue another warning here, and reiterate that this is almost always a bad idea on a production system.


-- clears everything in the plan cache
dbcc freeproccache;


Clearing by Database

If clearing the entire Plan Cache seems a bit excessive, as well it should, then you can get a wee bit more focused by only clearing the plans for a particular database.  This can be achieved by using the DBCC FLUSHPROCINDB command.  This is an undocumented command in SQL Server 2008; however, you can read about it under the SQL Server 2005 documentation.

As with clearing the entire cache, this is probably not something you want to do with a production system unless you have a good reason.  You may have one database that is a special case, which needs to be tamed.


-- clear the plan cache for a particular database
declare @v_DBID smallint;

set @v_DBID = DB_ID('AdventureWorks2008R2');

dbcc flushprocindb (@v_DBID);


Clearing by Object Type

Next, let's look at clearing the Plan Cache for a particular type of object.  By using DBCC FREESYSTEMCACHE we can choose to clear out the Adhoc Plans, Stored Procedure Plans, Views, or Extended Stored Procedures.

Most likely, the ones you will be interested in are Adhoc Plans and Stored Procedures.


-- clear plans for a particular type of object

-- clear sql plans: adhoc sql, prepared statements
dbcc freesystemcache('SQL Plans');

-- clear object plans: stored procedures, functions, triggers
dbcc freesystemcache('Object Plans');

-- clear bound trees: views, constraints, defaults
dbcc freesystemcache('Bound Trees');

-- clear extended stored procedures
dbcc freesystemcache('Extended Stored Procedures');


Like a broken record, I gotta remind you that it is a bad idea to do this on a production system.

Optimize for Adhoc Workloads

The primary reason you may wish to clear the Adhoc Plans from your cache is if you have just enabled the SQL Server setting 'optimize for adhoc workloads'.  This setting tells SQL Server not to cache the Execution Plans for Adhoc queries until they have been run more than once.  This helps keep Plan Cache Bloat under control.

You can enable this setting by using sp_configure.


-- show advanced options - enable all options to be seen / set
use [master]
exec sp_configure 'show advanced options', 1
reconfigure with override

-- optimize for ad hoc workloads
sp_configure 'optimize for ad hoc workloads', 1
reconfigure with override

-- verify change
sp_configure 'optimize for ad hoc workloads'


Once you have enabled this setting, the Single-Use Adhoc Plans will remain in cache until either the SQL Service has been restarted, or you have explicitly cleared the Plan Cache of Adhoc Plans by using the code example above.
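To see whether the setting is worth it for your workload, you can measure how much cache the single-use adhoc plans are eating before you flip the switch:

```sql
-- how much of the plan cache is tied up in single-use adhoc plans?
select count(*) as 'Single-Use Plans',
       sum(cast(size_in_bytes as bigint)) / 1024 / 1024 as 'Size in MB'
from sys.dm_exec_cached_plans
where objtype = 'Adhoc'
  and usecounts = 1;
```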

Resource Governor

If you are using Resource Governor, then you have the option of clearing the Plan Cache for a particular Resource Pool.  Resource Governor allows you to create various Resource Pools and then designate a certain percentage of CPU or Memory to that pool.

In order to get a list of the Resource Pools that are defined on your system, you can query the sys.dm_resource_governor_resource_pools DMV.


-- list the resource governor pools
select distinct name as 'Resource Governor Pools'
from sys.dm_resource_governor_resource_pools;


Once you have identified the Resource Pool that needs to have its associated Cached Plans cleared, you can clear the plans by using the DBCC FREEPROCCACHE command.


-- clear the default resource governor pool
dbcc freeproccache ('default');

-- clear the internal resource governor pool
dbcc freeproccache ('internal');

-- clear a user-defined resource governor pool
dbcc freeproccache ('YOURRESOURCEGOVERNORPOOL');


Again, please do not do this on a production system unless you are certain this is what you need.

Clearing a Single Plan

If you have a particular query that is causing you problems (for example, it was compiled with a bad plan), then one option is to remove the offending plan from the cache, which will cause a recompile.  There is no guarantee, however, that the new plan will be any better than the old plan.

You may wish to capture the existing plan so that you can compare it with the new plan.  This way you can explore what the differences are, and focus your tuning efforts there.

To find the query plan for a particular query, we will need a search string: the SQL from your query.  We will place your query inside of mine, so we can search the Plan Cache for it.  It is not necessary to use the entire text of the query; that would be a bit cumbersome.  All we need is enough to make it unique, or unique enough that we can find it in the results.


-- or it may be even better to search by the sql text
-- so that you only see the ones that interest you
-- also link in the query plan so that you can analyze it
select decp.usecounts, decp.size_in_bytes, dest.text, decp.plan_handle, deqp.query_plan
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_sql_text(decp.plan_handle) dest
cross apply sys.dm_exec_query_plan(decp.plan_handle) deqp
where dest.text like '%PLACE YOUR SQL CODE SNIPPET HERE%';


Once you have identified the offending plan, it is a simple matter to remove it from the cache.  Use the plan_handle with the DBCC FREEPROCCACHE command to clear it from the Plan Cache.


-- once you find the plan you wish to prune
-- place the plan_handle in the following statement
dbcc freeproccache (0x0600010010450926B8E00C07000000000000000000000000);


So this is the only one of the options where I can see a valid case for doing it on Production.  All the same, be careful and understand what you are doing.  It is important to clear only the Query Plan that is needed, and no more.


It is important to keep tabs on what is going on in your Plan Cache.  Being able to clear out all or part of your Plan Cache can be a powerful debugging technique.  Please bear in mind that this is not something that should be done on a Production server without careful consideration of the side effects.

However, playing with this in DEV can help you understand more of what is happening under the covers of SQL Server.