Those of you who have hit query performance problems have probably heard of one solution: "UPDATE STATISTICS".
Yep, sometimes it helps, so you start using it as a "Golden Hammer" and run the UPDATE STATISTICS command on a nightly basis.
Then your database grows, and the nightly time frame becomes too short for a full statistics update.
You start doing it on a weekly basis.
Then you decrease the percentage of the statistical sample.
Then you start rotating tables for statistics updates...
And you still do not have enough time.
On top of that, your SAN is working like crazy during that operation.
I will demonstrate what causes this problem.
First, I will run Profiler and capture the execution of the following command:
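A minimal sketch of such a command, a full-scan statistics update on a single table (the table name dbo.TestTable is an illustrative assumption, not from the original capture):

```sql
-- Full-scan statistics update on one table;
-- dbo.TestTable is a hypothetical name for illustration
UPDATE STATISTICS dbo.TestTable WITH FULLSCAN;
```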
In my case it looks like this:
Is that a lot for this table or not?
Let's look at the list of temporary statistics for it:
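A sketch of such a query, listing the auto-created statistics for one table via the sys.stats catalog view (again assuming a hypothetical dbo.TestTable):

```sql
-- Auto-created ("temporary") statistics for one table;
-- dbo.TestTable is an assumed name for illustration
SELECT s.name        AS stats_name,
       c.name        AS column_name,
       s.auto_created
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns AS c
  ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE s.object_id = OBJECT_ID('dbo.TestTable')
  AND s.auto_created = 1;
```

Auto-created statistics show up with machine-generated names in the form _WA_Sys_&lt;column_id&gt;_&lt;object_id&gt;, which is why your names will differ from mine.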
In my case I have 8 (eight) temporary statistics for that table!
I use the following script to delete these temp stats; in your case, the names might be a little bit different.
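Since the generated names are unique to each database, a sketch that builds the DROP STATISTICS commands for you from sys.stats (table name assumed; review the output before executing it):

```sql
-- Generate DROP STATISTICS statements for the auto-created
-- statistics on one table (dbo.TestTable is a hypothetical name)
SELECT 'DROP STATISTICS '
     + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(s.object_id)) + '.'
     + QUOTENAME(s.name) + ';'
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.TestTable')
  AND s.auto_created = 1;
```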
Now I will update statistics for that table and capture it with Profiler again:
Isn't it nice? The amount of IO, CPU time, and duration all decreased by about 3.5 times!
Immediate question: where did all these crazy stats come from?
Immediate answer: every time somebody runs a SQL statement that filters on a column, SQL Server automatically creates statistics on that column, provided the column does not already have statistics and the AUTO_CREATE_STATISTICS option is ON for that database.
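A sketch of how that happens in practice (the table and column names are assumptions for illustration):

```sql
-- Check whether auto-create is enabled for the current database
SELECT name, is_auto_create_stats_on
FROM sys.databases
WHERE database_id = DB_ID();

-- Assuming dbo.TestTable.SomeColumn has no index and no statistics yet,
-- a filtered query like this makes the optimizer auto-create a statistic
-- named like _WA_Sys_<column_id>_<object_id>:
SELECT *
FROM dbo.TestTable
WHERE SomeColumn = 42;
```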
Do we really need to keep updating all these useless statistics, which were maybe used once a couple of years ago, while we keep spending our IO and CPU on them?
You can definitely review which statistics you use and which you don't, but when you are done, you can use the following script to get rid of all the garbage stats in your database.
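A sketch of such a cleanup script, which generates DROP STATISTICS statements for every auto-created statistic on user tables in the current database (review the output carefully before running it):

```sql
-- Generate DROP STATISTICS statements for all auto-created
-- statistics in the current database
SELECT 'DROP STATISTICS '
     + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(s.object_id)) + '.'
     + QUOTENAME(s.name) + ';'
FROM sys.stats AS s
WHERE s.auto_created = 1
  AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;
```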
In case you want to restore some of these temporary statistics, you can use a script like this:
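One way to sketch this, assuming you run it before dropping the statistics (so their definitions are still in sys.stats), is to generate matching CREATE STATISTICS statements you can replay later; note that statistics recreated this way count as user-created, not auto-created:

```sql
-- Generate CREATE STATISTICS statements for the current
-- auto-created statistics, to keep as a "restore" script
SELECT 'CREATE STATISTICS ' + QUOTENAME(s.name)
     + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(s.object_id))
     + ' (' + QUOTENAME(c.name) + ');'
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns AS c
  ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE s.auto_created = 1;
```

Alternatively, you can simply let SQL Server recreate the statistics it actually needs: with AUTO_CREATE_STATISTICS still ON, the next query that filters on a column will bring its statistic back automatically.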