ASP.NET MVC Performance Profiling

Building up a profile of how a web application functions, covering all the database interactions that take place and where the server-side time is spent during a request, can be a challenging task when you are new to an existing codebase. If you’re trying to address the generic, non-specific “we need to improve performance / it’s slow” issue, you need to get a good picture of what is going on and where to prioritise effort. [Read More]
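
The full post digs into the detail; as a flavour of the kind of instrumentation involved, here is a minimal sketch using the MiniProfiler package to time named steps inside an ASP.NET MVC action. MiniProfiler is my assumption here, not necessarily the tooling the post settles on, and the controller and step names are invented:

    // Minimal sketch: timing named steps in an MVC action with MiniProfiler
    // (StackExchange.Profiling). All names below are illustrative only.
    using System.Web.Mvc;
    using StackExchange.Profiling;

    public class ProductsController : Controller
    {
        public ActionResult Index()
        {
            var profiler = MiniProfiler.Current; // null when profiling is disabled

            using (profiler.Step("Load products from database"))
            {
                // ... repository / database call being timed ...
            }

            using (profiler.Step("Build view model"))
            {
                // ... mapping work being timed ...
            }

            return View();
        }
    }

The Step extension method tolerates a null profiler, so the instrumentation can stay in place with profiling switched off.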

Quick win - check your table variable use

Quick wins are awesome. Making a change that takes minimal effort and yields a significant performance improvement is very satisfying. This particular potential quick win relates to the use of table variables vs. temporary tables. Have a non-trivial stored procedure that produces some intermediary results and stores them in a table variable, which is then used further on in the stored procedure? Consider evaluating a switch to a temporary table instead. [Read More]
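
To make that concrete, here is a minimal sketch of the kind of switch being suggested (the table and column names are invented for illustration):

    -- Before: intermediary results held in a table variable. Table variables
    -- have no statistics, so the optimiser typically assumes 1 row, which can
    -- skew the plans of later statements that join to it.
    DECLARE @Interim TABLE (CustomerId INT PRIMARY KEY, Total MONEY);
    INSERT @Interim (CustomerId, Total)
    SELECT CustomerId, SUM(Amount) FROM dbo.Orders GROUP BY CustomerId;

    -- After: the same intermediary results in a temporary table. Statistics
    -- are maintained on #Interim, so subsequent joins can be costed accurately.
    CREATE TABLE #Interim (CustomerId INT PRIMARY KEY, Total MONEY);
    INSERT #Interim (CustomerId, Total)
    SELECT CustomerId, SUM(Amount) FROM dbo.Orders GROUP BY CustomerId;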

The importance of "Working Set"

One of the things that I see cropping up pretty often is this thing called “working set”. After recently chipping in on another StackOverflow question on the subject of “What does it mean to fit ‘working set’ in RAM?”, I thought it was a good subject for a blog post. This is really just a copy and extension of my input on that question; it focuses in parts on MongoDB, but is just as relevant to other databases. [Read More]
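
As a rough worked illustration of the concept (figures invented purely for this example): if the data your application touches regularly amounts to about 5GB and the indexes needed to serve those queries take another 2GB, your working set is in the region of 7GB, and performance will degrade noticeably once the database has meaningfully less RAM than that available, because frequently used pages keep being evicted and re-read from disk.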

SqlBulkCopy to SQL Server in Parallel

In an earlier post last year, I blogged about high performance bulk loading to SQL Server from .NET using SqlBulkCopy. That post highlighted the performance gain that SqlBulkCopy gives over another batched insert approach using an SqlDataAdapter. But is it possible to squeeze more performance out? Oh yes. First, a quick recap. For optimal performance: load into a heap table (with no indexes - add any indexes you need AFTER you’ve loaded the data) [Read More]
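
As a taster of where that post goes, here is a minimal sketch of the parallel approach; the partitioning of the source data into per-thread chunks is assumed here, and the post has the real details and measurements. The key point is that SqlBulkCopyOptions.TableLock takes a bulk update (BU) lock, which is compatible with itself on a heap, so several bulk loads can run against the same table at once:

    using System.Data;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    static class ParallelBulkLoader
    {
        // chunks: hypothetical pre-partitioned source data, one DataTable per worker
        public static void Load(string connectionString, DataTable[] chunks)
        {
            Parallel.ForEach(chunks, chunk =>
            {
                using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
                {
                    bulk.DestinationTableName = "dbo.TargetHeap"; // heap: no indexes yet
                    bulk.WriteToServer(chunk);
                }
            });
        }
    }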

Optimising wildcard prefixed LIKE conditions

Suppose you want to write a query to find all products in your database whose name begins with “Long-Sleeve”. You’d more than likely use something like the query below (examples are based on the AdventureWorks LT sample database):

    SELECT name FROM SalesLT.Product WHERE name LIKE 'Long-Sleeve%'

This produces a good execution plan, performing an index seek on the nonclustered index that exists on the name column. If you look at the seek operation, you’ll see from the seek predicate that the query optimiser has done a good job of ensuring the index can be used: [Read More]
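
For contrast, the interesting case (and where the post's title points) is when the wildcard comes first: then the index can’t be seeked and you get a scan instead. One workaround for “ends with” searches is to index a reversed copy of the column; I’m sketching it here as an illustration, not necessarily as the exact route the post takes:

    -- Problem case: leading wildcard, so no index seek - the index is scanned
    SELECT name
    FROM SalesLT.Product
    WHERE name LIKE '%Jersey';

    -- Possible workaround: persist and index REVERSE(name), so an
    -- "ends with" search becomes a seekable "begins with" search
    ALTER TABLE SalesLT.Product ADD name_reversed AS REVERSE(name) PERSISTED;
    CREATE INDEX IX_Product_NameReversed ON SalesLT.Product (name_reversed);

    SELECT name
    FROM SalesLT.Product
    WHERE name_reversed LIKE REVERSE('Jersey') + '%';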

The SQLBits VI Experience

I had high expectations of SQLBits VI and it certainly lived up to them! Not only did it have great speakers, who really know their stuff and have an effortless way of getting the information across, but it was also free. So it goes without saying that it was a popular event. There were 4 sessions going on at any one time, with a theme of performance and scalability, and plenty of choice across the DBA, Dev and BI areas. [Read More]

Optimising date filtered SQL queries

How you structure your SQL queries is very important, as choosing the wrong approach can have big effects on the performance of the query. One of the key things that should flag up the potential need for optimisation is the use of a function/calculation within a WHERE clause. These can lead to inefficient execution plans by preventing optimal index use. A classic example to demonstrate this is querying the data for a specific month. [Read More]
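
The classic rewrite looks like this, sketched against a hypothetical Orders table (the post walks through the actual execution plans):

    -- Sub-optimal: functions applied to the column must be evaluated per row,
    -- so an index on OrderDate cannot be used for a seek
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2010 AND MONTH(OrderDate) = 3;

    -- Better: a sargable, half-open date range on the bare column,
    -- which an index on OrderDate can satisfy with a seek
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '20100301' AND OrderDate < '20100401';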

Using the SP:StmtCompleted SQL Profiler trace event class

One question I’ve seen popping up a few times recently is how to check which individual statement(s) within an SQL Server stored procedure are taking the most time. The scenario is that there is a stored procedure consisting of a number of SQL statements. The stored procedure has been flagged up as running slower than expected, but as there are a number of statements/queries within it, it’s not immediately clear where the time is being taken. [Read More]
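
The post covers the Profiler side; for flavour, here is a hedged sketch of the equivalent server-side trace, SP:StmtCompleted being trace event 45 (the file path and column choices are illustrative):

    -- Create a server-side trace capturing SP:StmtCompleted (event 45)
    -- with the TextData (1), SPID (12) and Duration (13) columns
    DECLARE @TraceId INT, @On BIT, @MaxFileSize BIGINT
    SET @On = 1
    SET @MaxFileSize = 50
    EXEC sp_trace_create @TraceId OUTPUT, 0, N'C:\Traces\StmtTrace', @MaxFileSize
    EXEC sp_trace_setevent @TraceId, 45, 1, @On
    EXEC sp_trace_setevent @TraceId, 45, 12, @On
    EXEC sp_trace_setevent @TraceId, 45, 13, @On
    EXEC sp_trace_setstatus @TraceId, 1   -- start the trace
    -- ... run the suspect stored procedure, then stop (0) and close (2) it:
    -- EXEC sp_trace_setstatus @TraceId, 0
    -- EXEC sp_trace_setstatus @TraceId, 2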

Sorting a DataTable - LINQ performance

Whenever there are a number of ways to achieve the same goal, I’m always inquisitive as to what the performance difference is between them. I’m a firm believer in thinking about scalability from the start - if you make your best effort to prepare for scale, then you can save yourself time and effort further down the line. I like to try and avoid pain - it doesn’t agree with me. [Read More]
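
The kind of head-to-head in question here pits the traditional DataView sort against LINQ to DataSet. A minimal sketch of both follows (the column name is invented; the post has the actual timings):

    using System.Data;
    using System.Linq;

    static class DataTableSorting
    {
        public static void SortBothWays(DataTable table)
        {
            // 1. Traditional approach: a DataView with a sort expression
            var view = new DataView(table) { Sort = "Name ASC" };
            DataTable sortedByView = view.ToTable();

            // 2. LINQ to DataSet (needs a reference to System.Data.DataSetExtensions)
            DataTable sortedByLinq = table.AsEnumerable()
                                          .OrderBy(row => row.Field<string>("Name"))
                                          .CopyToDataTable();
        }
    }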

SQL Server 2008 - Table Valued Parameters vs XML vs CSV

The scenario is: you want to create a stored procedure that returns the records relating to a finite set of keys/IDs that you pass in to it. How do you do it? Well, you could use dynamic SQL, but for the purpose of this article I’m going to ignore that option.

Test scenario:

    -- Create basic customer table
    CREATE TABLE [dbo].[Customer]
    (
        Id INTEGER IDENTITY(1,1) PRIMARY KEY,
        Name NVARCHAR(50) NOT NULL
    );
    GO

    -- Populate Customer table with 100,000 sample records
    DECLARE @Counter INTEGER
    SET @Counter = 1
    WHILE (@Counter <= 100000)
    BEGIN
        INSERT Customer (Name)
        VALUES ('Test Customer #' + CAST(@Counter AS VARCHAR(10)))
        SET @Counter = @Counter + 1
    END

Option 1 - CSV list of keys

In SQL Server 2000, you would most likely have ended up passing in a CSV list of keys, splitting that out into a table, and then joining that on to your real data table to match the records to return. [Read More]
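
For reference, the SQL Server 2008 option the title is building towards, a table valued parameter, looks roughly like this against the test table above (a sketch only; the full post defines and benchmarks all three options):

    -- Table valued parameter approach (SQL Server 2008+)
    CREATE TYPE dbo.CustomerIdList AS TABLE (Id INTEGER PRIMARY KEY);
    GO
    CREATE PROCEDURE dbo.GetCustomers
        @Ids dbo.CustomerIdList READONLY
    AS
    BEGIN
        SELECT c.Id, c.Name
        FROM dbo.Customer c
        JOIN @Ids i ON c.Id = i.Id;
    END
    GO
    -- Example call
    DECLARE @Ids dbo.CustomerIdList
    INSERT @Ids (Id) VALUES (1), (2), (3)
    EXEC dbo.GetCustomers @Ids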