Thursday, 21 March 2013

.NET Project File Analyser

I've started knocking together a little app to automate the process of trawling through a folder structure and checking .NET project files (currently C# .csproj) to extract some info from them. The DotNetProjectFileAnalyser repo is up on GitHub. I've been working against Visual Studio 2010 project files, but it could well work for other versions, assuming the project file structure is the same for the elements it currently looks at - I just haven't tried yet.
Currently, it will generate an output file detailing for each .csproj file it finds:
  • Build output directory (relative and absolute) for the configuration/platform specified (e.g. Debug AnyCpu). Useful if you want to find which projects you need to change to build to a central/common build directory.
  • List of all Project Reference dependencies (as opposed to assembly references). Useful if you want to find the projects that have Project References so you can switch them to assembly references.


DotNetProjectFileAnalyser.exe {RootDirectory} {Configuration} {Platform}

{RootDirectory} = start directory to trawl for .csproj files (including subdirectories)
{Configuration} = as defined in VS, e.g. Debug, Release
{Platform} = as defined in VS, e.g. AnyCpu
DotNetProjectFileAnalyser.exe "C:\src\" "Debug" "AnyCpu"
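Under the hood, this boils down to parsing the MSBuild XML in each .csproj. Here's a rough Python sketch of the extraction logic (the actual tool is C#; the function name `analyse_csproj` is my own, and the element names are from the VS2010 MSBuild schema):

```python
# Sketch: pull the OutputPath for a given configuration/platform, plus any
# ProjectReference entries, out of a .csproj (MSBuild XML) file.
import xml.etree.ElementTree as ET

NS = {"ms": "http://schemas.microsoft.com/developer/msbuild/2003"}

def analyse_csproj(path, configuration, platform):
    root = ET.parse(path).getroot()
    # Configuration-specific settings live in conditional PropertyGroups.
    condition = f"'$(Configuration)|$(Platform)' == '{configuration}|{platform}'"
    output_path = None
    for group in root.findall("ms:PropertyGroup", NS):
        if group.get("Condition", "").strip() == condition:
            node = group.find("ms:OutputPath", NS)
            if node is not None:
                output_path = node.text
    # Project (not assembly) references appear as ItemGroup/ProjectReference.
    project_refs = [
        ref.get("Include")
        for ref in root.findall("ms:ItemGroup/ms:ProjectReference", NS)
    ]
    return output_path, project_refs
```

Trawling a root directory is then just a walk over `*.csproj` files, calling this per file.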

More stuff will go in over time, including the ability to automatically update .csproj files as well, to save a lot of manual effort.

Wednesday, 13 March 2013

GB Post Code Importer Conversion Accuracy Fix

In a post last year (Ordnance Survey Data Importer Coordinate Conversion Accuracy) I looked into an accuracy issue with the conversion process within the GeoCoordConversion DLL that I use in this project (blog post). Bottom line: it was a minor issue, with an average inaccuracy of around 2.5 metres and a max of ~130 metres by my reckoning. I've since had a few requests asking if I can supply an updated GeoCoordConversion DLL with fixes to the calculations.

After getting in contact with the owner of the GeoCoordConversion project, they've kindly added me as a committer. I've now pushed the fixes up to it, rebuilt the DLL (now v1.0.1.0) and pushed up the latest DLL to the Ordnance Survey Importer project on GitHub.

Friday, 8 March 2013

SQL Server Table Designer Bug With Filtered Unique Index

A colleague was getting a duplicate key error when trying to add a new column to a table via the Table Designer in SQL Server Management Studio (2008R2 - not tested on other versions), despite there being no violating data in the table. After a bit of digging around, I tracked the problem down to what appears to be a bug in Table Designer when there is a unique, filtered index in place on the table and the table is being recreated (i.e. you're adding a new column, but not at the end after all the existing columns).

Steps to reproduce
  1. In SSMS in Tools -> Options -> Designers -> Table and Database Designers, uncheck the "Prevent saving changes that require table re-creation" option
  2. Create table:
    CREATE TABLE [dbo].[Test]
    (
        Column1 INTEGER NOT NULL,
        Column2 INTEGER NOT NULL,
        Column3 INTEGER NULL
    );
    -- Unique index on (Column1, Column2), filtered to rows where Column3 has a value
    CREATE UNIQUE NONCLUSTERED INDEX IX_Test_Column1_Column2
        ON dbo.Test (Column1, Column2)
        WHERE Column3 IS NOT NULL;
  3. Create dummy data:
    INSERT dbo.Test(Column1, Column2, Column3) VALUES (1, 1, 1);
    INSERT dbo.Test(Column1, Column2, Column3) VALUES (1, 1, NULL); -- OK as Column3 is NULL
  4. Now run the following, duplicate key error is correctly thrown:
    -- Errors, Duplicate Key exception as expected
    INSERT dbo.Test(Column1, Column2, Column3) VALUES (1, 1, 2); 
    So at this point we have 2 rows in the table, no violations of the unique filtered index.
  5. Right click the table in SSMS -> Design
  6. Insert a new column "Column4" before Column3 and then press Save.
The error that occurs is:
'Test' table
- Unable to create index 'IX_Test_Column1_Column2'.  
The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.Test' and the index name 'IX_Test_Column1_Column2'. The duplicate key value is (1, 1).
The statement has been terminated.

So what it appears to be doing is losing the WHERE filter on the index. This can be confirmed by clicking "Generate change script" in the Table Designer instead of Save - at the end of the generated script, the CREATE UNIQUE NONCLUSTERED INDEX statement for IX_Test_Column1_Column2 is emitted without its WHERE clause.

Now, if there was no data in the table, or there were no rows with the same Column1 and Column2 value combination when you went into the Table Designer, then you could save the table change and be blissfully unaware that the filter had been lost from the index - i.e. repeat the repro steps again, but this time move steps 3 and 4 (insert dummy data) to the end of the process. The previously OK second data row will now error on insert.
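To see why the lost filter matters, independently of SQL Server, here's a quick Python/SQLite sketch (SQLite also supports filtered - "partial" - unique indexes): the same two rows that coexist happily under the filtered index collide once the index is recreated without its WHERE clause.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Test (Column1 INTEGER, Column2 INTEGER, Column3 INTEGER)")

# Filtered unique index as originally defined: rows with NULL Column3 are exempt.
con.execute("""CREATE UNIQUE INDEX IX_Test_Column1_Column2
               ON Test (Column1, Column2) WHERE Column3 IS NOT NULL""")
con.execute("INSERT INTO Test VALUES (1, 1, 1)")
con.execute("INSERT INTO Test VALUES (1, 1, NULL)")  # OK: excluded by the filter

# Simulate the Table Designer bug: recreate the index WITHOUT its WHERE clause.
con.execute("DROP INDEX IX_Test_Column1_Column2")
duplicate_detected = False
try:
    con.execute("CREATE UNIQUE INDEX IX_Test_Column1_Column2 ON Test (Column1, Column2)")
except sqlite3.IntegrityError:
    duplicate_detected = True  # previously valid data now violates the index
```

The `IntegrityError` here is the SQLite analogue of the duplicate key error the Table Designer produces.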

Personally, I almost never use the Table Designer and as a safeguard, will be recommending to the rest of the team that the "Prevent saving changes that require table re-creation" option is checked as a basic guard.

Update: the following Connect items relate to this issue:
The filter expression of a filtered index is lost when a table is modified by the table designer
2008 RTM, SSMS/Engine: Table designer doesn't script WHERE clause in filtered indexes
Referring to comments in that 2nd item, my "Script for server version" setting was set to "SQL Server 2008 R2".

Sounds like this may have been addressed in SQL 2012, but still a problem in 2008/2008R2.

Sunday, 3 March 2013

MongoDB ASP.NET Session Store Provider v1.1.0

Since I created the MongoDB ASP.NET Session State Store Provider (v1.0.0), a few things have moved on in the MongoDB C# Driver. I've pushed a number of changes up to the project on GitHub (which I've incremented to v1.1.0), so it now uses v1.7.0.4714 of the driver. There is no change to the way it is configured in web.config, so if you are using v1.0.0 of my provider the upgrade should be painless. Of course, I'd recommend thorough testing first :)

The changes relate to:

web.config recap
These web.config settings have not changed, so should continue working as before (excerpts from the connection strings and session state provider sections shown):
    <add name="MongoSessionServices" connectionString="mongodb://localhost" />
        <add name="MongoSessionStateProvider"
             replicasToWrite="0" />
replicasToWrite is interpreted as follows:
  • < 0 = ignored; treated as 0.
  • 0 = will wait for acknowledgement from the primary node only.
  • > 0 = will wait for writes to be acknowledged by (1 + {replicasToWrite}) nodes.

Please note: per the MongoDB Driver docs, if (1 + {replicasToWrite}) equates to a number greater than the number of replica set members that hold data, then MongoDB waits for the non-existent members to become available (so it blocks indefinitely).
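In other words, the setting maps onto a MongoDB write concern "w" value of 1 + replicasToWrite, floored at w=1. A tiny illustrative Python helper (hypothetical - the provider itself is C#, and `to_write_concern_w` is not part of its API):

```python
def to_write_concern_w(replicas_to_write):
    """Map the provider's replicasToWrite setting to a MongoDB 'w' value.

    Negative values are treated as 0, so the minimum result is w=1:
    acknowledgement from the primary node only.
    """
    return 1 + max(0, replicas_to_write)
```

So a setting of -3 or 0 both give w=1 (primary only), while 2 gives w=3 (primary plus two replicas).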

It still treats write concerns as important, ensuring it waits for acknowledgement back from at least one MongoDB node.

As always, all feedback welcome.