Thursday, 19 September 2013

70-486 Developing ASP.NET MVC 4 Web Applications

Last week I passed the 70-486 Microsoft exam - Developing ASP.NET MVC 4 Web Applications - so I thought I'd knock up a quick post on my experience and what materials I found useful as part of my preparation.

Going into this exam, I had just over a year and a half's commercial experience using ASP.NET MVC 2 & 3. Before that, I had experience in ASP.NET and prior to that, when dinosaurs still roamed, classic ASP. It's no secret I'm a bit of a data nerd (a lot of my blog is db related, SQL Server, MongoDB...) and I have tended to be backend focused, but I wanted to even out the balance a bit by pushing myself in this area, and in JS/HTML5/CSS3. I took the opportunity to upgrade the web solution at my current company from MVC 3 to MVC 4, and being able to do that at the start of my preparation was really useful - this is how I like to learn, by actually "getting stuff done". That is the obvious, number 1 recommendation - do not just read and swot up on the theory, actually "do". My brain likes me to be stupid and make practical mistakes - when I then work out how I've done something daft, it reinforces the learning and makes the knowledge stick.

Preparation

The first thing I was disappointed to find was that there is (as of time of writing) no Microsoft Exam prep book for this exam. There is one due out, I believe at the beginning of October 2013 - Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications by William Penberthy (ISBN-10: 0735677220 | ISBN-13: 978-0735677227) - so I obviously can't comment on how good that book is. The book I went with was Professional ASP.NET MVC 4 from Wrox, by Jon Galloway (Twitter), Phil Haack (Twitter) and K. Scott Allen (Twitter), with a foreword by Scott Hanselman (Twitter). While the book alone isn't enough for the exam, it gave me good coverage of quite a few areas I needed, so it is definitely worth a read.

Having a Pluralsight subscription was also useful - while the courses aren't necessarily geared towards the exam, you can never go wrong with a bit of Pluralsight training. I went through a number of the MVC courses - ASP.NET MVC Fundamentals, ASP.NET MVC 2.0 Fundamentals, MVC 4 Fundamentals and Building Applications with MVC 4. One of the things I like about Pluralsight is that you can control the playback speed, so for most of the stuff I already knew I played through at a faster speed. I also went through the "Building Web Apps with ASP.NET Jump Start" training on the Microsoft Virtual Academy (Scott Hanselman, Jon Galloway and Damian Edwards (Twitter)) - that was quite entertaining!

I also found some great study guide blog posts that collate a lot of links to good material - MSDN articles, MS resources, blogs, interesting StackOverflow questions etc.

Last but definitely not least, there was my practical setup - Windows 8 on a VM, with Visual Studio 2012 and a Windows Azure account. To reiterate what I said before - the best way to learn is to do...and make daft mistakes. I started work on a new web app from scratch, with a real-world mindset on (i.e. writing production-worthy code), putting to good use the new things I was learning. I also had a scratch-pad web app where I would just dump rough code to try out short, simple snippets.

Bottom line

Overall, I put a lot of time and effort into preparation for this exam and it paid off. The biggest benefit for me is what I learned along the way and the challenge it gave me.

Wednesday, 5 June 2013

SQL Server 2008 R2 in-place upgrade error

Today I encountered the following error during the process of performing an in-place upgrade of a SQL Server 2008 instance to 2008 R2:
The specified user 'someuser@somedomain.local' does not exist
I wasn't initially sure what that related to - I hadn't specified that account during the upgrade process, so I went looking in the Services management console. The SQL Server service for the instance I was upgrading was configured to "Log On As" that account, and it had been happily running prior to the upgrade.

This domain account did exist, but as a sanity check, the first thing I did was to switch the service to use another domain account that also definitely existed, and retry the Repair process. Same error. Then I spotted, further down the list, another service that specified its account in the "somedomain\someuser" form. So I switched the SQL Server instance service to specify the account in that form (the Down-Level Logon Name) instead of the UPN (User Principal Name) format (reference).

Bingo.

While I was waiting for the Repair process to run through again, I carried on searching and found this question on the MSDN Forums. The very last answer there confirmed it. The SQL Server 2008 R2 upgrade does NOT like user accounts in the UPN format.

Thursday, 21 March 2013

.NET Project File Analyser

I've started knocking together a little app to automate the process of trawling through a folder structure and checking .NET project files (C# .csproj currently) to extract some info out of them. The DotNetProjectFileAnalyser repo is up on GitHub. I've been working against Visual Studio 2010 project files, but it could well work for other versions, assuming the project file structure is the same for the elements it currently looks at - I just haven't tried as yet.
Currently, it will generate an output file detailing, for each .csproj file it finds:
  • Build output directory (relative and absolute) for the configuration/platform specified (e.g. Debug AnyCpu). Useful if you want to find which projects you need to change to build to a central/common build directory.
  • List of all Project Reference dependencies (as opposed to assembly references). Useful if you want to find the projects that have Project References so you can switch them to assembly references (there's a rough sketch of the parsing involved below).
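
Under the hood it boils down to LINQ to XML over each project file. As a rough illustration (a sketch only, not the actual repo code - it assumes the standard VS2010 MSBuild namespace and hard-codes a "Debug|AnyCPU" condition):

using System;
using System.Linq;
using System.Xml.Linq;

class CsprojSketch
{
    // The MSBuild namespace used by VS2010 C# project files
    static readonly XNamespace Ns = "http://schemas.microsoft.com/developer/msbuild/2003";

    static void Main(string[] args)
    {
        XDocument doc = XDocument.Load(args[0]); // path to a .csproj file

        // Build output directory: the <OutputPath> inside the <PropertyGroup>
        // whose Condition matches the configuration/platform of interest
        string outputPath = doc.Descendants(Ns + "PropertyGroup")
            .Where(pg => ((string)pg.Attribute("Condition") ?? "").Contains("Debug|AnyCPU"))
            .Descendants(Ns + "OutputPath")
            .Select(op => op.Value)
            .FirstOrDefault();
        Console.WriteLine("OutputPath: {0}", outputPath);

        // Project References (as opposed to plain assembly <Reference> elements)
        foreach (XElement pr in doc.Descendants(Ns + "ProjectReference"))
        {
            Console.WriteLine("ProjectReference: {0}", (string)pr.Attribute("Include"));
        }
    }
}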

Usage

DotNetProjectFileAnalyser.exe {RootDirectory} {Configuration} {Platform}

{RootDirectory} = start directory to trawl for .csproj files (including subdirectories)
{Configuration} = as defined in VS, e.g. Debug, Release
{Platform} = as defined in VS, e.g. AnyCpu
Example:
DotNetProjectFileAnalyser.exe "C:\src\" "Debug" "AnyCpu"

More stuff will go in over time, including the ability to automatically update .csproj files as well, to save a lot of manual effort.

Wednesday, 13 March 2013

GB Post Code Importer Conversion Accuracy Fix

In a post last year (Ordnance Survey Data Importer Coordinate Conversion Accuracy) I looked into an accuracy issue with the conversion process within the GeoCoordConversion DLL that I use in this project (blog post). Bottom line: it was a minor issue, with an average inaccuracy of around 2.5 metres and a max of ~130 metres by my reckoning. I've since had a few requests asking if I can supply an updated GeoCoordConversion DLL with fixes to the calculations.

After getting in contact with the owner of the GeoCoordConversion project, they've kindly added me as a committer. I've now pushed the fixes up to it, rebuilt the DLL (now v1.0.1.0) and pushed up the latest DLL to the Ordnance Survey Importer project on GitHub.

Friday, 8 March 2013

SQL Server Table Designer Bug With Filtered Unique Index

A colleague was getting a duplicate key error when trying to add a new column to a table via the Table Designer in SQL Server Management Studio (2008 R2 - not tested on other versions), despite there being no violating data in the table. After a bit of digging around, I tracked the problem down to what appears to be a bug in the Table Designer when there is a unique, filtered index on the table and the table is being recreated (i.e. you're adding a new column, but not at the end after all the existing columns).

Steps to reproduce
  1. In SSMS in Tools -> Options -> Designers -> Table and Database Designers, uncheck the "Prevent saving changes that require table re-creation" option
  2. Create the table, with a unique filtered index:
    CREATE TABLE [dbo].[Test]
    (
    Column1 INTEGER NOT NULL,
    Column2 INTEGER NOT NULL,
    Column3 INTEGER NULL
    );

    CREATE UNIQUE NONCLUSTERED INDEX IX_Test_Column1_Column2
    ON dbo.Test (Column1, Column2)
    WHERE Column3 IS NOT NULL;
    
  3. Create dummy data:
    INSERT dbo.Test(Column1, Column2, Column3) VALUES (1, 1, 1);
    INSERT dbo.Test(Column1, Column2, Column3) VALUES (1, 1, NULL); -- OK as Column3 is NULL
    
  4. Now run the following, duplicate key error is correctly thrown:
    -- Errors, Duplicate Key exception as expected
    INSERT dbo.Test(Column1, Column2, Column3) VALUES (1, 1, 2); 
    
    So at this point we have 2 rows in the table, no violations of the unique filtered index.
  5. Right click the table in SSMS -> Design
  6. Insert a new column "Column4" before Column3 and then press Save.
The error that occurs is:
'Test' table
- Unable to create index 'IX_Test_Column1_Column2'.  
The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.Test' and the index name 'IX_Test_Column1_Column2'. The duplicate key value is (1, 1).
The statement has been terminated.

So what it appears to be doing is losing the WHERE filter on the index. This can be confirmed by clicking "Generate change script" in the Table Designer instead of Save - at the end of the generated script:

CREATE UNIQUE NONCLUSTERED INDEX IX_Test_Column1_Column2 ON dbo.Test
 (
 Column1,
 Column2
 ) WITH( STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Now, if there was no data in the table, or there were no rows with the same Column1 and Column2 value combination when you went into the Table Designer, then you could save the table change and be blissfully unaware that the filter had been lost from the index - i.e. repeat the repro steps, but this time move steps 3 and 4 (inserting the dummy data) to the end of the process. The previously OK second data row will now error upon insert.

Personally, I almost never use the Table Designer, and will be recommending to the rest of the team that the "Prevent saving changes that require table re-creation" option is kept checked as a basic safeguard.

Update: the following Connect items relate to this issue:
The filter expression of a filtered index is lost when a table is modified by the table designer
2008 RTM, SSMS/Engine: Table designer doesn't script WHERE clause in filtered indexes
Referring to comments in that 2nd item, my "Script for server version" setting was set to "SQL Server 2008 R2".

Sounds like this may have been addressed in SQL Server 2012, but it is still a problem in 2008/2008 R2.

Sunday, 3 March 2013

MongoDB ASP.NET Session Store Provider v1.1.0

Since I created the MongoDB ASP.NET Session State Store Provider (v1.0.0), a few things have moved on in the MongoDB C# Driver. I've pushed a number of changes up to the project on GitHub (and incremented the version to v1.1.0), so it now uses v1.7.0.4714 of the driver. There is no change to the way it is configured in web.config, so if you are using v1.0.0 of my provider the upgrade should be painless. Of course, I'd recommend thorough testing first :)

The changes relate mainly to the updated driver API around how write concerns (write acknowledgement) are specified.

web.config recap
These web.config settings have not changed, so should continue working as before.
<configuration>
  <connectionStrings>
    <add name="MongoSessionServices" connectionString="mongodb://localhost" />
  </connectionStrings>
  <system.web>
    <sessionState
        mode="Custom"
        customProvider="MongoSessionStateProvider">
      <providers>
        <add name="MongoSessionStateProvider"
             type="MongoSessionStateStore.MongoSessionStateStore"
             connectionStringName="MongoSessionServices"
             writeExceptionsToEventLog="false"
             fsync="false"
             replicasToWrite="0" />
      </providers>
    </sessionState>
  </system.web>
</configuration>
replicasToWrite is interpreted as follows:
  • < 0 = ignored, treated as 0
  • 0 = will wait for acknowledgement from the primary node
  • > 0 = will wait for writes to be acknowledged by (1 + {replicasToWrite}) nodes

Please note: per the MongoDB driver docs, if (1 + {replicasToWrite}) equates to a number greater than the number of replica set members that hold data, then MongoDB waits for the non-existent members to become available (so it blocks indefinitely).

It still treats write concerns as important, ensuring it waits for acknowledgement back from at least one MongoDB node.
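
As a rough illustration (a sketch only - not necessarily the provider's exact code, and the helper is a name I've made up here), that interpretation maps onto the 1.7-era driver's WriteConcern along these lines:

using MongoDB.Driver;

static class WriteConcernSketch
{
    // Maps the provider's fsync/replicasToWrite settings onto a driver WriteConcern
    public static WriteConcern Build(bool fsync, int replicasToWrite)
    {
        if (replicasToWrite < 0)
        {
            replicasToWrite = 0; // < 0 is ignored, treated as 0
        }

        return new WriteConcern
        {
            FSync = fsync,
            // W = 1 waits for the primary only; each replica to write adds one more node
            W = 1 + replicasToWrite
        };
    }
}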

As always, all feedback welcome.

Sunday, 10 February 2013

Importing twitter archive into ElasticSearch

I recently downloaded my full history of tweets from Twitter (via the "Request your archive" button on the Settings page), which gives the data in CSV and JSON formats. The download comes with a simple web UI so you can browse all your historical tweets locally, as well as perform basic searches. When I tried performing a wildcard search though, it resulted in an unresponsive script warning and nobbled the browser tab.

Cue ElasticSearch

I've been meaning to have another hack around with ElasticSearch for a while now, as I started looking into it in 2011 and was very impressed with it. So I decided to knock up a little .NET app to take the .js files provided by Twitter in the export, extract the tweet data and push it into ElasticSearch. It seems like a good little starter project to learn more about ElasticSearch, and it could actually give me some useful insights into all the tweets I've made. I'm sure I'm going to find some really good links I tweeted and then couldn't find again!
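
The core of the import is pretty simple. As a rough sketch (illustrative only, not the actual ElasticTweets code - it assumes ElasticSearch on localhost:9200, uses Json.NET, and relies on the export's .js files prefixing the JSON array with a "Grailbird.data... =" assignment, which is how the archive looked at the time):

using System;
using System.IO;
using System.Net;
using Newtonsoft.Json.Linq;

class TweetImportSketch
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            foreach (string file in Directory.GetFiles(@"tweets\data\js\tweets", "*.js"))
            {
                string text = File.ReadAllText(file);

                // Strip the "Grailbird.data.tweets_YYYY_MM =" prefix to leave raw JSON
                string json = text.Substring(text.IndexOf('=') + 1);

                // PUT each tweet into the "tweets" index, keyed by its Twitter id
                foreach (JObject tweet in JArray.Parse(json))
                {
                    client.Headers[HttpRequestHeader.ContentType] = "application/json";
                    client.UploadString(
                        "http://localhost:9200/tweets/tweet/" + (string)tweet["id_str"],
                        "PUT",
                        tweet.ToString());
                }
            }
        }
    }
}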

At the moment there's a very basic GUI (and I mean basic!) exposing the functionality to import the tweet data into ElasticSearch. I plan to stick a basic search GUI together too and then...well, I'm not quite sure yet, but I'm sure as time goes on I'll see more ways to extract interesting information from my tweet history.

The ElasticTweets project, as I've called it, can be found on GitHub here - feel free to use and abuse it, and let me know how you get on!