Sunday, 29 April 2018

Samsung Gear Sport vs SWIMTAG - Swimathon 2018

This post is a bit different to my normal ones in that it's not my usual ramblings about something software development related. Instead, it's a cross between a write-up of my taking part in Swimathon 2018, and a side-by-side comparison of how my swim was tracked by 2 different bits of kit: a Samsung Gear Sport and SWIMTAG. For those that sponsored me, this is proof that I held up my side of the deal! :) If you are here for the proof, feel free to scroll down - I won't judge you....much.

Swimathon is an event to raise money for 2 very worthwhile charities: Cancer Research UK and Marie Curie. You pick a swimming challenge to complete from 400m, 1.5km, 2.5km or 5km and raise money by getting sponsors. Simple as that. I signed up to complete 1.5km, which I did on Sunday 29th April, 2018.

Background

In 2017, I took up swimming after picking up a knee injury during my annual 10K road race. Being unable to run without pain left a huge gap in my exercise routine, so I naively thought I'd migrate my running fitness level over to swimming. However, having not swum properly since my school days 20-odd years ago, and never having done lane swimming, it was a wet slap round the face and a bit disheartening when I struggled to complete more than a couple of lengths without feeling like I'd run a fast 5K. It turns out, running fitness doesn't necessarily translate to swimming fitness. There were a number of problems I had:
  1. unrealistic expectation of translating running performance over to swimming
  2. poor technique
  3. going too fast / not wanting to be "that person" who holds everyone up in the lane - the ego factor
  4. no way to track my progress
So I researched swimming - read articles and watched videos on technique, slowed everything down, got my breathing sorted etc. I'm lucky enough to have a local leisure centre that has SWIMTAG. SWIMTAG tracks your swim and gives you insights into it - how many lengths, split times, stroke types, DPS (Distance Per Stroke) - tracks your PBs, and allows you to enter competitions to compete against others locally, nationally or globally. After a swim, the data is sync'd to your account and is then available online or via a mobile app. As a Software Developer stat geek, this is right up my street.

I analyse things.

ALL TEH THINGZ.

I like to see my progress as that drives me on to do more and do better - I'm one for continual improvement and SWIMTAG has been a big part of that. Soon I was making progress; my times were dropping, my distances were increasing. And 9 months after picking up my knee injury, I was also able to get back to running. #Winning!

Samsung Gear Sport - The Chosen One

It's my Jose Mourinho of smartwatches. Earlier this year, I started looking into smartwatches/fitness trackers. I went with a Samsung Gear Sport as it ticked the main boxes for what I was looking for:
  • GPS for tracking runs outside
  • waterproof to 5ATM (50 metres) with swimming tracking
  • offline storage for Spotify music
  • heart rate monitor
  • slick, responsive UI - TizenOS is great
  • good battery life (for a smartwatch!)
The Gear Sport is, for me, a great option if you want a watch with a good combination of smartwatch functionality and fitness tracking. SWIMTAG is dedicated to swim tracking, so I'd expect it to be more accurate. In previous swims where I wore one or the other, but not both at the same time, they seemed on a par in terms of accuracy of tracking lengths and stroke types. Both missed the odd length at times. But one huge advantage of the Samsung Gear Sport is that it has a visual display, plus it can give you feedback via vibration every x lengths - if you're like me and get distracted thinking about what's for lunch halfway down a length and lose your count, this is a very useful feature. The display, as you'd expect, is awesome, though I would like a slightly stronger vibration, as during a swim you might not always feel it. There are a few apps you can install to track swims - I used the one that's part of Samsung Health, as personally I liked all my fitness activity going into one place. I have also briefly tried the Speedo app in the past, which would be another option.

Swimathon 2018 result

I achieved my goal of completing the 1.5km swim without rests in just under 38 minutes. Despite aspiring to glide through the water like a cross between Adam Peaty and a dolphin, I knew that would not be the case - **obviously** only because of the reduced streamlining I introduced by wearing a Gear Sport on one wrist, and a SWIMTAG and locker key on the other ;)

SWIMTAG

  • tracked all 60 lengths correctly
  • tracked the stroke type correctly
  • reported a time of 37m 43s

Samsung Gear Sport

  • did not track the lengths correctly - it went wonky twice, missing 2 lengths in total. I found this weird as, both times, I'd paused at the end for 5 seconds to let someone pass, so if anything it should have been easier to detect the end of a length. This shows up in the breakdown you get in Samsung Health, as 2 length times are about double the rest.
  • tracked the stroke type correctly
  • reported a time of 37m 48s
 

Verdict
The Samsung Gear Sport is a great watch. If you want the best pure fitness tracker, I'm sure there are better, more fitness-focused options. In terms of swimming, I was disappointed that it didn't track the full swim accurately. Usually, a brief pause at each end is enough for the tracking to be spot on. Perhaps the Speedo app would have performed differently - unfortunately, I couldn't track in both apps at the same time.

For me, SWIMTAG is my primary source of tracking. As a dedicated swim tracker, I like having all my history in there with PBs tracked, competition leagues and a sense that it should be the most accurate. I like the Gear Sport as a secondary tracker - much better for feedback while you're swimming as opposed to SWIMTAG which is only helpful post-swim.

Tuesday, 11 October 2016

Getting started with SignalR

Follows on from previous post: Real-time SignalR and Angular 2 dashboard

I had a couple of questions from someone who forked the project on GitHub and is using it as the basis for their own custom dashboard, which reminded me that there are a few key things I picked up on my learning journey that I should share. These are primarily focused around SignalR aspects.

An instance of a SignalR hub is created for each request

A single instance is not shared across all clients connected to that hub. This is much like a controller in ASP.NET MVC, whereby a new instance of the controller is created for each request. Hence in my SignalRDashboard project, the mechanism for holding a single instance of shared data for all connections is a singleton that gets passed into each hub's constructor.
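That shared-singleton pattern can be sketched in plain Javascript (an illustrative simulation with made-up names - `createHub`, `sharedState` - not the actual SignalR or SignalRDashboard code, which is server-side C#):

```javascript
// The singleton: created once, shared by every hub instance.
var sharedState = { connectedClients: 0 };

// Simulates the framework creating a fresh hub instance per request,
// with the same shared state passed into each "constructor".
function createHub(state) {
    return {
        onConnected: function () {
            state.connectedClients += 1;
            return state.connectedClients;
        }
    };
}

// Two "requests" -> two separate hub instances, but one shared counter.
var hubForRequest1 = createHub(sharedState);
var hubForRequest2 = createHub(sharedState);
hubForRequest1.onConnected(); // sharedState.connectedClients is now 1
hubForRequest2.onConnected(); // sharedState.connectedClients is now 2
```

The point being: any state you want all connections to see must live outside the hub instances themselves.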

Set up callback methods in Javascript for SignalR hubs BEFORE starting the SignalR connection

To open a SignalR connection, you use the following:
    $.connection.hub.start();
To be able to receive messages from the server-side SignalR hub in client-side Javascript, you have to hook up a method to be called. You'd do this using something like:
    var hub = $.connection.myDemoHub;
    hub.client.updateMeWhenYouHaveUpdates = function(message) {
        // Do something useful with what we've just received from the server
    };
The important thing to remember is that once you've started the SignalR connection, you can't hook up these callbacks. So make sure they are done BEFORE you start the connection. The correct ordering would be:
    var hub = $.connection.myDemoHub;
    hub.client.updateMeWhenYouHaveUpdates = function(message) {
        // Do something useful with what we've just received from the server
    };
    $.connection.hub.start();

You can connect to multiple SignalR hubs over the same connection

This is great as it means you can connect to multiple hubs easily - all you need to do is set up the callback methods in Javascript before you open the connection, as per the previous point.
    var hub1 = $.connection.myDemoHub;
    hub1.client.updateMeWhenYouHaveUpdates = function(message) {
        // Do something useful with what we've just received from the server
    };

    var hub2 = $.connection.myOtherHub;
    hub2.client.callMePlease = function(message) {
        // Do something useful with what we've just received from the server
    };
    $.connection.hub.start();
In the SignalRDashboard project, you'll see this being handled in dashboard.js as follows:
  • each Angular 2 dashboard component registers itself with dashboard.js when it is constructed
  • once the component has been initialised (ngOnInit event), it lets dashboard.js know that it's finished and registration is completed
  • once all components have completed registration, then the initialiseComponents() method is called which then calls in to each registered component (each must expose a setupHub() method) so they can set up the callbacks they're interested in from the hubs they use
  • finally, the connection is then started
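The flow above can be sketched in a few lines of plain Javascript (a simplified illustration - the real implementation lives in dashboard.js in the SignalRDashboard repo, and the names here are not the actual ones):

```javascript
var registeredComponents = [];
var completedCount = 0;
var connectionStarted = false;

// Called from each component's constructor.
function registerComponent(component) {
    registeredComponents.push(component);
}

// Called from each component's ngOnInit.
function registrationComplete() {
    completedCount += 1;
    if (completedCount === registeredComponents.length) {
        initialiseComponents();
    }
}

function initialiseComponents() {
    // Each component hooks up the hub callbacks it's interested in...
    registeredComponents.forEach(function (component) {
        component.setupHub();
    });
    // ...and only THEN is the connection started
    // ($.connection.hub.start() in the real code).
    connectionStarted = true;
}

// Usage: two fake components registering, then completing registration.
var setupOrder = [];
var componentA = { setupHub: function () { setupOrder.push('A'); } };
var componentB = { setupHub: function () { setupOrder.push('B'); } };
registerComponent(componentA);
registerComponent(componentB);
registrationComplete();
registrationComplete(); // all done -> hubs set up, then connection starts
```

The key design point is the same as the previous section: every `setupHub()` runs before the connection is opened.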

The basics are covered well in the official SignalR site, but the points above were some of the main things that I found myself looking a bit deeper for.

Friday, 7 October 2016

Real-time SignalR and Angular 2 dashboard

tl;dr Check out my new SignalRDashboard GitHub repo
Followup post: Getting started with SignalR

For a while now, I've had that burning curiosity to do more than just the bit of hacking around with SignalR that I'd previously done. I wanted to actually start building something with it. If you're not familiar with it, SignalR is a:

...library for ASP.NET developers that makes developing real-time web functionality easy.

It provides a simple way to have two-way communication between the client and server; the server can push updates/data to any connected client. There are a number of great starting examples out there, but to give a very quick overview, a typical basic example is web-based chat functionality. When a user navigates to the chat page using their browser-of-choice, a connection is made to a SignalR "hub" on the server, using Javascript. This connection provides the channel for bi-directional client-server communication. When a user enters some text in the UI and presses send, a message is sent across this channel to the server, invoking a method on the hub. In the case of a basic chat example, this then broadcasts the message out to all the clients who are connected to that hub, which results in a Javascript method on the client being called to then render that message out to each user's screen.
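The broadcast flow described above can be simulated in a few lines of plain Javascript (an illustrative sketch only - this is not the SignalR API, and the function names are made up):

```javascript
// "Connected clients" as seen by the server-side hub.
var connectedClients = [];

// Simulates a browser opening a connection and hooking up a
// client-side callback (like hub.client.xyz in SignalR).
function connect(onMessageReceived) {
    var client = { received: [], onMessageReceived: onMessageReceived };
    connectedClients.push(client);
    return client;
}

// Simulates the server-side hub method: broadcast to ALL clients.
function sendMessage(message) {
    connectedClients.forEach(function (client) {
        client.received.push(message);
        if (client.onMessageReceived) {
            client.onMessageReceived(message);
        }
    });
}

// Two users connect; one message reaches both of them.
var alice = connect();
var bob = connect();
sendMessage('hello everyone');
```

After `sendMessage` runs, both `alice.received` and `bob.received` contain the message - which is exactly the chat behaviour SignalR gives you without any polling.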

No need to make AJAX requests to send data to the server. No need to make polling-style AJAX requests for updates. SignalR handles the approach used to maintain the communication channel between the client browser and the server, depending on the functionality supported by the browser. It also handles the nuts and bolts of how to broadcast out to any connected clients.

Enter the dashboard

In the dev room I work in, we've had a large screen showing a dashboard of metrics for years - evolving over time as we realise the value of new metrics. I decided that a great learning project to work on in my spare time would be to create a real-time web-based replacement (the original one was a WPF app). I'm a big fan of giving visibility of relevant and timely, high-level application metrics to the business. Current CI build broken? Show it. Recent system error rates? Show it. How many stories do the test team have left to accept? Show it. That ability to take a quick glance, and get a quick feel for the current state of play across a wide area of the business, is invaluable. It's also handy to have things flash and/or play a sound, when there's something of particular importance to flag up.

From a technical perspective, I set myself a number of goals I wanted to achieve:

  • a centralised dashboard accessible across all areas of the business (it's an ASP.NET MVC 5 web app...job done!)
  • no matter how many clients are connected to the dashboard, I don't want to increase the load on the systems/services that the dashboard is drawing data from
  • try out a new (to me) Javascript framework to aid the creation of slick, dynamic views
  • create a dashboard framework that's easy to extend, with some demo dashboard components, so hopefully others interested in the technology can find it useful and, better still, put it to use in their own environments

Moar learning!

AngularJS has also been on my radar for a while, and with Angular 2 coming along as the successor to Angular 1, it seemed like a chance to have a play around with that too. When I first started hacking about with it, an early beta version was available which I started off using. More recently, the Final Release has become available - so I have since upgraded to that, patching up where there were breaking changes.

GitHub repo

The result is a cunningly named (does-what-it-says-on-the-tin) SignalRDashboard project GitHub repo. There are a few demo components as examples, and I plan to add to those as and when I have time. It is still very much a work in progress and evolving!

Here's a quick screenshot with some of the demo components currently written (all randomly generated data, refreshing every 15 seconds):

As always, all feedback is welcome - if you do find it useful/end up using it, drop me a message as it would be great to know!

Tuesday, 12 August 2014

dm_exec_query_plan returning NULL query plan

I recently hit a scenario (SQL Server 2012 Standard, 11.0.5058) where I was trying to pull out the execution plan for a stored procedure from the plan cache, but the following query was returning a NULL query plan:
SELECT plan_handle,usecounts, cacheobjtype, objtype, size_in_bytes, text, 
    qp.query_plan, tqp.query_plan AS text_query_plan
FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
 CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
 CROSS APPLY sys.dm_exec_text_query_plan(plan_handle, NULL, NULL) tqp
WHERE  text LIKE '%MyStoredProcedure%'
 AND objtype = 'Proc'
Each time I ran the stored procedure, the usecounts was incrementing, but I just could not get the query plan to be returned. Initially I thought I'd found the answer on another blog post; however, dm_exec_text_query_plan also returned NULL for the plan handle, so that was a dead end for this scenario. A bit more digging around and I came across a question on StackOverflow that was pretty much the scenario I was experiencing - my stored procedure had a conditional statement that wasn't being hit based on the parameters I was supplying. I temporarily removed the IF condition, ran it again and hey presto, this time an execution plan WAS returned. Reinstating the condition then, sure enough, made it no longer return the plan via `dm_exec_query_plan`. I tried to create a simplified procedure to reproduce it, with multiple conditions inside that weren't all hit, but a query plan was successfully returned when I tested it - so it wasn't as straightforward as just having multiple branches within a procedure.

I was just starting to suspect it was something to do with temporary table jiggery-pokery within the conditional statement, and was putting together a very simplified repro, when I came across a forum post describing pretty much exactly the scenario I was hitting. I carried on with my ultra-simplified repro example, which shows the full scope/impact of this issue (see below). As noted in that forum post, it's an issue that occurs when using a temp table in this context, but table variables do NOT result in the same behaviour (i.e. switching to a table variable instead of a temp table did, sure enough, result in a query plan being returned by dm_exec_query_plan). N.B. It goes without saying, this is not an endorsement for just blindly switching to table variables!
-- 1) Create the simple repro sproc
CREATE PROCEDURE ConditionalPlanTest 
 @Switch INTEGER
AS
BEGIN
 CREATE TABLE #Ids (Id INTEGER PRIMARY KEY)
 DECLARE @Count INTEGER

 IF (@Switch > 0)
  BEGIN  
   INSERT INTO #Ids (Id) VALUES (1)
  END 

 IF (@Switch > 1)
  BEGIN
   INSERT #Ids (Id) VALUES (2)
  END

 SELECT * FROM #Ids
END
GO

-- 2) Run it with a value that does NOT result in all conditions being hit
EXECUTE ConditionalPlanTest 1
GO

-- 3) Check plan cache - no query plan or text query plan will be returned, 
--    usecounts = 1
SELECT plan_handle,usecounts, cacheobjtype, objtype, size_in_bytes, text, 
    qp.query_plan, tqp.query_plan AS text_query_plan
FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
 CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
 CROSS APPLY sys.dm_exec_text_query_plan(plan_handle, NULL, NULL) tqp
WHERE text LIKE '%ConditionalPlanTest%'
 AND objtype = 'Proc'
GO

-- 4) Now run it with a different parameter that hits the 2nd condition
EXECUTE ConditionalPlanTest 2
GO

-- 5) Check the plan cache again - query plan is now returned and 
--    usecounts is now 2.
SELECT plan_handle,usecounts, cacheobjtype, objtype, size_in_bytes, text, 
    qp.query_plan, tqp.query_plan AS text_query_plan
FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
 CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
 CROSS APPLY sys.dm_exec_text_query_plan(plan_handle, NULL, NULL) tqp
WHERE text LIKE '%ConditionalPlanTest%'
 AND objtype = 'Proc'
GO

-- 6) Recompile the sproc
EXECUTE sp_recompile 'ConditionalPlanTest'
GO

-- 7) Confirm nothing in the cache for this sproc
SELECT plan_handle,usecounts, cacheobjtype, objtype, size_in_bytes, text, 
    qp.query_plan, tqp.query_plan AS text_query_plan
FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
 CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
 CROSS APPLY sys.dm_exec_text_query_plan(plan_handle, NULL, NULL) tqp
WHERE text LIKE '%ConditionalPlanTest%'
 AND objtype = 'Proc'
GO

-- 8) This time, run straight away with a parameter that hits ALL conditions
EXECUTE ConditionalPlanTest 2
GO

-- 9) Check the plan cache again - query plan is returned and usecounts=1.
SELECT plan_handle,usecounts, cacheobjtype, objtype, size_in_bytes, text, 
    qp.query_plan, tqp.query_plan AS text_query_plan
FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
 CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
 CROSS APPLY sys.dm_exec_text_query_plan(plan_handle, NULL, NULL) tqp
WHERE text LIKE '%ConditionalPlanTest%'
 AND objtype = 'Proc'
GO

-- 10) Now change the sproc to switch from temp table to table variable
ALTER PROCEDURE ConditionalPlanTest 
 @Switch INTEGER
AS
BEGIN
 DECLARE @Ids TABLE (Id INTEGER PRIMARY KEY)
 DECLARE @Count INTEGER

 IF (@Switch > 0)
  BEGIN  
   INSERT INTO @Ids (Id) VALUES (1)
  END 

 IF (@Switch > 1)
  BEGIN
   INSERT @Ids (Id) VALUES (2)
  END

 SELECT * FROM @Ids
END
GO

-- 11) Execute the sproc with the parameter that does NOT hit all the conditions
EXECUTE ConditionalPlanTest 1
GO

-- 12) Check the plan cache - query plan is returned, usecounts=1
SELECT plan_handle,usecounts, cacheobjtype, objtype, size_in_bytes, text, qp.query_plan, 
    tqp.query_plan AS text_query_plan
FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
 CROSS APPLY sys.dm_exec_query_plan(plan_handle) qp
 CROSS APPLY sys.dm_exec_text_query_plan(plan_handle, NULL, NULL) tqp
WHERE text LIKE '%ConditionalPlanTest%'
 AND objtype = 'Proc'
GO

-- 13) CLEANUP
DROP PROCEDURE ConditionalPlanTest
GO

Thursday, 19 September 2013

70-486 Developing ASP.NET MVC 4 Web Applications

Last week I passed the 70-486 Microsoft exam - Developing ASP.NET MVC 4 Web Applications, so thought I'd knock up a quick post on my experience and what materials I found useful as part of my preparation.

Going into this exam, I had just over a year and a half's commercial experience using ASP.NET MVC 2 & 3. Before that, I had experience in ASP.NET and prior to that, when dinosaurs still roamed, classic ASP. It's no secret I'm a bit of a data nerd (a lot of my blog is db related, SQL Server, MongoDB...) and I have tended to be backend focused, but I wanted to even out the balance a bit by pushing myself in this area, and in JS/HTML5/CSS3. I took the opportunity to upgrade the web solution at my current company from MVC 3 to MVC 4, and being able to do that at the start of my preparation was really useful - this is how I like to learn, by actually "getting stuff done". That is the obvious, number 1 recommendation - do not just read and swot up on the theory, actually "do". My brain likes me to be stupid and make practical mistakes - when I then work out how I've done something daft, it reinforces the learning and makes the knowledge stick.

Preparation

The first thing I was disappointed to find was that there is (as of time of writing) no Microsoft exam prep book for this exam. There is one due out, I believe, at the beginning of October 2013 - Exam Ref 70-486: Developing ASP.NET MVC 4 Web Applications by William Penberthy (ISBN-10: 0735677220 | ISBN-13: 978-0735677227) - so I obviously can't comment on how good that book is. The book I went with was Professional ASP.NET MVC 4 from Wrox, by Jon Galloway (Twitter), Phil Haack (Twitter) and K. Scott Allen (Twitter), with a foreword by Scott Hanselman (Twitter). While the book alone isn't enough for the exam, for me it gave good coverage of quite a few areas I needed, so it is definitely worth a read.

Having a Pluralsight subscription was good and while not necessarily geared towards the exam, you can never go wrong with a bit of Pluralsight training. I went through a number of the MVC courses - ASP.NET MVC Fundamentals, ASP.NET MVC 2.0 Fundamentals, MVC 4 Fundamentals and Building Applications with MVC 4. One of the things I like about Pluralsight is you can control the playback speed so for most of the stuff I already knew, I glossed over at a faster speed. I also went through the "Building Web Apps with ASP.NET Jump Start" training on the Microsoft Virtual Academy (Scott Hanselman, Jon Galloway and Damian Edwards (Twitter)) - that was quite entertaining!

I found some great study guide blog posts that collated a lot of links to good material:

These give a lot of useful links to MSDN articles, MS resources, blogs, interesting StackOverflow questions etc.

Last but definitely not least, there was my practical setup - Windows 8 on a VM, with Visual Studio 2012 and a Windows Azure account. To reiterate what I said before - the best way to learn is to do...and make daft mistakes. I started work on a new web app from scratch, with a real-world mindset (i.e. writing production-worthy code), putting to good use the new things I was learning. I also had a scratch-pad web app where I would just dump rough code to try out short, simple snippets.

Bottom line

Overall, I put a lot of time and effort into preparation for this exam and it paid off. The biggest benefit for me is what I learned along the way and the challenge it gave me.

Wednesday, 5 June 2013

SQL Server 2008 R2 in-place upgrade error

Today I encountered the following error during the process of performing an in-place upgrade of a SQL Server 2008 instance to 2008 R2:
The specified user 'someuser@somedomain.local' does not exist
I wasn't initially sure what that related to - I hadn't specified that account during the upgrade process - so I went looking in the Services management console. The SQL Server service for the instance I was upgrading was configured to "Log On As" that account, and it had been happily running prior to the upgrade.

This domain account did exist but to sanity check, the first thing I did was switched the service to use another domain account that also definitely existed and retried the Repair process. Same error. Then I spotted further down the list, another service had specified an account in the "somedomain\someuser" form. So I switched the SQL Server instance service to specify the account in that form (Down-Level Logon Name) instead of the UPN (User Principal Name) format (reference).

Bingo.

While I was waiting for the Repair process to run through again, I carried on searching and found this question on the MSDN Forums. The very last answer there confirmed it. The SQL Server 2008 R2 upgrade does NOT like user accounts in the UPN format.

Thursday, 21 March 2013

.NET Project File Analyser

I've started knocking together a little app to automate the process of trawling through a folder structure and checking .NET project files (C# .csproj currently) to extract some info from them. The DotNetProjectFileAnalyser repo is up on GitHub. I've been working against Visual Studio 2010 project files, but it could well work for other versions, assuming the project file structure is the same for the elements it currently looks at - I just haven't tried as yet.
Currently, it will generate an output file detailing for each .csproj file it finds:
  • Build output directory (relative and absolute) for the configuration/platform specified (e.g. Debug AnyCpu). Useful if you want to find which projects you need to change to build to a central/common build directory.
  • List of all Project Reference dependencies (as opposed to assembly references). Useful if you want to find the projects that have Project References so you can switch them to assembly references.
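To give a flavour of the kind of extraction involved, here's a naive sketch in Javascript (regex-based, with hypothetical names, and assuming a simple project file layout - this is NOT how the actual tool, a C# app, is implemented):

```javascript
// Naive extraction of build output paths and Project References
// from the raw XML of a .csproj file. For illustration only -
// a real implementation would parse the XML properly.
function analyseProjectXml(xml) {
    var outputPaths = [];
    var projectReferences = [];
    var match;

    var outputPathPattern = /<OutputPath>([^<]+)<\/OutputPath>/g;
    while ((match = outputPathPattern.exec(xml)) !== null) {
        outputPaths.push(match[1]);
    }

    var projectRefPattern = /<ProjectReference\s+Include="([^"]+)"/g;
    while ((match = projectRefPattern.exec(xml)) !== null) {
        projectReferences.push(match[1]);
    }

    return { outputPaths: outputPaths, projectReferences: projectReferences };
}

// A minimal, made-up project file fragment to run it against.
var sample =
    '<Project>' +
    '<PropertyGroup><OutputPath>bin\\Debug\\</OutputPath></PropertyGroup>' +
    '<ItemGroup><ProjectReference Include="..\\Core\\Core.csproj" /></ItemGroup>' +
    '</Project>';
var result = analyseProjectXml(sample);
// result.outputPaths       -> ['bin\\Debug\\']
// result.projectReferences -> ['..\\Core\\Core.csproj']
```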

Usage

DotNetProjectFileAnalyser.exe {RootDirectory} {Configuration} {Platform}

{RootDirectory} = start directory to trawl for .csproj files (including subdirectories)
{Configuration} = as defined in VS, e.g. Debug, Release
{Platform} = as defined in VS, e.g. AnyCpu
Example:
DotNetProjectFileAnalyser.exe "C:\src\" "Debug" "AnyCpu"

More stuff will go in over time, including the ability to automatically update .csproj files as well, to save a lot of manual effort.