
Using GitHub Two Factor Authentication (2FA) with TeamCity

If you have two factor authentication (2FA) set up in GitHub and you also want to use TeamCity, the easiest way to get the two working together is to set up SSH keys for accessing the GitHub repository.

The first step is to follow this guide to creating SSH keys for GitHub. Remember the passphrase you use when creating the key; you’ll need it later.

Once you have created your keys and added the public key to your GitHub account, you can then follow this guide for managing SSH keys in TeamCity.

Finally, when setting up your VCS Root in TeamCity, set the Fetch URL to the SSH variant. You can find this on your project page on GitHub, towards the bottom of the right sidebar.

You may need to click the “SSH” link below the URL if it does not already show the SSH URL.
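For reference, the SSH variant of the fetch URL has a different shape from the HTTPS one. It looks something like this (‹user› and ‹repo› are placeholders):

git@github.com:‹user›/‹repo›.git

whereas the HTTPS variant looks like https://github.com/‹user›/‹repo›.git.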

Back in TeamCity you can paste this URL into the Fetch URL box in the general settings. Further down the form, in the Authentication Settings section, you can specify the SSH key you uploaded earlier.

Selecting “Uploaded Key” changes the boxes below it. Select the key you uploaded earlier, set the user name to “git”, and enter the passphrase you used when you created the SSH key.

You should now be able to test the connection to see if all is well.


Debugging a process that cannot be invoked through Visual Studio.

Sometimes it is rather difficult to debug through Visual Studio directly, even though the project is right there in front of you. In my case I have an assembly that is invoked from a wrapper that is itself invoked from an MSBuild script. I could potentially get Visual Studio to invoke the whole pipeline, but it seemed less convoluted to attach the debugger to the running process and debug from there.

But what if the process is quite ephemeral? If the process starts up, does its thing, then shuts down, you might not have time to attach a debugger before the process has completed. Or perhaps the thing you are debugging is in the start-up code and there is no way to attach a debugger in time.

However, there is something that can be done (if you have access to the source code and can rebuild).

// Requires: using System; using System.Diagnostics; using System.Threading;
for (int i = 30; i >= 0; i--)
{
    Console.WriteLine("Waiting for debugger to attach... {0}", i);
    if (Debugger.IsAttached)
        break; // exit the countdown early once a debugger attaches
    Thread.Sleep(1000);
}

if (Debugger.IsAttached)
{
    Console.WriteLine("A debugger has attached to the process.");
    Debugger.Break(); // halt here so you can start stepping through
}
else
{
    Console.WriteLine("A debugger was not detected... Continuing with process anyway.");
}

You could get away with less code, but I like this because it is reasonably flexible and I get to see what’s happening.

First up, I set up a loop to count down thirty seconds to give me time to attach the debugger. On each iteration it checks whether the debugger is already attached and exits early if it is. (This is important, as otherwise you could attach the debugger and then get frustrated waiting for the process to continue.)

After the loop (regardless of whether it simply timed out or a debugger was detected) it does a final check and breaks the execution if a debugger is detected.

Each step of the way it outputs what it is doing to the console, so you can see when to attach the debugger and whether or not the debugger got attached.

My recommendation, if you want to use this code, is to put it in a utility class somewhere that you can call when needed, then take the call out afterwards.
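If you would rather not have to remember to take the call out again, one variation on that advice is to wrap the countdown in a utility method marked with the Conditional attribute so it is compiled out of release builds. This is a minimal sketch; the class and method names are my own invention:

using System;
using System.Diagnostics;
using System.Threading;

public static class DebugHelper
{
    // Calls to this method are compiled out unless the DEBUG symbol is
    // defined, so it cannot fire in a release build.
    [Conditional("DEBUG")]
    public static void WaitForDebugger(int seconds = 30)
    {
        for (int i = seconds; i >= 0; i--)
        {
            if (Debugger.IsAttached)
                break; // exit the countdown early once a debugger attaches
            Console.WriteLine("Waiting for debugger to attach... {0}", i);
            Thread.Sleep(1000);
        }

        if (Debugger.IsAttached)
        {
            Console.WriteLine("A debugger has attached to the process.");
            Debugger.Break();
        }
        else
        {
            Console.WriteLine("A debugger was not detected... Continuing with process anyway.");
        }
    }
}

The call site is then just DebugHelper.WaitForDebugger(); at the start of the code you want to inspect.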


Two Factor Authentication with GitHub and Visual Studio 2013

New job, new tools, new processes. In my new job we’re using GitHub for source control, and because the data is sensitive we’re also using two factor authentication. Because I develop with Visual Studio, that presents an interesting issue if you are using Visual Studio 2013’s built-in Git source control provider.

After turning on Two Factor Authentication, the next time you have to communicate with GitHub (e.g. pulling, pushing or syncing) it will pop up a dialog asking for your credentials, even if you had already entered them before turning on 2FA.

You get an error message that looks like this:

An error occurred. Detailed message: An error was raised by libgit2. Category = Net (Error).
Response status code does not indicate success: 401 (Authorization Required).

Entering your credentials won’t do you any good. It won’t work. It will just request them again, ad infinitum.

There is nowhere to enter the 2FA code, so you can’t authenticate yourself here.

However, you can go to GitHub and create a personal access token so that Visual Studio 2013 can access your repositories.

You can either drop down the menu on your avatar and go to “Settings”, then to “Personal Access Tokens” (link in the sidebar), or you can just go here.

Then click on “Generate New Token”. You’ll be asked for your credentials again just to be sure you are still you.

Once you’ve done that you’ll be taken to the page to create your token.

For what Visual Studio needs, the default permissions are fine. Also, give the token an appropriate name so it can be identified easily.

Then press “Generate token”.

You will then be taken back to the “Personal access tokens” page. This time there is a new token which you can use in Visual Studio. Be careful here: this is the one and only time you will be able to see this token, so copy it and keep it safe.

Back in Visual Studio try and sync the commits to GitHub. It will pop up the credentials dialog again. This time you are going to enter the token in the username box and leave the password box blank.

Then press OK.

Finally, your changes will sync with GitHub and you’ll get a success message.


Custom routing to support multi-tenancy applications

The company I currently work for has many brands, so it is looking for a website that can be restyled for each brand. It also wants the styling information to be managed through an admin area rather than having to go to the development team each time something needs to change.

To that end I have added some custom routing into the application to allow assets to be delivered via a controller yet have the URL look like a path to a file on disk.

In summary, the steps involved are:

  • Set up a custom route with a custom route handler
  • Build the logic in the custom route handler to ensure that the data is passed to the controller correctly.
  • Update the web.config file to tell IIS to allow certain paths through to ASP.NET MVC that would otherwise look like a static path.

Setting up the custom route

My custom route is inside an area to keep all tenant specific URLs separate from the rest of the application.

public override void RegisterArea(AreaRegistrationContext context)
{
    // The route name and URL pattern are reconstructed from the description
    // below: {tenant} captures the brand, {*contents} catches the rest of the path.
    context.MapRoute(
        "Tenant_default",
        "Tenant/{tenant}/{*contents}",
        new { action = "Index", controller = "Content" }
    ).RouteHandler = new TenantRouteHandler();
}

This means that the routing engine can extract the “tenant” from the URL, and it will also allow multiple path parts in the “contents” value. So, if the URL is something like http://example.com/Tenant/MyBrand/my/virtual/file/path/to/styles.css then the RouteValueDictionary will contain:

  • tenant = MyBrand
  • controller = Content
  • action = Index
  • contents = my/virtual/file/path/to/styles.css

However, we want to split up the contents into its individual components, so a TenantRouteHandler class is created to do that.

Build the custom route handler

Without going too much into what this class does in this specific instance (which isn’t relevant to the general concept), the basics are:

  • Create a class that implements IRouteHandler
  • In GetHttpHandler process the routing information to get what we want in a format that suits the application.
  • Create a regular IRouteHandler object (normally an MvcRouteHandler) and call its GetHttpHandler() method with the updated requestContext as I otherwise want the same functionality as a regular handler.
public class TenantRouteHandler : IRouteHandler
{
    private readonly Func<IRouteHandler> _routeHandlerFactory;

    public TenantRouteHandler()
    {
        _routeHandlerFactory = () => new MvcRouteHandler();
    }

    public TenantRouteHandler(Func<IRouteHandler> routeHandlerFactory)
    {
        _routeHandlerFactory = routeHandlerFactory;
    }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        ProcessRoute(requestContext); // does stuff to modify the request context
        IRouteHandler handler = _routeHandlerFactory();
        return handler.GetHttpHandler(requestContext);
    }
}

ProcessRoute(requestContext) is the specific implementation that extracts the information out of the route; I then add it back into the RouteValueDictionary so my controller can access it. It isn’t relevant to the general concept, so I’m not including it.

What I also do here (because ASP.NET MVC, despite being launched in 2009 to a fanfare of being easy to unit test, isn’t easy to unit test at all) is set up some functional hooks that I can use in unit testing. The default constructor simply sets up the strategy to return a regular MvcRouteHandler, and that’s the constructor that will be used in production. The other constructor is used in unit tests so that I can inject my own handler that doesn’t need tons of MVC infrastructure to be set up in advance.
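For example, a test can inject a factory returning a hand-rolled fake and assert that the handler produced is the one the fake supplied. This is a minimal sketch, assuming NUnit, that the fake types below (my own invention) suffice, and that ProcessRoute can cope with an empty RequestContext:

using System;
using System.Web;
using System.Web.Routing;
using NUnit.Framework;

public class TenantRouteHandlerTests
{
    private class FakeHttpHandler : IHttpHandler
    {
        public bool IsReusable { get { return false; } }
        public void ProcessRequest(HttpContext context) { }
    }

    private class FakeRouteHandler : IRouteHandler
    {
        public IHttpHandler Handler = new FakeHttpHandler();
        public IHttpHandler GetHttpHandler(RequestContext requestContext)
        {
            return Handler;
        }
    }

    [Test]
    public void GetHttpHandler_ReturnsHandlerFromInjectedFactory()
    {
        var fake = new FakeRouteHandler();
        var tenantHandler = new TenantRouteHandler(() => fake);

        var result = tenantHandler.GetHttpHandler(new RequestContext());

        // The handler should come from the injected factory, not an MvcRouteHandler.
        Assert.AreSame(fake.Handler, result);
    }
}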

So, the contents part of the route is now split out as we need it for the application.

Getting IIS to pass through these routes to ASP.NET MVC

Running the application at this point won’t return anything. In fact, IIS will throw up an error page saying it cannot find the file. The expected paths all look like static content, which IIS thinks it is best placed to deal with itself. However, the web.config can be updated to let IIS know that certain paths have to be passed through to ASP.NET MVC for processing.

Once the following is added to the web.config everything should work as expected.

      <!-- This is required for the multi-tenancy part so it can serve virtual files that don't exist on the disk -->
      <!-- This entry lives in <system.webServer><handlers>; the name, path and type
           attributes are reconstructed here, TransferRequestHandler being the standard
           way to hand such requests to the ASP.NET pipeline -->
      <add name="TenantHandler" path="Tenant/*" verb="GET"
           type="System.Web.Handlers.TransferRequestHandler"
           preCondition="integratedMode" />

And that’s it. Everything should work now. Anything in the Tenant/* path will be processed by ASP.NET MVC and the controller will decide what to serve up to the browser.
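For illustration, the controller on the receiving end might look something like this hypothetical sketch (the real implementation would pull the tenant’s assets from wherever the admin area stores them):

using System.Web.Mvc;

public class ContentController : Controller
{
    // {tenant} and {*contents} from the route arrive as action parameters
    public ActionResult Index(string tenant, string contents)
    {
        // Hypothetical: look up the asset for this tenant/path and serve it
        // with an appropriate MIME type.
        string body = "/* styles for " + tenant + " from " + contents + " */";
        return Content(body, "text/css");
    }
}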


Changing the default Assembly/Namespace on an MVC application


When getting an error message like this:

Compiler Error Message: CS0246: The type or namespace name 'UI' could not be found (are you missing a using directive or an assembly reference?)

when compiling Razor views after changing the namespace names in your project, check the web.config file in each of the view folders for a line that looks like this:

<add namespace="UI" />

And update it to the correct namespace, e.g.

<add namespace="‹CompanyName›.‹ProjectName›.UI" />

Full story

I guess this is something I’ve not done before, so it caught me out a little.

I’ve recently created a new project and to reduce the file paths I omitted information that is already implied by the parent directory.

So, what I ended up with is a folder called ‹CompanyName›/‹ProjectName› and in it a solution with a number of projects named UI or Core and so on. I then added a couple of areas and ran up the application to see that the initial build would work. All okay… Great!

I now have a source code repository laid out something like this:

    ‹CompanyName›/‹ProjectName›/
        ‹ProjectName›.sln
        Core/
        UI/
In previous projects I’d have the source set out more like this:

    src/
        ‹CompanyName›.‹ProjectName›.Core/
        ‹CompanyName›.‹ProjectName›.UI/
While this is nice and flat, it does mean that when you add in things like C# project files you get lots of duplication in the paths that are created, e.g. /src/‹CompanyName›.‹ProjectName›.Core/‹CompanyName›.‹ProjectName›.Core.csproj

Next up, I realised that the namespaces were out, as they were defaulting to the name of the C# project file. So I went into the project settings and changed the default namespace and assembly name so that they’d be fully qualified (just in case we ever take on a third party tool with similar names, we need to ensure they don’t clash in the actual code). I also went around the few code files that had been created so far and ensured their namespaces were consistent. (ReSharper is good at this: just press Alt-Enter on the namespace and it will correct it for you.)

I ran the application again and it immediately failed when trying to compile a Razor view with the following error message:

Compiler Error Message: CS0246: The type or namespace name 'UI' could not be found (are you missing a using directive or an assembly reference?)

Looking at the compiler output I could see where it was happening:

Line 25:     using System.Web.Mvc.Html;
Line 26:     using System.Web.Routing;
Line 27:     using UI;
Line 28:     
Line 29: 

However, it took me a little investigation to figure out where that was coming from.

Each Views folder in the application has something like this in it:

<system.web.webPages.razor>
    <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
    <pages pageBaseType="System.Web.Mvc.WebViewPage">
        <namespaces>
            <add namespace="System.Web.Mvc" />
            <add namespace="System.Web.Mvc.Ajax" />
            <add namespace="System.Web.Mvc.Html" />
            <add namespace="System.Web.Routing" />
            <add namespace="UI" />
        </namespaces>
    </pages>
</system.web.webPages.razor>

There was my old “UI” namespace, which I’d replaced everywhere else. It was these settings that were generating the using statements in the Razor precompiled source.

The solution was simple: replace that “UI” namespace with the fully qualified version.

        <add namespace="‹CompanyName›.‹ProjectName›.UI" />

And now the application works!


The difference between & and && operators

Here is a bit of code that was failing:

if (product!=null & !string.IsNullOrEmpty(product.ProductNotes))

If you look closely you can see it uses just the single ampersand operator, not the usual double ampersand operator.

There is a significant functional difference between the two that, under normal circumstances, may not be obvious when performing logic operations on business rules. However, in this case it becomes significant.

Both operators produce a Boolean result, and both conform to this truth table:

LHS (Left-hand side)    RHS (Right-hand side)    Result
False                   False                    False
True                    False                    False
False                   True                     False
True                    True                     True

But there is a difference in how the two get there. For a result to be true, both LHS AND RHS must be true; therefore, if the LHS is false, the value of the RHS is irrelevant as the answer will always be false.

The single ampersand operator (&) evaluates both sides of the operator before arriving at its answer.

The double ampersand operator (&&, also known as the conditional-AND operator) evaluates the RHS only if the LHS is true. It short-circuits the evaluation: the RHS is skipped whenever it cannot affect the result. This means you can put the quick check on the left and the lengthy calculation on the right, and the lengthy calculation only ever runs when it is actually needed.

In the case above, the code is checking that product is not null AND that product.ProductNotes is not null or empty. The RHS cannot safely be evaluated when the LHS is false. The single ampersand operator therefore causes a NullReferenceException when product is null, simply because it evaluates both sides of the operator regardless.
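Here is a minimal sketch demonstrating the difference (the Product class below is a stand-in for the real one):

using System;

class Product
{
    public string ProductNotes;
}

class ShortCircuitDemo
{
    static void Main()
    {
        Product product = null;

        // Safe: && short-circuits, so the RHS never runs when product is null.
        if (product != null && !string.IsNullOrEmpty(product.ProductNotes))
            Console.WriteLine("Product has notes.");

        // Throws NullReferenceException: & evaluates both sides regardless.
        if (product != null & !string.IsNullOrEmpty(product.ProductNotes))
            Console.WriteLine("Never reached.");
    }
}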



Setting up Code Coverage in TeamCity

The project that I’m working on has had a rather spotty history with unit tests until now. There would be periods of people actually writing them, but then they would come unstuck when a random test failed, and eventually the tests would be allowed to rot a bit, until something spurred everyone back to life again.

However, unit testing is something we, as a team, are getting better at, and to aid the continual improvement of the code base I turned on code coverage metrics in TeamCity earlier this week and set it to fail the build if the metrics fell below a certain threshold. That way, if test coverage drops, the build fails; and since we’re also driving towards a continuous delivery process, failing builds mean we can’t deploy either. This is good: it means we cannot deploy poor quality code.

What does code coverage on a build server get you?

Put simply, it shows you what code was covered during the automated tests run on the build server. You can then use this information to direct focus on areas that are not sufficiently tested to drive up code quality.

Turning on code coverage means that TeamCity will be able to display those metrics to you after the build is complete. You can drill into the metrics to get a better understanding of what code your tests are exercising and, perhaps more importantly, what code isn’t being exercised.

When TeamCity builds a project it will run several build steps, at least one of which should be some sort of testing suite; you may have more if you have a mix of test types (e.g. unit tests and integration tests). The code coverage metrics are generated during the testing steps. At the end, you can examine the results just like any other aspect of the build.

This is an example of what the “overview” looks like after a build is run with code coverage turned on:

Nightly Build Overview

I like that it also shows the differences between this and the previous build as well as the overall numbers.

The statistic that I think is probably the most important is the statements percentage, as it is less affected by splitting code out into separate classes or methods. It also means that you get a better idea of coverage if you have larger methods with high cyclomatic complexity or large classes.

Incidentally, splitting large methods and classes into smaller ones should make things easier to test, but I’d rather that didn’t distort the metric we are using for code coverage. The number of statements remains relatively similar even after splitting stuff out, and since there are so many more statements than methods or classes, even a slight change in the count has a minimal impact on the percentage metric. For example, splitting a 50-statement method in two barely moves a total of, say, 2,000 statements, but it noticeably changes a method count in the low hundreds.

If you want to drill into the detail, you can go into “full report” and see the break down by assembly. For example:

Overall Coverage Summary

As you can see it gives a breakdown of coverage by assembly. Each of the assembly names is a link allowing you to drill in further.

Overall Coverage Summary for Assembly

Eventually you can get as far down as the individual file and this is where the power really lies. When viewing an individual file you can see individual lines of code that have been exercised in some way (in green) and ones that have not (in red).

Coverage Summary for Type

It is worth noting that just because a line is green it does not mean it was exercised well. Code Coverage is just a metrics tool. It just says something was done, not that it was correct. This is vitally important to remember. However, it is still a valuable tool to be able to see areas where there is no code coverage at all.

Setting up Code Coverage in TeamCity

On the TeamCity dashboard go to the project you are interested in, click the down arrow next to the build name and select “Edit Settings” from the menu.

Team City Dashboard with Build context menu

On the Build Configuration Settings page, click on “Build Steps” in the side bar.

Build Configuration Settings sidebar

Then, find the step that performs the tests, and click edit.

Towards the bottom of the settings is an entry marked “.NET Coverage tool”. Select “JetBrains dotCover” as this is built into TeamCity. You can of course choose one of the other coverage options, but you’ll have to configure that yourself.

.NET Coverage settings

You don’t need to enter a path to dotCover unless you are using a different version to the one TeamCity has built in. (We’re using TeamCity 8.1, which has an older version of dotCover, but it works well enough for what we want.)

In the filters section you need to put in details of the code you want to cover. I’ve just put in a line to include all our assemblies, then some lines to exclude specific ones. I found that it sometimes picks up assemblies that were brought in via NuGet and shouldn’t be covered, as they will skew your statistics. You can see the assemblies it picked up automatically by drilling into the code coverage from the overview page for the build, as I showed above. I also excluded assemblies that are purely generated code, such as the assembly that contains just an EDMX and the code auto-generated from it.

The basic format of the filters is:
+:assembly=<assembly-name> to include an assembly, wild cards are supported.
-:assembly=<assembly-name> to exclude an assembly, wild cards are supported.
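For example, a filter set following that format might look like this (the assembly names here are hypothetical):

+:assembly=MyCompany.MyProject.*
-:assembly=MyCompany.MyProject.Tests
-:assembly=MyCompany.MyProject.DataModel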

However, you can filter on more than just the assembly level. More detail is available from TeamCity’s online help.

Once you’ve set this up hit “Save” at the bottom of the page.

You can now perform a test build to see how it works by pressing “Run…” at the top of the page.

When the run is complete you can view the results by clicking on the build on the main page.

Getting to the build report

From the overview you can drill in and see the assemblies that were picked up, as shown above. If there are any assemblies that shouldn’t be there (e.g. third party libraries) you can then add them to the filter in the build step configuration to make sure they are excluded from the code coverage reports.

Finally, to make sure you don’t slip backwards and that code coverage stays above a certain threshold, you can add a failure condition to the Build Configuration Settings.

Go back to the build settings page, and then to the “Failure Conditions” section.

Failure Conditions section

Then press “+ Add failure condition” and a dialog will pop-up.

Add Failure Condition popup

Select “Fail build on metric change” from the drop down.

Then, in the “Fail build if” section, build the rule from the drop downs so it reads: its “percentage of statement coverage” compared to “constant value” is “less” than <percentage-value> “default units for this metric”.

For <percentage-value> take the value from your first reference build, subtract about half a percent, and round down. There will naturally be minor fluctuations up and down on each build, and you don’t want to be too strict.

Then press “Save”. The page will refresh and you’ll see the failure condition added to the table at the bottom of the page.

Additional Failure Conditions

Now each time this build is run it will run code coverage and if it drops below the threshold it will fail the build.