In the workplace

How not to interview a candidate

I recently started a new job. I interviewed for a few companies and got a few offers on the table. However, one interview stuck out for me because, well, let me tell you the story…

I interviewed for a company and the first interview went really well. I liked the people who were interviewing me and the whole thing was very positive. They also liked me and invited me back for a second interview. They explained at the end of the first interview that the second interview was going to involve some technical competency questions, whiteboarding and so on. I was looking forward to it.

A few days later the recruitment agent called to confirm the time for the second interview and I duly turned up at the appropriate time. The second interview had a different panel to the first, which isn’t unusual. However, the format was very similar to the first interview.

Towards the end of the interview one of the interviewers asked me about Scottish Developers, the user group I help run. This is not unusual: companies are often keen to see that kind of passion for the craft and treat it as a positive. Not this chap. I was subjected to several minutes of questions about how my involvement could negatively impact the company and what I’d be doing to mitigate that.

I have found that negatively framed questions like these misunderstand what running a user group involves. I run Scottish Developers as part of my own personal training and skills improvement programme, and it benefits many people around me too. The activities are organised in my own time and well in advance. To ask how they might negatively affect the company, and to seek further assurances on top of that, displays a level of control that would be unacceptable, not to mention detrimental to them, as I would not be able to keep my skills as up to date as I’d like. It is something that benefits them more than it does me: I do it because I enjoy software development and I want to continually improve, and the side effect is that they get an employee who is highly skilled and highly motivated. To cause demotivation in that way would be detrimental to both of us.

Suffice it to say, that line of questioning left a bitter taste in my mouth.

Then they asked me what I thought of the interview process, so I mentioned in my answer that I had thought the second interview was going to be the more technical one, with some whiteboarding, problem-solving, and technical questions.

“Where did you hear that from?” was the unexpected, almost barked, response.

“Well, I was told…” was as far as I got in a reply.

“Was it the [name-of-recruitment-agent] at [recruitment-agents-firm]?”

“N…” again I didn’t get a chance to respond.

“I’m going to have to have words with her. That’s not acceptable.”

And that was when a previously very positive impression of the company turned into a firm and definite rejection.

He refused to listen to my answer. In fact, he refused to let me speak. He made a wild assumption, which was wrong, and wouldn’t let me correct it. Finally, he set out a course of action based on that incorrect assumption: chastising the recruitment agent for something that was clearly not her fault.

Some interviewers forget that an interview is a two-way process. The company wants to see what the candidate is like, and the candidate wants to see what the company is like. For an interviewer to act in such a way was quite revealing.

In one respect I was glad that he reacted this way at the interview stage: it saved me from finding out what the company was like after I had actually started the job.

Software Development

Custom routing to support multi-tenancy applications

The company I currently work for has many brands, so they are looking for a website that can be restyled for each brand. They also want the styling information to be managed through an admin area rather than having to go to the development team each time they want to change something.

To that end I have added some custom routing into the application to allow assets to be delivered via a controller yet have the URL look like a path to a file on disk.

In summary, the steps involved are:

  • Set up a custom route with a custom route handler.
  • Build the logic in the custom route handler to ensure that the data is passed to the controller correctly.
  • Update the web.config file to tell IIS to allow certain paths through to ASP.NET MVC that would otherwise look like requests for static files.

Setting up the custom route

My custom route is inside an area to keep all tenant specific URLs separate from the rest of the application.

public override void RegisterArea(AreaRegistrationContext context)
{
    // The route name here is illustrative. The pattern extracts the {tenant}
    // segment and a catch-all {*contents} for the rest of the path.
    context.MapRoute(
        "Tenant_content",
        "Tenant/{tenant}/{*contents}",
        new { action = "Index", controller = "Content" }
    ).RouteHandler = new TenantRouteHandler();
}

This means that the routing engine can extract the “tenant” from the URL, and the catch-all allows multiple path parts in the “contents” value. So, if the URL path is /Tenant/MyBrand/my/virtual/file/path/to/styles.css then the RouteValueDictionary will contain:

  • tenant = MyBrand
  • controller = Content
  • action = Index
  • contents = my/virtual/file/path/to/styles.css

However, we want to split up the contents into its individual components, so a TenantRouteHandler class is created to do that.

Build the custom route handler

Without going too much into what this class does in this specific instance (which isn’t relevant to the general concept), the basics are:

  • Create a class that implements IRouteHandler.
  • In GetHttpHandler, process the routing information to get what we want into a format that suits the application.
  • Create a regular IRouteHandler object (normally an MvcRouteHandler) and call its GetHttpHandler() method with the updated requestContext, as I otherwise want the same functionality as a regular handler.

public class TenantRouteHandler : IRouteHandler
{
    private readonly Func<IRouteHandler> _routeHandlerFactory;

    // Production constructor: defaults to a regular MvcRouteHandler.
    public TenantRouteHandler()
    {
        _routeHandlerFactory = () => new MvcRouteHandler();
    }

    // Test constructor: allows a different handler to be injected.
    public TenantRouteHandler(Func<IRouteHandler> routeHandlerFactory)
    {
        _routeHandlerFactory = routeHandlerFactory;
    }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        ProcessRoute(requestContext); // does stuff to modify the request context
        IRouteHandler handler = _routeHandlerFactory();
        return handler.GetHttpHandler(requestContext);
    }
}
ProcessRoute(requestContext) is the specific implementation that extracts the information out of the route and adds it back into the RouteValueDictionary so my controller can access it. The detail isn’t relevant to the general concept, so I’m not including it here.
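
Purely as an illustration of the shape such a method might take (this is a hypothetical sketch, not the real implementation), it could split the catch-all value and put the parts back into the route values:

private void ProcessRoute(RequestContext requestContext)
{
    // Hypothetical example: split the catch-all "contents" value into its
    // individual path parts and add them back into the RouteValueDictionary.
    var routeValues = requestContext.RouteData.Values;
    var contents = (routeValues["contents"] as string) ?? string.Empty;
    string[] parts = contents.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
    routeValues["contentParts"] = parts; // key name chosen for illustration only
}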

What I also do here (because ASP.NET MVC, despite being launched in 2009 to a fanfare of being easy to unit test, isn’t easy to unit test at all) is set up some functional hooks that I can use in unit testing. The default constructor simply sets up the strategy to return a regular MvcRouteHandler, and that’s the constructor that will be used in production. The other constructor is used in unit tests so that I can inject my own handler that doesn’t need tons of MVC infrastructure to be set up in advance.
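
As a sketch of how that second constructor might be used in a test (assuming NUnit and Moq, which the post doesn’t specify, and with test names of my own choosing):

using System.Web;
using System.Web.Routing;
using Moq;
using NUnit.Framework;

[TestFixture]
public class TenantRouteHandlerTests
{
    [Test]
    public void GetHttpHandler_DelegatesToTheInjectedHandler()
    {
        // Stub out the inner route handler so no real MVC infrastructure is needed.
        var fakeHttpHandler = new Mock<IHttpHandler>();
        var fakeRouteHandler = new Mock<IRouteHandler>();
        fakeRouteHandler
            .Setup(h => h.GetHttpHandler(It.IsAny<RequestContext>()))
            .Returns(fakeHttpHandler.Object);

        var sut = new TenantRouteHandler(() => fakeRouteHandler.Object);
        var requestContext = new RequestContext(); // populate RouteData as ProcessRoute requires

        IHttpHandler result = sut.GetHttpHandler(requestContext);

        Assert.AreSame(fakeHttpHandler.Object, result);
    }
}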

So, the contents part of the route is now split out as we need it for the application.

Getting IIS to pass through these routes to ASP.NET MVC

Running the application at this point won’t return anything. In fact, IIS will throw up an error page saying that it cannot find the file. The expected paths all look like static content, which IIS thinks it is best placed to deal with itself. However, the web.config can be updated to let IIS know that certain paths have to be passed through to ASP.NET MVC for processing.

Once the following is added to the web.config everything should work as expected.

<!-- This is required for the multi-tenancy part so it can serve virtual files that don't exist on the disk.
     This entry lives in <system.webServer>/<handlers>; the handler name is illustrative. -->
<add name="TenantVirtualFiles" path="Tenant/*" verb="GET"
     type="System.Web.Handlers.TransferRequestHandler"
     preCondition="integratedMode" />

And that’s it. Everything should work now. Anything in the Tenant/* path will be processed by ASP.NET MVC and the controller will decide what to serve up to the browser.
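
For completeness, here is a rough sketch of the sort of controller that could sit behind this route (the post doesn’t show it, so the names and the lookup are assumptions):

using System.Web.Mvc;

public class ContentController : Controller
{
    // Serves a "virtual" file for the given tenant; {tenant} and {*contents}
    // arrive via the route values set up earlier.
    public ActionResult Index(string tenant, string contents)
    {
        // Hypothetical lookup: in a real application the asset would come from
        // wherever the admin area stores the tenant's styling information.
        string css = LookUpTenantAsset(tenant, contents);
        if (css == null)
        {
            return HttpNotFound();
        }
        return Content(css, "text/css");
    }

    private string LookUpTenantAsset(string tenant, string contents)
    {
        return null; // placeholder for a database or cache lookup
    }
}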

Software Development

Changing the default Assembly/Namespace on an MVC application


When getting an error message like this:

Compiler Error Message: CS0246: The type or namespace name 'UI' could not be found (are you missing a using directive or an assembly reference?)

when compiling Razor views after changing the namespace names in your project, check the web.config file in each of the view folders for a line that looks like this:

<add namespace="UI" />

And update it to the correct namespace, e.g.

<add namespace="‹CompanyName›.‹ProjectName›.UI" />

Full story

I guess this is something I’ve not done before, so it caught me out a little.

I’ve recently created a new project and, to reduce the length of the file paths, I omitted information that is already implied by the parent directory.

So, what I ended up with is a folder called ‹CompanyName›/‹ProjectName› and in it a solution with a number of projects named UI or Core and so on. I then added a couple of areas and ran the application to check that the initial build would work. All okay… Great!

I now have a source code repository where the parent directory carries the qualifying information, something like this:

    ‹CompanyName›/‹ProjectName›/‹ProjectName›.sln
    ‹CompanyName›/‹ProjectName›/UI/
    ‹CompanyName›/‹ProjectName›/Core/

In previous projects I’d have the source set out flat, with each project folder fully qualified, something like this:

    src/‹CompanyName›.‹ProjectName›.sln
    src/‹CompanyName›.‹ProjectName›.UI/
    src/‹CompanyName›.‹ProjectName›.Core/

While this is nice and flat, it does mean that when you add in things like C# project files you get lots of duplication in the paths that are created, e.g. /src/‹CompanyName›.‹ProjectName›.Core/‹CompanyName›.‹ProjectName›.Core.csproj

Next up I realised that the namespaces were out, as they were defaulting to the name of the C# project file. So I went into the project settings and changed the default namespace and assembly name so that they’d be fully qualified (just in case we ever take on a third-party tool with similarly named assemblies, we need to ensure they don’t clash in the actual code). I also went around the few code files that had been created so far and ensured their namespaces were consistent. (ReSharper is good at this: just press Alt-Enter on the namespace and it will correct it for you.)

I ran the application again and it immediately failed when trying to compile a Razor view with the following error message:

Compiler Error Message: CS0246: The type or namespace name 'UI' could not be found (are you missing a using directive or an assembly reference?)

Looking at the compiler output I could see where it was happening:

Line 25:     using System.Web.Mvc.Html;
Line 26:     using System.Web.Routing;
Line 27:     using UI;
Line 28:     
Line 29: 

However, it took me a little investigation to figure out where that was coming from.

Each Views folder in the application has something like this in it:

<system.web.webPages.razor>
  <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
  <pages pageBaseType="System.Web.Mvc.WebViewPage">
    <namespaces>
      <add namespace="System.Web.Mvc" />
      <add namespace="System.Web.Mvc.Ajax" />
      <add namespace="System.Web.Mvc.Html" />
      <add namespace="System.Web.Routing" />
      <add namespace="UI" />
    </namespaces>
  </pages>
</system.web.webPages.razor>

There was my old “UI” namespace that had been replaced. It was these settings that were generating the using statements in the Razor precompiled source.

The solution was simple: replace that “UI” namespace with the fully qualified version.

        <add namespace="‹CompanyName›.‹ProjectName›.UI" />

And now the application works!

Software Development

The difference between & and && operators

Here is a bit of code that was failing:

if (product != null & !string.IsNullOrEmpty(product.ProductNotes))

If you look closely you can see it uses just the single ampersand operator, not the usual double ampersand operator.

There is a functional difference between the two that may not be obvious under normal circumstances when performing logical operations on business rules. However, in this case it becomes significant.

Both operators produce a Boolean result, and both follow this truth table:

LHS (Left-hand side)   RHS (Right-hand side)   Result
False                  False                   False
True                   False                   False
False                  True                    False
True                   True                    True

But there is a difference in how they arrive at the result. For the result to be true, both the LHS and the RHS must be true; therefore, if the LHS is false the value of the RHS is irrelevant, as the answer will always be false.

The single ampersand operator (&) evaluates both sides of the operator before arriving at its answer.

The double ampersand operator (&&, also known as the conditional-AND operator) evaluates the RHS only if the LHS is true. It short-circuits the evaluation, skipping the RHS when the result is already known. This means you can put the quick check on the left and the lengthy calculation on the right, and the lengthy calculation only happens when it is actually needed.

In the case above the code is checking that product is not null AND that the product notes are not null or empty. The RHS dereferences product, so it can only be evaluated safely when the LHS is true. The single ampersand operator therefore causes a NullReferenceException when product is NULL, simply because it tries to evaluate both sides of the operator regardless.
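
A minimal sketch of the fix, using a hypothetical Product class so the snippet stands alone:

using System;

public class Product
{
    public string ProductNotes { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        Product product = null;

        // With a single ampersand both sides are evaluated, so
        // product.ProductNotes throws a NullReferenceException here:
        //if (product != null & !string.IsNullOrEmpty(product.ProductNotes)) { }

        // With a double ampersand the RHS is skipped when product is null:
        if (product != null && !string.IsNullOrEmpty(product.ProductNotes))
        {
            Console.WriteLine(product.ProductNotes);
        }
    }
}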

For more information, see the C# language reference documentation on the & and && operators.

In the workplace

Let’s hack it now and fix it later

“Let’s hack it now and fix it later”. I’ve heard that before!

Three years later, looking at some old source code: “WTF was I thinking?!”

Then I remember. The deadline was tight. We needed to get something out the door. And we decided “Let’s hack it now and fix it later”.

Now I’m feeling dispirited.

Reality is often more nuanced: it was not necessarily a bad decision, it just missed one qualifier.

It depends on when “later” is. If “later” is set at the time of the hack (“let’s hack this now and we’ll fix it next month”) then that can be acceptable. However, hacking on the never-never has many consequences.

The arguments for “let’s hack this now and we can fix it later” run the same way as taking on a debt. If it is managed, it can be a sensible decision (e.g. a mortgage or a car loan). If it is left to run rampant (piling spending onto a credit card and only ever making the minimum payment) then extreme measures may have to be taken down the line.

Software Development

Setting up Code Coverage in Team City

The project that I’m working on has had a rather spotty history with unit tests until now. There would be periods of people actually writing them, but then things would come unstuck when a random test failed, and eventually the tests would be allowed to rot a bit until something spurred everyone back to life again.

However, unit testing is something we, as a team, are getting better at, so to aid the continual improvement of the code base I turned on code coverage metrics in Team City earlier this week and set it to fail the build if the metrics fell below a certain threshold. That way, if test coverage drops, the build fails; and since we’re also driving towards a continuous delivery process, a failing build will mean we can’t deploy either. This is good. It means we will not be able to deploy poor quality code.

What does code coverage on a build server get you?

Put simply, it shows you what code was covered during the automated tests run on the build server. You can then use this information to direct focus on areas that are not sufficiently tested to drive up code quality.

Turning on code coverage means that Team City will be able to display those metrics to you after the build is complete. You can drill in to the metrics to get a better understanding of what code your tests are exercising, and perhaps more importantly, what code isn’t being exercised.

When Team City builds a project it will run several build steps, at least one of which should run some sort of testing suite; you may have more if you have a mix of test types (e.g. unit tests and integration tests). The code coverage metrics are generated during the testing steps. At the end, you can examine the results just like any other aspect of the build.

This is an example of what the “overview” looks like after a build is run with code coverage turned on:

Nightly Build Overview

I like that it also shows the differences between this and the previous build as well as the overall numbers.

The statistic that I think is probably the most important is the statement coverage percentage, as it is unlikely to change much when you split code out into separate classes or methods. It also gives you a better idea of coverage when you have larger methods with high cyclomatic complexity, or large classes.

Incidentally, splitting large methods and classes into smaller ones should make things easier to test, but I’d rather that didn’t affect the metric we are using for code coverage. The number of statements remains relatively similar even after splitting things out, and since there are so many more statements than methods or classes, even a slight change in their number has a minimal impact on the percentage metric.

If you want to drill into the detail, you can go into “full report” and see the breakdown by assembly. For example:

Overall Coverage Summary

As you can see it gives a breakdown of coverage by assembly. Each of the assembly names is a link allowing you to drill in further.

Overall Coverage Summary for Assembly

Eventually you can get as far down as the individual file and this is where the power really lies. When viewing an individual file you can see individual lines of code that have been exercised in some way (in green) and ones that have not (in red).

Coverage Summary for Type

It is worth noting that just because a line is green it does not mean it was exercised well. Code coverage is just a metrics tool: it says something was executed, not that it was correct. This is vitally important to remember. However, it is still a valuable tool for seeing the areas where there is no code coverage at all.

Setting up Code Coverage in Team City

On the Team City dashboard go to the project you are interested in, click the down arrow next to the build name and select “Edit Settings” from the menu.

Team City Dashboard with Build context menu

On the Build Configuration Settings page, click on “Build Steps” in the side bar.

Build Configuration Settings sidebar

Then, find the step that performs the tests, and click edit.

Towards the bottom of the settings is an entry marked “.NET Coverage tool”. Select “JetBrains dotCover” as this is built into Team City. You can of course choose one of the other coverage options, but you’ll have to configure that yourself.

.NET Coverage settings

You don’t need to enter a path to dotCover unless you are installing a different version to the one Team City has built in. (We’re using Team City 8.1 and it has an older version of dotCover, but it works well enough for what we want.)

In the filters section you need to put in details of the code you want to cover. I’ve just put in a line to include all assemblies, then some lines to exclude specific ones. I found that it sometimes picks up assemblies that were brought in via NuGet and shouldn’t be covered, as that will skew your statistics. You can see the assemblies it picked up automatically by drilling into the code coverage from the overview page for the build, as I showed above. I also excluded assemblies that are purely generated code, like the assembly that contains just an EDMX and the code autogenerated from it.

The basic format of the filters is:

+:assembly=<assembly-name> to include an assembly (wildcards are supported)
-:assembly=<assembly-name> to exclude an assembly (wildcards are supported)
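
For example, a filter set along these lines (the assembly names here are hypothetical) would cover the whole product while excluding test and generated-code assemblies:

+:assembly=MyCompany.MyProduct.*
-:assembly=MyCompany.MyProduct.UnitTests
-:assembly=MyCompany.MyProduct.DataModel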

However, you can filter on more than just the assembly level. More detail is available from Team City’s online help.

Once you’ve set this up hit “Save” at the bottom of the page.

You can now perform a test build to see how it works by pressing “Run…” at the top of the page.

When the run is complete you can view the results by clicking on the build on the main page.

Getting to the build report

From the overview you can drill in and see the assemblies that were picked up, as shown above. If there are any assemblies that shouldn’t be there (e.g. third party libraries) you can then add them to the filter in the build step configuration to make sure they are excluded from the code coverage reports.

Finally, to ensure that you don’t slip backwards and that code coverage stays above a certain threshold, you can add a failure condition to the Build Configuration Settings.

Go back to the build settings page, and then to the “Failure Conditions” section.

Failure Conditions section

Then press “+ Add failure condition” and a dialog will pop-up.

Add Failure Condition popup

Select “Fail build on metric change” from the drop down.

Then, in the “Fail build if” section, use the drop-downs to build up the rule: “its” “percentage of statement coverage” “is compared to” “constant value” “is” “less” than <percentage-value> “default units for this metric”.

For <percentage-value>, take the value from your first reference build, subtract about half a percent, and round down. For example, if the reference build reports 84.7% statement coverage, subtracting half a percent gives 84.2%, which rounds down to a threshold of 84%. There will naturally be minor fluctuations up and down on each build, so you don’t want to be too strict.

Then press “Save”. The page will refresh and you’ll see the failure condition added to the table at the bottom of the page.

Additional Failure Conditions

Now each time this build runs it will collect code coverage, and if coverage drops below the threshold the build will fail.


Happy 10th Anniversary

Today marks the 10th anniversary of this blog. My first post was published on the 22nd of April 2005 and was about passing values in a WinForms application.

A lot of things have changed in the last 10 years, for a start I used to look like this:

Colin Mackay, circa 2005

And now I look more like this:

Colin Mackay, circa 2014

I’m still writing code in C# and with the .NET framework. I’m still writing mostly web applications. But I use a lot more open source, and I’ve created my own open source project now too.

For a period I was a Microsoft MVP (for 4 years), and I was a Code Project MVP as well (for 5 years). I’ve worked for four companies in that time and I’ve lived in three places. I found love, and got married (finally!)

I wonder what the next 10 years will bring.