Setting up Code Coverage in Team City

The project that I’m working on has had a rather spotty history with unit tests until now. There would be periods of people actually writing them, only to come unstuck when a random test failed; eventually the tests would be allowed to rot a bit, until something spurred everyone back into life again.

However, unit testing is something we, as a team, are getting better at, and to aid the continual improvement of the code base I turned on code coverage metrics in Team City earlier this week and set it to fail the build if the metrics fell below a certain threshold. That way, if test coverage drops the build fails, and since we’re also driving towards a continuous delivery process, failing builds mean we can’t deploy either. This is good. It means we will not be able to deploy poor quality code.

What does code coverage on a build server get you?

Put simply, it shows you what code was covered during the automated tests run on the build server. You can then use this information to direct focus on areas that are not sufficiently tested to drive up code quality.

Turning on code coverage means that Team City will be able to display those metrics to you after the build is complete. You can drill in to the metrics to get a better understanding of what code your tests are exercising, and perhaps more importantly, what code isn’t being exercised.

When Team City builds a project it runs several build steps, at least one of which should run some sort of testing suite; you may have more if you have a mix of test types (e.g. unit tests and integration tests). The code coverage metrics are generated during the testing steps. At the end, you can examine the results just like any other aspect of the build.

This is an example of what the “overview” looks like after a build is run with code coverage turned on:

Nightly Build Overview

I like that it also shows the differences between this and the previous build as well as the overall numbers.

The statistic that I think is probably the most important is the statements percentage, as it is less likely to change when you split code out into separate classes or methods. It also gives you a better idea of coverage if you have larger methods with high cyclomatic complexity or large classes.

Incidentally, splitting large methods and classes into smaller ones should make things easier to test, but I’d rather not have that impacting the metric we are using for code coverage. The number of statements remains relatively similar even after splitting things out (for example, splitting a 20-statement method into two 10-statement methods doubles the method count but leaves the statement count at 20), and since there are so many more statements than methods or classes, even a slight change in the count has a minimal impact on the percentage metric.

If you want to drill into the detail, you can go into “full report” and see the breakdown by assembly. For example:

Overall Coverage Summary

As you can see it gives a breakdown of coverage by assembly. Each of the assembly names is a link allowing you to drill in further.

Overall Coverage Summary for Assembly

Eventually you can get as far down as the individual file and this is where the power really lies. When viewing an individual file you can see individual lines of code that have been exercised in some way (in green) and ones that have not (in red).

Coverage Summary for Type

It is worth noting that just because a line is green it does not mean it was exercised well. Code coverage is just a metric; it says that something was done, not that it was correct. This is vitally important to remember. However, it is still a valuable tool for seeing areas where there is no code coverage at all.

Setting up Code Coverage in Team City

On the Team City dashboard go to the project you are interested in, click the down arrow next to the build name and select “Edit Settings” from the menu.

Team City Dashboard with Build context menu

On the Build Configuration Settings page, click on “Build Steps” in the side bar.

Build Configuration Settings sidebar

Then, find the step that performs the tests, and click edit.

Towards the bottom of the settings is an entry marked “.NET Coverage tool”. Select “JetBrains dotCover” as this is built into Team City. You can of course choose one of the other coverage options, but you’ll have to configure that yourself.

.NET Coverage settings

You don’t need to enter a path to dotCover unless you are installing a different version to the one Team City has built in. (We’re using Team City 8.1 and it has an older version of dotCover, but it works well enough for what we want.)

In the filters section you need to put in details of the code you want to cover. I’ve just put in a line to include all assemblies, then some lines to exclude specific ones. I found that it sometimes picks up assemblies that were brought in via NuGet and shouldn’t be covered, as they will skew your statistics. You can see the assemblies it picked up automatically by drilling into the code coverage from the overview page for the build, as I showed above. I also excluded assemblies that are purely generated code, like the assembly that contains just an EDMX and the code autogenerated from it.

The basic format of the filters is:
  • +:assembly=<assembly-name> to include an assembly (wildcards are supported).
  • -:assembly=<assembly-name> to exclude an assembly (wildcards are supported).

However, you can filter on more than just the assembly level. More detail is available from Team City’s online help.
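To mirror what I described above, the filter set might end up looking something like this (the assembly names here are purely illustrative, so substitute your own):

    +:assembly=*
    -:assembly=*.Tests
    -:assembly=Newtonsoft.Json
    -:assembly=MyCompany.DataModel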

Once you’ve set this up hit “Save” at the bottom of the page.

You can now perform a test build to see how it works by pressing “Run…” at the top of the page.

When the run is complete you can view the results by clicking on the build on the main page.

Getting to the build report

From the overview you can drill in and see the assemblies that were picked up, as shown above. If there are any assemblies that shouldn’t be there (e.g. third party libraries) you can then add them to the filter in the build step configuration to make sure they are excluded from the code coverage reports.

Finally, to make sure that you don’t slip backwards and that code coverage stays above a certain threshold, you can add a failure condition to the Build Configuration Settings.

Go back to the build settings page, and then to the “Failure Conditions” section.

Failure Conditions section

Then press “+ Add failure condition” and a dialog will pop-up.

Add Failure Condition popup

Select “Fail build on metric change” from the drop down.

Then, in the “Fail build if” section, build the condition up from the drop downs: “its” “percentage of statement coverage” “is compared to” “constant value” “is” “less” than <percentage-value> “default units for this metric”.

For <percentage-value> take the value from your first reference build, subtract about half a percent, then round down. For example, if the reference build reports 72.8% statement coverage, subtract 0.5 to get 72.3 and round down to 72. There will naturally be minor fluctuations up and down on each build, so you don’t want to be too strict.

Then press “Save”. The page will refresh and you’ll see the failure condition added to the table at the bottom of the page.

Additional Failure Conditions

Now each time this build runs it will collect code coverage, and if the coverage drops below the threshold the build will fail.

Happy 10th Anniversary

Today marks the 10th anniversary of this blog. My first post was on the 22nd April 2005 and was about passing values in a WinForms application.

A lot of things have changed in the last 10 years. For a start, I used to look like this:

Colin Mackay, circa 2005

And now I look more like this:

Colin Mackay, circa 2014

I’m still writing code in C# and with the .NET framework. I’m still writing mostly web applications. But I use a lot more open source, and I’ve created my own open source project now too.

For a period I was a Microsoft MVP (for 4 years), and I was also a Code Project MVP (for 5 years). I’ve worked for 4 companies in that time, and I’ve lived in three places. I found love, and got married (finally!)

I wonder what the next 10 years will bring.

Code Review: Making a drop down list out of an enum

I’ve come across code like this a couple of times and it is rather odd:

IEnumerable<CalendarViewEnum> _calendarViewEnums =
    Enum.GetValues(typeof(CalendarViewEnum)).Cast<CalendarViewEnum>();
List<SelectListItem> selectList = new List<SelectListItem>();
foreach (CalendarViewEnum calendarViewEnum in _calendarViewEnums)
{
  switch (calendarViewEnum)
  {
    case CalendarViewEnum.FittingRoom:
      selectList.Add(new SelectListItem { 
          Text = AdminPreferencesRes.Label_CalendarViewFittingRoom, 
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture), 
          Selected = (calendarViewEnum == CalendarViewEnum.FittingRoom) 
          });
      break;
    case CalendarViewEnum.Staff:
      selectList.Add(new SelectListItem { 
          Text = AdminPreferencesRes.Label_CalendarViewStaff, 
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture), 
          Selected = (calendarViewEnum == CalendarViewEnum.Staff) 
      });
      break;
    case CalendarViewEnum.List:
      selectList.Add(new SelectListItem { 
          Text = AdminPreferencesRes.Label_CalendarViewList, 
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture), 
          Selected = (calendarViewEnum == CalendarViewEnum.List) 
      });
      break;
    default:
      throw new Exception("CalendarViewEnum Enumeration does not exist");
  }
}
return selectList.ToArray();

So, what this does is generate a list of values from an enum, then loop over that list generating a second list of SelectListItems (for a drop down list box on the UI). Each item consists of a friendly name (to display to the user), an integer value (which is returned to the server on selection) and a Boolean value indicating whether that item is selected (which is actually always true, so it is lucky that MVC ignores it given the way the drop down list was rendered, otherwise it would get very confused).

Each iteration of the loop only has one possible path (but the runtime doesn’t know this, so it slavishly runs through the switch statement each time). That means we can do a lot to optimise and simplify this code.

Here it is:

List<SelectListItem> selectList = new List<SelectListItem>();
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewFittingRoom, 
    Value = ((int) CalendarViewEnum.FittingRoom).ToString(CultureInfo.InvariantCulture) 
  });
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewStaff, 
    Value = ((int)CalendarViewEnum.Staff).ToString(CultureInfo.InvariantCulture) 
  });
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewList, 
    Value = ((int)CalendarViewEnum.List).ToString(CultureInfo.InvariantCulture) 
  });
return selectList.ToArray();

There is one other bit of refactoring we can do. We always, without exception, return the same things from this method and it is a known fixed size at compile time. So, let’s just generate the array directly:

return new []
{
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewFittingRoom, 
      Value = ((int) CalendarViewEnum.FittingRoom).ToString(CultureInfo.InvariantCulture) 
    },
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewStaff, 
      Value = ((int)CalendarViewEnum.Staff).ToString(CultureInfo.InvariantCulture) 
    },
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewList, 
      Value = ((int)CalendarViewEnum.List).ToString(CultureInfo.InvariantCulture) 
    }
};

So, in the end a redundant loop has been removed, along with a redundant conversion from list to array. The code is also easier to read and easier to maintain. It is easier to read because the cyclomatic complexity of the method is now one; previously it was 5 (one for the loop, and one for each case clause in the switch statement [assuming I’ve calculated it correctly]). The lower the cyclomatic complexity of a method, the easier it is to understand and maintain, as there are fewer conditions to deal with. It is easier to maintain because instead of adding a whole new case to the switch statement, a single line just needs to be added to the array. The redundant Selected property has also been removed.

There are still ways to improve this, but the main issue (what that loop is actually doing) for anyone maintaining this code has now been resolved.
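As an aside, here is a sketch of one direction you could take it further. This is my illustration rather than part of the code under review, and it assumes the usual usings for System.Collections.Generic, System.Globalization, System.Linq and System.Web.Mvc: keep the value-to-label mapping in one place and project it into the array, so the ToString boilerplate only appears once.

private static readonly Dictionary<CalendarViewEnum, string> CalendarViewLabels =
    new Dictionary<CalendarViewEnum, string>
    {
        { CalendarViewEnum.FittingRoom, AdminPreferencesRes.Label_CalendarViewFittingRoom },
        { CalendarViewEnum.Staff, AdminPreferencesRes.Label_CalendarViewStaff },
        { CalendarViewEnum.List, AdminPreferencesRes.Label_CalendarViewList }
    };

private static SelectListItem[] BuildCalendarViewSelectList()
{
    // Adding a new calendar view now only means adding one entry to the
    // dictionary above; the projection below never changes.
    return CalendarViewLabels
        .Select(pair => new SelectListItem
        {
            Text = pair.Value,
            Value = ((int)pair.Key).ToString(CultureInfo.InvariantCulture)
        })
        .ToArray();
}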

Code Review: FirstOrDefault()

I regularly review the code that I maintain. Recently, I’ve come across code like this fairly often:

someCollection.FirstOrDefault().Id

I cannot rightly comprehend why anyone would do this.

FirstOrDefault() returns the first item in a sequence, or the default value if one doesn’t exist (i.e. the sequence is empty). For a reference type (classes, basically) the default value is null. So using the value returned by FirstOrDefault() without a null check is only valid when the sequence contains a value type (e.g. int, decimal, DateTime, Guid, etc.).

In the example above if someCollection is an empty list/array/collection/whatever then FirstOrDefault() will return null and the call to the Id property will fail.

Then you are left with a NullReferenceException on line xxx, but you don’t know whether it is someCollection or the value returned from FirstOrDefault() that is null, which wastes your time (or the time of someone else who has to debug it).

So, if the sequence must always contain items then use First(); in the exceptional event that it is empty, the call to First() will throw a more appropriate exception that will help you debug faster. If it is perfectly valid for the sequence to be empty then perform a null check and change the behaviour appropriately.
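To illustrate the two options (this is just a sketch with made-up names, not code from the review):

// If the sequence must always contain at least one item, let First() fail fast
// with an InvalidOperationException ("Sequence contains no elements").
var id = someCollection.First().Id;

// If an empty sequence is a legitimate state, make the null check explicit and
// decide what the fallback behaviour should be.
var first = someCollection.FirstOrDefault();
var idOrFallback = first != null ? first.Id : 0; // 0 stands in for whatever default is appropriate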

Previewing Config Transforms

Curiously, I didn’t know about this until recently and it is such a useful thing too.

You can preview the results of your configuration transformation from within Visual Studio.

First of all, right click the transform file and select “Preview Transform”

Then it will show you the differences between the original web.config file and the transformed file.
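If you haven’t written a transform before, a transform file is just an overlay on web.config that uses the XML-Document-Transform namespace to describe the changes. A minimal example (my own illustration, with a made-up app setting) looks like this:

    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <!-- Replaces the value of the matching "Environment" key in web.config -->
        <add key="Environment" value="Production"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
    </configuration>

“Preview Transform” shows you the result of applying exactly this kind of file without having to publish anything.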

Really getting the latest changes with TFS

TFS Source Control doesn’t always get the latest changes. It gets what it thinks are the latest changes (and for the most part it gets it right if you work exclusively in Visual Studio). However, there are times when it gets it wrong and you have to force its hand a little.

So, if you have issues getting latest code then what you need to do is:

  • Right click the branch or folder that is problematic
  • Go to the “advanced” sub-menu and click “Get Specific Version…”
  • Then in the dialog check the two “overwrite…” boxes
  • Finally, press “Get”

At this point VS/TFS will retrieve all the files in the selected branch/folder and overwrite the existing files. It will also retrieve files it didn’t already have locally, even though it thought it had them.
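If you prefer the command line, the equivalent (to the best of my recollection, so check tf get /? for your TFS version; the server path here is made up) is:

    tf get $/MyProject/ProblemBranch /recursive /all /overwrite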

How to recover deleted files, folders and branches in TFS

In Visual Studio go to the menu item Tools–>Options…

Then navigate to the Source Control –> Visual Studio Team Foundation Server section.

In that section is a check box that says “Show deleted items in the Source Control Explorer”

Once you’ve ensured that the checkbox is checked, press “OK”

Then navigate to the Source Control Explorer and you’ll see that deleted files, folders and branches are now displayed with a large red cross next to them.

Right click the item you want to recover and select “Undelete” from the menu.

At this point Visual Studio stops responding to input. It displays a wait spinner briefly, but mostly it just looks like it has hung.

When Visual Studio does come back to life you can go to the Pending Changes to see the newly recovered files, folders, or branches.

If you are happy with this change, you can check it in to TFS as normal.

DunDDD 2014: Introduction to Node.js–From Hello World to Deploying on Azure

Thank you to those who came to my talk. As promised, here are the slides, code, and links given in the talk.

Slides and Code

The slide deck is available as a PDF file.

Links from the talk

Many slides have a link at the bottom, but if you didn’t catch them, here they are again.

Deploying a Node.js with Express application on Azure

By the end of the last post, there was enough of an application to deploy, so let’s deploy it.

Prerequisites

  • An FTP client to get at the files on Azure.
  • A GitHub account (or other source control, but this walk through uses GitHub) to deploy to Azure.
  • And an Azure account – this walk through does not require anything beyond the free account.

Setting up Azure

Log in to Azure, then click the "New +" button at the bottom right and Quick Add a website

When that’s okayed a message will appear at the bottom of the page.

Once the website has been provisioned it can be modified. From the starter page or dashboard, source control deployment can be configured.

Above: the starter page for new websites. Below: the side bar on the website dashboard.

Then select the desired source control. For this example, the deployment is in GitHub

Then choose the repository and branch for the deployment.

Then press the tick icon to confirm.

Once the Azure website and source control are linked, it will start deploying the site…

Once finished, the message will change to indicate that it is deployed.


At this point the website can be viewed. However, there are issues with it – it isn’t serving some files, as can be seen here.

What went wrong?

It is rather obvious that something is wrong. Images are not being rendered, although it looks like other things are, such as the CSS.

Examining the requests with the diagnostic tools in the browser shows that the files are simply not found. But note that there is no content in those 404 responses.

A few blog posts ago, it was noted that if Node.js didn’t know how to handle a route then it would issue a 404 Not Found, but also it would render some content so that the browser had something to display to the user.

Here is a 404 Not Found that gets as far as Node.js:

In the browser window itself is the message that Node.js renders. It is returning a 404 status code, but it has content. Also, note that there is an X-Powered-By: Express header as well as the X-Powered-By: ASP.NET header seen in the previous example; the Express header was missing from those failing responses, which immediately suggests that those 404s are being issued before Node.js gets a chance to deal with the request.

It is for this reason that FTP is required so that some remote administration of the site is possible.

When the site is deployed to Azure, it recognises that it is a Node.js site and looks for an entry point. Normally it looks for a file called server.js, although it can also work out that app.js is the file it is looking for. So, ideally, the entry point into the application should be called server.js when installing into Azure.

Azure creates a web.config for the application which has all the settings needed to tell IIS how to deal with the website. However, it is missing some bits. It does not know how to deal with SVG files, so it won’t serve them, even though the Node.js application understands that static content resides in a certain location.

The missing part of the web.config that is needed is:

    <staticContent>
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>


Accessing the files on Azure with FTP

This example uses FileZilla as the FTP Client.

First, the credentials need to be set for accessing the site via FTP. In the Website dashboard, the side bar contains a link to “Reset your deployment credentials”.

When clicked, a dialog appears that allows the username and password to be set.

Once this is filled in and the tick clicked, the details for connecting via FTP will be in the side bar on the dashboard.

These details, along with the password previously created, can be used to connect via FTP.

Once connected, navigate to the location of the web.config file (it is in /site/wwwroot) and transfer it to the source code directory. The file can now be edited along with the rest of the source code, which means that when it is deployed any relevant updates go out in one action, rather than requiring additional steps with FTP.

The changes to the web.config are to add the following

    <staticContent>
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>

to the <configuration><system.webServer> section of the file.

Finally, add the web.config file to the repository, commit the changes to source control and push it to GitHub. It only takes a few moments for Azure to pick it up and then the portal will display a new item in the deployment history.
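Incidentally, the commit-and-push step from the command line might look something like this (the commit message and branch name are just placeholders):

    git add web.config
    git commit -m "Serve SVG files by adding a mimeMap to web.config"
    git push origin master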

Refreshing the site in a browser window finally reveals the missing graphics.

The project on GitHub

This was marked as a release on GitHub as part of the Xander.Flashcards project.

Node.js with Express – Come to the dark side. We have cookies!

So far, so good. At this point the application displays a list of languages and will display the language that the user picked. However, for the flashcards to work, that selection will have to be remembered. As there is so little state, it is possible to store it in a cookie.

Express handles the creation of cookies without the need for middleware. However, in order to read back the cookies a parser is needed.

Setting and Removing the cookie

To set the cookie with the language that is received from the form on the page:

var language = req.body.language;
var cookieAge = 24*60*60*1000; // 1 day
res.cookie("flashcard-language",language,{maxAge:cookieAge, httpOnly:true});

In the above code, the language is set from the form value, as seen in the previous blog post. The cookieAge is set to be one day, after which it will expire. Finally, the cookie is added to the response object. It is named "flashcard-language".

When the route is requested the HTTP response header will look something like this:

HTTP/1.1 200 OK
X-Powered-By: Express
Set-Cookie: flashcard-language=ca; Max-Age=86400; Path=/; Expires=Mon, 17 Nov 2014 23:22:12 GMT; HttpOnly
Content-Type: text/html; charset=utf-8
Content-Length: 18
Date: Sun, 16 Nov 2014 23:22:12 GMT
Connection: keep-alive

To clear the cookie, call clearCookie and pass in the name of the cookie to clear.

res.clearCookie("flashcard-language");

The HTTP Response will then contain the request for the browser to clear the cookie:

HTTP/1.1 304 Not Modified
X-Powered-By: Express
Set-Cookie: flashcard-language=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT
ETag: W/"SuG3Z498eJmmc04TIciYHQ=="
Date: Sun, 16 Nov 2014 23:29:25 GMT
Connection: keep-alive

Reading in the cookie

In order to read in the cookie some middleware is required. The changes to the app.js file are:

// Requirements section
var cookieParser = require("cookie-parser");
...
// set up section
app.use(cookieParser());

And in the route function that responds to the request, the cookie can be read back like this:

var language = req.cookies["flashcard-language"];

Code for this post

The code for this can be found here on GitHub.