Software Development

The difference between & and && operators

Here is a bit of code that was failing:

if (product!=null & !string.IsNullOrEmpty(product.ProductNotes))

If you look closely you can see it uses just the single ampersand operator, not the usual double ampersand operator.

There is a significant functional difference between the two that, under normal circumstances, may not be obvious when performing logic operations on business rules. In this case, however, it becomes significant.

Both operators produce a Boolean result, and both follow the same truth table:

LHS (Left-hand side) | RHS (Right-hand side) | Result
False                | False                 | False
True                 | False                 | False
False                | True                  | False
True                 | True                  | True

The difference is in how they arrive at that result. For the result to be true, both LHS AND RHS must be true; therefore, if the LHS is false, the value of the RHS is irrelevant as the answer will always be false.

The single ampersand operator (&) evaluates both sides of the operator before arriving at its answer.

The double ampersand operator (&&, also known as the conditional-AND operator) evaluates the RHS only if the LHS is true. It short-circuits the evaluation: if the LHS is false, the RHS is never evaluated. This means you can put the quick check on the left and the lengthy calculation on the right, and the lengthy calculation only runs when it is actually needed.

In the case above the code is checking that product is not null AND that the product notes are not null or empty. The RHS must not be evaluated when the LHS is false, because product.ProductNotes cannot be read from a null reference. A single ampersand operator will therefore cause a failure (a NullReferenceException) when product is null, simply because it tries to evaluate both sides of the operator.
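A minimal sketch of the difference (Product and ProductNotes here are stand-ins mirroring the snippet above):

// Product is a stand-in class with a string ProductNotes property, mirroring the failing code.
Product product = null;

// Throws a NullReferenceException: & always evaluates the right-hand side,
// so product.ProductNotes is read even though product is null.
// bool broken = product != null & !string.IsNullOrEmpty(product.ProductNotes);

// Safe: && short-circuits, so the right-hand side is skipped when product is null.
bool safe = product != null && !string.IsNullOrEmpty(product.ProductNotes);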


Software Development

Setting up Code Coverage in Team City

The project that I’m working on has had a rather spotty history for unit tests until now. There would be periods of people actually writing them, but then things would come unstuck when a random test failed, the tests would be allowed to rot a bit, and eventually something would spur everyone back to life again.

However, unit testing is something we, as a team, are getting better at, and to aid the continual improvement of the code base I turned on code coverage metrics in Team City earlier this week and set it to fail the build if the metrics fell below a certain threshold. That way, if test coverage drops the build fails, and since we are also driving towards a continuous delivery process, failing builds mean we cannot deploy either. This is good: it means we will not be able to deploy poor quality code.

What does code coverage on a build server get you?

Put simply, it shows you what code was covered during the automated tests run on the build server. You can then use this information to direct focus on areas that are not sufficiently tested to drive up code quality.

Turning on code coverage means that Team City will be able to display those metrics to you after the build is complete. You can drill in to the metrics to get a better understanding of what code your tests are exercising, and perhaps more importantly, what code isn’t being exercised.

When Team City builds a project it will run several build steps, at least one of which should be some sort of testing suite and you may have more if you have a mix of test types (e.g. unit tests and integration tests). The code coverage metrics are generated during the testing steps. At the end, you can examine the results just like any other aspect of the build.

This is an example of what the “overview” looks like after a build is run with code coverage turned on:

Nightly Build Overview

I like that it also shows the differences between this and the previous build as well as the overall numbers.

The statistic that I think is the most important is the statement coverage percentage, as it is unlikely to change much when you split code out into separate classes or methods. It also gives a better idea of coverage if you have larger methods with high cyclomatic complexity, or large classes.

Incidentally, splitting large methods and classes into smaller ones should make things easier to test, but I’d rather that didn’t distort the metric we are using for code coverage. The number of statements remains roughly the same even after splitting things out, and since there are so many more statements than methods or classes, even a slight change in the count has a minimal impact on the percentage metric.
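To put some illustrative numbers on that: imagine a class with 5 methods and 200 statements. Splitting one 80-statement method into four smaller ones takes the method count to 8, a big jump in the denominator for method coverage, while the statement count stays at roughly 200, so the statement coverage percentage barely moves.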

If you want to drill into the detail, you can go into “full report” and see the breakdown by assembly. For example:

Overall Coverage Summary

As you can see it gives a breakdown of coverage by assembly. Each of the assembly names is a link allowing you to drill in further.

Overall Coverage Summary for Assembly

Eventually you can get as far down as the individual file and this is where the power really lies. When viewing an individual file you can see individual lines of code that have been exercised in some way (in green) and ones that have not (in red).

Coverage Summary for Type

It is worth noting that just because a line is green it does not mean it was exercised well. Code Coverage is just a metrics tool. It just says something was done, not that it was correct. This is vitally important to remember. However, it is still a valuable tool to be able to see areas where there is no code coverage at all.

Setting up Code Coverage in Team City

On the Team City dashboard go to the project you are interested in, click the down arrow next to the build name and select “Edit Settings” from the menu.

Team City Dashboard with Build context menu

On the Build Configuration Settings page, click on “Build Steps” in the side bar.

Build Configuration Settings sidebar

Then, find the step that performs the tests, and click edit.

Towards the bottom of the settings is an entry marked “.NET Coverage tool”. Select “JetBrains dotCover” as this is built in to Team City. You can of course choose one of the other coverage options, but you’ll have to configure that yourself.

.NET Coverage settings

You don’t need to enter a path to dotCover unless you are installing a different version to the one Team City has built in. (We’re using Team City 8.1 and it has an older version of dotCover, but it works well enough for what we want.)

In the filters section you need to put in details of the code you want to cover. I’ve just put in a line to include all assemblies, then some lines to exclude specific ones. I found that it sometimes picks up assemblies that were brought in via NuGet and shouldn’t be covered, as they will skew your statistics. You can see the assemblies it picked up automatically by drilling into the code coverage from the overview page for the build, as I showed above. I also excluded assemblies that are purely generated code, like the assembly that contains just an EDMX and the code autogenerated from it.

The basic format of the filters is:
+:assembly=<assembly-name> to include an assembly, wild cards are supported.
-:assembly=<assembly-name> to exclude an assembly, wild cards are supported.
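For example, a set of filters along those lines might look like this (the assembly names are purely illustrative):

+:assembly=MyCompany.*
-:assembly=MyCompany.DataModel.Generated
-:assembly=SomeThirdParty.Library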

However, you can filter on more than just the assembly level. More detail is available from Team City’s online help.

Once you’ve set this up hit “Save” at the bottom of the page.

You can now perform a test build to see how it works by pressing “Run…” at the top of the page.

When the run is complete you can view the results by clicking on the build on the main page.

Getting to the build report

From the overview you can drill in and see the assemblies that were picked up, as shown above. If there are any assemblies that shouldn’t be there (e.g. third party libraries) you can then add them to the filter in the build step configuration to make sure they are excluded from the code coverage reports.

Finally, to make sure that you don’t slide backwards and that code coverage stays above a certain threshold, you can add a failure condition to the Build Configuration Settings.

Go back to the build settings page, and then to the “Failure Conditions” section.

Failure Conditions section

Then press “+ Add failure condition” and a dialog will pop up.

Add Failure Condition popup

Select “Fail build on metric change” from the drop down.

Then, in the “Fail build if” section, build the rule up from the drop-downs: select “its” “percentage of statement coverage”, “Is compared to” “constant value”, “is” “less” than <percentage-value> “default units for this metric”.

For <percentage-value>, take the value from your first reference build, subtract about half a percent, then round down (for example, a reference build showing 63.7% statement coverage would give a threshold of 63%). There will naturally be minor fluctuations up and down on each build, so you don’t want to be too strict.

Then press “Save”. The page will refresh and you’ll see the failure condition added to the table at the bottom of the page.

Additional Failure Conditions

Now each time this build is run it will run code coverage and if it drops below the threshold it will fail the build.

Misc, Software Development

Code Review: Making a drop down list out of an enum

I’ve come across code like this a couple of times and it is rather odd:

IEnumerable<CalendarViewEnum> _calendarViewEnums =
    Enum.GetValues(typeof(CalendarViewEnum)).Cast<CalendarViewEnum>();
List<SelectListItem> selectList = new List<SelectListItem>();
foreach (CalendarViewEnum calendarViewEnum in _calendarViewEnums)
{
  switch (calendarViewEnum)
  {
    case CalendarViewEnum.FittingRoom:
      selectList.Add(new SelectListItem {
          Text = AdminPreferencesRes.Label_CalendarViewFittingRoom,
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture),
          Selected = (calendarViewEnum == CalendarViewEnum.FittingRoom)
          });
      break;
    case CalendarViewEnum.Staff:
      selectList.Add(new SelectListItem {
          Text = AdminPreferencesRes.Label_CalendarViewStaff,
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture),
          Selected = (calendarViewEnum == CalendarViewEnum.Staff)
          });
      break;
    case CalendarViewEnum.List:
      selectList.Add(new SelectListItem {
          Text = AdminPreferencesRes.Label_CalendarViewList,
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture),
          Selected = (calendarViewEnum == CalendarViewEnum.List)
          });
      break;
    default:
      throw new Exception("CalendarViewEnum Enumeration does not exist");
  }
}
return selectList.ToArray();

So, what this does is generate a list of values from an enum, then loop around that list generating a second list of SelectListItems (for a drop-down list box on the UI). Each item consists of a friendly name (to display to the user), an integer value (which is returned to the server on selection) and a Boolean value representing whether that item is selected (which is actually always true, so it is lucky that MVC ignores it, given the way the drop-down list was rendered, otherwise it would get very confused).

Each iteration of the loop has only one possible path (but the runtime doesn’t know this, so it slavishly runs through the switch statement each time). That means we can do a lot to optimise and simplify this code.

Here it is:

List<SelectListItem> selectList = new List<SelectListItem>();
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewFittingRoom, 
    Value = ((int) CalendarViewEnum.FittingRoom).ToString(CultureInfo.InvariantCulture) 
  });
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewStaff, 
    Value = ((int)CalendarViewEnum.Staff).ToString(CultureInfo.InvariantCulture) 
  });
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewList, 
    Value = ((int)CalendarViewEnum.List).ToString(CultureInfo.InvariantCulture) 
  });
return selectList.ToArray();

There is one other bit of refactoring we can do. We always, without exception, return the same items from this method, and the size is known and fixed at compile time. So, let’s just generate the array directly:

return new []
{
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewFittingRoom, 
      Value = ((int) CalendarViewEnum.FittingRoom).ToString(CultureInfo.InvariantCulture) 
    },
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewStaff, 
      Value = ((int)CalendarViewEnum.Staff).ToString(CultureInfo.InvariantCulture) 
    },
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewList, 
      Value = ((int)CalendarViewEnum.List).ToString(CultureInfo.InvariantCulture) 
    }
};

So, in the end a redundant loop has been removed and a redundant conversion from list to array has been removed. The code is also easier to read and easier to maintain. It is easier to read because the cyclomatic complexity of the method is now one, whereas previously it was 5 (one for the loop, and one for each case clause in the switch statement [assuming I’ve calculated it correctly]). The lower the cyclomatic complexity of a method, the easier it is to understand and maintain, as there are fewer conditions to deal with. It is easier to maintain because instead of adding a whole new case to the switch statement, a single item just needs to be added to the array. The redundant Selected property has also been removed.

There are still ways to improve this (one possibility is sketched below), but the main issue for anyone maintaining this code, namely what that loop is actually doing, has now been resolved.
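As one possible further step (this is just a sketch, not what the code does today; the dictionary and LINQ projection are my own illustration built from the same names used above), the label lookup could be made data-driven so that supporting a new enum value only needs one new entry:

// Requires using System.Collections.Generic, System.Globalization and System.Linq.
var labels = new Dictionary<CalendarViewEnum, string>
{
    { CalendarViewEnum.FittingRoom, AdminPreferencesRes.Label_CalendarViewFittingRoom },
    { CalendarViewEnum.Staff, AdminPreferencesRes.Label_CalendarViewStaff },
    { CalendarViewEnum.List, AdminPreferencesRes.Label_CalendarViewList }
};

// Project the dictionary straight into the SelectListItem array.
return labels
    .Select(entry => new SelectListItem
    {
        Text = entry.Value,
        Value = ((int)entry.Key).ToString(CultureInfo.InvariantCulture)
    })
    .ToArray();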

Software Development

Previewing Config Transforms

Curiously, I didn’t know about this until recently and it is such a useful thing too.

You can preview the results of your configuration transformation from within Visual Studio.
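If you haven’t come across transforms before, a typical transform file uses the standard XML-Document-Transform attributes; a Web.Release.config might, for example, swap in a production connection string (the names and connection string here are purely illustrative):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replaces the attributes of the matching entry from web.config when this transform is applied -->
    <add name="DefaultConnection"
         connectionString="Data Source=ProdServer;Initial Catalog=MyDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>

Previewing is handy for checking that the Locator matched the entry you expected before the transform is applied for real.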

First of all, right click the transform file and select “Preview Transform”

Then it will show you the differences between the original web.config file and the transformed file.

Software Development

Deploying a Node.js with Express application on Azure

By the end of the last post, there was enough of an application to deploy it, so let’s deploy it.

Prerequisites

  • An FTP client to get at the files on Azure.
  • A GitHub account (or other source control, but this walk through uses GitHub) to deploy to Azure.
  • And an Azure account – this walk through does not require anything beyond the free account.

Setting up Azure

Log in to Azure, then click the "New +" button at the bottom right and Quick Add a website

When that’s okayed a message will appear at the bottom of the page.

Once the website has been provisioned it can be modified. From the starter page or dashboard, source control deployment can be configured

Above: the starter page for new websites. Below: the side bar on the website dashboard.

Then select the desired source control. For this example, the deployment is in GitHub

Then choose the repository and branch for the deployment.

Then press the tick icon to confirm.

Once the Azure website and source control are linked, it will start deploying the site…

Once finished the message will change to indicate that it is deployed.

 

At this point the website can be viewed. However, there are issues with it – it isn’t serving some files, as can be seen here.

What went wrong?

It is rather obvious that something is wrong. Images are not being rendered, although it looks like other things are, such as the CSS.

By examining the diagnostic tools in the browser it looks like the files are simply not found. But note that there is no content in those responses.

A few blog posts ago, it was noted that if Node.js didn’t know how to handle a route then it would issue a 404 Not Found, but also it would render some content so that the browser had something to display to the user.

Here is a 404 Not Found that gets as far as Node.js:

In the browser window itself is the message that Node.js renders. It is returning a 404 status code, but it has content. Also, note that there is an X-Powered-By: Express header in addition to the X-Powered-By: ASP.NET seen in the previous example. The responses for the missing files, by contrast, had no content, which immediately suggests that their 404 is being issued before Node.js gets a chance to deal with the request.

It is for this reason that FTP is required so that some remote administration of the site is possible.

When the site is deployed to Azure, it recognises that it is a Node.js site and looks for an entry point. Normally it looks for a file called server.js; however, it can also work out that app.js is the file it is looking for. So, normally, the entry point into an application destined for Azure should be server.js.

Azure creates a web.config for the application which has all the settings needed to tell IIS how to deal with the website. However, it is missing some bits. It does not know how to deal with SVG files, so it won’t serve them, even though the Node.js application understands that static content resides in a certain location.

The missing part of the web.config that is needed is:

    <staticContent>
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>

 

Accessing the files on Azure with FTP

This example uses FileZilla as the FTP Client.

First, the credentials need to be set for accessing the site via FTP. In the Website dashboard, the side bar contains a link to “Reset your deployment credentials”.

When clicked a dialog appears that allows the username and password to be set.

Once this is filled in and the tick clicked, the details for connecting via FTP will be in the side bar on the dashboard.

These details, along with the password previously created, can be used to connect via FTP.

Once connected, navigate to the location of the web.config file, which is in /site/wwwroot and transfer the file to the source code directory. The file can now be edited along with other source code and that means that when it is deployed any relevant updates can be deployed in one action, rather than requiring additional actions with FTP.

The changes to the web.config are to add the following

    <staticContent>
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>

to the <configuration><system.webServer> section of the file.
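In context (all other settings omitted for brevity), the new section sits inside the existing elements like this:

<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>
  </system.webServer>
</configuration>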

Finally, add the web.config file to the repository, commit the changes to source control and push it to GitHub. It only takes a few moments for Azure to pick it up and then the portal will display a new item in the deployment history.

Refreshing the site in a browser window finally reveals the missing graphics.

The project on GitHub

This was marked as a release on GitHub as part of the Xander.Flashcards project.

Software Development

Node.js with Express – Come to the dark side. We have cookies!

So far, so good. At this point the application displays a list of languages and will display the language that the user picked. However, for the flashcards to work, that selection will have to be remembered. As there is so little state, it is possible to store it in a cookie.

Express handles the creation of cookies without the need for middleware. However, in order to read back the cookies a parser is needed.

Setting and Removing the cookie

To set the cookie with the language that is received from the form on the page:

var language = req.body.language;
var cookieAge = 24*60*60*1000; // 1 day
res.cookie("flashcard-language",language,{maxAge:cookieAge, httpOnly:true});

In the above code, the language is set from the form value, as seen in the previous blog post. The cookieAge is set to be one day, after which it will expire. Finally, the cookie is added to the response object. It is named "flashcard-language".

When the route is requested the HTTP response header will look something like this:

HTTP/1.1 200 OK
X-Powered-By: Express
Set-Cookie: flashcard-language=ca; Max-Age=86400; Path=/; Expires=Mon, 17 Nov 2014 23:22:12 GMT; HttpOnly
Content-Type: text/html; charset=utf-8
Content-Length: 18
Date: Sun, 16 Nov 2014 23:22:12 GMT
Connection: keep-alive

To clear the cookie, call clearCookie and pass in the name of the cookie to clear.

res.clearCookie("flashcard-language");

The HTTP Response will then contain the request for the browser to clear the cookie:

HTTP/1.1 304 Not Modified
X-Powered-By: Express
Set-Cookie: flashcard-language=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT
ETag: W/"SuG3Z498eJmmc04TIciYHQ=="
Date: Sun, 16 Nov 2014 23:29:25 GMT
Connection: keep-alive

Reading in the cookie

In order to read in the cookie some middleware is required. The changes to the app.js file are:

// Requirements section
var cookieParser = require("cookie-parser");
...
// set up section
app.use(cookieParser());

And in the route function that responds to the request, the cookie can be read back like this:

var language = req.cookies["flashcard-language"];

Code for this post

The code for this can be found here on GitHub.

Software Development

Node.js with Express – Getting form data

Now that view engines are wired up and working on this application, the next area to look at is getting data back from the browser.

By default Express doesn’t do anything with form data in the request and a piece of “middleware” needs to be added to get this to work. The reason for this is that there are many ways to process data from the browser (or perhaps it is data from something that is not a browser), so it is left up to the developer how best to process that data.

The view now has a form element and a submit button. It also has an input which will contain the name of the language the user wants. This information is transmitted to the server when the user presses the submit button.

In order to read this information a piece of middleware called body-parser is added.

First, it has to be installed into the application:

npm install body-parser --save

Then the application needs to know about it. The following changes are made to the app.js file:

// In the requirements section
var bodyParser = require("body-parser");
...
// In the set up section
app.use(bodyParser.urlencoded());
...
// Set up the route as an HTTP POST request.
app.post("/set-language", setLanguage);

Since body-parser can handle a few different types of encoding, the application needs to know which to expect. Browsers return form data as application/x-www-form-urlencoded, so that’s the parser that is used by the application.

There are some limitations to body-parser but it is good enough for this application. For example, it does not handle multi-part form data.

The route function can now read the body property that body-parser has populated.

module.exports = function(req, res){
    var language = req.body.language;
    res.send("set-language to "+language);
};

This will now return a simple message to the user with the language code that was set on the previous page.

Viewing the full source code

Rather than paste the source code at the end of the blog post, I’ve released the project on to GitHub. You can either browse the code there, or get a copy of the repository to examine yourself. There may be changes coming, so it is best to look for the release that corresponds with this blog post.