Software Development

Debugging a process that cannot be invoked through Visual Studio.

Sometimes it is rather difficult to debug through Visual Studio directly, even though the project is right there in front of you. In my case I have an assembly that is invoked from a wrapper, which is itself invoked from an MSBuild script. I could potentially get Visual Studio to invoke the whole pipeline, but it seemed less convoluted to attach the debugger to the running process and debug from there.

But what if the process is something quite ephemeral? If the process starts up, does its thing and then shuts down, you might not have time to attach a debugger before it has completed. Or perhaps the thing you are debugging is in the start-up code, and there is no way to attach a debugger in time for that.

However there is something that can be done (if you have access to the source code and can rebuild).

// Requires: using System; using System.Diagnostics; using System.Threading;

// Give yourself up to 30 seconds to attach the debugger.
for (int i = 30; i >= 0; i--)
{
    Console.WriteLine("Waiting for debugger to attach... {0}", i);
    if (Debugger.IsAttached)
        break;
    Thread.Sleep(1000);
}

if (Debugger.IsAttached)
{
    Console.WriteLine("A debugger has attached to the process.");
    Debugger.Break();
}
else
{
    Console.WriteLine("A debugger was not detected... Continuing with process anyway.");
}

You could get away with less code, but I like this because it is reasonably flexible and I get to see what’s happening.

First up, I set up a loop that counts down thirty seconds to give me time to attach the debugger. On each iteration it checks whether the debugger is already attached and exits the loop early if it is (this is important, as otherwise you could attach the debugger and then get frustrated waiting for the process to continue).

After the loop (regardless of whether it simply timed out or a debugger was detected) it does a final check and breaks execution if a debugger is attached.

Each step of the way it outputs to the console what it is doing so you can see when to attach the debugger and you can see when the debugger got attached, or not.

My recommendation, if you want to use this code, is to put it in a utility class somewhere that you can call when needed, then take the call out afterwards.
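
For example, a minimal sketch of such a utility class might look like this (the class and method names are just illustrative, not from the original code):

using System;
using System.Diagnostics;
using System.Threading;

// Hypothetical utility class wrapping the snippet above. Call
// DebugHelper.WaitForDebugger() at the start of the code you want to debug,
// then remove the call when you are finished.
public static class DebugHelper
{
    public static void WaitForDebugger(int seconds = 30)
    {
        for (int i = seconds; i >= 0; i--)
        {
            Console.WriteLine("Waiting for debugger to attach... {0}", i);
            if (Debugger.IsAttached)
                break;
            Thread.Sleep(1000);
        }

        if (Debugger.IsAttached)
        {
            Console.WriteLine("A debugger has attached to the process.");
            Debugger.Break();
        }
    }
}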

Software Development

The difference between & and && operators

Here is a bit of code that was failing:

if (product!=null & !string.IsNullOrEmpty(product.ProductNotes))

If you look closely you can see it uses just the single ampersand operator, not the usual double ampersand operator.

There is a significant functional difference between the two that, under normal circumstances, may not be obvious when performing logic operations on business rules. In this case, however, it becomes significant.

Both operators produce a Boolean result, and both follow the same truth table:

LHS (Left-hand side)   RHS (Right-hand side)   Result
False                  False                   False
True                   False                   False
False                  True                    False
True                   True                    True

But there is a difference in how they arrive at that result. For the result to be true, both the LHS AND the RHS must be true; therefore, if the LHS is false, the value of the RHS is irrelevant, as the answer will always be false.

The single ampersand operator (&) evaluates both sides of the operator before arriving at its answer.

The double ampersand operator (&&, also known as the conditional-AND operator) evaluates the RHS only if the LHS is true. It short-circuits the evaluation, skipping the RHS whenever the LHS has already decided the result. This means you can put the quick check on the left and the lengthy calculation on the right, and the lengthy calculation only runs when it is actually needed.

In the case above the code is checking that product is not null AND that the product notes are not null or empty. With the single ampersand operator both sides are always evaluated, so when product is null the RHS still tries to read product.ProductNotes and the code fails with a NullReferenceException.
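
The fix here is simply to switch to the conditional-AND operator so that the right-hand side is never evaluated when product is null:

// With && the RHS is only evaluated when product is not null,
// so product.ProductNotes can be read safely.
if (product != null && !string.IsNullOrEmpty(product.ProductNotes))
{
    // ... work with product.ProductNotes ...
}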


Misc, Software Development

Code Review: Making a drop down list out of an enum

I’ve come across code like this a couple of times and it is rather odd:

IEnumerable<CalendarViewEnum> _calendarViewEnums =
    Enum.GetValues(typeof(CalendarViewEnum)).Cast<CalendarViewEnum>();
List<SelectListItem> selectList = new List<SelectListItem>();
foreach (CalendarViewEnum calendarViewEnum in _calendarViewEnums)
{
  switch (calendarViewEnum)
  {
    case CalendarViewEnum.FittingRoom:
      selectList.Add(new SelectListItem {
          Text = AdminPreferencesRes.Label_CalendarViewFittingRoom,
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture),
          Selected = (calendarViewEnum == CalendarViewEnum.FittingRoom)
          });
      break;
    case CalendarViewEnum.Staff:
      selectList.Add(new SelectListItem {
          Text = AdminPreferencesRes.Label_CalendarViewStaff,
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture),
          Selected = (calendarViewEnum == CalendarViewEnum.Staff)
          });
      break;
    case CalendarViewEnum.List:
      selectList.Add(new SelectListItem {
          Text = AdminPreferencesRes.Label_CalendarViewList,
          Value = ((int)calendarViewEnum).ToString(CultureInfo.InvariantCulture),
          Selected = (calendarViewEnum == CalendarViewEnum.List)
          });
      break;
    default:
      throw new Exception("CalendarViewEnum Enumeration does not exist");
  }
}
return selectList.ToArray();

So, what this does is generate a list of values from an enum, then loop over that list generating a second list of SelectListItems (for a drop-down list box in the UI). Each item consists of a friendly name (to display to the user), an integer value (which is returned to the server on selection) and a Boolean value indicating whether that item is selected (which, in this code, is always true; it is lucky that MVC ignores this property for the way the drop-down list was rendered, otherwise it would get very confused).

Each iteration has only one possible path (but the runtime doesn’t know this, so it slavishly runs through the switch statement each time). That means we can do a lot to optimise and simplify this code.

Here it is:

List<SelectListItem> selectList = new List<SelectListItem>();
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewFittingRoom, 
    Value = ((int) CalendarViewEnum.FittingRoom).ToString(CultureInfo.InvariantCulture) 
  });
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewStaff, 
    Value = ((int)CalendarViewEnum.Staff).ToString(CultureInfo.InvariantCulture) 
  });
selectList.Add(new SelectListItem { 
    Text = AdminPreferencesRes.Label_CalendarViewList, 
    Value = ((int)CalendarViewEnum.List).ToString(CultureInfo.InvariantCulture) 
  });
return selectList.ToArray();

There is one other bit of refactoring we can do. We always, without exception, return the same items from this method, and the size is known and fixed at compile time. So, let’s just generate the array directly:

return new []
{
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewFittingRoom, 
      Value = ((int) CalendarViewEnum.FittingRoom).ToString(CultureInfo.InvariantCulture) 
    },
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewStaff, 
      Value = ((int)CalendarViewEnum.Staff).ToString(CultureInfo.InvariantCulture) 
    },
  new SelectListItem { 
      Text = AdminPreferencesRes.Label_CalendarViewList, 
      Value = ((int)CalendarViewEnum.List).ToString(CultureInfo.InvariantCulture) 
    }
};

So, in the end a redundant loop has been removed, along with a redundant conversion from list to array. The code is also easier to read and easier to maintain. It is easier to read because the cyclomatic complexity of the method is now one; previously it was 5 (one for the method itself, one for the loop, and one for each case clause in the switch statement [assuming I’ve calculated it correctly]). The lower the cyclomatic complexity of a method, the easier it is to understand and maintain, as there are fewer conditions to deal with. It is easier to maintain because, instead of adding a whole new case to the switch statement, only a single line needs to be added to the array. The redundant Selected property has also been removed.

There are still ways to improve this, but the main issue (what that loop is actually doing) for anyone maintaining this code has now been resolved.
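
As an illustration of one possible further step (this is my sketch, not part of the original review), the enum-to-label mapping could be held in a single dictionary and projected into the array, so adding a new value means adding one dictionary entry:

// Hypothetical further refactoring: one place maps enum values to display labels.
// Assumes System.Linq, System.Globalization and System.Collections.Generic.
var labels = new Dictionary<CalendarViewEnum, string>
{
    { CalendarViewEnum.FittingRoom, AdminPreferencesRes.Label_CalendarViewFittingRoom },
    { CalendarViewEnum.Staff, AdminPreferencesRes.Label_CalendarViewStaff },
    { CalendarViewEnum.List, AdminPreferencesRes.Label_CalendarViewList }
};

return labels
    .Select(pair => new SelectListItem
    {
        Text = pair.Value,
        Value = ((int)pair.Key).ToString(CultureInfo.InvariantCulture)
    })
    .ToArray();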

Misc

Code Review: FirstOrDefault()

I regularly review the code that I maintain. Recently, I’ve come across code like this fairly often:

someCollection.FirstOrDefault().Id

I cannot rightly comprehend why anyone would do this.

FirstOrDefault() returns the first item in a sequence, or the default value if the sequence is empty. For a reference type (classes, basically) the default value is null. So using the value returned by FirstOrDefault() without a null check is only safe when the sequence contains value types (e.g. int, decimal, DateTime, Guid, etc.).

In the example above if someCollection is an empty list/array/collection/whatever then FirstOrDefault() will return null and the call to the Id property will fail.

Then you are left with a NullReferenceException on line xxx, but you don’t know whether it is someCollection or the value returned from FirstOrDefault() that is null, which wastes your time (or the time of whoever else has to debug it).

So, if the sequence must always contain items then use First(); in the exceptional event that it is empty, the call to First() will throw a more appropriate exception that helps you debug faster. If it is perfectly valid for the sequence to be empty then perform a null check and change the behaviour appropriately.
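
A minimal sketch of both options, using the someCollection and Id names from the example above:

// Option 1: the business rules say the sequence is never empty.
// First() throws an InvalidOperationException immediately if it is,
// which points straight at the real problem.
var id = someCollection.First().Id;

// Option 2: an empty sequence is a valid state, so check for it.
var firstItem = someCollection.FirstOrDefault();
if (firstItem != null)
{
    // ... use firstItem.Id as normal ...
}
else
{
    // ... handle the empty sequence appropriately ...
}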

Software Development

A better tracing routine

In .NET 4.5 three new attributes were introduced. They can be used to pass details of the caller into a method, which in turn can be used to create better trace or logging messages. In the example below, the tracing messages are output in a format that Visual Studio understands, so you can automatically jump to the appropriate line of source code if you need to.

The three new attributes are:

CallerMemberName (string)
CallerFilePath (string)
CallerLineNumber (int)

If you decorate the parameters of a method with the above attributes (respecting the types, shown in brackets) then the values will be injected at compile time.

For example:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

public class Tracer
{
    public static void WriteLine(string message,
                            [CallerMemberName] string memberName = "",
                            [CallerFilePath] string sourceFilePath = "",
                            [CallerLineNumber] int sourceLineNumber = 0)
    {
        // Builds "path(line,0): member" followed by ">> message" on the next line;
        // the path(line) prefix is what the Visual Studio output window recognises
        // for double-click navigation.
        string fullMessage = string.Format("{1}({2},0): {0}{4}>> {3}",
            memberName, sourceFilePath, sourceLineNumber,
            message, Environment.NewLine);

        Console.WriteLine("{0}", fullMessage);
        Trace.WriteLine(fullMessage);
    }
}

The above method can then be used in preference to the built-in Trace.WriteLine and it will output the details of where the message came from. The full message is also output in a format where you can double-click the line in the Visual Studio output window and it will take you to that line in the source.

Here is an example of the output:

c:\dev\spike\Caller\Program.cs(13,0): Main
>> I'm Starting up.
c:\dev\spike\Caller\SomeOtherClass.cs(7,0): DoStuff
>> I'm doing stuff.

The lines with the file path and line numbers on them can be double-clicked in the Visual Studio output window and you will be taken directly to the line of code it references.

What happens when you call Tracer.WriteLine is that the compiler injects literal values in place of the parameters.

So, if you write something like this:

Tracer.WriteLine("I'm doing stuff.");

Then the compiler will output this:

Tracer.WriteLine("I'm doing stuff.", "DoStuff", "c:\\dev\\spike\\Caller\\SomeOtherClass.cs", 7);

Software Development

Using Contracts to discover Liskov Substitution Principle Violations in C#

In his book Agile Principles, Patterns, and Practices in C#, Bob Martin talks about using pre- and post-conditions in Eiffel to detect Liskov Substitution Principle violations. At the time he wrote it, C# did not have an equivalent feature, and he suggested using unit test coverage to achieve the same result. However, that does not ensure that checks for LSP violations are applied consistently; it is up to the developer writing the tests to make sure that they are, and that any new derived classes get tested correctly. These days, contracts can be applied to the base class and they will automatically be enforced on any derived class that is created.

Getting started with Contracts

If you’ve already got Visual Studio set up to check contracts in code then you can skip this section. If you don’t then read on.

1. Install the Code Contracts for .NET extension into Visual Studio.

2. Open Visual Studio and load the solution containing the projects you want to apply contracts to.

3. Open the properties for the project and you’ll see a new tab in the project properties window called “Code Contracts”.

4. Make sure that the “Perform Runtime Contract Checking” and “Perform Static Contract Checking” boxes are checked. For the moment the other options can be left at their default values. Only apply these to the debug build; runtime checking will slow the application down, as every call to a method with contract conditions performs the checks.

Visual Studio Project Properties

You are now ready to see code contract issues in Visual Studio.

For more information on code contracts head over to Microsoft Research’s page on Contracts.

Setting up the Contract

Using the Rectangle/Square example from Bob Martin’s book Agile Principles, Patterns, and Practices in C#, here is the code with contracts added:

using System.Diagnostics.Contracts;

public class Rectangle
{
    private int _width;
    private int _height;
    public virtual int Width
    {
        get { return _width; }
        set
        {
            Contract.Requires(value >= 0);
            Contract.Ensures(Width == value);
            Contract.Ensures(Height == Contract.OldValue(Height));
            _width = value;
        }
    }

    public virtual int Height
    {
        get { return _height; }
        set
        {
            Contract.Requires(value >= 0);
            Contract.Ensures(Height == value);
            Contract.Ensures(Width == Contract.OldValue(Width));
            _height = value;
        }
    }

    public int Area { get { return Width * Height; } }
}

public class Square : Rectangle
{
    public override int Width
    {
        get { return base.Width; }
        set
        {
            base.Width = value;
            base.Height = value;
        }
    }

    public override int Height
    {
        get { return base.Height; }
        set
        {
            base.Height = value;
            base.Width = value;
        }
    }
}

The Square class is in violation of the LSP because it changes the behaviour of the Width and Height setters. Any user of Rectangle that doesn’t know about squares would quite reasonably assume that setting the Height leaves the Width alone, and vice versa. So if they were given a Square and attempted to set the width and height to different values, they’d get a result from Area that was inconsistent with their expectation; and if they set Height and then queried Width, they may be somewhat surprised at the result.
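
To make that concrete, here is a hypothetical client method written purely against Rectangle (my illustration, not code quoted from the book):

// A client that only knows about Rectangle. Under the LSP it should work for
// any Rectangle it is given.
public static int AreaForNewSize(Rectangle r)
{
    r.Width = 5;
    r.Height = 4;
    return r.Area; // Expected 20. Without contracts a Square returns 16;
                   // with the contracts shown above enforced, the Width setter itself fails.
}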

But there are now contracts in place on the Rectangle class, and as such they are enforced on Square as well. “Contracts are inherited along the same subtyping relation that the type system enforces.” (from the Code Contracts User Manual). This means that any class that has contracts will have those contracts enforced in any derived class as well.

While some contract violations can be detected at compile time, others will only be triggered through a unit test or detected at runtime. Be aware, if you’ve never used contracts before, that the static contract analysis can appear in the error and output windows a few seconds after the compiler has completed.

Consider this test:

[Test]
public void TestSquare()
{
    var r = new Square();
    r.Width = 10;

    Assert.AreEqual(10, r.Height);
}

When the test runner gets to this test it will fail. Not because the code does the wrong thing from Square’s point of view (it does set Height to be equal to Width), but because the setter violates the contract of the base class.

The contract says that when the base class changes the Width the Height remains the same and vice versa.

So, despite the fact that the unit tests were not explicitly testing for an LSP violation in Square, the contract system stepped in and highlighted the issue, causing the test to fail.


Software Development

Getting Started with AngularJS – Bundling the files

When you are building AngularJS apps you will probably want to store all your various controllers, directives, filters, etc. in different files to keep everything nicely separated and easy to manage. However, adding script tags for all those files to your HTML is not efficient in the least. Not only do you have several round-trips to the server, the browser will be downloading a lot of code that is designed to be readable and maintainable, potentially with lots of additional whitespace and comments.

If the back end of your application is using .NET then you can bundle together CSS and JavaScript files to make them more optimised.

For example, I have a small AngularJS prototype application that uses bundling so that, when it is run with the optimisations turned on, it needs fewer files and the JavaScript and CSS are more compact. The method that creates these bundles looks like this:

// Requires: using System.Web.Optimization; (the ASP.NET Web Optimization Framework)
public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new StyleBundle("~/Content/base-styles.css")
        .Include("~/Content/bootstrap.css")
        .Include("~/Content/angular-ui.css")
        .Include("~/Content/angularCatalogue.css"));

    bundles.Add(new ScriptBundle("~/Scripts/base-frameworks.js")
        .Include("~/Scripts/jquery-{version}.js")
        .Include("~/Scripts/angular.js")
        .Include("~/Scripts/angular-resource.js")
        .Include("~/Scripts/angular-ui.js")
        .Include("~/Scripts/bootstrap.js"));

    bundles.Add(new ScriptBundle("~/Scripts/angular-catalogue.js")
        // Configure the Angular application
        .Include("~/ngapp/app.js")

        // Filters
        .Include("~/ngapp/filters/idFilter.js")
        .Include("~/ngapp/filters/allBut.js")

        // Services
        .Include("~/ngapp/Services/colourService.js")
        .Include("~/ngapp/Services/brandService.js")
        .Include("~/ngapp/Services/productTypeService.js")
        .Include("~/ngapp/Services/productService.js")
        .Include("~/ngapp/Services/sizeService.js")

        // Directives
        .Include("~/ngapp/Directives/userFilter.js")
        .Include("~/ngapp/Directives/productDetailsDirective.js")

        // Controllers
        .Include("~/ngapp/Controllers/productSearchController.js")
        .Include("~/ngapp/Controllers/productDetailController.js")
        .Include("~/ngapp/Controllers/editProductController.js"));
}

This method is called from the Application_Start() method in global.asax.cs.
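
Assuming the RegisterBundles method lives in a class called BundleConfig (a common convention; the original class name isn’t shown here), the wiring looks something like this:

// global.asax.cs - hypothetical wiring, assuming the method above is in BundleConfig.
using System.Web.Optimization;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // ... route, area and filter registration goes here ...
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}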

What this does is set up a number of bundles. In this case three bundles are set up: one for the CSS, and two for JavaScript (one is a set of standard third-party libraries, the other is the AngularJS application itself).

In the layout or view you can then reference these bundles using the path passed in to the constructor. Like this:

<html>
  <head>
    <!-- Other bits go here -->
    @Styles.Render("~/Content/base-styles.css")
  </head>
  <body>
    @RenderBody()
    @Scripts.Render("~/Scripts/base-frameworks.js")
    @Scripts.Render("~/Scripts/angular-catalogue.js")
  </body>
</html>

Remember to use the tilde notation just like in the code that defines the bundles.

When the optimisations are turned off the scripts are rendered as one script block per include. When the optimisations are turned on, a single script block is output per bundle; when the server receives a request for that script it resolves the name to the matching bundle and sends back an amalgamated and minified version of the scripts. This loads much faster on the client, as there are fewer round-trips to the server and much less bandwidth is used.
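
As an aside, the optimisations normally follow the compilation debug setting in web.config, but they can be forced on (for example, to check the bundled output while still running a debug build):

// Forces bundling and minification regardless of the <compilation debug="..."/>
// setting in web.config. Set this alongside the bundle registration.
BundleTable.EnableOptimizations = true;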

Here’s what the two scenarios look like:

Optimisations turned off

This is what the two @Script.Render() blocks at the end of the HTML look like:

<script src="/Scripts/jquery-1.9.1.js"></script>
<script src="/Scripts/angular.js"></script>
<script src="/Scripts/angular-resource.js"></script>
<script src="/Scripts/angular-ui.js"></script>
<script src="/Scripts/bootstrap.js"></script>

<script src="/ngapp/app.js"></script>
<script src="/ngapp/filters/idFilter.js"></script>
<script src="/ngapp/filters/allBut.js"></script>
<script src="/ngapp/Services/colourService.js"></script>
<script src="/ngapp/Services/brandService.js"></script>
<script src="/ngapp/Services/productTypeService.js"></script>
<script src="/ngapp/Services/productService.js"></script>
<script src="/ngapp/Services/sizeService.js"></script>
<script src="/ngapp/Directives/userFilter.js"></script>
<script src="/ngapp/Directives/productDetailsDirective.js"></script>
<script src="/ngapp/Controllers/productSearchController.js"></script>
<script src="/ngapp/Controllers/productDetailController.js"></script>
<script src="/ngapp/Controllers/editProductController.js"></script>

When this page is rendered in the browser, 18 separate requests are made for these scripts. 901kb is transferred to the browser and it took 911ms to complete loading everything (these figures do not include the images, CSS or AJAX calls that are also downloaded as part of the page).

Optimisations turned on

Now, compare the above to this representation of the same section of the page:

<script src="/Scripts/base-frameworks.js?v=oHeDdLNj8HfLhlxvF-JO29sOQaQAldq0rEKGzugpqe01"></script>
<script src="/Scripts/angular-catalogue.js?v=fF1y8sFMbNn8d7ARr-ft_HBP_vPDpBfWVNTMCseNPC81"></script>

When rendered in the browser, just two requests are now made: one for the base-frameworks bundle, and one for the angular-catalogue (our application code) bundle.

Because the bundling process minifies the files, the amount of data transferred is much smaller too, in this case 223kb (a saving of 678kb, or roughly 75%). For established frameworks that ship with a *.min.js version, the bundling framework will honour that convention and use the existing minified file. If it can’t find one it will minify the file for you.

And because there is less data to transfer and fewer network round-trips to wait for, the time to fully load the page has been reduced to 618ms (a saving of 293ms; roughly ⅔ of the time of the previous request).

More information

There is a lot more to bundling than I’ve mentioned here. For a more in-depth view of bundling, read Scott Guthrie’s blog post on Bundling and Minification Support.