Express for node.js walk through – Hello World

While both Visual Studio and WebStorm have templates for node.js Express applications, in this post (and the subsequent few posts) I’m going to walk through setting up an Express application so that you can see how it fits together.

Installing Express

To start with I created a folder with just a package.json file in it, then in a command line I ran npm to install Express:

npm install express --save

Express "Hello, World!"

At this stage an app.js file is created. It bootstraps the Express application: it configures the environment and starts the HTTP listener.

In both the WebStorm and Visual Studio templates there is a line of code that looks like this:

app.set('port', process.env.PORT || 3000);

What this does is work out the port to run the application on. It takes the value of the PORT environment variable, and if that is not set it defaults to port 3000. Although it looks like it is setting the value back, nothing is persisted outside of the application; it really just provides a default so you don’t have to check for a value and supply a fallback every time the port is actually needed.
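To see why nothing leaks back into the environment, here is a tiny sketch, in plain JavaScript with no Express, of roughly what app.set and app.get do: they read and write an in-app settings table, not the process environment. (The settings object here is an assumption for illustration; the real implementation does more.)

```javascript
// A minimal stand-in for Express's settings table.
var settings = {};
function set(name, value) { settings[name] = value; }
function get(name) { return settings[name]; }

// Case 1: no PORT environment variable - fall back to 3000.
var envPort; // undefined, as when PORT is not set
set("port", envPort || 3000);
console.log(get("port")); // 3000

// Case 2: PORT is set (e.g. by the host) - use it.
envPort = "8080";
set("port", envPort || 3000);
console.log(get("port")); // 8080
```

Either way, only the in-app table changes; the real environment variable is untouched.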

The full application, at this point, is this:

// Requirements
var express = require("express");
var http = require("http");

// Set up the application
var app = express();
app.set("port", process.env.PORT || 3000);

// Run up the server
http.createServer(app).listen(app.get("port"), function(){
    console.log("Express server listening on port " + app.get("port"));
});

The HTTP module is the bit that communicates with the outside world, so we require it for our application to work. We create the server and pass it the app which is a function HTTP’s request event can call. The createServer then returns a server object. As we want to start listening immediately we can just call listen on the server, passing it the port we want to listen on, and a callback function that will be called when the server fires the listening event to indicate that it has started listening for requests.

At this point, browsing on localhost with the defined port will just result in Express responding with a terse error message. But at least it proves that it is listening and can respond.

It seems to be a convention (not one enforced by the framework) that handlers for the routes are put in a folder called routes, so I created a file called hello.js, which simply looks like this:

module.exports = function(req, res) {
    res.send("<h1>Hello, World!</h1>");
};

req is the request, and res is the response. Very simply the response can send some content back to the browser. In this case it is hand crafted HTML (and very simple at that).

Back in the app.js file, a require line is added to bring in the hello.js module, and further down the route is added to the application.

var hello = require("./routes/hello.js");
app.get("/", hello);

Now the application will respond to a GET request on the root URL of the application. The function exported by hello.js will be run whenever this URL is requested by the browser.

The whole app.js file now looks like this:

// Requirements
var express = require("express");
var http = require("http");
var hello = require("./routes/hello.js");

// Set up the application
var app = express();
app.set("port", process.env.PORT || 3000);
app.get("/", hello);

// Run up the server
http.createServer(app).listen(app.get("port"), function(){
    console.log("Express server listening on port " + app.get("port"));
});

Node Package Manager – npm

The Node Package Manager is a bit like NuGet for node. It is a way to add functionality to node from frameworks and libraries that are not bundled with node itself.

Like NuGet it has its own website, where you can browse the available packages.

Installing a package

To install a package, just type the following at the command line while in your application directory:

npm install [package-name]

For example:

npm install express 

which will install the Express framework (a web application framework which is very similar to Sinatra on Ruby or Nancy on .NET).

The package manager will create a directory called node_modules and put the package(s) it installs there.

Using the package in your application

As with any module, you need a require statement in your application. However, even though npm has installed the package in your application’s directory, you don’t need a path to the module like you would with your own modules within your application. A simple require("[package-name]") will do.

Modules and Source Control

Like other package managers, you wouldn’t generally expect the packages themselves to be checked into source control. You can put a file called package.json in the root of your application to define how your application is configured. It contains some basic metadata about the application and can also include the packages that the application relies on. Full details can be found in the npm documentation; if that is a bit much and you want a quick overview, there is also a cheat sheet available.

At its most basic, you want to have a package.json with at least the opening and closing braces. This is enough for npm to update the file with the details of the package it is installing. An empty file will just cause an error.

To ensure that the packages get saved to the package.json file remember the --save switch on the command line. e.g.

npm install express --save

So, now your application can be committed to source control without having to add the packages too. You can add the node_modules folder to your .gitignore file (or whatever your source control system requires).

When retrieving the application from source control or getting an update, you can run the install command on its own without any other parameters to tell the node package manager to read the package.json file and install all of the dependencies it finds. Like this:

npm install

Working with Modules in node.js

In my last post on node.js I showed how to set up a development environment to work with node. Now, I’m going to introduce some bits and pieces to get going a bit further than a simple hello world.


First off, unless you want to be working with JavaScript files that are thousands of lines long you’ll want to modularise your code into smaller components that you can bring in to other JavaScript files as required. From the API Documentation: “Node has a simple module loading system. In Node, files and modules are in one-to-one correspondence.”

The “require” function allows you to do this. Remember that require returns an object that represents the module, so you need to assign it to a variable.

Here is an example, showing you how to require the readline module so that you can ask questions at the command line.

var readline = require("readline"),
    rlInterface = readline.createInterface({
        input: process.stdin,
        output: process.stdout
    });

rlInterface.question("What is your name? ", function(answer){
    console.log("Hello, " + answer + ".");
    rlInterface.close();
});

And if you run the application, it will look something like this:

$ node whatsYourName.js
What is your name? Colin
Hello, Colin.


The above is great if you want to include a node module. But if you are building your own, then there is a little more work needing done.

In your module you need to add module.exports to indicate what is being exported from the module. Since it is just a bag of properties you can create whatever you need for the module.

e.g. This is the code for myModule.js

module.exports.someValue = "This is some value being exported from the module";
module.exports.someFunction = function(a, b){
    return a+b;
};

And the consumer of that module (moduleConsumer.js):

var myModule = require("./myModule.js");
console.log("This is someValue: "+myModule.someValue)
console.log("This is the result of someFunction: ", myModule.someFunction(2,3));

Which produces the output:

$ node moduleConsumer.js
This is someValue: This is some value being exported from the module
This is the result of someFunction:  5

One thing you might notice that is different from before: when you require code that is in your own project, you need to specify the path to the JavaScript file, even if it is in the same folder. So for files in the same folder you need to prefix the name of the JavaScript file with “./”.

If you have various files that need to include a specific module, the first call to require will load the module, and each call after that will get a cached version. So, if you put a log statement at the top of your module to say that it is being loaded, that log statement will only run once regardless of the number of times you require the module.

Requiring a folder

You can require a folder rather than a specific file. In this case, node looks in the folder for a file called package.json first, then index.js.

package.json contains a piece of JSON that describes the module, including which file bootstraps the rest. e.g.

{ "name" : "my-library",
  "main" : "./lib/my-library.js" }

The “main” property describes the JavaScript file that is the entry point into the module that bootstraps the rest.

If it finds an index.js file then that file is used as the entry point that bootstraps the rest of the folder.

There were build errors. Would you like to continue and run the last successful build?

Oh, would I! Could I really do that?! Well yes, but I cannot think of any situation where I would want to do this. I’m not saying there isn’t a time I might conceivably, possibly, maybe actually want this, but I can’t think of it right now, and I’ve not come across that situation for as long as I can remember getting this stupid dialog.

Now, obviously you can tick “Do not show this dialog again” and press “No”, and it will remember that as your default choice. However, what if, like me, you were a bit ham-fisted and accidentally pressed “Yes”, and then wondered why your latest changes simply don’t work? How do you fix that? There’s no dialog any more.

The setting is accessible from Visual Studio’s Options dialog, which you can get to via Tools -> Options:

In the Options dialog go to Projects and Solutions -> Build and Run.

There you will see the option "On Run, when build or deployment errors occur:" and a drop down indicating the action. Change the drop down to "Do not launch" in order to ensure that your application does not launch when a build error occurs.

While we are here, do you ever want to run your application when the projects are out of date? I can’t think of any time I’ve wanted to do that. There is an option to stop prompting you to do that and just “always build” in that case.

Once you’ve made your changes, press “OK” to save the changes.

Node.js – Setting up a basic development environment

Node.js is a platform for executing JavaScript outside of the browser. It runs on the Google V8 engine which effectively means it is cross platform as it can run on Windows, Mac OS X, and Linux.

Since JavaScript written for Node.js runs on one platform, issues that plague browser-based JavaScript do not occur on Node.js. There are no “browser compatibility” issues. It simply runs ECMAScript 5.


To install Node.js, download the installer from the node.js homepage. The “Install” button should give you the correct installer for the platform you are on; if not, or if you are downloading for a different platform, you can get a list of each download available, including source code.

One thing I really like about the Windows installer is that it offers to add Node.js to the PATH. Something more developer tools should do.

Writing JavaScript for Node.js

While you can use a basic text editor, or even a more sophisticated text editor, IDEs are also available for writing node.js applications.

IDE – Node.js Tools for Visual Studio

There is a plug-in for Visual Studio, in beta at the time of writing, which allows you to create node.js projects. This is very useful if you are doing cross-platform development interacting with .NET based applications, or if you are simply used to working with Visual Studio. I like this option because I’ve been working with Visual Studio since version 2.1 (which came out in the early-to-mid-90s).

To get the Node.js Tools for Visual Studio extension, download it from the project’s site. If you don’t want the default download (which is for the latest version of Visual Studio), go to the downloads tab and select the appropriate download for your version of Visual Studio.

In the future, I would expect it to be available directly in Visual Studio through the Tools->Extensions and Updates… manager.

Once installed, you can create a new node.js project in the same way you’d create any new project. The Extension will have added some extra options in to the New Project dialog.

You can set break-points, examine variables, set watches and so on. The following is an example of a very simple Hello World application with a break point set and the local variables showing up in the panel at the bottom.

IDE – JetBrains WebStorm

If you don’t already have Visual Studio, it may be an expensive option. JetBrains WebStorm is another IDE that allows you to create Node.js applications, with a similar set of features to the Visual Studio extension above. Personally, I find WebStorm a little clunky, but it works across Windows, Mac OS X, and Linux, and if you don’t already have access to Visual Studio it is a much less expensive option too.

Text Editors

Besides IDEs you can always use text editors. Some are more advanced than others.

The text editor I hear most about these days is Sublime, which can be more expensive than WebStorm depending on the license needed. Sublime is a very nice-looking text editor and it has its own plug-in system so it can be extended, but for roughly the same money you could get a fully featured IDE.


I’ve not really gone all that much into the text editors available that support developing JavaScript for Node.js because I don’t feel they really add much. A fully featured IDE with refactoring support is much more important to me. Maybe installing ReSharper corrupted me.

Setting up Fluent Migrator to run on a build server

This is a step-by-step guide to setting up Fluent Migrator to run on a build server using an MSBUILD project.

Step 1: Setting up the migrations project

Create the Project

The migrations project is just a class library with a couple of NuGet packages added to it.

To make it easier later on to pick up the assembly from the MSBUILD project, we are not going to have debug/release bin directories in the same way other projects do. We will have one bin folder where the built assembly will be placed, regardless of build configuration.

To do that:

  • Open up the properties for the project (either right-click and select “Properties”, or select the project then press Alt+Enter).
  • Then go to the Build tab.
  • Then change the Configurations drop down to “All Configurations”.
  • Finally, change the output path to “bin\”

Add the NuGet Packages

The NuGet packages you want are:

  • FluentMigrator – This is the core of Fluent Migrator and contains everything to create database migrations
  • FluentMigrator Tools – This contains various runners and so on.

The Fluent Migrator Tools package is a bit of an odd one. It installs the tools in the packages folder of your solution but does not add anything to your project.

Add the MSBuild tools to the project

As I mentioned, the Fluent Migrator Tools package won’t add anything to the project; you have to do that manually yourself. I created a post-build step to copy the relevant DLL across from the packages folder to the bin directory of the migrations project.

  • Open the project properties again
  • Go to the “Build Events” tab
  • Add the following to the post-build event command line box:
xcopy "$(SolutionDir)packages\FluentMigrator.Tools.\tools\AnyCPU\40" "$(TargetDir)" /y /f /s /v
    NOTE: You may have to modify the folder depending on the version of the Fluent Migrator Tools you have

Add the MSBUILD project to the project

OK, so that sounds a bit circular. Your migrations project is a C# project (csproj), and the build server will need an MSBUILD script to get going with, which will sit inside your C# project.

Since there is no easy way to add an MSBUILD file to an existing project, I found the easiest way was to add an XML file and then rename it to migrations.proj.

Step 2: Configuring the MSBUILD Script

This is what the MSBUILD script looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Set up the MSBUILD script to use tasks defined in FluentMigrator.MSBuild.dll -->
  <UsingTask TaskName="FluentMigrator.MSBuild.Migrate" AssemblyFile="$(OutputPath)FluentMigrator.MSBuild.dll"/>
  <!-- Set this to the parent project. The C# project this is contained within. -->
  <Import Project="$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj" />

  <!-- Each of these target a different environment. Set the properties to the 
       relevant information for the datbase in that environment. It is one of
       these targets that will be specified on the build server to run.
       Other properties may be passed into the MSBUILD process 
       externally. -->
  <Target Name="MigrateLocal">
    <Message Text="Migrating the Local Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=localhost;database=my-database" />
  </Target>

  <Target Name="MigrateUAT">
    <Message Text="INFO: Migrating the UAT Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=uat-db;database=my-database" />
  </Target>

  <!-- * This is the bit that does all the work. It defaults some of the properties
         in case they were not passed in.
       * Writes some messages to the output to tell the world what it is doing.
       * Finally it performs the migration. It also writes to an output file the script
         it used to perform the migration. -->
  <Target Name="Migrate">
    <CreateProperty Value="False" Condition="'$(TrustedConnection)'==''">
      <Output TaskParameter="Value" PropertyName="TrustedConnection"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(User)'==''">
      <Output TaskParameter="Value" PropertyName="User"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(Password)'==''">
      <Output TaskParameter="Value" PropertyName="Password"/>
    </CreateProperty>
    <CreateProperty Value="False" Condition="'$(DryRun)'==''">
      <Output TaskParameter="Value" PropertyName="DryRun"/>
    </CreateProperty>
    <Message Text="INFO: Project is «$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj»" />
    <Message Text="INFO: Output path is «$(OutputPath)»"/>
    <Message Text="INFO: Target is «$(OutputPath)\$(AssemblyName).dll»"/>
    <Message Text="INFO: Output script copied to «$(OutputPath)\script\generated.sql»"/>
    <Message Text="INFO: Dry Run mode is «$(DryRun)»"/>
    <Message Text="INFO: Server is «$(server)»"/>
    <Message Text="INFO: Database is «$(database)»"/>
    <MakeDir Directories="$(OutputPath)\script"/>
    <!-- NOTE: Check the attribute names below against the version of the
         FluentMigrator MSBuild task you have installed. -->
    <Migrate Database="sqlserver2012"
             Connection="Data Source=$(server);Database=$(database);Trusted_Connection=$(TrustedConnection);User Id=$(User);Password=$(Password);Connection Timeout=30;"
             Target="$(OutputPath)\$(AssemblyName).dll"
             Output="True"
             OutputFilename="$(OutputPath)\script\generated.sql"
             PreviewOnly="$(DryRun)"
             Verbose="True" />
  </Target>
</Project>

Step 3: Configuring the Build Server

In this example, I’m using TeamCity.

You can add a build step after building the solution to run the migration. The settings will look something like this:


The important bits are the “Build file path”, which points to the MSBUILD file we created above; the Targets box, which indicates which target to run; and the “Command Line Parameters”, which pass properties to MSBUILD that were not included in the file itself. For example, the user name and password are not included in the file as that could present a security risk, so the build server passes this information in.

What about running it ad-hoc on your local machine?

Yes, this is also possible.

Because we copied all the tools to the bin directory in the post-build step above, there is a Migrate.exe file in your bin directory. It takes command line parameters that you can use to run the migrations locally without MSBUILD.

  • Open up the project properties again for your migrations C# project
  • Go to the “Debug” tab
  • In “Start Action” select “Start external program” and enter “.\Migrate.exe”
  • In Command line arguments enter something like the following:

    --conn "Server=localhost;Database=my-database;Trusted_Connection=True;Encrypt=True;Connection Timeout=30;" --provider sqlserver2012 --assembly "My.DatabaseMigrations.dll" --task migrate --output --outputFilename src\migrated.sql

aspnet_regiis “Could not load file or assembly ‘SimpleAuthentication.Core’ or one of its dependencies.”

I was recently following Jouni Heikniemi’s blog post on Encrypting connection strings in Windows Azure web applications when I stumbled across a problem.

The issue was that I wasn’t encrypting the connectionStrings section; I was encrypting a custom section (one provided by SimpleAuthentication). In order to encrypt that section, aspnet_regiis needs access to the DLL that defines the config section. If it cannot find the DLL, it responds with an error message:

C:\dev\Xander.HorribleCards\src\Xander.HorribleCards.UI.Web>aspnet_regiis -pef "authenticationProviders" . -prov "Pkcs12Provider" 
Microsoft (R) ASP.NET RegIIS version 4.0.30319.18408 
Administration utility to install and uninstall ASP.NET on the local machine. 
Copyright (C) Microsoft Corporation.  All rights reserved. 
Encrypting configuration section... 
An error occurred creating the configuration section handler for authenticationProviders: Could not load file or assembly 'SimpleAuthentication.Core' or one of 
its dependencies. The system cannot find the file specified. (C:\dev\Xander.HorribleCards\src\Xander.HorribleCards.UI.Web\web.config line 7) 
Could not load file or assembly 'SimpleAuthentication.Core' or one of its dependencies. The system cannot find the file specified. 

And here is the relevant part of the web.config file

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
    </sectionGroup>
    <section name="authenticationProviders" type="SimpleAuthentication.Core.Config.ProviderConfiguration, SimpleAuthentication.Core" />
  </configSections>
  <!-- ... -->
</configuration>

It took searching through a few forum posts before I eventually found the answer, though most were pointing in the right general direction. You either have to load the assembly that defines the config section into the GAC (not possible for me, as it was a third-party assembly that was not strong named) or put it where aspnet_regiis is looking for it.

All the non-GAC solutions that I found were hacky horrible things that put the assembly somewhere in the .NET folder.

My problem was that the place everyone said to put it wasn’t working for me. So I loaded up Process Monitor to see where exactly aspnet_regiis was looking. It turns out that because I was using the 64-bit version of the command prompt, I should have been looking in C:\Windows\Microsoft.NET\Framework64\v4.0.30319.

I put the assembly in that directory and aspnet_regiis worked: the relevant section was encrypted, the application still ran, and I could commit the config to source control without other people learning my secret keys.

Round tripping the encryption/decryption

I also had some issues round tripping the encrypted and decrypted config file while developing. I kept getting the error message:

Decrypting the relevant config settings
Microsoft (R) ASP.NET RegIIS version 4.0.30319.18408
Administration utility to install and uninstall ASP.NET on the local machine.
Copyright (C) Microsoft Corporation.  All rights reserved.
Decrypting configuration section...
Failed to decrypt using provider 'Pkcs12Provider'. Error message from the provider: Keyset does not exist
 (C:\dev\Xander.HorribleCards\src\Xander.HorribleCards.UI.Web\web.config line 65)

Keyset does not exist


It turned out to be a permissions issue on the private key. The post “Keyset does not exist” on Stack Overflow explains how to resolve that.

A better tracing routine

In .NET 4.5, three new attributes were introduced. They can be used to pass the details of the caller into a method, and this can be used to create better trace or logging messages. In the example below, tracing messages are output in a format that Visual Studio can use to jump automatically to the appropriate line of source code if you need it to.

The three new attributes are:

  • CallerMemberNameAttribute (string) – the name of the method or property that made the call
  • CallerFilePathAttribute (string) – the full path of the source file containing the caller
  • CallerLineNumberAttribute (int) – the line number at which the method is called

If you decorate optional parameters of a method with the above attributes (respecting the types, in brackets above) then the values will be injected at compile time.

For example:

using System;
using System.Runtime.CompilerServices;

public class Tracer
{
    public static void WriteLine(string message,
                            [CallerMemberName] string memberName = "",
                            [CallerFilePath] string sourceFilePath = "",
                            [CallerLineNumber] int sourceLineNumber = 0)
    {
        string fullMessage = string.Format("{0}({1},0): {2}{3}>> {4}",
            sourceFilePath, sourceLineNumber, memberName,
            Environment.NewLine, message);

        Console.WriteLine("{0}", fullMessage);
    }
}

The above method can then be used in preference to the built-in Trace.WriteLine, and it will output the details of where the message came from. The full message is also in a format where you can double-click the line in the Visual Studio output window and be taken to that line in the source.

Here is an example of the output:

c:\dev\spike\Caller\Program.cs(13,0): Main
>> I'm Starting up.
c:\dev\spike\Caller\SomeOtherClass.cs(7,0): DoStuff
>> I'm doing stuff.

The lines with the file path and line numbers on them can be double-clicked in the Visual Studio output window and you will be taken directly to the line of code it references.

What happens when you call Tracer.WriteLine is that the compiler injects literal values in place of the parameters.

So, if you write something like this:

Tracer.WriteLine("I'm doing stuff.");

Then the compiler will output this:

Tracer.WriteLine("I'm doing stuff.", "DoStuff", "c:\\dev\\spike\\Caller\\SomeOtherClass.cs", 7);