Node.js with Express – Come to the dark side. We have cookies!

So far, so good. At this point the application displays a list of languages and shows the language that the user picked. However, for the flashcards to work, that selection has to be remembered. As there is so little state, it is possible to store it in a cookie.

Express handles the creation of cookies without the need for middleware. However, in order to read the cookies back, a parser is needed.

Setting and Removing the cookie

To set the cookie with the language that is received from the form on the page:

var language = req.body.language;
var cookieAge = 24*60*60*1000; // 1 day
res.cookie("flashcard-language",language,{maxAge:cookieAge, httpOnly:true});

In the above code, the language is set from the form value, as seen in the previous blog post. The cookieAge is set to one day (in milliseconds), after which the cookie will expire. Finally, the cookie, named "flashcard-language", is added to the response object.

When the route is requested the HTTP response header will look something like this:

HTTP/1.1 200 OK
X-Powered-By: Express
Set-Cookie: flashcard-language=ca; Max-Age=86400; Path=/; Expires=Mon, 17 Nov 2014 23:22:12 GMT; HttpOnly
Content-Type: text/html; charset=utf-8
Content-Length: 18
Date: Sun, 16 Nov 2014 23:22:12 GMT
Connection: keep-alive

To clear the cookie, call clearCookie and pass in the name of the cookie to clear.

res.clearCookie("flashcard-language");

The HTTP Response will then contain the request for the browser to clear the cookie:

HTTP/1.1 304 Not Modified
X-Powered-By: Express
Set-Cookie: flashcard-language=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT
ETag: W/"SuG3Z498eJmmc04TIciYHQ=="
Date: Sun, 16 Nov 2014 23:29:25 GMT
Connection: keep-alive

Reading in the cookie

In order to read in the cookie some middleware is required. The changes to the app.js file are:

// Requirements section
var cookieParser = require("cookie-parser");
...
// set up section
app.use(cookieParser());

And in the route function that responds to the request, the cookie can be read back like this:

var language = req.cookies["flashcard-language"];
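Since the cookie may not exist yet (a first visit, or after expiry), the read is worth guarding with a default. A small sketch; the "en" fallback value is an assumption for illustration:

```javascript
// Read the language cookie back, falling back to a default when it is absent
// (e.g. on a first visit or after the cookie expires).
function getLanguage(req) {
    var cookies = req.cookies || {};
    return cookies["flashcard-language"] || "en";
}

console.log(getLanguage({ cookies: { "flashcard-language": "ca" } })); // "ca"
```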

Code for this post

The code for this can be found here on GitHub.

Node.js with Express – Getting form data

Now that view engines are wired up and working on this application, the next area to look at is getting data back from the browser.

By default Express doesn’t do anything with form data in the request and a piece of “middleware” needs to be added to get this to work. The reason for this is that there are many ways to process data from the browser (or perhaps it is data from something that is not a browser), so it is left up to the developer how best to process that data.

The view now has a form element and a submit button. It also has an input which will contain the name of the language the user wants. This information is transmitted to the server when the user presses the submit button.
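The relevant part of that view might look something like this (a sketch; the /set-language action and the language field name match the route code later in this post):

```html
<form method="post" action="/set-language">
    <input type="text" name="language"/>
    <button type="submit">Set language</button>
</form>
```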

In order to read this information a piece of middleware called body-parser is added.

First, it has to be installed into the application:

npm install body-parser --save

Then the application needs to know about it. The following changes are made to the app.js file:

// In the requirements section
var bodyParser = require("body-parser");
...
// In the set up section
app.use(bodyParser.urlencoded({extended: false}));
...
// Set up the route as an HTTP POST request.
app.post("/set-language", setLanguage);

Since body-parser can handle a few different types of encoding, the application needs to know which to expect. Browsers send form data as application/x-www-form-urlencoded, so that is the parser the application uses.

There are some limitations to body-parser (for example, it does not handle multipart form data) but it is good enough for this application.

The route function can now read the body property that body-parser previously populated.

module.exports = function(req, res){
    var language = req.body.language;
    res.send("set-language to "+language);
};

This will now return a simple message to the user with the language code that was set on the previous page.

Viewing the full source code

Rather than paste the source code at the end of the blog post, I’ve released the project on to GitHub. You can either browse the code there, or get a copy of the repository to examine yourself. There may be changes coming, so it is best to look for the release that corresponds with this blog post.

Express for node.js – View Engines

In the last post, Express Hello World, I talked about getting started with an Express application in Node.js. In it, the output was rendered by writing HTML directly. In most situations that is undesirable and some sort of view or template engine is better suited to rendering the output.

This post follows on directly from that, so any modifications will be based on what the end result was at the end of the last post. The final output from this post will be shown at the bottom.

There are quite a few view engines out there. A popular one for Express is Jade, but personally I find it too far from HTML to be useful. This is especially true if you work with graphic designers, whose tools will output HTML that the developer has to mould into a view for the application to run. The less rework there is, the better, in my opinion.

So, I’m going to use EJS in this post. It is a bit like the ASPX view engine in .NET applications, but it’s actually a derivative of Ruby’s ERB.

To install EJS add the "ejs" package to the application.

npm install ejs --save

Configuring the view engine

Express needs to know which view engine is going to be used; it also needs to know where you are going to store the views. So the following lines are needed in the app.js file from the previous post:

app.set("view engine","ejs");
app.set("views","./views");

Changing the route to render the view in the response

As Express has been told to look in a directory called "views" to get the views, that is where the view is going to go. At this time, all that is going into helloworld.ejs is the HTML that was directly sent from the route in the last post. The helloworld.ejs file now looks like this:

<h1>Hello, World!</h1>

And the route looks like this:

module.exports = function(req, res) {
    res.render("helloworld");
};

This will now give the same output as before.

So far, this has done nothing new for us. The power of views is the ability to pass data to them and for them to render it nicely for the user.

Sending data to the view

This is relatively easy. All that needs to happen is that the response.render call needs an additional parameter which contains the information.

module.exports = function(req, res) {
    res.render("helloworld",{name:"Colin"});
};

In order to render the information the view has to be changed too.

<h1>Hello, <%= name %>!</h1>

Adding a layout

EJS, out of the box, does not support layouts (or master pages, as .NET’s ASPX view engine calls them). However, there is a package that can be added to provide layout support. To install:

npm install express-ejs-layouts --save

And the changes in the app.js file:

// in the requirements section
var ejsLayouts = require("express-ejs-layouts");
...
// in the set up section
app.use(ejsLayouts);

And that’s it. Your application can now use layouts. By default it uses the file layout.ejs in the views folder, but this can be changed if you prefer.
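For example, express-ejs-layouts reads the default layout name from an Express setting, so switching to a different file should be a one-liner like this (a sketch; the file name is made up):

```javascript
// Use views/other-layout.ejs instead of views/layout.ejs as the default layout:
app.set("layout", "other-layout");
```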

To demonstrate this, here is a layout:

<!DOCTYPE HTML>
<html>
    <head>
        <title>Express with EJS</title>
        <link rel="stylesheet" href="/css/bootstrap.css"/>
        <link rel="stylesheet" href="/css/bootstrap-theme.css"/>
    </head>
    <body>
        <div class="container">
            <%- body %>
        </div>
        <script type="application/javascript" src="/js/bootstrap.js"></script>
    </body>
</html>

The <%- body %> indicates where the body of the page should go. This is the view that is named in the response.render function call.

Running the application at this point shows that the view is now being rendered with its layout. But there is a problem: the styles are not showing up correctly.

Accessing static files

By default Express will not serve static files. It needs to be told explicitly where the static files are so that it can serve them. While this may seem a little bit of a pain compared to something like an ASP.NET MVC application, where IIS will serve anything in the folder that it recognises (and it recognises a lot), it is actually somewhat comforting that it won’t serve up files by accident.

To set up a folder to be served add the following line to the app.js file:

app.use(express.static(__dirname + '/public'));

What this does is tell Express that a directory called public contains static content and files should be served directly when requested. The __dirname is the directory where the current file is located. So what this means is that public is a directory located in the same directory as the app.js file.

What is interesting here is that the path to the static resource is a full server path. That means the public files can be anywhere on the server, or anywhere addressable by the server. The path in the URL will be translated to the full server path as needed. So even though I have a directory called public in my Express application, the browser doesn’t see that directory. It only sees what is in it.

So, from the browser’s perspective the bootstrap.css file is located at /css/bootstrap.css but my application sees it as …/public/css/bootstrap.css.

Summary

In this post the Hello World application was advanced with the EJS view engine along with layout support and the ability for static files to be rendered to the browser.

The app.js file now looks like this:

// Requirements
var express = require("express");
var http = require("http");
var ejsLayouts = require("express-ejs-layouts");
var hello = require("./routes/hello.js");

// Set up the application
var app = express();
app.set("port", process.env.PORT || 3000);
app.set("view engine","ejs");
app.use(ejsLayouts);
app.set("views","./views");
app.use(express.static(__dirname+"/public"));
app.get("/", hello);

// Run up the server
http.createServer(app).listen(app.get("port"), function(){
    console.log("Express server listening on port " + app.get("port"));
});

The routes/hello.js file now looks like this:

module.exports = function(req, res) {
    res.render("helloworld",{name:"Colin"});
};

The views/layout.ejs file looks like this:

<!DOCTYPE HTML>
<html>
    <head>
        <title>Express with EJS</title>
        <link rel="stylesheet" href="/css/bootstrap.css"/>
        <link rel="stylesheet" href="/css/bootstrap-theme.css"/>
    </head>
    <body>
        <div class="container">
            <%- body %>
        </div>
        <script type="application/javascript" src="/js/bootstrap.js"></script>
    </body>
</html>

The views/helloworld.ejs file looks like this:

<div class="row">
    <div class="col-md-12">
        <h1>Hello, <%= name %>!</h1>
    </div>
</div>

And twitter bootstrap was installed into the public directory. The project structure now looks like this:

Express for node.js walk through – Hello World

While both Visual Studio and WebStorm have templates for node.js Express applications, in this post (and a few subsequent posts) I’m going to walk through setting up an Express application so that you can see how it fits together.

Installing Express

To start with I created a folder with just a package.json file in it, then in a command line I ran npm to install Express:

npm install express --save

Express "Hello, World!"

At this stage an app.js file is created. It will bootstrap the Express application. It will configure the environment and start the HTTP listener.

In both the WebStorm and Visual Studio templates there is a line of code that looks like this:

app.set('port', process.env.PORT || 3000);

What this is doing is getting the port to run the application on. It takes the value of the PORT environment variable, and if that is not found it defaults to port 3000 within the scope of the process. Although it sets the value back, it isn’t persisted outside of the application, so it really just serves to set up a default value without you having to check for a value and supply a default whenever it is actually needed.

The full application, at this point is this:

// Requirements
var express = require("express");
var http = require("http");

// Set up the application
var app = express();
app.set("port", process.env.PORT || 3000);

// Run up the server
http.createServer(app).listen(app.get("port"), function(){
    console.log("Express server listening on port " + app.get("port"));
});

The HTTP module is the bit that communicates with the outside world, so we require it for our application to work. We create the server and pass it the app, which is a function that HTTP’s request event can call. createServer then returns a server object. As we want to start listening immediately, we can just call listen on the server, passing it the port we want to listen on and a callback function that will be called when the server fires the listening event to indicate that it has started listening for requests.

At this point, browsing to localhost on the defined port will just result in Express responding with a terse error message. But at least it proves that the server is listening and can respond.

It seems to be a convention, though not one that is enforced by the framework, that handlers for the routes are put in a folder called routes. So a file is created called hello.js, and it simply looks like this:

module.exports = function(req, res) {
    res.send("<h1>Hello, World!</h1>");
};

req is the request, and res is the response. Very simply the response can send some content back to the browser. In this case it is hand crafted HTML (and very simple at that).

Back in the app.js file, a require line is added to bring in the hello.js module, and further down the route is added to the application.

var hello = require("./routes/hello.js");
...
app.get("/", hello);

Now the application will respond to a GET request on the root URL of the application. The function exported by hello.js will be run whenever this URL is requested by the browser.

The whole app.js file now looks like this:

// Requirements
var express = require("express");
var http = require("http");
var hello = require("./routes/hello.js");

// Set up the application
var app = express();
app.set("port", process.env.PORT || 3000);
app.get("/", hello);

// Run up the server
http.createServer(app).listen(app.get("port"), function(){
    console.log("Express server listening on port " + app.get("port"));
});

Node Package Manager – npm

The Node Package Manager is a bit like NuGet for node. It is a way to get additional functionality into node for frameworks that were not bundled with node itself.

Like NuGet it has its own website where you can browse the available packages. It is at https://www.npmjs.org/ 

Installing a package

To install a package, just type the following at the command line while in your application directory:

npm install [package-name]

For example:

npm install express 

which will install the Express framework (a web application framework which is very similar to Sinatra on Ruby or Nancy on .NET).

The package manager will create a directory called node_modules and put the package(s) it installs there.

Using the package in your application

As with any module, you need a require statement in your application. However, even though npm has installed the package in your application’s directory, you don’t need a path to the module like you would with your own modules within your application. A simple require("[package-name]") will do.

Modules and Source Control

Like other package managers you wouldn’t generally expect the packages themselves to be checked into source control. You can put a file called package.json in the root of your application to define how your application is configured. It contains some basic metadata about the application and can also include the packages that the application relies on. Full details can be found here: https://www.npmjs.org/doc/files/package.json.html. If that is a bit much and you want a quick overview, there is also a cheat sheet available.

At its most basic, you want to have a package.json with at least the opening and closing braces. This is enough for npm to update the file with the details of the package it is installing. An empty file will just cause an error.
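So the starting file can literally be just a pair of braces; after an install with --save, npm fills in a dependencies section, leaving something like this (the version number is illustrative):

```json
{
  "dependencies": {
    "express": "^4.10.0"
  }
}
```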

To ensure that the packages get saved to the package.json file remember the --save switch on the command line. e.g.

npm install express --save

So, now your application can be committed to source control without having to add the packages in too. You can add the node_modules folder to your .gitignore file (or whatever your source control system requires).

When retrieving the application from source control or getting an update, you can run the install command on its own without any other parameters to tell the node package manager to read the package.json file and install all of the dependencies it finds. Like this:

npm install

Working with Modules in node.js

In my last post on node.js I showed how to set up a development environment to work with node. Now, I’m going to introduce some bits and pieces to get going a bit further than a simple hello world.

Require

First off, unless you want to be working with JavaScript files that are thousands of lines long you’ll want to modularise your code into smaller components that you can bring in to other JavaScript files as required. From the API Documentation: “Node has a simple module loading system. In Node, files and modules are in one-to-one correspondence.”

The “require” function allows you to do this. You must also remember that require will return an object that represents that module, so you need to assign it to a variable.

Here is an example, showing you how to require the readline module so that you can ask questions at the command line.

var readline = require("readline"),
    rlInterface = readline.createInterface({
        input: process.stdin, 
        output:process.stdout});

rlInterface.question("What is your name? ", function(answer){
    console.log("Hello, "+answer+".");
    rlInterface.close();
});

And if you run the application, it will look something like this:

$ node whatsYourName.js
What is your name? Colin
Hello, Colin.

Exports

The above is great if you want to include a node module. But if you are building your own, then there is a little more work needing done.

In your module you need to add module.exports to indicate what is being exported from the module. Since it is just a bag of properties, you can create whatever you need for the module.

e.g. This is the code for myModule.js

module.exports.someValue = "This is some value being exported from the module";
module.exports.someFunction = function(a, b){
    return a+b;
};

And the consumer of that module (moduleConsumer.js):

var myModule = require("./myModule.js");
console.log("This is someValue: "+myModule.someValue)
console.log("This is the result of someFunction: ", myModule.someFunction(2,3));

Which produces the output:

$ node moduleConsumer.js
This is someValue: This is some value being exported from the module
This is the result of someFunction:  5

One thing you might notice that is different from before is that when you require code that is in your own project, you need to specify the path to the JavaScript file, even if it is in the same folder. So for files in the same folder you need to prefix the name of the JavaScript file with “./”.

If you have various files that need to include a specific module, then the first call to require will create the module, then each call after that will get a cached version of the module. So, if you put in a log statement at the top of your module to say that it is being loaded, then that log statement will only be run once regardless of the number of times you require that module.

Requiring a folder

You can require a folder rather than a specific file. In this case, node looks in the folder for a file called package.json, then for index.js.

package.json is a file containing a piece of JSON that describes the module. e.g.

{ "name" : "my-library",
  "main" : "./lib/my-library.js" }

The “main” property describes the JavaScript file that is the entry point into the module that bootstraps the rest.

If it finds an index.js file then that file is used as the entry point that bootstraps the rest of the folder.

There were build errors. Would you like to continue and run the last successful build?

Oh, would I! Could I really do that?! Well, yes, but I cannot think of any situation where I would want to do this. I’m not saying there isn’t a time I might conceivably possibly maybe actually want this, but I can’t think of it right now, and I’ve not come across that situation for as long as I can remember getting this stupid dialog.

Now, obviously you can tick “Do not show this dialog again” and press “No”, and it will remember that as your default choice. However, what if, like me, you were a bit ham-fisted and accidentally pressed “Yes”, and then wondered why your latest changes simply don’t work? How do you fix that? There’s no dialog any more.

The setting is accessible from Visual Studio’s Options dialog, which you can get to by going to Tools -> Options.

In the Options dialog go to Projects and Solutions -> Build and Run.

There you will see the option "On Run, when build or deployment errors occur:" and a drop down indicating the action. Change the drop down to "Do not launch" in order to ensure that your application does not launch when a build error occurs.

While we are here, do you ever want to run your application when the projects are out of date? I can’t think of any time I’ve wanted to do that. There is an option to stop it prompting you to do that and just “always build” in that case.

Once you’ve made your changes, press “OK” to save the changes.

Node.js – Setting up a basic development environment

Node.js is a platform for executing JavaScript outside of the browser. It runs on the Google V8 engine which effectively means it is cross platform as it can run on Windows, Mac OS X, and Linux.

Since JavaScript written for Node.js runs on one platform, the issues that plague browser-based JavaScript do not occur on Node.js. There are no “browser compatibility” issues. It simply runs ECMAScript 5.

Installing

To install Node.js, download the installer from the node.js homepage. The “Install” button should give you the correct installer for the platform you are on; if not, or if you are downloading for a different platform, you can get a list of each download available, including source code.

One thing I really like about the windows installer is that it offers to add Node.js to the PATH. Something that more developer tools should do.

Writing JavaScript for Node.js

While you can use a basic text editor, or even a more sophisticated one, IDEs are also available for writing Node.js applications.

IDE – Node.js Tools for Visual Studio

There is a plug-in for Visual Studio, in beta at the time of writing, which allows you to create node.js projects. This is very useful if you are doing cross-platform development interacting with .NET based applications, or if you are simply used to working with Visual Studio. I like this option because I’ve been working with Visual Studio since version 2.1 (which came out in the early-to-mid-90s).

To get the extension for the Node.js tools for Visual Studio, go to http://nodejstools.codeplex.com/. If you don’t want the default download (which is for the latest version of Visual Studio), go to the downloads tab and select the appropriate download for your version of Visual Studio.

In the future, I would expect it to be available directly in Visual Studio through the Tools->Extensions and Updates… manager.

Once installed, you can create a new node.js project in the same way you’d create any new project. The Extension will have added some extra options in to the New Project dialog.

You can set break-points, examine variables, set watches and so on. The following is an example of a very simple Hello World application with a break point set and the local variables showing up in the panel at the bottom.

IDE – JetBrains WebStorm

If you don’t already have Visual Studio, it may be an expensive option. JetBrains WebStorm is another IDE that allows you to create Node.js applications, with a similar set of features to the Visual Studio extension above. Personally, I find WebStorm a little clunky, but it works across Windows, Mac OS X, and Linux, and it is a much less expensive tool.

Text Editors

Besides IDEs you can always use text editors. Some are more advanced than others.

The text editor I hear most about these days is Sublime, which can be more expensive than WebStorm depending on the licence needed. Sublime is a very nice looking text editor and it has its own plug-in system so it can be extended, but for roughly the same money you could get a fully featured IDE.

Summary

I’ve not really gone all that much into the text editors available that support developing in JavaScript for Node.js because I don’t feel they really add much. A fully featured IDE with refactoring support is much more important to me. Maybe installing ReSharper corrupted me.

Setting up Fluent Migrator to run on a build server

This is a step-by-step guide to setting up Fluent Migrator to run on a build server using an MSBuild project.

Step 1: Setting up the migrations project

Create the Project

The migrations project is just a class library with a couple of NuGet packages added to it.

To make it easier later on to pick up the assembly from the MSBuild project, we are not going to have separate debug/release bin directories in the way other projects do. We will have one bin folder where the built assembly will be placed, regardless of build configuration.

To do that:

  • Open up the properties for the project (either right-click and select “Properties”, or select the project then press Alt+Enter).
  • Then go to the Build tab.
  • Then change the Configurations drop down to “All Configurations”.
  • Finally, change the output path to “bin\”

Add the NuGet Packages

The NuGet packages you want are:

  • FluentMigrator – This is the core of Fluent Migrator and contains everything to create database migrations
  • FluentMigrator Tools – This contains various runners and so on.

The Fluent Migrator Tools is a bit of an odd package. It installs the tools in the packages folder of your solution but does not add anything to your project.

Add the MSBuild tools to the project

As I mentioned, the Fluent Migrator Tools package won’t add anything to the project; you have to do that manually yourself. I created a post-build step to copy the relevant DLL across from the packages folder to the bin directory of the migrations project.

  • Open the project properties again
  • Go to the “Build Events” tab
  • Add the following to the post-build event command line box:
    xcopy "$(SolutionDir)packages\FluentMigrator.Tools.1.3.0.0\tools\AnyCPU\40" "$(TargetDir)" /y /f /s /v
    NOTE: You may have to modify the folder depending on the version of the Fluent Migrator Tools you have

Add the MSBUILD project to the project

OK, so that sounds a bit circular. Your migrations project is a C# project (csproj) and the build server will need an MSBuild script to get going with, which will sit inside your C# project.

Since there is no easy way to add an MSBUILD file to an existing project, I found the easiest way was to add an XML file, then rename it to migrations.proj

Step 2: Configuring the MSBUILD Script

This is what the MSBUILD script looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Set up the MSBUILD script to use tasks defined in FluentMigrator.MSBuild.dll -->
  <UsingTask TaskName="FluentMigrator.MSBuild.Migrate" AssemblyFile="$(OutputPath)FluentMigrator.MSBuild.dll"/>
  
  <!-- Set this to the parent project. The C# project this is contained within. -->
  <Import Project="$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj" />

  <!-- Each of these targets a different environment. Set the properties to the
       relevant information for the database in that environment. It is one of
       these targets that will be specified on the build server to run.
       Other properties may be passed into the MSBUILD process 
       externally. -->
  <Target Name="MigrateLocal">
    <Message Text="Migrating the Local Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=localhost;database=my-database" />
  </Target>

  <Target Name="MigrateUAT">
    <Message Text="INFO: Migrating the UAT Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=uat-db;database=my-database" />
  </Target>

  <!-- * This is the bit that does all the work. It defaults some of the properties
         in case they were not passed in.
       * Writes some messages to the output to tell the world what it is doing.
       * Finally it performs the migration. It also writes to an output file the script 
         it used to perform the migration. -->
  <Target Name="Migrate">
    <CreateProperty Value="False" Condition="'$(TrustedConnection)'==''">
      <Output TaskParameter="Value" PropertyName="TrustedConnection"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(User)'==''">
      <Output TaskParameter="Value" PropertyName="User"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(Password)'==''">
      <Output TaskParameter="Value" PropertyName="Password"/>
    </CreateProperty>
    <CreateProperty Value="False" Condition="'$(DryRun)'==''">
      <Output TaskParameter="Value" PropertyName="DryRun"/>
    </CreateProperty>
    
    <Message Text="INFO: Project is «$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj»" />
    <Message Text="INFO: Output path is «$(OutputPath)»"/>
    <Message Text="INFO: Target is «$(OutputPath)\$(AssemblyName).dll»"/>
    <Message Text="INFO: Output script copied to «$(OutputPath)\script\generated.sql»"/>    
    <Message Text="INFO: Dry Run mode is «$(DryRun)»"/>
    <Message Text="INFO: Server is «$(server)»"/>
    <Message Text="INFO: Database is «$(database)»"/>
    
    <MakeDir Directories="$(OutputPath)\script"/>
    <Migrate
      Database="sqlserver2012"
      Connection="Data Source=$(server);Database=$(database);Trusted_Connection=$(TrustedConnection);User Id=$(User);Password=$(Password);Connection Timeout=30;"
      Target="$(OutputPath)\$(AssemblyName).dll"
      Output="True"
      Verbose="True"
      Nested="True"
      Task="migrate:up"
      PreviewOnly="$(DryRun)"
      OutputFilename="$(OutputPath)\script\generated.sql"
      />
  </Target>
  
</Project>

Step 3: Configuring the Build Server

In this example, I’m using TeamCity.

You can add a build step after building the solution to run the migration. The settings will look something like this:


The important bits are the “Build file path”, which points to the MSBuild file we created above; the Targets box, which indicates which target to run; and the “Command Line Parameters”, which pass properties to MSBuild that were not included in the file itself. For example, the user name and password are not included in the file, as that could present a security risk, so the build server passes this information in.

What about running it ad-hoc on your local machine?

Yes, this is also possible.

Because, above, we copied all the tools to the bin directory in the post-build step, there is a Migrate.exe file in your bin directory. That takes some command line parameters that you can use to run the migrations locally without MSBUILD.

  • Open up the project properties again for your migrations C# project
  • Go to the “Debug” tab
  • In “Start Action” select “Start external program” and enter “.\Migrate.exe”
  • In Command line arguments enter something like the following:

    --conn "Server=localhost;Database=my-database;Trusted_Connection=True;Encrypt=True;Connection Timeout=30;" --provider sqlserver2012 --assembly "My.DatabaseMigrations.dll" --task migrate --output --outputFilename src\migrated.sql