Software Development

Setting up Fluent Migrator to run on a build server

This is a step-by-step guide to setting up Fluent Migrator to run on a build server using an MSBuild project file.

Step 1: Setting up the migrations project

Create the Project

The migrations project is just a class library with a couple of NuGet packages added to it.

To make it easier later on to pick up the assembly from the MSBUILD project, we are not going to have debug/release bin directories in the same way other projects do. We will have one bin folder where the built assembly will be placed, regardless of the build configuration.

To do that:

  • Open up the properties for the project (either right-click and select “Properties”, or select the project then press Alt+Enter).
  • Then go to the Build tab.
  • Then change the Configuration drop-down to “All Configurations”.
  • Finally, change the output path to “bin\”.

Add the NuGet Packages

The NuGet packages you want are:

  • FluentMigrator – This is the core of Fluent Migrator and contains everything needed to create database migrations.
  • FluentMigrator.Tools – This contains the various runners (console, MSBuild and so on) used to execute migrations.

The Fluent Migrator Tools is a bit of an odd package. It installs the tools in the packages folder of your solution but does not add anything to your project.
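
To give an idea of what ends up in this project, here is a minimal sketch of a migration class (the table and column names are just examples; nothing later in this guide depends on them):

using FluentMigrator;

namespace My.DatabaseMigrations
{
    [Migration(1)]
    public class CreateCountryTable : Migration
    {
        public override void Up()
        {
            // Creates an example table with an identity primary key and a name column.
            Create.Table("Country")
                .WithColumn("CountryId").AsInt32().NotNullable().PrimaryKey().Identity()
                .WithColumn("Name").AsString(100).NotNullable();
        }

        public override void Down()
        {
            // Reverses the Up migration.
            Delete.Table("Country");
        }
    }
}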

Add the MSBuild tools to the project

As I mentioned, the Fluent Migrator Tools package won’t add anything to the project; you have to do that manually yourself. I created a post-build step to copy the relevant DLLs across from the packages folder to the bin directory of the migrations project.

  • Open the project properties again
  • Go to the “Build Events” tab
  • Add the following to the post-build event command line box:
    xcopy "$(SolutionDir)packages\FluentMigrator.Tools.1.3.0.0\tools\AnyCPU\40" "$(TargetDir)" /y /f /s/v
    NOTE: You may have to modify the folder depending on the version of the Fluent Migrator Tools you have

Add the MSBUILD project to the project

OK, so that sounds a bit circular. Your migrations project is a C# project (csproj), and the build server will need an MSBuild script to get going with, which will sit inside your C# project.

Since there is no easy way to add an MSBuild file to an existing project, I found the easiest approach was to add an XML file and then rename it to migrations.proj.

Step 2: Configuring the MSBUILD Script

This is what the MSBUILD script looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Set up the MSBUILD script to use tasks defined in FluentMigrator.MSBuild.dll -->
  <UsingTask TaskName="FluentMigrator.MSBuild.Migrate" AssemblyFile="$(OutputPath)FluentMigrator.MSBuild.dll"/>
  
  <!-- Set this to the parent project. The C# project this is contained within. -->
  <Import Project="$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj" />

  <!-- Each of these targets a different environment. Set the properties to the 
       relevant information for the database in that environment. It is one of
       these targets that will be specified on the build server to run.
       Other properties may be passed into the MSBUILD process 
       externally. -->
  <Target Name="MigrateLocal">
    <Message Text="Migrating the Local Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=localhost;database=my-database" />
  </Target>

  <Target Name="MigrateUAT">
    <Message Text="INFO: Migrating the UAT Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=uat-db;database=my-database" />
  </Target>

  <!-- * This is the bit that does all the work. It defaults some of the properties
         in case they were not passed in.
       * Writes some messages to the output to tell the world what it is doing.
       * Finally it performs the migration. It also writes to an output file the script 
         it used to perform the migration. -->
  <Target Name="Migrate">
    <CreateProperty Value="False" Condition="'$(TrustedConnection)'==''">
      <Output TaskParameter="Value" PropertyName="TrustedConnection"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(User)'==''">
      <Output TaskParameter="Value" PropertyName="User"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(Password)'==''">
      <Output TaskParameter="Value" PropertyName="Password"/>
    </CreateProperty>
    <CreateProperty Value="False" Condition="'$(DryRun)'==''">
      <Output TaskParameter="Value" PropertyName="DryRun"/>
    </CreateProperty>
    
    <Message Text="INFO: Project is «$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj»" />
    <Message Text="INFO: Output path is «$(OutputPath)»"/>
    <Message Text="INFO: Target is «$(OutputPath)\$(AssemblyName).dll»"/>
    <Message Text="INFO: Output script copied to «$(OutputPath)\script\generated.sql»"/>    
    <Message Text="INFO: Dry Run mode is «$(DryRun)»"/>
    <Message Text="INFO: Server is «$(server)»"/>
    <Message Text="INFO: Database is «$(database)»"/>
    
    <MakeDir Directories="$(OutputPath)\script"/>
    <Migrate
			Database="sqlserver2012"
			Connection="Data Source=$(server);Database=$(database);Trusted_Connection=$(TrustedConnection);User Id=$(User);Password=$(Password);Connection Timeout=30;"
			Target="$(OutputPath)\$(AssemblyName).dll"
      Output="True"
      Verbose="True"
      Nested="True"
      Task="migrate:up"
      PreviewOnly="$(DryRun)"
      OutputFilename="$(OutputPath)\script\generated.sql"
      />
  </Target>
  
</Project>

Step 3 : Configuring the Build Server

In this example, I’m using TeamCity.

You can add a build step after building the solution to run the migration. The settings will look something like this:

[Screenshot: TeamCity build step settings]

The important bits are the “Build file path”, which points to the MSBUILD file we created above; the “Targets” box, which indicates which target to run; and the “Command Line Parameters” box, which passes properties to MSBuild that were not included in the file itself. For example, the user name and password are not included in the file as that could present a security risk, so the build server passes this information in.
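
As a rough illustration (the values below are hypothetical and will differ for your setup), the fields might be filled in along these lines:

  • Build file path: src\My.DatabaseMigrations\migrations.proj
  • Targets: MigrateUAT
  • Command Line Parameters: /p:User=deploy_user /p:Password=%env.DB_PASSWORD% /p:TrustedConnection=False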

What about running it ad-hoc on your local machine?

Yes, this is also possible.

Because, above, we copied all the tools to the bin directory in the post-build step, there is a Migrate.exe file in your bin directory. That takes some command line parameters that you can use to run the migrations locally without MSBUILD.

  • Open up the project properties again for your migrations C# project
  • Go to the “Debug” tab
  • In “Start Action” select “Start external program” and enter “.\Migrate.exe”
  • In Command line arguments enter something like the following:

    --conn "Server=localhost;Database=my-database;Trusted_Connection=True;Encrypt=True;Connection Timeout=30;" --provider sqlserver2012 --assembly "My.DatabaseMigrations.dll" --task migrate --output --outputFilename src\migrated.sql
Software Development

Table names – Singular or Plural

Earlier this morning I tweeted asking for a quick idea of whether to go with singular table names or plural table names, i.e. the difference between having a table called “Country” vs. “Countries”.

Here are the very close results:

  • Singular: 8
  • Plural: 6
  • Either: 1

Why Singular:

  • That’s how they start off life on my ER diagram
  • You don’t need to use a plural name to know a table will hold many of an item.
  • A table consists of rows of items that are singular

Why Plural:

  • It is the only choice unless you are only ever storing one row in each table.
  • because they contain multiple items
  • It contains Users
  • I think of it as a collection rather than a type/class
  • SELECT TOP 1 * FROM Customers

Why either:

  • Either works, so long as it is consistent across the entire db/app
Software Development

How to get a value from a text box into the database

This question was asked on a forum and I took some time to construct a reasonably lengthy reply so I’m copying it to my blog for a bit of permanence.

I suspect that many of my regular readers will be dismayed at the lack of proper architecture (e.g. layering) but we all had to start somewhere and I suspect that your first programs were not properly layered or structured either. I know mine certainly weren’t. My aim with this was to show how a simple goal can be achieved, what basic things are needed and how to fit it all together by doing the simplest thing that would work (a mantra from the agile world).

Here’s the post (slightly edited to put back some of the original context):

Okay – Let’s step back and show the whole thing from text box to database. NOTE: this example shows everything in one place. This is generally considered poor practice, but as you are only just starting I’ll not burden you with the principles of layered architecture, the single responsibility principle and so on. (Just be aware they exist and one day you’ll have to learn about them.)

So, let’s say you have a form with two text boxes, one for a name and one for an age. Let’s call them NameTB and AgeTB. The user can enter information in these text boxes and press a button that adds them to the database.

First, we need to get the data from the text boxes into a form we can use.

string name = NameTB.Text;
int age = Convert.ToInt32(AgeTB.Text);

Since text boxes only deal with strings, we have to convert the string into a number (an Int32 – a 32-bit integer) for the age value.

Now, we need to set up the database connection and command in order to insert this. I’ll assume you already have a connection string to your database; I’ve called it myConnectionString for this example.

SqlConnection myConnection = new SqlConnection(myConnectionString);
SqlCommand myCommand = new SqlCommand("INSERT Person(NameField, AgeField) "+
    "VALUES (@nameParam, @ageParam)", myConnection);

I’ve now set up the SQL Command with an insert statement. I’ve assumed there is a table called Person and it has two columns called NameField and AgeField. I’m also going to insert the values via parameters, which I’ve indicated with @nameParam and @ageParam. SQL Server requires that all parameter names start with an @ symbol. Other databases may vary.

myCommand.Parameters.AddWithValue("@nameParam", name);
myCommand.Parameters.AddWithValue("@ageParam", age);

We’ve now added the parameters into the SQL command and we’ve given each parameter the value we got earlier. Finally:

myConnection.Open();
myCommand.ExecuteNonQuery();
myConnection.Close();

This opens the connection, runs the INSERT statement and closes the connection again. We’re using ExecuteNonQuery because we don’t expect any results back from SQL Server. If we were expecting data back (e.g. because we were using a SELECT statement) we could use ExecuteReader (for many rows/columns) or ExecuteScalar (for a single value).
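
For completeness, here is a minimal sketch of reading that data back with ExecuteReader, assuming the same Person table as above:

SqlCommand readCommand = new SqlCommand(
    "SELECT NameField, AgeField FROM Person", myConnection);

myConnection.Open();
SqlDataReader reader = readCommand.ExecuteReader();
while (reader.Read())
{
    string personName = reader.GetString(0);   // NameField
    int personAge = reader.GetInt32(1);        // AgeField
    // do something with personName and personAge here
}
reader.Close();
myConnection.Close();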

This is a very basic example. I’ve not shown any error checking or exception handling. There is also the implicit assumption that all this code resides inside a button click event, which is considered poor practice for anything but a small or throwaway application.

Software Development, Tip of the Day

Tip of the Day #4 (Connection Strings in Config files)

From .NET 2.0 onwards, a new and improved configuration management system has been in place. You can now add a <connectionStrings> element to the config file, use it to hold the connection strings to the database, and then retrieve them in a consistent way in your application. It supports multiple connection strings too, if you need to access multiple databases.

The config file looks like this:

<configuration>
  ...
  <connectionStrings>
    <add name="Default" connectionString="Server=(local);database=MyDatabase"/>
  </connectionStrings>
  ...
</configuration>

From the .NET application you can access the connection string like this:

connectionString =
    ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

Just remember to add a reference to System.Configuration in your project and ensure that the code file is using the System.Configuration namespace as well.
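
Putting the two together, a minimal sketch of retrieving the connection string and opening a connection with it might look like this (the using block is just one way of making sure the connection gets closed):

using System.Configuration;
using System.Data.SqlClient;

// ...

string connectionString =
    ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // run your commands against the database here
}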

Software Development

Creating Many-to-Many joins

A topic that comes up from time to time in forums is how to join two tables together when there is a many-to-many relationship. Typical examples include teachers-to-students or articles-to-tags (to create a “tag cloud”).

If you have made any relationships in a database you will see that it is very easy to create a one-to-many join. For example, a web forum may have many posts, but a post only belongs in one forum.

To create a many-to-many relationship you need to create an intermediate table. This is a table that each side of the many-to-many can have a one-to-many relationship with. The following diagram shows the many-to-many relationship between a blog post and the tags on it (this is not a full model, just enough to show the relationship).

[Diagram: many-to-many relationship between BlogPost and Tag]

The BlogPost has its primary key (BlogPostId), as does the Tag (TagId). Normally you would see that key being used as the foreign key in the other table, however that wouldn’t work with a many-to-many relationship.

In order to join the two tables together an “intermediate table” needs to be created that just contains the two primary keys from either side of the relationship. Those two foreign keys make up a compound* primary key in the intermediate table.

It is normal to name the intermediate table after the tables that form the relationship. In this case it would be “BlogPostTag”, after BlogPost and Tag.
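
In SQL Server the intermediate table might be created something like this (assuming both primary keys are int columns):

CREATE TABLE BlogPostTag
(
    BlogPostId int NOT NULL,
    TagId      int NOT NULL,
    CONSTRAINT PK_BlogPostTag PRIMARY KEY (BlogPostId, TagId),
    CONSTRAINT FK_BlogPostTag_BlogPost FOREIGN KEY (BlogPostId)
        REFERENCES BlogPost (BlogPostId),
    CONSTRAINT FK_BlogPostTag_Tag FOREIGN KEY (TagId)
        REFERENCES Tag (TagId)
);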

In order to join a row in the BlogPost table to a row in the Tag table you only need to insert a new row in the BlogPostTag table with the keys from either side. e.g.

INSERT BlogPostTag VALUES(@blogPostId, @tagId);

In order to remove the relationship between a blog post and a tag you only need to delete the row from the intermediate table. e.g.

DELETE BlogPostTag WHERE BlogPostId = @blogPostId AND TagId = @tagId;
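
And to read the relationship back out, for example to list all the tags on a particular blog post, you join through the intermediate table (this assumes the Tag table has a Name column, which isn’t shown above):

SELECT t.Name
FROM Tag t
INNER JOIN BlogPostTag bpt ON bpt.TagId = t.TagId
WHERE bpt.BlogPostId = @blogPostId;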

 

 

* a “compound key” is one which is made up of more than one column.

 

Misc

DALs and the DAAB

I’ve been pondering something that was raised in passing earlier this week and that is the relationship between a DAL (Data Abstraction Layer) and the DAAB (Data Access Application Block).

It was briefly mentioned in a conversation that I had that the DAAB provides the functionality of a DAL because the developer doesn’t need to worry about the back end database that is being used. I suppose to some extent that is true. However, I don’t believe that it fully functions as a DAL.

To my mind a DAL abstracts the access of data away from the rest of the application. Most people seem to restrict this view to data being held in a database, but databases are not the only repository of data. Data can be held in plain text files, CSV files, XML files and many other formats. It doesn’t even need to arrive by file; it could be data from a service or some other mechanism.

If you treat sources of data as being more than a database then the DAAB is not a suitable substitute for building a DAL.

Also the DAAB has some limitations in that it cannot translate the SQL itself. For example the flavour of SQL in Oracle has differences to the flavour of SQL in SQL Server. This means that any SQL code will have to be translated. One possible solution is to ensure that everything is done through stored procedures. Then all that the DAAB needs is the stored procedure’s name and the values for the parameters.
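
To illustrate that stored-procedure approach, a rough sketch using the Enterprise Library DAAB might look something like the following (the GetPolicy procedure and its parameter are just examples for this sketch):

using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

// ...

int policyId = 42;   // example value

// The DAAB works out the actual database and provider from configuration.
Database db = DatabaseFactory.CreateDatabase();
DbCommand command = db.GetStoredProcCommand("GetPolicy");
db.AddInParameter(command, "@id", DbType.Int32, policyId);
DataSet policyData = db.ExecuteDataSet(command);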

But what of passing stored procedure names and parameters to the DAAB? Wouldn’t they need to be known in the business layer? Surely the business layer should know absolutely nothing about the database? Absolutely, the business layer should not be concerning itself at all with the database. It shouldn’t know about stored procedure names, parameters or anything else, even if the DAAB takes care of figuring out the details under the hood from information picked up from the config file. The Business Layer should just need to know there is a DAL and a method on the DAL can be called and some results come back. How the DAL does anything is of no concern to the business layer.

A quick and dirty test, in my opinion, is to look out for any classes from the System.Data namespace in the business or presentation layer to determine if the DAL is well designed.

In my mind the DAAB is just a tool that can be used to make the creation and maintenance of a DAL easier when dealing with databases. It makes it easy to change the location of databases as the development process moves along from the developers’ machines, to continuous integration, test, pre-live and finally live (or whatever your process calls for).

The argument that the DAAB makes it easy to swap out one type of database for another isn’t something that is actually going to happen all that often. From what I’ve seen, companies generally run two systems concurrently until the old one is discontinued. Rarely do they ever actually update the old system to use the new database, and when they do it is usually via some form of orchestration system so the old system doesn’t need to be changed in any great way. If it isn’t broken, don’t fix it.

Misc

What is a DAL (Part 4)

As has been mentioned previously, one of the purposes of the DAL is to shield the application from the database. That said, what happens if a DAL throws an exception? How should the application respond to it? In fact, how can it respond to an exception that it should not know about?

If something goes wrong with a query in the database an exception is thrown. If the database is SQL Server then a SqlException is thrown. If it isn’t SQL Server then some other exception is thrown. Or the DAL may be performing actions against a completely different type of data source such as an XML file, plain text file, web service or something completely different. If the application knows nothing about the back end database (data source) then how does it know which exception to respond to?

In short, it doesn’t. It can’t know which of the myriad possible exceptions might be thrown, or how to respond to them. The calling code could just catch(Exception ex), but that is poor practice; it is always best to catch the most specific exception possible.

The answer is to create a specific exception that the DAL can throw: a DalException that calling code can catch. The original exception is still available as the InnerException on the DalException.

using System;
using System.Runtime.Serialization;

namespace Dal
{
    public class DalException : Exception
    {
        public DalException()
            : base()
        {
        }

        public DalException(string message)
            : base(message)
        {
        }

        public DalException(string message, Exception innerException)
            : base(message, innerException)
        {
        }

        public DalException(SerializationInfo info, StreamingContext context)
            : base(info, context)
        {
        }
    }
}

The DAL will catch the original exception, create a new one based on the original and throw the new exception.

public DataSet GetPolicy(int policyId)
{
    try
    {
        SqlDataAdapter da =
            (SqlDataAdapter)this.BuildBasicQuery("GetPolicy");
        da.SelectCommand.Parameters.AddWithValue("@id", policyId);
        DataSet result = new DataSet();
        da.Fill(result);
        return result;
    }
    catch (SqlException sqlEx)
    {
        DalException dalEx = BuildDalEx(sqlEx);
        throw dalEx;
    }
}

The code for wrapping the original exception in the DalException can be refactored into a separate method so it can be used repeatedly. Depending on what it needs to do, it may be possible to put it as a protected method on one of the abstract base classes.

private DalException BuildDalEx(SqlException sqlEx)
{
    string message = string.Format("An exception occurred in the Policy DAL\r\n" +
        "Message: {0}", sqlEx.Message);
    DalException result = new DalException(message, sqlEx);
    return result;
}
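
From the calling code’s point of view, the benefit is that it only ever needs to know about DalException, whatever data source sits behind the DAL. A rough sketch (policyDal, policyId and LogError are just stand-ins for however the DAL and logging are exposed in your application):

try
{
    DataSet policy = policyDal.GetPolicy(policyId);
    // work with the returned data here
}
catch (DalException dalEx)
{
    // Only the DAL's own exception type is handled here; the original
    // SqlException (or whatever the data source threw) is still available
    // via dalEx.InnerException if it is needed for logging.
    LogError(dalEx);
}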

Previous articles in the series:

 

 
