Setting up Fluent Migrator to run on a build server

This is a step-by-step guide to setting up Fluent Migrator to run on a build server using an MSBuild script.

Step 1: Setting up the migrations project

Create the Project

The migrations project is just a class library with a couple of NuGet packages added to it.

To make it easier later on to pick up the assembly from the MSBuild project, we are not going to have debug/release bin directories in the way other projects do. We will have one bin folder where the built assembly is placed, regardless of build configuration.

To do that:

  • Open up the properties for the project (either right-click and select “Properties”, or select the project then press Alt+Enter).
  • Then go to the Build tab.
  • Then change the Configuration drop-down to “All Configurations”.
  • Finally, change the “Output path” to “bin\”.

Add the NuGet Packages

The NuGet packages you want are:

  • FluentMigrator – This is the core of Fluent Migrator and contains everything needed to write database migrations (a minimal example is sketched just after this list).
  • FluentMigrator.Tools – This contains the various runners, including the MSBuild runner used later in this guide.
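
To give a flavour of what goes in the migrations project, here is a minimal migration sketch (the table, column and class names are made up for illustration):

using FluentMigrator;

namespace My.DatabaseMigrations
{
    [Migration(1)]
    public class CreateCountryTable : Migration
    {
        public override void Up()
        {
            Create.Table("Country")
                .WithColumn("CountryId").AsInt32().PrimaryKey().Identity()
                .WithColumn("Name").AsString(100).NotNullable();
        }

        public override void Down()
        {
            Delete.Table("Country");
        }
    }
}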

FluentMigrator.Tools is a bit of an odd package. It installs the tools into the packages folder of your solution but does not add anything to your project.

Add the MSBuild tools to the project

As mentioned above, the FluentMigrator.Tools package won’t add anything to the project; you have to do that yourself. I created a post-build step to copy the relevant DLLs across from the packages folder to the bin directory of the migrations project.

  • Open the project properties again
  • Go to the “Build Events” tab
  • Add the following to the post-build event command line box:
    xcopy "$(SolutionDir)packages\FluentMigrator.Tools.1.3.0.0\tools\AnyCPU\40" "$(TargetDir)" /y /f /s/v
    NOTE: You may have to modify the folder depending on the version of the Fluent Migrator Tools you have

Add the MSBuild project file to the project

OK, so that sounds a bit circular. Your migrations project is a C# project (.csproj), and the build server will need an MSBuild script to get going with, which will sit inside your C# project.

Since there is no easy way to add an MSBuild file to an existing project, I found the easiest approach was to add an XML file and then rename it to migrations.proj.

Step 2: Configuring the MSBuild Script

This is what the MSBUILD script looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Set up the MSBUILD script to use tasks defined in FluentMigrator.MSBuild.dll -->
  <UsingTask TaskName="FluentMigrator.MSBuild.Migrate" AssemblyFile="$(OutputPath)FluentMigrator.MSBuild.dll"/>
  
  <!-- Set this to the parent project. The C# project this is contained within. -->
  <Import Project="$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj" />

  <!-- Each of these target a different environment. Set the properties to the 
       relevant information for the datbase in that environment. It is one of
       these targets that will be specified on the build server to run.
       Other properties may be passed into the MSBUILD process 
       externally. -->
  <Target Name="MigrateLocal">
    <Message Text="Migrating the Local Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=localhost;database=my-database" />
  </Target>

  <Target Name="MigrateUAT">
    <Message Text="INFO: Migrating the UAT Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=uat-db;database=my-database" />
  </Target>

  <!-- * This is the bit that does all the work. It defaults some of the properties
         in case they were not passed in.
       * Writes some messages to the output to tell the world what it is doing.
       * Finally it performs the migration. It also writes to an output file the script 
         it used to perform the migration. -->
  <Target Name="Migrate">
    <CreateProperty Value="False" Condition="'$(TrustedConnection)'==''">
      <Output TaskParameter="Value" PropertyName="TrustedConnection"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(User)'==''">
      <Output TaskParameter="Value" PropertyName="User"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(Password)'==''">
      <Output TaskParameter="Value" PropertyName="Password"/>
    </CreateProperty>
    <CreateProperty Value="False" Condition="'$(DryRun)'==''">
      <Output TaskParameter="Value" PropertyName="DryRun"/>
    </CreateProperty>
    
    <Message Text="INFO: Project is «$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj»" />
    <Message Text="INFO: Output path is «$(OutputPath)»"/>
    <Message Text="INFO: Target is «$(OutputPath)\$(AssemblyName).dll»"/>
    <Message Text="INFO: Output script copied to «$(OutputPath)\script\generated.sql»"/>    
    <Message Text="INFO: Dry Run mode is «$(DryRun)»"/>
    <Message Text="INFO: Server is «$(server)»"/>
    <Message Text="INFO: Database is «$(database)»"/>
    
    <MakeDir Directories="$(OutputPath)\script"/>
    <Migrate
			Database="sqlserver2012"
			Connection="Data Source=$(server);Database=$(database);Trusted_Connection=$(TrustedConnection);User Id=$(User);Password=$(Password);Connection Timeout=30;"
			Target="$(OutputPath)\$(AssemblyName).dll"
      Output="True"
      Verbose="True"
      Nested="True"
      Task="migrate:up"
      PreviewOnly="$(DryRun)"
      OutputFilename="$(OutputPath)\script\generated.sql"
      />
  </Target>
  
</Project>

Step 3: Configuring the Build Server

In this example, I’m using TeamCity.

You can add a build step after building the solution to run the migration. The settings will look something like this:

[Screenshot: the TeamCity MSBuild build step settings]

The important bits are the “Build file path”, which points to the MSBuild file we created above; the “Targets” box, which indicates which target to run; and the “Command line parameters”, which pass properties to MSBuild that were not included in the file itself. For example, the user name and password are not included in the file as that could present a security risk, so the build server passes this information in.
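
The command line parameters end up being ordinary MSBuild property overrides, something along these lines (the property names come from the script above; the values shown are made up):

/p:User=deployUser /p:Password=aSecretPassword /p:TrustedConnection=False

The same script can also be run by hand for testing, outside the build server, with something like:

msbuild migrations.proj /t:MigrateLocal /p:TrustedConnection=True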

What about running it ad-hoc on your local machine?

Yes, this is also possible.

Because, above, we copied all the tools to the bin directory in the post-build step, there is a Migrate.exe file in your bin directory. That takes some command line parameters that you can use to run the migrations locally without MSBUILD.

  • Open up the project properties again for your migrations C# project
  • Go to the “Debug” tab
  • In “Start Action” select “Start external program” and enter “.\Migrate.exe”
  • In Command line arguments enter something like the following:

    --conn "Server=localhost;Database=my-database;Trusted_Connection=True;Encrypt=True;Connection Timeout=30;" --provider sqlserver2012 --assembly "My.DatabaseMigrations.dll" --task migrate --output --outputFilename src\migrated.sql

Table names – Singular or Plural

Earlier this morning I tweeted asking for a quick idea of whether to go with singular table names or plural table names, i.e. the difference between having a table called “Country” vs. “Countries”.

Here are the very close results:

  • Singular: 8
  • Plural: 6
  • Either: 1

Why Singular:

  • That’s how they start off life on my ER diagram
  • You don’t need to use a plural name to know a table will hold many of an item.
  • A table consists of rows of items that are singular

Why Plural:

  • It is the only choice unless you are only ever storing one row in each table.
  • because they contain multiple items
  • It contains Users
  • I think of it as a collection rather than a type/class
  • SELECT TOP 1 * FROM Customers

Why either:

  • Either works, so long as it is consistent across the entire db/app

How to get a value from a text box into the database

This question was asked on a forum and I took some time to construct a reasonably lengthy reply so I’m copying it to my blog for a bit of permanence.

I suspect that many of my regular readers will be dismayed at the lack of proper architecture (e.g. layering) but we all had to start somewhere and I suspect that your first programs were not properly layered or structured either. I know mine certainly weren’t. My aim with this was to show how a simple goal can be achieved, what basic things are needed and how to fit it all together by doing the simplest thing that would work (a mantra from the agile world).

Here’s the post (slightly edited to put back some of the original context):

Okay – Let’s step back and show the whole thing from text box to database. NOTE: that this example shows everything in one place. This is generally considered poor practice, but as you are only just starting I’ll not burden you with the principles of layered architecture and the single responsibility principle and so on. (Just be aware they exist and one day you’ll have to learn about them)

So, let’s say you have a form with two text boxes, one for a name and one for an age. Let’s call them NameTB and AgeTB. The user can enter information in these text boxes and press a button that adds it to the database.

First, we need to get the data from the text boxes into a form we can use.

string name = NameTB.Text;
int age = Convert.ToInt32(AgeTB.Text);

Since text boxes only deal with strings, we have to convert the string into a number (an Int32 – a 32-bit integer) for the age value.
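
As an aside, Convert.ToInt32 will throw a FormatException if the user types something that isn’t a whole number. A slightly more defensive sketch (assuming a Windows Forms application) uses int.TryParse instead:

int age;
if (!int.TryParse(AgeTB.Text, out age))
{
    // Tell the user what went wrong and stop before touching the database.
    MessageBox.Show("Please enter a whole number for the age.");
    return;
}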

Now, we need to set up the database connection and command in order to insert this. I’ll assume you already have a connection string for your database; I’ve called it myConnectionString for this example.

SqlConnection myConnection = new SqlConnection(myConnectionString);
SqlCommand myCommand = new SqlCommand("INSERT Person(NameField, AgeField) "+
    "VALUES (@nameParam, @ageParam)", myConnection);

I’ve now set up the SQL Command with an insert statement. I’ve assumed there is a table called Person and it has two columns called NameField and AgeField. I’m also going to insert the values via parameters, which I’ve indicated with @nameParam and @ageParam. SQL Server requires that all parameter names start with an @ symbol. Other databases may vary.

myCommand.Parameters.AddWithValue("@nameParam", name);
myCommand.Parameters.AddWithValue("@ageParam", age);

We’ve now added the parameters into the SQL command and we’ve given each parameter the value we got earlier. Finally:

myConnection.Open();
myCommand.ExecuteNonQuery();
myConnection.Close();

This opens the connection, runs the INSERT statement and closes the connection again. We’re using ExecuteNonQuery because we don’t expect any results back from SQL Server. If we were expecting data back (e.g. because we were using a SELECT statement) we could use ExecuteReader (for many rows/columns) or ExecuteScalar (for a single value).
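
For example, a minimal sketch of ExecuteScalar using the same Person table (it returns the first column of the first row of whatever the query produces):

SqlCommand countCommand = new SqlCommand("SELECT COUNT(*) FROM Person", myConnection);

myConnection.Open();
int personCount = (int)countCommand.ExecuteScalar();
myConnection.Close();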

This is a very basic example. I’ve not shown any error checking or exception handling. There is also the implicit assumption that all this code resides inside a button click event, which is considered poor practice for anything but a small or throw away application.

Tip of the Day #4 (Connection Strings in Config files)

From .NET 2.0 onwards a new and improved configuration management system has been in place. You can now add a <connectionStrings> element to the config file, use it to hold the connection strings for your databases, and then retrieve them in a consistent way in your application. It supports multiple connection strings if you need to access multiple databases.

The config file looks like this:

<configuration>
...
   <connectionStrings>
    <add name="Default" connectionString="Server=(local);database=MyDatabase"/>
  </connectionStrings>
...
<configuration>

From the .NET application you can access the connection string like this:

connectionString =
    ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

Just remember to add a reference to System.Configuration in your project and ensure that the code file is using the System.Configuration namespace as well.
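
Putting the two together, a minimal sketch (assuming the “Default” entry above and a SQL Server database) looks like this:

string connectionString =
    ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ... run commands against the database here ...
}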

Creating Many-to-Many joins

A topic that comes up from time to time in forums is how to join two tables together when there is a many-to-many relationship. Typical examples include teachers-to-students or articles-to-tags (to create a “tag cloud”)

If you have made any relationships in a database you will see that it is very easy to create a one-to-many join. For example, a web forum may have many posts, but a post only belongs in one forum.

To create a many-to-many relationship you need to create an intermediate table. This is a table that each side of the many-to-many can have a one-to-many relationship with. The following diagram shows the many-to-many relationship between a blog post and the tags on it (this is not a full model, just enough to show the relationship).

[Diagram: the many-to-many relationship between BlogPost and Tag]

The BlogPost has its primary key (BlogPostId), as does the Tag (TagId). Normally you would see that key being used as the foreign key in the other table; however, that wouldn’t work with a many-to-many relationship.

In order to join the two tables together an “intermediate table” needs to be created that just contains the two primary keys from either side of the relationship. Those two foreign keys make up a compound* primary key in the intermediate table.

It is normal to name the intermediate table after each of the tables that form the relationship. In this case it would be “BlogPostTag”, after BlogPost and Tag.

In order to join a row in the BlogPost table to a row in the Tag table you only need to insert a new row in the BlogPostTag table with the keys from either side. e.g.

INSERT BlogPostTag VALUES(@blogPostId, @tagId);

In order to remove the relationship between a blog post and a tag you only need to delete the row from the intermediate table. e.g.

DELETE BlogPostTag WHERE BlogPostId = @blogPostId AND TagId = @tagId;
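
Reading the data back works like any other join; you just go through the intermediate table. A minimal sketch in the same ADO.NET style as the earlier post (the Name column on Tag, and the myConnection and blogPostId variables, are assumed for illustration):

SqlCommand tagQuery = new SqlCommand(
    "SELECT t.Name " +
    "FROM Tag t " +
    "INNER JOIN BlogPostTag bpt ON bpt.TagId = t.TagId " +
    "WHERE bpt.BlogPostId = @blogPostId", myConnection);
tagQuery.Parameters.AddWithValue("@blogPostId", blogPostId);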

* a “compound key” is one which is made up of more than one column.


DALs and the DAAB

I’ve been pondering something that was raised in passing earlier this week and that is the relationship between a DAL (Data Abstraction Layer) and the DAAB (Data Access Application Block).

It was briefly mentioned in a conversation that I had that the DAAB provides the functionality of a DAL because the developer doesn’t need to worry about the back end database that is being used. I suppose to some extent that is true. However, I don’t believe that it fully functions as a DAL.

To my mind a DAL abstracts the access of data away from the rest of the application. Most people seem to restrict this view to data being held in a database.  But databases are not the only repository of data. Data can be held in plain text files, CSV files, XML files and many other formats. It doesn’t need to arrive by file, it could be data from a service or other mechanism.

If you treat sources of data as being more than a database then the DAAB is not a suitable substitute for building a DAL.

Also, the DAAB has some limitations in that it cannot translate the SQL itself. For example, the flavour of SQL used by Oracle differs from the flavour used by SQL Server, which means that any SQL code would have to be translated. One possible solution is to ensure that everything is done through stored procedures; then all the DAAB needs is the stored procedure’s name and the values for its parameters.

But what of passing stored procedure names and parameters to the DAAB? Wouldn’t they need to be known in the business layer? Surely the business layer should know absolutely nothing about the database? Absolutely, the business layer should not be concerning itself at all with the database. It shouldn’t know about stored procedure names, parameters or anything else, even if the DAAB takes care of figuring out the details under the hood from information picked up from the config file. The Business Layer should just need to know there is a DAL and a method on the DAL can be called and some results come back. How the DAL does anything is of no concern to the business layer.

A quick and dirty test of whether the DAL is well designed, in my opinion, is to look for any classes from the System.Data namespace being used in the business or presentation layers.

In my mind the DAAB is just a tool that can be used to make the creation and maintenance of a DAL easier when dealing with databases. It makes it easy to change the location of databases as the development process moves along from the developers’ machines, to continuous integration, test, pre-live and finally live (or whatever your process calls for). As for the argument that the DAAB makes it easy to swap out one type of database for another: that isn’t something that is actually done all that often. From what I’ve seen, companies generally run two systems concurrently until the old one is discontinued. Rarely do they ever actually update the old system to use the new database, and when they do it is usually via some form of orchestration system so the old system doesn’t need to be changed in any great way. If it isn’t broke, don’t fix it.


What is a DAL (Part 4)

As has been mentioned previously, one of the purposes of the DAL is to shield the application from the database. That said, what happens if a DAL throws an exception? How should the application respond to it? In fact, how can it respond to an exception that it should not know about?

If something goes wrong with a query in the database an exception is thrown. If the database is SQL Server then a SqlException is thrown. If it isn’t SQL Server then some other exception is thrown. Or the DAL may be performing actions against a completely different type of data source such as an XML file, plain text file, web service or something completely different. If the application knows nothing about the back end database (data source) then how does it know which exception to respond to?

In short, it doesn’t. It can’t know which of the myriad of possible exceptions that could be thrown will be and how to respond to it. The calling code could just catch(Exception ex) but that is poor practice. It is always best to catch the most specific exception possible.

The answer is to create a specific exception that the DAL can throw: a DalException that the calling code can catch. The original exception is still available as the InnerException on the DalException.

using System;
using System.Runtime.Serialization;

namespace Dal
{
    [Serializable]
    public class DalException : Exception
    {
        public DalException()
            : base()
        {
        }

        public DalException(string message)
            : base(message)
        {
        }

        public DalException(string message, Exception innerException)
            : base(message, innerException)
        {
        }

        protected DalException(SerializationInfo info, StreamingContext context)
            : base(info, context)
        {
        }
    }
}

The DAL will catch the original exception, create a new one based on the original and throw the new exception.

public DataSet GetPolicy(int policyId)
{
    try
    {
        SqlDataAdapter da =
            (SqlDataAdapter)this.BuildBasicQuery("GetPolicy");
        da.SelectCommand.Parameters.AddWithValue("@id", policyId);
        DataSet result = new DataSet();
        da.Fill(result);
        return result;
    }
    catch (SqlException sqlEx)
    {
        DalException dalEx = BuildDalEx(sqlEx);
        throw dalEx;
    }
}

The code for wrapping the original exception in the DalException can be refactored into a separate method so it can be reused. Depending on what it needs to do, it may be possible to put it as a protected method on one of the abstract base classes.

private DalException BuildDalEx(SqlException sqlEx)
{
    string message = string.Format("An exception occured in the Policy DALrn" +
        "Message: {0}", sqlEx.Message);
    DalException result = new DalException(message, sqlEx);
    return result;
}
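
On the calling side, the application only needs to know about DalException; it never sees SqlException or any other provider-specific type. A minimal sketch (policyDal and the way the error is reported are placeholders):

try
{
    DataSet policy = policyDal.GetPolicy(policyId);
    // ... use the returned data ...
}
catch (DalException dalEx)
{
    // React in whatever way makes sense for the application:
    // log it, show a friendly message, retry, and so on.
    Console.WriteLine(dalEx.Message);
}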


What is a DAL (Part 3)

In this continuation of my series on the DAL I’m going to show the ability to create several DALs and have a Factory class instantiate the correct DAL based on settings in a config file.

One of the purposes of the DAL is to shield the application from the detail of accessing the database. You may have a system that has to talk to different databases depending on the way the software is installed. Typically this is most useful for creators of components that need to plug into many different database systems.

The DAL will no longer be a singleton, but rely on a Factory to create the correct DAL class. This is most useful when the data may be coming from different sources. It can also be useful when your application is in development and the database isn’t ready because you can then create a mock DAL and return that instead of the real thing.

[UML diagram showing multiple DALs]

The above UML Diagram shows the base DAL that was introduced in the last article in this series. Again, it is an abstract class as it does not contain enough functionality to be instantiated on its own. However, it is now joined by two additional abstract classes. These add a bit more functionality specific to the target database, but still not enough to be useful as an object in its own right.

Finally at the bottom layer are the various concrete DAL classes that act as the proxies to the database.

So that the calling application doesn’t need to know what database it is dealing with, the concrete classes implement interfaces. That way the application receives a reference to an IPolicyDal, for example, without having to know whether it is connected to an Oracle database or a SQL Server database (or any other database you may need to support in the future).

The factory class, not shown in the UML diagram above, picks up information in a config file which tells it which specialised DAL to actually create. It returns that specialised instance through a reference to the interface. This means that the calling code does not need to know what specialised class has actually been created. Nor should it need to care.

using System;
using System.Collections.Generic;
using System.Text;
using System.Reflection;

namespace Dal
{
    /// <summary>
    /// Creates and dispenses the correct DAL depending on the configuration
    /// </summary>
    public class DalFactory
    {
        private static IPolicyDal thePolicyDal = null;
        private static IClaimDal theClaimDal = null;

        /// <summary>
        /// Gets the policy DAL
        /// </summary>
        /// <returns>A dal that implements IPolicyDal</returns>
        public static IPolicyDal GetPolicyDal()
        {
            if (thePolicyDal == null)
            {
                string policyDalName = DalConfig.PolicyDal;
                Type policyDalType = Type.GetType(policyDalName);
                thePolicyDal =
                    (IPolicyDal)Activator.CreateInstance(policyDalType);
            }
            return thePolicyDal;
        }

        /// <summary>
        /// Gets the claim DAL
        /// </summary>
        /// <returns>A DAL that implements IClaimDal</returns>
        public static IClaimDal GetClaimDal()
        {
            if (theClaimDal == null)
            {
                string claimDalName = DalConfig.ClaimDal;
                Type claimDalType = Type.GetType(claimDalName);
                theClaimDal =
                    (IClaimDal)Activator.CreateInstance(claimDalType);
            }
            return theClaimDal;
        }
    }
}

The factory stores each DAL instance as it is created so that, if a Get...() method is called repeatedly, it always returns the same instance. The DAL classes should not be storing any state, so this is a safe thing to do.

The DalConfig class is a very simple static class that consists of a number of property getters that retrieve the information from the config file. The PolicyDal and ClaimDal properties return the fully qualified name of the specialised DAL class to instantiate.
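
For completeness, a minimal sketch of what DalConfig might look like (the appSettings key names are assumptions; all that matters is that the values are the fully qualified type names of the DAL classes to create):

using System.Configuration;

namespace Dal
{
    /// <summary>
    /// Reads the DAL type names from the config file.
    /// </summary>
    public static class DalConfig
    {
        public static string PolicyDal
        {
            get { return ConfigurationManager.AppSettings["PolicyDal"]; }
        }

        public static string ClaimDal
        {
            get { return ConfigurationManager.AppSettings["ClaimDal"]; }
        }
    }
}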


What is a DAL? (Part 2)

In my last post about the DAL I presented a very basic DAL based on a static class. This is fine for small applications, but for larger applications the limitations will soon start to show themselves.

In this post I’ll convert the DAL to a series of singletons so that each singleton operates on a different part of the database.

First of all, I should explain what a singleton is. A singleton is a class that has only one instance. The way that single instance is generated is controlled from the class itself.

When some code wants to use the singleton it calls a static property, usually called Instance, to get the one and only instance of the class. The property checks to see if there is already an instance available and, if not, creates it. That instance is stored in a private static field on the class. To ensure that only one instance of the class is ever created, the constructor is made private so that nothing outside the class can construct it.

The following is the DAL with just the parts that make it a singleton:

    public class InvoiceDal
    {
        /// <summary>
        /// A private field to hold the only instance of this class
        /// </summary>
        private static InvoiceDal instance;

        /// <summary>
        /// A private constructor so that the only thing that can
        /// construct this class is itself
        /// </summary>
        private InvoiceDal()
        {
            // Do initialisation stuff here.
        }

        /// <summary>
        /// A public property that retrieves the only instance of the
        /// class
        /// </summary>
        public static InvoiceDal Instance
        {
            get
            {
                if (instance == null)
                    instance = new InvoiceDal();
                return instance;
            }
        }
    }

Now the methods from the original static DAL (see previous post) can be carried over almost as they are. It is just the static keyword that has to be removed.

But, the point of this post is to refactor the old static DAL into a number of DAL classes where each class represents some component. The part given already represents the invoicing system. The next part will be for the stock inventory system.

It should, hopefully, be obvious that part of the DAL given already will be needed again. The lazy way is to just copy and paste the methods into a new DAL class. This is wasteful and the poor sod that has to maintain the code will not thank you. In practice what tends to happen is that, in your absence, you will be the butt of all jokes and your name will be dragged through the mud. This is something you don’t want to happen to you.

So, instead, an abstract base class will be created which contains the common parts of the DAL. Certain decisions need to be made about what goes in the base class. For example, is the base class responsible for figuring out which connection string to use? If you are only going to connect to one database then it certainly seems a good choice. What if you need to (or expect to) connect to multiple databases? In those situations the derived class may be the best place to figure out the connection string. For this example, I’ll assume that multiple databases are in use, but all from the same type of database system (e.g. different databases all running on SQL Server). I’ll come on to different database systems later in the series.

The abstract base class looks like this:

public abstract class DalBase
{
    protected string connectionString;

    protected DalBase()
    {
    }

    /// <summary>
    /// Builds the command object with an appropriate connection and sets
    /// the stored procedure name.
    /// </summary>
    /// <param name="storedProcedureName">The name of the stored
    /// procedure</param>
    /// <returns>The command object</returns>
    protected SqlCommand BuildCommand(string storedProcedureName)
    {
        // Create a connection to the database.
        SqlConnection connection = new SqlConnection(connectionString);

        // Create the command object - The named stored procedure
        // will be called when needed.
        SqlCommand result = new SqlCommand(storedProcedureName,
            connection);
        result.CommandType = CommandType.StoredProcedure;
        return result;
    }

    /// <summary>
    /// Builds a DataAdapter that can be used for retrieving the results
    /// of queries
    /// </summary>
    /// <param name="storedProcedureName">The name of the stored
    /// procedure</param>
    /// <returns>A data adapter</returns>
    protected SqlDataAdapter BuildBasicQuery(string storedProcedureName)
    {
        SqlCommand cmd = BuildCommand(storedProcedureName);

        // Set up the data adapter to use the command already setup.
        SqlDataAdapter result = new SqlDataAdapter(cmd);
        return result;
    }
}

And the derived class looks like this:

public class InvoiceDal : DalBase
{

    /// <summary>
    /// A private field to hold the only instance of this class
    /// </summary>
    private static InvoiceDal instance;

    /// <summary>
    /// A private constructor so that the only thing that can
    /// construct this class is itself
    /// </summary>
    private InvoiceDal()
    {
        connectionString =
            ConfigurationManager.AppSettings["InvoiceConnectionString"];
    }

    /// <summary>
    /// A public property that retrieves the only instance of the class
    /// </summary>
    public static InvoiceDal Instance
    {
        get
        {
            if (instance == null)
                instance = new InvoiceDal();
            return instance;
        }
    }

    /// <summary>
    /// A sample public method. There are no parameters, it simply calls
    /// a stored procedure that retrieves all the products
    /// </summary>
    /// <returns>A DataTable containing the product data</returns>
    public DataTable GetAllProducts()
    {
        SqlDataAdapter dataAdapter = BuildBasicQuery("GetAllProducts");

        // Get the result set from the database and return it
        DataTable result = new DataTable();
        dataAdapter.Fill(result);
        return result;
    }

    /// <summary>
    /// A sample public method. It takes one parameter which is passed
    /// to the database
    /// </summary>
    /// <param name="invoiceNumber">A number which identifies the
    /// invoice</param>
    /// <returns>A dataset containing the details of the required
    /// invoice</returns>
    public DataSet GetInvoice(int invoiceNumber)
    {
        SqlDataAdapter dataAdapter = BuildBasicQuery("GetInvoice");
        dataAdapter.SelectCommand.Parameters.AddWithValue(
            "@invoiceNumber", invoiceNumber);

        DataSet result = new DataSet();
        dataAdapter.Fill(result);
        return result;
    }

    /// <summary>
    /// A sample public method. Creates an invoice in the database and
    /// returns the invoice number to the calling code.
    /// </summary>
    /// <param name="customerId">The id of the customer</param>
    /// <param name="billingAddressId">The id of the billing
    /// address</param>
    /// <param name="date">The date of the invoice</param>
    /// <returns>The invoice number</returns>
    public int CreateInvoice(int customerId, int billingAddressId,
        DateTime date)
    {
        SqlCommand cmd = BuildCommand("CreateInvoice");
        cmd.Parameters.AddWithValue("@customerId", customerId);
        cmd.Parameters.AddWithValue("@billingAddressId",
            billingAddressId);
        cmd.Parameters.AddWithValue("@date", date);

        cmd.Connection.Open();
        int result = (int)cmd.ExecuteScalar();
        cmd.Connection.Close();
        return result;
    }

}

It is now possible to create more of these derived classes for various parts of the application all sharing the same common code in the base class.
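
As a sketch of what that looks like, here is a second DAL for the stock inventory system mentioned earlier (the connection string key and stored procedure name are made up for illustration):

public class StockDal : DalBase
{
    /// <summary>
    /// A private field to hold the only instance of this class
    /// </summary>
    private static StockDal instance;

    /// <summary>
    /// A private constructor so that the only thing that can
    /// construct this class is itself
    /// </summary>
    private StockDal()
    {
        connectionString =
            ConfigurationManager.AppSettings["StockConnectionString"];
    }

    /// <summary>
    /// A public property that retrieves the only instance of the class
    /// </summary>
    public static StockDal Instance
    {
        get
        {
            if (instance == null)
                instance = new StockDal();
            return instance;
        }
    }

    /// <summary>
    /// A sample public method that reuses the plumbing in DalBase.
    /// </summary>
    /// <returns>A DataTable containing the current stock levels</returns>
    public DataTable GetStockLevels()
    {
        SqlDataAdapter dataAdapter = BuildBasicQuery("GetStockLevels");

        DataTable result = new DataTable();
        dataAdapter.Fill(result);
        return result;
    }
}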


db4o: An Embeddable Database Engine for Object-Oriented Environments

Just posted details about this event for Scottish Developers in Glasgow.

The unique design of db4o’s native object database engine makes it the ideal choice to be embedded in equipment and devices, in packaged software running on mobile or desktop platforms, or in real-time control systems – in short: in all Java and .NET environments, where no database administrator (DBA) is present. db4o is used by some of the world’s most innovative companies, including Ricoh, Boeing, BMW, Bosch, Seagate, and Intel.

Jim Paterson describes the key features of db4o, and shows how to save objects in the database, use the db4o query mechanisms, control transactions and object activation, refactor databases, replicate data to a relational database and use db4o in a web app. You will also find out about the extensive developer resources provided by the db4o community.

Jim is a Lecturer in the School of Engineering and Computing at Glasgow Caledonian University. He is an author of “The Definitive Guide to db4o”, the first book to be published about db4o, and has written articles for the onJava and ASP Today websites. He has a particular interest in teaching software development. He has a PhD in Physics, and worked previously as a research physicist and as a software engineer.

Map of the venue showing nearby transport links

Room 16 is on Level 2 of the Continuing Professional Development Centre. The CPDC is on the left side (just past the bookshop) as you enter the university from the pedestrian entrance on Cowcaddens Road.

The event is FREE and all are welcome.
