Setting up Fluent Migrator to run on a build server

This is a step-by-step guide to setting up Fluent Migrator to run on a build server using an MSBUILD project.

Step 1: Setting up the migrations project

Create the Project

The migrations project is just a class library with a couple of NuGet packages added to it.
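
By way of illustration, a migration in such a project is just a class decorated with the Migration attribute. This is a minimal sketch; the version number and table are invented for the example:

```csharp
using FluentMigrator;

// Hypothetical example migration: creates and drops a Customer table.
[Migration(201401010001)]
public class CreateCustomerTable : Migration
{
    public override void Up()
    {
        Create.Table("Customer")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();
    }

    public override void Down()
    {
        Delete.Table("Customer");
    }
}
```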

To make it easier later on to pick up the assembly from the MSBUILD project, we are not going to have separate debug/release bin directories in the way other projects do. We will have one bin folder where the built assembly will be placed, regardless of build configuration.

To do that:

  • Open up the properties for the project (either right-click and select “Properties”, or select the project then press Alt+Enter).
  • Then go to the Build tab.
  • Then change the Configurations drop down to “All Configurations”.
  • Finally, change the output path to “bin\”
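
If you prefer to edit the csproj directly, the result is equivalent to a property group like this (a sketch; the exact shape of your generated csproj may differ):

```xml
<!-- One OutputPath for every configuration, instead of bin\Debug and bin\Release -->
<PropertyGroup>
  <OutputPath>bin\</OutputPath>
</PropertyGroup>
```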

Add the NuGet Packages

The NuGet packages you want are:

  • FluentMigrator – This is the core of Fluent Migrator and contains everything to create database migrations
  • FluentMigrator Tools – This contains various runners and so on.

The Fluent Migrator Tools package is a bit odd: it installs the tools in the packages folder of your solution but does not add anything to your project.

Add the MSBuild tools to the project

As I mentioned, the Fluent Migrator Tools package won’t add anything to the project; you have to do that manually yourself. I created a post-build step to copy the relevant DLL from the packages folder to the bin directory of the migrations project.

  • Open the project properties again
  • Go to the “Build Events” tab
  • Add the following to the post-build event command line box:
    xcopy "$(SolutionDir)packages\FluentMigrator.Tools.1.3.0.0\tools\AnyCPU\40" "$(TargetDir)" /y /f /s /v
    NOTE: You may have to modify the folder depending on the version of the Fluent Migrator Tools you have

Add the MSBUILD project to the project

OK, so that sounds a bit circular. Your migrations project is a C# project (csproj), and the build server will need an MSBUILD script to get going with, which will sit inside your C# project.

Since there is no easy way to add an MSBUILD file to an existing project, I found the easiest approach was to add an XML file and then rename it to migrations.proj

Step 2: Configuring the MSBUILD Script

This is what the MSBUILD script looks like.

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- Set up the MSBUILD script to use tasks defined in FluentMigrator.MSBuild.dll -->
  <UsingTask TaskName="FluentMigrator.MSBuild.Migrate" AssemblyFile="$(OutputPath)FluentMigrator.MSBuild.dll"/>
  
  <!-- Set this to the parent project. The C# project this is contained within. -->
  <Import Project="$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj" />

  <!-- Each of these targets a different environment. Set the properties to the 
       relevant information for the database in that environment. It is one of
       these targets that will be specified on the build server to run.
       Other properties may be passed into the MSBUILD process 
       externally. -->
  <Target Name="MigrateLocal">
    <Message Text="Migrating the Local Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=localhost;database=my-database" />
  </Target>

  <Target Name="MigrateUAT">
    <Message Text="INFO: Migrating the UAT Database"/>
    <MSBuild Projects="$(MSBuildProjectFile)" Targets="Migrate" Properties="server=uat-db;database=my-database" />
  </Target>

  <!-- * This is the bit that does all the work. It defaults some of the properties
         in case they were not passed in.
       * Writes some messages to the output to tell the world what it is doing.
       * Finally it performs the migration. It also writes to an output file the script 
         it used to perform the migration. -->
  <Target Name="Migrate">
    <CreateProperty Value="False" Condition="'$(TrustedConnection)'==''">
      <Output TaskParameter="Value" PropertyName="TrustedConnection"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(User)'==''">
      <Output TaskParameter="Value" PropertyName="User"/>
    </CreateProperty>
    <CreateProperty Value="" Condition="'$(Password)'==''">
      <Output TaskParameter="Value" PropertyName="Password"/>
    </CreateProperty>
    <CreateProperty Value="False" Condition="'$(DryRun)'==''">
      <Output TaskParameter="Value" PropertyName="DryRun"/>
    </CreateProperty>
    
    <Message Text="INFO: Project is «$(MSBuildProjectDirectory)\My.DatabaseMigrations.csproj»" />
    <Message Text="INFO: Output path is «$(OutputPath)»"/>
    <Message Text="INFO: Target is «$(OutputPath)\$(AssemblyName).dll»"/>
    <Message Text="INFO: Output script copied to «$(OutputPath)\script\generated.sql»"/>    
    <Message Text="INFO: Dry Run mode is «$(DryRun)»"/>
    <Message Text="INFO: Server is «$(server)»"/>
    <Message Text="INFO: Database is «$(database)»"/>
    
    <MakeDir Directories="$(OutputPath)\script"/>
    <Migrate
      Database="sqlserver2012"
      Connection="Data Source=$(server);Database=$(database);Trusted_Connection=$(TrustedConnection);User Id=$(User);Password=$(Password);Connection Timeout=30;"
      Target="$(OutputPath)\$(AssemblyName).dll"
      Output="True"
      Verbose="True"
      Nested="True"
      Task="migrate:up"
      PreviewOnly="$(DryRun)"
      OutputFilename="$(OutputPath)\script\generated.sql"
      />
  </Target>
  
</Project>

Step 3 : Configuring the Build Server

In this example, I’m using TeamCity.

You can add a build step after building the solution to run the migration. The settings will look something like this:


The important bits are the “Build file path”, which points to the MSBUILD file we created above; the “Targets” box, which indicates which target to run; and the “Command Line Parameters”, which pass properties to MSBUILD that were not included in the file itself. For example, the user name and password are not included in the file as that could present a security risk, so the build server passes this information in.
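
On the command line, the equivalent invocation would look something like this (the target and property names match the script above; the user name and password values are invented examples):

```shell
msbuild migrations.proj /t:MigrateUAT /p:User=deploy-user /p:Password=secret
```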

What about running it ad-hoc on your local machine?

Yes, this is also possible.

Because we copied all the tools to the bin directory in the post-build step above, there is a Migrate.exe file in your bin directory. It takes some command-line parameters that you can use to run the migrations locally without MSBUILD.

  • Open up the project properties again for your migrations C# project
  • Go to the “Debug” tab
  • In “Start Action” select “Start external program” and enter “.\Migrate.exe”
  • In Command line arguments enter something like the following:

    --conn "Server=localhost;Database=my-database;Trusted_Connection=True;Encrypt=True;Connection Timeout=30;" --provider sqlserver2012 --assembly "My.DatabaseMigrations.dll" --task migrate --output --outputFilename src\migrated.sql

aspnet_regiis “Could not load file or assembly ‘SimpleAuthentication.Core’ or one of its dependencies.”

I was recently following Jouni Heikniemi’s blog post on Encrypting connection strings in Windows Azure web applications when I stumbled across a problem.

The issue was that I wasn’t encrypting the connectionStrings section, I was encrypting a custom section (one provided by SimpleAuthentication). And in order to encrypt that section, aspnet_regiis needs access to the DLL that defines the config section. If it cannot find the DLL it needs it will respond with an error message:

C:\dev\Xander.HorribleCards\src\Xander.HorribleCards.UI.Web>aspnet_regiis -pef "authenticationProviders" . -prov "Pkcs12Provider" 
Microsoft (R) ASP.NET RegIIS version 4.0.30319.18408 
Administration utility to install and uninstall ASP.NET on the local machine. 
Copyright (C) Microsoft Corporation.  All rights reserved. 
Encrypting configuration section... 
An error occurred creating the configuration section handler for authenticationProviders: Could not load file or assembly 'SimpleAuthentication.Core' or one of 
its dependencies. The system cannot find the file specified. (C:\dev\Xander.HorribleCards\src\Xander.HorribleCards.UI.Web\web.config line 7) 
Could not load file or assembly 'SimpleAuthentication.Core' or one of its dependencies. The system cannot find the file specified. 
Failed!

And here is the relevant part of the web.config file

<?xml version="1.0" encoding="utf-8"?> 
<configuration> 
  <configSections> 
    <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> 
      <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" /> 
    </sectionGroup> 
    <section name="authenticationProviders" type="SimpleAuthentication.Core.Config.ProviderConfiguration, SimpleAuthentication.Core" /> 
  </configSections>

It took searching through a few forum posts before I eventually found the answer. Most were pointing in the right general direction. You either have to load the assembly that defines the config section into the GAC (not possible for me as it was a third party assembly that was not strong named) or put it where aspnet_regiis was looking for it.

All the non-GAC solutions that I found were hacky horrible things that put the assembly somewhere in the .NET folder.

My problem was that the location everyone was saying to put it in wasn’t working for me. So I loaded up Process Monitor to see where exactly aspnet_regiis was looking. It turned out that because I was using the 64-bit version of the command prompt, I should have been looking in C:\Windows\Microsoft.NET\Framework64\v4.0.30319

I put the assembly in that directory, aspnet_regiis worked, and the relevant section was encrypted. The application still ran, and I could store the config in source control without other people knowing what my secret keys are.
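
In short, the workaround amounts to copying the assembly next to the tool before running it. A sketch from an elevated 64-bit command prompt; the paths are from my machine and will vary on yours:

```shell
copy bin\SimpleAuthentication.Core.dll C:\Windows\Microsoft.NET\Framework64\v4.0.30319
aspnet_regiis -pef "authenticationProviders" . -prov "Pkcs12Provider"
```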

Round tripping the encryption/decryption

I also had some issues round tripping the encrypted and decrypted config file while developing. I kept getting the error message:

Decrypting the relevant config settings
Microsoft (R) ASP.NET RegIIS version 4.0.30319.18408
Administration utility to install and uninstall ASP.NET on the local machine.
Copyright (C) Microsoft Corporation.  All rights reserved.
Decrypting configuration section...
Failed to decrypt using provider 'Pkcs12Provider'. Error message from the provider: Keyset does not exist
 (C:\dev\Xander.HorribleCards\src\Xander.HorribleCards.UI.Web\web.config line 65)

Keyset does not exist

Failed!

It turned out to be a permissions issue on the private key. The Stack Overflow post “Keyset does not exist” explains how to resolve it.

A better tracing routine

In .NET 4.5 three new attributes were introduced. They can be used to pass the details of the caller into a method, and this can be used to create better trace or logging messages. In the example below, tracing messages are output in a format that Visual Studio can use to jump straight to the appropriate line of source code if you need it to.

The three new attributes are:

  • CallerMemberName (string)
  • CallerFilePath (string)
  • CallerLineNumber (int)

If you decorate the parameters of a method with the above attributes (respecting the types, in brackets afterwards) then the values will be injected in at compile time.

For example:

public class Tracer
{
    public static void WriteLine(string message,
                            [CallerMemberName] string memberName = "",
                            [CallerFilePath] string sourceFilePath = "",
                            [CallerLineNumber] int sourceLineNumber = 0)
    {
        string fullMessage = string.Format("{1}({2},0): {0}{4}>> {3}", 
            memberName,sourceFilePath,sourceLineNumber, 
            message, Environment.NewLine);

        Console.WriteLine("{0}", fullMessage);
        Trace.WriteLine(fullMessage);
    }
}

The above method can then be used in preference to the built-in Trace.WriteLine, and it will output the details of where the message came from. The full message is also in a format where you can double-click the line in the Visual Studio output window and it will take you to that line in the source.

Here is an example of the output:

c:\dev\spike\Caller\Program.cs(13,0): Main
>> I'm Starting up.
c:\dev\spike\Caller\SomeOtherClass.cs(7,0): DoStuff
>> I'm doing stuff.

The lines with the file path and line numbers on them can be double-clicked in the Visual Studio output window and you will be taken directly to the line of code it references.

What happens when you call Tracer.WriteLine is that the compiler injects literal values in place of the parameters.

So, if you write something like this:

Tracer.WriteLine("I'm doing stuff.");

Then the compiler will output this:

Tracer.WriteLine("I'm doing stuff.", "DoStuff", "c:\\dev\\spike\\Caller\\SomeOtherClass.cs", 7);

Using Contracts to discover Liskov Substitution Principle Violations in C#

In his book Agile Principles, Patterns, and Practices in C#, Bob Martin talks about using pre- and post-conditions in Eiffel to detect Liskov Substitution Principle violations. At the time he wrote that, C# did not have an equivalent feature, and he suggested using unit test coverage to achieve the same result. However, that does not ensure that checks for LSP violations are applied consistently. It is up to the developer writing the tests to ensure that they are, and that any new derived classes get tested correctly. These days, contracts can be applied to the base class and they will automatically be applied to any derived class that is created.

Getting started with Contracts

If you’ve already got Visual Studio set up to check contracts in code then you can skip this section. If you don’t then read on.

1. Install the Code Contracts for .NET extension into Visual Studio.

2. Open Visual Studio and load the solution containing the projects you want to apply contracts to.

3. Open the properties for the project and you’ll see a new tab in the project properties window called “Code Contracts”

4. Make sure that the “Perform Runtime Contract Checking” and “Perform Static Contract Checking” boxes are checked. For the moment the other options can be left at their default values. Only apply these to the debug build. It will slow down the application while it is running as each time a method with contract conditions is called it will be performing runtime checks.

Visual Studio Project Properties

You are now ready to see code contract issues in Visual Studio.

For more information on code contracts head over to Microsoft Research’s page on Contracts.

Setting up the Contract

Using the Rectangle/Square example from Bob Martin’s book Agile Principles, Patterns and Practices in C# here is the code with contracts added:

public class Rectangle
{
    private int _width;
    private int _height;
    public virtual int Width
    {
        get { return _width; }
        set
        {
            Contract.Requires(value >= 0);
            Contract.Ensures(Width == value);
            Contract.Ensures(Height == Contract.OldValue(Height));
            _width = value;
        }
    }

    public virtual int Height
    {
        get { return _height; }
        set
        {
            Contract.Requires(value >= 0);
            Contract.Ensures(Height == value);
            Contract.Ensures(Width == Contract.OldValue(Width));
            _height = value;
        }
    }
    public int Area { get { return Width * Height; } }
}

public class Square : Rectangle
{
    public override int Width
    {
        get { return base.Width; }
        set
        {
            base.Width = value;
            base.Height = value;
        }
    }

    public override int Height
    {
        get { return base.Height; }
        set
        {
            base.Height = value;
            base.Width = value;
        }
    }
}

The Square class is in violation of the LSP because it changes the behaviour of the Width and Height setters. To any user of Rectangle that doesn’t know about squares it is quite understandable that they’d assume that setting the Height left the Width alone and vice versa. So if they were given a square and they attempted to set the width and height to different values then they’d get a result from Area that was inconsistent with their expectation and if they set Height then queried Width they may be somewhat surprised at the result.

But there are now contracts in place on the Rectangle class, and as such they are enforced on Square as well. “Contracts are inherited along the same subtyping relation that the type system enforces.” (from the Code Contracts User Manual). This means that any class that has contracts will have those contracts enforced in any derived class as well.

While some contracts can be detected at compile time, others will still need to be activated through a unit test or will be detected at runtime. Be aware, if you’ve never used contracts before that the contract analysis can appear in the error and output windows a few seconds after the compiler has completed.

Consider this test:

[Test]
public void TestSquare()
{
    var r = new Square();
    r.Width = 10;

    Assert.AreEqual(10, r.Height);
}

When the test runner gets to this test it will fail. Not because the underlying code is wrong (it will set Height to be equal to Width), but because the method violates the constraints of the base class.

The contract says that when the base class changes the Width the Height remains the same and vice versa.

So, despite the fact that the unit tests were not explicitly testing for an LSP violation in Square, the contract system stepped in and highlighted the issue, causing the test to fail.


Running an ASP.NET MVC application on a fresh IIS8 install

IIS has ever-increasing amounts of security. You can’t publish a basic ASP.NET MVC website any more and expect IIS to host it without some additional work. The default config settings that MVC uses are locked down in IIS, so it issues an error when you try to navigate to your fresh site.

Initially you may get a screen that says something bland and non-descriptive, like “Internal Server Error” with no further information.

To get the more detailed error messages modify your web application’s web.config file and add the following line to the system.webServer section:

<httpErrors errorMode="Detailed" />

Now, you’ll get a more detailed error message. It will look something like this:

The key to the message is: This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false".

The “Config Source” section of the error message will highlight in red the part that is denied.

In order to allow the web.config to modify the identified configuration element, you need to find and modify the ApplicationHost.config file. It is located in C:\Windows\System32\inetsrv\config. You’ll need to be running as an Administrator-level user in order to modify the file.

Find the section group the setting belongs to, e.g.

<sectionGroup name="system.webServer">

Then the section itself:

<section name="handlers" overrideModeDefault="Deny" />

And update overrideModeDefault to "Allow" in order to allow the web.config to override it.

When you refresh the page for the website, the error will be gone (or replaced with an error for the next section that you are not permitted to override).

Tip of the day: Quickly finding commented out code in C-like languages

This is a quick rough-and-ready way of finding commented-out code in languages like C#, C++, JavaScript and Java. It won’t find all instances, but it will probably find most.

Essentially, you can use a regular expression to search through the source code looking for a specific pattern.

The following works for Visual Studio for searching C#. The same expression would also likely work in other similar languages where the comment line starts with a double slash and ends in a semi-colon.

So for Visual Studio, press Ctrl+F to bring up the search bar, set the search option to “Regular Expression”, set it to look in the “Entire Solution” (or whatever extent you are searching), then enter the regular expression into the search box:

^\s*//.*;\s*$
Visual Studio 2012 Search Bar

Then tell it to “Find all” and you’ll be presented with a list of lines of code that appear to be commented out.
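
If you want to run the same kind of search outside Visual Studio, an equivalent with grep might look like this (a sketch, using POSIX character classes in place of \s):

```shell
# Search all .cs files for whole lines that look like commented-out statements.
# [[:space:]] is the POSIX equivalent of \s; -n prints line numbers, -r recurses.
grep -rnE '^[[:space:]]*//.*;[[:space:]]*$' --include='*.cs' .
```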

Caveats

It is not foolproof. It does not find code commented out in the style:

/*
* Commented = out.code.here();
*/

It also does not find commented-out code on lines that do not end in a semi-colon, such as namespace, class and method declarations, nor on lines that simply contain scope operators (the { and } braces that define the scope of a namespace, class, method, etc.)

And although I’ve said C-like languages in the title, C itself didn’t gain the double-slash comment style until C99 (although many languages based on it adopted the style earlier, hence “C-like”).

Update: Updated the regular expression to ignore white space at the start and end of the line.

DROP/CREATE vs ALTER on SQL Server Stored Procedures

We have a number of migration scripts that modify the database as our applications progress. As a result we have occasion to alter stored procedures, but during development the migrations may be run out of sequence on various test and development databases. We sometimes don’t know if a stored procedure will exist on a particular database because of the state of various branches of development.

So, which is better: dropping a stored procedure and recreating it, or simply altering the existing one? And if we’re altering it, what needs to happen if it doesn’t already exist?

DROP / CREATE

This is the code we used for the DROP/CREATE cycle of adding/updating stored procedures

IF EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'MyProc')
BEGIN
       DROP PROCEDURE MyProc;
END
GO
 
CREATE PROCEDURE MyProc
AS
	-- Body of proc here
GO

CREATE / ALTER

We recently changed the above for a new way of dealing with changes to stored procedures. What we do now is detect whether the procedure exists; if it doesn’t, we create a dummy. Regardless of whether the stored procedure previously existed, we can then ALTER it (whether it was an existing procedure or the dummy one we just created).

That code looks like this:

IF NOT EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'MyProc')
BEGIN
       EXEC('CREATE PROCEDURE [dbo].[MyProc] AS BEGIN SET NOCOUNT ON; END')
END
GO
 
ALTER PROCEDURE [dbo].[MyProc]
AS
	-- Body of proc here
GO

It looks counter-intuitive to create a dummy procedure just to immediately alter it, but this has some advantages over drop/create.

If an error occurs with drop/create while creating the stored procedure, and there had been a stored procedure to drop originally, you are now left without anything. With create/alter, in the event of an error when altering the stored procedure, the original is still available.

Altering a stored procedure keeps any security settings you may have on the procedure. If you drop it, you’ll lose those settings. The same goes for dependencies.

Tip of the day: Forcing a file in to your Git repository

Normally, you’ll have a .gitignore file that defines what should not go into a git repository. If you find that there is a file or two that is ignored but that you really need in the repository (e.g. a DLL dependency in a bin folder for some legacy application that doesn’t do NuGet or other package management) then you need a way to force it into Git.

The command you need is this:

git add --force <filename>

Replace <filename> with the file that you need to force in. This will force stage the file ready for your next commit.

You can also use . in place of a file name to force in an entire directory, or you can use wild cards to determine what is staged.

Afterwards you can use

git status

to see that it has been staged.
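
Putting it all together, a throwaway demonstration might look like this (the repository and file names are invented for the example):

```shell
# Create a throwaway repo where bin/ is ignored, then force a DLL into the index.
git init demo-repo && cd demo-repo
echo 'bin/' > .gitignore
mkdir -p bin && touch bin/legacy.dll
git add --force bin/legacy.dll
git status --short    # the DLL shows up as staged (A), despite .gitignore
```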

Recruitment Agents take note

A few times a week I get emails from recruitment agencies, and they are pretty much all along the same lines. The email seems to be a standard template that tells me absolutely nothing of importance about the job and gives me next to zero incentive to find out more.

I’m in a pretty great job at the moment that I’m really enjoying, so I’m not actually looking to move. But had this been maybe a year ago (before things got restructured), I would have moved if anyone had given me a reasonable incentive to do so. Based on the generic emails that say nothing of consequence that recruitment agents send out, it is better the devil you know than the devil you don’t. And I really don’t know.

So, here’s an example of something I received earlier this week:

From: <Name-of-agent>
Sent: <date>
To: Colin Mackay
Subject: Possible Synergy

Hi Colin,

We’ve not spoken before, I’m a head-hunter in the <technology-misspelled> development space and your name has came to light as a top talent in the <technology-misspelled> space.

I know your not actively on the market and I would not be contacting you if I didn’t feel I had something truly exceptional.

My role not only gives you interesting programme work in the <technology-misspelled> space but also strong career progression route in a growing business, work life balance, supportive environment, stability and a final salary pension. <name-of-city-I-live-in> based role.

Are you free for a discreet chat about this, what is the best time and number to call you on?

Kind Regards,

<name-of-agent>

<contact details>

This tells me very little. She has at least identified that I work with the relevant technology (although sometimes I think that might just be a fluke, given the number of emails I receive about things that I’m not remotely competent in) and the city I live in, so I suppose that’s a good start.

Pretty much every recruitment agent sends out something similar. Every email I receive says the job is “truly exceptional”, “exciting” or that it’s an “amazing opportunity”. Those words are so overused that more often than not the email gets binned at that point. A lesson from many a primary school teacher trying to improve her pupils’ vocabulary is that they can’t use the word “nice” any more and they’ll get marked down if they do.

Nothing here sells me on the idea that a change would be a good idea, even though they acknowledged I’m not actively on the market.

The agent did not mention the type of company. Even if they can’t mention the name of the company at this stage, the following would be useful: Is it a software consultancy? A digital agency? A software house with a defined product? An internal software department in a larger company? Which industry is the company operating in?

Some of the answers might turn me off, but it is better to know now than waste time to find out later. Some of the answers may pique my interest, which is obviously a good thing.

They mention the “<name-of-technology> space”. For the moment, we’ll ignore that it was misspelled (lots of technologies have strange ways of spelling or capitalising things, but it doesn’t take long to find out the official way).

They don’t really define what “XYZ space” actually means. There are so many subgroups of technology in that “space” that it could mean anything, including things I’m either unsuitable for or have no interest in. What’s the database technology (assuming there is one)? What is the front end technology (assuming there is one)? Or is the role wholly at one end or the other (e.g. mostly in the business logic or mostly in the front end)? What tool sets and frameworks are involved? (e.g. Visual Studio 2012, include version numbers. I’m interested in progressing forward, but if they’re still on Visual Studio 2008 I’m not interested and it would be better that you know that now). Is the company all single-vendor based (i.e. only using a tool if that vendor produced it) or do they use technologies from third parties (open source or commercial)?

There is nothing about training in the description they’ve provided. That would be a big bonus to me. I already spend in the region of £2000 a year keeping myself up to date (books, online videos, conferences, etc.). It would be nice to find an employer that is genuinely interested in contributing in that area, beyond buying occasional books or giving me the occasional day off outside of my annual leave to attend a conference that I’m already paying for. After all, they are the ones benefiting from all that training. Occasionally emails do mention training, but it is sometimes couched in language that suggests a reluctance (e.g. “as and when required by the business”); it’s there because the company or agent knows that mentioning training will attract potential candidates.

If the prospective company doesn’t provide training then I’d remind them that it is “Better to train your developers and risk they leave, than keep them stupid and risk they stay”. If the prospective company has a really negative view to training then I really wouldn’t want to work for them – I have already worked with a company that seemed to proactively provide disincentives for any sort of training.

Finally, there is no mention about salary. While, on the whole, I’m more interested in other things, I do have a mortgage to pay. If the salary won’t cover my bills with enough left over for a nice holiday (it’s no fun sitting at home watching Jeremy Kyle on days off) then that would be a showstopper even if all other things were perfect.

Also, stating salary as “£DOE” or “£market rate” is equally useless. Companies have a budget. They might say “£DOE” (depending on experience), but if your expectation goes above their budget then the budget is all they are going to offer. If that is not enough then it is better to know up front than later on.

I’ve also been in situations where I’ve felt that the recruitment agent knew my salary expectation wasn’t going to fly with the hiring company, but strung me along for a bit until finally saying that they rejected my CV. It would be better to let potential recruits know up front without wasting everybody’s time.

While providing more information up front might reduce the interest from some potential candidates, at least they are not going to waste their valuable time and the recruitment agent’s valuable time pursuing something that is not going to come to anything. On the other hand, providing more information might be the catalyst to getting someone who is not actively looking to sit up and think about making that change.

Certainly, if I keep receiving the generic emails like the one above, especially that acknowledge I’m not actively looking, then I’m never going to look unless my current employer does something to make me question why I am there.