Pomodoro

I keep hearing about friends using The Pomodoro Technique and I’ve decided I really need to just try it out for myself. I’m not yet sure if it will work for me, but I’ve been hearing positive things about it.

To that end I set myself a small project to complete using it: create a web page with a pomodoro timer that shows a visible indicator of the time left and makes a noise when the timer expires. The final pomodoro is to write up this blog post.

This task has a mix of things I already know about (putting together a web page with HTML, CSS, JavaScript and jQuery) and things I would have to look up (like how to get a web page to emit a sound at a given point, have it respond to events at timed intervals, and deploy it to the web via Amazon’s AWS).
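To give a flavour of the two bits I had to look up, here is a minimal sketch (not the actual code behind the timer linked below), assuming the page has a span with the id time-left and an HTML5 audio element with the id alarm:

// Count down from 25 minutes, updating the display once a second,
// and play a sound when the time runs out.
var secondsLeft = 25 * 60;
var timer = setInterval(function () {
  secondsLeft -= 1;
  var minutes = Math.floor(secondsLeft / 60);
  var seconds = secondsLeft % 60;
  document.getElementById("time-left").innerHTML =
    minutes + ":" + (seconds < 10 ? "0" : "") + seconds;
  if (secondsLeft <= 0) {
    clearInterval(timer);
    document.getElementById("alarm").play();
  }
}, 1000);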

The first thing I found was that 25 minutes actually goes past very quickly. Secondly, and this is more because it is the Christmas holidays, I still need to discipline myself not to jump to Facebook or Twitter each time my phone beeps or chirrups a notification at me. Similarly, at work I would probably have to discipline myself not to jump to Outlook or Skype when they pop up notifications.

On the whole, it looks like it could be fairly advantageous and I’ll continue to see if it helps productivity.

For the moment, if you do want to have a look at the very simple pomodoro timer that I created, then you can access it here: http://pomodoro.colinmackay.co.uk – I’m also happy to take suggestions on improvements if you think it could be made better.

Getting Entity Framework to Pick up on the Auto-generated Part of the Composite Primary Key

In my previous post, I wrote about how to get SQL Server to automatically generate a composite primary key for a table when part of that key is also a foreign key to a parent table. That all works nicely in SQL Server and using regular ADO.NET commands from a .NET application. However, as the title of this post suggests, it is a little more of an issue when it comes to Entity Framework.

When you get Entity Framework to generate its model from the database then it picks up on identity columns automatically. In the example in the previous post there were no identity columns in the child table because the value was being generated from within the INSTEAD OF INSERT trigger and the column was not marked as an identity column.

It is easy enough to go into the entity model and manually set the StoreGeneratedPattern property of the Id column to Identity, which indicates to Entity Framework that it has to find out what the value is once the entity has been inserted into the database.
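If you are using code first rather than the designer, the equivalent configuration would look something like the sketch below (the entity and context names are illustrative, following the Products table from the previous post):

using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

public class Product
{
  public int TenantId { get; set; }
  public int Id { get; set; }
  public string Name { get; set; }
}

public class TenantContext : DbContext
{
  public DbSet<Product> Products { get; set; }

  protected override void OnModelCreating(DbModelBuilder modelBuilder)
  {
    // Composite primary key: TenantId + Id.
    modelBuilder.Entity<Product>()
      .HasKey(p => new { p.TenantId, p.Id });

    // Tell Entity Framework the Id part is store generated, so it reads the
    // value back after the INSERT (the same effect as StoreGeneratedPattern = Identity).
    modelBuilder.Entity<Product>()
      .Property(p => p.Id)
      .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
  }
}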

For an integer column this means that the Entity Framework will issue an INSERT statement like this:

exec sp_executesql N'insert [dbo].[Products]([TenantId], [Name])
values (@0, @1)
select [Id]
from [dbo].[Products]
where @@ROWCOUNT > 0 and [TenantId] = @0 and [Id] = scope_identity()',N'@0 int,@1 nvarchar(50)',@0=1,@1=N'Test Product A'

You’ll notice that this isn’t just a simple insert; it also performs a SELECT immediately afterwards in order to get the value of the newly inserted key back so that it can update the entity.

However, the way it does this will not produce a value. SCOPE_IDENTITY() will always be null because there was no identity column. In fact, even if there was, the fact that we used a trigger would have destroyed the value it contained.

After much searching around on the internet I didn’t find a solution for this issue. I even posted on StackOverflow and didn’t get an answer back (at least, I haven’t at the time of writing). However, I did eventually come across a workaround that could be used in place of SCOPE_IDENTITY(). The workaround involved changing the way the trigger worked to some extent.

The new trigger would capture the keys that it inserted and output them in a select statement at the end. The new trigger looked like this:

ALTER TRIGGER dbo.IOINS_Products 
   ON  dbo.Products 
   INSTEAD OF INSERT
AS 
BEGIN
  SET NOCOUNT ON;
  
  -- Acquire the lock so that no one else can generate a key at the same time.
  -- If the transaction fails then the lock will automatically be released.
  -- If the acquisition takes longer than 15 seconds an error is raised.
  DECLARE @res INT;
  EXEC @res = sp_getapplock @Resource = 'IOINS_Products', 
    @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = '15000',
    @DbPrincipal = 'public'
  IF (@res < 0)
  BEGIN
    RAISERROR('Unable to acquire lock to update Products table.', 16, 1);
  END

  -- Work out what the current maximum Ids are for each tenant that is being
  -- inserted in this operation.
  DECLARE @baseId TABLE(BaseId int, TenantId int);
  INSERT INTO @baseId
  SELECT MAX(ISNULL(p.Id, 0)) AS BaseId, i.TenantId 
  FROM  inserted i
  LEFT OUTER JOIN Products p ON i.TenantId = p.TenantId
  GROUP BY i.TenantId

  -- The replacement insert operation
  DECLARE @keys TABLE (Id INT);
  INSERT INTO Products
  OUTPUT inserted.Id INTO @keys
  SELECT 
    i.TenantId, 
    ROW_NUMBER() OVER(PARTITION BY i.TenantId ORDER BY i.TenantId) + b.BaseId 
      AS Id,
    Name
  FROM inserted i
  INNER JOIN @baseId b ON b.TenantId = i.TenantId

  -- Release the lock.
  EXEC @res = sp_releaseapplock @Resource = 'IOINS_Products', 
    @DbPrincipal = 'public', @LockOwner = 'Transaction'

  SELECT Id FROM @keys
END
GO

As you can see the last line of the trigger performs a SELECT in order to get the keys.

Since the Entity Framework is only expecting one result set from the SQL it issued, the fact that we have now added in a second result set in the trigger tricks it into thinking that it has got the SCOPE_IDENTITY() value it was asking for. In fact, that result set, which contained a null value anyway, is now the second result set and is ignored by the Entity Framework.

If you are only ever going to use your database with Entity Framework then this solution may work for you. However, the idea that the trigger creates additional (and potentially unexpected) result sets may prove this workaround’s undoing in a more widely used system.

Composite Primary Keys Including Identity Like Column

I’ve been thinking of a way to organise some data for a multi-tenanted system. As such the database that would be used would have to mark pretty much every piece of data with the id of the tenant in some way. This got me thinking about using composite primary keys on tables.

Using a composite primary key with the TenantId as the first part has the advantage that all data relating to a single tenant is adjacent in the database, which should speed up reads as there will be less to read. Any reads will likely be confined to a single tenant. I can’t see lookups made independently of the tenant being frequent, and if they do occur it is most likely not as part of the application but as a DBA sitting down at SSMS.

The second part of the composite primary key is kind of like an identity column, but not quite. While an identity column could be used, it would produce gaps in the sequence for an individual tenant and I didn’t want that. I wanted similar behaviour to an identity column but with separate counters for each tenant.

Pitfalls on the way

When I was looking for a solution much of what I found talked about doing the process manually, often from the application side. I really don’t want the application having to know that much about the database.

Other solutions talked about only ever using stored procedures, which is a little better, but then if you have many stored procedures that insert into the table with the composite key then you have to remember to call the code from all those places.

In the end I went for an INSTEAD OF INSERT trigger. But implementing the ideas I’d run across in a trigger proved problematic.

I tried a few different ideas which worked well when a single user was accessing the database, but when stress testing the design with multiple simultaneous inserts I got either primary key violations (insufficient locking) or deadlocks (too much locking). In the end I discovered that SQL Server has a mutex-like concept, the application lock, which serialises a section of code rather than locking tables. While this may not sound terribly useful in general, if there is only ever one piece of code that generates the identity-like column, such as the INSTEAD OF INSERT trigger, then it works.

The Solution

Here is the solution, which I’ll walk you through in a moment. For the moment, just be aware that there are two tables: Tenants and Products. A product can be owned by a single tenant only, so there is a parent/child relationship.

Tables:

CREATE TABLE [dbo].[Tenants](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[Name] [nvarchar](50) NOT NULL,
 CONSTRAINT [PK_Tenants] PRIMARY KEY CLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[Products](
	[TenantId] [int] NOT NULL,
	[Id] [int] NOT NULL,
	[Name] [nvarchar](50) NOT NULL,
 CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED 
(
	[TenantId] ASC,
	[Id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

ALTER TABLE [dbo].[Products]  WITH CHECK ADD  CONSTRAINT [FK_Products_Tenants] FOREIGN KEY([TenantId])
REFERENCES [dbo].[Tenants] ([Id])
GO

ALTER TABLE [dbo].[Products] CHECK CONSTRAINT [FK_Products_Tenants]
GO

Trigger:

CREATE TRIGGER dbo.IOINS_Products 
   ON  dbo.Products 
   INSTEAD OF INSERT
AS 
BEGIN
  SET NOCOUNT ON;

  -- Acquire the lock so that no one else can generate a key at the same time.
  -- If the transaction fails then the lock will automatically be released.
  -- If the acquisition takes longer than 15 seconds an error is raised.
  DECLARE @res INT;
  EXEC @res = sp_getapplock @Resource = 'IOINS_Products', 
    @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = '15000',
    @DbPrincipal = 'public'
  IF (@res < 0)
  BEGIN
    RAISERROR('Unable to acquire lock to update Products table.', 16, 1);
  END

  -- Work out what the current maximum Ids are for each tenant that is being
  -- inserted in this operation.
  DECLARE @baseId TABLE(BaseId int, TenantId int);
  INSERT INTO @baseId
  SELECT MAX(ISNULL(p.Id, 0)) AS BaseId, i.TenantId 
  FROM  inserted i
  LEFT OUTER JOIN Products p ON i.TenantId = p.TenantId
  GROUP BY i.TenantId

  -- The replacement insert operation
  INSERT INTO Products
  SELECT 
    i.TenantId, 
    ROW_NUMBER() OVER(PARTITION BY i.TenantId ORDER BY i.TenantId) + b.BaseId 
      AS Id,
    Name
  FROM inserted i
  INNER JOIN @baseId b ON b.TenantId = i.TenantId

  -- Release the lock.
  EXEC @res = sp_releaseapplock @Resource = 'IOINS_Products', 
    @DbPrincipal = 'public', @LockOwner = 'Transaction'
END
GO

The Tenants table does use a traditional IDENTITY column for its primary key, but the Products table does not.

The INSTEAD OF INSERT trigger will do just what it says: instead of using the insert statement that was issued, it will replace it with its own code. This means that the database has the opportunity to perform additional actions before the insert takes place, potentially replacing the INSERT itself.

The first thing the trigger does is attempt to acquire a lock to ensure that no one else can calculate the next available Id value at the same time. Rather than locking a table it uses sp_getapplock (available from SQL Server 2005 onwards), which locks a named application resource instead; since the trigger is the only place the next Id value is calculated, that is sufficient.

A table variable is created to hold all the current max values of Id for each TenantId in the current insert operation. These will act as the base Ids for the subsequent INSERT operation. This ensures that if the INSERT is processing multiple TenantIds in one operation they are all accounted for. In reality, for the tenant based application I’m thinking about, I don’t see this happening all that frequently if at all, but fortune favours the prepared, so they say.

Next is the actual INSERT statement that will put the data into the database. I’m using the ROW_NUMBER() function (available from SQL Server 2005) to work out a sequential value for each row that is being inserted, partitioned by the TenantId. This means that each TenantId will have row numbers starting at 1, which can then be added to the base Id that was worked out in the previous step. The Id column is the only substitution in the INSERT; the remainder of the columns are passed through without alteration.
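As a standalone illustration of that partitioning (this snippet is not part of the trigger, and needs SQL Server 2008 or later for the VALUES constructor):

-- Row numbers restart at 1 for each TenantId, so adding them to each tenant's
-- current maximum Id gives a gap-free, per-tenant sequence.
SELECT i.TenantId,
       ROW_NUMBER() OVER(PARTITION BY i.TenantId ORDER BY i.TenantId) AS RowNumber
FROM (VALUES (1), (1), (2), (2), (2)) AS i(TenantId);
-- TenantId 1 gets 1, 2 and TenantId 2 gets 1, 2, 3.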

Finally, the lock is released using sp_releaseapplock(). This ensures that any other process that is waiting to INSERT data into this table can do so. In the event that the transaction fails the lock will be released automatically as the Owner was set to the transaction.

Issues

While composite keys can mean that you create more natural keys for your data than a surrogate key such as a single identity column or GUID/UUID, because they follow the model of the business more closely, they also mean that if the business changes you may need to find a way to change a primary key, something that should remain immutable. For example, if you have two tenants in your system and they merge into a single tenant, you are likely to end up with duplicate keys if you simply change the TenantId on one of them.

If the shape of the table changes then the INSTEAD OF INSERT trigger will have to be updated as well, since the replacement insert statement has to take account of the new columns or remove references to columns that no longer exist.

There are also issues with Entity Framework not being able to get the newly generated value back. I’ll discuss that in more detail in a future blog post.

IDisposable objects with StructureMap and ASP.NET MVC 4

I’ve recently discovered a bit of an issue with running an IoC container behind ASP.NET MVC’s IDependencyResolver. If you have a controller that has dependencies on things that implement IDisposable, the Dispose method is not being called.

Apparently, if the controller itself is disposable then MVC will clean it up, and it can in turn clean up any disposable dependencies that it created itself. However, if a dependency is injected then the controller should not really be disposing of it, because it did not create it and has no knowledge of its lifecycle; the owner (the object that created the dependency) is responsible for disposing of it.
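As a hypothetical example (the member and action names here are made up for illustration), consider a controller with an injected disposable dependency; the controller did not create the repository, so it is not the right place to dispose of it:

public class ProductsController : Controller
{
  private readonly IRepository _repository;

  // The repository is handed in by the IoC container; the controller has no idea
  // how long it is supposed to live or who else might be using it.
  public ProductsController(IRepository repository)
  {
    _repository = repository;
  }

  public ActionResult Index()
  {
    return View(_repository.GetAll());
  }
}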

So, the party responsible for disposing of an object is whatever created it. However, in MVC 4 the service locator has no way of disposing of the downstream objects that get created when instantiating the controller; it only deals with the controller directly. So if a downstream object that the controller depends on needs to be disposed, the IoC container has to manage that. Mike Hadlow has a much better explanation of what is going on here and of his dealings with, specifically, Castle Windsor and the IDependencyResolver.
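For reference, the usual shape of a StructureMap-backed IDependencyResolver looks something like the sketch below (StructureMap 2.x API); note that the interface offers no way to release or dispose of anything it creates, which is exactly the gap being described:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using StructureMap;

public class StructureMapDependencyResolver : IDependencyResolver
{
  private readonly IContainer _container;

  public StructureMapDependencyResolver(IContainer container)
  {
    _container = container;
  }

  public object GetService(Type serviceType)
  {
    // Abstractions may legitimately be unregistered, so return null for those,
    // as MVC expects; concrete types (such as controllers) are built directly.
    if (serviceType.IsAbstract || serviceType.IsInterface)
      return _container.TryGetInstance(serviceType);

    return _container.GetInstance(serviceType);
  }

  public IEnumerable<object> GetServices(Type serviceType)
  {
    return _container.GetAllInstances(serviceType).Cast<object>();
  }
}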

Since I’m using StructureMap, it does have a way of helping you clean up.

For example, in the initialisation expression that ObjectFactory.Initialize uses, I’ve got a repository set up like this:

x.For<IRepository>().HttpContextScoped().Use<Repository>();

This creates a new Repository for each request that the MVC application receives. However, this on its own is not enough: while each request gets a new repository, none of the resources held by these repository objects are being cleaned up, because the container never releases them. Eventually those resources will run out, be they database connections, file handles, or whatever else the repository needs to use.

You can put a method called Application_EndRequest(), which is called at the end of each request, into your Global.asax.cs file. Or, if you already have one, you can simply add this line of code to it:

protected void Application_EndRequest()
{
  ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
}

SQL Injection Attacks – DunDDD 2012

Examples

The examples were run against a copy of the Adventure Works database.

Required Tables

For the Second Order Demo you need the following table added to the Adventure Works database:

CREATE TABLE [dbo].[FavouriteSearch](
	[id] [int] IDENTITY(1,1) NOT NULL,
	[name] [nvarchar](128) NOT NULL,
	[searchTerm] [nvarchar](1024) NOT NULL
) ON [PRIMARY]

GO

Slide Deck

The slide deck is available for download in PDF format.

Further Reading

During the talk I mentioned a lesson from history on why firewalls are not enough.

I also showed XKCD’s famous “Bobby Tables” cartoon, along with a link to further information on dynamic SQL in stored procedures.

More information about the badly displayed error messages can be found in two blog posts: What not to develop, and a follow-up some months later.

I wrote a fuller article on SQL Injection Attacks that you can read here. Although it is a few years old now, it is still relevant given that SQL injection attacks remain at the top of the OWASP list of vulnerabilities.

Developer Tools in IE Hides Bug

The other day I put some code on our UAT (User Acceptance Testing) server so that some new code could be tested, and I started receiving a very alarming bug report: one of the pages simply didn’t work at all. The page was in an administration section of the website and relied heavily on JavaScript even just to display the initial content (which was retrieved via an AJAX request).

As a quick initial smoke test I loaded the page up on my machine and it was working using my developer build. Then I looked at the page on the UAT server in case it was some quirk of the build, and it was also working. I then noticed that the person doing the tests was running IE, so I thought it might be a browser issue. I loaded the page from the UAT server up again, this time using IE, and it was still working on my machine.

Luckily, the particular section was an admin section (which would be used only in-house), so the people testing it were not actual customers of ours and I could walk over to their desk and ask for a demonstration in case there was some quirky step that had to be undertaken. When the page was loaded up I immediately saw that it didn’t work.

Being a developer, my first instinct was to open up the Developer Tools in IE and have a look at what was happening. So I hit F12 and then reloaded the page… and it started to work! It worked with the Developer Tools turned on!

I couldn’t fathom it out. What was so different about Internet Explorer when the Developer Tools were turned off? I started going through the various things that IE was showing me in the Developer Tools, stepping through code and watching the AJAX request go out, come back with data, and start to process that data… And then I noticed it. There was a line that said:

console.log("blah… blah…. blah…");

On a regular user’s machine the Developer Tools are never running, so there is no console object, and the JavaScript just broke.

Demonstrating the bug

How do you try this out? I’ve written a small demo to show what I mean. Obviously, you need to open it in Internet Explorer. I’ve tried it in IE 8.

If you have already opened the Developer Tools previously then you may find that they open automatically, which makes any testing impossible. Shutting them down and restarting IE doesn’t help. You have to go into the registry and manually disable the Developer Tools.

To disable the Developer Tools in Internet Explorer you need to edit the system registry. Open up regedit, navigate to HKCU\Software\Microsoft\Internet Explorer\IEDevTools, then create a DWORD called Disabled and give it the value 1.

Once you have disabled the Developer Tools you’ll see that the page displays the text:

This paragraph has been update by javascript.

This paragraph is not yet updated by javascript, and if the Developer Tools are not present, it won’t update.

A small warning triangle also appears in the status bar of IE. Double-clicking on the warning triangle brings up a dialog with some error information.

To re-enable the Developer Tools, simply delete that registry setting and restart Internet Explorer, press F12 to bring up the Developer Tools, then open the demonstration page again.

Now the text displayed on the page reads:

This paragraph has been update by javascript.

This paragraph has also been updated by javascript, indicating the developer tools are present

Preventing this bug

Obviously, calling functions on the console object is not all that desirable in a production system, so ideally all those calls should be removed so that IE won’t break. If you find that impractical, you could put in some JavaScript that runs before any other JavaScript, such as the following:

if (window.console === undefined) {
  console = {};
  console.log = function(){};
}

This will ensure that if a console object does not exist then one is created and a dummy function is attached to it. If you use other functions on the console object then you should add them also in a similar way.
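For example, a slightly more general version of the same guard might look like this (the list of methods is only illustrative; add whichever ones your code actually calls):

(function () {
  // Create a console object if the browser doesn't supply one (e.g. IE without
  // the Developer Tools open), then stub out any missing methods with no-ops.
  if (window.console === undefined) {
    window.console = {};
  }
  var noop = function () {};
  var methods = ["log", "info", "warn", "error"];
  for (var i = 0; i < methods.length; i++) {
    if (!window.console[methods[i]]) {
      window.console[methods[i]] = noop;
    }
  }
}());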

The above demonstration has been updated to show this console “protection” in action.

 

RavenDB on IIS: Cannot access file, the file is locked or in use

I came across another issue with trying to get RavenDB working through IIS. When the process started up I got the error message “Cannot access file, the file is locked or in use”.

The stack trace looked like this:

[EsentFileAccessDeniedException: Cannot access file, the file is locked or in use]
   Microsoft.Isam.Esent.Interop.Api.Check(Int32 err) in C:\Work\ravendb\SharedLibs\Sources\managedesent-61618\EsentInterop\Api.cs:2736
   Raven.Storage.Esent.TransactionalStorage.Initialize(IUuidGenerator uuidGenerator) in c:\Builds\RavenDB-Stable\Raven.Storage.Esent\TransactionalStorage.cs:205

[InvalidOperationException: Could not open transactional storage: C:\inetpub\ravendb\Data\Data]
   Raven.Storage.Esent.TransactionalStorage.Initialize(IUuidGenerator uuidGenerator) in c:\Builds\RavenDB-Stable\Raven.Storage.Esent\TransactionalStorage.cs:220
   Raven.Database.DocumentDatabase..ctor(InMemoryRavenConfiguration configuration) in c:\Builds\RavenDB-Stable\Raven.Database\DocumentDatabase.cs:185
   Raven.Web.ForwardToRavenRespondersFactory.Init() in c:\Builds\RavenDB-Stable\Raven.Web\ForwardToRavenRespondersFactory.cs:84
   Raven.Web.RavenDbStartupAndShutdownModule.b__0(Object sender, EventArgs args) in c:\Builds\RavenDB-Stable\Raven.Web\BootStrapper.cs:13
   System.Web.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +80
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +270

It turns out that I hadn’t given the account IIS was using access to the directory… Or rather, I had given access; I just hadn’t told IIS about the account, so it was using the default account.

To fix the issue:

  • Go into the Application Pools section of IIS.
  • Click the application pool used by the RavenDB site.
  • Press “Advanced Settings…” on the right side of the dialog.
  • Ensure the correct account is set for the “Identity” in the “Process Model” section.
  • Once you’ve set up the correct user, press “OK” for each of the dialogs and everything should be ready.

Getting RavenDB working on IIS – 500.19

While trying to get RavenDB working in IIS I ran into a problem. I got the following error from IIS:

HTTP Error 500.19 – Internal Server Error

The requested page cannot be accessed because the related configuration data for the page is invalid.

Module IIS Web Core
Notification BeginRequest
Handler Not yet determined
Error Code 0x80070021
Config Error This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false".
Config File \\?\C:\inetpub\ravendb\web.config

Config Source

    6: 	<system.webServer>
    7: 		<handlers>
    8: 			<add name="All" path="*" verb="*" type="Raven.Web.ForwardToRavenRespondersFactory, Raven.Web"/>

I’ve ignored some of the less interesting parts of the error message.

This can be fixed in IIS itself.

  • Open up IIS and click on the top node of the tree on the left side (the one labelled with the machine name).
  • Double-click on the item in the centre pane marked “Feature Delegation”.
  • Find the entry marked “Handler Mappings” and set the delegation to “Read/Write” using the action links on the right side of the dialog.

When this is done, the error moves on to the next part of the web.config:

Config Source

    9: 		</handlers>
   10: 		<modules runAllManagedModulesForAllRequests="true">
   11: 			<remove name="WebDAVModule" />

At this point you have to set “Modules” to “Read/Write” as well.
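If you would rather script this than click through IIS Manager, the same sections can, as far as I know, be unlocked from an elevated command prompt with appcmd (the path assumes a default IIS installation):

%windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/handlers
%windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/modules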

Once that was set, I was good to go.

Git cheat sheet

Here is my little cheat sheet for working with git at the command line.

Check the status of the current repository:

git status

This will return details of which files are modified and any files that are untracked, etc.

Add a file to the repository:

git add <filename>

You can also replace the filename with a dot (.) to include all untracked files. Use git status beforehand to find out which files are currently untracked.

Commit the changes to the local repository:

git commit -m "<commit message>"

This will only commit the changes as far as the local repository.

Send the changes back to the server:

git push origin master

origin is the name of the remote location of the repository. By convention, it is where you cloned your local repository from.

master is the name of the branch. By convention this is your default branch.

Revert a file to the version at the previous commit:

git checkout -f <filename>

Delete files from the repository:

git rm <filename>

If you want to delete an entire directory you need to add the -r (recursive) flag.

git rm -r <folder-path>

You may also need to choose between keeping the files on disk and removing them altogether, in which case you want either --cached, to remove them from the repository but keep them on disk, or -f to force removal of the files from disk as well.
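For example (the file name is just a placeholder):

git rm --cached <filename>

This stops git tracking the file but leaves it on disk.

git rm -f <filename>

This removes the file from the repository and deletes it from disk, even if it has uncommitted changes.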

Umbraco installation woes

Recently, I created an Umbraco site on one machine and wanted to move it to another. I say site; there was nothing in it yet. It was really just a basic database, which I’d configured with an empty install before I zipped up the solution and all the files in it.

When I unzipped the files onto a new machine, every time I tried to run the Umbraco install routine it would redirect me to a log-on page, even though the site had only a completely empty database to connect to (not a single table, because the installation hadn’t run yet).

The reason is that, on installation, part of the web.config is updated:

<add key="umbracoConfigurationStatus" value="4.8.0" />

That indicates to Umbraco that the installation is complete. So, to allow the installation to proceed normally, remove the value so that it looks like this:

<add key="umbracoConfigurationStatus" value="" />