Getting just the columns you want from Entity Framework

I’ve been looking at trying to optimise the data access in the project I’m working on, and the major stuff (like getting a piece of code that generated 6000 queries down to just 7) is now done. The next step is to look at smaller things that can still make savings.

At the moment, the queries return complete entities. However, that may not always be desirable. For example, I have an auto-complete feature for which I need just an Id and some text data. The data comes from more than one table, so at the moment I’m getting a rather large object graph back and throwing most of it away. It would be great to retrieve only the data we’re interested in and not waste time collating, transmitting, and mapping data we are then going to ignore.

So, using AdventureWorks as an example, here is some EF code to get the list of products we can use in an auto-complete feature.

private IEnumerable<AutoCompleteData> GetAutoCompleteData(string searchTerm)
{
    using (var context = new AdventureWorksEntities())
    {
        var results = context.Products
            .Include("ProductSubcategory")
            .Where(p => p.Name.Contains(searchTerm)
                        && p.DiscontinuedDate == null)
            .AsEnumerable()
            .Select(p => new AutoCompleteData
                                {
                                    Id = p.ProductID,
                                    Text = BuildAutoCompleteText(p)
                                })
            .ToArray();
        return results;
    }
}

private static string BuildAutoCompleteText(Product p)
{
    string subcategoryName = string.Empty;
    if (p.ProductSubcategory != null)
        subcategoryName = p.ProductSubcategory.Name;

    return string.Format("{0} ({1}) @ ${2:0.00}", p.Name,
        subcategoryName, p.StandardCost);
}

This produces a call to SQL Server that looks like this:

exec sp_executesql N'SELECT
[Extent1].[ProductID] AS [ProductID],
[Extent1].[Name] AS [Name],
[Extent1].[ProductNumber] AS [ProductNumber],
[Extent1].[MakeFlag] AS [MakeFlag],
[Extent1].[FinishedGoodsFlag] AS [FinishedGoodsFlag],
[Extent1].[Color] AS [Color],
[Extent1].[SafetyStockLevel] AS [SafetyStockLevel],
[Extent1].[ReorderPoint] AS [ReorderPoint],
[Extent1].[StandardCost] AS [StandardCost],
[Extent1].[ListPrice] AS [ListPrice],
[Extent1].[Size] AS [Size],
[Extent1].[SizeUnitMeasureCode] AS [SizeUnitMeasureCode],
[Extent1].[WeightUnitMeasureCode] AS [WeightUnitMeasureCode],
[Extent1].[Weight] AS [Weight],
[Extent1].[DaysToManufacture] AS [DaysToManufacture],
[Extent1].[ProductLine] AS [ProductLine],
[Extent1].[Class] AS [Class],
[Extent1].[Style] AS [Style],
[Extent1].[ProductSubcategoryID] AS [ProductSubcategoryID],
[Extent1].[ProductModelID] AS [ProductModelID],
[Extent1].[SellStartDate] AS [SellStartDate],
[Extent1].[SellEndDate] AS [SellEndDate],
[Extent1].[DiscontinuedDate] AS [DiscontinuedDate],
[Extent1].[rowguid] AS [rowguid],
[Extent1].[ModifiedDate] AS [ModifiedDate],
[Extent2].[ProductSubcategoryID] AS [ProductSubcategoryID1],
[Extent2].[ProductCategoryID] AS [ProductCategoryID],
[Extent2].[Name] AS [Name1],
[Extent2].[rowguid] AS [rowguid1],
[Extent2].[ModifiedDate] AS [ModifiedDate1]
FROM  [Production].[Product] AS [Extent1]
LEFT OUTER JOIN [Production].[ProductSubcategory] AS [Extent2] ON [Extent1].[ProductSubcategoryID] = [Extent2].[ProductSubcategoryID]
WHERE ([Extent1].[Name] LIKE @p__linq__0 ESCAPE N''~'') AND ([Extent1].[DiscontinuedDate] IS NULL)',N'@p__linq__0 nvarchar(4000)',@p__linq__0=N'%silver%'

But as you can see from the C# code above, most of this is not needed. We are pulling back much more data than we need. It is even pulling back DiscontinuedDate, which we already know must always be null.

What we can do is chain in a Select call that Entity Framework can translate, which gives it the information it needs about which database columns are actually being used.

So, why can’t it get the information it needs from the existing Select method?

Well, if I take away the AsEnumerable() call I’ll get an exception with a message that says “LINQ to Entities does not recognize the method ‘System.String BuildAutoCompleteText(DataAccess.EntityModel.Product)’ method, and this method cannot be translated into a store expression.”

LINQ to Entities cannot understand this, so it has no idea that we are only using a fraction of the information it is bringing back. This brings us back to using a Select statement that LINQ to Entities is happy with. I’m going to use an anonymous type for that. The code then changes to this:

private IEnumerable<AutoCompleteData> GetAutoCompleteData(string searchTerm)
{
    using (var context = new AdventureWorksEntities())
    {
        var results = context.Products
            .Include("ProductSubcategory")
            .Where(p => p.Name.Contains(searchTerm)
                        && p.DiscontinuedDate == null)
            .Select(p => new
                            {
                                p.ProductID,
                                ProductSubcategoryName = p.ProductSubcategory.Name,
                                p.Name,
                                p.StandardCost
                            })
            .AsEnumerable()
            .Select(p => new AutoCompleteData
                                {
                                    Id = p.ProductID,
                                    Text = BuildAutoCompleteText(p.Name,
                                        p.ProductSubcategoryName, p.StandardCost)
                                })
            .ToArray();
        return results;
    }
}

private static string BuildAutoCompleteText(string name, string subcategoryName, decimal standardCost)
{
    return string.Format("{0} ({1}) @ ${2:0.00}", name, subcategoryName, standardCost);
}

The changes are the new Select projection into an anonymous type and the revised BuildAutoCompleteText signature.

This tells Entity Framework that we are interested in only four columns, so when it generates the SQL, that’s all it brings back:

exec sp_executesql N'SELECT
[Extent1].[ProductID] AS [ProductID],
[Extent2].[Name] AS [Name],
[Extent1].[Name] AS [Name1],
[Extent1].[StandardCost] AS [StandardCost]
FROM  [Production].[Product] AS [Extent1]
LEFT OUTER JOIN [Production].[ProductSubcategory] AS [Extent2] ON [Extent1].[ProductSubcategoryID] = [Extent2].[ProductSubcategoryID]
WHERE ([Extent1].[Name] LIKE @p__linq__0 ESCAPE N''~'') AND ([Extent1].[DiscontinuedDate] IS NULL)',N'@p__linq__0 nvarchar(4000)',@p__linq__0=N'%silver%'

Entity Framework query that never brings back data

I was recently optimising some data access code using the Entity Framework (EF) and I saw in the SQL Server Profiler this following emanating from the application:

SELECT
CAST(NULL AS varchar(1)) AS [C1],
CAST(NULL AS varchar(1)) AS [C2],
CAST(NULL AS varchar(1)) AS [C3],
CAST(NULL AS bit) AS [C4],
CAST(NULL AS varchar(1)) AS [C5],
CAST(NULL AS bit) AS [C6],
CAST(NULL AS varchar(1)) AS [C7],
CAST(NULL AS bit) AS [C8],
CAST(NULL AS bit) AS [C9]
FROM  ( SELECT 1 AS X ) AS [SingleRowTable1]
WHERE 1 = 0

At a glance it looks a little odd, but then the final line sealed it… WHERE 1 = 0!!! That will never return any rows whatsoever!

So what code caused this?

Here is an example using the AdventureWorks database:

private static IEnumerable<ProductCostHistory> GetPriceHistory(IEnumerable<int> productIDs)
{
    using (var products = new AdventureWorksEntities())
    {
        var result = products.ProductCostHistories
            .Where(pch => productIDs.Contains(pch.ProductID))
            .ToArray();
        return result;
    }
}

Called with code something like this:

int[] productIDs = new []{707, 708, 710, 711};
var history = GetPriceHistory(productIDs);

This will produce some SQL that looks like this:

SELECT
[Extent1].[ProductID] AS [ProductID],
[Extent1].[StartDate] AS [StartDate],
[Extent1].[EndDate] AS [EndDate],
[Extent1].[StandardCost] AS [StandardCost],
[Extent1].[ModifiedDate] AS [ModifiedDate]
FROM [Production].[ProductCostHistory] AS [Extent1]
WHERE [Extent1].[ProductID] IN (707,708,710,711)

So far, so good. The WHERE clause contains a reasonable filter. But look what happens if the productIDs array is empty.

SELECT
CAST(NULL AS int) AS [C1],
CAST(NULL AS datetime2) AS [C2],
CAST(NULL AS datetime2) AS [C3],
CAST(NULL AS decimal(19,4)) AS [C4],
CAST(NULL AS datetime2) AS [C5]
FROM  ( SELECT 1 AS X ) AS [SingleRowTable1]
WHERE 1 = 0

What a completely wasted roundtrip to the database.

Since we know that an empty productIDs array can never produce any results, we can short-circuit and avoid calling the database at all when the input array is empty. For example:

private static IEnumerable<ProductCostHistory> GetPriceHistory(IEnumerable<int> productIDs)
{
    // Check to see if anything will be returned
    if (!productIDs.Any())
        return new ProductCostHistory[0];

    using (var products = new AdventureWorksEntities())
    {
        var result = products.ProductCostHistories
            .Where(pch => productIDs.Contains(pch.ProductID))
            .ToArray();
        return result;
    }
}

Tip of the day: Using the null-coalescing operator over the conditional operator

I’ve recently been refactoring a lot of code that used the conditional operator and looked something like this:

int someValue = myEntity.SomeNullableValue.HasValue
                    ? myEntity.SomeNullableValue.Value
                    : 0;

That might seem less verbose than the traditional alternative, which looks like this:

int someValue = 0;
if (myEntity.SomeNullableValue.HasValue)
    someValue = myEntity.SomeNullableValue.Value;

…or other variations on that theme.

However, there is a better way of expressing this. That is to use the null-coalescing operator.

Essentially, it says that the value on the left of the operator will be used unless it is null, in which case the value on the right is used. You can also chain them together, which effectively returns the first non-null value.

So now the code above looks a lot more manageable and understandable:

int someValue = myEntity.SomeNullableValue ?? 0;
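The chaining mentioned above can be sketched like this (the nullable variables are invented for illustration):

```csharp
using System;

class Program
{
    static void Main()
    {
        int? fromConfig = null;    // hypothetical values for illustration
        int? fromDefaults = null;
        int? fromFallback = 42;

        // A chained ?? returns the first non-null value, left to right.
        int someValue = fromConfig ?? fromDefaults ?? fromFallback ?? 0;

        Console.WriteLine(someValue); // 42
    }
}
```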

Entity Framework: Unable to load the specified metadata resource.

Recently, I was refactoring some code and I moved the location of the .edmx file into a different folder (and new namespace). I updated the code to use the new namespace but when I ran the application a MetadataException was thrown with the message “Unable to load the specified metadata resource.”

When the location of the .edmx file changes, so does the connection string. An Entity Framework connection string does more than a normal database connection string, so if you move the .edmx file you also have to update the connection string so that Entity Framework can continue to find the resources the .edmx file defines.

The connection string describes how the database is mapped to the entities by referencing the CSDL (Conceptual Schema Definition Language), SSDL (Store Schema Definition Language) and MSL (Mapping Specification Language) resources defined in the .edmx file. If the location of the mapping changes, the connection string needs to be updated to match so that Entity Framework can continue to map the database to the entities.

For example, imagine a little application with two projects: an application project (in this case, the imaginatively named ConsoleApplication2) and a class library (named DataAccess). An app.config file will be created for the data access project by the Entity Framework tools in Visual Studio. Normally that file (or at least its connection string entries) can be copied to the app.config (or web.config) of the main application.

At this point the connection string looks like this:

metadata=res://*/Products.csdl|res://*/Products.ssdl|
res://*/Products.msl;provider=System.Data.SqlClient;
provider connection string=&quot;data source=(local);
initial catalog=AdventureWorks;integrated security=True;multipleactiveresultsets=True;
App=EntityFramework&quot;

As you can see, it refers to the Products metadata in the calling assembly (that’s what the * means), which is split into the three resources (CSDL, SSDL & MSL).

If an EntityModel folder is created in the DataAccess project and Products.edmx is moved into it, then the location of the resources changes and the connection string is no longer valid. For the change just made, the connection string needs to be updated to look like this:

metadata=res://*/EntityModel.Products.csdl|
res://*/EntityModel.Products.ssdl|
res://*/EntityModel.Products.msl;
provider=System.Data.SqlClient;provider connection string=&quot;data source=(local);initial catalog=
AdventureWorks;integrated security=True;
multipleactiveresultsets=True;App=EntityFramework&quot;

The changed parts are the metadata resource paths, which now include the folder name.

If you want to quickly get an updated connection string, you can open the .edmx file, click in the design area, and press F4 (or use the menu View→Properties Window). The Properties window will show the Connection String property, which can be copied and pasted into the config file.

Why should you be returning an IEnumerable<T>?

I’ve seen many places where a method returns a List<T> (or IList<T>) when, all things considered, that may not actually be required or even desirable.

A List is mutable, you can change the state of the List. You can add things to the List, you can remove things from the List, you can change the items the List contains. This means that everything that has a reference to the List instantly sees the changes, either because an element has changed or elements have been added or removed. If you are working in a multi-threaded environment, which will be increasingly common as time goes on, you will get issues with thread safety if the List is used inside other threads and one or more threads starts changing the List.

Unless you have a specific use case in mind, return values should be IEnumerable<T>, which is not mutable. If the underlying type is still a List (or an Array, or any of a myriad of other types that implement IEnumerable<T>) you can still cast it. Also, some LINQ methods will optimise themselves if the underlying type better supports what LINQ is doing. (Remember that LINQ extension methods take an IEnumerable<T> or IQueryable<T> anyway, so you can do what you like regardless of the underlying type.)

If you ensure that your return values are IEnumerable<T> to begin with, and further down the line you realise you need to return an Array or List<T> from the method, it is easy to make that change. Everything accepting the return value will still expect an IEnumerable<T>, which both List<T> and Array implement. If, however, you started with List<T> and later moved to returning an IEnumerable<T>, so much code will have built up the expectation of a List<T> (without actually needing it) that you will have a lot of refactoring to do just to update the reference types.

Have I convinced you yet? If not, think about this. How often are you inserting items into a collection of objects after the initial creation routine? How often do you remove items from a collection after the initial creation routine? How often do you need to access a specific item by index within a collection after the initial creation routine? My guess is almost never. There are some occasions, but not actually that many.

It took me a while to get my head around always using an IEnumerable<T>, until I realised that I almost never need to do the things in the above paragraph. I almost always just need to loop over a collection of objects, or filter a collection to produce a smaller set. Both can be done with just an IEnumerable<T> and LINQ.

But what if I need a count of the objects, which would be inefficient with an IEnumerable<T> and LINQ? Well, do you really need a count? Often I just need to know whether there are any objects at all in the collection, not how many, in which case the LINQ extension method Any() can be used. If you do need a count, LINQ is clever enough to notice when the underlying type exposes a Count property (anything that implements ICollection<T>, such as arrays, lists, dictionaries, and sets) and call that, so it is not iterating over all the objects counting them up each time.
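The Any() versus Count() point can be sketched with a plain in-memory list (no EF involved):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        IEnumerable<int> items = new List<int> { 1, 2, 3 };

        // Any() stops at the first element rather than counting them all.
        bool hasItems = items.Any();

        // Count() is still cheap here because List<int> implements
        // ICollection<int>, so LINQ reads its Count property directly.
        int total = items.Count();

        Console.WriteLine(hasItems); // True
        Console.WriteLine(total);    // 3
    }
}
```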

Remember, there is nothing necessarily wrong with calling ToArray() or ToList() on the result of a LINQ expression before returning it as an IEnumerable<T>. That removes the issues deferred execution can bring (e.g. an unexpected surprise when the query evaluates during the first iteration and breaks in the process, or the filter in Where() or the transformation in Select() being applied repeatedly).
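Here is a small sketch of that deferred-execution surprise, and how ToArray() snapshots the result (the list and filter are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var source = new List<int> { 1, 2, 3, 4 };

        // Deferred: the filter runs every time the query is iterated.
        IEnumerable<int> deferred = source.Where(n => n % 2 == 0);

        // Materialised: the filter runs once, here, and the result is fixed.
        IEnumerable<int> snapshot = source.Where(n => n % 2 == 0).ToArray();

        source.Add(6); // a later change to the source collection...

        Console.WriteLine(deferred.Count()); // 3 - sees the new element
        Console.WriteLine(snapshot.Count()); // 2 - does not
    }
}
```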

Just because an object is of a specific type, doesn’t mean you have to return that specific type.

For example, consider the services you actually need on the collection you are returning, remembering how much LINQ gives you. The following diagram shows what each interface expects to be implemented, and which interfaces a number of the common collection types implement themselves.

Incidentally, the reason some of the interfaces on the Array class are in a different colour is that these interfaces are added by the runtime. So if you have a string[] it will expose IEnumerable<string>.

I’d suggest that, as a general rule, IEnumerable<T> should be the return type whenever the thing you are returning implements it, unless something from ICollection<T> or IList<T> (or another collection type) is absolutely needed, and not just because some existing code expects, say, an IList<T> even though it uses no more services from it than it would from an IEnumerable<T>.

The mutability that implementations of ICollection<T> and IList<T> give will prove problematic in the long term. If you have a large team with members who don’t fully understand what is going on (and this is quite common given the general level of developer documentation), they are likely to change the contents of the collection without understanding the implications. In some situations this may be fine; in others it may be disastrous.

Finally, if you absolutely do need to return a more specific collection type, then instead of returning a reference to the concrete class, return a reference to the lowest interface you need. For example, if you have a List<T> and you need to add further items to it, but not at specific positions in the list, then ICollection<T> will be the most suitable return type.
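A minimal sketch of returning the lowest interface you need (Basket and the item values are made up for illustration):

```csharp
using System;
using System.Collections.Generic;

public class Basket
{
    // ICollection<T> exposes Add/Remove but not positional access,
    // so it is the lowest interface that supports appending items.
    public ICollection<string> GetInitialItems()
    {
        return new List<string> { "apple", "banana" };
    }
}

class Program
{
    static void Main()
    {
        ICollection<string> items = new Basket().GetInitialItems();
        items.Add("cherry"); // allowed by ICollection<string>
        Console.WriteLine(items.Count); // 3
    }
}
```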

Opting out of Google Instant Browsing

I recently wrote about a new feature of Google Chrome called Instant Browsing. You can turn it on or off on the Basic tab of the Options page in Chrome.

If you are a web site owner or administrator, you may be concerned about the impact of a deluge of requests hitting your server for pages the end user is probably not really interested in, or of requests that result in a 404 because the half-formed URL in the “omnibox” does not resolve to a real page. If so, you can opt out.

For folks running ASP.NET (both WebForms and MVC) I’ve created a simple HTTP Module that will opt your site out when it encounters requests from Chrome’s Instant Browsing.

You can download the Module here: Instant Browsing HTTP Module V1. To activate it in your application, add the DLL as a reference and then add the httpModules entry shown below to your web.config.

<configuration>
 <system.web>
  <httpModules>
   <add name="InstantBrowsingOptOut" type="InstantBrowsing.InstantBrowsingOptOut, InstantBrowsing"/>
  </httpModules>
 </system.web>
</configuration>


The Code

If you prefer, you can add the following source to your application and compile it yourself.

using System;
using System.Collections.Specialized;
using System.Web;

namespace InstantBrowsing
{
    public class InstantBrowsingOptOut : IHttpModule
    {
        public void Dispose()
        {
            // Nothing to dispose. Required by IHttpModule
        }

        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(BeginRequest);
        }

        void BeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;

            string headerValue = GetPurposeHeaderValue(application.Request);
            if (HeaderValueDoesntExist(headerValue))
                return;

            if (PreviewMode(headerValue))
                Issue403Forbidden(application.Response);
        }

        private bool PreviewMode(string value)
        {
            return value.ToLowerInvariant().Contains("preview");
        }

        private bool HeaderValueDoesntExist(string value)
        {
            return string.IsNullOrEmpty(value);
        }

        private string GetPurposeHeaderValue(HttpRequest request)
        {
            NameValueCollection headers = request.Headers;
            return headers["X-Purpose"];
        }

        private void Issue403Forbidden(HttpResponse response)
        {
            response.Clear();
            response.StatusCode = 403;
            response.End();
        }
    }
}


Note that the second “InstantBrowsing” in the type attribute is the assembly, so if you’ve put it in an assembly with a different name you’ll need to change the type attribute to reflect that.

Google Instant Browsing

What is Instant Browsing?

Google Chrome 12 comes with some new features, including support for Google’s Instant Searching and Instant Browsing via the address bar (or “omnibox” as they call it). This means that as you type your query or URL, it will be sent off with almost every character press, constantly updating the page underneath.

If you have this version of Chrome and don’t currently see what I’m talking about you can go into the Options and in the Basic tab look for the Search options. Make sure that “Enable Instant for faster searching and browsing” is turned on.

Now you will see what happens as you type URLs into the “omnibox” (the address bar). If you additionally run Fiddler you’ll see how many requests are being made in the background.

For example, if I start typing my blog URL, by the time I’ve finished typing my forename it has already concluded that I want to see my blog. I can see in Fiddler that it has already made the request to http://colinmackay.co.uk/, and my blog appears while I’m still typing.

What’s going on?

If I continue on, say looking for something on SQL, I can see in Fiddler this progression of requests sent to my blog. (I’ve removed some of the other requests that are unimportant for this example.)

As you can see, sometimes I type quickly enough that it has to play catch-up. Sometimes I’m slow enough that the blog responds with a 301 (the server does its best to guess what I want, treating an invalid URL as a sort of search term and redirecting to its best guess) or a 404 if it can’t resolve the URL.

Try this on the BBC News website: as you type URLs you get tons of 404 pages back as the intermediate (non-functioning) URLs get responded to!

As you can see from the image of Fiddler above, some requests are missing: some were the browser pulling down CSS and images from my blog; others were requests off to Google to augment the Instant Browsing feature.

These calls to augment the Instant Browsing feature are all sent to clients1.google.co.uk (I suspect each locale has a different set of URLs best able to match queries in that area). Each is a GET request containing the query, i.e. what you have typed in the “omnibox”. For example: http://clients1.google.co.uk/complete/search?client=chrome&hl=en-GB&q=colinmackay.co.uk%2Fblog%2Ftasks

This results in some JSON being returned. If you are typing URLs it doesn’t appear to be that useful. The above returned the following to me:

["colinmackay.co.uk/blog/tasks",[],[],[],{"google:suggesttype":[]}]

It becomes much more interesting when a search term is used rather than a URL.

By simply typing “rupert” in the omnibox, the result from the request is:

["rupert",["rupert murdoch","rupert grint","rupert everett"],["","",""],[],
{"google:suggesttype":["QUERY","QUERY","QUERY"]}]

Chrome then auto-suggests “rupert murdoch” as the primary completion, with the drop-down also suggesting “rupert grint” and “rupert everett”.

Incidentally, you don’t get Instant Search or Instant Browsing in Chrome’s Incognito windows, even if it is turned on.

Can anybody say DDoS?

While the expansion of Google’s Instant Search feature into Chrome is fantastic, my first thought when I saw Instant Browsing was that it could be used to mount a DDoS (Distributed Denial of Service) attack on a website, especially one running on a less robust hosting plan. It is okay for Google to inundate their own web properties from their browser, but what about other site owners?

If a web site is not expecting the deluge of requests coming from Chrome browsers, it may be saturated dealing with requests the user isn’t likely to be all that interested in anyway, especially if intermediate results bring back 404 responses (or worse, 500 responses if the server breaks badly on bad URLs).

Google have thought of this, and there is a way to tell Chrome to stop sending requests you don’t want. If you read the Chrome FAQ for web developers, you’ll see there is a section on opting out of Instant URL loading. In short, you detect a request header that Chrome inserts into the request and, if you want to opt out, return an HTTP 403 status code. Chrome will then blacklist that website for the remainder of the user’s session. If the user comes back another day there will still be that initial hit, giving the web site administrators a chance to opt back in.

An instant browsing request looks like this:

GET http://colinmackay.co.uk/blog/task HTTP/1.1
Host: colinmackay.co.uk
Connection: keep-alive
X-Purpose: : preview
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.112 Safari/534.30
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

The important part is the X-Purpose header. This is what tells the server that the browser is rendering the page as part of the Instant Browsing feature.

Note: The FAQ states that the header is “X-Purpose: preview”, but Fiddler shows an extra colon in there (see above). If you are attempting to detect the mode of the browser, this may be important to how you detect it.
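Because of that stray colon, an exact equality test on the header value could miss Chrome’s actual request; a containment check, like the one the module above uses, copes with either form. A quick sketch:

```csharp
using System;

public class Program
{
    // Handles both "preview" and ": preview" (the form Fiddler shows).
    public static bool IsPreview(string headerValue)
    {
        return headerValue != null
            && headerValue.ToLowerInvariant().Contains("preview");
    }

    static void Main()
    {
        Console.WriteLine(IsPreview(": preview")); // True
        Console.WriteLine(IsPreview("preview"));   // True
        Console.WriteLine(IsPreview("prefetch"));  // False
    }
}
```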

SQL Server User Group: SQL Injection Attacks

Examples

The examples were run against a copy of the Adventure Works database.

Required Tables

For the Second Order Demo you need the following table added to the Adventure Works database:

CREATE TABLE [dbo].[FavouriteSearch](
	[id] [int] IDENTITY(1,1) NOT NULL,
	[name] [nvarchar](128) NOT NULL,
	[searchTerm] [nvarchar](1024) NOT NULL
) ON [PRIMARY]

GO

Stored Procedure with dynamic SQL

This is the stored procedure from the last demo which shows the Stored Procedure dynamically building a SQL statement that is susceptible to a SQL Injection Attack.

CREATE procedure [dbo].[SearchProducts]
(
  @searchId int
)
AS
BEGIN

  DECLARE @searchTerm NVARCHAR(1024)
  SELECT @searchTerm = searchTerm FROM FavouriteSearch WHERE id = @searchId

  DECLARE @sql NVARCHAR(2000) =
  'SELECT ProductID, Name, ProductNumber, ListPrice
  FROM Production.Product
  WHERE DiscontinuedDate IS NULL
  AND ListPrice > 0.0
  AND Name LIKE ''%'+@searchTerm+'%''';

  EXEC (@sql);

END
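For contrast, the same procedure can be written without string concatenation by passing the search term as a parameter to sp_executesql; the user’s input then can never change the shape of the statement. (SearchProductsSafe is a name I’ve invented for this sketch; it wasn’t part of the talk’s demos.)

```sql
CREATE PROCEDURE [dbo].[SearchProductsSafe]
(
  @searchId int
)
AS
BEGIN

  DECLARE @searchTerm NVARCHAR(1024)
  SELECT @searchTerm = searchTerm FROM FavouriteSearch WHERE id = @searchId

  -- The search term is passed as a parameter, never concatenated into
  -- the SQL text, so it cannot alter the structure of the statement.
  EXEC sp_executesql
    N'SELECT ProductID, Name, ProductNumber, ListPrice
      FROM Production.Product
      WHERE DiscontinuedDate IS NULL
        AND ListPrice > 0.0
        AND Name LIKE ''%'' + @term + ''%''',
    N'@term NVARCHAR(1024)',
    @term = @searchTerm;

END
```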


Slide Deck

The slide deck is available for download.

Further Reading

During the talk I mentioned this lesson from history (why firewalls are not enough), showed XKCD’s famous “Bobby Tables” cartoon, and gave a link to further information on dynamic SQL in stored procedures. More information about the badly displayed error messages can be found in two blog posts: What not to develop, and a follow-up some months later.

I wrote an article on SQL Injection Attacks that you can read here.

Tip of the day: IE Quirks Mode Vs. Standards Mode

If you are setting the DOCTYPE declaration in an HTML page to define the standard your page complies with, ensure that you don’t put anything before the DOCTYPE declaration.

Some browsers will ignore comments and the like before the DOCTYPE declaration, but IE doesn’t: if there is a comment, IE ignores the DOCTYPE, which puts the browser into Quirks Mode (and that can screw up the rendering of your page).

So, if I put this at the top of the document:

<!-- This will trigger IE into quirks mode -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

Then the page will be rendered in Quirks Mode.

However, if I put the comment after the DOCTYPE, everything is fine, like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<!-- This won't trigger IE into quirks mode as the DOCTYPE is first -->

DDD South West Parallelisation Talk Overview

Examples

Here are all the examples from Saturday’s introductory talk on Parallelisation at DDD South West 2011.

Slide Deck

The slide deck is also available as a PDF file (15.9 MB)