Aggregate of columns, not rows

In SQL Server it is very easy to aggregate a value across many rows. You simply write something like this:

SELECT MAX(SomeColumn)
FROM SomeTable

This will return one row with the aggregate value (in this case MAXimum) of the column.

However, what if you want to do this across columns? What if you have a table with two or more columns you want to aggregate for each row?

You can do this with a tedious CASE expression that just gets more and more cumbersome with each additional column, especially if any of the columns are nullable. Or you can write a subquery that builds a table out of the columns and aggregates it there, exposing the final value to your main query already aggregated.

Here’s an example:

SELECT Id, 
       (SELECT MAX(num) FROM (VALUES(st.ANumber), (st.AnotherNumber)) AS AllNumbers(num)) AS MaxNumber
FROM SomeTable st

The second line contains the subquery which is then exposed as a column in the final result set.

The subquery effectively pivots the columns into rows, then aggregates the rows. Just be careful with where you put the brackets so that it interprets them as separate rows rather than columns.

This deals with NULL values quite effectively too, since the aggregate function will ignore any NULL it finds.
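For example, this standalone query returns 10 rather than NULL; the NULL row is simply skipped (SQL Server will just warn that a null value was eliminated by the aggregate):

SELECT MAX(num) AS MaxNumber
FROM (VALUES (10), (NULL), (7)) AS AllNumbers(num);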

Getting Entity Framework to Pick up on the Auto-generated Part of the Composite Primary Key

In my previous post, I wrote about how to get SQL Server to automatically generate a composite primary key for a table when part of that key is also a foreign key to a parent table. That all works nicely in SQL Server and using regular ADO.NET commands from a .NET application. However, as the title of this post suggests, it is a little more of an issue when it comes to Entity Framework.

When you get Entity Framework to generate its model from the database then it picks up on identity columns automatically. In the example in the previous post there were no identity columns in the child table because the value was being generated from within the INSTEAD OF INSERT trigger and the column was not marked as an identity column.

It is easy enough to go into the entity model and manually set the properties of the Id column to have the StoreGeneratedPattern set to Identity which indicates to Entity Framework that it has to find out what the value is once the entity has been inserted into the database.

For an integer column this means that the Entity Framework will issue an INSERT statement like this:

exec sp_executesql N'insert [dbo].[Products]([TenantId], [Name])
values (@0, @1)
select [Id]
from [dbo].[Products]
where @@ROWCOUNT > 0 and [TenantId] = @0 and [Id] = scope_identity()',N'@0 int,@1 nvarchar(50)',@0=1,@1=N'Test Product A'

You’ll notice that this isn’t just a simple insert, it also performs a SELECT immediately afterwards in order to get the value of the newly inserted key back so that it can update the entity.

However, the way it does it will not produce a value. SCOPE_IDENTITY() will always be null because there was no identity column. In fact, even if there was, the fact that we used a trigger would have destroyed the value it contained.
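You can demonstrate this in SSMS with a minimal sketch (assuming the Products table and trigger from the previous post; the Id value supplied here is just a placeholder, since the trigger discards whatever Id is given):

INSERT INTO dbo.Products (TenantId, Id, Name)
VALUES (1, 0, N'Test Product');

-- Returns NULL: there is no identity column, and even if there were,
-- the INSTEAD OF INSERT trigger runs in a different scope.
SELECT SCOPE_IDENTITY() AS NewId;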

After much searching around on the internet I didn’t find a solution for this issue. I even posted on StackOverflow and didn’t get an answer back (at least, I haven’t at the time of writing). However, I did eventually come across a workaround that could be used in place of SCOPE_IDENTITY(). The workaround involved changing the way the trigger worked to some extent.

The new trigger would capture the keys that it inserted and output them in a select statement at the end. The new trigger looked like this:

ALTER TRIGGER dbo.IOINS_Products 
   ON  dbo.Products 
   INSTEAD OF INSERT
AS 
BEGIN
  SET NOCOUNT ON;
  
  -- Acquire the lock so that no one else can generate a key at the same time.
  -- If the transaction fails then the lock will automatically be released.
  -- If the acquisition takes longer than 15 seconds an error is raised.
  DECLARE @res INT;
  EXEC @res = sp_getapplock @Resource = 'IOINS_Products', 
    @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = '15000',
    @DbPrincipal = 'public'
  IF (@res < 0)
  BEGIN
    RAISERROR('Unable to acquire lock to update Products table.', 16, 1);
    RETURN; -- RAISERROR at severity 16 does not stop execution; bail out explicitly
  END

  -- Work out what the current maximum Ids are for each tenant that is being
  -- inserted in this operation.
  DECLARE @baseId TABLE(BaseId int, TenantId int);
  INSERT INTO @baseId
  SELECT MAX(ISNULL(p.Id, 0)) AS BaseId, i.TenantId 
  FROM  inserted i
  LEFT OUTER JOIN Products p ON i.TenantId = p.TenantId
  GROUP BY i.TenantId

  -- The replacement insert operation
  DECLARE @keys TABLE (Id INT);
  INSERT INTO Products
  OUTPUT inserted.Id INTO @keys
  SELECT 
    i.TenantId, 
    ROW_NUMBER() OVER(PARTITION BY i.TenantId ORDER BY i.TenantId) + b.BaseId 
      AS Id,
    Name
  FROM inserted i
  INNER JOIN @baseId b ON b.TenantId = i.TenantId

  -- Release the lock.
  EXEC @res = sp_releaseapplock @Resource = 'IOINS_Products', 
    @DbPrincipal = 'public', @LockOwner = 'Transaction'

  SELECT Id FROM @keys
END
GO

As you can see, the last line of the trigger performs a SELECT in order to get the keys.

Since the Entity Framework is only expecting one result set from the SQL it issued, the result set we have now added in the trigger tricks it into thinking that it has got the SCOPE_IDENTITY() value it was asking for. The trigger’s SELECT runs as part of the INSERT, so its result set arrives first; the result set from Entity Framework’s own SELECT, which contained no value anyway, is now the second result set and is ignored.
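You can see the same thing by running Entity Framework’s batch by hand in SSMS (a sketch; the values are examples and assume the modified trigger is installed):

insert [dbo].[Products] ([TenantId], [Name])
values (1, N'Test Product B')

select [Id]
from [dbo].[Products]
where @@ROWCOUNT > 0 and [TenantId] = 1 and [Id] = scope_identity()

-- Result set 1 (from the trigger): the newly generated Id.
-- Result set 2 (from the select above): nothing, because scope_identity()
-- is null. This is the result set Entity Framework now ignores.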

If you are only ever going to use your database with Entity Framework then this solution may work for you. However, the idea that the trigger creates additional (and potentially unexpected) result sets may prove this workaround’s undoing in a more widely used system.

Composite Primary Keys Including an Identity-Like Column

I’ve been thinking of a way to organise some data for a multi-tenanted system. As such the database that would be used would have to mark pretty much every piece of data with the id of the tenant in some way. This got me thinking about using composite primary keys on tables.

Using a composite primary key with the TenantId as the first part has the advantage that all data relating to a single tenant is adjacent in the database which should speed up reads as there will be less to read. Any reads will likely be confined to a single tenant. I can’t see lookups being made independent of the tenant being frequent, and if they do occur then it is most likely not as part of the application but as a DBA sitting down at SSMS.

The second part of the composite primary key is kind of like an identity column, but not. While an identity column could be used it will produce gaps in the sequence for an individual tenant and I didn’t want that. I wanted similar behaviour to an identity column but separate counters for each tenant.
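To illustrate, with two tenants each adding products, I want the keys to come out like this, with a gap-free sequence per tenant:

TenantId | Id
---------+---
    1    |  1
    1    |  2
    2    |  1
    2    |  2
    1    |  3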

Pitfalls on the way

When I was looking for a solution much of what I found talked about doing the process manually, often from the application side. I really don’t want the application having to know that much about the database.

Other solutions talked about only ever using stored procedures, which is a little better, but then if you have many stored procedures that insert into the table with the composite key then you have to remember to call the code from all those places.

In the end I went for an INSTEAD OF INSERT trigger. But implementing the ideas I’d run across in a trigger proved problematic.

I tried a few different ideas which worked well when a single user was accessing the database, but when stress testing the design with multiple simultaneous inserts I got either primary key violations (insufficient locking) or deadlocks (too much locking). In the end I discovered that there is a mutex-like concept in SQL Server that locks sections of code rather than tables. That may not sound terribly useful, but if there is only ever one piece of code that generates the identity-like column, such as an INSTEAD OF INSERT trigger, then it can work.

The Solution

Here is the solution, which I’ll walk you through in a moment. For the moment, just be aware that there are two tables: Tenants and Products. A product can be owned by a single tenant only, so there is a parent/child relationship.

Tables:

CREATE TABLE [dbo].[Tenants](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[Name] [nvarchar](50) NOT NULL,
 CONSTRAINT [PK_Tenants] PRIMARY KEY CLUSTERED 
(
	[Id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[Products](
	[TenantId] [int] NOT NULL,
	[Id] [int] NOT NULL,
	[Name] [nvarchar](50) NOT NULL,
 CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED 
(
	[TenantId] ASC,
	[Id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

ALTER TABLE [dbo].[Products]  WITH CHECK ADD  CONSTRAINT [FK_Products_Tenants] FOREIGN KEY([TenantId])
REFERENCES [dbo].[Tenants] ([Id])
GO

ALTER TABLE [dbo].[Products] CHECK CONSTRAINT [FK_Products_Tenants]
GO

Trigger:

CREATE TRIGGER dbo.IOINS_Products 
   ON  dbo.Products 
   INSTEAD OF INSERT
AS 
BEGIN
  SET NOCOUNT ON;

  -- Acquire the lock so that no one else can generate a key at the same time.
  -- If the transaction fails then the lock will automatically be released.
  -- If the acquisition takes longer than 15 seconds an error is raised.
  DECLARE @res INT;
  EXEC @res = sp_getapplock @Resource = 'IOINS_Products', 
    @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = '15000',
    @DbPrincipal = 'public'
  IF (@res < 0)
  BEGIN
    RAISERROR('Unable to acquire lock to update Products table.', 16, 1);
    RETURN; -- RAISERROR at severity 16 does not stop execution; bail out explicitly
  END

  -- Work out what the current maximum Ids are for each tenant that is being
  -- inserted in this operation.
  DECLARE @baseId TABLE(BaseId int, TenantId int);
  INSERT INTO @baseId
  SELECT MAX(ISNULL(p.Id, 0)) AS BaseId, i.TenantId 
  FROM  inserted i
  LEFT OUTER JOIN Products p ON i.TenantId = p.TenantId
  GROUP BY i.TenantId

  -- The replacement insert operation
  INSERT INTO Products
  SELECT 
    i.TenantId, 
    ROW_NUMBER() OVER(PARTITION BY i.TenantId ORDER BY i.TenantId) + b.BaseId 
      AS Id,
    Name
  FROM inserted i
  INNER JOIN @baseId b ON b.TenantId = i.TenantId

  -- Release the lock.
  EXEC @res = sp_releaseapplock @Resource = 'IOINS_Products', 
    @DbPrincipal = 'public', @LockOwner = 'Transaction'
END
GO

The Tenants table does use a traditional IDENTITY column for its primary key, but the Products table does not.

The INSTEAD OF INSERT trigger will do just what it says: instead of using the insert statement that was issued, it will replace it with its own code. This means that the database has the opportunity to perform additional actions before the insert takes place, potentially replacing the INSERT itself.

The first thing the trigger does is attempt to acquire a lock to ensure that no one else can calculate the next available Id value at the same time. Rather than locking a table it uses sp_getapplock (available from SQL Server 2005 onwards), which locks a section of code instead; since the trigger is the only place that calculates the next Id value, that is sufficient.

A table variable is created to hold all the current max values of Id for each TenantId in the current insert operation. These will act as the base Ids for the subsequent INSERT operation. This ensures that if the INSERT is processing multiple TenantIds in one operation they are all accounted for. In reality, for the tenant based application I’m thinking about, I don’t see this happening all that frequently if at all, but fortune favours the prepared, so they say.

Next is the actual INSERT statement that will put the data into the database. I’m using the ROW_NUMBER() function (available from SQL Server 2005) to work out a sequential value for each row being inserted, partitioned by the TenantId. This means that each TenantId will have a row number starting at 1, which can then be added to the base Id that was worked out in the previous step. The Id column is the only substitution in the INSERT; the remainder of the columns are passed through as-is without alteration.
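The partitioning behaviour is easy to see in isolation. This little standalone query (with made-up tenant values) numbers each tenant’s rows from 1:

SELECT i.TenantId,
       ROW_NUMBER() OVER (PARTITION BY i.TenantId ORDER BY i.TenantId) AS RowNum
FROM (VALUES (1), (1), (1), (2), (2)) AS i(TenantId);
-- TenantId 1 gets RowNum 1, 2, 3; TenantId 2 gets RowNum 1, 2.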

Finally, the lock is released using sp_releaseapplock(). This ensures that any other process that is waiting to INSERT data into this table can do so. In the event that the transaction fails the lock will be released automatically as the Owner was set to the transaction.
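To check it all hangs together, a quick test might look like this (a sketch; the Id of 0 in the VALUES clause is just a placeholder, since the trigger discards whatever Id is supplied):

INSERT INTO dbo.Tenants (Name) VALUES (N'Tenant A'), (N'Tenant B');

INSERT INTO dbo.Products (TenantId, Id, Name)
VALUES (1, 0, N'Widget'),
       (1, 0, N'Gadget'),
       (2, 0, N'Widget');

SELECT TenantId, Id, Name FROM dbo.Products ORDER BY TenantId, Id;
-- Expected: tenant 1 gets Ids 1 and 2, tenant 2 gets Id 1.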

Issues

Composite keys can give you more natural keys for your data than a surrogate key such as a single identity column or GUID/UUID, because they follow the model of the business more closely. However, they also mean that if the business changes you may end up needing to find ways to change a primary key, something that should remain immutable. For example, if you have two tenants in your system and they merge into a single tenant, you are likely to end up with duplicate keys if you simply change the tenant Id on one.

If the shape of the table changes then the INSTEAD OF INSERT trigger will have to be updated as well, since the replacement insert statement must take account of any new columns and drop references to columns that no longer exist.

There are also issues with Entity Framework not being able to get the newly generated value back. I’ll discuss that in more detail in a future blog post.

Optimising clustered indexes in SQL Server 2008

I’ve just found a script on another blog to go through all the clustered indexes in a SQL Server database and rebuild them in order to reduce fragmentation and improve the disk IO needed to get the data. The original script didn’t take into account tables in different schemas so I updated it. I also added some metrics to it so I could get a sense of how long the operation takes on each table.

The script can take quite a while to run. On my database it took over 6 minutes just to initially run the query to work out what needed rebuilding, and each index can take several seconds (or possibly more if you have a lot of data) on its own.
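If you want a feel for how much work is queued up before committing to a full run, you can query the DMV on its own (a sketch against the current database, clustered indexes only):

SELECT OBJECT_SCHEMA_NAME(ips.object_id) AS [schema],
       OBJECT_NAME(ips.object_id) AS [table],
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.index_id = 1
  AND ips.avg_fragmentation_in_percent > 10.0
ORDER BY ips.avg_fragmentation_in_percent DESC;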

The new script is here:

SET NOCOUNT ON
DECLARE @Schema SYSNAME;
DECLARE @Table SYSNAME;
DECLARE @Index SYSNAME;
DECLARE @Rebuild NVARCHAR(4000)
DECLARE @StartTime DATETIME = GETUTCDATE();

PRINT (CONVERT(NVARCHAR(100), GETUTCDATE(), 113) + ' : Rebuild all indexes with over 10% fragmentation.')

DECLARE DB CURSOR FOR 
SELECT SS.name [schema], SO.name [table], SI.name [index]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS IPS
INNER JOIN sys.objects SO ON SO.object_id = IPS.object_id
INNER JOIN sys.schemas SS ON SO.schema_id = SS.schema_id
INNER JOIN sys.indexes SI ON SI.index_id = IPS.index_id 
  AND SI.object_id = IPS.object_id
  -- Select only clustered indexes (index_id = 1)
  AND SI.index_id = 1
WHERE IPS.avg_fragmentation_in_percent > 10.0 AND IPS.index_id > 0
ORDER BY SO.name 

OPEN DB
FETCH NEXT FROM DB INTO @Schema, @Table, @Index
WHILE @@FETCH_STATUS = 0
BEGIN
  -- QUOTENAME guards against object names containing spaces or other special characters
  SET @Rebuild = 'ALTER INDEX ' + QUOTENAME(@Index) + ' ON ' + QUOTENAME(@Schema) + '.' + QUOTENAME(@Table) + ' REBUILD'

  PRINT (CONVERT(NVARCHAR(100), GETUTCDATE(), 113) + ' : ' + @Rebuild)

  -- Comment out the following line to see what tables would be affected without rebuilding the indexes
  EXEC SP_EXECUTESQL @Rebuild

  FETCH NEXT FROM DB INTO @Schema, @Table, @Index
END
CLOSE DB
DEALLOCATE DB

DECLARE @Duration DATETIME = GETUTCDATE() - @StartTime;
PRINT (CONVERT(NVARCHAR(100), GETUTCDATE(), 113) + ' : Finished. Duration = '+CONVERT(NVARCHAR(100), @Duration, 114))

SET NOCOUNT OFF

Rewiring the users and logins in SQL Server

As a developer I find that I’m frequently backing up and restoring SQL Server databases between servers for development and testing purposes. However, each time I do the link between the login (a server concept) and the user (a per database concept) gets broken.

There is a stored procedure in SQL Server to wire it all back up again and I keep forgetting what it is. So here it is (my aide-mémoire):

sp_change_users_login: It maps a database user to a SQL Server login.

The quick and easy way is as follows:

sp_change_users_login 'Update_One', 'myUserName', 'myLoginName'
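If you can’t remember which users are orphaned in the first place, the same procedure has a report mode. And from SQL Server 2005 SP2 onwards, ALTER USER is the recommended replacement for sp_change_users_login:

-- List the orphaned users in the current database.
EXEC sp_change_users_login 'Report';

-- The modern equivalent of 'Update_One'.
ALTER USER myUserName WITH LOGIN = myLoginName;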


Running Queries on Excel Spreadsheets using SQL Server 2008

I’m more a database person than a spreadsheet person. I’m more used to using SQL to bend data to my will than all the fancy gubbins that you’ll find in Excel. With some chunky (for a spreadsheet) ad hoc data in hand I set about connecting it up to SQL Server so I could run a few choice SELECT statements on the data.

The details in this post work with the 64-bit editions of Excel 2010 and SQL Server 2008 R2.

The basic ad hoc connection looks something like this:

SELECT *
FROM OPENDATASOURCE( 'Microsoft.ACE.OLEDB.12.0', 'Data Source="<full file path to excel file>"; Extended properties=Excel 12.0')...[<data sheet name>$]
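Filled in, that might look like this (the file path and sheet name are just examples, reused from later in this post):

SELECT *
FROM OPENDATASOURCE( 'Microsoft.ACE.OLEDB.12.0', 'Data Source="c:\dev\Performance-results.xlsx"; Extended properties=Excel 12.0')...[results$]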

However, if you try that in SQL Server Management Studio on a raw SQL Server installation you’ll get this error message:

Msg 15281, Level 16, State 1, Line 7
SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 'Ad Hoc Distributed Queries' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information about enabling 'Ad Hoc Distributed Queries', see "Surface Area Configuration" in SQL Server Books Online.

Enabling Ad Hoc Remote Queries

I’ve linked to the Books On-Line entry in the above, but it is only part of the story. Once you’ve followed its instructions on opening the View Facets dialog, you have to hunt around a little to find where you turn on and off the ad hoc remote queries. To save you the time, they’re in the “Server Configuration” facet.

The alternative, also mentioned there, is to issue a SQL command:

sp_configure 'Ad Hoc Distributed Queries', 1;

However, that still won’t work directly. You’ll get the following error message:

Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51

The configuration option 'Ad Hoc Distributed Queries' does not exist, or it may be an advanced option.

The full SQL you need is:

sp_configure 'show advanced options', 1;
RECONFIGURE
GO
sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE
GO

And you’ll get output that looks like this:

Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.

Configuration option 'Ad Hoc Distributed Queries' changed from 0 to 1. Run the RECONFIGURE statement to install.
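You can confirm the setting has taken effect by querying sys.configurations:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'Ad Hoc Distributed Queries';
-- value_in_use should now be 1.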

Now, we can try running the SELECT statement again, but this time the following error appears:

Msg 7302, Level 16, State 1, Line 2

Cannot create an instance of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)".

ODBC Configuration

This is because the ODBC driver is not configured correctly. Go to Control Panel –> System and Security –> Administrative Tools –> Data Sources (ODBC); alternatively, you can just type “ODBC” into the Windows start bar.

Either way, you get to the ODBC Data Source Administrator dialog. In my case the ODBC driver needed for reading Excel files was not installed. A pretty big configuration failure, but it is easy enough to get the correct drivers: you can download them from Microsoft.

However, there is a problem if you have 32bit Office installed and 64bit SQL Server. The 32bit installer for the ODBC Drivers won’t work with 64bit SQL Server, and the 64bit drivers won’t install if it finds an existing 32bit installation of Office on the machine. For my desktop machine that was a problem, but luckily my laptop is running both 64bit versions of Office and SQL Server.

Finally

I eventually found this code snippet that works:

SELECT *
FROM OPENROWSET('MSDASQL',
'DRIVER={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};
DBQ=c:\dev\Performance-results.xlsx',
'SELECT * FROM [results$]')

The only issue that I have with this is that it uses MSDASQL, which is surrounded by uncertainty. One blog post says it is deprecated and that 64-bit versions won’t be available; yet there is a 64-bit version available for download. But for the ad hoc work I’m doing at the moment, it works.
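If the uncertainty around MSDASQL bothers you, and the 64-bit ACE OLE DB provider is installed, the same query should also work through OPENROWSET with that provider directly (a sketch using the same file):

SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=c:\dev\Performance-results.xlsx',
    'SELECT * FROM [results$]');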

Visual Studio / SQL Server install order on Windows 7

Quite a while ago I blogged about the Visual Studio / SQL Server install order on Windows Vista. I’m about to go through a similar exercise on Windows 7 and given the issues I had then I thought that it would be only right to document the procedure in case any problems arose.

Last time, it would seem, the best solution was to install things in the order in which Microsoft released them, with the notable exception of the operating system. So this time, that is the strategy that I’m going to take. Windows 7 is already installed on my laptop, so I’m going to install Visual Studio 2008, then SQL Server 2008, then any patches for either, and we’ll see how we get on.

I’m also going to ensure that I do NOT install SQL Server Express Edition with Visual Studio 2008 as I’ve had problems with that before. Essentially, the problem last time was that the SQL Server installer mistook Visual Studio’s SQL Server Express installation as having installed certain things. The SQL Server installation therefore didn’t want to repeat what it didn’t need to, so it refused to install the client tools.

Install Order

  • Visual Studio 2008, excluding SQL Server 2005 Express Edition

[Screenshot: the Visual Studio 2008 installer with the SQL Server 2005 Express Edition option removed]

  • MSDN Library (This is optional – I installed it because I’m occasionally developing on the road with no or limited connectivity)
  • Visual Studio 2008 Service Pack 1 (this is required in order to install SQL Server 2008 – the installation will fail otherwise)
  • SQL Server 2008 Developer Edition

[Screenshot: the SQL Server 2008 installer reporting compatibility issues]

  • Install SQL Server 2008 SP1

That’s it – Job done. And it only took me two attempts to get it right this time. My stumbling block here was the order in which I applied the service packs.

Developing with SQL Server 2008, deploying on SQL Server 2005

I received the following by email today:

Hi Colin! I found your blog after googling for a bit about SQL Server. I had a question for you… As someone fairly new to .NET development, would it be easier to stick with SQL Server 2005 for now, or just install SQL Server 2008 express? I ask because I don’t yet know enough about the differences between the two to know if there will be any issues when developing small web applications with 2008, but then deploying them to a hosting provider with 2005. Hope that makes sense. 🙂

Cheers,
Sean

I don’t recommend moving to SQL Server 2008 if you will ultimately be deploying on SQL Server 2005.

If you don’t know the difference between SQL Server 2005 and 2008, yet your hosting provider only supports 2005 then you would be better off sticking to working with SQL Server 2005 on your system. The reason for this is that you don’t want to accidentally stumble into features of 2008 that are not supported on 2005.

Also, even if you were intimately aware of the differences between the two versions it is still a good rule of thumb to develop on a system that is as close to the eventual live system as you can. That way you won’t get any unexpected nasty surprises when you do deploy the application and suddenly realise that things are not quite as expected.

If you have already developed the applications then you can set the compatibility level of the database to mimic SQL Server 2005:

ALTER DATABASE [MyDatabaseName] SET COMPATIBILITY_LEVEL = 90

That way you should have a similar (although not necessarily quite the same) experience as if you were developing on a real SQL Server 2005 system. There will still be things that you cannot do. I don’t think backing up your 2008 database and restoring it on a 2005 system will work.
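You can check what level a database is currently running at with a quick query:

SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'MyDatabaseName';
-- 90 = SQL Server 2005, 100 = SQL Server 2008.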

Where's My Data? An introduction to Spatial Queries in SQL Server 2008

The slide deck used for my presentations is available, along with a set of demo code to go along with the slide deck variants.


Different ways to add point data in SQL Server 2008


Spatial data can be added to a table by specifying a column type of geometry or geography. The exact detail of what is in the column can vary, as a spatial column can represent a point, a line string, a polygon and so on. For example, a table that represents the venues of developer events that I’ve been to might look like this:

CREATE TABLE Venue
(
    Id INT IDENTITY(1,1) NOT NULL,
    Name NVARCHAR(256),
    Location geography
)

There are a number of different ways to insert data. Points have the most varied set of options.

First of all there is the standard STGeomFromText:

INSERT INTO Venue
VALUES(
    'HBOS',
    geography::STGeomFromText(
        'POINT(55.9271035250276 -3.29431266523898)',4326));

The function takes two parameters; the first is the Well Known Text (WKT) representation of the geometry, in this case a point, and the second is the SRID. The example above shows the location of the SQL Server UG events held in one of the conference rooms at HBOS’s offices in Sighthill, Edinburgh.

Next is the extended method Parse. I should mention that there are two types of methods with regard to standards. There is a group of methods that comply with the OGC standards (these are all prefixed with ST). Then there are “extended methods”. These are not standards compliant and have been added, I’m guessing, in order to improve the capabilities to some extent over the standards.

An example of the Parse method:

INSERT INTO Venue
VALUES(
    'Glasgow Caledonian University',
    geography::Parse(
        'POINT(55.8659449685365 -4.25072511658072)'));

The function takes only one parameter, which is the WKT. There is no SRID parameter; it defaults to 4326 (WGS84). The example above shows the location of the Scottish Developers events held in the Continuing Professional Development Centre at Glasgow Caledonian University.

The next method is to use Well Known Binary (WKB). I won’t, however, be detailing the format of the binary. At present I would simply like to demonstrate that it can be done.

An example of WKB:

INSERT INTO Venue
VALUES(
    'Dundee University',
    geography::STGeomFromWKB(0x01010000000700ECFAD03A4C4001008000B5DF07C0, 4326));

The function, like its WKT counterpart, takes two parameters. The first is the binary representation of the spatial data, while the second is the SRID. The example above is the location of the North East Scotland .NET User Group who meet at Dundee University.

Next is another extended method, Point. For example:

INSERT INTO Venue
VALUES(
    'Microsoft Campus, TVP',
    geography::Point(51.4618933852762, -0.926690306514502, 4326));

The function takes three parameters, the latitude, the longitude and the SRID. The above example is the location of the Microsoft Campus at Thames Valley Park in Reading where events like DDD are held.

Finally, the standard function, STPointFromText, is used. For example:

INSERT INTO Venue
VALUES(
    'Microsoft Edinburgh Office',
    geography::STPointFromText('POINT(55.9523783996701 -3.2051030639559)', 4326));

The function takes WKT, as did Parse and STGeomFromText; however, it is constrained to WKT that represents points only. If the WKT represents something else the method will fail. If, say, a line string was supplied then an error message would be generated such as “Expected POINT at position 1. The input has LINES.” The example above shows the location of Microsoft’s Edinburgh office.

The result of adding all this information will produce a table with the following data:

Id | Name                          | Location
---+-------------------------------+---------------------------------------------
1  | HBOS                          | POINT (55.9271035250276 -3.29431266523898)
2  | Glasgow Caledonian University | POINT (55.8659449685365 -4.25072511658072)
3  | Dundee University             | POINT (56.4595025684685 -2.98423195257783)
4  | Microsoft Campus, TVP         | POINT (51.4618933852762 -0.926690306514502)
5  | Microsoft Edinburgh Office    | POINT (55.9523783996701 -3.2051030639559)
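Once the data is in, it can be queried like any other column. For example, here is a sketch that orders the venues by distance from a reference point (the coordinates are illustrative, and follow the same coordinate order used throughout this post):

DECLARE @here geography = geography::STGeomFromText(
    'POINT(55.9533 -3.1883)', 4326); -- somewhere in central Edinburgh

-- STDistance on the geography type returns metres.
SELECT Name, Location.STDistance(@here) AS MetresAway
FROM Venue
ORDER BY MetresAway;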