Posts filed under 'SQL Server'

Access: Run-time Error 3155 ODBC insert on a linked table failed

I have been spending a lot of time trying to find out why some of the code used to insert new records into a linked SQL Server table would systematically fail with an error:

Run-time Error '3155' ODBC--insert on a linked table failed

It was driving me mad.
I could insert a simple record using SQL Server Management Studio, and I could add new records to the table in datasheet view within Access, but as soon as I tried to insert a record from code, whether through a DAO recordset or by executing the same SQL INSERT statement, it would fail miserably.

After a fair bit of investigation and tests, of which you can read the full account on the question I asked on StackOverflow, it turns out that this is a long-standing bug in the ODBC Driver (or Access).

Memo fields in Access are usually translated into nvarchar(MAX) in SQL Server by tools like SSMA.
Unfortunately, when you link tables containing these fields using the newer SQL Server Native Client driver, the fields get incorrectly interpreted as plain 255-character text, even though they appear fine in the table design view.
It’s only when you try to insert something into the field, either text longer than 255 characters or NULL, that you get the error message.

So the solution, at least in this case, is to revert to the older SQL Server ODBC driver, or to use varchar() instead of nvarchar(); but if you’re dealing with Unicode, you have to stick with nvarchar().
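If you need to switch an existing application back to the older driver, the linked tables can be relinked through DAO. Here is a minimal sketch; the driver switch is the point, and the server and database names are placeholders:

    ' Relink a table using the legacy "SQL Server" ODBC driver,
    ' which does not mis-type nvarchar(MAX) memo fields.
    ' SERVER and DATABASE values below are placeholders.
    Public Sub RelinkWithLegacyDriver(TableName As String)
        Dim td As DAO.TableDef
        Set td = CurrentDb.TableDefs(TableName)
        td.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
                     "DATABASE=MyDatabase;Trusted_Connection=Yes;"
        td.RefreshLink
    End Sub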


4 comments June 11th, 2009

Access vs SQL Server: some stats (part 1)

In preparation for upsizing my current Access 2007 application, I have been trying to better understand the possible performance impact of various choices of Primary Keys.

My problem is that currently, the Access application uses autoincrement numbers as surrogate Primary Keys (PK). Since I will need to synchronise the data over multiple remote sites, including occasionally disconnected clients, I can’t use the current autoincrement PK and will need to change to GUID.

To see for myself what could be the impact, I made a series of benchmarks.
This first part is fairly simple:

  • Populate a Product table that contains 3 fields: ID, SKU and Description, with 1,000,000 records.
  • Test natively on SQL Server and Access 2007.
  • The records are inserted in transaction batches of 1,000 records.
  • I collect the time taken for each of these transactions and plot it.

Test setup

Nothing much to say about that:

All tests are performed on a dedicated Windows Server 2008 x64 rack running Access 2007 and SQL Server 2008 Standard (SP1) x64.

Test database

In SQL Server, we created a database with two tables, ProductGUID and ProductInt, defined along these lines (column sizes are assumed):

    CREATE TABLE ProductGUID (
        ID UNIQUEIDENTIFIER NOT NULL DEFAULT NewSequentialID() PRIMARY KEY,
        SKU NVARCHAR(50) NOT NULL,
        Description NVARCHAR(255) NULL
    );

    CREATE TABLE ProductInt (
        ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        SKU NVARCHAR(50) NOT NULL,
        Description NVARCHAR(255) NULL
    );

For the table using a GUID, we use the NewSequentialID() instead of NewID() to create new keys. This is supposed to offer much better performance as the generated GUIDs are guaranteed to be sequential rather than random, resulting in better index performance on insertion.

For the Access version, we basically used the same definitions, except that we spread them over 4 tables:

  • ProductINT: let Jet/ACE autonumbering create the sequential integer Primary Key.
  • ProductINTRandom: let Jet/ACE autonumbering create the random integer Primary Key.
  • ProductGUIDRandom: let Jet/ACE use its own internal GenGUID() for the key which generates random GUIDs instead of sequential ones.
  • ProductGUIDSequential: call the Windows API (UuidCreateSequential) to create sequential GUIDs instead.
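The test code further down calls a CreateStringUUIDSeq() helper whose implementation isn’t shown here. As a sketch of what such a helper can look like, wrapping UuidCreateSequential from rpcrt4.dll (32-bit VBA declaration):

    Private Type GUID_T
        Data1 As Long
        Data2 As Integer
        Data3 As Integer
        Data4(7) As Byte
    End Type

    Private Declare Function UuidCreateSequential Lib "rpcrt4" (pId As GUID_T) As Long

    ' Return a sequential UUID as a plain "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" string
    Public Function CreateStringUUIDSeq() As String
        Dim id As GUID_T
        Dim s As String
        Dim i As Integer
        UuidCreateSequential id
        s = Right$("0000000" & Hex$(id.Data1), 8) & "-" & _
            Right$("000" & Hex$(id.Data2), 4) & "-" & _
            Right$("000" & Hex$(id.Data3), 4) & "-"
        For i = 0 To 7
            s = s & Right$("0" & Hex$(id.Data4(i)), 2)
            If i = 1 Then s = s & "-"
        Next i
        CreateStringUUIDSeq = s
    End Function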

SQL Server Test code

Using the SQL Server Management Studio, we performed the following test once for each table (resetting the database in-between):


DECLARE @i INT = 1;
WHILE (@i <= 1000)
BEGIN
    BEGIN TRANSACTION;
    DECLARE @a INT = 1;
    WHILE (@a <= 1000)
    BEGIN
        INSERT INTO ProductGUID (SKU, Description)
        VALUES ('PROD' + CONVERT(VARCHAR(10), @a), 'Product number ' + CONVERT(VARCHAR(10), @a));
        SET @a = @a + 1;
    END;
    COMMIT TRANSACTION;
    SET @i = @i + 1;
END;

Basically, we perform 1000 transactions each inserting 1000 records into the table ProductGUID or ProductINT.

Access 2007 Test code

To duplicate the same conditions, the following VBA code will perform 1000 transactions each inserting 1000 records.
Note that the recordset is opened in Append mode only.
The importance of this will be discussed in another article.

' Run this to insert 1,000,000 products in batches of 1,000
' into the given table
Public Sub Benchmark(TableName As String, InsertSeqGUID As Boolean)
    Dim i As Integer
    For i = 1 To 1000
        Insert1000Products TableName, InsertSeqGUID
    Next i
End Sub

' Insert 1,000 products into a table within a single transaction
Public Sub Insert1000Products(TableName As String, InsertSeqGUID As Boolean)
    Dim i As Long
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim ws As DAO.Workspace
    Dim starttime As Double
    Dim timespan As Double

    Set ws = DBEngine.Workspaces(0)
    starttime = GetClock() ' Get the current time in ms
    Set db = CurrentDb
    Set rs = db.OpenRecordset(TableName, dbOpenDynaset, dbAppendOnly)
    ws.BeginTrans
    With rs
        For i = 1 To 1000
            .AddNew
            If InsertSeqGUID Then !ID = "{guid {" & CreateStringUUIDSeq() & "}}"
            !SKU = "PROD" & i
            !Description = "Product number " & i
            .Update
        Next i
        .Close
    End With
    ws.CommitTrans
    timespan = GetClock() - starttime
    Set rs = Nothing
    Set db = Nothing
    ' Print elapsed time in milliseconds
    Debug.Print timespan
End Sub

We call this code to perform inserts on each of our Access tables:

  • ProductINT table: we just insert data in the ProductINT table, letting Access create autonumber IDs.
  • ProductINTRandom table: we just insert data in the ProductINTRandom table, letting Access create random autonumber IDs.
  • ProductGUIDRandom table: we let Access create the Random GUID for the primary key.
  • ProductGUIDSequential: we use the Windows API to create a sequential ID that we insert ourselves.

Test results

Without further ado, here are the raw results, showing the number of inserted records per second achieved in each test as the database grows (only the tests comparing Sequential GUID and Autoincrement on SQL Server and Access are shown here; see the next sections for the other results):

Inserts per second

What we clearly see here is that performance stays pretty much constant over the whole test, whether we use autoincrement or Sequential GUID keys.
That’s good news, as it means that using Sequential GUIDs does not degrade performance over time.

As a side note, in this particular test, Access offers much better raw performance than SQL Server. In more complex scenarios it’s very likely that Access’ performance would degrade faster than SQL Server’s, but it’s nice to see that Access isn’t a sloth.

Using Sequential GUID vs Autoincrement in Access

The results show that we take a performance hit of about 30% when inserting Sequential GUIDs instead of plain autonumbers.
We’re still getting good results, but that’s something to keep in mind.

In terms of CPU consumption, here is what we get:

CPU load Access

Random PKs, whether simple integers or GUIDs, consume substantially more CPU resources.

Using Sequential GUID vs Identity in SQL Server

Out of the box, SQL Server performs quite well, and there is not much difference whether you’re using Sequential GUIDs or an autoincrement PK.

There is however a surprising result: using Sequential GUIDs is actually slightly faster than using autoincrement!

There is obviously an explanation for this but I’m not sure what it is so please enlighten me 🙂

CPU Consumption:

CPU load SQL Server

Using Random GUID vs Sequential GUID vs Random Autonumber in Access

So, what is the impact of choosing a Sequential GUID as opposed to letting Access create its own random GUIDs?

Inserts per second Random GUID vs Sequential GUID in Access

It’s clear that random GUIDs have a substantial performance impact: their randomness basically messes up indexing, forcing the database engine to do a lot more work to re-index the data on each insertion.
The good news is that this degradation is roughly logarithmic: while performance drops over time, it remains pretty decent overall.
Interestingly, although GUIDs are larger than integers (16 bytes vs 4 bytes), inserting records whose PK is a random integer is actually slower than using a random GUID…
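One way to observe the index churn that random keys cause, assuming SQL Server 2005 or later, is to check the fragmentation of the table’s indexes after a run:

    -- Average fragmentation of the indexes on ProductGUID
    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('ProductGUID'), NULL, NULL, 'LIMITED');

A table filled with random GUIDs will typically show much higher fragmentation than one filled with sequential keys.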

Provisional conclusions

Here we’ve established the baseline for our performance tests. In the next article, we’ll look exclusively at the performance of inserting data from a remote Access 2007 front-end using our VBA code.

Having this baseline will allow us to check the performance overhead of using ODBC and letting Jet/ACE manage the dialogue with the SQL Server backend.

Feel free to leave your comments below, especially if you have any resources or information that would be useful.


  • 16APR2009: added test of random autonumber as PK in Access.
  • 13APR2009: Original Article

15 comments April 13th, 2009

Sysadmin: SQL server performance madness

I’ve just lost 2 days going completely bananas over a performance issue that I could not explain.

I’ve got this Dell R300 rack server that runs Windows Server 2008 that I dedicate to running IIS and SQL Server 2008, mostly for development purposes.

Dell PowerEdge R300 Rack servers

In my previous blog entry, I was running some benchmarks to compare the performance of Access and SQL Server using INT and GUID keys, and I was getting some strange results.

Here are the results I was getting from inserting large amounts of data in SQL Server:

Machine      Operating System          Test without Transaction   Test with Transaction
MacbookPro   Windows Server 2008 x64   324 ms                     22 ms
Desktop      Windows XP                172 ms                     47 ms
Server       Windows Server 2008 x64   8635 ms!!                  27 ms

On the server, not using transactions made the batch take more than 8 seconds, at least an order of magnitude slower than it should be!

I initially thought there was something wrong with my server setup, but since I couldn’t find anything, I just spent the day re-installing the OS and SQL Server, applying all patches and updates, so the server was basically brand new: nothing else on the box, no other services, all the power left for SQL Server…


When I saw the same results for the first time after spending my Easter Sunday rebuilding the machine, I felt dread and despair.
The gods were being unfair: it had to be a hardware issue, related to either memory or hard disk. I couldn’t really understand why, but these were the only things I could see having such an impact on performance.

I started to look in the hardware settings:

Device Manager

And then I noticed this in the Policies tab of the Disk Device Properties:

DISK Device Properties

Just for the lulz of it, I ticked the box and closed the properties dialog:

Enable advanced performance

And then tried my query again:

Machine   Operating System          Test without Transaction   Test with Transaction
Server    Windows Server 2008 x64   254 ms                     27 ms

A nearly 35-fold improvement in performance!

Moral of the story

If you are getting strange and inconsistent performance results from SQL Server, make sure you check that Enable advanced performance option.
Even if you’re not seeing obviously strange results, you may simply be unaware of the issue: some operations may just be much slower than they should be.

Before taking your machine apart and re-installing everything on it, check your hardware settings, there may be options made available by the manufacturer or the OS that you’re not aware of…

Lesson learnt.

April 12th, 2009

Access: building ‘upsizable’ applications.

When you start building an Access application, it’s tempting to just think about today’s problem and not worry at all about the future.
If your application is successful, people will want more out of it and, over time, you’ll be faced with the task of moving the back-end database to a more robust system like SQL Server.

While there are tools like SSMA that can help you move an Access database to SQL Server, a lot of the problems you’ll encounter can be solved before you even have to think about upsizing.
Abiding by a few simple rules will cost you nothing when creating your Access application but will save you a lot of headaches if, or when, the time comes to upsize.

So here are a few things to keep in mind.

Naming conventions

Access is pretty liberal about naming conventions and it will let you freely name your tables, columns, indexes and queries. When these get moved to another database, you’ll most probably be faced with having to rename them.
In some cases, you could actually create subtle bugs because something that used to work fine in Access may be tolerated in the new database but be interpreted differently.

  • Do not use spaces or special characters in your data object names.
    Stick to characters in the range A through Z, 0 to 9 with maybe underscores _ somewhere in between (but not at the start or the end).
    Also try to respect casing wherever you reference the name (especially for databases like MySQL, which can be case-sensitive when hosted on a Linux platform, for instance).
    Customer Order Lines (archive) should be CustomerOrderLines_Archive.
    Query for last Year's Turnover should be QueryLastYearTurnover.
    Index ID+OrderDate should become instead ID_OrderDate.

  • Do not use keywords that are reserved or might mean something else whether they are SQL keywords or functions names:
    A column called Date could be renamed PurchaseDate for instance.
    Similarly, OrderBy could be renamed SortBy or PurchaseBy instead, depending on the context of Order.
    Failing to do so may not generate errors but could result in weird and difficult to debug behaviour.

  • Do not prefix tables with Sys, USys, MSys or a tilde ~.
    Access has its own internal system tables starting with these prefixes and it’s best to stay away from these.
    When a table is deleted, Access will often keep it around temporarily and it will have a tilde as its prefix.

  • Do not prefix Queries with a tilde ~.
    Access uses the tilde to prefix the hidden queries it keeps internally as record sources for controls and forms.

Database design

  • Always use Primary keys.
    Always have a non-null primary key column in every table.
    All my tables have an autonumber column called ID. Using an automatically generated column ID guarantees that each record in a table can be uniquely identified.
    It’s a painless way to ensure a minimum level of data integrity.

  • Do not use complex multivalue columns.
    Access 2007 introduced complex columns that can record multiple values.
    They are in fact fields that return whole recordset objects instead of simple scalar values. Of course, this being an Access 2007-only feature, it’s not compatible with any other database. Just don’t use it, however tempting and convenient it might be.
    Instead, use a junction table to record many-to-many relationships between 2 tables, or use a simple lookup to record lists of choices in a text field itself if you’re only dealing with a very limited range of multi-values that do not change.

  • Do not use the Hyperlink data type.
    Another Access exclusive that isn’t available in other databases.

  • Be careful about field lookups.
    When you create Table columns, Access allows you to define lookup values from other tables or lists of values.
    If you manually input a list of values to be presented to the user, these won’t get transferred when upsizing to SQL Server.
    To avoid having to maintain these lookup lists all over your app, you could create small tables for them and use them as lookup instead; that way you only need to maintain a single list of lookup values.

  • Be careful about your dates.
    Access’s date range (years 100 to 9999) is much larger than SQL Server’s datetime range (1753 to 9999).
    This has 2 side-effects:
    1) if your software has to deal with dates outside that range, you’ll end up with errors.
    2) if your users enter dates manually, they may mistype the year (like 09 instead of 2009) and produce a date Access accepts but SQL Server rejects.
    Ensure that user-entered dates are valid for your application.
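As a simple guard for the second point, a small validation helper (the function name here is ours) can reject dates that SQL Server’s datetime type cannot store:

    ' Check that a date fits SQL Server's datetime range (1753-9999)
    Public Function IsValidSqlServerDate(d As Date) As Boolean
        IsValidSqlServerDate = (d >= #1/1/1753#) And (d <= #12/31/9999#)
    End Function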


VBA code

While most of your code will work fine, there are a few traps that will bomb your application or result in weird errors:

  • Always explicitly specify options when opening recordsets or executing SQL.
    With SQL Server, the dbSeeChanges option is mandatory whenever you open a recordset for update.
    I recommend using dbFailOnError as well as it will ensure that the changes are rolled back if an error occurs.

    Dim rs As DAO.Recordset
    ' Open for read/write
    Set rs = db.OpenRecordset("Stock", dbOpenDynaset, dbSeeChanges + dbFailOnError)
    ' Open for read-only
    Set rs = db.OpenRecordset("Stock", dbOpenSnapshot)
    ' Direct SQL execution
    CurrentDb.Execute "INSERT INTO ...", dbSeeChanges + dbFailOnError
  • Get the new autonumbered ID after updating the record.
In Access, autonumbered fields are set as soon as the record is added, even if it hasn’t been saved yet.
That doesn’t work for SQL Server, where autonumbered IDs are only available after the record has been saved.

    ' Works for Access tables only
    ' We can get the new autonumber ID as soon as the record is inserted
    mynewid = rs!ID
    ' Works for ODBC and Access tables alike
    ' We get the new autonumber ID after the record has been updated
    rs.Move 0, rs.LastModified
    mynewid = rs!ID
  • Never rely on the type of your primary key.
    This is more of a recommendation, but if you use an autonumbered ID as your primary key, don’t rely in your code or your queries on the fact that it is a long integer.
    This can become important if you ever need to upsize to a replicated database and need to transform your number IDs into GUID.
    Just use a Variant instead.
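Following that advice, declaring the variable that receives the key as a Variant keeps the same code working whether the primary key is a long integer or a GUID:

    ' Works unchanged whether ID is an autonumber Long or a GUID string
    Dim mynewid As Variant
    mynewid = rs!ID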

Parting thoughts

These simple rules will not solve all your problems, but they will certainly reduce the number of issues you’ll face when upsizing your Access application.
Using a tool like SSMA to upsize will then be fairly painless.

If you have other recommendations, please don’t hesitate to leave them in the comments; I’ll regularly update this article to include them.


1 comment April 1st, 2009
