How to Import an Access DB into SQL Server Express

Question
Yesterday I tried to import a somewhat large (400 MB) and pretty complex (60+ tables, 120+ queries, and a half dozen links to Oracle tables) Access DB into SQL Server. The Access DB has been around for a little while; it was originally developed in Access 2000, and we’re now using Access 2007 for everything. At first it didn’t work, but then I got some help on a discussion forum and was shown this link:
http://www.mssqltips.com/tip.asp?tip=2002
That got me going (I thought). The process got to about 50% and then just totally died. It actually started running pretty fast, but then slowed to a crawl, and finally seemed to choke on itself. What could prevent SQL Server from being able to import this Access DB? Complexity? Size? Version? Something else? I’m about to start the import process again; not expecting a great outcome. It seemed to take a while when I tried this yesterday. I’ll post back with my results shortly.
Regards,
Ryan

Ryan Shuell
Friday, November 12, 2010 1:29 PM
Answers
Please see these links; they have information about how to import an Access DB to SQL Server:
1. http://support.microsoft.com/kb/237980
2. http://www.fmsinc.com/MicrosoftAccess/SQLServerUpsizing/importing/Import_Access_DB.asp
Hope this helps.
Willy Taveras
http://itlearn.net
- Proposed as answer by Kalman Toth Friday, November 12, 2010 9:14 PM
- Marked as answer by ryguy72 Sunday, November 14, 2010 12:31 AM
Friday, November 12, 2010 1:36 PM
All replies
Thanks Will! I saw something similar; the link I posted. So, anyway, I was able to import all the Tables. I tried to import all the Queries/Views and SQL Server Express threw a fit. I'm now trying to import Queries A-J; will then try to import K-Z if that works. I'll post back as soon as I have some results (good or bad).
Thanks!
Ryan Shuell
Friday, November 12, 2010 7:58 PM
Ok, so far it took A-E. I've only got F-Z to go. Not too bad, compared to the alternative method of building a SQL Server equivalent (of this Access DB) from scratch. Still, this is harder than I thought it would be. Is this normal? Maybe the queries are too complex to do a comprehensive import into SQL Server...some of the queries are indeed QUITE complex.
Has anyone experienced this before?
Kind Regards,
Ryan--
Ryan Shuell
Friday, November 12, 2010 8:41 PM
I’ve spent a good portion of the afternoon trying to import queries with names between F-Z (couldn’t even get F-P imported). I thought it might be a size issue, but according to this link, the size limit in SQL Server 2008 Express is 10 GB:
http://en.wikipedia.org/wiki/SQL_Server_Express
I’m at 908 MB now. I think this is a bit strange because the Access DB is about 350 MB. I clicked Tasks > Shrink > Database and got it down to 891 MB…not much of a difference. Is this feature supposed to be like Compact and Repair in Access? How can the DB go from 350 MB to over 900 MB (2.5x as large)?
Any thoughts, anyone?
Thanks to all who looked at this post.
Ryan Shuell
Friday, November 12, 2010 10:18 PM
Hi Ryan,

"I clicked Tasks > Shrink > Database and got it down to 891MB…not much of a difference. Is this feature supposed to be like Compact and repair in Access?"

Please don't make shrinking a habit. It's not like Compact and Repair
at all. Read http://www.karaszi.com/SQLServer/info_dont_shrink.asp for
the reason why, but read the rest of this reply first.

"How can the DB go from 350 to >900 (2.5x as large)?"
I don't know anything about the internals of Access, so I can't
provide a full answer, but I think I can shed some light on this.
SQL Server is designed to be much more robust than Access. Whatever
happens, up to and including a power outage, SQL Server will (almost)
always succeed in restoring your database to a consistent state that
includes all changes from transactions that were reported complete
before the failure, and none of the changes from transactions that
were still in flight.

To support this robustness, SQL Server uses two ways to store the
data. The primary storage is the data files, where the contents of
the tables are stored in a format that is optimized for fast retrieval
and fast updates; the data files are also heavily cached in order to
further optimize access time.
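As an aside, you can see how a database's space splits between those two kinds of files with a catalog query like this (a minimal sketch - run it while connected to the database in question; size is reported in 8 KB pages):

```sql
-- List each file of the current database with its type and size in MB.
SELECT name,
       type_desc,                 -- ROWS = data file, LOG = transaction log
       size * 8 / 1024 AS size_mb -- size is in 8 KB pages
FROM sys.database_files;
```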
The second storage mechanism is the transaction log file. This file is
basically just a log of every data modification that takes place. In a
normal situation, this is a write-only file. There are two situations
where the log file is read. One is a rollback under normal conditions,
when all modifications of a transaction have to be undone because a
constraint was violated, or because an explicit ROLLBACK TRANSACTION
statement was executed. The other is a server restart after an
unexpected shutdown - in that case, all transactions that were still
open at the time of the shutdown will also be rolled back (undone),
but in addition all transactions that were finished but did not have
their changes flushed from cache to disk will be rolled forward
(redone).
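The first of those situations - an explicit rollback that the log makes possible - can be sketched like this (the table and values are made up purely for illustration):

```sql
-- Hypothetical table, just for illustration.
CREATE TABLE dbo.Demo (id int PRIMARY KEY, amount int);
INSERT INTO dbo.Demo (id, amount) VALUES (1, 100);

BEGIN TRANSACTION;
    UPDATE dbo.Demo SET amount = 999 WHERE id = 1;
    -- The change is recorded in the transaction log...
ROLLBACK TRANSACTION;
    -- ...which is what lets SQL Server undo it here.

SELECT amount FROM dbo.Demo WHERE id = 1;  -- back to 100
```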
When writing to the transaction log file, SQL Server will first
overwrite old entries that are no longer needed. When all unneeded
entries have been reused and more space is needed, SQL Server will
allocate more space (by default - you can also set it to generate an
error condition in that case). This behaviour causes the log file to
grow to its normal working size while the DB is in use, and to stop
growing once it's big enough. However, there are conditions that
cause excessive growth of the transaction log file.

There are three conditions that can cause a transaction log file to
grow too large, and all are related to the age of the oldest entry
that has to be kept. These are:
1. Changed data not flushed to disk for too long - this rarely
happens; SQL Server regulates the flushes itself and this almost never
needs tweaking.
2. A very long running transaction - the nature of the transaction log
makes it that everything since the "oldest change that might need to
be undone or redone" has to be kept, so one long-running transaction
can cause excessive log file growth.
3. A misunderstanding of the recovery model - and this is in fact the
most common reason for excessive log file growth. SQL Server supports
two recovery models: SIMPLE and FULL (actually three, but the third,
BULK_LOGGED, is a variation on FULL and the differences are too
advanced for this reply). These models govern how you can recover your
data after a storage-related failure that causes the loss of a file,
or a user error such as emptying an important table.

In the SIMPLE model, your backup strategy consists of full backups
(that copy the entire data file), and differential backups (that copy
only the portions of the data file that have been changed since the
last full backup). If you need to revert to a backup, you will lose all
changes since the last full or differential backup.

In the FULL model, your backup strategy will have to include
transaction log backups as well. This strategy allows "point in time"
restore - you can instruct SQL Server to restore all changes up to the
millisecond just before you issued that fatal DELETE FROM Customers
statement, or just before the heads of your disk crashed. This is done
by first restoring to the last full or differential backup, and then
applying all transaction log backups that were taken since - this
allows SQL Server to redo all changes that were done since the last
full or differential, up to the specified point in time.

Of course, redoing all changes is only possible if you have an
unbroken chain of changes. There must not be any gaps in the
transaction logs. And therefore it makes sense that SQL Server will not
reuse anything in the transaction log that has not yet been backed up
to a transaction log backup.

The catch with all this is that SQL Server uses the FULL recovery
model by default (*). If you are not aware of this, you won't schedule
transaction log backups - and as a result, SQL Server will not reuse
any changes in the transaction log, no matter how old. And the log
file will just keep growing and growing indefinitely.

(*) This default applies to the "paid" editions of SQL Server. I know
that the Express edition has a few defaults that are different from
those of the paid editions; I don't know if that applies to the
recovery model as well.

To make a long story short, I suspect that the large size of your SQL
Server database is caused by a large transaction log file. Either
because you do the entire import as a single transaction, or because
the recovery model is FULL, but you don't take any transaction log
backups.

To check the recovery model of your database, you can execute this query:
SELECT name, recovery_model_desc FROM sys.databases;
To change it, you can use
ALTER DATABASE perFact SET RECOVERY SIMPLE;
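If the log file does turn out to be the culprit, a one-time shrink of just that file after switching to SIMPLE is reasonable. A sketch - the logical file name 'perFact_log' is an assumption; check the catalog first for the real name:

```sql
-- Find the logical name of the log file; 'perFact_log' below is a guess.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Shrink only the log file, targeting roughly 100 MB (adjust to taste).
-- Unlike routinely shrinking data files, shrinking an oversized log once is generally harmless.
DBCC SHRINKFILE (perFact_log, 100);
```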
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Saturday, November 13, 2010 11:50 AM