A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
We are presently working on a data migration project from one version of our product to a higher version. The database size is around 7 GB.
With our logic the migration time is around 12 hours. Is that acceptable? Customers are really worried. Do we have some benchmark, say, for databases of around 7 GB a typical data migration takes 3 hours, or something like that?
Any good sites that list such benchmarks?
It all depends on how much transformation you are performing. It still sounds extremely high though. We have migrated databases larger than that inside an hour (minimal logic processing).
Can you profile the conversion to see where most of the time is spent? If you are spending 3 hours on one table then you probably have a serious problem that can be rectified.
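A minimal sketch of that kind of per-table profiling, using SQLite as a stand-in for the real database; the table, columns, and row counts here are invented for illustration:

```python
import sqlite3
import time

def profile_migration(src, dst, tables):
    """Copy each table and record how long it took, so the slowest
    step stands out instead of being buried in a 12-hour total."""
    timings = {}
    for table in tables:
        start = time.perf_counter()
        rows = src.execute(f"SELECT * FROM {table}").fetchall()
        if rows:
            placeholders = ",".join("?" * len(rows[0]))
            dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
        dst.commit()
        timings[table] = time.perf_counter() - start
    return timings

# Toy source and destination databases for the sketch.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, total REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i * 1.5) for i in range(1000)])
dst.execute("CREATE TABLE orders (id INTEGER, total REAL)")

timings = profile_migration(src, dst, ["orders"])
worst_first = sorted(timings, key=timings.get, reverse=True)
```

If one table dominates `worst_first`, that is where to look for the "serious problem."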
Wednesday, November 10, 2004
Two factors can have a huge influence:
discrepancies between data models: how different is your new data model from the previous one
type of fields to move: any BLOB, CLOB, or long-text type fields will typically take much longer
12 hours does seem a little long, but not extraordinary (I've seen database migrations go on for several days, but typically they are in the 3-4 hour range).
Depends on how often you expect to migrate data. If you release every month, and require a 12-hour migration each time, I might be concerned.
If you do it once a year, then 12 hours of downtime for a huge database does not seem excessive.
Wednesday, November 10, 2004
The 12 hours does seem a bit long for only 7 GB. If you are truly doing a lot of data transformation and calculations it would be reasonable. Only a couple of solutions I can think of.
One would be to re-examine your approach. Is there the possibility of turning off features that are not needed on the destination database? I am assuming you are moving between an old database and a completely new instance with a different layout. Some databases I have worked with have options to turn off some of the overhead, such as actually building the keys. They then build all of this in mass when you turn it back on. This mass building of the missing keys is many times faster than doing it as you go.
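A minimal sketch of that "build the keys in mass afterwards" idea, again with SQLite standing in for the real database; the table, rows, and index name are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT)")

# Load first with no index in place, so the database is not maintaining
# the index structure on every single insert...
rows = [(i, "name%d" % i) for i in range(50_000)]
con.executemany("INSERT INTO customers VALUES (?, ?)", rows)

# ...then build the index once over the finished table.
con.execute("CREATE INDEX idx_customers_name ON customers(name)")
con.commit()

loaded = con.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

The same pattern applies to real targets that support disabling key maintenance during a bulk load; check what your particular database offers.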
The other would be: is there a possibility of running the conversion process on a multi-CPU system and splitting the conversion of unrelated tables to run in parallel, to reduce the overall run time? For this method you may want to tweak the data distribution across volumes and drives to reduce data-access contention.
I would propose thinking in the same direction as Douglas. I once did a database migration which took one hour on our side (default Oracle 8.0.5 installation) but 24 hours at the customer's side. I figured out that this was because of their rollback config...
So: switch off indexes, triggers, and constraints (your data will be valid; no use in letting the database verify it again). Switch off rollback segments and other recovery-related stuff (you will not want to do a rollback of half-transformed data).
Another issue to consider is the number of commits you send. It may be a good strategy to commit often, every few rows, instead of having only one commit at the end of the script or at the end of each table. This, however, may depend on the database config.
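A minimal sketch of that commit-batching idea, with SQLite as the stand-in target; the batch size is a made-up starting point to be tuned against the real database's config:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER)")

BATCH = 1000          # tune against the real database's transaction limits
batch = []
for i in range(5500):                      # stand-in for the source row stream
    batch.append((i,))
    if len(batch) == BATCH:
        con.executemany("INSERT INTO target VALUES (?)", batch)
        con.commit()                       # one commit per batch, not per row
        batch.clear()
if batch:                                  # flush the final partial batch
    con.executemany("INSERT INTO target VALUES (?)", batch)
    con.commit()

migrated = con.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```

Batching keeps each transaction small enough not to blow out rollback/undo space, while avoiding the per-row commit overhead at the other extreme.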
Thursday, November 11, 2004
I have also seen such processes run long because of the way they are written. If the data is in an RDBMS that speaks SQL, and conversions are being performed one row at a time, 12 hours might not be bad. But there are sometimes alternatives to convert entire tables in one SQL command that will run much quicker. Not all changes can be performed that way, but it might be worth looking at.
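The contrast can be sketched like this, using SQLite and an invented price adjustment as the conversion; both paths produce the same result, but the set-based one is a single SQL command instead of a round trip per row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
for name in ("prices_slow", "prices_fast"):
    con.execute(f"CREATE TABLE {name} (id INTEGER, amount REAL)")
    con.executemany(f"INSERT INTO {name} VALUES (?, ?)",
                    [(i, 10.0) for i in range(4)])

# Row-at-a-time: fetch, transform in client code, write back one by one.
for row_id, amount in con.execute("SELECT id, amount FROM prices_slow").fetchall():
    con.execute("UPDATE prices_slow SET amount = ? WHERE id = ?",
                (amount * 1.1, row_id))

# Set-based: the same transformation over the whole table in one statement,
# letting the database do the work without per-row round trips.
con.execute("UPDATE prices_fast SET amount = amount * 1.1")
con.commit()

slow = [r[0] for r in con.execute("SELECT amount FROM prices_slow ORDER BY id")]
fast = [r[0] for r in con.execute("SELECT amount FROM prices_fast ORDER BY id")]
```

On a few rows the difference is invisible; over millions of rows the per-row version is usually where the hours go.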
Wednesday, December 01, 2004
This topic is archived. No further replies will be accepted.
Powered by FogBugz