A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
Has anyone here worked with a large enterprise application that spanned multiple years and revisions? I was wondering what your thoughts are on Martin Fowler's Evolutionary Database Design: http://www.martinfowler.com/articles/evodb.html
In your experience, would this work? Do you have to evolve the schema over time? Does it have to maintain backwards compatibility, or do you migrate all clients to the new schema at once? I'm not sure the latter is even possible in big projects :)
What is your advice for handling database schema evolution in general?
Thursday, January 03, 2008
Here is a related video that I found interesting: http://www.informit.com/podcasts/episode.aspx?e=e316422d-b974-4125-98d0-f0e9c5c18a5b&rl=1
Friday, January 04, 2008
I think it is great. A lot of bold sensible choices.
> Does it have to maintain backwards compatibility or do you migrate all clients to the new schema all at once?
The answer is whatever makes sense. As the schema evolves, you develop your strategy as you go, maintaining backwards compatibility for older clients that can't be upgraded, and upgrading where possible/necessary. But a modular design is important so clients do not depend on aspects of the schema that they don't need to.
Friday, January 04, 2008
==> Has anyone here worked with a large enterprise application that spanned multiple years and revisions?
Ayup. On my own, I've got apps still in production going back to 1994/1995. Not a one is "stable". By that I mean there have been significant schema changes periodically over the lifespan of the system. Change. It's a fact of life. Sometimes changes come pretty regularly, sometimes it's a year or three between significant changes/mods -- but there's always change.
When I was an employee at BigBlue, I worked on systems that were originally written in the late 1960's and early 1970's that I was maintaining in the mid 90's. 25+ years after they were originally written.
Not only do you have to worry about database design/schema issues -- but for these multi-decade apps you have to worry about the database *engine* itself. Some of those apps from the 1960's went through a progression like: Proprietary, home-grown, hacked data-files ==> some sort of ISAM hack I never fully understood (technology from before I was born) ==> IDMS (proprietary stuff I never understood either -- again, before my time) ==> IMS / hierarchical database from IBM ==> DB2 / relational databases ==> ??? who knows what's next ???
The whole time, throughout each of those platform/engine/architecture changes, as much as possible all of the legacy data had to be ported to go with it.
The point is, not only will the schema change over time, but if your app's lifecycle is measured in decades, you can count on changing database/data-access technologies a few times too. The worst part is migrating that legacy data.
There's nothing in the linked article from Mr. Fowler that database professionals haven't been doing for at least the last 10 years.
In the early days, folks couldn't do it because (as an example) there was no such thing as a developer workstation where the developer could have his own "sandbox" database/compilers/tools etc. For at least the last decade or so, though, we've all been more-or-less following the guidelines listed in the linked article. If you haven't been, then you'd better start soon. These really are a kind of "best practices" outline, and once you make the changes to get into that mode, life will be much easier for you.
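For what it's worth, the core practice in the article -- numbered migration scripts applied in order, so every sandbox and the production database converge on the same schema -- fits in a few lines. This is just my own illustration (using Python's sqlite3 and its `user_version` pragma; the table and column names are made up), not anything from the article itself:

```python
import sqlite3

# Each entry bumps the schema by one version. Real projects keep these
# as numbered SQL files under version control; the DDL here is made up.
MIGRATIONS = [
    "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE customer ADD COLUMN email TEXT",
]

def migrate(conn):
    """Apply every migration newer than the database's recorded version."""
    version = conn.execute("PRAGMA user_version").fetchone()[0]
    for v, script in enumerate(MIGRATIONS, start=1):
        if v > version:
            conn.execute(script)
            conn.execute("PRAGMA user_version = %d" % v)
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)   # a fresh sandbox database replays the full history
migrate(conn)   # running again is a no-op: already at the latest version
```

Because the runner only applies scripts past the recorded version, the same command safely upgrades a brand-new sandbox or a years-old production database.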
Friday, January 04, 2008
==> for these multi-decade apps you have to worry about the database *engine* itself.
Replying to my own post, sometimes it's not even the data engine but the technologies you use to extract data from the database engine. In the decade or so I've been with the MS SQL Server and MS Access products, I've been through raw ODBC, ODBCDirect, DAO, ADO, RDO, ADO.NET. Now the devs in my shop are migrating to LINQ (whatever that is; I'm a dinosaur now and don't keep up), and who knows what will be next.
The point is: Change.
Count on it.
Plan for it.
Friday, January 04, 2008
It isn't clear from Martin Fowler's articles or your own posts whether you:
a) Migrated all clients to the new schema and blew away the old one, or
b) Had to ensure the schema was backwards compatible.
I'm not sure how reasonable each one is. In the simple case of adding a new column to the database and having to maintain backwards compatibility, how do you handle the fact that old clients will not provide you with the information you now require (the new column)?
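One common answer to that specific case (this is my own sketch, not something from the article) is to add the new column with a DEFAULT, or allow NULL, so an old client's unchanged INSERT still succeeds while new clients supply the value explicitly. Using Python's sqlite3 and made-up names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# New requirement: a currency column. Giving it a DEFAULT means old
# clients that don't know about the column can keep inserting unchanged.
conn.execute(
    "ALTER TABLE orders ADD COLUMN currency TEXT NOT NULL DEFAULT 'USD'")

# An "old" client's insert, written before the column existed:
conn.execute("INSERT INTO orders (amount) VALUES (9.99)")

# A "new" client supplies the value explicitly:
conn.execute("INSERT INTO orders (amount, currency) VALUES (8.50, 'EUR')")
conn.commit()
```

The default is only a stopgap, of course: if the new column genuinely can't be defaulted, you're back to upgrading the old clients or having a middle layer fill it in.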
Saturday, January 05, 2008
I find that most problems are political rather than technical.
If a developer with a lot of kudos and leverage insists that a database design needs to be done a certain way it is very hard to get them to change their mind. Particularly if they have built a lot of code on the assumption that their design will be the one imposed on everyone else.
I'm a DBA and my primary question on all database changes is "how will this affect LIVE customer service?" Interruption of LIVE service costs £30K per minute, so it is quite an important question; after all, it is the customer who is paying my mortgage.
Sandbox databases are fine for minor experimentation but it is fairly easy to come up with a solution that works fine in a sandbox with small volumes of data, low network traffic, few users etc. In a large environment something that was sane and sensible on a small scale is simply non-viable in a heavily stressed environment.
I think with evolutionary design and agile practices, design has to be paramount. A crappy design is going to come back to bite you in the bum.
I believe their point was that regardless of how well you design it the first time around, you're likely to get it wrong, so instead you design for change, and when the inevitable occurs you take a smaller hit.
I'm actually glad you're a DBA because I think it's important to get the perspective from the other end of things: how do you normally handle schema evolution in the field?
Sunday, January 06, 2008
There's an entire book (in the Martin Fowler series, go figure) called "Refactoring Databases" by Scott Ambler and Pramod Sadalage. It covers this topic in much more detail than Fowler's one-page writeup, including how to evolve the schema while maintaining back-compat.
I found it rather dry, but there's some good stuff in there.
Monday, January 07, 2008
I talked with a friend and he basically said that so long as you're putting out a web-based application you don't have to worry much about refactoring. I guess the same is somewhat true of anything behind web services because the feeling is that you can always upgrade all your clients at once.
That is, if you put up a layer between your real clients and the database you can upgrade the schema in ways that break backwards compatibility so long as that middle layer does the mapping to maintain compatibility. I guess schema evolution is not the same as API evolution after all... Does that make sense?
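That makes sense to me. A toy sketch of the idea (entirely hypothetical names, using Python's sqlite3): suppose the schema change split a single `name` column into `first`/`last`. The middle layer can keep accepting the old payload shape and map it onto the new schema, so real clients upgrade on their own schedule:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# New schema: the old single "name" column has been split in two.
conn.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, first TEXT, last TEXT)")

def create_person(payload):
    """Service-layer shim: accepts both the old {'name': ...} payload and
    the new {'first': ..., 'last': ...} shape, mapping either onto the
    new schema so the breaking change stays behind the middle layer."""
    if "name" in payload:                           # old-style client
        first, _, last = payload["name"].partition(" ")
    else:                                           # new-style client
        first, last = payload["first"], payload["last"]
    conn.execute(
        "INSERT INTO person (first, last) VALUES (?, ?)", (first, last))

create_person({"name": "Ada Lovelace"})               # old client still works
create_person({"first": "Alan", "last": "Turing"})    # new client, new shape
conn.commit()
```

The schema itself broke compatibility, but the contract the clients see did not -- which is exactly the distinction between schema evolution and API evolution.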
Tuesday, January 08, 2008
Stored Procedures and Views are your friends here. Then you can change your database structure independently of changing the app - for many kinds of changes, you can actually change the DB in advance of the app changing with little or no downtime.
If your app uses inline SQL, both will have to be revved together, which makes the logistics potentially harder and the scope of bugs larger.
I think you missed the important part here: "Clearly separate all database access code". If deployed clients are directly accessing your tables and columns, you have no ability to change your schema without simultaneously upgrading them. You have to be backwards compatible, but you get to define what that consistent API is. Views and stored procs, web services, etc. Once you have a layer of abstraction, you are free to change what's behind it.
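To make that concrete with a small sketch of my own (sqlite3 for brevity, made-up names): if a table is restructured, a view published under the old name and columns preserves the contract that deployed clients were coded against:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The schema change: "employee" has been restructured into a new table.
conn.execute(
    "CREATE TABLE staff (id INTEGER PRIMARY KEY, full_name TEXT, dept TEXT)")
conn.execute("INSERT INTO staff VALUES (1, 'Grace Hopper', 'R&D')")

# A view with the old table's name and columns preserves the contract
# that deployed clients (and the reporting system) still query against.
conn.execute("""
    CREATE VIEW employee AS
    SELECT id, full_name AS name FROM staff
""")

# An old client's query, completely unchanged:
row = conn.execute("SELECT name FROM employee WHERE id = 1").fetchone()
```

The view is the published interface; the tables behind it are free to change.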
Okay, I can see that happening for external customers but how many internal customers do you guys normally run across? That is, won't changing your schema break various internal customers that talk directly to the database, such as the reporting system?
Also, does it really matter whether you abstract the database behind web services or not? Whether you provide backwards compatibility at the WS layer or the DB layer, doesn't this compatibility limit what you can do? What is the advantage of web-service-level abstraction?
Saturday, January 12, 2008
This topic is archived. No further replies will be accepted.
Powered by FogBugz