The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Deploying Database Changes

Context: C#, ASP.NET, MS SQL Server, SVN.

Where I work, we have a development database server and a production db server. The servers each run a single instance of SQL Server, which hosts multiple databases. A typical web app will use one or two databases extensively, and will touch dozens of tables, views, and stored procedures.

We develop against the first (duh) and have full access, including creating/altering database objects. We have only read privileges against the second.

When we (and our bosses) are satisfied with the way the app works on Dev, we write an informal deployment doc. Usually it says "run these SQL scripts against this database, then do an xcopy deployment of the app." We hand it over to Operations to do the actual deployment.

For small changes (fix a bug, add a new web page) this tends to work well. However, twice in the past six months, I have messed up a rollout of major new features by forgetting to include a database change: an Alter View, a new sproc, etc.

This is bad. It's unprofessional on my part, causes extended downtime of the web site, and requires the involvement of my manager to push through the second (and third) database change.

The question is, how can I keep track of the database changes I make? Using a tool like Redgate to do a diff on the schemas will create too many false positives because other developers are also making changes. Besides, only the DBA has a license for it.

I already make a conscious attempt to keep database change scripts. Clearly, I don't do it well enough. For my source code, TortoiseSVN's little icon overlays tell me which source files have changed or are new. I need some mechanism in the database to provide similar handholding.

Suggestions, anyone?
Art Metz Send private email
Tuesday, November 11, 2008
We have an object that is responsible for updating any database the app connects to. It checks a version number in the target db and applies the updates in sequence (the implementation is a for loop over a case statement). Developers append their changes to the case statement instead of applying them interactively, so all changes are tested against the development environment. For some schema changes a failure is fatal and blocks the update until fixed; other schema changes just raise warnings.

It may be less flexible than running SQL statements interactively, but it enforces a structured approach.
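A rough T-SQL sketch of this idea, with a version table standing in for the in-app version check (all table, column, and object names here are illustrative, not from the actual app):

```sql
-- Create the version marker on first run (hypothetical table name).
IF OBJECT_ID('dbo.SchemaVersion') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaVersion (Version int NOT NULL);
    INSERT INTO dbo.SchemaVersion (Version) VALUES (0);
END

DECLARE @v int;
SELECT @v = Version FROM dbo.SchemaVersion;

-- Each developer appends a new numbered step; steps run in sequence.
IF @v < 1
BEGIN
    -- change #1: e.g. a new nullable column
    ALTER TABLE dbo.Orders ADD ShippedDate datetime NULL;
    UPDATE dbo.SchemaVersion SET Version = 1;
END

IF @v < 2
BEGIN
    -- change #2: CREATE VIEW must be the first statement in its
    -- batch, so it is wrapped in EXEC here.
    EXEC('CREATE VIEW dbo.OpenOrders AS
          SELECT OrderId FROM dbo.Orders WHERE ShippedDate IS NULL');
    UPDATE dbo.SchemaVersion SET Version = 2;
END
```

Because each step ends by bumping the version, the same script can be run against dev, QA, and production and will only apply the steps the target database hasn't seen yet.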

jz Send private email
Tuesday, November 11, 2008
Simplest answer would be to never use the GUI/interactive console. Always do database changes in scripts.
Chris Tavares Send private email
Tuesday, November 11, 2008
I've used a technique like jz suggests.  Depending on how many DB instances you can afford to set up, each developer may be able to have his/her own sandbox, where you can use an interactive tool to develop changes. 

Then, script your changes and check them in with your source code.  When your nightly build (you do have a nightly build, don't you?) builds your source, it can run your upgrade scripts against the "integrated" dev DB, and if something's going to break, it'll break in dev before you get to qa or prod.

Vary this theme to handle different environments, but I'll vouch for jz's plan in some rendition.
D. Lambert Send private email
Tuesday, November 11, 2008
Something like what is described in this series of articles:
Joe Send private email
Wednesday, November 12, 2008
We actually test the deployment procedures as part of QA.

So every time new code is delivered, it is installed into QA for testing using the deployment procedures.
to lazy to log on
Wednesday, November 12, 2008
Thank you all for your suggestions.

JZ: This is something that needs to be enforced among all developers.  I'm afraid it won't fly in my shop.

Chris: I _thought_ I was doing that.  Clearly I was not as careful as I had thought.  I was hoping for a tool or procedure that would help me.  For example, if there were triggers on CREATE/ALTER/DROP (or a way to simulate them), the system could log all my schema changes.
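As it happens, SQL Server 2005 and later do support database-scoped DDL triggers that fire on exactly these events. A rough sketch of a change-logging trigger (the log table, trigger name, and event list are illustrative):

```sql
-- Hypothetical audit table for schema changes.
CREATE TABLE dbo.SchemaChangeLog (
    EventTime   datetime      NOT NULL DEFAULT GETDATE(),
    LoginName   sysname       NOT NULL DEFAULT SUSER_SNAME(),
    EventType   nvarchar(100) NULL,
    ObjectName  nvarchar(256) NULL,
    CommandText nvarchar(max) NULL
);
GO
CREATE TRIGGER LogSchemaChanges
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
    CREATE_VIEW, ALTER_VIEW, DROP_VIEW
AS
BEGIN
    -- EVENTDATA() returns an XML description of the DDL event.
    DECLARE @e xml;
    SET @e = EVENTDATA();
    INSERT INTO dbo.SchemaChangeLog (EventType, ObjectName, CommandText)
    VALUES (
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END
```

With something like this on the dev database, querying SchemaChangeLog filtered by your login and a date range would recover every schema change you made, including the exact DDL text.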

D. Lambert: Unfortunately, we don't have a nightly build and it is not practical to host local copies of the db server.  (I'm using that term to mean "single instance of SQL Server that hosts all thirty databases.")  It's not a licensing issue; it's size. 

to[sic] lazy: We don't have a staging environment to test deployment.  Management has decided that it's too expensive. (I know, losing a day's worth of page views is also expensive, but I don't make that call.)  Similarly, we lost our QA guy this month in the layoff due to the recession.  (Something between 15 and 25% of the company was laid off, including the CIO.)

Joe: Thanks for the link.  It's too meaty to skim, so I will make time to digest the article. (I have no idea why I'm using these food metaphors.)

I apologize in advance for sounding negative; I truly appreciate the time you guys took to help. 

In talking with my manager, it turns out that we have a nightly scripted run of dbGhost (the Redgate tool I referred to) and that changes are stored in a dedicated SVN repository.  I've been here 15 months, and no one has ever mentioned this before.  It's not quite as good as having a trigger, because it loses information about who made the change, but it's a start.
Art Metz Send private email
Wednesday, November 12, 2008
One other thing to think about - can you write a "unit test" against the desired schema? Some sort of script that checks that all the expected stuff is in place, and if failed you roll back the schema changes?
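A minimal sketch of such a schema check in T-SQL, assuming hypothetical object names; each failed assertion raises an error that a deployment script could treat as a signal to roll back:

```sql
-- Post-deployment schema assertions (object names are illustrative).
-- OBJECT_ID's second argument filters by type: 'V' = view, 'P' = procedure.
IF OBJECT_ID('dbo.OpenOrders', 'V') IS NULL
    RAISERROR('Missing view dbo.OpenOrders', 16, 1);

IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_SCHEMA = 'dbo'
                 AND TABLE_NAME   = 'Orders'
                 AND COLUMN_NAME  = 'ShippedDate')
    RAISERROR('Missing column dbo.Orders.ShippedDate', 16, 1);

IF OBJECT_ID('dbo.GetOpenOrders', 'P') IS NULL
    RAISERROR('Missing stored procedure dbo.GetOpenOrders', 16, 1);
```

Run against production right after the Ops deployment, a script like this would have caught the missing Alter View or sproc before users did.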
Chris Tavares Send private email
Wednesday, November 12, 2008
We have a release request tool (internal website).  So our process is that we branch the code in Subversion for the next major release, the developers check their changes into the branches.  There is an area for front-end code and for SQL scripts. 

They then go to the tool and build a release request.  They build a request by checking off the files they want to include in the release request.  The tool helps them select the files they want by date, last author, changed since branched, etc.  They are selecting front-end and SQL scripts.  Another important part of the workflow is where they assign issue IDs (think bug tracking or feature request numbers) to that request.

Then when the build manager goes to do a build he can select only those issue items that should go into that build.  He selects the request, clicks “build” and the tool pulls all of the front-end file into the build directories and then generates a single SQL build script.  The tool knows that order is important when building the SQL script file.

This “package” of changes is then promoted from Test to QA to Production.  We are thinking this might be our next product…
JBrooks Send private email
Friday, November 14, 2008
I recently took a webinar put on by NetObjectives called something like "Database Agility - Transition Testing". I was distracted during the seminar, but they definitely had what appeared to be a reasonable method for transitioning to new db changes. It is based upon a set of transition tests, similar to having unit tests, for migration. They even have a relatively inexpensive tool (I think $75) called DataConstructor. Anyway, the problem you are having seems to be easily resolved through their methodology. I have discussed this with others and hope to begin using this in my practices as well.
Bill Turner Send private email
Thursday, December 04, 2008

This topic is archived. No further replies will be accepted.

Other recent topics
Powered by FogBugz