The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

How to write to 2 different databases?

I have a client app which talks to an IIS server via ASP pages and writes to an MS SQL database on the back end. This is my primary database, and I have full control over it.

There is a requirement to integrate this app with another database, so I need to write to that database (let's call it the secondary db) at the same time as I save records into my primary db. I have no control over the secondary db; I can only read and write certain records.

My secondary db is also accessible to the client via an app server running IIS/ASP. Neither database is directly accessible; both are reached via IIS/ASP. The two databases are on separate networks, connected over the internet.

The problem with this secondary db is that I need to make sure the records have been successfully saved to both databases and, if not, roll back the changes.

I'm wondering what's the best way to achieve this.

I was thinking of using a cascading call/transaction from the client to the primary app server, and from that server to the secondary app server. Once the primary app server receives the data from the client, it makes a call to the secondary app server to write its portion of the data; once the ok is received, it writes the other part of the data to the primary db, then returns ok to the client.

Using that approach, I cannot be certain the data was committed to the db I wrote to last. If my primary app server crashes after the data was already sent to the secondary db, it has no way to know what needs to be rolled back.

So I guess I need to start by implementing a kind of transaction token in my primary db, with a transaction list generated first and stored in my primary db.
Once the transaction token and list are created, I can start writing to the secondary db and then to the primary db; once the whole thing is complete, I mark the transaction as complete and return an ok response to the client.
If something goes wrong, the client won't get a response and will retry the transaction.
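
Roughly what I'm picturing, as a sketch (the layout and names here are made up, and the two databases are faked with dicts just to show the ordering):

    # Sketch of the transaction-token idea. The two "databases" are
    # simulated with dicts; in reality they would be the ASP pages in
    # front of each SQL database.
    import time
    import uuid

    primary_db = {"transactions": {}, "records": {}}
    secondary_db = {"records": {}}

    def save_record(record_id, primary_part, secondary_part):
        # 1. Register the transaction token and work item in the primary
        #    db BEFORE touching the secondary db, so a crash always
        #    leaves a trail for the cleaner to find.
        token = str(uuid.uuid4())
        primary_db["transactions"][token] = {
            "status": "pending",
            "record_id": record_id,
            "started_at": time.time(),
        }
        # 2. Write the secondary portion first (the db I don't control).
        secondary_db["records"][record_id] = secondary_part
        # 3. Write the primary portion.
        primary_db["records"][record_id] = primary_part
        # 4. Mark the transaction complete and return ok to the client.
        primary_db["transactions"][token]["status"] = "complete"
        return "ok"

Since the writes in steps 2 and 3 are keyed by record_id, a retry from the client just overwrites the same rows instead of duplicating them.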
 
In case the system crashes while a transaction is in progress, I need to implement a transaction-cleaner service that periodically scans the primary db for incomplete transactions and rolls back the failed ones.
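
Something along these lines, I think (just a sketch; the rollback call to the secondary app server is a hypothetical stub):

    import time

    STALE_AFTER = 300  # seconds; assume a pending transaction older than this is dead

    def undo_secondary_write(record_id):
        # Stub for the compensating call to the secondary app server.
        print("rolling back secondary write for record", record_id)

    def clean_incomplete_transactions(transactions, now=None):
        # transactions: {token: {"status", "record_id", "started_at"}},
        # i.e. the transaction list registered in the primary db.
        now = time.time() if now is None else now
        for txn in transactions.values():
            if txn["status"] == "pending" and now - txn["started_at"] > STALE_AFTER:
                undo_secondary_write(txn["record_id"])
                txn["status"] = "rolled_back"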

I'm wondering if anybody has done this before and if you guys see any problem with this design?

Thanks in advance.
John Staffini
Monday, May 08, 2006
 
 
Sounds like two-phase commit. There should be plenty of info out there on it; it is not a new concept. Good luck and happy hunting.
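
The shape of it is roughly this (a toy sketch, not a real implementation):

    # Two-phase commit in miniature: the coordinator asks every
    # participant to prepare (vote), and commits only if all vote yes;
    # any "no" means abort for everyone.
    class Participant:
        def __init__(self, name, can_commit=True):
            self.name = name
            self.can_commit = can_commit

        def prepare(self):
            # Phase 1: durably stage the work so commit cannot fail later.
            return self.can_commit

        def commit(self):
            print(self.name, "committed")

        def abort(self):
            print(self.name, "aborted")

    def two_phase_commit(participants):
        # Phase 2: commit everywhere, or abort everywhere.
        if all(p.prepare() for p in participants):
            for p in participants:
                p.commit()
            return "committed"
        for p in participants:
            p.abort()
        return "aborted"

    # e.g. two_phase_commit([Participant("db1"), Participant("db2")])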
rick
Monday, May 08, 2006
 
 
Unluckily, you have guessed right.
The process you've designed is right, but it is really painful to implement.

The only alternative I see is synchronization, independent of the transaction, but this has some drawbacks.

Good Luck
Sevenoaks
Monday, May 08, 2006
 
 
I've done work with synchronisation of PCs and servers, where the PC may or may not be connected.

This wasn't MS SQL, though, so there may be built-in solutions available to you.

The system could do real-time updates, but in case users were disconnected, updates could be saved up. We would store the information in a local file, and each time someone did something, it would also check for anything saved up.

We stored the saved-up items with a key made of an ID unique to the PC plus a per-PC transaction number. So, at connection time, it could collect the unsaved items and apply them.
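
In outline, it worked something like this (a simplified sketch; the file and function names here are invented):

    # Store-and-forward queue: each PC numbers its own updates, so the
    # (pc_id, seq) pair is a unique key and replay happens in order.
    import json

    QUEUE_FILE = "pending_updates.txt"  # local file on the PC

    def queue_update(pc_id, seq, update):
        # Save the update locally while disconnected.
        with open(QUEUE_FILE, "a") as f:
            f.write(json.dumps({"pc_id": pc_id, "seq": seq, "update": update}) + "\n")

    def flush_queue(send_to_server):
        # At connection time, collect everything saved up and apply it.
        try:
            with open(QUEUE_FILE) as f:
                items = [json.loads(line) for line in f]
        except FileNotFoundError:
            return
        for item in sorted(items, key=lambda i: i["seq"]):
            send_to_server(item)
        open(QUEUE_FILE, "w").close()  # clear the queue once applied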

The real challenge is dual updates: working out what to do when something done on the PC conflicts with an update made elsewhere.
Tim Almond
Monday, May 08, 2006
 
 
I'd try updating the second database from the first if you have the option.
D in PHX
Monday, May 08, 2006
 
 
I don't have an answer.

From a design perspective, is there a reason why you need that level of redundancy yet with more restrictive controls over the second, remote database?

Consider a bank - if I make an electronic transaction, it usually goes against an authoritative master database, and then the entire transaction is subsequently published to a backup slave database. Err... I doubt there's a scenario where you'll save X, Y, and Z to the master, but only Y to the slave.

Based on what the original poster wrote, I would definitely go back to the customer and renegotiate the requirements. For example, replacing the IIS/ASP front ends to the databases with something more robust will save you a lot of pain and heartache down the road.
TheDavid
Monday, May 08, 2006
 
 
See

 http://msdn2.microsoft.com/en-us/library/ms191440.aspx

for an intro to two-phase commit (2PC)
BillT
Monday, May 08, 2006
 
 
Also known as the XA protocol.

In the MS world, an easy way to get this for not much work is to write a COM+ (aka EnterpriseServices) component.
example
Monday, May 08, 2006
 
 
And, if you really want to understand the details...

 http://search.barnesandnoble.com/bookSearch/isbnInquiry.asp?r=1&isbn=1558601902 (Gray & Reuter, Transaction Processing: Concepts and Techniques)
BillT
Monday, May 08, 2006
 
 
Thanks for the feedback to all of you.

It looks like I'll have to design this from scratch, as none of the MS technologies (COM+, DTC, MTS) can be used easily in my particular case.
John Staffini
Tuesday, May 09, 2006
 
 
If you're planning to implement an XA-compliant transaction manager from scratch, good luck.  You may be better off trying to find an available implementation that meets your needs.

Just curious -- why do DTC, etc. not work for you?
BillT
Tuesday, May 09, 2006
 
 
Bill, the main reason I say the MS technologies won't work for me is that my 2 servers will be on 2 different networks, separated by firewalls, etc.
Unless I develop some kind of HTTP listeners on both servers to pass messages along - and even then - I don't see the MS protocols working very well in that scenario.
Maybe I'm wrong here?

And since the transactions are initiated from a user client app, waiting for long background processing to settle the transactions between the 2 servers may not be practical for the user.

So what I have in mind is to implement a transaction manager/coordinator in the client application itself and implement the garbage cleaner on server 1.

That way, the client will initiate the transaction, register a transaction token on db 1, process the transaction on db 2, and then process the transaction on db 1. If any part of this fails, the client will know and can attempt to roll back the transaction or retry the failed part, etc. If the client fails and none of the transactions can be confirmed, then the garbage cleaner will have to roll back the transactions.
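
In pseudo-code, something like this (call_primary and call_secondary stand in for the HTTP posts to the two app servers, so all the names here are hypothetical):

    def run_transaction(record_id, data, call_primary, call_secondary, retries=3):
        # The client acts as coordinator. Both call_* functions take
        # (action, payload) and raise IOError on failure.
        token = call_primary("begin", {"record_id": record_id})
        for _ in range(retries):
            try:
                call_secondary("write", {"record_id": record_id, "data": data})
                call_primary("write", {"record_id": record_id, "data": data})
                call_primary("commit", {"token": token})
                return "ok"
            except IOError:
                continue  # retry; the writes must be idempotent for this
        # Could not finish: the still-pending token is left for the
        # garbage cleaner on server 1 to find and roll back.
        return "failed"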

Makes sense?
John Staffini
Tuesday, May 09, 2006
 
 
In the face of those limitations, and having to clean up on db1 yourself, +1 for simply updating the 2nd database from the 1st on some kind of scheduled basis, independent of your client application.
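
i.e. a scheduled job that drains a "pending" flag on db1; roughly (all three callbacks are placeholders):

    def sync_pending_rows(fetch_pending, push_to_secondary, mark_synced):
        # fetch_pending:     rows in db1 flagged as not yet in db2
        # push_to_secondary: HTTP post to the secondary app server
        # mark_synced:       clear the flag in db1 once db2 confirms
        for row in fetch_pending():
            try:
                push_to_secondary(row)
                mark_synced(row["id"])
            except IOError:
                break  # stop on error; the next run picks up the rest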
Cade Roux
Wednesday, May 10, 2006
 
 
Especially since you have full control over the primary db.
Cade Roux
Wednesday, May 10, 2006
 
 

This topic is archived. No further replies will be accepted.
