The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

How To Manage Network Disconnection

Hi,

I have a network version of my time tracking application, and one of the people trialling it came back with this question: "When time for a task is currently being recorded and the connection to the network database is disrupted, does that mean the record cannot be saved?" My initial reply was yes, the record cannot be saved because there is no longer a connection to the database. Is this correct, or is there a way around it so that the record still (eventually) gets saved to the network database?

My idea is this: how about making the application work locally and periodically synchronizing with the network database? It looks good on paper, but maybe there are hidden gotchas that I don't know about. What do you guys think? Thanks.
Phillip Flores
Monday, October 29, 2007
 
 
If your application loses its connection to the database, it can still save data in a local repository (files, an embedded DB, etc.) and try to insert that data next time a connection is available. I guess this model should work for a time-tracking application.
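A minimal sketch of that "save locally, insert next time" pattern, assuming a SQLite file as the local repository and a simple list standing in for the network database (the table, column, and function names here are invented for illustration):

```python
import sqlite3

# Local store: every entry is written here first, flagged as unsynced.
conn = sqlite3.connect(":memory:")  # a file path in a real app
conn.execute("""CREATE TABLE time_entries (
    id INTEGER PRIMARY KEY,
    task TEXT, minutes INTEGER,
    synced INTEGER DEFAULT 0)""")

def record_entry(task, minutes):
    # Always write locally first; the network may be down.
    conn.execute("INSERT INTO time_entries (task, minutes) VALUES (?, ?)",
                 (task, minutes))
    conn.commit()

server_rows = []  # stand-in for the network database

def try_sync(network_up):
    """Push unsynced rows; leave them queued if the network is down."""
    if not network_up:
        return 0
    rows = conn.execute(
        "SELECT id, task, minutes FROM time_entries WHERE synced = 0"
    ).fetchall()
    for row in rows:
        server_rows.append(row)  # the "insert next time" step
        conn.execute("UPDATE time_entries SET synced = 1 WHERE id = ?",
                     (row[0],))
    conn.commit()
    return len(rows)

record_entry("Foo project", 120)
try_sync(network_up=False)          # nothing pushed; entry stays queued
pushed = try_sync(network_up=True)  # queued entry finally reaches the server
```

The key property is that a failed sync loses nothing: the row simply stays flagged `synced = 0` until a later attempt succeeds.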
Michael
Monday, October 29, 2007
 
 
That's what I had in mind, actually. If that's the case, isn't it more logical to continue working in local mode (saving to the local database) and then synchronize upon program exit?
Phillip Flores
Monday, October 29, 2007
 
 
"synchronize upon program exit?"

What if there isn't a clean exit? A power failure or HD crash? Data would get lost unless you had checks in place.
Gerry Smith
Monday, October 29, 2007
 
 
Run the clients in local mode, and have a server app handle the server-side replication on a schedulable basis.
*myName
Monday, October 29, 2007
 
 
I think you are better off resigning yourself to the application not working when the network is down, rather than even attempting to build an off-line "save".

Otherwise, you will need to deal with what happens when the client goes out of sync with the server, and what happens if the sync process goes horribly wrong.

I believe that would be a version of the "Consensus" problem in Distributed Systems, which is allegedly "impossible" to solve. Unless you are a systems PhD, I'd avoid it.
Object Hater
Monday, October 29, 2007
 
 
Not true, Object Hater. If implemented properly, the system CAN synchronize properly. OP, you would need an auto-incrementing ID that marks each row, so you know up to which row ID you have already uploaded. You should check this every time before deciding which rows in the local database to upload to the server.

["synchronize upon program exit?"

What if there isn't a clean exit? A power failure or HD crash? Data would get lost unless you had checks in place. ]

The data would still be in the local database. You could also synchronize at a convenient time when the system is idle (everybody has gone home and no one is using the app to make transactions). You would need an always-on PC and an always-on Internet/TCP-IP connection.
Ezani
Monday, October 29, 2007
 
 
A lot depends on your DBMS technology.  Some products automate much of this for you.

http://support.microsoft.com/kb/190766
Codger
Tuesday, October 30, 2007
 
 
Ezani:
What do you do if you get a conflict?

e.g.:

Step 1: Client PC A "saves" changes on the server to autonumber N, but not really, because A's network connection timed out.

Step 2: Client PC B saves changes to the same data at autonumber N.

Step 3: Client PC A manages to reconnect and tries to sync.

If you add more complications/errors/replicas, the problem heads into PhD territory fast. I don't know about you, but I'd be too lazy to try to "implement... properly" such a conflict resolution mechanism.
Object Hater
Tuesday, October 30, 2007
 
 
+1 Object Hater.

"Oh, that's EASY to do" is easy to say.
AllanL5
Tuesday, October 30, 2007
 
 
Synchronization isn't rocket science.

I'd recommend using GUIDs rather than autoincrements as your IDs: multiple systems syncing to a central database on autoincrement IDs are going to collide on duplicate IDs, which forces you to keep separate IDs for the local version vs. the centralized version. That is a pain and adds unneeded complexity. Just have an IsSynchronized flag, and any time the record changes, set it to false.

As for resolving conflicts: in such a simple application (a time management app), it seems unlikely that the same record will be modified by two different people at the same time. So first you need to decide whether you even want to deal with this situation, or just go with a "last write wins" approach. If you do want to handle such scenarios, you can either go with the simplest option (locking) or some kind of conflict resolution where you present the user with any conflicts and have them resolve them manually.

For an app like this, I'd probably ignore conflict resolution other than warning the user that the server version has been changed and then give them the choice of which version wins.
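The GUID-plus-dirty-flag scheme described above can be sketched in a few lines. This is a simplified illustration, not the poster's actual code; the field names and the dict standing in for the central database are assumptions:

```python
import uuid

def new_record(task, minutes):
    return {"id": str(uuid.uuid4()),  # globally unique; no autoincrement clashes
            "task": task, "minutes": minutes,
            "is_synchronized": False}

def modify(record, **changes):
    record.update(changes)
    record["is_synchronized"] = False  # any change marks the record dirty

def sync(records, server):
    """Push every dirty record to the server; last write wins."""
    for r in records:
        if not r["is_synchronized"]:
            server[r["id"]] = dict(r)
            r["is_synchronized"] = True

server = {}
recs = [new_record("Foo", 60)]
sync(recs, server)
modify(recs[0], minutes=90)  # edit makes it dirty again
sync(recs, server)           # second sync pushes the update
```

Because the GUID is generated on the client, the same key is valid locally and centrally, so no local-vs-server ID mapping is needed.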

Tuesday, October 30, 2007
 
 
This is roughly how I do it (with smallish data sets)

Keep a baseline copy of the database (from when the server and client were last synced)

User edits database locally

Server's database can also be edited

User sync's with the server

Do a three way compare between the baseline, the server, and the client. This will tell you what has changed for each record.

In most cases, the data can be merged automatically, unless both the server and the client have changed the same field. If so, I have the user merge it manually.

Also, I keep a log of these merges.

It sounds pretty complicated, and the code is a bit verbose, but it works well for me!
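The three-way compare Steve describes can be sketched per field like this (a simplified illustration under the assumption that records are flat field/value maps; the sample data is invented):

```python
def merge_record(baseline, server, client):
    """Three-way merge of one record: baseline vs. server vs. client."""
    merged, conflicts = {}, []
    for field in baseline:
        b, s, c = baseline[field], server[field], client[field]
        if s == b:        # server untouched -> take the client's value
            merged[field] = c
        elif c == b:      # client untouched -> take the server's value
            merged[field] = s
        elif s == c:      # both sides made the same change
            merged[field] = s
        else:             # both changed the same field differently:
            merged[field] = b           # keep the baseline for now
            conflicts.append(field)     # ...and ask the user to resolve it
    return merged, conflicts

base    = {"task": "Foo", "minutes": 60}
server_ = {"task": "Foo", "minutes": 75}  # minutes edited on the server
client  = {"task": "Bar", "minutes": 60}  # task edited locally
merged, conflicts = merge_record(base, server_, client)
```

Here the two edits touch different fields, so the merge is fully automatic; only a same-field double edit would land in `conflicts` for manual resolution, exactly as the post describes.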
Steve
Tuesday, October 30, 2007
 
 
Object Hater:

The Client ID or Branch ID field, as well as the original auto-increment ID from the client side, needs to be stored in the server database. The ClientID + local auto-increment ID combination makes the index unique. The server database itself has its own auto-incrementing record index ID. With that kind of info, you could resolve the network-down problem.
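One way to read this composite-key scheme is that it makes uploads idempotent, which would address the timed-out-save scenario raised earlier: if the client retries an upload after a timeout, the server recognises the (client ID, local ID) pair and does not create a duplicate row. A sketch, with invented names and an in-memory stand-in for the server:

```python
# Server side: (client_id, local_id) -> row. The server also assigns
# its own auto-incrementing record index ID.
server_index = {}
next_server_id = 1

def upload(client_id, local_id, data):
    """Insert a client row; a retry with the same key is a no-op."""
    global next_server_id
    key = (client_id, local_id)
    if key in server_index:
        # Retry after a timed-out "save": row already exists, no duplicate.
        return server_index[key]["server_id"]
    server_index[key] = {"server_id": next_server_id, "data": data}
    next_server_id += 1
    return server_index[key]["server_id"]

first = upload("branch-A", 1, "2h on Foo")
retry = upload("branch-A", 1, "2h on Foo")   # client reconnects and retries
other = upload("branch-B", 1, "30m on Bar")  # same local ID, different client
```

Note this handles duplicate inserts, not conflicting edits to the same logical record; those still need one of the conflict-resolution approaches discussed above.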
Ezani
Tuesday, October 30, 2007
 
 
All the solutions proposed so far seem like too much work to me (my proposal will be no different). My guess is, if Phillip Flores really wants to do this, he should make the user interface do all the work:

The GUI should have an "unsaved" flag for the data (perhaps by putting it in italics, or even making it blink) so it can show "saved" and "unsaved" data side by side, and perhaps a "stale" flag as well (in a grey font, maybe?).

If you make a change, it goes italic. It is saved locally, so if you close the application and re-open it, the change to the field is still there and the field is still italic. If the connection is up, the app saves to the db and, on success, the text shows as normal again. If the connection goes down, the user gets a warning.

The data should periodically refresh, and it should also fade in colour in some proportion to the time since the last refresh, going totally grey if you are disconnected for long enough (with another warning?).

You could also track the last good value for a field. If a refresh causes the data to change, the GUI warns the user that someone else is altering the data at the same time, and perhaps flags the data, e.g. by making it blink in red. These "data-changed" warnings could perhaps be activated only if the app has been running continuously with no disconnect.

To implement this, you need to uniquely identify each datum that could possibly be displayed (probably with a primary key), and keep three copies of each datum on the client side.

You would also probably need a "last write" timestamp on the server to manage staleness, which you would need to handle carefully, probably transactionally. You would need to adopt a "server wins" resolution policy, and a "client assumes staleness" policy for the case where a network error leaves a transaction result unknown to the client.
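The per-field display states described above amount to a small state machine. A sketch under the assumption of three states (saved/normal, unsaved/italic, stale/grey); the names and transitions are illustrative, not a complete implementation of the blinking and colour-fading behaviour:

```python
from enum import Enum

class FieldState(Enum):
    SAVED = "normal"     # confirmed on the server
    UNSAVED = "italic"   # edited locally, not yet acknowledged
    STALE = "grey"       # no successful refresh for too long

def on_edit(state):
    return FieldState.UNSAVED          # any local change marks it unsaved

def on_save_ack(state):
    return FieldState.SAVED            # server confirmed the write

def on_refresh_timeout(state):
    # "Client assumes staleness": a saved field goes grey when the
    # connection is lost, but an unsaved local edit stays unsaved.
    return FieldState.STALE if state == FieldState.SAVED else state

s = FieldState.SAVED
s = on_edit(s)             # user types -> italic
s = on_save_ack(s)         # db write succeeds -> normal
s = on_refresh_timeout(s)  # connection drops -> grey
```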
Object Hater
Wednesday, October 31, 2007
 
 
Software design is about making compromises.

It is easier to write the application if it requires constant access to the database.
It is better for the user if the application can still work even when the network cable is broken or the database server is down for maintenance.

You have to decide how much effort to put in, based on your own available resources and how good you want things to be for the user.


One possible option is to take a cue from accountants: you never edit records; you just add new entries that modify the final value. Entries are not "final" until all clients have committed their updates to the server, but each client can modify data all day long and there will be no conflicts. Any report run against the server is simply accurate as far as the available data goes; from the point of view of that report, it doesn't matter whether the missing data is notes written in a book until the network is fixed, or notes stored on a hard drive until the network is fixed. The fact is that there will be data that isn't on the server but "should" be, and users who don't understand that will be disappointed no matter what you do.

"Worked 2 hours on Foo project" followed by "delete half an hour from time spent on Foo, I forgot I took a break" adds up to a display reading "1.5 hours on Foo" regardless of what the network was doing.

Look at it this way - if the application stops working when it can't see the database, users still have to remember to enter the data later on. They can write it on paper, or your application can remember for them - which do you think would be better?

That may not suit for all types of data, but it can work for some.

If you store local data in a transactional database, it will be as secure and reliable as anything else in that database. While that may not be 100% perfect, it's good enough for most purposes, and I don't believe your time tracking application is that much more important.

And a time-tracking application could well be more useful if it can work on a laptop while someone is out of the office, and automatically update the central server whenever a suitable network connection is available. WiFi might be everywhere you go, but it isn't everywhere for everyone.


Synchronise on program exit, startup, or when running and the connection comes back.

Consider using pre-written messaging software (no, not instant messaging chat applications) or having a service running in the background to do the synchronisation automatically whenever possible.

Thursday, November 01, 2007
 
 
I quite agree with the "accountant" analogy you used, unnamed poster. And I think the latest view must be "accurate" and "complete". So, following your earlier analogies, I would probably pick a common date and time by which complete records are available from all branches, giving a complete and accurate view, rather than just taking the whole database's latest figures only to find that Branch XYZ has not posted its data yet! (Some completeness verification probably has to be done.)
Ezani
Sunday, November 11, 2007
 
 

This topic is archived. No further replies will be accepted.

Other recent topics
 
Powered by FogBugz