The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Handle thousands of transactions every second


I'm making an application with a Windows Forms frontend (C#) and MS Sql Server 2005 as the backend.

There are two different processes running all the time.
1. Something like a billing system, with a couple of transactions every couple of minutes (on average)
2. A process which requires me to handle over a thousand transactions with the db every couple of seconds.

The problem is that I'm unsure how to design my data layer so that the system can handle the thousands of transactions which take place every second or so, ALL THE WHILE making sure not to delay (or, heck, even hold up) the transactions from process #1.

Could you guys please drop in a few suggestions to combat the problem?

Barry Gyllenhall Send private email
Monday, September 25, 2006
I suggest either a basic nonblocking design, or a basic threaded design.

I normally suggest fork() as another alternative, but I don't think that works very well in Windows.

(I do not suggest one thread per transaction, unless you use very lightweight threads).
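In sketch form, the threaded option might look like this (Python for brevity; the queue size, worker count and handler are all illustrative): a small fixed pool of workers drains a shared queue, so no transaction gets its own thread.

```python
import queue
import threading

def start_workers(txn_queue, handle_txn, worker_count=4):
    """Start a small pool of threads that drain a shared queue."""
    def worker():
        while True:
            txn = txn_queue.get()
            if txn is None:          # sentinel: shut this worker down
                txn_queue.task_done()
                break
            handle_txn(txn)
            txn_queue.task_done()
    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(worker_count)]
    for t in threads:
        t.start()
    return threads

# Usage: the reader thread enqueues, the pool processes.
q = queue.Queue(maxsize=10000)
results = []
results_lock = threading.Lock()

def handle(txn):
    with results_lock:
        results.append(txn * 2)

workers = start_workers(q, handle)
for i in range(100):
    q.put(i)
for _ in workers:
    q.put(None)       # one sentinel per worker
q.join()              # block until everything is processed
```

The same shape maps to C# 2.0 as a Queue guarded by a lock plus a few long-lived threads.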
Arafangion Send private email
Monday, September 25, 2006
I also suggest talking to a good SQL Server DBA to see how to exploit the features of SQL Server.
Monday, September 25, 2006
Cache some DB data in your process so that you require fewer DB round-trips.

Allow 'scale-out' so that the load is distributed across more than one peer.
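For illustration, a hypothetical in-process cache (Python; the lookup function and its names are invented, not from the OP's app) that turns repeated identical lookups into a single round trip:

```python
import functools

calls = []  # records how many real "round trips" happened

def query_reader_id_from_db(hw_address):
    """Stand-in for a real database lookup (one round trip per call)."""
    calls.append(hw_address)
    return len(hw_address)  # dummy id

@functools.lru_cache(maxsize=1024)
def reader_db_id(hw_address):
    """Cached wrapper: repeat lookups never touch the database."""
    return query_reader_id_from_db(hw_address)

# Four lookups, but only two actual round trips.
ids = [reader_db_id(a) for a in ["aa:01", "aa:02", "aa:01", "aa:01"]]
```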
Christopher Wells Send private email
Monday, September 25, 2006
Sounds like a trading system... :)

I wrote these using a limited multithreading architecture in C++ (I could never get the throughput I needed under a managed programming language), in combination with a three level priority queue (similar to what the Linux kernel uses for internal task priority management). The threading is for input and output only, the three-level priority queuing is for internal calculations. UDP is used for any network communication (shaves a good amount of time in comparison with TCP).
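In sketch form (Python rather than C++ for brevity; the level names and work items are invented), a three-level priority queue just pops from the highest non-empty level first:

```python
from collections import deque

class ThreeLevelQueue:
    """pop() always serves the highest-priority non-empty level."""
    HIGH, NORMAL, LOW = 0, 1, 2

    def __init__(self):
        self._levels = [deque(), deque(), deque()]

    def push(self, item, level):
        self._levels[level].append(item)

    def pop(self):
        for level in self._levels:
            if level:
                return level.popleft()
        return None  # nothing queued at any level

q = ThreeLevelQueue()
q.push("recalc-positions", ThreeLevelQueue.LOW)
q.push("fill-order", ThreeLevelQueue.HIGH)
q.push("update-quote", ThreeLevelQueue.NORMAL)
drained = [q.pop(), q.pop(), q.pop(), q.pop()]
```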
Andrey Butov Send private email
Monday, September 25, 2006
+1 to minimizing trips to the DB

If you can group a bunch of the 1000s of txns into one db call, you'll likely see a big improvement in throughput
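One hypothetical way to do the grouping, sketched in Python (the table and column names are made up): join many INSERT statements into one command, so each batch costs a single network round trip instead of one per row.

```python
def batch_insert_sql(rows, batch_size=500):
    """rows: (tag_id, reader_id) integer pairs. Yields one SQL command
    per batch of up to batch_size rows: many INSERTs joined together,
    so each batch is a single round trip instead of one per row."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        yield "; ".join(
            "INSERT INTO Position (TagId, ReaderId) "
            "VALUES (%d, %d)" % (tag, reader)
            for tag, reader in batch)

batches = list(batch_insert_sql([(1, 10), (2, 10), (3, 11)], batch_size=2))
```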
Mike S Send private email
Monday, September 25, 2006
I wonder what the app is? Even for a trading system it doesn't quite add up. NASDAQ does about 5 million trades/day, and if you assume 100 active participants and only 2 busy hours, that works out to less than 10 per second per participant (or am I having a Math Moment again?)
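Checking that arithmetic (the 100-participant and 2-busy-hour figures are assumptions, not NASDAQ facts):

```python
trades_per_day = 5_000_000
participants = 100
busy_seconds = 2 * 60 * 60   # assume all trades land in the 2 busy hours

total_rate = trades_per_day / busy_seconds    # ~694 trades/sec market-wide
per_participant = total_rate / participants   # ~7 trades/sec each
```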
Greg Send private email
Monday, September 25, 2006
What goes in must also come out.  In Oracle deletion of old data takes longer than insertion of new data.  I expect SQL Server is the same. 

You might find that getting rid of all the old data is a MAJOR headache!
Monday, September 25, 2006
Agreed with DBA; careful planning on the database end will save a lot of problems.

Monday, September 25, 2006
+1 Database optimisation.

You'll likely spend most of your time either in network round trips to the DB or resolving locks on the tables.  Minimise both.

Also, analyse your query requirements (are they mostly SELECT or mostly INSERT) and optimise your tables to make those queries go faster (indexes, or lack of indexes).

- James.
James Birchall
Monday, September 25, 2006
Thanks guys.

The application is something like this: it's a warehouse app where every "item" (read: object) is tagged with an RFID tag.

The entire warehouse is planned with a reader every few feet (because of the range), and for security reasons I need to be sure of the position of each of the objects within the warehouse.

Things will be moving around, so I also need to keep track of the positions of all of the objects as they move through the system.

The important thing is to insert the position (the Id of the RFID reader which picked up the signal) and save it.
Actual checking of the records to display positions won't be more than 10 or so queries per day.

And I also need to archive everything that is over 24 hours old and keep it for a 7-day period.

Any tips on how to handle the mass stream of data coming in from the readers into the application?

Thanks again everyone :)
Barry Gyllenhall
Tuesday, September 26, 2006
Re: mass stream of data from readers

Filter out the uninteresting data, e.g. item x is still in the same position as last time, item x has moved 2 feet since last time, etc., so your database only has to deal with: item x is here, item x is moving, item x has stopped here.
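In sketch form (Python; the event labels are illustrative): keep the last known reader for each tag and emit a row only when it changes.

```python
def filter_reads(reads, last_seen=None):
    """reads: iterable of (tag_id, reader_id) pairs.
    Returns only the interesting events; unchanged positions
    are dropped before they ever reach the database."""
    if last_seen is None:
        last_seen = {}
    events = []
    for tag, reader in reads:
        prev = last_seen.get(tag)
        if prev is None:
            events.append((tag, reader, "appeared"))
        elif prev != reader:
            events.append((tag, reader, "moved"))
        # prev == reader: same position as last time, drop the read
        last_seen[tag] = reader
    return events

events = filter_reads([(1, 10), (1, 10), (1, 11), (2, 10), (2, 10)])
```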
Mike S Send private email
Tuesday, September 26, 2006
I wonder if this is a valid case for a memory-resident database sitting as a cache to the disk-based RDBMS, in the manner that TimesTen can with Oracle?

That aside, there may be a good case for caching RFID reads in the application and then sending them to the database in bulk. You'll scale better with 100 database transactions of 10 real-world transactions each (or 10 of 100) per second than you would with 1000 x 1.
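A minimal buffer-and-flush sketch of that caching idea (Python; the flush threshold is arbitrary): reads accumulate in memory and go out in groups.

```python
class ReadBuffer:
    """Accumulate RFID reads and hand them to flush_fn in bulk."""
    def __init__(self, flush_fn, max_size=100):
        self._flush_fn = flush_fn
        self._max_size = max_size
        self._pending = []

    def add(self, read):
        self._pending.append(read)
        if len(self._pending) >= self._max_size:
            self.flush()

    def flush(self):
        """Send whatever is pending as one bulk call (e.g. one db txn)."""
        if self._pending:
            self._flush_fn(self._pending)
            self._pending = []

batches = []
buf = ReadBuffer(batches.append, max_size=10)
for i in range(25):
    buf.add((i, i % 3))
buf.flush()   # push out the final partial batch
```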
David Aldridge Send private email
Tuesday, September 26, 2006
+1 to Mike S.

This will turn your thousands of inserts into thousands of selects, and then you can add indices (and maybe caching) to speed those up. Then batch up the inserts, and you probably won't need to do more than one batch of inserts a second or so.
Inigo Send private email
Tuesday, September 26, 2006

Maybe it isn't JUST NASDAQ, and there are many more than 100 participants. AND every change in Bid/Ask is also an update.

I believe our company is processing something on the order of 25,000 transactions per second!
Tuesday, September 26, 2006
Tangent, sorry;

are you sure that RFID is a good technology for tracking the position of items 'for security reasons'?

If security is important, and there are potentially people in the warehouse who have a reason for wanting to move something without you knowing, then RFID is hardly going to stop them.
Architecture Astronaut
Wednesday, September 27, 2006
Mike S, David A:
Yeah, that's what I've been thinking too: have some local processing to filter out unnecessary inserts into the DB, and execute them in batches.

Architecture Astronaut:
The number of people in the warehouse is very small.

And RFID is the only way we can keep track of the smallest to the largest of items, the boxes where they are stored and also the containers which are shipped.

Thank you so much everyone.
+1 to everyone :)
Barry Gyllenhall
Wednesday, September 27, 2006

This topic is archived. No further replies will be accepted.
