The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

low latency, high throughput distributed apps

Can someone recommend books, articles, or frameworks for creating applications that have very low latency and very high throughput, and are distributed?  Basically, the kinds of applications used by financial companies to manage their market data and trading.

I am interested in more than a million 'pieces' of information flowing through the system per second.  Since I am interested in learning the architecture of such applications and the algorithms involved, I don't care what language the author uses.

As far as I can tell, I have to learn at least a little about network topologies and other non-software issues.  I probably also need to understand TCP/IP, and importantly, when not to use it.  The best way to distribute an application, etc., etc.

falcon Send private email
Tuesday, January 15, 2008
High scalability for web sites:
Mark Pearce Send private email
Tuesday, January 15, 2008
"Building Scalable Web Sites: Building, scaling, and optimizing the next generation of web applications"

But I would like to know about other resources.
Oltmans Send private email
Tuesday, January 15, 2008
I should have clarified that I am not looking to build scalable websites (unless someone shows how to respond to a million hits per second on a dual or quad core machine). 

I am interested in applications which are much lower level than web servers.  In these applications, milliseconds matter.  Half a second is an eternity.

I am interested in the kinds of systems used by trading companies.  One has to be able to receive a flood of market data, filter it, perform basic manipulation, receive client orders, validate them, send them to another process for further manipulation, get back results, etc.
falcon Send private email
Tuesday, January 15, 2008
Well, from the point of view of a hedge fund developer who works with exactly that, I would look at the white papers from Tibco and 29West.  They are the two major players.  They use publish and subscribe via multicast to distribute the data across a network.
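The publish/subscribe dispatch described above can be sketched in-process; this is a minimal, hypothetical sketch (the class and topic names are invented), and in real middleware like Tibco RV or 29West the fan-out happens over UDP multicast rather than a Python dict:

```python
from collections import defaultdict

class PubSub:
    """Minimal in-process publish/subscribe: topic -> list of callbacks.
    Real market-data middleware does this fan-out over UDP multicast."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of this topic; with a multicast
        # transport, the network does this fan-out for you.
        for callback in self._subscribers[topic]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("quotes.IBM", received.append)
bus.publish("quotes.IBM", {"bid": 102.5, "ask": 102.6})
bus.publish("quotes.MSFT", {"bid": 35.1, "ask": 35.2})  # no subscriber: dropped
print(received)  # [{'bid': 102.5, 'ask': 102.6}]
```

The key property is that publishers never know who the subscribers are, which is what lets these systems scale out.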

You can also look at MSMQ and MQSeries.
John Schroder Send private email
Tuesday, January 15, 2008
I have also used Tibco and other custom developed solutions at various companies. 

I want to learn how Tibco, 29West write their own system.  In other words, if I had 10 linux machines, some networking equipment and compilers/interpreters for C/C++/Java/Erlang, how would I architect/design/implement such systems?

This is not for an actual project, more of a learning exercise :)
falcon Send private email
Tuesday, January 15, 2008
You may want to look at ; it is an open source project similar to Tibco.
John Schroder Send private email
Tuesday, January 15, 2008
Look at "Enterprise Integration Patterns"; it is not directly what you want, but it should give you a good idea of how these systems are designed. While I have not worked on one of these applications, I would expect that ordered queues/topics are used, with very fine-grained concurrency control. It would therefore probably be beneficial if you had experience writing lock-free concurrent code, using futures, etc. That is the high-level and low-level picture, but I have no clue as to what happens in the middle.
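The ordered-queue-plus-futures pattern mentioned above can be sketched with the standard library; this is a hypothetical toy (the doubling "processing" stands in for real validation/enrichment), not how any particular vendor does it:

```python
import queue
import threading
from concurrent.futures import Future

# One ordered queue feeding a worker preserves message ordering; the
# caller gets a Future per message so it can block on the result later.

def worker(q):
    while True:
        item = q.get()
        if item is None:  # sentinel: shut down
            break
        fut, value = item
        fut.set_result(value * 2)  # stand-in for real processing

q = queue.Queue()
t = threading.Thread(target=worker, args=(q,), daemon=True)
t.start()

futures = []
for tick in [1, 2, 3]:
    fut = Future()
    q.put((fut, tick))   # enqueue in order
    futures.append(fut)

results = [f.result() for f in futures]  # results come back in order
q.put(None)
t.join()
print(results)  # [2, 4, 6]
```

In a real system you would shard to many such queues (one per instrument, say) so different symbols are processed concurrently while per-symbol ordering is preserved.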

I've heard that Coherence is very popular in the financial industry, and this would greatly reduce DB latency. I wrote a very similar framework using open source components (ehcache, memcached), and a good caching layer is definitely a major improvement. You might want to look at the C code for memcached to get a good idea of how a highly concurrent service can be written.

You may wish to investigate, very lightly, real-time systems. While the domains are not directly related, good solutions are usually adopted/rediscovered by groups in different areas. Understanding how those systems solve low-level latency issues should give you an idea of the same techniques used by large systems. They won't be exactly the same, but close enough in spirit to be valuable.
Benjamin Manes Send private email
Tuesday, January 15, 2008
I've worked on these systems at a former job.  The big players have distribution networks of servers around the country that are fed from the server feeds at the exchanges.  A tree structure.  Each server receives a multicast packet from an upstream NIC and broadcasts it on down to those below it on a separate NIC/network.

At the bottom of the distribution network is a server that is receiving all of that data at a client site, and then distributes the data out to the client PCs.  I've seen two architectures for that:
1. A single-threaded app that receives the broadcast, looks in tables to see who should receive the data, and then writes it out to a multicast socket (or writes to a socket for each client).  Basic TCP/IP.  select is called a lot to be sure there really is data to send/receive, to cut down on waits.  I also saw a variant of this where each client had its own socket.

2. The other architecture had one thread per client.  A main reader thread received the multicast message and dropped it into a queue.  Some thread (I am not sure which) grabbed the message, figured out which client it should go to, and dropped it into that client's queue.  Then (I believe) each client had a thread that read from its queue and wrote out to that client's socket.
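Architecture #2 can be sketched with threads and queues; everything here is hypothetical (client IDs, the prefix-based routing rule, and lists standing in for sockets), but the shape matches the description: one reader fans out into per-client queues, and one writer thread drains each queue:

```python
import queue
import threading

# Per-client queues decouple the feed reader from slow clients: a stalled
# client backs up only its own queue, never the reader.
client_queues = {cid: queue.Queue() for cid in ("A", "B")}
sent = {cid: [] for cid in client_queues}

def writer(cid):
    while True:
        msg = client_queues[cid].get()
        if msg is None:  # sentinel: shut down
            break
        sent[cid].append(msg)  # stand-in for socket.send()

writers = [threading.Thread(target=writer, args=(cid,)) for cid in client_queues]
for t in writers:
    t.start()

def route(msg):
    # Decide which clients care about this message (toy rule: symbol prefix).
    return [cid for cid in client_queues if msg["symbol"].startswith(cid)]

feed = [{"symbol": "AAPL", "px": 1}, {"symbol": "BA", "px": 2}]
for msg in feed:               # the "main reader thread"
    for cid in route(msg):
        client_queues[cid].put(msg)

for q in client_queues.values():
    q.put(None)
for t in writers:
    t.join()
print(sent)
```

Each client's messages stay in feed order because a single producer fills each queue.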

I don't know what kind of performance approach #1 could achieve; it was _ancient_ code and fell over a lot.  I wish I could remember the numbers I heard for #2; I was blown away.  I want to say it was on the order of 10-20 million updates per second (coming from the upstream feed), but that's a foggy memory.  I was really impressed, though.  This was on Windows, too.

Oh, and there was amazing compression on the data flowing through.  Not Lempel-Ziv or anything like that, but more like "well, I sent 27.02 recently, and the price is now 27.04, so send the smallest representation/encoding of .02 that I can".  Certainly not full 8-byte floating-point numbers being blasted around.

Sorry I can't be more specific.  It's been a while, and I probably shouldn't be more specific even if I could remember.
anon for this
Tuesday, January 15, 2008
Object Hater
Wednesday, January 16, 2008
BillAtHRST Send private email
Wednesday, January 16, 2008
Thanks folks for the useful information and interesting links.
falcon Send private email
Friday, January 18, 2008
Sounds like you're interested in CEP (complex event processing).

There are a few vendors for this kind of thing, but it would definitely be worth checking out Progress Apama.

It's basically a big event processing engine, which you feed in all your events and it reacts very very quickly to the patterns you want.

I've built exactly this kind of thing before (you're talking algo trading right?) - generally you're looking at a few things:

1) Market data feeds. This is obviously critical, and it needs to get you huge quantities of data very quickly. At my current place of employment we've built our own enterprise wide market data system - we run multiple instances and have client side libraries to talk to it.
2) Order management system. This should be highly scalable (i.e. can add extra instances sharing the same database view).
3) Adapters to transform events from the market and the OMS to the desired format, which pass these events on to
4) Some kind of event router. This has a striping strategy to divide the work up between algo engines
5) A complex event processing engine. This can be scaled using all the usual patterns (striping, failover & recovery etc)
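The striping strategy in step 4 is typically a stable hash of some partition key; this sketch is an assumption about how such a router might work (the symbols and engine count are invented). A stable hash like MD5 is used rather than Python's built-in `hash`, which varies between runs:

```python
import hashlib

NUM_ENGINES = 4  # hypothetical pool of algo engines

def stripe(symbol, num_engines=NUM_ENGINES):
    """Map a symbol to an engine index. Hashing the symbol guarantees
    that all events for one symbol always land on the same engine, so
    per-symbol ordering is preserved across the pool."""
    digest = hashlib.md5(symbol.encode()).digest()
    return digest[0] % num_engines

events = ["IBM", "MSFT", "IBM", "GOOG"]
assignments = [stripe(s) for s in events]
# The two IBM events always go to the same engine:
print(assignments[0] == assignments[2])  # True
```

The trade-off is that a hot symbol cannot be split across engines; some routers add a second-level key (e.g. order ID) for that case.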

One of the biggest issues with building this kind of system is creating a good testing capability: it's usually very difficult to simulate production loads and to recreate decisions that algorithms may have made based on the market data. If you give the testing frameworks a lot of thought from the outset, it'll pay off in the long term.
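One common shape for such a test capability is a record-and-replay harness: capture timestamped production events, then feed them back into the system under test, optionally time-compressed. This is a minimal sketch under those assumptions; the function and tape format are invented for illustration:

```python
import time

def replay(recorded, handler, speedup=float("inf")):
    """Replay a tape of (timestamp_seconds, event) pairs in order.
    With a finite speedup, sleep to reproduce the original inter-event
    gaps (scaled down); with the default infinite speedup, run flat out."""
    prev_ts = None
    for ts, event in recorded:
        if prev_ts is not None and speedup != float("inf"):
            time.sleep((ts - prev_ts) / speedup)
        handler(event)
        prev_ts = ts

seen = []
tape = [(0.0, "tick1"), (0.5, "tick2"), (1.0, "tick3")]
replay(tape, seen.append)  # infinite speedup: no sleeping
print(seen)  # ['tick1', 'tick2', 'tick3']
```

Replaying the same tape makes algorithm decisions reproducible, which is exactly what ad-hoc load generators fail to give you.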
Monday, February 11, 2008
I'm currently using a CEP product, Coral8, and I love it.  I'd like to know what goes into building one! 

CEP works great for many things, but the inability to quickly create functions (and therefore abstractions) and the lack of easy-to-use global state can sometimes force awkward workarounds.  I've also found that missing the kinds of things programmers usually take for granted, such as good namespace organization and the ability to easily use streams from different modules, makes the code slightly more complex than it needs to be.

I'm not planning on building such a product.  As a developer, I'm just curious.  It is one thing to understand how to pass streams through filters, projections, accumulators, etc.  It is a whole other thing to understand how to do it extremely fast, in a scalable manner, with the kind of confidence required by trading software.
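The filter/projection/accumulator pipeline mentioned above can be sketched with plain generators; this is a toy illustration, not how Coral8 or any CEP engine is implemented (a real engine evaluates these incrementally over unbounded streams):

```python
# Three classic stream operators: filter, project (select columns),
# and accumulate (a running grouped aggregate).

def filter_stream(stream, predicate):
    return (e for e in stream if predicate(e))

def project(stream, fields):
    return ({k: e[k] for k in fields} for e in stream)

def accumulate(stream, key, agg_field):
    totals = {}
    for e in stream:
        totals[e[key]] = totals.get(e[key], 0) + e[agg_field]
    return totals

ticks = [
    {"symbol": "IBM",  "size": 100, "px": 102.5},
    {"symbol": "IBM",  "size": 200, "px": 102.6},
    {"symbol": "MSFT", "size": 50,  "px": 35.1},
]
big = filter_stream(ticks, lambda e: e["size"] >= 100)  # drop odd lots
slim = project(big, ("symbol", "size"))                 # keep two fields
volume = accumulate(slim, "symbol", "size")             # volume by symbol
print(volume)  # {'IBM': 300}
```

The hard part the poster asks about is doing this incrementally, in parallel, and with bounded latency; the operator algebra itself is the easy half.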
falcon Send private email
Monday, February 11, 2008

That's one of the nice things about Apama - it provides a nice abstraction model and a graphical tool to model your algorithms.

Essentially, you can code up things called 'smartblocks' - these are reusable pieces of logic (e.g. an order management block, or a VWAP calculator) and plug them into larger state machines called scenarios (built using a GUI).

The scenarios themselves are basically state engines, i.e. each state has any number of rules, which are processed in order. If a rule's conditions evaluate to true, you execute particular actions (which may involve calling methods on the blocks) and then optionally move to another state.
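A minimal version of that rule-driven state engine might look like this; the class and the order/fill states are hypothetical illustrations, not Apama's actual model:

```python
class Scenario:
    """Each state holds an ordered list of (condition, action, next_state)
    rules; on each event, rules are checked in order and the first match
    fires its action and (optionally) transitions the state."""
    def __init__(self, initial, rules):
        self.state = initial
        self.rules = rules  # {state: [(condition, action, next_state), ...]}

    def on_event(self, event):
        for condition, action, next_state in self.rules.get(self.state, []):
            if condition(event):
                action(event)
                if next_state is not None:
                    self.state = next_state
                return  # only the first matching rule fires

fills = []
rules = {
    "WAITING": [(lambda e: e["type"] == "order", lambda e: None, "LIVE")],
    "LIVE":    [(lambda e: e["type"] == "fill",  fills.append,   "DONE")],
}
sm = Scenario("WAITING", rules)
sm.on_event({"type": "order", "qty": 100})  # WAITING -> LIVE
sm.on_event({"type": "fill",  "qty": 100})  # LIVE -> DONE, record the fill
print(sm.state)  # DONE
```

The ordered-rules-per-state structure is what makes these scenarios easy to draw and inspect in a GUI.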

Once you're done designing blocks/scenarios, you generate code for it and inject it into the running CEP engine.

It might sound like I'm a salesman for Apama (!), but it's the only product I've used in this space (and it's custom-built for exactly this kind of problem), and it is fairly well known around the industry.
Tuesday, February 12, 2008

This topic is archived. No further replies will be accepted.
