The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Scalable Banking Solutions

Banking is a fast-changing, ever-growing and mission-critical sector. I want to know how core banking solutions are made scalable to meet ever-changing requirements. New schemes and products are added regularly, and each has its own way of working. Secondly, rules imposed by the government change, and those changes have to be accommodated successfully in the existing solution.

What strategy is followed? In a simple application, we would need to redesign schemas and front-ends and add functionality. How do these solutions manage this?
RK
Sunday, August 26, 2007
 
 
I'm surprised anyone describes banking as fast-changing. It might change at a slightly less glacial pace than accounting.
Cade Roux
Sunday, August 26, 2007
 
 
>> What strategy is followed? In a simple application, we would need to redesign schemas and front-ends and add functionality. How do these solutions manage this? <<

They do it like any other software development process. The business comes up with a need, they write requirements, the coders code, the testers test, the managers manage, and the vice presidents fight amongst each other to get the biggest office.
xampl
Sunday, August 26, 2007
 
 
I work in a bank. The major aspect of scalability is big $$$. Have you worked with load balancing, clusters, SANs, dedicated Fibre Channel? All multi-million-dollar hardware. The other thing is big teams, where you have a dedicated load-testing environment. Also, scalable != performance.

I also interface with our legacy (but most important) core banking system, which still runs on a mainframe.

Also, your notion of a rapidly changing legislative environment is simply not true. The laws are made in consultation, so the bank already has some idea of what is coming, and compliance kicks in about 1-3 years after the laws are passed.
anon banking dev
Sunday, August 26, 2007
 
 
Big Iron.
Jussi Jumppanen
Monday, August 27, 2007
 
 
The main problem I found in the banking industry is that they think about scalability TOO MUCH. They think about it before and during, and often instead of, any actual software design.

There's this assumption that they need to process huge volumes of transactions, and that implies that they need multiple databases and multiple servers, and that means they need to go buy some "middleware" to join it all together. Because if you buy enough middleware, any old code will run the bank...

The problem is that the kind of middleware banks have a habit of buying tends to get in the way of scalability rather than doing anything useful.

Then they use the promise of the middleware to write processes which are WAY too small[3], and then everything spends its entire time serialising and deserialising comms.

There's a pitch for having reusable components about the place with nice networked interfaces. But I've seen it get to the stage where what would, in any reasonable design, be classes are actually individual applications[2], and hence every inter-class method invocation is a network operation.

We're talking about things like "compute compound interest" being on the other side of an XML-over-HTTP request...
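To make that concrete, here's a minimal sketch in Python (the service URL, the XML layout and the reply format are all invented for illustration, not taken from any real system): the calculation itself is one line of arithmetic, and everything else in the "service" version is packaging and transport.

import urllib.request
import xml.etree.ElementTree as ET

def compound_interest(principal, annual_rate, periods_per_year, years):
    # Local version: one line of arithmetic, no I/O at all.
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

def compound_interest_remote(principal, annual_rate, periods_per_year, years,
                             url="http://interest-service.example/calc"):  # hypothetical endpoint
    # "Enterprise" version of the same thing: build an XML request, POST it over
    # HTTP, wait for the reply, then parse the XML back out, all for one number.
    body = ("<request>"
            f"<principal>{principal}</principal>"
            f"<rate>{annual_rate}</rate>"
            f"<n>{periods_per_year}</n>"
            f"<years>{years}</years>"
            "</request>").encode("ascii")
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req) as resp:
        # Assumes the (hypothetical) reply contains an <amount> element.
        return float(ET.fromstring(resp.read()).findtext("amount"))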

Just because now every component CAN talk to every other component doesn't mean it should...

This, unsurprisingly, creates scalability problems which don't remotely stem from having a lot of customers trying to talk to you.


A lot of these systems could be simplified with some nice "plain-text over TCP" type connections; I seem to have spent an awful lot of my life decoding massive XML-over-HTTP things to obtain "instrument = price" pairs, which could just have been streamed over a TCP link in ASCII. And then we wouldn't have been constantly chasing the renamings of the tags for internal political reasons[1].
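For what it's worth, the whole "plain text over TCP" alternative is roughly this much code. A minimal sketch in Python, with the framing (one instrument=price pair per ASCII line) and all names invented for illustration:

import socket

def stream_prices(sock, quotes):
    # Writer side: one "instrument=price" pair per line, plain ASCII.
    for instrument, price in quotes:
        sock.sendall(f"{instrument}={price}\n".encode("ascii"))

def read_prices(sock):
    # Reader side: yield (instrument, price) pairs as complete lines arrive.
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            instrument, _, price = line.decode("ascii").partition("=")
            yield instrument, float(price)

No schema, no tag names to argue about; renaming a field is a one-line change on each side.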

Heavyweight comms protocols, heavyweight middleware... and then you suddenly need lots of machines to run all this on, and that gets you a sysadmin nightmare. Brrr.


What strategy is followed?? None. Well. Effectively none. This week we'll be doing "agile". Next week someone will decide that offshored Java is the way to go. The week after that, we're going back to using PLSQL to do as much of the work as we can. That's all while moving offices, and under notice of possible redundancy. The next redundancy round will follow the reorganisation the week after next...

It adds up to "no strategy at all" over any span of time longer than about two months.



[1] No, really. The US team simply didn't want to use the same tags as the UK team. So what was actually on the wire varied depending on who'd won the most recent argument about this. Eventually the answer was... have another layer in the middle doing the translations so they could both use the ones they wanted...

[2] With all the associated "starting them", "stopping them", "finding them", "hacking the binary to get it run on a differently configured machine because we've lost the source" heartaches.

[3] This conveniently means the applications "don't need" proper developers. This is called "complexity smearing": the scary complexity disappears from the design of any individual component. And if all the components are simple, the system must be simple and only needs cheap developers, right?[4]

[4] This is because banks, or at least retail banks (investment banks have a much better handle on this), believe IT to be purely a cost centre which adds nothing to the business. They faintly imagine there will be a day when they can fire all these damned IT people and get back to nice simple banking -- completely ignoring the fact that the bank is utterly reliant on the IT. The problem is that it very rarely fails completely. If it were turned off a couple of days a year at random, they might understand just how little work they can do without it. The constant minor glitches just make the rest of the business annoyed, and don't really explain why they need to invest properly.
Katie Lucas
Monday, August 27, 2007
 
 
Guess this comes with the industry.

Think about... are you ready? Here it goes:
High-Performance Distributed Grid Solution for an Excel Application.

This was a real (and successful) application of HPC to a legacy, computation-intensive Excel VBA application. The entire front end and back end are implemented in VBA, with a number of COM-layered calls to external services, but it now runs simultaneously on dozens of servers.

PS: hope this makes you feel better about the "XML vs. plain-text" kind of inefficiencies.
Nomd Eplume
Monday, August 27, 2007
 
 
"Also your notion of rapidly changing legislative environment is simply not true. The laws are made in consultation"

Nice country you have there. In Poland, laws are passed by a bunch of morons, many of them with criminal convictions under their belts. And they hate banks, 'cause they're "foreign owned and steal Polish capital".
Roman Werpachowski
Monday, August 27, 2007
 
 
>What strategy is followed?
Meetings. Great gobs of meetings. Meetings of the "put a pile of $100 bills on the table and light them on fire" size. Imagine conference calls with 5-100 people that last 15-240 minutes. Daily. A calendar full of f*ing meetings. My guess is that they blew $100k/day on conference calls for about a year of the project I'm stuck on.

>Secondly, rules imposed by the government change, and those changes have to be accommodated successfully in the existing solution.
You need to include lawyers among your business analysts. See the above estimate for the daily meeting burn rate. One has at least a year of lead time before major new regulations come into effect.

>New schemes and products are added regularly and each has its own way of working.
Real world: nothing gets thrown out. Ever. Not even that 40-year-old thing with a punch-card reader.

The financial institution that is a client of our company has multiple parallel development/testing chains leading to production. It takes about 6 (or more) months to progress through the integration/system testing chains to get to production. Since they want new "products" quarterly, they have to have a minimum of 3 parallel chains. They prefer Java. And they prefer big iron: big, expensive IBM boxes that go into distributed data centers that I'm not allowed into.

Mountains of documents and specs. Almost 1 foot of shelf space of binders of documents for our tiny part of this project.

Recommendations: a wiki to store working notes, a CMS to store specs and documents.

Mandatory: version control and configuration management need to be nailed down and well under control. Especially database version control. Or you die. Strict requirement tracing: what banking/accounting rule does this module implement? If FASB Statement 157 changes, how many modules have to change? If the SEC or OTS sends you a document retention notice, can you comply with it? Can you prove you did? Document EVERYTHING.
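For the requirement-tracing part, even something as small as this sketch answers the impact question. Module names and every rule identifier except the FASB 157 reference above are invented for illustration; a real shop would keep this in its requirements tool, but the query is the same.

# Map each module to the rules it implements; the reverse query answers
# "if this rule changes, which modules have to change?"
RULES_IMPLEMENTED = {
    "fair_value_reporting":  ["FASB Statement 157"],
    "quarterly_disclosures": ["FASB Statement 157", "Hypothetical Rule A"],
    "document_retention":    ["Hypothetical Rule B"],
}

def modules_affected_by(rule):
    # Return every module that claims to implement the given rule.
    return sorted(m for m, rules in RULES_IMPLEMENTED.items() if rule in rules)

print(modules_affected_by("FASB Statement 157"))
# ['fair_value_reporting', 'quarterly_disclosures']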
Peter
Monday, August 27, 2007
 
 
> want to know how the core banking solutions are made scalable <

Core banking applications are not scalable per se. They are always designed for very large scale from the outset. "Big Iron", as Jussi Jumppanen said.
Mikkin
Tuesday, August 28, 2007
 
 
> I want to know how core banking solutions
> are made scalable to meet ever-changing requirements.

The question doesn't sound right. Maybe you mean "flexible", "extendable" or "modular" instead of "scalable"? Scalability is not related to changing requirements.
Michael
Tuesday, August 28, 2007
 
 

This topic is archived. No further replies will be accepted.
