The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

How to choose a pattern for 100 concurrent users

I am a technical architect on a project whose non-functional requirements are 100 concurrent users and 300 configured users, with 2000 transactions per day and storage growth of approximately 4GB. It is a role-based system accessed from throughout the world, say 80 countries.

Environment requirements: Apache web server, WebLogic 8.1 SP3 application server, Oracle 9i database.

Hence I have suggested the following:
Presentation tier: Struts-based JSP pages with custom tag libraries to meet the role-based presentation requirements.

Business tier: Business Delegate and Session Facade in front of stateless session beans.

Integration tier: Hibernate, to meet the caching needs and, in turn, performance.
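A minimal sketch of the Business Delegate / Session Facade pairing suggested above. The names (OrderFacade, OrderDelegate) are illustrative, not from the post; in a real WebLogic deployment the facade would be a stateless session bean obtained through a JNDI lookup, which is faked here:

```java
// The facade exposes one coarse-grained method per use case, keeping
// EJB-specific details out of the web tier (Struts actions).
interface OrderFacade {
    String placeOrder(String userId, String item);
}

// Stand-in for the stateless session bean implementation; a real one
// would call Hibernate-backed DAOs in the integration tier.
class OrderFacadeBean implements OrderFacade {
    public String placeOrder(String userId, String item) {
        return "order:" + userId + ":" + item;
    }
}

// The Business Delegate shields the presentation tier from lookup and
// remote-exception handling. Real code would do a JNDI lookup here.
class OrderDelegate {
    private final OrderFacade facade;

    OrderDelegate() {
        this.facade = new OrderFacadeBean(); // hypothetical local stand-in
    }

    String placeOrder(String userId, String item) {
        return facade.placeOrder(userId, item);
    }
}
```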

Can anyone suggest a better approach, or tell me whether or not I am proceeding in the right direction?
Jayshree
Sunday, December 17, 2006
 
 
Just for clarification: Is that a growth of 4 GB per day?

From an uninformed point of view, the technology choices seem to be enterprisey enough to be able to handle it. Performance problems will probably be in the code that your team produces. I think it would be a good idea to set up a performance test suite. I wouldn't call it a pattern, but maybe a good practice.
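The performance test suite idea can be sketched as a tiny home-grown load driver: run N simulated users concurrently and count completed requests. In practice the Runnable would issue real HTTP requests against the application (or you would use a dedicated tool); the no-op task here is a stand-in:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical load-test harness: 'users' worker threads each issue
// 'requestsPerUser' requests; returns the number that completed.
class LoadTest {
    static int run(int users, int requestsPerUser, Runnable request) {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicInteger completed = new AtomicInteger();
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    request.run();           // stand-in for a real HTTP call
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(60, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }
}
```

Running it with the thread's own numbers (100 concurrent users, 2000 total transactions) gives a baseline to compare against as the real code grows.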
Peter Monsson
Sunday, December 17, 2006
 
 
"Is that a growth of 4 GB per day?"

2MB per transaction is quite a lot - even generated documents  (PDFs) etc. aren't usually that big.

Making sure that it is secure and that the storage scales is probably more of a challenge than writing the application.
Arethuza
Sunday, December 17, 2006
 
 
Arethuza,

"2MB per transaction is quite a lot - even generated documents  (PDFs) etc. aren't usually that big."

You're right, of course.  This is something this Enterprise Architect (OP: watch the proper case - limit it to proper nouns like Microsoft Word or United States or John Smith; everything else should be in lower case) needs to look at carefully.  I helped implement a lab information system called Sysware three years ago that generated 1MB for every single transaction.  We had ~2,000 transactions per day, so our disk storage requirements were huge.  During our discovery phase an existing user warned me to get a terabyte array, but I discounted it because other things that person had said also sounded unreasonable.  I was wrong.  Sysware's architecture for their product was/is horrible, and their disk space usage was literally 1,000 times what it should have been.

So, EA, for a system growing as quickly as you state, make sure you are being efficient with data storage.  Do you really need to save 2MB of data for every transaction (assuming that is accurate)?  That seems really excessive to me, too.
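To make the arithmetic behind the thread's assumption explicit (a sketch; the names are mine): if the 4 GB growth figure really is per day, then at 2000 transactions/day each transaction writes roughly 2 MB, and a year of operation needs about 1.4 TB.

```java
// Back-of-the-envelope storage check for the numbers in this thread.
class StorageEstimate {
    // bytes written per transaction for a given daily growth in GB
    static long bytesPerTransaction(long dailyGrowthGB, long transactionsPerDay) {
        return dailyGrowthGB * 1024L * 1024L * 1024L / transactionsPerDay;
    }

    // storage needed per year in GB, assuming constant daily growth
    static long gbPerYear(long dailyGrowthGB) {
        return dailyGrowthGB * 365;
    }
}
```

4 GB / 2000 transactions comes out to about 2.1 MB per transaction, and 4 GB/day is 1460 GB/year, which is why the terabyte-array warning above matters.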
Karl Perry
Sunday, December 17, 2006
 
 
Sorry.  Didn't remember his job title - should have said Tech Architect (TA).
Karl Perry
Sunday, December 17, 2006
 
 
>> Performance problems will probably be in the code that your team produces. <<

Heartily concur.
A place I used to work at hired some high-powered consultants to come in and prototype their new Java-based application.  The prototype (running on a good machine for the time) scaled to a grand total of 2 concurrent users.  Adding the 3rd user caused it to grind to a halt.  This was for a replacement to a site that was handling 4 million hits a day running on VB6.  Not good.

My advice would be to hire people who know when to apply patterns, and when not to.  Also - you should learn to recognize when they're spouting O-O/Pattern babble (Joel's architecture astronauts).  You should also insist on continuing performance metrics during the development process, as it seems that this is a critical item to the customer and your success.
xampl
Sunday, December 17, 2006
 
 
Sounds very familiar.  We've also revived server-side VB6 code when managed code solutions proved an embarrassment.  We just finished load tests of a brand new VB6 service that replaces a .Net monstrosity.  Can you say 5x performance gain?

I think a lot of the problem isn't the garbage-collected, drunk-on-objects, interpreted, sandboxed, over-threaded programming environments in themselves.  It is probably the great ziggurats of technolard that come with them.

Everything has its place, but I'm not sure buying blobs of megatech with high price tags is the same as "enterprise grade."  It sure doesn't seem to be production grade, at least not for high transaction rates.
Codger
Sunday, December 17, 2006
 
 
"Sounds very familiar.  We've also revived server-side VB6 code when managed code solutions proved an embarrassment.  We just finished load tests of a brand new VB6 service that replaces a .Net monstrosity.  Can you say 5x performance gain?"

Codger, can you expand on this and why the .Net "monstrosity" didn't work?  We're about to abandon VB6 for .Net and I'd sure hate to find out too late that we made a huge mistake.

Is this inherent in the .Net model, or did your .Net astronauts just not know what they were doing?
Karl Perry
Sunday, December 17, 2006
 
 
I'd be very concerned with the system's ability to recover, particularly when 100 concurrent users are uploading 2MB-sized files. If you have a bug that causes a connection to abort halfway through, I'd get really frustrated if I had to start over each time.

With that thought in mind, I'd definitely prototype the system's robustness. I know Oracle has some semi-decent support for managing transactions but I've never tried anything like this in Struts or Hibernate.

A second issue you'll need to verify is whether you can do backups while the system is live. Alternatively, you'll need to look into replication.

Long story short, I don't think your "pattern" should focus on the technology so much as on how you're going to solve various usage-related issues. If X happens, can Y cope with it, or will you have to invoke Z?
TheDavid
Monday, December 18, 2006
 
 
I don't think that he said 4GB per day. I think that is the assumption people are making. It could be true but until he comes back and tells us so I wouldn't be concentrating on that aspect of the system.
dood mcdoogle
Monday, December 18, 2006
 
 
Yeah, skip the whole language-war issue for a moment.  Think about your application design.  Try to cut the amount of data you need to store in half each week that you're doing your design, because unless you're storing satellite imagery or something like that, you're storing way too much data.

A lot of what you said sounded like business speak (same initials, but more polite).  That speaks to some problems in the composition of your team.  You should remove anybody with a business degree, be it just a bachelor's or an MBA.  They can have input, but you need to isolate them.

Now write a little version of the system in C.  Not C++.  Not Java.  Not anything .NET.  Write it in C.  You need something that discourages "enterprisey," something that does NOT come with frameworks.  Make it do the minimum that needs to happen.  Tweak that until it works right and the design is right.  Write the unit tests that verify this happens correctly.

Once you have the little model working, then you can come back and start thinking about the big system.  How will you scale it up with the fewest changes?  Are there frameworks that can make scaling it up easier?  Will those frameworks come with costs that are going to get in the way?

The point here is to think small first.  Don't think "enterprise" until you've thought about getting your first two users online.  Until you can help those first two users, you can't help the 3000 users.
Clay Dowling
Monday, December 18, 2006
 
 
You're doing fine with the approach and technologies you suggest. If the code is well-written and you identify and rectify any performance bottlenecks late in the project, you can expect at least 10 pages/second throughput running on one web server and one database server. Given that each of those 100 concurrent users will not be requesting a page every second, this should be enough.

If you do think that you will need more throughput, you'll need to run two or more instances of Tomcat, each on a separate server. This is pretty straightforward to design and set up (or "to architect", if you will). Your architecture will cope with this, as long as you and your developers repeat this mantra every day:

"Do not store information in the Session object"

It's probably the number one barrier to scalability I've seen over the years on web projects. If you store info in the session object, it means that it is much harder to handle distribution of requests between servers. You end up having to use expensive, memory-hungry, processor-hungry full J2EE application servers.
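An illustrative sketch of the alternative (the names are hypothetical, and the Map stands in for a database table or distributed cache): keep per-user state in shared storage keyed by user rather than in the web tier's session object, so any server in the cluster can handle any request and a plain load balancer suffices.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a shared store (database or cache) keyed by user ID.
// Because no web server holds this state in memory, requests can be
// routed to any instance without sticky sessions or session replication.
class CartStore {
    private final Map<String, String> cartsByUser = new ConcurrentHashMap<>();

    void save(String userId, String cartContents) {
        cartsByUser.put(userId, cartContents);
    }

    String load(String userId) {
        return cartsByUser.get(userId);
    }
}
```

The design trade-off: each request pays a lookup against shared storage, but the web tier stays stateless and scales by simply adding servers.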
Imminent
Monday, December 18, 2006
 
 
Hibernate is great for certain things--like, you can pound out a persistence layer in a matter of hours.  But performance is not among those things. I remember a simple data model consisting of three tables. Very straightforward update methods would produce SQL that would fill three screens in a maximized text editor (*). It was slow as hell. Probably we were doing something wrong, but who knows. In normal SQL, the same thing would have been:

update table set name = "blah", date = "xyz", times_used =5 where id = 5;

(*) This was the rough metric I used at the time: copying the SQL into Notepad and seeing how many screens it filled while scrolling down.
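The hand-written alternative above can be sketched in plain JDBC with a parameterized statement. The table and column names come from the example SQL (the table name "widget" is mine, since "table" is a reserved word); the Connection would come from your application server's pool:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// One short, predictable statement instead of generated SQL: easy to
// read in a log and easy for the optimizer to cache.
class PlainJdbcUpdate {
    static final String SQL =
        "UPDATE widget SET name = ?, date = ?, times_used = ? WHERE id = ?";

    static int update(Connection conn, String name, java.sql.Date date,
                      int timesUsed, long id) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(SQL);
        try {
            ps.setString(1, name);
            ps.setDate(2, date);
            ps.setInt(3, timesUsed);
            ps.setLong(4, id);
            return ps.executeUpdate(); // rows affected
        } finally {
            ps.close();
        }
    }
}
```

Binding parameters rather than concatenating values also sidesteps SQL injection, which raw string-built statements like the example invite.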
mynameishere
Monday, December 18, 2006
 
 
Our performance problems had much less to do with the choices of technology than the application of the technology.

The .Net versions of those bottlenecked services were handicapped by layers of stuff that wasn't required:

* COM+ ("Enterprise Services") linked to an OSI-TP XATMI transport when there wasn't even any transaction coordination required.

* XML for its own sake. Legacy platform 1 sends legacy format 1.  .Net code translates legacy data serialization format 1 to XML.  Parses XML into a DOM.  Thrashes internally.  Generates new XML passed via web service to another hunk of .Net code.  Converts XML back to legacy format 2, relays it to platform 2.

So architecture and engineering are important, while blind application of patterns will cost you, sometimes dearly.

As far as the technology itself goes, though, I don't think anyone was discussing "languages" but rather the *implementations* of languages and tool sets.  And of course the culture around them.
Codger
Monday, December 18, 2006
 
 
Well, I don't know what the needs of the "100 concurrent users" are, but 2000 transactions per day is just over 4 per MINUTE spread over 8 hours, so even peak load is unlikely to exceed several dozen per minute.. which is NOT a lot of transactions per minute.  If those 2000 transactions are about the only thing these 100 users do, does "concurrent" just mean they can all have some page open in their browsers at the same time?

You need to know what you want to hit on things like:
Response time
Time to commit transaction
Avg Requests/second
etc.

If you have 100 users on a web app, but at any given moment 90 of them are filling out a web form and only 10 are submitting it, then the server only has to handle the 10 requests at a time... so you need to know how many requests you are actually getting.

Unless the transactions-per-day number is incorrect or misleading (i.e. it takes many requests to the app server to put together a single "transaction"), these don't look like crazy requirements at all...
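The arithmetic above, made explicit (a sketch; the 8-hour working day is the assumption from the post):

```java
// Average transaction rate given a daily total spread over a working day.
class Throughput {
    static double perMinute(int transactionsPerDay, int workingHours) {
        return (double) transactionsPerDay / (workingHours * 60);
    }
}
```

2000 transactions over 8 hours (480 minutes) averages about 4.2 per minute; even a peak several times the average stays in the low dozens per minute.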
Jared
Monday, December 18, 2006
 
 
"even peak transactions will not reach above several dozen per minute.."

How do you know that? Oh, just made it up. Cool. Carry on.
lex
Monday, December 18, 2006
 
 
You're right, I don't know how many transactions it would peak at; the several dozen is a complete guess (the original post doesn't hint at the distribution of the 2000 transactions), but at 2000/day it isn't going to run into the hundreds (and even if it does... unless these are some LONG transactions, even hundreds or thousands per minute are nothing special).
Jared
Monday, December 18, 2006
 
 
You might want to take a very incremental approach by using SLAMD and spending your time on low-hanging fruit.  That way you don't blindly follow best practices, but focus on what is really limiting your application.

www.slamd.com
Manes
Tuesday, December 19, 2006
 
 

This topic is archived. No further replies will be accepted.

 
Powered by FogBugz