A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
As part of my job I often have to do capacity planning and H/W sizing. For the most part, I consider this a semi-scientific endeavour mixed with decisions based on prior experience. Why? Because there is a lot of approximation involved. User concurrency, for example - you can never be accurate. Then there are other dimensions such as horizontal vs. vertical scaling - vertical scaling is good from a hosting/operations/maintenance cost standpoint but suffers from non-linear scaling and an increased risk of a single point of failure - you have to strike a balance. Likewise, there is always the problem of the availability of useful performance data that suits the environment and circumstances you are in.
I recently started gathering my prior experiences and current thoughts to come up with what will be a publicly available (GNU FDL) document discussing at great length how to go about capacity planning and H/W sizing, all in the hope that it will be helpful. I aim to be as complete, accurate, and efficient as possible.
So if you had to do capacity planning and H/W sizing *before* you had a chance to actually code and test the application, how would you do it with the objective of being as accurate and efficient as possible? You can assume a J2EE application with an app server, web server, and database. Take into consideration all factors, including bandwidth. Also assume that you will get a chance to fine-tune your numbers after load testing the application, but you can't be far off from the original estimates.
Care to share your thoughts/experience/insight?
Please keep the posts on topic - I plan to repost this on Slashdot as well for broader insight, so let's spare the flames!
Friday, October 07, 2005
I don't know if I read too much into "before you start coding", but in many cases it's sufficient to draw the system architecture first. Looking at that, you can estimate how much data flows along the "arrows", how many operations per second each component must handle, and so on. That way you can make quite a good estimate of what hardware is needed.
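The "arrows" estimate above can be sketched as a back-of-envelope calculation. All the link names, request rates, and payload sizes below are hypothetical assumptions for illustration, not figures from the post:

```python
# Back-of-envelope sizing from an architecture diagram: for each "arrow"
# (component-to-component link), multiply request rate by payload size to
# get a rough bandwidth requirement. All numbers are assumed placeholders.

links = {
    # link name: (requests/sec, avg payload bytes per request+response)
    "browser -> web server":    (200, 30_000),
    "web server -> app server": (200, 8_000),
    "app server -> database":   (600, 2_000),  # assuming ~3 queries per request
}

for name, (rps, payload) in links.items():
    mbps = rps * payload * 8 / 1_000_000  # bytes/sec -> megabits/sec
    print(f"{name}: {rps} req/s, {mbps:.1f} Mbit/s")
```

Even this crude arithmetic quickly reveals which link dominates bandwidth and which component sees the highest operation rate.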
Friday, October 07, 2005
I recommend the book "Capacity Planning for Web Performance: Metrics, Models & Methods" <http://www.amazon.com/gp/product/0136938221/102-7941323-3169712?v=glance&n=283155&n=507846&s=books&v=glance>
It comes with a CD that contains some tools to help you model your system to get an idea of the performance issues while you're designing. There are some *very* expensive tools that let you model your system and conduct virtual experiments against that system, but be prepared to shell out big $$$ and face a steep learning curve <http://www.hyperformix.com/>
It's also important that you get a good handle on your workload profile; what's the (expected) traffic pattern? If you have historical data to analyze, that helps. Also, when doing this type of work, you're really looking for the "worst case" scenario, such as a peak time during the workload, etc. You want to size to this peak plus some extra to account for surges or other anomalies.
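The "peak plus some extra" rule above can be written down as a one-line calculation. The average rate, peak rate, and headroom margin here are assumed example values:

```python
# Size to the expected peak workload plus a headroom margin for surges
# and other anomalies. All figures below are hypothetical assumptions.

avg_rps = 120     # average requests/sec over the day (assumed)
peak_rps = 450    # busiest-interval requests/sec (assumed)
headroom = 0.30   # 30% safety margin for surges (assumed policy)

target_rps = peak_rps * (1 + headroom)
print(f"average: {avg_rps} req/s, peak: {peak_rps} req/s")
print(f"size the system for {target_rps:.0f} req/s (peak + {headroom:.0%})")
```

Note how much the peak-to-average ratio matters: sizing to the average here would leave the system nearly 5x short at the busiest interval.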
Requirements are also important to have, otherwise you'll never know when you're done testing. Make no mistake -- this is a scientific endeavor, not a semi-scientific one ;)
It's a good idea to specify your requirements with some "wiggle room", for example:
80% of the transactions complete in under 1000 ms
90% of the transactions complete in under 1250 ms
95% of the transactions complete in under 1500 ms
99% of the transactions complete in under 2000 ms
There will always be some outliers in your actual workload that will not meet your requirements, due to the user's request (query for large amounts of data, or with many joins) or technology failures. The goal is to minimize these occurrences.
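A tiered requirement like the one above is straightforward to verify against load-test data. The sample latencies below are randomly generated stand-ins, and the `percentile` helper is a simple hand-rolled implementation, not from any particular test tool:

```python
# Check measured transaction times against tiered percentile requirements.
# The latency sample is synthetic (assumed), purely for illustration.
import random

random.seed(1)
latencies_ms = [random.gauss(800, 250) for _ in range(10_000)]

requirements = [(80, 1000), (90, 1250), (95, 1500), (99, 2000)]  # (%, ms)

def percentile(data, pct):
    """Nearest-rank percentile: value at rank ceil-ish pct% of sorted data."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, int(round(pct / 100 * len(s))) - 1))
    return s[k]

for pct, limit in requirements:
    p = percentile(latencies_ms, pct)
    status = "PASS" if p <= limit else "FAIL"
    print(f"p{pct}: {p:.0f} ms (limit {limit} ms) {status}")
```

Looking at several percentiles rather than just the mean is what makes the "wiggle room" visible: the tail tells you about the outliers, the lower tiers about typical users.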
Lastly, decide whether you need to do capacity planning or hardware sizing; you can't do both, at least not at the same time. In capacity planning the software and hardware are constant while the workload varies (i.e., given a particular system, how much work can it do?). In hardware sizing the software and workload are constant while the hardware varies (i.e., given a particular amount of work, what's the least-costly system that can handle the workload within the specified performance constraints?).
Former COBOL Programmer
Friday, October 07, 2005
Other statistics that you should estimate before you begin (for each software module, process, etc. ... break it down like you do for your development estimates):
1) CPU utilization (typical and max)
2) Memory footprint (typical and max)
3) Predict which code will be called the most (second most, third most)
4) Identify potentially blocking code (synchronized methods, simultaneous writes to database, etc.)
After the product is developed, perform load testing on each module and verify the above. If possible, do the same on an installed system operating in its final environment. Use regression analysis to build a model of the system (under different conditions) and then start with the corrected model to estimate the changes when the system is updated.
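The regression step above can be as simple as a least-squares line fit of resource usage against offered load. The test points and CPU measurements below are assumed example data, and the fit is done by hand to keep the sketch self-contained:

```python
# Fit a simple linear model of CPU utilization vs. offered load from
# load-test measurements, then extrapolate to an untested load level.
# All data points below are hypothetical assumptions.

loads = [50, 100, 150, 200, 250]        # requests/sec at each test point (assumed)
cpu   = [12.0, 22.0, 33.0, 41.0, 52.0]  # measured CPU % at each point (assumed)

n = len(loads)
mean_x = sum(loads) / n
mean_y = sum(cpu) / n
# Ordinary least squares: slope = cov(x, y) / var(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(loads, cpu))
         / sum((x - mean_x) ** 2 for x in loads))
intercept = mean_y - slope * mean_x

predicted = slope * 300 + intercept  # extrapolate to 300 req/s
print(f"cpu% = {slope:.3f} * load + {intercept:.2f}")
print(f"predicted CPU at 300 req/s: {predicted:.1f}%")
```

Once the system is updated, re-running a few test points and re-fitting shows immediately whether the change shifted the slope (per-request cost) or the intercept (fixed overhead).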
> ... with some "wiggle room", for example: 80% of the ...
The way in which delay increases with load, and the distribution of delays, can be modelled mathematically using queueing theory. The queueing delay experienced by requests increases asymptotically as the utilization of the resource approaches 100%.
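That asymptotic blow-up is easy to see with the simplest queueing model, M/M/1, where the mean time in the system is W = 1/(mu - lambda) for service rate mu and arrival rate lambda. The service rate below is an assumed example value:

```python
# Illustrate how queueing delay explodes as utilization approaches 100%,
# using the M/M/1 mean-response-time formula: W = 1 / (mu - lambda).
# mu (service rate) is an assumed figure for illustration.

mu = 100.0  # server can complete 100 requests/sec (assumed)

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = utilization * mu            # arrival rate at this utilization
    w_ms = 1.0 / (mu - lam) * 1000    # mean time in system, milliseconds
    print(f"utilization {utilization:.0%}: mean response {w_ms:.0f} ms")
```

Going from 50% to 99% utilization multiplies the mean delay by 50x in this model, which is why sizing to run a resource "hot" is so risky.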
This topic is archived. No further replies will be accepted.
Powered by FogBugz