The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Software quality

Hi guys.

I have a question.
How can we measure the quality of the software? Some people think quality depends on the number of bugs per million lines of code. But who knows how many bugs the code actually contains, and who can compare QC procedures? For example, I could write a program of about two million lines, do no QC at all, and say: "It has the highest quality, it has no bugs at all!". Only time can measure quality; we should look at time, the extent of software usage, and the number of bugs found. But as bugs are discovered and fixed, quality grows. Or does it? Any fix can introduce a new bug...

Any thoughts, links?
Andrew Dashin
Friday, August 25, 2006
 
 
You can estimate the number of bugs in the code by measuring the rate at which you find bugs when you look for them, for example:

#1 "We're finding 20 new bugs per week" => there are probably lots of other bugs too that we hven't found yet

#2 "We found no new bugs this week" => perhaps there aren't many bugs left in the code

#3 "We haven't tested and we don't know of any bugs" => quality hasn't been measured and is unknown (not "quality is high")

Note that #1 implies something about the release date: if you find 20 bugs in a week and fix them all, then your current number of known bugs is 0; however, the number of probable bugs is still non-zero, and you should do more testing before you release.
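
Here is a minimal sketch (made-up weekly numbers, assuming the find rate decays roughly geometrically) of turning the weekly find rate into a rough estimate of how many bugs are still unfound:

    weekly_found = [20, 16, 13, 10, 8]  # hypothetical "new bugs found per week"

    # Estimate the average week-over-week decay ratio of the find rate.
    ratios = [b / a for a, b in zip(weekly_found, weekly_found[1:]) if a > 0]
    r = sum(ratios) / len(ratios)

    if r < 1.0:
        last = weekly_found[-1]
        # Geometric tail: last*r + last*r^2 + ... = last*r / (1 - r)
        remaining = last * r / (1.0 - r)
        print(f"Estimated bugs still unfound: about {remaining:.0f}")
    else:
        print("Find rate is not declining yet; keep testing before estimating.")

If the weekly counts are not declining, the sketch gives no estimate, which matches the point above: zero known bugs does not mean zero probable bugs.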
Christopher Wells
Friday, August 25, 2006
 
 
At one place I worked they kept a chart of the bugs found each week, and from that historical data and older (similar) projects they could predict from the trends how close to completion the project was.  One problem with this idea: finding 20 bugs in a release does not necessarily mean that the normal flow of how users actually use the software would have hit those problems. 

Sometimes testers can get too aggressive in testing.  Not that I am promoting releasing buggy software, but if a bug comes up only when you stand on one leg with the PC language set to Japanese and the application set to display French, and only on a blue moon, it is an acceptable risk to leave it in place.  If it is something like Dell PCs shutting down when a nearby cellphone gets a call or a text message, in today's market that could actually happen.
SteveM
Friday, August 25, 2006
 
 
First of all: I apologize for my poor English (I'm Italian).
The answer to your question is the answer to this question: "How many animals are there on this mountain if we can see a certain number of them?".
The answer is: "The more animals we see, the more there still are. The fewer we see, the fewer there are!".
The answer is correct only if the approach is correct.
Then, what is the correct approach?
The correct approach is: "We need to be a team, all aligned, and cross the mountain all together, starting from the bottom and going up to the top."
The same is true of software. We need a team of testers who all together test the software with all the designed test cases.

This approach is based on two technicalities:
1) We have to design the test cases well (all functions, error conditions, limit conditions, etc.);
2) We need to track the number of discovered errors every day and report all of them in a cumulative way.

Now, we represent the collected data with a line showing the total cumulative number of discovered errors per day.
The curve will grow every day and then flatten out, approaching the asymptotic line that represents the "final total number of discovered errors".

There is an important assumption: "We can stop testing only when the line is flattening (and NOT when we have finished all the planned test cases)!". If needed, we have to design additional test cases (i.e., free-form test cases).
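
A minimal sketch of that stopping rule (hypothetical daily counts and an assumed 2% threshold), checking whether the cumulative curve has flattened:

    daily_new_defects = [12, 9, 11, 7, 5, 4, 2, 1, 1, 0, 0, 0]  # assumed per-day counts

    # Build the cumulative discovered-defect curve.
    cumulative = []
    total = 0
    for n in daily_new_defects:
        total += n
        cumulative.append(total)

    # Call the curve "flat" when the last few days added less than 2% of the total.
    window = 3
    recent_growth = cumulative[-1] - cumulative[-1 - window]
    if recent_growth <= 0.02 * cumulative[-1]:
        print("Curve is flattening: the stopping criterion is met.")
    else:
        print("Still finding defects at a meaningful rate: keep testing.")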

I used this technique with success in all my projects!

Only one problem: managers don't like this technique, because they want to stop testing when the planned test cases are completed.
Good luck!

Ercole
Ercole Colonese
Friday, August 25, 2006
 
 
"How can we measure the quality of the [software]?"

With all respect to the other responses, this is largely a business decision that takes into consideration economic and political constraints.

Quality is subjective.

Let's take Apple's iTunes and NASA's Space Shuttle flight control software as examples.

iTunes is good-quality software if you're able to download and purchase music, you're able to play the music you bought, and it doesn't crash your computer. If it insists on renaming all of your songs to use all capital letters, that's a bad feature, but it doesn't really affect any of the first three goals (download, play, be stable).

The Space Shuttle in contrast absolutely positively needs software that will not crash and that will report correct information every single time when the shuttle pilot needs it. If something is capitalized when the pilot does not expect it to be, then his life is at risk.

iTunes and the Shuttle require different levels of quality. Often your first problem is just deciding what an acceptable level is.

Once you've done so, quality is often measured in the following ways...

1) Are the requirements well defined and well understood? Do people understand what the software is supposed to do and why?

2) Does the software satisfy each of the requirements?

3) Is the software self-contained - that is, have you identified all of the pieces needed, and all of the constraints, such that you're able to install and execute the software on the target machines?

4) Is the software delivered on time, and in a fashion that makes it useable?  Including any training?

5A) Are the known bugs in the software clearly identified, and do workarounds exist for them?

5B) Of the bugs that are not fixed, or are not yet identified, is there a valid business reason for not fixing them?

Again, in the case of iTunes, the goal isn't to get exactly ZERO bugs in the software.  It's too expensive and time consuming to do that for a piece of software that's being given away for free.  Apple has instead decided they're not going to bother fixing bugs that affect fewer than 0.01% of their customers.

Measuring software for quality is simply measuring to see if the software meets your expectations (as the person developing it). You still have to decide what those expectations are.
TheDavid
Friday, August 25, 2006
 
 
From the business or user perspective, software quality is just measured on whether the requirements are met. It's an objective assessment done with verification matrices where you demonstrate that each intended function of the software meets the requirements and produces the expected output. You put them all in a big chart and take two or three days to go through each systematically and check it off if it worked. That's how mission critical software is typically "proven" and considered ready for delivery and installation.
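
A minimal sketch (made-up requirements and placeholder checks, not any particular company's process) of the kind of verification matrix described above:

    def check_login_rejects_bad_password():
        return True  # placeholder for the real demonstration step

    def check_report_totals_match_inputs():
        return True  # placeholder for the real demonstration step

    verification_matrix = [
        ("REQ-001: reject invalid credentials", check_login_rejects_bad_password),
        ("REQ-002: report totals equal the sum of inputs", check_report_totals_match_inputs),
    ]

    # Walk the matrix systematically and check each requirement off if it worked.
    for requirement, check in verification_matrix:
        status = "PASS" if check() else "FAIL"
        print(f"{status}  {requirement}")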

Developers certainly have a more subjective assessment -- is the code elegant, easy to modify, easy to maintain, robust when used outside the specs, etc. But the company is a business, and I think all that ultimately matters for software quality is that the customer buys it and keeps sending more cash.
John
Friday, August 25, 2006
 
 
Use fault injection!

If you assume that bugs are randomly distributed across your application (this is not always the case, but not TOO bad as assumptions go):

1) Get one person to insert X real bugs into your application (write them down so you don't forget any of them later!). Make some subtle, some obvious, and spread them throughout the application (i.e., different parts of the UI, calculations, etc.).

2) Get your testers to do their regular testing. They will discover your "fake" bugs, and also "real" undiscovered bugs.

3) If your testers find Y bugs (e.g. Y=15) out of the total of X (e.g. X=20) injected bugs, then a reasonable ESTIMATE is that you have found 75% of ALL the bugs in your application.
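
A minimal sketch of that estimate (the X=20, Y=15 figures from above; the count of 30 real bugs found is hypothetical). The same recapture ratio also suggests how many real bugs exist in total:

    seeded = 20        # X: bugs deliberately injected
    seeded_found = 15  # Y: injected bugs the testers rediscovered
    real_found = 30    # hypothetical count of real (non-seeded) bugs found

    detection_rate = seeded_found / seeded              # 0.75, i.e. "we found 75% of all bugs"
    estimated_real_total = real_found / detection_rate  # about 40
    estimated_real_remaining = estimated_real_total - real_found

    print(f"Estimated detection rate: {detection_rate:.0%}")
    print(f"Estimated real bugs in total: about {estimated_real_total:.0f}")
    print(f"Estimated real bugs still unfound: about {estimated_real_remaining:.0f}")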


Obviously this doesn't work for a one-person shop, and the nature of software systems is such that you can have a huge application with only a single bug, and that bug can bring the whole thing down, so this is not a measure of reliability.
Warren Stevens
Friday, August 25, 2006
 
 
"How can we measure the quality of the [software]?"

You can't. At least not objectively.
Steve Hirsch
Friday, August 25, 2006
 
 
Devil's advocate:

quality = consistency (kaizen)

So if your software does the same thing, all the time, with the same actions, its quality is 100%. Even if that "quality" is poor, it is still 100% quality.

So a bug, defined as [we did the same thing but something different happened] = inconsistency = low quality.

The more bugs, the lower the consistency, and hence quality.

This is from a bug's eye view of software - typically at the developer level.

Quality can also be perceived as "fit for purpose" - i.e., what problem does it solve, how much of that problem, how quickly, etc. This can be far more subjective, and also difficult to determine without a well-documented "goals or intended uses" plan, something you created before you wrote the software, right?

AFK, I just realised I need to go write a goals + intentions document.
Aaron DC
Friday, August 25, 2006
 
 
TheDavid has it right and undoubtedly works in a real software company.

The belief that "Quality = # of bugs shipped" is far too limited a perspective (it's a developer's perspective).

At one extreme, a single "I crashed the rover into Mars" bug and the perceived quality of the software is 0 on a 10-point scale.  Your market share goes away, along with your customer base, and eventually your company.

Another possibility is that you have a big piece of software that is used by millions of people and has a handful of (what were thought to be obscure) bugs that 50% of the userbase hits.  They call, you have no answer, they stop using your product.  I.e., Novell.

Finally, you can have a very, very big piece of software with hundreds of obscure bugs that no one ever runs into, but 90% of customers are 100% satisfied with the software, so perceived quality is 9/10.  Then, they call support when they find one of the bugs and get a very responsive, helpful representative who offers a workaround to the bug: perceived value is practically 11/10.

Microsoft falls somewhere between the last two examples.

The point is that most real, commercial software is sold on a "bug count vs. ship date" curve, wherein the number of bugs (weighted by their significance) is compared to the time it will take to fix them, and the "comfortable" de minimis point is where the software ships.

-Matt
Matt Lavallee
Wednesday, September 06, 2006
 
 

This topic is archived. No further replies will be accepted.
