The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Application Testing Philosophies?

I'm part of an IT department for a small company that shall remain nameless. We recently conducted a survey and discovered that while we create lots of bugs in our products, we identify and fix those bugs so quickly that the financial impact on our company is somewhat minimal. Our management would like to see some improvement in the quality of our work (as a general philosophy thingamabob), but they admit we don't need to strive for 100% bulletproof code at this point.

I've been asked to present a white paper on best coding practices, or rather, best testing practices, that we can gradually adopt into our day-to-day work WHILE keeping overall costs low. For example, we do want to emphasize testing in the design phase to make sure all of the possibilities are accounted for; at the same time, we're not going to explicitly try to catch cosmetic errors such as typos in the final product.

Can anyone recommend books, articles or forums that discuss software testing strategies from a business or project management perspective?
Anonymous Coward
Thursday, November 03, 2005
 
 
Code Review

Check out Code Complete for a good write-up and rationale.
Curtis
Thursday, November 03, 2005
 
 
Generally, the earlier you detect a defect in the life cycle, the cheaper it is to fix. Lots of studies have borne this out. Design reviews and code inspections catch a lot more defects than testing.

That said, the ultimate goal is to prevent defects from occurring in the first place. TDD goes a long way towards helping achieve that goal.
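
For what it's worth, here's the smallest possible TDD cycle, sketched in Python with the standard unittest module (picking Python arbitrarily since I don't know your language; parse_price is a made-up function, purely for illustration): write the failing test first, then just enough code to make it pass.

    import unittest

    # In real TDD this function would not exist yet; the tests below are
    # written first, fail, and then drive out this implementation.
    def parse_price(text):
        # Hypothetical example function, not from any real codebase.
        return round(float(text.strip().lstrip("$")), 2)

    class ParsePriceTest(unittest.TestCase):
        def test_strips_dollar_sign(self):
            self.assertEqual(parse_price("$19.99"), 19.99)

        def test_tolerates_whitespace(self):
            self.assertEqual(parse_price("  5.00 "), 5.0)

    if __name__ == "__main__":
        unittest.main()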

I don't know what language(s) you develop with, but see if you can find any automated code inspection/static analysis tools; they'll help you get some rigor around the process, and using a tool cuts down on a lot of the drudgery and arguments that come from introducing code inspections.

If you can't dedicate resources for a test team, adopt a rule that the only testing a developer can do on their own code is unit testing. Joel's "hallway usability testing" is also a good practice; you should be able to grab someone walking by your cube and have them do a quick test of your app.

Look into using imaging or virtualization tools (Ghost, VMWare) to manage your test environments; these types of tools greatly reduce the effort needed to maintain a testing environment.

If your application uses a backend data store, you need to verify that the data is actually written to disk using something other than the application under test.
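
For example, if the backend happened to be SQLite (a hypothetical setup; the table and column names here are invented), the check would be a direct query rather than anything that goes through the app:

    import sqlite3
    import unittest

    DB_PATH = "app.db"  # hypothetical path to the application's data store

    class OrderPersistenceTest(unittest.TestCase):
        def test_order_actually_written(self):
            # Drive the application under test first (UI automation, API
            # call, whatever), then bypass it entirely and read the raw
            # data back over a direct connection.
            conn = sqlite3.connect(DB_PATH)
            try:
                row = conn.execute(
                    "SELECT total FROM orders WHERE order_id = ?",
                    ("TEST-1",),
                ).fetchone()
            finally:
                conn.close()
            self.assertIsNotNone(row)
            self.assertEqual(row[0], 19.99)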

You should also have some sort of requirements/test case traceability; each test case should verify one or more requirements. If a test doesn't verify a requirement, you shouldn't do it.
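
One cheap way to get that traceability, sketched here in Python with invented requirement IDs, is to tag each test with the requirements it verifies and audit for untagged tests:

    # Hypothetical requirement IDs pulled from your spec.
    REQUIREMENTS = {"REQ-101", "REQ-102", "REQ-203"}

    def verifies(*req_ids):
        """Tag a test function with the requirement(s) it covers."""
        def tag(test_func):
            test_func.requirements = set(req_ids)
            return test_func
        return tag

    @verifies("REQ-101")
    def test_login_rejects_bad_password():
        pass  # ...actual test body...

    # Audit: every test must trace to at least one known requirement.
    for test in [test_login_rejects_bad_password]:
        assert test.requirements and test.requirements <= REQUIREMENTS, (
            "%s does not verify a known requirement" % test.__name__)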

Write your tests so that they're independent of any other tests. You should be able to grab a test script and conduct a test from that script alone; your tests shouldn't rely upon the results of some other test.
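
In unit-test terms, that means each test builds its own fixture instead of leaning on whatever state an earlier test left behind; a sketch:

    import unittest

    class CartTest(unittest.TestCase):
        def setUp(self):
            # A fresh fixture for every test; nothing carries over from
            # whichever test happened to run before this one.
            self.cart = []  # stand-in for a real Cart object

        def test_add_item(self):
            self.cart.append("widget")
            self.assertEqual(len(self.cart), 1)

        def test_empty_by_default(self):
            # Passes whether or not test_add_item ran first.
            self.assertEqual(self.cart, [])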

Keep a test log of the actions you performed, the expected result and the actual result.
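
The log doesn't have to be fancy; a CSV with one row per step is plenty. A sketch (the file name and columns are just one way to do it):

    import csv
    import datetime

    def log_step(writer, action, expected, actual):
        writer.writerow([
            datetime.datetime.now().isoformat(),
            action, expected, actual,
            "PASS" if expected == actual else "FAIL",
        ])

    with open("test_log.csv", "a", newline="") as f:
        out = csv.writer(f)
        log_step(out, "login with valid password",
                 "main screen shown", "main screen shown")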
Former COBOL Programmer
Thursday, November 03, 2005
 
 
+1 for TDD. I've used it before and it's worked wonders.
QADude
Thursday, November 03, 2005
 
 
This book: http://www.amazon.com/exec/obidos/tg/detail/-/047135418X/
is almost big enough to use as a doorstop, but it contains a lot of checklists and ideas you can use for testing at various stages of the development process. If it has even two or three forms you can use, it will have more than paid for itself in stimulating brain cells.

Or, use TDD.
Peter
Friday, November 04, 2005
 
 
No single approach is likely to work on its own. I suggest defense in depth:

-V-lifecycle with a review/test component corresponding to each phase of the lifecycle
-review specs and designs (you do have specs and design docs?)
-compilation with the warning level cranked up to the max
-review code (doesn't have to be anything formal)
-static and dynamic analysis tools (e.g. pc-lint and valgrind)
-unit testing
-integration testing
-system testing
-usability testing
-regression testing

A couple of questions:
1. I have read loads of stuff about how fantastic formal inspections are, but the few times I tried to get this included as part of the process, we ran into too many people issues. What is everyone else's experience of this?
2. Automated GUI testing. Are any of the automated GUI testers actually of any use to a micro-ISV?
Andy Brice
Sunday, November 06, 2005
 
 
Forget about the notion of "best practice," and just follow the bugs. Your bug tracker will provide concrete data about the severity and type of your average bug, as well as the component(s) where most of them show up. Concentrate first on the major bugs reported, and note how late in the process they were found. Anything found later than the unit tests is an opportunity for improvement. Focus on this instead of attempting the futile practice of trying to get developers to introduce fewer bugs. On a more general note, spend some time automating the unit tests if that's not already done.
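
Concretely, most trackers can export to CSV, and a few lines of Python will show where the pain is (the column names here are hypothetical; match them to your tracker's export):

    import csv
    from collections import Counter

    by_component = Counter()
    by_phase = Counter()
    with open("bugs_export.csv") as f:  # hypothetical tracker export
        for bug in csv.DictReader(f):
            if bug["severity"] in ("critical", "major"):
                by_component[bug["component"]] += 1
                by_phase[bug["found_in_phase"]] += 1

    print("Major bugs by component:", by_component.most_common(5))
    print("Major bugs by phase found:", by_phase.most_common())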

Another thing that can be learned from the bugs is where the focus is. If system test/QA reports mostly UI nits, then chances are they're not hitting very much of the program logic. Make sure QA updates its test procedures whenever an end user finds a bug.

The most important thing here is to not make any sweeping changes to testing practice, even when they won't cost a lot (because they will).  If the survey is to be believed (I would be skeptical), then there is little to worry about.
Matt Brown
Monday, November 07, 2005
 
 
We have software testers and business rules testers (QC and QA respectively, if you will). Our programmers log the time in and out for a product and the types of tasks they work on; this isn't as onerous as one might think.

Hours spent with QC at the developer's IDE are still marked as coding, whereas items reported by QA are marked differently. Lastly, bugs reported by users are BUGS and are marked as BRC (bugs reported by clients) - *shiver*.

Management measures consistency based on a few ratios, chiefly hours spent coding vs. hours spent fixing BRQA (bugs reported by QA). Since QC time counts as coding, bugs caught before they leave the programmer's desktop don't hurt the ratio; when BRQA drops and coding hours rise, we know we are impacting best practices positively.
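
To make the arithmetic concrete, here's the calculation with made-up numbers (not our real figures):

    # Hypothetical month of logged hours, purely to show the ratios.
    coding_hours = 400.0   # includes hours spent with QC at the IDE
    brqa_hours = 88.0      # hours fixing bugs reported by QA
    brc_hours = 6.0        # hours fixing bugs reported by clients

    total = coding_hours + brqa_hours + brc_hours
    print("BRQA share: %.1f%%" % (100 * brqa_hours / total))  # ~17.8%
    print("BRC share:  %.1f%%" % (100 * brc_hours / total))   # ~1.2%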

First, if you do not employ professionals whose background and expertise are in the testing of computer software, get some. Period. We call ours the "Ferret Team"; they can sniff out a bug in a hurricane. Second, work in small enough iterations that a programmer can invite a tester to sit with him/her at the coding PC. Let the tester drive, and do not attempt to constrain the tester to the task/bug just fixed.

When the tester finds bugs [not if], the programmer will have a fresher memory of the code and of the other code snippets likely to be in the cause-and-effect ripple. The end result: less buggy code goes to QA, and less BRQA.

PS: using this method we're down to 1.46% of total hours on BRC and less than 22% on BRQA, down from the 20% BRC / 80% BRQA mark of two years ago (without QC testers).
J.W.
Tuesday, November 08, 2005
 
 
Design by contract. I wouldn't leave home without it.
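
For anyone who hasn't seen it, here's the bare-bones flavor, sketched in Python with plain asserts (languages like Eiffel support contracts natively and do much more):

    def withdraw(balance, amount):
        # Preconditions: what the caller must guarantee.
        assert amount > 0, "amount must be positive"
        assert amount <= balance, "cannot overdraw"

        new_balance = balance - amount

        # Postcondition: what the function guarantees in return.
        assert 0 <= new_balance < balance
        return new_balance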
Kaitain
Wednesday, November 30, 2005
 
 

This topic is archived. No further replies will be accepted.
