The Business of Software

A former community discussing the business of software, from the smallest shareware operation to Microsoft. A part of Joel on Software.

We're closed, folks!


» Business of Software FAQ
» The Business of Software Conference (held every fall, usually in Boston)
» Forum guidelines (Please read before posting!)


Andy Brice
Successful Software

Doug Nebeker ("Doug")

Jonathan Matthews
Creator of DeepTrawl, CloudTrawl, and LeapDoc

Nicholas Hebb
BreezeTree Software

Bob Walsh
Host of the Startup Success Podcast; author of The Web Startup Success Guide and Micro-ISV: From Vision to Reality

Patrick McKenzie
Bingo Card Creator

Automated software testing...do you do it?

Although I suspect I don't have the time/patience to get into it, I occasionally think that maybe I should be doing automated software testing in my development.

I know nothing about testing other than some terms I've heard in the atmosphere (unit tests, nose tests, mocking).  I, of course, do "manual" testing--using the program and seeing if it works--but not automated testing. 

I think this whenever I come across a bug that has crept in where it previously wasn't a problem--apparently a regression: I broke something that was working. I stumble on such bugs by chance while fixing something else, though I try my best to thoroughly test all possible actions prior to release. 

I don't know how many potential "actions" a user could take in the app, but I would guess very roughly 100, depending on how you count (if you count every possible state the app could be in for every possible action, the sky's probably the limit).

I'm working on a GUI based desktop application (for which there are many ways things can go wrong), and so I would want testing to test via the GUI, at least partially. 

I guess what I'm wondering about are questions like:

- do you use automated tests?
- how much extra work is this to do?
- when does it become worth bothering?
- tips?
Racky Send private email
Wednesday, July 31, 2013
- do you use automated tests?

On one product, yes, heavily. On another, not so much. One day I'll create good coverage on the other product too; it makes changing things less time-consuming and less stressful.

- how much extra work is this to do?

It depends on the product somewhat. If the code is 80%+ UI code (which a lot are), then it'll be a lot of extra work to get good coverage right through the UI (even just with mock UI objects). I find the ROI on automated tests low for UI, despite putting in a lot of work to increase it - for instance, on one product I wrote a record/replay automated testing system for Java on the desktop which makes it very easy to manage and record tests, but honestly, even after years of using it, I'm still not sure it has repaid its investment. Perhaps I just don't make enough mistakes ;) On the web side I've used Selenium to test in the browser. Wow - what a waste of time... that thing is SO frustrating.

On the other hand, for "core" code that does the grunt work, unit tests often don't take longer to write than the throwaway test harness code you probably would have written anyway to make sure it works properly. In that case it's a very good idea to write easy-to-run unit tests and group them into a test suite. On one product I have a test suite that runs overnight - it has caught many bugs and means I have a high degree of confidence when changing core code.
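Jonathan's point about promoting throwaway harness code into a permanent suite can be sketched in a few lines. This is an illustrative Python example only - the function and test names are invented, not from his product:

```python
import unittest

def normalize_whitespace(text):
    """Example 'core' function: collapse runs of whitespace into single spaces."""
    return " ".join(text.split())

class NormalizeWhitespaceTest(unittest.TestCase):
    # The same checks you'd have typed into a throwaway harness,
    # kept and grouped so they can run unattended.
    def test_collapses_runs(self):
        self.assertEqual(normalize_whitespace("a  b\tc"), "a b c")

    def test_strips_ends(self):
        self.assertEqual(normalize_whitespace("  hello  "), "hello")

def suite():
    # Group tests into a suite so one entry point (e.g. an overnight job) runs them all.
    return unittest.TestLoader().loadTestsFromTestCase(NormalizeWhitespaceTest)

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite())
```

The only difference from a throwaway harness is that the checks are kept, named, and runnable from one entry point.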

- when does it become worth bothering?

For me when there's a lot of hard to understand / hard to manually test non-ui code.

Interestingly, IMO a lot of the time automated tests aren't compatible with the lean startup method, but once a product has proven its worth, automated tests become valuable (they allow enhancements to be built more quickly). But... if a product doesn't have momentum on the writing of unit tests from the start, I find it harder to motivate myself to go back and produce them. Spending weeks writing test code is just no fun.
Jonathan Matthews Send private email
Wednesday, July 31, 2013
- do you use automated tests?

Yes. I have ~90% coverage for non-UI code (which is 70% of the entire code base).

- how much extra work is this to do?

Not much.  And I don't consider it "extra work" since tests are part of the code anyway.  But you must have infrastructure in place: test framework, automated builds, etc.

- when does it become worth bothering?

To me that's the wrong question (many will disagree).  The right question (for me) is "what happens if I stop investing time and money in automated tests?".  And the answer (for me) is: the product I'm trying to build and sell will become harder to support -> my support costs will increase -> I will lose money.

- tips?

1. Invest in infrastructure (test framework, source control, CI, etc)
2. Do not test UI (waste of time); test everything else.  Most probably you will need to refactor code, though. Bonus: once you move code out of the UI layer, the code base will become more maintainable and testable.
3. Measure code coverage. There are frameworks and tools (for example JaCoCo in the Java world) which can give you very useful hard numbers, including total coverage, coverage for particular classes, branch coverage, etc.
4. If you cannot execute your entire test suite either automatically or by clicking on one button - it's a waste of time and resources.
5. Write code->write tests->refactor->repeat
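Coverage tools like JaCoCo work by recording which lines and branches actually execute while the tests run. As a toy illustration only (not a real coverage tool; `traced_lines` and `classify` are invented names), the core idea can be shown with Python's `sys.settrace`:

```python
import sys

def traced_lines(func, *args):
    """Run func and return the set of line numbers it executed (toy coverage)."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):
    if n < 0:
        return "negative"       # this line executes only when a test passes n < 0
    return "non-negative"

# Lines of the 'negative' branch show up only if some test exercises it:
covered = traced_lines(classify, 5)
```

Comparing the covered set against the function's full line range exposes untested branches - which is exactly the "hard numbers" a real coverage tool reports.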
Maksym Sherbinin Send private email
Wednesday, July 31, 2013
~90% unit test code coverage for the non-UI code base. The test bed grew organically with the product - most new features are first implemented in a test case, then coded in. Often new features are declined if there is either no way to test them or it's too expensive to write the tests. I mostly think about how I will test something new rather than how to make it. In my case the testing code base is an order of magnitude more sophisticated and complex than the actual product. Is it worth it? It keeps me sane - this is what counts. No worries about possible breakages.

The build process is fully automated; unit tests and white-box tests are part of the build. A failing test is treated the same as a compilation error. Still, bugs pop up, but those are either from the UI, which is manually tested, or from parts which can't be automated, like network failures, database glitches and stuff like that. The full build takes more than an hour; there are many sleep() calls because I also test real-time scenarios. But when it's done the code is of a very high grade, very close to what they call public beta.
Dima Send private email
Wednesday, July 31, 2013
Not much -- maybe 1% of core code (complicated code that doesn't rely on mocks).  I've always liked the idea, but in my case, the product is almost completely externally focused (databases, networks, external systems) and trying to mock something so large with so many failure cases and paths is just waaaay too overwhelming.

So I try to be very careful, and if a bug is discovered, a search is made for similar bugs throughout the code.  Since I do most of the customer support, there is high motivation to get rid of all bugs ASAP.  So the product never ships with known bugs.
Doug Send private email
Wednesday, July 31, 2013
I wrote a command line app to test my genetic algorithm. Everything else, including the GUI, I test using Coverage Validator (real-time path coverage testing, recommended).

I don't think trying to automate the GUI testing would be a worthwhile use of my time.

I also use a lot of other approaches, as well as testing, to ensure good quality.
Andy Brice Send private email
Wednesday, July 31, 2013
I think it's an invaluable practice; other developers I've worked with who hadn't previously been in environments that did this have been converted over to this way of thinking. +1 for the comments about UI testing - it's hard, and the payoff is not nearly as good as with unit testing.

My own 2p worth would be: stay pragmatic about it, as that will get you good ROI.  There is a lot of almost religious dogma around automated testing, including:

* Inversion of Control / Dependency Injection architecture-astronaut-itis is the only way to write reliable code
* Thou must always write your tests first
* Thou must only ever mock dependent objects for unit tests, as otherwise it's an integration test (technically true, but missing the bigger picture)
* Thou must use continuous integration
* Thou must have 100% code coverage

Take a pragmatic approach and it will be well worth its time.
Ryan Wheeler Send private email
Thursday, August 01, 2013
I agree with Ryan that it's easy to become way too obsessed with automated testing.

It's also possible to rely on it too much. I've worked with people who only really considered automated testing valuable.

For me it's only part of the equation. Ideally, I like to have a human tester who's sadistically determined to show what a rubbish developer I am by finding all the stupid bugs I put in; they're capable of being innovative and trying to outthink me. An automated test can't do that.

Seriously, I would consider a junior developer who happened to hate me a really good candidate to test my stuff.

Here's an interesting related viewpoint from a Joel classic (http://www.joelonsoftware.com/items/2007/12/03.html)......

The old testers at Microsoft checked lots of things: they checked if fonts were consistent and legible, they checked that the location of controls on dialog boxes was reasonable and neatly aligned, they checked whether the screen flickered when you did things, they looked at how the UI flowed, they considered how easy the software was to use, how consistent the wording was, they worried about performance, they checked the spelling and grammar of all the error messages, and they spent a lot of time making sure that the user interface was consistent from one part of the product to another, because a consistent user interface is easier to use than an inconsistent one.

None of those things could be checked by automated scripts.
Jonathan Matthews Send private email
Thursday, August 01, 2013
+1 for UI testing is a waste of time for most projects. You will take FOREVER to get it to verify something that takes 10 seconds by hand. Then the next thing to come along will take another FOREVER. Eventually you will realize that the ROI just isn't there.

One point, FWIW: I don't think automated testing is primarily about finding bugs or improving the user's experience. It's about having the confidence to make major changes to code.  Without automated testing, most non-trivial code bases are doomed to a slow relentless slide to chaos. With automated testing, you stand a better chance.
GregT Send private email
Thursday, August 01, 2013
Thanks, everyone. 

My reaction to the described intricacies of testing is perhaps captured by a paraphrase of the intro to the Brian Fellow sketches on Saturday Night Live: 

"[Racky] is not an accredited [programmer], nor does he hold an advanced degree in any [computer] sciences. He is simply an enthusiastic young man with a sixth-grade education and an abiding love for all God's creatures."

In other words, automated testing is probably more than I can reasonably take on at this point (with "this point" likely == "my whole life").  Andy's post about "defence in depth", e.g., I found fairly blood curdling (which is a compliment to him).

My guess is the best path forward is to eschew automated testing.  My application really isn't complex enough to merit it, it seems, and most of what can go wrong probably can't be automated easily, or at all (cf. Jonathan's last post): visual glitches, nonsensical output, etc.

I would still be encouraged to hear of more people with successful software who don't do any automated testing at all.  So if you're there, please feel free to chirp up.
Racky Send private email
Thursday, August 01, 2013
Not a lot.

I do have a testing setup for the transactions in a client billing system that I support.  That way, I can check that changes in that hairy code do not mess up.  I finally decided to do this after too many changes resulted in errors.  My boss fought me on this somewhat, but I did not back down.  I feel a lot better having it.

I do not test the GUI automatically.  There is too much that an automated system would miss.  I am thinking of the areas of aesthetics and clarity.


Gene Wirchenko
Gene Wirchenko Send private email
Thursday, August 01, 2013
I can't answer the question yes or no in a way that wouldn't be inaccurate or misleading, so I'll describe some things I do for testing.

My compiler does syntax checking; that's definitely a form of automated software testing. So yes, in that sense we all do automated software testing. Is this answer useful? I think it is, actually, since it helps to be able to recognize that this is part of testing.

My code has plenty of live error checks that are left active in deployed code. I started to say these are assertions, but most people think of those as throwing a stack trace and halting, and I simply don't do that, as it is rude. Some of these log and continue with whatever recovery is possible; in the rare case things can't continue, the user is notified and offered a choice to send a bug report and continue, or quit.
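A minimal sketch of such a "log and continue" check, in Python rather than Scott's actual code (the names soft_assert and safe_ratio are invented for illustration):

```python
import logging

logger = logging.getLogger("appcheck")

def soft_assert(condition, message):
    """Like assert, but logs the failure and continues instead of halting."""
    if not condition:
        logger.error("check failed: %s", message)
    return condition

def safe_ratio(a, b):
    # Recover with a harmless default rather than crashing in front of the user.
    if not soft_assert(b != 0, "denominator is zero; returning 0.0"):
        return 0.0
    return a / b
```

The key design choice is that the check returns its verdict, so the caller can pick a recovery path while the log keeps a record for the bug report.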

I also like to, for certain complex objects and systems, have consistency checkers that go through and see whether things have gone insane or not. Often I put these in destructors or in program-quitting cleanup code, and they log inconsistencies. This code in particular has caught a lot of things. I have never heard this discussed in tech land, but I doubt I'm the only one doing something like this. I strongly prefer it to unit-test and coverage fetishism, which is often mindless. My testing tends to involve more design thinking: How can I build a system here to see if things are OK? When should I check, if it is complicated to do? What should I do when things screw up?
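One way to sketch such a consistency checker (Python; the Ledger class and its invariant are invented for illustration, and atexit stands in for the destructor/cleanup hook Scott describes):

```python
import atexit
import logging

logger = logging.getLogger("consistency")

class Ledger:
    """Toy object with an invariant: total must equal the sum of its entries."""
    def __init__(self):
        self.entries = []
        self.total = 0
        # Run the sanity check at clean shutdown, like a destructor-time audit.
        atexit.register(self.check_consistency)

    def add(self, amount):
        self.entries.append(amount)
        self.total += amount

    def check_consistency(self):
        # Log, don't crash: we only want a record that things went insane.
        ok = (self.total == sum(self.entries))
        if not ok:
            logger.error("Ledger inconsistent: total=%s, entries sum=%s",
                         self.total, sum(self.entries))
        return ok
```

Because the check runs on every clean exit, a slow corruption shows up in the logs long before a user reports anything.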

I have very few things that don't run live. One of them is the memory protection routines, implemented by redefining malloc() and free() with my own versions, which date back to long before there were widely available standard ones. So if I am compiling C and I write past the end of an array, or if some object is never free'd, I know it when I am running in development mode, and so my deployed code generally doesn't have any memory leaks or overflow issues at all, but isn't bogged down in checking ranges while running.

As to unit tests, I don't really do them systematically or regularly. I have certain specific systems that are particularly sensitive to side effects due to any change, such as compilers for ASLs. These have test suites.

As to automated testing of UI, wow, implementing that is a research problem and would take far longer to figure out than just doing proper alpha and beta testing.
Scott Send private email
Thursday, August 01, 2013
Forgot to mention this. Martin Fowler makes a very good argument in Refactoring that you shouldn't refactor unless you have really good, comprehensive automated unit tests to confirm that you didn't screw anything up when you recklessly made changes. This enables you to recklessly make changes, which is necessary to be able to safely do refactoring, particularly when there are more than one or two developers on a project.

If you have many developers, or even in the case of one developer returning after an absence from a project, your brain has dumped the context, and you have no safe way of remembering all the things your current changes are probably screwing up right now.

So you particularly need these tests on projects involving many people, and they are helpful when even one person is there.

It's a trade off for time though. As I mention, known troublesome and subtle code gets more checks.
Scott Send private email
Thursday, August 01, 2013
Another thought.

There are certain "horrific dens of nightmare" code that are irreducibly complex and I absolutely dread having to make changes or attempt to fix or augment.

If I had some sort of automated testing for this code, which I had total confidence it would catch any errors, I would definitely have less fear of this code and be more likely to add certain kinds of features.

However, all of this code is real time low latency processing code, and all of the problems I run into deal with things like concurrency problems, and any sort of automated harness instantly breaks that, never mind that testing for this sort of thing is another subject that is a poorly understood research problem. So it's somewhat hopeless to try at the present.

It would be nice though to have someone that would do nothing but go through the days work by others here and write tests for it, and probably document it as part of that. A Code Librarian as part of a Surgical team as Brooks called it.
Scott Send private email
Thursday, August 01, 2013
(Just spent 2 weeks on "why is there a barely perceptible rectangle appearing in that corner occasionally"). Grrr. Typical problem too.
Scott Send private email
Thursday, August 01, 2013

> (Just spent 2 weeks on "why is there a barely perceptible rectangle appearing in that corner occasionally"). Grrr. Typical problem too.

That makes me feel better, actually.  That real programmers with real businesses deal with phantom rectangles and their ilk--and not dispatch them within a 'reasonable' time (like an hour)--makes my situation feel a bit less absurd.  Thanks.
Racky Send private email
Friday, August 02, 2013
Test units all the way. Although I have to admit to not having enough of them and not having enough good test units.

One script calls them all; use it after every build, or if it takes too long, then before any release. Set up a cron job to run it every night.

Those difficult-to-test problems that Scott talks about, such as concurrency: that's when you write a black-box test unit rather than a white-box test unit. You throw a suitably large number of input data sets at the code and check that the output matches the ground-truth output. You can also consider fuzzing.
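koan's black-box recipe - many random inputs, outputs compared against an independently computed ground truth - can be sketched like this (Python; sort_ints stands in for the real system under test, and the trial counts are arbitrary):

```python
import random

def sort_ints(xs):
    """System under test -- imagine this wraps the hairy code's output path."""
    return sorted(xs)

def test_against_ground_truth(trials=1000, seed=42):
    """Throw many random inputs at the code and compare each output
    with a ground truth computed by a different, obviously-correct method."""
    rng = random.Random(seed)  # fixed seed -> failures are reproducible
    for _ in range(trials):
        data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        got = sort_ints(list(data))
        # Ground truth via repeated min-extraction, deliberately naive:
        truth, pool = [], list(data)
        while pool:
            m = min(pool)
            pool.remove(m)
            truth.append(m)
        assert got == truth, (data, got, truth)
    return trials
```

The point is that the oracle is computed a different way from the code under test, so the two are unlikely to share the same bug; the fixed seed makes any failing input easy to replay.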

Static analysis: I'm currently using Cppcheck, http://cppcheck.sourceforge.net/ and clang. They produce a lot of false positives or style errors, but I've also caught one or two nasty bugs.

Finally, compile with all warnings turned on, e.g. -Wall in g++. Another is -Weffc++, but I don't use it myself.
koan Send private email
Friday, August 02, 2013
I did. Then I didn't. Then I did. Then I didn't. Basically I test as much as possible, but it's not always worth it. In theory and concept it's great, but having 100% code coverage generally doesn't make economic sense. For example, there's no point in testing getters and setters. So for me the most important thing is to test the items of real value: those that are used a lot (shared components), or things that are prone to lots of churn. That will give you 80% of the bang for 5-10% of the cost ;)

That being said, if you're going to unit test any type of GUI app I strongly recommend the book Swing Extreme Testing. Although it's Java based, the concepts will still work for most GUI testing. Lots of good information and tips. It's probably the book I learned the most from about how to write unit testing for GUIs.

The amazon link is: http://www.amazon.com/gp/product/1847194826/ref=as_li_qf_sp_asin_il_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1847194826&linkCode=as2&tag=investorbookr-20
Stephane Grenier Send private email
Thursday, August 08, 2013

This topic is archived. No further replies will be accepted.
