A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
Among the many good agile-type ideas, unit testing and refactoring are terrific ones. The idea of freely refactoring at any time with unit tests providing confidence at every step of the way is a leap forward.
But some have taken an approach where each unit test applies specifically to a single class (a "unit"). Their complaint is that when they change the design, the unit tests are extra code that they have to change. Because the tests only exercise individual classes, they don't get the confidence of knowing that the design all plays together. Over time they "forget" to update the unit tests because it's "too much trouble", and from there it's a slippery slope back to the archaic code-and-debug style of development.
I have had better results by first testing low-level "utility" classes, then unit testing the higher-level pieces that use them, and so on up the line.
I see the point that unit tests are more code to modify when you change things, but I believe that the promise of unit-test/refactoring/TDD is very compelling.
So I'm looking for ideas to bring into a discussion as to how to improve their (and also my own) way of thinking about these issues. Any ideas?
I unit test:
* Low-level APIs which can't easily be tested otherwise
* High-level public APIs which only change slowly
As an example of a low-level API, I wrote a class to do time-of-day/date/day-of-week manipulation. It would take ages to test that functionality as a black box (a whole week to cover each day of the week, or years to cover leap years), so I wrote unit tests that exercise it by injecting simulated times and dates.
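A minimal sketch of that clock-injection idea (the class and function names here are illustrative, not from the original post): instead of waiting for a real date to roll around, the test hands the code a simulated one.

```python
import unittest
from datetime import date

def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule: every 4th year, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class SchedulerClock:
    """Wraps 'today' so tests can inject any date instead of waiting for it."""
    def __init__(self, today: date):
        self._today = today

    def today(self) -> date:
        return self._today

    def day_of_week(self) -> str:
        return self._today.strftime("%A")

class DateLogicTests(unittest.TestCase):
    def test_century_rule(self):
        self.assertTrue(is_leap_year(2000))   # divisible by 400
        self.assertFalse(is_leap_year(1900))  # divisible by 100 but not 400
        self.assertTrue(is_leap_year(2004))
        self.assertFalse(is_leap_year(2007))

    def test_injected_date(self):
        # No waiting a week for Thursday: just inject one.
        clock = SchedulerClock(date(2007, 10, 4))
        self.assertEqual(clock.day_of_week(), "Thursday")

if __name__ == "__main__":
    unittest.main()
```

The same pattern extends to any time-dependent logic: the production code asks the clock object rather than the system clock, so the "years of leap years" collapse into a handful of injected dates.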
As an example of a high-level API, you might have a system with two (or more) major components, e.g. the UI layer and the data layer, each developed by separate people. There's a well-known API between these layers which changes rarely. It can be well worth unit-testing each such component individually. You needn't have a unit test for each private/internal class within the implementation of each component, but have a suite of tests for each component's public interface to a) verify that it satisfies various expected use cases, and b) to automate its regression tests after refactoring its internal implementation.
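A sketch of testing one component through that well-known API, with the other layer faked out (the `DataStore`/`ReportService` names are hypothetical, invented for illustration):

```python
from typing import Protocol

class DataStore(Protocol):
    """The agreed-upon API between the UI layer and the data layer."""
    def fetch_orders(self, customer_id: int) -> list: ...

class FakeDataStore:
    """Stands in for the real data layer behind the shared interface."""
    def __init__(self, orders: dict):
        self._orders = orders

    def fetch_orders(self, customer_id: int) -> list:
        return self._orders.get(customer_id, [])

class ReportService:
    """UI-facing component; tested only through its public interface."""
    def __init__(self, store: DataStore):
        self._store = store

    def total_spent(self, customer_id: int) -> float:
        return sum(order["amount"] for order in self._store.fetch_orders(customer_id))

# Tests exercise the component's public API, not its internal classes.
service = ReportService(FakeDataStore({42: [{"amount": 10.0}, {"amount": 5.5}]}))
assert service.total_spent(42) == 15.5  # expected use case
assert service.total_spent(99) == 0.0   # customer with no orders
```

Because the tests only touch the public interface, the internals of `ReportService` can be refactored freely and the same suite doubles as the regression check.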
I tend to start by testing individual units, and they tend to be classes. But in complex projects that isn't enough, and I tend to end up with tests that put several classes together and test their interactions. This is easy to do because I started by testing at the class level, so I have interfaces and mock implementations at the right levels. Plus my coupling is low, so I can plug and play with real or mock objects at all levels to put together collections of objects that allow me to test as much or as little as seems right for the task at hand.
Something that worked really well for me on one project was to plug together 90% of a system and mock out the bottom and top and then have tests that ran the whole thing. I then documented a walk through of the tests as part of a 'this is how the system hangs together' document which let you walk through a debug session which showed you what the system did with pre-canned inputs. It worked pretty well. (more here: http://www.lenholgate.com/archives/000378.html)
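A sketch of that "mock the bottom and the top" shape (all names here are illustrative; the middle layer is stubbed as a trivial transform to stand in for the real 90% of the system):

```python
class CannedInput:
    """Bottom of the stack: replays pre-canned messages instead of reading a real source."""
    def __init__(self, messages):
        self._messages = list(messages)

    def read(self):
        return self._messages.pop(0) if self._messages else None

class RecordingOutput:
    """Top of the stack: records what the system would have sent out."""
    def __init__(self):
        self.sent = []

    def write(self, msg):
        self.sent.append(msg)

def run_pipeline(source, sink):
    """The real middle layers would sit here; this stub just uppercases each message."""
    while (msg := source.read()) is not None:
        sink.write(msg.upper())

sink = RecordingOutput()
run_pipeline(CannedInput(["hello", "world"]), sink)
assert sink.sent == ["HELLO", "WORLD"]
```

With the ends mocked like this, a debug session through `run_pipeline` with pre-canned inputs can serve as the "this is how the system hangs together" walkthrough the post describes.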
So, in summary, it depends. Write tests at a low level (i.e. class-based) to prove that each class works, then compose your classes so you can prove that groups of classes work well when plugged together. Sometimes you'll want to do this, sometimes it's overkill.
Thursday, October 04, 2007
Personally, I always start with the shared/common libraries first. They're the most likely things to change without you noticing and the benefit can be shared across projects. Then I build up from there along a) individual functional lines or b) where I'm running into problems.
Most of the applications I've done have only a few percent code coverage; I'm in the middle of one now that has upwards of 90% and it's *amazing*.
Anyone can jump into the code anywhere and know that things work and whether or not they've broken something. Minor changes really do look and feel minor.
Friday, October 05, 2007
Yeah, coverage ceases to be an issue when you do TDD with unit testing. My experience is that, if you do TDD, you should easily get 80-90% coverage automatically.
Anyway, unit testing IS work. No doubt. The question is: is it saving you more than you invest? If so, then you'd be pretty stupid not to do it, eh?
That said, it is relatively easy to get into a situation where your unit tests do cost you more than you get out. The book xUnit Test Patterns might be a useful reference to avoid that situation.
Friday, October 05, 2007
This topic is archived. No further replies will be accepted.