The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Freshly written legacy code

I'm really warming to the idea that testability is the essence of good design as suggested by Mike Feathers on his blog:
http://www.artima.com/weblogs/viewpost.jsp?thread=42486 .

I'm reading his book 'Working Effectively with Legacy Code' where he defines legacy code simply as code that has no unit tests.

The literal definition of legacy code is code that we pass on to someone else. I think Mike's definition captures what we actually mean when we say legacy code: code that has passed away before it's even passed on.
Chris Steinbach
Tuesday, August 30, 2005
 
 
Unit tests don't answer the interesting question about software: why. Its related friend is intent. Unit tests just encode what someone thinks the code should do.

How does that help you?

When I come upon new code, as I have recently, and it has a pile of tests, I think BFD. There are a lot of tests. That doesn't help me figure out a thing about the code.

It helps me know if I've broken something. But that's not what makes code a legacy.
son of parnas
Tuesday, August 30, 2005
 
 
You say that the tests are only helping you figure out if something broke. Generally that's all I ask of unit tests, but I think there is more potential value in them.

I've worked in some software shops where test code is written, but generally with (much) poorer quality than the production code. It was not easy to see what was tested and what was not. Sometimes it was not even clear from the tests how a class should be used.

On the other hand I have seen (and hopefully written) better quality tests that are easy to understand and simple to extend. Can you imagine this being more helpful?
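For instance, a test written with readers in mind can double as usage documentation. A minimal Python sketch of the idea (the `Stack` class and the test names are hypothetical, invented purely for illustration):

```python
import unittest


class Stack:
    """Hypothetical class under test."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()


class StackUsageTest(unittest.TestCase):
    """Each test name states one expectation, so the suite reads as documentation."""

    def test_pop_returns_items_in_reverse_push_order(self):
        stack = Stack()
        stack.push("first")
        stack.push("second")
        self.assertEqual(stack.pop(), "second")
        self.assertEqual(stack.pop(), "first")

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```

Run with `python -m unittest`; someone meeting `Stack` for the first time can learn its contract from the test names alone.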
Chris Steinbach
Tuesday, August 30, 2005
 
 
>testability is the essence of good design

so no one did "good design" until 1995 or so (or whenever unit tests came en vogue)?
i'm eating chips
Tuesday, August 30, 2005
 
 
> Can you imagine this being more helpful?

Certainly more helpful, but not more helpful for making the code less legacy like.

> so no one did "good design" until 1995 or so (or whenever unit tests came en vogue)?

Many of us were doing unit testing way before that. We just thought it was like using good variable names, obvious.
son of parnas
Tuesday, August 30, 2005
 
 
*Unit* tests are not 1/10 as useful as some proponents (such as the referenced article) claim. *Tests* are, but *Unit* tests aren't, and there's a huge difference.

Of the bugs that I have introduced in the last couple of years, I'd estimate that ~95% of the "unit" bugs were so simple that they were found on the first full program run, 4% were found with time and in QA, and 1% are still lurking. I don't think "unit" tests would have improved that much.

The bugs that matter more are interaction / full-system bugs, such as race conditions, resource ownership problems, etc. These can't really be unit tested, and can usually only be tested automatically in a *very* specific scenario.

Furthermore, I've noticed that if the same person writes both the unit test and the code, they are often simultaneously wrong.

Personally, I feel that the industry is giving unit tests _way_ too much credit.

With the risk of being boring, I point out again: *Testing* is extremely important. However, IMHO the usefulness of *Unit* testing is blown out of proportion.
Ori Berger
Tuesday, August 30, 2005
 
 
Ori:

It depends - if the unit tests are *cheap*, they're not out of proportion. It's nice to have a safety net that catches mistakes during major refactoring operations.

However, once they start getting expensive, they lose value. I was working on a project with quite a nice set of unit tests, and we abandoned them because they added two minutes to each compile. No way to get fast turnaround.

Unit tests can only be made to work in an environment that is already very easy and cheap to modify. C++ with its arcane syntax doesn't qualify. Java works quite nicely with them. Python/Ruby/Smalltalk (which are all interpreted, more or less) are the ideal environment.

Funny - once again, a tool does not fit every task at hand. I'm sure the industry as a whole will learn that lesson in a couple hundred years...
Robert 'Groby' Blum
Tuesday, August 30, 2005
 
 
On version 1, the unit tests have some value.

On version 2, the unit tests ensure that the new features don't break old features.

By version 10, you've got crusty old code written by a guy straight out of college who's changed jobs three times since it was written and you need to add a new feature. At this point, unit tests are moderately useful.

It's perfectly possible to write code, see if it works, and if it doesn't then the great big 'access violation' dialog box lets you know. Then you debug it, and once it looks good, it's done. Once you start refactoring and redesigning previously working code, unit tests mean that you hit one button and get an executable plus a "you didn't break anything, it's all good" message. Unless you rewrite from scratch for every release, this is helpful.

So at best 1% of your bugs are still lurking?  What will happen in three years when you move on to bigger and better things and the next guy to come along breaks your code while adding a new feature?


> The bugs that matter more are interaction / full-system bugs, such as race conditions, resource ownership problems, etc.

Testing OS integration is somewhat complex. Testing resource handling systems is both possible, and important for ensuring that the problems don't occur in the first place. Race conditions can be checked for, but it's important to know that they can occur.

In some ways I agree, and often it feels like the TDD crowd believe unit tests can give 100% reliability, which simply isn't true. But still, good unit tests are one tool that should never be overlooked because the long term benefits are definitely there.


> Furthermore, I've noticed that if the same person writes both the unit test and the code, they are often simultanously wrong.

Not possible. The code is right if it passes the tests, and if it's not passing then it's not finished so calling it "wrong" is premature. Of course, if the tests are wrong then nothing will work well, but that's no different to saying that manually checking the code will give an incorrect result if the person doing the testing doesn't understand what the code is supposed to do.

Again, there are other tools that help (code reviews, pair programming, having QA look over the end product, actually doing a bit of design and thinking about stuff in a group), but it's important to blame the user, not the tool.

You can build a bad house with a good hammer - and that doesn't prove that hammers are unhelpful.

Tuesday, August 30, 2005
 
 
"*Unit* tests are not 1/10 as useful as some proponents (such as the referenced article) claims. *Tests* are, but *Unit* tests aren't, and there's a huge difference."

"Of the bugs that I have introduced in the last couple of years, I'd estimate....  I don't think "unit" tests would have improved that much."


In other words, you think unit tests are next to useless...and you base this on the fact that you've never actually tried using them.

Interesting.
Kyralessa
Tuesday, August 30, 2005
 
 
Actually, I have been using them. A variation, anyway:

For *really* independent stuff (data structures, format decoders, etc.) I do write unit tests.

For stuff that isn't as independent, I write a *system* test, which doesn't try to independently test everything (that would require a mock framework, etc.). The system test is much less detailed in its analysis, but -- hopefully -- detailed enough in the torture it puts the system through. The system test has a property that:
- if it passes, I can have reasonable expectations that every component used passes.
- if it fails, I can easily enough find the reason why it failed.

Such a test can be, e.g. 400 lines long with a single pass/fail result at the end. Such a system test can cover significantly more system states, and I get to spend the time to pinpoint only when the test fails.

Contrast with the standard JUnit practice of 5 lines/test * 80 tests. Commonly, the setup/teardown sets up a very simple frame in which you can be sure of the results. You pay the exact-pinpoint price upfront.

It's a tradeoff; The exact balance depends very much on your skill level and experience. The next statement might sound a little (?) arrogant, but I would rather not hire someone who needs unit testing for everything they do to produce quality code.

To make the discussion less abstract, assume for a moment we have the task of implementing a hash table. This can qualify under "independent", but let's qualify it as "somewhat dependent" for the sake of argument (and see remark [1] below).

The common "unit testing" mindset would make us create tests for inserting, deleting and updating values; If we're smart, we'd even set up various states, and test the operations in those states.

The "system testing" approach would be to create a much smaller number of tests (one or two would be reasonable), which do lots of operations, e.g., create several "pseudo random" hash tables, compute their unions, intersections, whatevers, and eventually converge to a simple testable true/false result. I would also make the pseudo randomness parametric, and run the test with many parameter settings.

Personally, if the system setting test passes, I'm _much_ more confident that the implementation is correct than I would be with unit testing.

If it fails, I will have to improve the test to pinpoint the problem. The test needs to be long enough to be meaningful, but simple enough to be inspectable.
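A rough Python sketch of that style, under some assumptions: the `HashTable` here is a deliberately naive stand-in for the implementation under test, the built-in `dict` serves as the oracle, and the seed is the "parametric pseudo randomness" knob described above:

```python
import random


class HashTable:
    """Naive open-addressing table, standing in for the implementation under test."""

    def __init__(self, capacity=8):
        self._slots = [None] * capacity
        self._count = 0

    def _probe(self, key):
        # Linear probing; load factor is kept < 0.5, so an empty slot always exists.
        i = hash(key) % len(self._slots)
        while self._slots[i] is not None and self._slots[i][0] != key:
            i = (i + 1) % len(self._slots)
        return i

    def put(self, key, value):
        if self._count * 2 >= len(self._slots):
            old = [s for s in self._slots if s is not None]
            self._slots = [None] * (len(self._slots) * 2)
            self._count = 0
            for k, v in old:
                self.put(k, v)
        i = self._probe(key)
        if self._slots[i] is None:
            self._count += 1
        self._slots[i] = (key, value)

    def get(self, key):
        slot = self._slots[self._probe(key)]
        return slot[1] if slot is not None else None


def system_test(seed, operations=5000):
    """Torture the table with pseudo-random puts/gets, checking against dict."""
    rng = random.Random(seed)  # parametric randomness: rerun with any seed
    table, oracle = HashTable(), {}
    for _ in range(operations):
        key = rng.randrange(200)
        if rng.random() < 0.7:
            value = rng.randrange(10**6)
            table.put(key, value)
            oracle[key] = value
        elif table.get(key) != oracle.get(key):
            return False  # single pass/fail result, as described
    return all(table.get(k) == v for k, v in oracle.items())


# Run the same torture test under several parameter settings.
assert all(system_test(seed) for seed in range(10))
```

One long test, many states covered, a single true/false at the end; pinpointing is deferred until a seed actually fails.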

Would I be happy with these tests? Not really. I have no way to properly test thread safety or other race conditions in the implementation. E.g., if this hash is using a user-supplied object/class for a key, key comparison might cause inconsistent reentrance, deadlock and various other problems I can't expect to cover in a reasonable system test (let alone in unit tests).

[1] Python has an outstanding array of unit tests. Yet, it took several years (!) of constant banging to make one of its most fundamental data structures, the dict, not crash (I think they only got it right in 2.1 or 2.2, 8 years or so). No reasonable unit or system test could give significant guarantee for that. Python has outstanding quality.

[2] GCC had effectively no unit tests last time I played with it (2.x days). It had many system tests, though. It too, has outstanding quality.

Summary of my opinion: Tests are very important. Unit tests can be useful at times, but get promoted without proportion to their real merit, and unfortunately at the expense of more comprehensive testing (and even proper design, but I'll leave that rant for another day).
Ori Berger
Tuesday, August 30, 2005
 
 
"It's a tradeoff; The exact balance depends very much on your skill level and experience. The next statement might sound a little (?) arrogant, but I would rather not hire someone who needs unit testing for everything they do to produce quality code."

The reason for writing unit tests isn't that you can't write _any_ quality code without them.  Rather, it's that (1) occasionally you'll slip up, and the unit tests will catch this; (2) the unit tests help to document the expected behavior of the system, so you can keep from changing it as you refactor code (or if you need to change it, you'll know exactly how you're doing so); (3) they nail bugs down so that they don't get reintroduced.

"The common "unit testing" mindset would make us create tests for inserting, deleting and updating values; If we're smart, we'd even set up various states, and test the operations in those states."

I'm not sure whose mindset you're referring to here.  Tests are a tool, not an end to themselves.  I'm an advocate for automated unit tests, but I'm working in a legacy system right now.  I don't go through classes writing tests for each individual thing; that's not the sort of thing you can justify to a manager...even if it weren't so mind-numbingly boring that I don't want to do it anyway.  I write unit tests for new code, and when I'm fixing bugs, I write unit tests that capture the bugs so I can know for certain that they've gone and won't be back.

Unit testing doesn't mean you pursue 100% coverage just to brag about it.  It also doesn't mean you go full-bore atomic.  It means you write enough tests to ensure that the system works as it should, and where a bug slips by, you nail it to the wall with a test.
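A minimal sketch of that nail-it-to-the-wall workflow in Python with `unittest`; the `paginate` function, and the bug it supposedly once had, are hypothetical:

```python
import unittest


def paginate(items, page_size):
    """Hypothetical helper that (in this story) once dropped the final partial page."""
    pages = []
    for start in range(0, len(items), page_size):
        pages.append(items[start:start + page_size])
    return pages


class PaginateRegressionTest(unittest.TestCase):
    def test_final_partial_page_is_kept(self):
        # Pins the (hypothetical) bug where a trailing partial page was
        # silently dropped; if it ever comes back, this test fails first.
        self.assertEqual(paginate([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]])

    def test_empty_input_yields_no_pages(self):
        self.assertEqual(paginate([], 3), [])
```

The regression test costs a few lines once, and the bug stays gone for certain rather than "until someone touches that code again."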
Kyralessa
Tuesday, August 30, 2005
 
 
We seem to agree in essence.

The term "unit testing mindset" was referring to virtually all examples of standard unit testing tools (JUnit, CUnit, etc.) I've seen; it is also echoed in almost all introductory texts, magazine articles, etc.
Ori Berger
Wednesday, August 31, 2005
 
 
IMHO, unit tests are limited by how well you (as a software developer) understand the software, and by your perception of how it will be run/used/recovered/installed in environments that you can only approximate.

Unit tests do not ensure well-designed software; they only make sure your software behaves as the tests say it should. There is no easy route to "well-designed software"; it has to be achieved through many factors: communication, developer mindsets, support, etc.

The unit testing process is like laying mines in a field to stop incoming bugs. You don't know how many bugs are out there, and you don't know what they are like: big or small, heavy or light, smart or dull.

The field is like your software design. Good design is a flat terrain, nice and clear. Bad design is full of peaks and troughs that you cannot see.

People who use the number of unit tests as a benchmark are like those who use the number of encryption bits as a security measure.
Joseph Kuan
Wednesday, August 31, 2005
 
 

This topic is archived. No further replies will be accepted.

Other recent topics
 
Powered by FogBugz