The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Why unit test when QA tools can do the same thing?

I dunno if this argument has been used before, but normally when you unit test your code you input some data and make sure the output is right. For example, you could do a registration test where you assign your Registration object some sample values, then check that the data exists in your db and (if you're daring) that an email has been sent. My question is: with tools like Selenium you could verify all of this using a higher-level scripting language (easier) and without using NUnit, so why do you need unit tests?
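(For reference, the kind of unit test I mean might look roughly like the sketch below -- Python's unittest here, and the Registration class with its fake db/mailer is just a placeholder, not a real API.)

import unittest
from unittest.mock import MagicMock

class Registration:
    # Hypothetical stand-in for the registration object described above.
    def __init__(self, db, mailer):
        self.db = db
        self.mailer = mailer

    def register(self, name, email):
        self.db.save({"name": name, "email": email})
        self.mailer.send(email)

class RegistrationTest(unittest.TestCase):
    def test_register_saves_user_and_sends_email(self):
        db, mailer = MagicMock(), MagicMock()
        Registration(db=db, mailer=mailer).register("Deepak", "deepak@example.com")
        db.save.assert_called_once()                               # the record reached the (fake) db
        mailer.send.assert_called_once_with("deepak@example.com")  # the confirmation email went out

if __name__ == "__main__":
    unittest.main()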
Deepak Trama Send private email
Thursday, September 07, 2006
 
 
> so why do you need unit tests?

When you ship your code, how do you know it works?
son of parnas
Thursday, September 07, 2006
 
 
If Selenium fails a test, where do you look for the problem?
If you've got unit tests at each level, with dependencies factored out, then you can pinpoint errors down to a specific method.
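For instance (names made up), with the rate lookup faked out, a failure in this test points straight at Order.total(); a Selenium run exercising the same calculation also drags in the UI, the database, and everything in between:

import unittest

class FakeRateService:
    # Stand-in for the real dependency -- no network, no database.
    def tax_rate(self):
        return 0.10

class Order:
    def __init__(self, rate_service):
        self.rate_service = rate_service
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        return sum(self.items) * (1 + self.rate_service.tax_rate())

class OrderTotalTest(unittest.TestCase):
    def test_total_applies_tax(self):
        order = Order(FakeRateService())
        order.add(100.0)
        self.assertAlmostEqual(order.total(), 110.0)

if __name__ == "__main__":
    unittest.main()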

Plus, if you're already in the habit of writing retrospective unit tests, then the switch to test-first is that much easier.
G Jones
Thursday, September 07, 2006
 
 
And, since unit tests (should) happen earlier, fixing found errors costs less overall.

I don't remember the exact statement, but relative error costs go something like this (think this is from The Mythical Man-Month, but ICBW):

found during design, cost=1
found during coding, cost=10
found during QA, cost=100
found post-release, cost=1000
a former big-fiver Send private email
Thursday, September 07, 2006
 
 
"Plus, if you're already in the habit of writing retrospective unit tests, then the switch to test-first is that much easier."

And doing the tests in advance will open your eyes to all sorts of things concerning interfaces/parameters needed and generally how things will fit together.
KC Send private email
Thursday, September 07, 2006
 
 
Deepak, you shouldn't look at unit tests as tests, despite the name. They've been somewhat misnamed, and the more recent behavior-driven development movement comes closer in that regard.

Unit tests are, in essence, formalized statements about your program's functionality. E.g. you need a function that will add two numbers: you write one or more tests that challenge that feature, like:

test.assertEqual(myClass.add(1, 1), 2)

You can add several "tests", especially if the unit accepts a wide range of arguments (as in this example) and you want to test for border values.
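Fleshed out a little (Python's unittest here; MyClass and add() are just the placeholder names from the example), that might look like:

import unittest

class MyClass:
    def add(self, a, b):
        return a + b

class AddTest(unittest.TestCase):
    def test_simple_sum(self):
        self.assertEqual(MyClass().add(1, 1), 2)

    def test_border_values(self):
        self.assertEqual(MyClass().add(0, 0), 0)
        self.assertEqual(MyClass().add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()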
Berislav Lopac Send private email
Thursday, September 07, 2006
 
 
Consider unit tests as a design tool.
Pakter
Thursday, September 07, 2006
 
 
I've never been sold on the value of test-driven development. Unit tests are fine for simple utility methods that have discrete inputs and outputs. But how do you test complex business processes and flows with things like assert statements? It is just too hard to write tests to cover all of the possible scenarios you can encounter for something as complex as, say, the process of hiring an employee or ordering merchandise from a third-party supplier.

Also, the quality of the unit test is highly dependent upon the person writing it, which is usually the same guy/gal who ends up writing the code anyway. If they're not likely to do a good job of testing manually, what makes you think they will do any better at writing unit tests?

The two biggest benefits that I see from test driven development are the following:

1) It forces you to ask the question "what is this thing supposed to do?" right upfront instead of when it hits QA. But that really "should" be accomplished during the design process. Using TDD to solve a design/requirements issue seems like overkill to me.

2) It gives you a facility for automating regression testing. But as the OP pointed out, this can also be accomplished through QA scripts "after the fact".

I've honestly only been involved in doing TDD once and it really wasn't worth it. We didn't end up with comprehensive code coverage, maintaining unit tests through source control was a pain, and the people writing the tests would consistently overlook important edge cases just like they would have with manual testing. We ended up dumping it and just performing formal code reviews. The code review process was much cheaper and found just as many bugs.

Your mileage may vary.
anon
Thursday, September 07, 2006
 
 
Unit tests are not supposed to give you a lot of code coverage: all you need to test are your public methods, i.e. objects' interfaces. You don't need to test the code that is called internally by other code -- if it isn't covered by the public entities, you have a whole other problem.

"ut how do you test complex business processes and flows with things like assert statements?"

Remember, unit tests are not supposed to test "processes and flows" -- they are supposed to test *units*, e.g. methods of an object. You need to break your process into units for it to be testable -- but then, a complex process should be broken into units in any case.

Unit tests really require a "switch" in your way of thinking -- once you "get" them, it's plain simple and natural. An approach that helps that switch happen is to see them as formalized requirements statements, or even formalized user stories. Once again, despite the name, unit tests are *not* tests; they got that moniker because they use testing or asserting to determine whether a feature is implemented.
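As a made-up illustration of breaking a process into units: instead of asserting on a whole "hire an employee" flow, you pull small, publicly callable pieces out of it and challenge each one on its own:

import unittest

def prorated_salary(annual_salary, days_worked, days_in_year=365):
    # One small unit pulled out of a larger "hire an employee" process.
    if days_worked < 0 or days_worked > days_in_year:
        raise ValueError("days_worked out of range")
    return annual_salary * days_worked / days_in_year

class ProratedSalaryTest(unittest.TestCase):
    def test_full_year(self):
        self.assertEqual(prorated_salary(36500, 365), 36500)

    def test_partial_year(self):
        self.assertAlmostEqual(prorated_salary(36500, 100), 10000)

    def test_rejects_impossible_values(self):
        with self.assertRaises(ValueError):
            prorated_salary(36500, -1)

if __name__ == "__main__":
    unittest.main()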
Berislav Lopac Send private email
Thursday, September 07, 2006
 
 
> Unit tests are not supposed to give you a lot
> of code coverage: all you need to test are your
> public methods

There are a lot of unit test philosophies out there, and I don't see this one as any different from running a batch of system tests after the fact.

If you are writing tests while developing, in response to every line of code you write, then you should have excellent coverage. This means you don't just test public methods. You test everything that could break and that's all the code you write.
son of parnas
Thursday, September 07, 2006
 
 
"1) It forces you to ask the question "what is this thing supposed to do?" right upfront instead of when it hits QA. But that really "should" be accomplished during the design process. Using TDD to solve a design/requirements issue seems like overkill to me."

Many of the advocates for TDD would argue that formal design is the overkill.  I think you are correct, that if you do, for example, RUP-style design, then unit testing becomes, to at least some degree, redundant.

But what if TDD can replace the design?  There is little question that writing the tests is a fairly quick process.  So the question is - if the design that comes out of the TDD process is sufficient, isn't it the design that is overkill?

"2) It gives you a facility for automating regression testing. But as the OP pointed out, this can also be accomplished through QA scripts "after the fact". "

And as another poster pointed out, QA scripts test the application at an insufficient level of granularity. Yes, you know which use case scenario broke, for example, but a QA test will rarely tell you which specific method broke. Automated unit testing will tell you the method, or, with some dependency issues, a small group of methods. Without the unit tests, refactoring becomes a minefield. Without refactoring, the design debt of an application builds up -- you get more and more cruddy code that reduces future maintainability.
coderprof Send private email
Thursday, September 07, 2006
 
 
"You test everything that could break and that's all the code you write."

That's certainly an approach, but it doesn't make sense if we're talking unit tests. The most important thing about unit tests lies in the word "unit" -- they are certainly inappropriate for testing code that isn't structured at a high level and that has many dependencies. They are ideal if your code is highly encapsulated into discrete units without many side effects -- like functions or classes -- and thus they are a natural fit for object-oriented and functional styles of programming.

When your code is compartmentalized into units, then it becomes unnecessary to check each and every line of code -- you need to verify only the units which can be called from the outside, i.e. public methods if we're talking about OOP. And if that doesn't cover all your code, even indirectly, then you must be doing something wrong, because you wrote code that will never be used.

The idea behind unit tests is a simple and effective one: they help you write only the code you really need to write, which is a great productivity boost.
Berislav Lopac Send private email
Thursday, September 07, 2006
 
 
I use unit tests to drive the design of the unit I'm working on. Working in the TDD style has had a couple of advantages for me.

The first is a Rolling Stones song. I've had the interface to the code I'm writing warp radically away from what I originally expected. This has invariably been a good thing, as where I ended up was where I needed to go, not where I thought I would be going. "You can't always get what you want, but you get what you need."

The second is a psychological thing, and it may be unique to me. I noticed a trend in my work a long time ago. The longer I go between running the code, the slower I get at coding. I remember one time when I was working on some really ugly COM stuff and I didn't have running code for two weeks. By the end of the two weeks I wasn't writing more than 10 lines of code a day.

With TDD, the tests are always there, and the code is pretty much always runnable, if not 100% functional. As a result, if I start to slow down, I just run the tests again. The discipline involved in always writing a test first really keeps the code runnable, and keeps my confidence in my changes high.

The regression test suite you get out at the end is nice to have, but it's really just a side effect.
Chris Tavares Send private email
Friday, September 08, 2006
 
 
Ok, I guess the people who really follow the 'write tests first' philosophy seem to benefit more from this than those who write the tests after the code is done to sort of 'test' it.
Deepak Trama Send private email
Friday, September 08, 2006
 
 
"Ok, I guess the people who really follow the 'write tests first' philosophy seem to benefit more from this then those who write it after the code is done to sort of 'test' it."

One of the issues I have with writing the tests after the code (unit) is "done" is that I tend to overthink my code.  I may write it, and then question it, asking myself if it really works like I want.  I may spend time manually tracing a tricky part of the algorithm to convince myself it works. 

Or I may try harder to make sure it works as I write it.  I don't want to lay down a line of code unless I'm fairly sure it's correct.  In either case, I'm coding defensively.  I'm working to prevent bugs, rather than focusing on writing code.

When I write the tests first, I simply write the code.  And then run the tests.  There's no point in tracing the code if the tests say it works.  As it turns out, my first instincts about an algorithm are usually right, so rather than creating self-doubt as I write without the tests, I am able to let the tests be the ultimate judge. 

It's purely psychological, but I found myself being an "offensive" coder rather than a "defensive" one, and I find that to be a productivity booster.
coderprof Send private email
Friday, September 08, 2006
 
 
Please understand that another (often overlooked) element of the TDD/automated unit testing idea is that when someone reports a bug in production code, you are supposed to:

a) add a test to the test suite which exercises the anomaly
b) fix the code
c) rerun all the tests
d) deploy the corrected code

This is (of course) not always feasible (e.g. subtle threading problems, resource-starvation-related problems, etc.), but it stresses the idea that code should not run in production until all the tests pass, and it protects against regressions.
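Step (a) might look something like this (the bug and the names are invented for the sake of the example) -- a test that reproduced the reported anomaly, failed until the fix went in, and now stays in the suite to guard against regressions:

import unittest

def total_with_discount(amount, discount):
    # Hypothetical code under test; the bug report said a 100% discount
    # produced a negative total.
    if not 0 <= discount <= 1:
        raise ValueError("discount must be between 0 and 1")
    return amount * (1 - discount)

class DiscountRegressionTest(unittest.TestCase):
    def test_full_discount_gives_zero_not_negative(self):
        self.assertEqual(total_with_discount(100.0, 1.0), 0.0)

if __name__ == "__main__":
    unittest.main()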
paolo marino Send private email
Friday, September 08, 2006
 
 
"a) add a test to the test suite which exercises the anomaly
b) fix the code
c) rerun all the tests
d) deploy the corrected code

This is (of course) not always feasible (e.g. subtle threading problems, resource-starvation-related problems, etc.), but it stresses the idea that code should not run in production until all the tests pass, and it protects against regressions."

It's usually feasible, just with great difficulty. For resource starvation, you often have to overload the resource allocation functions to simulate a failure so you can test your fix. The same goes for subtle threading problems: it's doable, but it can eat a chunk of time -- however, it does ensure the problem is never reintroduced into the product.
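For example (hypothetical code, Python used just for brevity), the "overload the allocation function" trick can be as simple as patching the call that acquires the resource so it fails on demand:

import unittest
from unittest.mock import patch

def load_config(path):
    # Hypothetical code under test: must not crash if the file can't be opened.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""  # fall back to an empty config

class LoadConfigStarvationTest(unittest.TestCase):
    def test_survives_open_failure(self):
        with patch("builtins.open", side_effect=OSError("out of file handles")):
            self.assertEqual(load_config("app.cfg"), "")

if __name__ == "__main__":
    unittest.main()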
anonymous for this one Send private email
Friday, September 08, 2006
 
 
Chris Tavares:
>> The second is a psychological thing, and it may be unique
>> to me. I noticed a trend in my work a long time ago. The
>> longer I go between running the code, the slower I get at
>> coding.

I've noticed the same thing in my work habits. When I'm working in untested code, I'll always be mentally tracing through it to test my assumptions in a kind of background process. Until they've been tested, though, I'll keep scanning through the same group of assumptions (my brain just won't let me put it down until I know it works.) The longer it is between test cycles, the more assumptions clutter my brain.

This is probably driving me more towards TDD than any of the usual espoused benefits.
dwayne
Friday, September 08, 2006
 
 
Personally I don't always follow a TDD way of working, as sometimes I think through a design/problem by writing code, not a test.  But to me one great value of having tests is they detect dependencies.  And this idea scales.

So a suite of tests for the methods of a class detects (some, I know) breaking changes that you might have made in one method's implementation that affect others. And if you're building something like a library API for others to use, you may have many methods. If you get into the discipline of re-running the tests frequently (after every "significant" change), you catch the bug right away, right then, before you've built other code on top of it.

And if you can integrate running tests in a frequent build process, tests catch breaking changes where a change in one module breaks others which depend on it.

So the utility of tests for catching dependency-related bugs scales rapidly (without being formal about supporting this assertion), even if each one in itself is seemingly "obvious" -- because *together* they scale beyond any one person's ability to track all possible dependencies in the system.
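Wiring the suite into the build can be as little as this (assuming, say, that the tests live in a "tests" directory) -- the build fails as soon as any test does:

import sys
import unittest

# Discover and run every test, and exit non-zero (failing the build) on any failure.
suite = unittest.defaultTestLoader.discover("tests")
result = unittest.TextTestRunner().run(suite)
sys.exit(0 if result.wasSuccessful() else 1)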
Mark S. Weiss Send private email
Saturday, September 09, 2006
 
 

This topic is archived. No further replies will be accepted.

 
Powered by FogBugz