The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Test Driven Development - help

I've read quite a few articles about TDD and it looks great in theory.  I'm getting frustrated, though, with the simplistic examples--yeah, I could easily envision writing a test case for a function like:
int AddTwoInts(int var1, int var2)

However, how do you do it for something harder?  For example, if I want to find out if a particular process is running, I'd need a function to go and get a list of all running processes.  How do I write a test for that one?  It seems like I'd have to stub out the bulk of the function (defeats the purpose), or re-implement the function in the test case so I'd know the right answer to compare against (which is of questionable worth).

Since most of my work deals with interfacing with external entities (OS, network, etc.), I'm always frustrated when I try to find a way to use TDD in what I do.

Am I missing something?
Friday, June 23, 2006
Sometimes you can do this sort of thing with "mock" objects that implement the same interface as the object used by the thing you're testing, but return known, "cooked" data when the object you're testing calls its methods.

I don't know offhand how this might apply to your example of detecting whether a process is running, but I've used it to good effect in testing units that interface with the DB.

You might have a ValidationReportGenerator class whose constructor takes an object that implements the IDatabaseMapper interface, for example.  Then you have two concrete classes that implement IDatabaseMapper: a DBInterface class that actually makes calls to the database, and a MockDBInterface class that only pretends to, and actually returns canned data that you know about.

So where the main code might have:

ValidationReportGenerator vrg = new ValidationReportGenerator( new DBInterface() );

the unit test will have something like:

ValidationReportGenerator vrg = new ValidationReportGenerator( new MockDBInterface() );
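
Fleshed out a bit, the pieces might look something like this (the method names, like fetchValidationRecords, are made up purely for illustration):

import java.util.Arrays;
import java.util.List;

// The seam: the report generator only ever sees this interface.
interface IDatabaseMapper {
    List<String> fetchValidationRecords();
}

// Production implementation -- really talks to the database.
class DBInterface implements IDatabaseMapper {
    public List<String> fetchValidationRecords() {
        // real database access would go here
        throw new UnsupportedOperationException("needs a live database");
    }
}

// Test implementation -- only pretends to, and returns canned data we know about.
class MockDBInterface implements IDatabaseMapper {
    public List<String> fetchValidationRecords() {
        return Arrays.asList("record one", "record two");
    }
}

class ValidationReportGenerator {
    private final IDatabaseMapper db;
    ValidationReportGenerator(IDatabaseMapper db) { this.db = db; }
    int recordCount() { return db.fetchValidationRecords().size(); }
}

// In the unit test, the generator never knows the difference:
//   ValidationReportGenerator vrg = new ValidationReportGenerator(new MockDBInterface());
//   assertEquals(2, vrg.recordCount());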

I don't know if this answers your question, but it's been a handy technique for me.
Jesse Smith Send private email
Friday, June 23, 2006
It does take work; you aren't missing that.

I would do something like:

  start a hard coded process
  see if it's running
  kill process

  kill process
  check if it's running
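
In JUnit terms that might look roughly like this (ProcessUtils.isProcessRunning is just a stand-in name for the function under test, and the Unix "sleep" command is only an example of a hard coded process):

import static org.junit.Assert.*;
import org.junit.Test;

public class ProcessCheckTest {
    @Test
    public void findsAProcessThatIsRunning() throws Exception {
        Process p = new ProcessBuilder("sleep", "30").start();    // start a hard coded process
        try {
            assertTrue(ProcessUtils.isProcessRunning("sleep"));   // see if it's running (hypothetical function under test)
        } finally {
            p.destroy();                                           // kill process
        }
    }

    @Test
    public void doesNotFindAProcessThatWasKilled() throws Exception {
        Process p = new ProcessBuilder("sleep", "30").start();
        p.destroy();                                               // kill process
        p.waitFor();                                               // wait until it's really gone
        assertFalse(ProcessUtils.isProcessRunning("sleep"));      // check if it's running
    }
}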
son of parnas
Friday, June 23, 2006
A test means that you are checking that, for a known stimulus, you get a known result.

So, if you need to test your module against a network stimulus, your module should produce a specific result.

In this case your module's test code will require some specific network and/or server setup prior to the test - and your unit test should reflect that.

However, suppose you are writing a network protocol parser. It's not necessary to have a network connection with a properly operating stream generator to test the parser itself. Instead, a static store of test data to run the parser against is a much better idea. Additionally, if you've coded the parser to be inseparable from the network, then that was a bad design decision. Take this opportunity to separate the two.
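
For example (the HeaderParser class and its methods are invented names, just to show the shape), if the parser works on a byte array rather than on a socket, the test needs no network at all:

import static org.junit.Assert.*;
import org.junit.Test;

public class HeaderParserTest {
    @Test
    public void parsesTheVersionField() {
        byte[] canned = new byte[] { 0x01, 0x02 };        // static test data, no network
        HeaderParser parser = new HeaderParser();          // hypothetical parser class
        Header header = parser.parse(canned);
        assertEquals(1, header.majorVersion());
        assertEquals(2, header.minorVersion());
    }
}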

And if, heaven forbid, your app is UI based and has not separated the network-related calls from the UI, or from UI events, then please self-terminate now...
Friday, June 23, 2006
There are two different kinds of tests. The scenario described by the OP is a system test, which tests the system's features and is largely black box based. The example the OP used is unit testing, which is white box based. Normally we require each programmer to test his own code with unit test cases, and the rule of thumb is to reach 90% code coverage. Once unit testing is done, the binary is passed to a tester to perform system testing, which emphasizes the features, extreme scenarios, and so on. You cannot use unit test cases for system testing, and vice versa.
Friday, June 23, 2006
I don't think it's a system test in this case because there's no configuration needed. You should be able to create a process from a unit test context.
son of parnas
Friday, June 23, 2006
For starters, when you're done with a module, try writing applications that use your module. If you don't like what the app code looks like, refactor your interfaces (APIs and such) until you get a nice and clean way to write your code.

Then use this code as your unit tests, or maybe as sample code.
Dino Send private email
Friday, June 23, 2006
To me, a unit test is almost like another program -- anything necessary can be done.
Friday, June 23, 2006
>  anything necessary can be done.

The boundary between unit test and system test is important though, as was pointed out earlier.

If your unit test requires MySQL to be up and you are running other MySQL instances, say because of multiple builds going on, then this amount of configuration would put it into the system test area, imho.
son of parnas
Friday, June 23, 2006
The unit test cases should not require extensive configuration. The goal is to achieve decent code coverage. Assume the function's purpose is to kill a process with a specific name or PID; the unit test case could launch a process with that name, and then kill it. The unit test is complete as long as decent code coverage is achieved. It is for the system tests to find out whether the code works under situations such as low memory and poor network conditions.
Friday, June 23, 2006
+1 for son of parnas.

Unit tests should be able to run out of the box all by themselves without any extra configuration, etc. If you really need to configure a system then that definitely runs into the system testing realm.

Unit tests are just that - small individual tests at the method/class level. System tests test all of the components together.
QADude Send private email
Friday, June 23, 2006
I have even heard TDD experts say that if you are opening a database connection, then you are not doing TDD.  You still have to do those sorts of tests, but they're more in the system arena.  Mock objects are wonderful in this regard.  You don't want to have dependencies on configurations, so mock objects are your friends here.
Mike Stephenson Send private email
Sunday, June 25, 2006
I think there's a middle ground.

Unit tests do not have to run in a vacuum. If you have a module with clearly defined inputs and outputs, it's perfectly acceptable to require that all of the inputs pass their own tests first.

A good build tool such as Ant or NAnt is valuable here because the tool can construct the necessary framework to test the more complex functions. For example, someone mentioned testing database functions. An Ant build can be structured to first unit test connecting to a database, then unit test constructing a table, then inserting data into the table, reading the data back out, updating the data, then unit testing your complex SQL query, then deleting the data, dropping the table, and otherwise cleaning up after yourself. If the script is composed properly, you simply insert your module at the appropriate place.
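
Roughly the sequence being described, sketched as plain JDBC (the in-memory HSQLDB URL and the table are made up for the example; in practice the build script would supply the connection details):

import java.sql.*;

public class DatabaseRoundTrip {
    public static void main(String[] args) throws Exception {
        Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        try {
            Statement s = c.createStatement();
            s.execute("CREATE TABLE widgets (id INT, name VARCHAR(50))");          // construct a table
            s.execute("INSERT INTO widgets VALUES (1, 'sprocket')");               // insert data
            ResultSet rs = s.executeQuery("SELECT name FROM widgets WHERE id = 1"); // read it back out
            rs.next();
            if (!"sprocket".equals(rs.getString("name")))
                throw new AssertionError("read back the wrong value");
            s.execute("UPDATE widgets SET name = 'gear' WHERE id = 1");            // update the data
            s.execute("DELETE FROM widgets");                                      // delete the data
            s.execute("DROP TABLE widgets");                                       // drop the table
        } finally {
            c.close();                                                             // clean up after yourself
        }
    }
}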

Then again, this is probably an example of why TDD is overkill for some types of applications.
Monday, June 26, 2006

Presumably your function - the one that needs to go and get a list of all running processes, deal with the data that is returned, and decide if the process you want to check is running - will have some calls to operating system functions and a blob of your own code that deals with the results of those calls and supplies your logic. Your goal, in doing this in a TDD style, would be to be able to replace the operating system calls with code that is under your control, so that you can return fixed data and test the code you're writing that works on that data. After all, you're testing your code, not the operating system calls.

You can often do this by creating an interface that presents the operating system calls to your code. You can then have your code use a default implementation (where the calls go straight through to the underlying operating system calls) or a user supplied interface (mainly for testing, at least initially).

Obviously this is more work. Whether it's worth it or not depends on how much of your code there is and how many operating system calls you need to mock up and how complex the data manipulation is. Often it might be easier to break the code apart a little to move the more complex code into an object that gets given the data it needs to work with rather than having it go off and get it from the operating system - so in your example your main logic might end up in a small helper class that manipulates the list of processes that is returned from the OS call. You can then test that code in isolation and, pretty much, safely ignore the code that ties the operating system call to the logic you have tested...
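
A rough sketch of that shape (all of the names here are invented):

import java.util.Arrays;
import java.util.List;

// The seam: present the operating system call to your code through an interface.
interface ProcessLister {
    List<String> runningProcessNames();
}

// Default implementation goes straight through to the operating system
// (the platform-specific enumeration is omitted here).
class OsProcessLister implements ProcessLister {
    public List<String> runningProcessNames() {
        throw new UnsupportedOperationException("real OS enumeration goes here");
    }
}

// The small helper class that holds the logic you actually want to test.
class ProcessChecker {
    private final ProcessLister lister;
    ProcessChecker(ProcessLister lister) { this.lister = lister; }
    boolean isRunning(String name) {
        return lister.runningProcessNames().contains(name);
    }
}

// In a test, hand the logic fixed data instead of the real OS:
class CannedProcessLister implements ProcessLister {
    public List<String> runningProcessNames() {
        return Arrays.asList("explorer.exe", "notepad.exe");
    }
}
//   assertTrue(new ProcessChecker(new CannedProcessLister()).isRunning("notepad.exe"));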

For Win32 testing I've been doing some work with replacing individual API calls using API hooking within tests. This allows you complete control over an API and allows you to simulate errors that are otherwise hard to achieve, etc. I've written about it here:

I share your dissatisfaction with simple TDD examples, most code is much harder to test than many of the examples imply. However the techniques used in the simple examples do translate into more complex situations once you get used to using them. I wrote some "real world" TDD examples on my blog that first attempt to retro-fit some tests to some fairly nasty multi-threaded code and then use the tests to enable a complete, ground up, rewrite of the code using TDD and the tests that were developed with the nasty code... The series starts here:

As with all things, you need to work out when to stop based on how much value you're getting from the tests; a blanket "90% coverage" rule doesn't really cut it IMHO, as sometimes you need more coverage for particular pieces of code and sometimes you need less.
Len Holgate Send private email
Tuesday, June 27, 2006
When you write code, how do you verify that it works right now?

i.e. let's say I create a new class.  I want to make sure the class works.

Well, you need a context, right?  i.e. is it for a Windows app, web app, etc.?

Let's say it's web - typically you'd then create your web app, fire up the page, and create your class.  Call a method, run the page, and see if it works, right?

Well, if you do this - you are almost there :)

Instead, think of JUnit/NUnit as the 'application'.

So, you create a test that instantiates your class and calls your function.

The test will validate whether it works or not - voila! You are now testing.
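
Something as small as this counts (Calculator and addTwoInts are just placeholder names):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {
    @Test
    public void addsTwoInts() {
        Calculator calc = new Calculator();        // instantiate your class
        assertEquals(5, calc.addTwoInts(2, 3));    // call the function and check the result
    }
}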

Second thing to remember - you can then write your tests before you try to go off and make your web app. 

Later down the line, as your code builds up and you are running your tests, you see a test fail.  You'll know where it failed and why it failed.

This is much better than setting breakpoints, stepping through all the classes, etc... right?

Hopefully right  :)
Steve Send private email
Tuesday, June 27, 2006
A simple rule of thumb appears to work along the lines of:

"The more complex the test setup / teardown, the less likely you are performing a unit test" - from one of my favourite blogs about TDD, which has a lot of **useful** examples.
Thursday, July 13, 2006

This topic is archived. No further replies will be accepted.
