The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Automated Regression Testing

I've googled around for an answer to my question below, but haven't found one (I probably haven't come up with the right keywords).

I love the concept of automated regression testing that gets run from the build process.  The only problem is, I haven't figured out how to do it when the software I'm working on does nothing but interact with an external system (think about an app that monitors the stock market for instance).  Sure, you could stub out all interaction with the external system, but then you're basically building a mirror of the external system, and if that system is complex, this is a huge task by itself.

Are there better ways to go about this?  Can anyone point me to some good resources?  I'm not looking for tools, more concepts/pointers/etc.

Sunday, January 22, 2006
Try "Mock Objects" as a Google term. The idea is to make a set of lightweight objects that can stand in for the external system.

Instead of attempting to make something that acts like the external system, you make a system that provides the same interface, but essentially none of the internal state & behavior.

It's still potentially complex, but less so than duplicating the original system. In addition, you can easily implement code to do things like return error codes or throw exceptions, which are notoriously hard cases to test with "real" systems.
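As a rough sketch of the idea (the `MarketFeed` interface and all names here are made up for illustration, not from any real API): the mock implements the same interface as the external system but carries no real state, just scripted responses, which makes error paths trivially testable.

```python
class FeedError(Exception):
    pass

class MarketFeed:
    """The interface the production code depends on."""
    def get_quote(self, symbol):
        raise NotImplementedError

class MockMarketFeed(MarketFeed):
    """Stand-in with no real behavior: canned prices plus scripted failures."""
    def __init__(self, quotes, failures=()):
        self.quotes = quotes            # symbol -> price
        self.failures = set(failures)   # symbols that should raise

    def get_quote(self, symbol):
        if symbol in self.failures:
            raise FeedError("simulated server error for %s" % symbol)
        return self.quotes[symbol]

# Code under test talks only to the interface, so the mock drops straight in.
def is_overpriced(feed, symbol, limit):
    return feed.get_quote(symbol) > limit

feed = MockMarketFeed({"ACME": 105.0}, failures=["FAIL"])
assert is_overpriced(feed, "ACME", 100.0)
try:
    is_overpriced(feed, "FAIL", 100.0)
except FeedError:
    pass  # the hard-to-reproduce error case is now a one-liner to trigger
```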
Mark Bessey Send private email
Sunday, January 22, 2006
Thanks for the keywords.

I've spent some time reading and it's as ugly as I feared--we'd have to mimic a large API that talks to a server that sends out ALL KINDS of responses, data, errors, etc.  As usual, the devil is in the details.  Sigh... guess I was hoping there was a magic silver bullet that I simply hadn't discovered.  Ignorance really was bliss!
Sunday, January 22, 2006
Personally, I would not mock these systems unless you have no choice.

I would run a small version of your external systems in your own test environment and run your regression tests against that. Then you'll find real errors and make real strides towards overall system quality.
son of parnas
Monday, January 23, 2006
Hmm... how about stubbing the API, but doing a record-and-playback from the server?

1. connect to server and interact properly and record output.
2. take this "script" and set it to play back into the test.

not perfect, but you could run a day's worth at a time
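Something like this, as a sketch (the client classes and the request strings are invented for illustration): one wrapper records request/response pairs during a live session, and a playback twin replays them offline, failing loudly if the test ever sends a request that wasn't recorded.

```python
class RecordingClient:
    """Wraps the real client during a live run and logs all traffic."""
    def __init__(self, real_client, log):
        self.real = real_client
        self.log = log   # list of (request, response) pairs

    def send(self, request):
        response = self.real.send(request)
        self.log.append((request, response))
        return response

class PlaybackClient:
    """Replays a recorded session in order; no server required."""
    def __init__(self, log):
        self.log = list(log)

    def send(self, request):
        expected, response = self.log.pop(0)
        if request != expected:
            raise AssertionError("unexpected request: %r" % (request,))
        return response

# A recording run against the live server might capture:
session = [("GET /quote/ACME", "105.0"), ("GET /quote/XYZ", "42.5")]

# The regression test then runs entirely offline against the script:
client = PlaybackClient(session)
assert client.send("GET /quote/ACME") == "105.0"
assert client.send("GET /quote/XYZ") == "42.5"
```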
Michael Johnson Send private email
Monday, January 23, 2006
Yup. Ditto what Michael Johnson says about record-and-playback.

If the service you're connecting to is an HTTP service, then just set up a caching proxy server on your local machine to intercept and save all the HTTP traffic.

Then, you can just flip a switch on the proxy server and have the cached content served up whenever it receives an appropriate request.

I think a standard caching proxy should be able to do most of that for you, out of the box. If not, you should be able to accomplish what you need with minimal customization.
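The record/replay switch itself is simple; here's a toy sketch (`fetch_live` stands in for a real HTTP call, and the whole class is illustrative, not any particular proxy product):

```python
class ReplayProxy:
    """Caching proxy toy: pass-through-and-save in record mode,
    serve only captured responses in replay mode."""
    def __init__(self, fetch_live, record=True):
        self.fetch_live = fetch_live
        self.record = record
        self.cache = {}   # url -> response body

    def get(self, url):
        if self.record:
            self.cache[url] = self.fetch_live(url)  # hit the real server, save it
        # in replay mode this raises KeyError for anything not captured
        return self.cache[url]

proxy = ReplayProxy(lambda url: "live:" + url, record=True)
proxy.get("http://example.com/quotes")   # captured during a live run
proxy.record = False                     # flip the switch
assert proxy.get("http://example.com/quotes") == "live:http://example.com/quotes"
```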
BenjiSmith Send private email
Tuesday, January 24, 2006
One question: How do you test that your software behaves correctly at this moment?

Test Automation is all about automating tests that already exist. If you can't do this type of test now, your problem is much more significant than just adding automated tests as part of a build: You aren't really testing your software!

Another approach that may help you is cross-checking between your application and another application that retrieves the same data from the same source (possibly with some specific post-processing that will get you the exact results you need).
Florian Heine Send private email
Tuesday, January 24, 2006
Try Fit for Developing Software: Framework for Integrated Tests by Rick Mugridge and Ward Cunningham.
son of parnas
Tuesday, January 24, 2006
"How do you test that your software behaves correctly at this moment?"

Regression testing is not concerned with whether or not the software functions correctly. That's the domain of acceptance tests and unit tests.

Regression tests exist only to keep track of how software behavior *changes* over time. If your software performs a certain way today, you want to make sure that it continues to behave that way tomorrow, unless you deliberately make changes to its functionality (in which case, you update the regression test).

If your bugfixes to module A change the behavior of module B, that's a bad thing. And that's what a regression test is designed to detect.
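In its simplest form that's a golden-output check; a sketch (the report function and the captured string are invented for illustration): today's output is compared byte-for-byte against a snapshot from a known-good run, and a deliberate behavior change means regenerating the snapshot.

```python
def render_report(prices):
    """The behavior we want frozen: a sorted, formatted price report."""
    return "\n".join("%s %.2f" % (sym, p) for sym, p in sorted(prices.items()))

# Captured once from a run that was verified by hand (acceptance testing's job).
GOLDEN = "ACME 105.00\nXYZ 42.50"

def test_report_unchanged():
    current = render_report({"XYZ": 42.5, "ACME": 105.0})
    # Any drift -- even from an "unrelated" bugfix elsewhere -- fails the build.
    assert current == GOLDEN, "behavior changed:\n" + current

test_report_unchanged()
```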
BenjiSmith Send private email
Wednesday, January 25, 2006

This topic is archived. No further replies will be accepted.
