A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
My team does extensive failure testing. Our malloc function (in debug builds) and other functions check an in-thread fail-countdown global variable so they can fail deterministically. We then run the code through a test harness to exercise every fail point in the system. (We don't check every permutation, but we get closer than most people, I bet.)
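A minimal sketch of the fail-countdown idea described above (all names here are hypothetical, not from the poster's code): a thread-local counter is decremented at each fail point, and when it reaches zero the call fails deterministically.

```c
#include <stdlib.h>

/* -1 means "never inject a failure". _Thread_local is C11; older
 * compilers would use __thread or a per-thread lookup instead. */
static _Thread_local long fail_countdown = -1;

void set_fail_countdown(long n) { fail_countdown = n; }

/* Debug-build malloc wrapper: the Nth allocation after arming the
 * countdown fails deterministically. */
void *test_malloc(size_t size)
{
    if (fail_countdown > 0 && --fail_countdown == 0)
        return NULL;            /* injected, reproducible failure */
    return malloc(size);
}
```

Because the failure is a pure function of the countdown value, any crash found this way can be replayed exactly by re-running with the same countdown.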
We use code-coverage tools to check that we are exercising all of the on-failure logic, too.
We then do this client-server. In debug builds, we pass the current fail countdown in the sync IPC call, so we deterministically exercise the logic on the other side of the IPC call too. If the IPC call completes, it passes the countdown back as part of the answer, too.
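A sketch of how the countdown might ride along on a synchronous call, assuming the thread-local countdown from above (struct and function names are illustrative; the real IPC transport is stood in for by a direct call):

```c
/* One shared countdown for this sketch; in the scheme described it
 * would be the debug build's in-thread fail-countdown variable. */
static long fail_countdown = -1;

typedef struct { int opcode; long countdown; } request_t;
typedef struct { int status; long countdown; } reply_t;

/* Server side: adopt the client's remaining countdown before doing any
 * work, then hand back whatever is left in the reply. */
reply_t handle_request(request_t req)
{
    reply_t rep = {0, 0};
    fail_countdown = req.countdown;
    /* ... the real handler runs here, decrementing fail_countdown at
     * each of its own fail points via test_malloc() and friends ... */
    rep.status = 0;
    rep.countdown = fail_countdown;
    return rep;
}

/* Client side: send the current countdown, adopt what comes back, so a
 * single countdown value walks through fail points on both sides. */
reply_t call_server(int opcode)
{
    request_t req = { opcode, fail_countdown };
    reply_t rep = handle_request(req);   /* stands in for the sync IPC */
    fail_countdown = rep.countdown;
    return rep;
}
```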
Testing async IPC is a bit more involved, but it builds upon this.
Grids ought to be instrumentable in the same way.
So there is a programmatic approach to exhaustive failure testing that can be applied to distributed components as easily as to same-node ones. If you're prepared to invest in the instrumentation, you ought to get the payback.
It might even be possible to have some robot apply this in an aspect-oriented way, but we haven't tried.
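The exhaustive harness the post alludes to can be sketched as a loop (names hypothetical): run the same scenario with the countdown armed at 1, 2, 3, ..., so each run forces a failure at the Nth fail point; once a run finishes with the countdown unexpired, every fail point has been hit once.

```c
#include <stdbool.h>

static long fail_countdown = -1;
static bool fault_fired;

/* Returns true when this fail point should fail on this run. */
static bool fail_point(void)
{
    if (fail_countdown > 0 && --fail_countdown == 0) {
        fault_fired = true;
        return true;
    }
    return false;
}

/* A toy scenario with three fail points; in real code these sit inside
 * malloc, open, write, and so on. It must clean up correctly on failure. */
static int scenario(void)
{
    if (fail_point()) return -1;
    if (fail_point()) return -1;
    if (fail_point()) return -1;
    return 0;
}

/* Drive the scenario until the countdown outlives every fail point;
 * returns how many distinct fail points were exercised. */
int exercise_all_fail_points(void)
{
    int exercised = 0;
    for (long n = 1; ; n++) {
        fail_countdown = n;
        fault_fired = false;
        scenario();
        if (!fault_fired)
            break;              /* no failure injected: we are past the end */
        exercised++;
    }
    return exercised;
}
```

Pairing each run with a coverage report, as the post suggests, confirms the on-failure branches were actually taken.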
Monday, May 29, 2006
I would run in as many different customer configurations as you can. Ask your top customers what their systems look like, duplicate those, and run saturation tests on those configurations.
This can be most illuminating.
son of parnas
Monday, May 29, 2006
There is only one distributed-app testing methodology: load it, pull the cable, wait a while, plug the cable back in. Repeat until it can take such abuse.
Tuesday, May 30, 2006
The testing is the usual: split the system and test the parts separately.
We split the system at layer borders by using mock objects (e.g., to test the behaviour of the business logic when the DAO layer works funny, we replace the actual DAO with a mocked, controllable one).
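The mock-at-the-layer-border idea might look like this in C (the poster was likely using an OO language; here the DAO boundary is a function-pointer interface, and every name is illustrative):

```c
/* The DAO boundary the business logic depends on. */
typedef struct {
    int (*load_account)(int id, long *balance_out);
} dao_t;

/* Business logic written against the interface, not a concrete DAO,
 * so tests can substitute anything behind it. */
int account_in_credit(const dao_t *dao, int id)
{
    long balance;
    if (dao->load_account(id, &balance) != 0)
        return -1;              /* propagate the DAO failure upward */
    return balance > 0;
}

/* A controllable mock: the test decides the balance and whether the
 * DAO "works funny" by setting these knobs. */
static int  mock_rc;
static long mock_balance;

static int mock_load(int id, long *balance_out)
{
    (void)id;
    *balance_out = mock_balance;
    return mock_rc;
}

static const dao_t mock_dao = { mock_load };
```

Flipping `mock_rc` to a failure code exercises the business logic's error path without any real data layer present.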
We split the system at inter-component borders by using simulators. E.g., when we need to test how the integration code interacts with the mainframe, we do not call the mainframe itself; rather, we call a simulator that supports a subset of the target app's functions and lets us set up various failure conditions.
And finally, we have a regression test suite that runs nightly and attempts to test as much of the functionality as possible.
It never covers 100%, but it's the best we can do.
This topic is archived. No further replies will be accepted.