The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Uh, I Think it Works: Black Boxes in Software

You're developing a program, one that's fairly complex in its design.

The problem is this:

There's not any hard-and-fast way for the user to know if it's working.

Normally, for, say, a scheduling program, you would know *instantly* if it works. The appointments you enter would show up, and completed tasks would go into History.

With your program, on the other hand, there is no obvious causality. The user enters input and gets output, but has no simple way to determine if it's "good" information; the specific algorithms used are unknown to the user (perhaps intentionally).

It's a so-called "black box" system.

I thought of two solutions to this problem:

A. You can back up your claims with credible testimonials. "Yep, Product X works wonders. We can't believe we used to live without it." Through their repeated use, you can see the output will almost certainly be "good."

B. You include some sort of simulator of the output for the user, so he can see it's "good," with his specific usage.

Any thoughts, other ideas?
Ari Berdy
Thursday, February 02, 2006
 
 
I'm currently working on a rather complicated piece of software that simulates decision analysis problems. Game theory, subjective utility, etc.

We've actually adopted the term "Black Box" for our marketing. The logo is a translucent box. The basic premise is "life's a black box; we let you peek inside."

More specifically: offer a tour and/or (flash-)demo that shows how the product is used and explains the concepts it's using.

Design a great user interface that doesn't intimidate and adapts to new and regular users.

Collect testimonials from your customers.

Write a blog where you use your software and apply it to everyday problems your customers may face. Include lots of screenshots, the last one showing the great visualization of the results.
Matthias Winkelmann
Thursday, February 02, 2006
 
 
If it is scientific, technical, and/or mathematical, then have your PhD quants write papers, give lectures, and be available to discussion groups to explain how the "black box" works.  After all, how did you design it?

Expose the workings of your "black box" to users brave enough to drill-down.  After all, how did you debug it?

Provide plenty of sample input and output units that demonstrate that the "black box" works as expected.  After all, how did you test it?
Eric Speicher
Thursday, February 02, 2006
 
 
"You include some sort of simulator of the output for the user, so he can see it's "good," with his specific usage."

This is the whole premise of Unit Testing.

If you can provide example data well within the normal range, along the boundary conditions (on both sides), and well outside the boundaries, you can demonstrate where the system does and doesn't work, along with what the expected output should be.

Breaking in the expected places can be just as valid as working in the expected places.
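A minimal sketch of what that could look like, using Python's built-in unittest module. The pricing function and its boundaries below are invented purely to illustrate "in range", "at the boundary", and "out of range" cases; they are not from any real product.

import unittest

def discount_rate(age):
    """Toy pricing rule: children and seniors get a discount.

    Raises ValueError for ages outside the supported range.
    """
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    if age < 12 or age >= 65:
        return 0.5
    return 0.0

class DiscountRateTests(unittest.TestCase):
    def test_well_within_normal_range(self):
        self.assertEqual(discount_rate(30), 0.0)

    def test_boundary_conditions_both_sides(self):
        self.assertEqual(discount_rate(11), 0.5)   # just below the cutoff
        self.assertEqual(discount_rate(12), 0.0)   # exactly at the cutoff
        self.assertEqual(discount_rate(64), 0.0)
        self.assertEqual(discount_rate(65), 0.5)

    def test_well_outside_boundaries_fails_as_expected(self):
        with self.assertRaises(ValueError):
            discount_rate(-1)
        with self.assertRaises(ValueError):
            discount_rate(200)

if __name__ == "__main__":
    unittest.main()

Shipping a handful of worked examples like these (input, expected output, and expected failures) lets the user see both where the system works and where it breaks, as KC says.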
KC
Thursday, February 02, 2006
 
 
I have developed software like this before, and I think you have two options:

1. Expose the inner workings as much as you can. If it is doing some mathematics, then write about the algorithms/formulae that were used. If this is sensitive information, then try to explain the input and output of each component as best you can.

2. As already suggested, write some sort of simulation. The software I was working on was particle image velocimetry software (a university project), and the best way to prove it worked was to artificially generate images with a known velocity field, and then compare the known field to the calculated field (we were within 0.01 pixels most of the time), as sketched below.

Or, if you can convince people to use your software and write that it works (and they are solid references), then that is good too.
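A rough sketch of the kind of check described in option 2, assuming the "known" and "calculated" fields are just arrays of velocity vectors. The array shapes, the synthetic field, and the simulated error are illustrative stand-ins, not the actual PIV code.

import numpy as np

# Synthetic "known" velocity field (uniform flow plus a gradient),
# standing in for the field used to generate the artificial test images.
ny, nx = 64, 64
known_u = np.full((ny, nx), 2.0)                       # x-velocity, pixels/frame
known_v = np.tile(np.linspace(0.0, 1.0, nx), (ny, 1))  # y-velocity, pixels/frame

# Stand-in for the field the algorithm under test would return:
# the known field plus a small random error.
calc_u = known_u + np.random.normal(0.0, 0.005, (ny, nx))
calc_v = known_v + np.random.normal(0.0, 0.005, (ny, nx))

# Compare calculated vs. known: RMS error of the vector difference.
err = np.sqrt((calc_u - known_u) ** 2 + (calc_v - known_v) ** 2)
rms = np.sqrt(np.mean(err ** 2))
print(f"RMS error: {rms:.4f} px  (max: {err.max():.4f} px)")

# A simple acceptance check against the target accuracy.
assert rms < 0.01, "calculated field deviates too far from the known field"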
Daniel S
Thursday, February 02, 2006
 
 
I'm just quibbling here...

"The user enters input and gets output, but has no simple way to determine if it's 'good' information..."

Is your program something that generates an output that cannot be visually inspected? To illustrate the point, did you write a program that automatically calls the maintenance department if a fire sprinkler suddenly stops working? In other words, the only way I'd know it works is if I climbed up on a chair and smashed the overhead sprinkler with my shoe?

From reading the other comments, it seems that your program generates some output that can be reviewed - e.g., a picture, a printout, a solid green light, etc. - and you're looking for a tactful way to say it's the user's responsibility to determine if the output is VALID.

If that's the case, follow Eric and KC's advice: "given X, the program is guaranteed to return Y with degree of precision Z." "Here are the boundary conditions and the expected results or non-results." "The program simulates formula A, and it's up to you to decide if you've put in the correct inputs."

If not, then you need to be a little clearer about what you mean by "good outputs".
TheDavid
Thursday, February 02, 2006
 
 
Thank you for the ideas, Matthias! Sounds like a cool piece of software--I'd be interested in screenshots!

Thanks, Eric and KC! I hadn't really thought of using "black and white" examples to show causality or the validity of output.

Obviously, I've used "black and white" input myself for testing purposes, as it clearly shows whether things are working, but I never thought of showing the same to users for exactly that purpose!

Eric and Daniel, I've thought of explaining, conceptually, how the program works.

For example, for a route-finding program, you might say, "Product X will avoid busier streets." The user would be thinking, "I don't know *exactly* how that ties in with everything else, or the *exact* ramifications of that, but it makes a lot of sense to me! If the program is using ideas like that, the route it's giving me is probably pretty good!"

The *only* problem with this is you're "telling," not "showing." Which is still good, but you'd probably also want to "show." (I'll end up "telling," too, I think.)
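(To make the route example a bit more concrete, here is a toy sketch of how a rule like "avoid busier streets" could enter a route score: a shortest-path search where each street's cost is inflated by a "busyness" factor. The road network, the weights, and the penalty are all made up for illustration; nothing here reflects how any real product actually works.)

import heapq

# Toy road network: node -> list of (neighbor, length_km, busyness 0..1).
# Street names, lengths, and busyness values are invented for illustration.
roads = {
    "home":    [("main_st", 1.0, 0.9), ("side_st", 1.4, 0.1)],
    "main_st": [("office", 1.0, 0.8)],
    "side_st": [("office", 1.2, 0.2)],
    "office":  [],
}

def route_cost(length, busyness, penalty=1.5):
    """Effective cost: busier streets look 'longer' to the planner."""
    return length * (1.0 + penalty * busyness)

def best_route(start, goal):
    # Standard Dijkstra over the penalized costs.
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, busy in roads[node]:
            heapq.heappush(queue, (cost + route_cost(length, busy), nxt, path + [nxt]))
    return None

print(best_route("home", "office"))
# -> prefers the quieter side street even though it is slightly longer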

TheDavid, when I say "good" information, I mean information that the user *should* be getting.

Referencing the above example, the user should be getting a *top* route, not just a good one.

But how does the user *know* it's top?

(This obviously is a flawed example, as he can just compare the expected arrival time with that of another route-finding program. But you get the picture :)!
Ari Berdy
Friday, February 03, 2006
 
 
Oh.

Yeah, in that scenario, no, you can't guarantee that your solution is the best solution or the route suggested is the best route to take.

I'd actually avoid describing how the program works because that sort of "analysis" is easy to reverse engineer and you don't want to give your competitors any help. (In this example, I can buy the product, sit in the car, and determine what the rules are via trial and error.)

In all fairness, the only thing you can do is offer testimonials. I'd certainly buy the product if both FedEx and UPS depended on it.

Keep in mind that a lot of software offers subjective value, and this really isn't as big a problem as it sounds. Even Microsoft Excel doesn't guarantee the accuracy of its calculations.
TheDavid
Friday, February 03, 2006
 
 
The user will use a Bayesian algorithm - i.e., their mind - to figure out if the software is producing acceptable results. You could use an automated Bayesian filter to check whether their expectations will be met.

And yes, of course, the other appeal is to authority (other users, white papers, PhDs, etc).
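For what it's worth, a toy sketch of the "automated check" half of that idea, assuming the only signal available is whether users accepted or rejected each result (a simple Beta-Bernoulli update; the feedback data is invented):

# Track belief that a result will be accepted, starting from a weak prior.
alpha, beta = 1.0, 1.0   # Beta(1, 1): no opinion yet

# Each piece of feedback: True = user accepted the output, False = rejected.
feedback = [True, True, False, True, True, True, False, True]

for accepted in feedback:
    if accepted:
        alpha += 1
    else:
        beta += 1

# Posterior mean probability that the next output meets expectations.
p_meets_expectations = alpha / (alpha + beta)
print(f"Estimated acceptance rate: {p_meets_expectations:.2f}")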
Spinoza
Friday, February 03, 2006
 
 
I wrote a lot of software in grad school that would implement various probabilistic / statistical algorithms. Not only were the answers (in some sense) random, but I was a very inexperienced programmer. I had exactly the same problem you describe, for my own software.


One trick I hit upon was to find some sort of common ground with a different solution -- for instance, if I was working on a PDE solver, give it a problem with an analytic solution that I could just graph in Matlab, and see if the two pictures looked like they matched. From my software's point of view, this wasn't a special case, so I could greatly improve my confidence.
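A compressed sketch of that trick in Python/NumPy rather than Matlab: solve the 1D heat equation with a plain finite-difference scheme, then compare against the known analytic solution. The grid sizes and tolerance are arbitrary, and this is a generic example, not Mark's actual solver.

import numpy as np

# 1D heat equation u_t = u_xx on [0, 1] with u(0) = u(1) = 0.
# Initial condition u(x, 0) = sin(pi x) has the analytic solution
# u(x, t) = exp(-pi^2 t) * sin(pi x).
nx, dx = 101, 0.01
x = np.linspace(0.0, 1.0, nx)
dt = 0.4 * dx * dx          # safely under the stability limit dx^2 / 2
t_end = 0.1

u = np.sin(np.pi * x)
t = 0.0
while t < t_end:
    # Explicit finite-difference update of the interior points.
    u[1:-1] += dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt

analytic = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
max_err = np.max(np.abs(u - analytic))
print(f"max |numerical - analytic| = {max_err:.2e}")

# If the curves "match" to within a small tolerance, confidence goes up.
assert max_err < 1e-3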


More generally, if you're developing a product that competes against an established player in a similar space, you could demonstrate that you can solve the same problems and get answers just as good, and also you have a better UI / more general approach / better customer service / whatever. Most people, even technical people who aren't in critical-thinking mode, will happily generalize from "this works in a few examples" to "this works everywhere".
Mark Jeffcoat
Friday, February 03, 2006
 
 
"Keep in mind that a lot of software offers subjective value and this really isn't as big a problem as it sounds. Even Microsoft Excel doesn't guarantee the accuracy of their calculations."

That's a good point.

Mark, the problem is that there is no single "good" output; with this product there can be multiple "good" answers and multiple "poor" answers.

That's not to say that some outputs aren't better than others. Obviously, some *are* better, some much better than others. It's just that without some analysis or history, it's often hard to know the quality of the output.
Ari Berdy
Monday, February 06, 2006
 
 

This topic is archived. No further replies will be accepted.
