The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Anything Not Mandatory is Forbidden

When I was in the military, we often got into the state where "Anything Not Mandatory is Forbidden" (ANMIF).  This means you MUST do what you are supposed to do, and you are FORBIDDEN to do anything beyond that.  It's a pretty mindless (little-head) way to be, and I don't think it is a good way to write software, or to create systems.

Still, there seems to be a desire in some people to implement just this kind of system.  They don't want to RUN this system themselves.  They're trying to control the 'Operators' who do have to run the system.

As a software engineer, I've had analysis and design sessions with customers where they wanted the software to implement this kind of policy.  Thus each user of the system had a 'role', and each 'role' had things it could and could not do.

Instead of having a user policy manual, with human approvals and sign-offs, they wanted the software to 'automatically' control what happened based on these roles.

Personally, I prefer to write software that allows lots of stuff -- but the person should be trained to know what they should and should not do.  This allows the person to do things (for testing, or training, or emergencies) that they normally would not do.

If the software is written with the draconian ANMIF rule, then special 'flag' work-arounds have to be introduced in the software to say "Now we are in Emergency Mode", or "Now we are in Training Mode".  These work-arounds then make the user permissions code really complex, and the software as a whole more fragile.
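
To make that complexity concrete, here is a minimal sketch (the role names, actions, and modes below are invented purely for illustration, not taken from any real system) of how those mode flags tend to pile up inside a permission check:

# Hypothetical role/permission tables -- names invented for illustration.
PERMISSIONS = {
    "operator":   {"view_orders", "cancel_order"},
    "supervisor": {"view_orders", "cancel_order", "void_day"},
}
EMERGENCY_EXTRAS    = {"operator": {"void_day"}}        # work-around flag #1
DESTRUCTIVE_ACTIONS = {"cancel_order", "void_day"}      # work-around flag #2

def can_perform(role, action, mode="NORMAL"):
    """Every new 'mode' adds another special case to this one check."""
    allowed = action in PERMISSIONS.get(role, set())
    if mode == "EMERGENCY":     # "Now we are in Emergency Mode"
        allowed = allowed or action in EMERGENCY_EXTRAS.get(role, set())
    if mode == "TRAINING":      # "Now we are in Training Mode"
        allowed = allowed and action not in DESTRUCTIVE_ACTIONS
    return allowed

Every new mode means another special case, and every special case is another place for the permission code to quietly disagree with operational reality.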

Not to mention all the possible side effects that arise in this case.

I'm not against defining and coding 'roles' for users -- that's absolutely necessary.  What I'm against is these draconian policies, mandated by the customer to be applied to the users by the software.  (If draconian policies are needed, they should be built into the manual Operational Procedures.)

Does this sound common?  What experiences have others had with this?
AllanL5
Tuesday, December 14, 2004
 
 
Doesn't it rather depend on your customer? And how much trust they can reasonably put in their staff? If you were writing systems for somewhere like McDonald's, with high turnover and all the issues that brings, you want all the security you can get.

If it's an office environment with fairly autonomous teams, then perhaps you want what you describe. However, it's more to do with the client's style of management than the role of software. I would bet that the client's spec for a permissions system in software (or lack of one) closely mirrors the day-to-day management in the non-computer domain.

Sadly, that's probably not something where, as a developer/consultant, you'll ever get a look-in, no matter how short-sighted/untrusting you think they are!
Andrew Cherry Send private email
Tuesday, December 14, 2004
 
 
Also, depending on the type of business they are doing, there may be regulatory reasons for it.  Working with health insurance companies, they were all forced to that model by the HIPAA Privacy and Security rules.
Douglas Send private email
Tuesday, December 14, 2004
 
 
It probably depends a lot on whether you're writing a word processor or software to control a 747.  IOW, it depends on the possible consequences of someone screwing around.  In a word processor, most likely the worst consequence is losing the file you were working on, so a lot more flexibility can be granted.

Though I do recall that some past versions of Word (through Word 2000, I believe) had a nasty bug in the Open File box where, if you hit Delete, the selected file would be instantly and irretrievably deleted (not recycle-binned).  That, to me, is a good argument for ANMIF with regard to the Open File box!
Kyralessa Send private email
Tuesday, December 14, 2004
 
 
I describe this as something slightly different....

There are the college model and the prison model.
In the college model, everything is allowed, except where expressly forbidden.
In the prison model, everything is forbidden unless expressly permitted.

Microsoft uses the college model for security.
Oracle uses the prison model for security.

I don't know of anything that uses the Kafka model (anything not mandatory is forbidden) for security.
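
For what it's worth, the difference between the two defaults fits in a few lines -- a sketch with invented rule tables, not anyone's real configuration:

FORBIDDEN = {"drop_database"}            # college model: explicit deny list
PERMITTED = {"view_report", "add_row"}   # prison model: explicit allow list

def college_allows(action):
    return action not in FORBIDDEN       # allowed unless expressly forbidden

def prison_allows(action):
    return action in PERMITTED           # forbidden unless expressly permitted

The Kafka model would go one step further: the only permitted actions would be the ones the system forces you to take.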
Peter
Tuesday, December 14, 2004
 
 
In an ideal world, you'd be right.  People wouldn't make mistakes and people wouldn't decide to do malicious things.  In the real world, it's way too much of a gamble to put a random employee in front of an unprotected system. 

You don't want an employee to accidentally delete thousands of dollars worth of orders because they needed to cancel one.  You don't want a disgruntled employee to be able to get into the payroll system and change salaries.  You don't want someone on the outside to be able to gain control of an employee's PC through a trojan and then be able to query all of your customers' credit card numbers.  Yes, there are backups, virus scanners, and other safeguards that could help to resolve some of these, but my experience has been that those who are lax on security in one area tend to be lax across the board.

The world is full of incompetent and dishonest people, so it makes sense when designing a system to plan for protecting against them.  Ultimately, it comes down to whether the cost and likelihood of potential damage is greater than the cost of protecting against that damage.  These parameters are highly subjective, so it isn't unusual to find cases where someone will disagree with you on the value of protecting a system, simply because they've placed different values and estimates on it.
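
As a back-of-the-envelope illustration of that trade-off (the numbers here are made up, purely to show its shape):

# Made-up estimates, purely to show the shape of the trade-off.
p_incident       = 0.05      # chance per year of a damaging mistake or abuse
cost_of_damage   = 200_000   # loss if it happens
cost_of_controls = 15_000    # yearly cost of building and living with the controls

expected_loss = p_incident * cost_of_damage   # 10,000 per year
print(expected_loss > cost_of_controls)       # False -- by these numbers the controls don't pay

Nudge the probability or the damage estimate and the answer flips, which is exactly why two reasonable people can disagree about protecting the same system.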
SomeBody Send private email
Tuesday, December 14, 2004
 
 
"Does this sound common?  What experiences have others had with this?"

I built a system with a kind of forced workflow.  It introduced a set of checks and balances between management and staff.

It has never ever been used.  The staff are always set up as "management" and do what they want, completely bypassing that entire system.
Almost Anonymous Send private email
Tuesday, December 14, 2004
 
 
"I built a system with a kind of forced workflow ... It has never ever been used."

AA has it exactly right. Security systems that don't accurately model the real-life chain-of-command / permissions / workflow are nearly always destined to be bypassed by everyday users.
Snark
Tuesday, December 14, 2004
 
 
Aha!  It's not that it HAS a security system; it's that the security system it has does not "accurately map" what needs to happen.

Good point.  I would suggest that 'draconian' ANMIF rule sets can never accurately map what needs to happen.  As humans, we need some wiggle room.

For me, that 'wiggle room' comes in the form of operational procedures and supervisors.  First of all, an employee should not be able to do something REALLY malicious (like deleting the database) -- the software should prevent that.

Secondly, something only slightly malicious (like entering profanity in the daily log) should probably NOT be prevented by the software.  Instead, the 'Supervisor' role should have an operational procedure to check the log daily for profanity, and punish any perpetrators.

Not a very good example, but it does show the two ends of the spectrum I'm talking about.
AllanL5
Tuesday, December 14, 2004
 
 
Better that than something that forces you to enter profanity in the daily log.
Aaron F Stanton Send private email
Tuesday, December 14, 2004
 
 
AllanL5 :

That's pretty much what I've found. So the question is: why are you being supplied with a draconian set of rules if it doesn't accurately model the "wiggle room" and levels of approval and judgment that already exist?

In my experience, the answer to that generally boils down to: someone currently has to exercise judgment on every decision that their subordinates make, and they want to offload that responsibility. The only way to do that is to create ANMIF rules, which allows them to blame the new system for the pain that their subordinates will soon feel when requests that were previously evaluated on a case-by-case basis are now denied out of hand.

So you have two constituencies: the people who Need To Get Work Done, and need that "wiggle room" that real-life judgment calls allow, and the guy who wants your new system to make his life easier by taking over some (or, in the worst case, all) of his decision-making process ... and you do have to please both of them.
Snark
Wednesday, December 15, 2004
 
 
I wrote: "The only way to do that is to create ANMIF rules ..."

I meant: "Their gut instinct (though wrong) is to create ANMIF rules ..."

The point is that their managerial urge is to get rid of "all this nitpicky, day-to-day decision-making," but they tend to do that by attempting to get rid of -all- decision-making, which is a bad move because it imposes a harsher system than actually exists.
Snark
Wednesday, December 15, 2004
 
 
Obviously, there are arguments to be made both ways. The examples of a Boeing 747 control system and a nuclear power plant each provide arguments in both directions -- it's not so one-sided.

I think that, in general, these systems should:

- PREVENT actions by a trusted user that might be undertaken accidentally and may have negative consequences; but ALLOW an override saying "I really know what I'm doing"

- FORBID actions by an untrusted user that are beyond their set of ordinary tasks

- ALLOW all other actions, even if they're not "normal" for that user

An example of the first is the pilot of a Boeing 747 or the senior operator of a nuclear power plant. In general, that 747 shouldn't let you cruise at high speed with the gear down, or make maneuvers that might push the boundaries of what's known to be stable. However, both users NEED to be able to throw an override in exceptional circumstances. Can you imagine being at fault for a 747 crash or a nuclear-plant disaster because the operator knew how to fix the problem, but wasn't permitted to? Human beings will be smarter than computers for the foreseeable future (IMHO), and you should always *allow* trusted humans to override these systems. (BTW, in a nuclear power plant, you might define 'trust' as actually being 'three senior operators working together' or something. But the point still stands.)

An example of the second is simply a low-level employee at McDonald's or a retailer. These users aren't really 'trusted' much at all -- you have to prevent them from voiding transactions on the register at will, or they'll abuse it. This one is simple.

An example of the third is, well, everything else. To take bug-tracking systems as an example (and, yes, we use JIRA, not FogBugz -- not my choice): I am prevented by various roles from doing a number of things in the system. Um, I help *write the code* that keeps the company's critical product working. If you don't trust me, you're completely screwed anyway. So let me close that bug without a signoff, let me delete it out of existence, let me bulk-reassign stuff. Because (a) you'd better trust me anyway, and (b) it prevents utter chaos when my manager is on vacation, the CEO is in a board meeting, and we REALLY need to manipulate the bug-tracking system to find a customer's problem that's keeping them from running critical software RIGHT NOW.
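
A rough sketch of those three rules as a single check (the task lists and action names are invented for illustration; a real system would obviously be more involved):

ORDINARY_TASKS = {"ring_sale", "open_drawer"}      # an untrusted user's normal work
RISKY_ACTIONS  = {"delete_bug", "bulk_reassign"}   # accident-prone but sometimes necessary

def decide(action, trusted, override=False):
    if not trusted:
        # Rule 2: forbid anything beyond the ordinary task set.
        return "allowed" if action in ORDINARY_TASKS else "forbidden"
    if action in RISKY_ACTIONS and not override:
        # Rule 1: block likely accidents, but let a trusted user say
        # "I really know what I'm doing" and proceed.
        return "needs override"
    # Rule 3: allow everything else, even if it isn't 'normal' for this user.
    return "allowed"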


I think problems with the third are by far the most common type of failing. For some inexplicable reason, there is a certain class of engineer / product manager / marketer who loves putting these elaborate systems of restriction in place, even when they're totally unneeded. If you don't trust someone, what the hell are you doing letting them fly your airplane / operate your nuclear-power plant / access your bug-tracking system in the first place?

(A good rule might be: think about how much damage that action could really do, compared to what damage the employee could do other ways. The 747 pilot can always crash the plane intentionally, and I can always write backdoors and trojans into our software. Compared to that, small restrictions like these are just stupid, beyond those necessary to prevent us from making human error.)


IMHO this is completely orthogonal to whether a system is 'all open' or 'all closed' by default -- there are interesting arguments about that, too, but it's a totally different thing.
midlands
Tuesday, December 21, 2004
 
 
midlands, there is a distinction between what Boeing does in its avionics software and what Airbus does. Boeing's avionics do not restrict what the pilot can do, but Airbus's software does. An airplane should do what you direct it to do. It should not have a hissy fit and decide not to do it.

Boeing's rationale: there will be some unforeseen flight situations where no one has any idea in advance what will happen. The pilot will have to recover, and may need to make the plane do something that everyone says the pilot should never do (or that is outside the normal flight profile). Who knows, maybe you will need to do a barrel roll in the 747 (like the first 707, but that was to show off at an airshow).

There is an example of an Airbus crashing into some trees at a demonstration. I'm sure you've seen the footage. The plane was doing "touch-and-goes," but for some reason the software locked up and the plane flew (nose up, as for liftoff) directly into the trees.

Normal Accidents (by Perrow) goes into some detail about overwhelming plant operators with sirens and warnings. Usually when a nuclear plant goes "whoops," the warning and logging systems are messed up, so the operators just don't know what is going on.

The biggest problem I have encountered with workflow systems is that there can be a massive difference between how a manager/company thinks work gets done and the way the workers actually do the work. The larger the gap, the more bogons get entered into the system.
Peter
Tuesday, December 21, 2004
 
 

This topic is archived. No further replies will be accepted.
