The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

When is pedantic application of OO techniques academic?

Businesses, driven by many forces, often say, "Does it work? Yes? Ship it!"

In that light, consider:

public final class CollectionUtil {
  /**
  * Returns true if the collection is null or empty.
  */
  public static boolean isEmpty( Collection c ) {
    return c == null || c.isEmpty();
  }
}

public class ClassA {
  private Collection collection;

  public boolean isReady() {
    return !CollectionUtil.isEmpty( getCollection() );
  }

  /**
  * Returns null if the collection is not initialized.
  */
  public Collection getCollection() {
    return this.collection;
  }
}

The idea is that ClassA is "ready" if the collection (or set of collections) contains data. Fair enough. And this works.

Yet it breaks OO principles; a true object requires both behaviour and attributes (CollectionUtil has no attributes). Some issues can be resolved as follows:

public class ClassA {
  private Collection categories;

  public boolean isReady() {
    return hasCategories();
  }

  public boolean hasCategories() {
    return !getCategories().isEmpty();
  }

  public synchronized Collection getCategories() {
    if( this.categories == null ) {
      this.categories = createCategories();
    }

    return this.categories;
  }

  protected Collection createCategories() {
    return new ArrayList();
  }
}

This eliminates the CollectionUtil class. It allows subclasses to change how the availability of categories is determined (database, boolean, collection.isEmpty(), etc.). I believe the second solution is technically better. (The big issue remains unresolved: what does "ready" mean, and why doesn't ClassA just do what needs to be done when it is, in fact, ready?)

From a business perspective, at what point does using a more robust solution make sense? Especially for something whose value is so intangible.

Often, this can be set aside as a "refactor" issue, to be cleaned up in time.

But I feel it goes deeper than that. There is time invested from refactoring the code, there is the problem of proliferation (which chews into the amount of time needed to refactor), there are the unknown bugs produced in the first version (a NullPointerException may be lurking), and the inability to change the code via subclasses (Open-Closed principle). Then there is the issue of just plum not having time to refactor ("always time to do it over, but never time to do it right"). There is the education factor: constantly having to fix code (rather than educate proper principles) is a huge time-sink.

Ultimately, it may be a management decision.

My last point on a similar note is that sometimes I'll read complaints that OO does not deliver on its promises. People use the tool incorrectly, and then blame the tool for its shortcomings. The CollectionUtil is a prime example of using the tool incorrectly; just as you can use the claw of a hammer to peel a potato, you won't be (I hope!) nearly as productive as the person who uses a potato peeler for the same task.

Thoughts?
Dave
Dave Jarvis
Friday, July 29, 2005
 
 
Given two equivalent solutions, you must pick the simpler one as your answer (Occam's razor).

As far as robustness goes, here's my take on it:

interface Servant {
  boolean isReady();
}

abstract class AbstrServant implements Servant {
  private Collection collection;

  public AbstrServant(Collection collection) {
        this.collection = collection;
  }
  public boolean isReady() {
        boolean result = false;
        if(collection != null) {
              result = !collection.isEmpty();
        }
        return result;
  }
}

final class AServant extends AbstrServant {
    public AServant() {
        super(new LinkedList());
    }   
}

final class BServant extends AbstrServant {
    public BServant() {
        super(new HashSet());
    }
}
Dino
Friday, July 29, 2005
 
 
Hi, Dino.

Thanks for the response. I should clarify, though.

I'm not really looking for ideas on how to improve, or review variants thereof, the second solution (I've exemplified the problem to demonstrate a theoretical point); I'm looking more for thoughts people have on how much attention to proper OO techniques should be applied throughout source code in real-world business practices.

That is what do people see as the trade-offs between time, corporate profit, application maintainability, and such?

Dave
Dave Jarvis
Friday, July 29, 2005
 
 
There's no difference between academic and commercial programming. Good code is good code regardless of the environment.
Dino
Friday, July 29, 2005
 
 
The real world knows more about good code than academia. It has to live with its creations.
YaYaYa
Friday, July 29, 2005
 
 
If you are looking for a description of "proper OO techniques" then there is no short, definitive answer. Different programmers have totally different views of what "proper" actually means. Hell, you ask 10 programmers what OOP means and you will likely get 10 different answers. Ask for descriptions of encapsulation, inheritance and polymorphism and see how many different answers you get.

In my decades of experience there are only two criteria worth bothering about:
(a) Does it work? If it works then it cannot be wrong, just as a solution that does not work cannot be right.
(b) Is it simple? A simple solution is always easier to maintain and enhance than a complex one, so always follow the KISS principle.
Tony Marston
Saturday, July 30, 2005
 
 
I can say that I have probably seen more problems caused by over-enthusiastic desires to create frameworks, elegant architectures, and "design patterns" than by anything else.

Keep it simple - pedantic application of any technique rapidly causes its own problems.

Saturday, July 30, 2005
 
 
Look, any approach to programming makes certain recommendations, and it makes them for a reason.  The important thing is to evaluate the problem you are trying to solve in terms of these reasons. 

As far as when is it appropriate to go with a more robust solution, my answer is "when it makes good economic sense."  Just like any other business decision you have to weigh the risks and rewards.

In general, I find that failing to do the things I know I should do while programming usually comes back to bite me on the arse sooner or later, so I usually try to do things cleanly from jump.  Doesn't always work out that way though :)
My opinion
Saturday, July 30, 2005
 
 
Given a choice between two designs, I always pick the one that  results in the simplest possible correct code. :-)
J. Random Hacker
Sunday, July 31, 2005
 
 
> Given a choice between two designs, I always pick the
> one that  results in the simplest possible correct
> code. :-)

I would agree with that.

However, I've had wild disagreements with other developers as to what "simple" means. One way this happens is when a huge framework is created to solve a problem: you can have a "simple" solution in one place, but only because you have created a huge framework to underlie that simplicity.

If there aren't enough applications for the framework (how many times do frameworks get created and used only once!), the resulting total complexity is much higher, even though it is possible to point at one part of it and say "look how simple this is".

Sunday, July 31, 2005
 
 
The more business value a business sees in its source code, the more ready it is to keep that code high quality.

Now we can look for fields where businesses value source code highly. The first example is class libraries to be used and extended by others. How popular would a library be if it were hard to understand? If it were hard to extend? Good design is a must for that kind of thing.


Denis Krukovsky
http://dotuseful.sourceforge.net/
http://dkrukovsky.blogspot.com/
Denis Krukovsky
Monday, August 01, 2005
 
 
The business world is focused on making money, not on producing quality code. In my experience, only on very rare occasions are the two equivalent; usually they are not. That may explain the amount of crap code I've seen in 15 years of working on commercial applications.

======

Defining simple is simple :-) Given a problem, a simple solution has the same or slightly higher complexity than the problem it solves. Conversely, a complex solution has a complexity far greater than the problem it solves.

For the same problem, I would pick the solution of lesser complexity that correctly addresses the problem.

As rules of thumb: if the solution is less complex than the problem, then the problem is not correctly solved. If the solution is far more complex than the problem, then it is solving a problem far more complex than required.
Dino
Monday, August 01, 2005
 
 
I thought, by definition of "Pedantic" and "Academic", that Pedantic applications of OO techniques are ALWAYS "Academic".

I agree with the above.  As simple as possible, but no simpler.

And yes, "Simple" can be in the eye of the beholder.  A complicated framework which enables a single-call application is NOT simple.
AllanL5
Tuesday, August 02, 2005
 
 
The second example adds nothing.  It's more complex _and_ it does more.  Does it have to have _more_ functionality?  Go for the simple solution.

Also, I'm absolutely okay with having classes of nothing but static methods - for the right reasons, of course.  My opinion is that not _every_ piece of code in an OO system is tied to an object.  Some of it is really just library code.

I think a lot of (for example) Java systems have a StringUtils class, or a CollectionUtils class, or a DateUtils class, or a...  You can't extend the class itself to add the behaviour, so instead you place it in a library class.  Perfectly acceptable.
Dave
Wednesday, August 03, 2005
 
 
Hi,

The second example removes an extra class. I don't see how it is more complex, and as a unit it doesn't actually do more. Plus, the behaviour of determining whether the internal collection is null (and what that *means*) has been encapsulated to one location. A single class is easier to maintain than two, in my opinion. And when the clients of the class don't have to worry about checking for null values, I believe the class as a whole becomes much simpler to use.

Also, arguing that "a lot" of Java systems use Util classes is a logical fallacy. It does not imply that Util classes are a good thing. In fact, the notion of having a "static" Util class that cannot be subclassed for modification breaks the Open-Closed Principle.

The Open-Closed Principle states that once you write your classes they should be closed to modification, but open to extensibility. In other words, subclasses should have the ability to provide new functionality to an existing codebase. By ignoring this principle and modifying the original code, new bugs can be introduced into the old system (which has, presumably, been fully debugged).
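To make the principle concrete, here is a minimal sketch (the names and the database stub are hypothetical, not from the original example): the base class is never edited, yet a subclass swaps in a different readiness check:

```java
import java.util.ArrayList;
import java.util.Collection;

class ClassA {
  public boolean isReady() {
    return hasCategories();
  }

  // Subclasses may redefine how "has categories" is decided.
  protected boolean hasCategories() {
    return !getCategories().isEmpty();
  }

  protected Collection getCategories() {
    return new ArrayList();
  }
}

// New behaviour added by extension; ClassA's source is never modified.
class DatabaseClassA extends ClassA {
  @Override
  protected boolean hasCategories() {
    return countRowsInDatabase() > 0; // hypothetical data source
  }

  private int countRowsInDatabase() {
    return 42; // stub standing in for a real query
  }
}

public class OcpDemo {
  public static void main(String[] args) {
    System.out.println(new ClassA().isReady());         // false
    System.out.println(new DatabaseClassA().isReady()); // true
  }
}
```

The point of the sketch: existing, debugged code (ClassA) stays untouched, so the only new code to test is the subclass.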

Personally, I would much rather debug my new class than have to figure out what I've done wrong with the existing classes that someone wrote three years ago. And consequently be stuck debugging their code AND my code together. Not a pretty picture.

This also has implications for the company's bottom line; I would not want to spend the company's money on time spent debugging difficult problems. That is, I would choose to debug a simple problem over a difficult one every time. This saves the company money and increases productivity.
Dave Jarvis
Wednesday, August 03, 2005
 
 
Please check the inline comments:

// not clear what's final and what's not
public class ClassA {
  private Collection categories;

  public boolean isReady() {
    // why make three calls to figure out
    // whether the collection is empty or not
    return hasCategories();
  }

  public boolean hasCategories() {
    return !getCategories().isEmpty();
  }

  // You're probably better off with a synchronized
  // collection, since all access to the
  // collection is concurrent (not just the creation of it).
  // If there is no concurrent access, then you don't need
  // synchronized here.
  public synchronized Collection getCategories() {
    // Since you're using a generic Collection
    // (which, btw, is an interface in Java), you should
    // include the collection as a construction parameter
    // (IoC pattern)
    if( this.categories == null ) {
      this.categories = createCategories();
    }

    return this.categories;
  }

  // see above
  protected Collection createCategories() {
    return new ArrayList();
  }
}

Therefore your class should look like:

final public class ClassA {
  private Collection categories;

  public ClassA(Collection categories) {
    this.categories = categories;
  }

  // the extra final, just in case a programmer
  // later decides the class is extensible.
  final public boolean isReady() {
    return hasCategories();
  }

  // Chances are this is private
  private boolean hasCategories() {
    return !this.categories.isEmpty();
  }
}

But this class is not really open to extension without modification. Therefore I would go for:

// Exportable interface
public interface InterfaceA {
  public boolean isReady();
}

// Package visible abstr implementation; behavior only
abstract class AbstrA implements InterfaceA {
  private Collection categories;

  public AbstrA(Collection categories) {
    this.categories = categories;
  }

  public abstract boolean isReady();
}

// Package visible implementation
final public class ClassA extends AbstrA {
    private Collection categories;
    ClassA(Collection categories) {
        super(categories);
        this.categories = categories;
    }

    final public boolean isReady() {
        return !this.categories.isEmpty();
    }
}

// exportable classfactory.
final public class ClassFactory {
    public InterfaceA createA(Collection c) {
      return new ClassA(c);
    }
}

This can be easily extended:
1) extend from AbstrA
2) add the factory method to the class factory.

Or worse:
1) define new behavior in abstr class implementing InterfaceA
2) extend from new abstr class
3) add the factory method

Advantages:
- it respects other OO class design principles: single responsibility, Liskov substitution, dependency inversion.
- it creates minimal dependencies by following packaging principles: reuse-release, common closure, common reuse.
- abstract behavior is separated from actual implementation => less testing, fewer bugs, etc.
- the approach is more robust, because it isolates potential mistakes in leaf classes. These classes can be tested and fixed individually.
- the Martin Metric of the above is far better than in the initial examples.

Disadvantages:
- this is not the simplest solution. There is some extra complexity introduced for managing package dependencies and for the sake of robustness. However, in a large system this design pays off for its extra complexity very quickly.
Dino
Wednesday, August 03, 2005
 
 
Made a mistake ...

abstract class AbstrA implements InterfaceA {
  private Collection categories;

  public AbstrA(Collection categories) {
    this.categories = categories;
  }

  final public boolean isReady() {
    return !this.categories.isEmpty();
  }
}

// Package visible implementation
final public class ClassA extends AbstrA {
    ClassA() {
        super(Collections.synchronizedSet(new HashSet()));
    }
}
Dino
Wednesday, August 03, 2005
 
 
Hi, Dino.

Using your implementation of ClassA, a client can do this:

clientMethod() {
  ClassA ca = new ClassA( null );

  if( ca.isReady() ) {
    // Code never gets here.
  }

  // Code never gets here.
}

This throws a rather unexpected NullPointerException (NPE). Also, it does not conform to creating a Plain Ol' Java Object. Personally, I cannot stand constructors that take arguments (almost as much as I abhor NPEs). It implies that the object is not truly black-box. In my world, the following code would always work:

clientMethod() {
  ClassA ca = new ClassA();
  ca.setCategories( null );

  if( ca.isReady() ) {
    // Code might get here.
  }
}


Adding "final" to the "isReady" method artificially prevents subclasses from determining whether the object is "ready" (whatever that means). The same goes for making "hasCategories" private. Obviously you don't know the problem domain, but "hasCategories" could be determined by a database call, coded in a particular subclass.

But, again, all this is really skirting the main question.
Dave Jarvis
Thursday, August 04, 2005
 
 
Your NullPointerException comments are correct - that's a bug. (I have plenty of work to do and can't spend too much time thinking about posts on Joel. Enough excuses...)

public interface InterfaceA {
  public boolean isReady();
}

public class ClassFactory {
  static public InterfaceA createA1_1() { return new ClassA1_1(); }
  static public InterfaceA createA1_2() { return new ClassA1_2(); }
  static public InterfaceA createA2_1() { return new ClassA2_1(); }
  static public InterfaceA createA2_2() { return new ClassA2_2(); }
}

abstract class AbstrA1 implements InterfaceA {
  final public boolean isReady() {
        // ... implement isReady in some way
  }

}

final class ClassA1_1 extends AbstrA1 {
    // ... A1_1 concrete implementation
}

final class ClassA1_2 extends AbstrA1 {
    // ... A1_2 concrete implementation
}
 
abstract class AbstrA2 implements InterfaceA {
  final public boolean isReady() {
        // ... implement isReady in some other way
  }
}

final class ClassA2_1 extends AbstrA2 {
    // ... A2_1 concrete implementation
}

final class ClassA2_2 extends AbstrA2 {
    // ... A2_2 concrete implementation
}

In my world, programmers make mistakes, just like I did. When you design the code, you need to find ways to naturally isolate such mistakes. Typically, a programmer would extend the system by creating concrete ClassAx_y classes. A slightly bigger job is to extend the behavior of the system. But their changes are isolated in simple classes, in most cases leaf/final classes. That minimizes testing.

Adding final to isReady in the abstract classes matches the Liskov Substitution Principle. If you override that method, then a derived class would not be substitutable for its base (since it does something else). Therefore, IMO, this calls for a final method.
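As a sketch of that substitutability argument (all names here are hypothetical): a subclass that overrides isReady() with unrelated semantics silently breaks clients written against the base class's contract, which is exactly what declaring the method final forestalls:

```java
import java.util.Collection;
import java.util.LinkedList;

class Base {
  private final Collection categories = new LinkedList();

  // Contract: "ready" means the collection holds data.
  public boolean isReady() {
    return !categories.isEmpty();
  }
}

// Overrides isReady() with unrelated semantics: no longer
// substitutable for Base, since it claims readiness regardless of state.
class AlwaysReady extends Base {
  @Override
  public boolean isReady() {
    return true;
  }
}

public class LspDemo {
  // A client written against Base's contract.
  static String describe(Base b) {
    return b.isReady() ? "ready" : "not ready";
  }

  public static void main(String[] args) {
    System.out.println(describe(new Base()));        // not ready
    System.out.println(describe(new AlwaysReady())); // ready, despite no data
  }
}
```

Had Base declared isReady() final, the compiler would reject AlwaysReady outright.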

For the main question, my point is that there's no academic or commercial code, just good code and bad code (e.g. mine was bad :-)). What I wanted to show you in my example is that structure matters most...
Dino
Thursday, August 04, 2005
 
 
I feel that this isn't a discussion about OO techniques so much as a discussion about approaches to development.

Personally, I think you can sit there for ages with your head in your hands thinking "what's the perfect implementation of this object" and just go round and round the same points.

Instead of getting stuck in a loop of unproductivity, I start by writing the dirtiest code I can and just prove that my idea works.  I base this loosely on the "write once - throw away" approach, or what I also call “sketching”.  All creative disciplines have a concept similar to the painter’s sketching: creating work that will never see the light of day, just in order to prove and practise ideas and techniques.

In development, because of business demands, sketching is harder but still possible.  When I sketch, I get to a certain stage where I understand the system more, and it then tells me what I should be doing to it. For example, I realise that I am repeating code, or having to solve problems I didn’t originally think about, and so I have the opportunity to make solid design decisions.

This is of course constant refactoring.  But I find several things that make this technique work:
1.    You prove things early on giving confidence to the functionality and allowing room to make good design decisions.
2.    You know your bad/sketched code isn’t release safe so you’re not tempted to release it (I often tell my manager this is just a sketch and very unstable – it’s a long way from ready yet but you can see it works).
3.    You solve the problems as you find them - rather than trying to predict them all at the start and getting stuck with huge philosophical and academic arguments.
4.    Good use of unit testing ensures that you maintain the functional integrity even as the design radically changes.
5.    You naturally reach the point where you decide whether something is worth doing or not.  If your application is working, you only have 3 classes with some repeated code, and you’ve two days till deadline, you figure it’s OK code.  If you’ve realized you’re going to have to write 10 more classes, then you make the decision to start employing some decent OO principles.
6.    You start writing code that’s easy to maintain, because you’re the one who’s gonna be back rewriting it in two weeks’ time.
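Point 4 can be sketched without any test framework (the class and its methods here are hypothetical): a few hand-rolled checks pin down the observable behaviour, so the implementation underneath is free to change radically between sketches:

```java
import java.util.ArrayList;
import java.util.Collection;

// Hypothetical class under test.
class CategoryHolder {
  private final Collection categories = new ArrayList();

  public boolean isReady() {
    return !categories.isEmpty();
  }

  public void addCategory(Object category) {
    categories.add(category);
  }
}

public class CategoryHolderTest {
  public static void main(String[] args) {
    CategoryHolder holder = new CategoryHolder();

    // These expectations survive any internal refactoring,
    // so the design can evolve while behaviour stays fixed.
    check(!holder.isReady(), "a new holder is not ready");
    holder.addCategory("books");
    check(holder.isReady(), "a holder with a category is ready");

    System.out.println("all checks passed");
  }

  private static void check(boolean ok, String message) {
    if (!ok) throw new AssertionError(message);
  }
}
```

A real project would use JUnit or similar, but the idea is the same: the tests encode the contract, not the implementation.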

This may not make sense to some, but I find this approach easier: it frees my brain up to think more creatively about how I solve design problems, and I always find I leave code in a state where I feel confident and happy.  In this specific example, you would implement your first solution with version 1 of ClassA, unit test it, etc.; then, as you learn more about the system, you may decide it is important to go back, make ClassA more stable, and evolve it into the second version.  The system reveals its needs to you as you build it – that’s my feeling anyway.
Jupiter Moon
Thursday, August 11, 2005
 
 
In reply to "If it works then it cannot be wrong, just as a solution that does not work cannot be right."

That is not a good measure IMHO.  You have to ask the question:
"Is bad code that works easier to be made wrong than good code that doesn't work is to be made right?"
Peter Moss
Thursday, August 11, 2005
 
 

This topic is archived. No further replies will be accepted.
