The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

design of software

I've been thinking hard and long about why software is designed the way it is today.

I am actually a hardware dude by education. When I design hardware, there are rules, guidelines, components, and lots and lots of subtleties. Even so, I feel like software development is a lot worse in many ways.

I can easily compare hardware development to building things out of Legos. There are well defined components in most cases. All you have to do is get the right components, hook them up as prescribed (most components have appnotes), and most of the time you get your working system.

OK, I know it is not all that easy sometimes. Board layout is still mostly magic (especially for high-speed design), and when things don't quite work, it gets messy, but relatively speaking, software design is nowhere close to this...

In software, there are no building blocks other than the most basic stuff like ints, chars, etc. There are design patterns, but I wonder how much they are actually used, not to mention that these patterns aren't quite "plug'n'play". In practice, developers have to create their own Lego pieces, which results in pieces that are either too specific to be reused anywhere else, or buggy.

Why is it like that? What is your opinion?

p.s. I've done quite a bit of coding, but I don't have a formal software engineering background, so pardon me if I am being too naive.
Tuesday, August 16, 2005
It's like that because software is probably the most complex kind of engineered system in the world.
Colm O'Connor
Tuesday, August 16, 2005
Actually, the sort of "component-based software design" that you're talking about exists and is widely used.

In .NET, the FCL is exactly the library of reusable components you're talking about: components that are intended to be plugged together to create an application. In Delphi, the VCL is much the same.

The basic FCL and VCL components are in many cases not feature-rich enough for professional-quality apps, so large markets for feature-rich 3rd-party components exist. See DevExpress, for example, for an idea of what a set of these components looks like.

This component-based programming simplifies and speeds up a big part of the development process, but design and coding of software is still a complex process, and very far from any point where you'd simply be able to stick a few components together.
Herbert Sitz
Tuesday, August 16, 2005
I used to explain hw vs. software thusly:

Imagine you're building a skyscraper.  You're up to the 70th floor and someone decides to change the size of the bolts. But not just the bolts for the next floor, or even for this floor. ALL THE BOLTS are now half as big.  The building crashes.

Not quite as true any more these days.
Mr. Analogy {uISV}
Tuesday, August 16, 2005
I'm sure there are a lot of reasons, but a few stand out to me:

1) There's a large incentive to get a hardware design right the first time.  If it fails, you have to eat the cost of a unit and further development, and the customer gets pissed because the fix takes so long to turn around.  The costs of fixing things in software are often negligible compared to hardware--costing next to nothing to rebuild and replicate.  In fact, we rather expect that the product will break, many times.

2) As a corollary to the above, we rather expect any third-party components we use to break, many times.  Sometimes they break so spectacularly that the only sane decision is to create our own components, temporarily ignoring the fact that they too will break, many times.

So when you don't have reliable building blocks, all bets are off on the rest.  :)

3) Too many choices of reusable components, none of which is an obvious best choice.  Any choice you make is virtually guaranteed to cut off a large number of potential customers/users, because a program isn't a standalone device.  It's like having to buy a car to use a cigarette lighter.
Matt Brown
Tuesday, August 16, 2005
I am gonna have to check out devexpress!
Tuesday, August 16, 2005
How about any resources for non-.Net and non-Delphi development?  I know.. I know.. Google is my friend.  :)
Tuesday, August 16, 2005
How come it's always software that has to work around hardware problems?
son of parnas
Tuesday, August 16, 2005
What Colm said.

With hardware, you are constrained.  You lay components out in a grid -- X and Y separation.  MAYBE you have two boards -- but usually ALL interconnects for those boards go through one card edge -- MAYBE an additional jumper cable, but that's it.

With software, all such constraints are removed.  You can have a multi-dimensional hypercube of components pretty quickly, all directly connected to each other with multiple parameter data highways.

Now, you can IMPOSE restrictions on connectivity, and Software Engineering does -- look up Coupling and Cohesion.  But that assumes people are trained in that.  I've still found people making use of 'common' data areas (globals) in C and C++.

And it's OBVIOUS in hardware when people do that stuff -- you can SEE the traces and the proliferation of jumper cables winding into a rat's nest of interconnects.  In software, we don't (yet) have a visualization method that shows that stuff.  UML class diagrams merely show inheritance and associations.  In fact, most of our graphical software standards are currently trying to make the hypercube visible, not provide views that show how complicated the interconnections of the hypercube are.

So, software is one to several orders of magnitude more complicated than hardware can be.  Software is also seen as "infinitely malleable" -- just fix the code.  And this level of software hasn't been around that long.  "Just use the .NET Framework.  That'll solve your problem" -- well, .NET has only been around a few years.  We're still discovering (and developing) the frameworks that are helpful.
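
A minimal C++ sketch of that 'common data area' problem (names invented for illustration): the global version hides the interconnect inside the callees, while the parameter-passing version makes the "trace" visible at the call site.

    // Hidden coupling through a global "common area" versus explicit,
    // cohesive interfaces. A toy sketch; names are made up.
    #include <iostream>

    // Style 1: a global that every module reads and writes.
    int g_sharedCounter = 0;                 // any module can touch this

    void moduleA() { g_sharedCounter += 2; } // invisible interconnect
    void moduleB() { g_sharedCounter *= 3; } // call order now matters

    // Style 2: the dependency is an explicit parameter.
    int stepA(int counter) { return counter + 2; }
    int stepB(int counter) { return counter * 3; }

    int main() {
        moduleA(); moduleB();                 // coupling hidden in the callees
        std::cout << g_sharedCounter << "\n"; // 6

        int counter = 0;
        counter = stepB(stepA(counter));      // coupling visible right here
        std::cout << counter << "\n";         // 6
        return 0;
    }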
Tuesday, August 16, 2005
"I can easily compare hardware development to building things out of Legos. There are well defined components in most cases. All you have to do is get the right components, hook them up as prescribed (most components have appnotes), and most of the time you get your working system.

OK, I know it is not all that easy sometimes. Board layout is still mostly magic (especially for high-speed design), and when things don't quite work, it gets messy, but relatively speaking, software design is nowhere close to this..."

Well, this is a question of point of view. It is not always like this. The whole business of the alphabet soup (ASIC, PAL, VHDL, FPGA, and so on) is to create new components. This is the low-level design of the EE world, almost fully comparable with the lower levels of software design, keeping in mind that VHDL is a "programming language". =)
Take a guess.
Tuesday, August 16, 2005
True "guess", but you're still constrained by all those interconnects.  Good point, though.
Tuesday, August 16, 2005
"True "guess", but you're still constrained by all those interconnects.  Good point, though."

Yeah, I'm just waiting for the hardware guy to get the crazy idea of building half of a CPU so the interconnects can be multiplexed and make the complexity of the circuit into a software problem... *sigh*
Take a guess.
Tuesday, August 16, 2005
In software and hardware design alike, component-based design helps. But in both, it is still subject to emergent effects, and those limit how easy components can ever be to use.

Simply put, a system of connected components is bigger than the set of unconnected components. If you plug two components together, you have 1 interaction between them. Add a third one, and you get 3 interactions. A fourth one, and you are at 6; a fifth, and you are at 10. Pairwise interactions grow as n(n-1)/2 in the number of components n, and if you also count interactions among larger groups of components, the count grows exponentially. You can bet that while you are after a limited number of those interactions, the rest are "side effects" which can very well snowball and bury you under an avalanche.

This is why, in both software and hardware alike, isolating the subsystems is such a big deal. If you don't guard them from unwanted interactions, the parasitic inter-influence will kill your device. This is why encapsulation, modularisation, etc. are such a big deal: they gather the components into manageable groups whose interactions are predictable once again, so you can build a bigger system before you are bogged down in parasitic side effects.

This is true for any complex system design; it is simply due to the huge complexity and size of computing devices and software that it is more noticeable in software design.
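
A quick back-of-the-envelope sketch of that growth, in C++ (pairwise links versus all possible group interactions):

    // Possible interactions among n components: pairwise links grow
    // quadratically (n(n-1)/2); interactions among any group of two or
    // more components grow exponentially (2^n - n - 1 such groups).
    #include <cstdint>
    #include <iostream>

    int main() {
        for (int n = 2; n <= 10; ++n) {
            std::uint64_t pairwise = std::uint64_t(n) * (n - 1) / 2;
            std::uint64_t groups   = (std::uint64_t(1) << n) - n - 1;
            std::cout << n << " components: " << pairwise
                      << " pairwise links, " << groups
                      << " possible group interactions\n";
        }
        return 0;
    }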
Wednesday, August 17, 2005
Maybe hardware is different in the sense that once the connections are made as traces on the board, they are a lot harder/costlier to change, so designers spend more time figuring out if things will work or not.

Secondly, when you buy parts from companies like TI, Intel, or Analog Devices, these parts have exposed pins which the designers use to interconnect the parts. These pins are like APIs, if you will. Companies spend quite a bit of time developing the interfaces, which usually makes them very (re)usable. I doubt there is this much thought in the design process of software components.

I don't necessarily agree with the hypercube example although I do agree that there can be quite a few interconnections between software components. It appears that software development tools are still rather weak when it comes to building very large systems.
Wednesday, August 17, 2005
Software engineering is not like hardware engineering, nor like any other engineering discipline that we know.

Software is special because pieces of software do not behave in a way that we can understand statistically.

"The basic tenet of statistics is that a population can be represented by a sample of the population when the sample is sufficiently large and when the sample is composed of a random selection of units (persons, components, or of whatever the population is composed) from the population" -

I'll rephrase the above in the software context: we cannot prove, in general, that switching any two instructions around in a non-trivial program is a behavior-preserving operation (one that leaves the program's observable behavior unchanged).
Here's a quick (pseudo) proof:

Let program A be coded as Turing machine Ta. We build program B by switching two of A's instructions around. Program B is coded as Turing machine Tb.

Are Ta and Tb equivalent?

The proof requires the analysis of Ta and Tb. If the analysis were performed by another Turing machine (program), then its complexity must be higher than the complexity of either Ta or Tb. If all humans can understand is complexity Th (and there is such a limit), then changes in programs with Ta, Tb > Th cannot be proven behavior-preserving. (In fact, the equivalence of two arbitrary programs is undecidable in general, a consequence of Rice's theorem, so no analyzer can settle it for all programs.)

Because of this, we cannot understand the statistical behavior of software (if any), and therefore we cannot approach it as an engineering discipline.
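
A trivial C++ illustration of why swapping two instructions is not, in general, behavior-preserving; here the two statements carry a data dependency (a made-up example, of course):

    // Swapping two dependent statements changes observable behavior.
    #include <iostream>

    int main() {
        // Original order
        int x = 1;
        int y = 0;
        x = x + 1;                 // instruction 1
        y = x * 2;                 // instruction 2 depends on instruction 1
        std::cout << y << "\n";    // prints 4

        // Swapped order
        x = 1;
        y = x * 2;                 // instruction 2 first
        x = x + 1;                 // instruction 1 second
        std::cout << y << "\n";    // prints 2 -- behavior changed
        return 0;
    }

Deciding whether any particular swap is safe requires dependence analysis, and proving it for arbitrary programs runs into exactly the undecidability wall described above.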
Dino
Wednesday, August 17, 2005
Dino, that was great. No offense, but now I'm trying to figure out whether you just divided by zero somewhere or whether this is truly a new (to me) insight.

I didn't quite follow the step between stats and software behaviour, at first (and I probably still don't), but by the end I think I know what you were saying. Still, re-reading it a few times, I'm left with the feeling that I'm reading the equivalent of Escher's endless staircase.

Cripes, now I won't even be able to sleep tonight!
Ron Porter
Wednesday, August 17, 2005
Stats are built on averages. When a population has a huge body of average behavior, the small distinctiveness of each individual in the population vanishes, and that makes things simple.

But when each individual in the population apparently does its own thing and there's almost no average or we cannot tell what the average is, how can we build any stats?
Dino
Wednesday, August 17, 2005
Also realize that even though SW can be component-driven, designing is still very different. A standard ripple-carry adder is simply a set of full adders in series, each made up of half adders (and those made of lower-level gates). It's very easy to unit-test each component, reuse it in series or parallel, and keep testing upwards. Even more complex algorithms, such as prefix adders, are mostly made up of the same logic but wired in unique configurations. Extending from 16 to 32 bits requires mostly a copy and paste of the schematic, or basic wiring in HDL. Using only standard cells is now expected.
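
A minimal C++ model of that composition, with the half adder as the reusable cell (a toy sketch, not real HDL):

    // Half adder -> full adder -> n-bit ripple-carry adder; each level
    // is built only from the level below it.
    #include <cstdint>
    #include <iostream>

    struct Bits { bool sum; bool carry; };

    // Half adder: the basic reusable cell.
    Bits halfAdd(bool a, bool b) { return { a != b, a && b }; }

    // Full adder: two half adders plus an OR gate.
    Bits fullAdd(bool a, bool b, bool cin) {
        Bits h1 = halfAdd(a, b);
        Bits h2 = halfAdd(h1.sum, cin);
        return { h2.sum, h1.carry || h2.carry };
    }

    // Ripple-carry adder: full adders in series. Widening it from 16 to
    // 32 bits is just a change of the bit count -- the "copy and paste".
    std::uint32_t rippleAdd(std::uint32_t a, std::uint32_t b, int bits) {
        std::uint32_t result = 0;
        bool carry = false;
        for (int i = 0; i < bits; ++i) {
            Bits fa = fullAdd((a >> i) & 1, (b >> i) & 1, carry);
            result |= std::uint32_t(fa.sum) << i;
            carry = fa.carry;
        }
        return result;
    }

    int main() {
        std::cout << rippleAdd(1234, 4321, 16) << "\n";   // 5555
        return 0;
    }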

Hardware also has an interface that can easily be defined and relied upon. The technologies aren't likely to change mid-stream other than the fabrication process, but you won't be mixing DCVS logic with pass-transistor logic.  A lot of things are set in stone early on, such as your clocking framework (e.g. skew-tolerant clocking). Rules upon rules that make up requirements that won't change based on the customer's whim.

It's also rare to see implementations of software algorithms re-evaluated annually, to see if bubble sort became better than quicksort because of a move from C to C++. That's not so uncommon in hardware -- a move from one process to another can make methods that were poor in the past wonderful now. Simply having the best algorithmic performance does not make it the best technique.

Both can be quite complex and frustrating, but in entirely different ways. Anyone who claims one is harder than the other doesn't have enough experience, because what makes software a joy can be a nightmare for hardware developers.
Ben M
Wednesday, August 17, 2005
So why didn't they just take 100 P1 CPUs to make the next, better "CPU"? They could have interconnected them.

When you are prototyping and coming up with ideas, it is a reasonable thing to show that you can do something by putting it together out of common blocks. But when it comes to building a critical process that needs to be efficient/fast, you often have to go back down to hardware-level constructs to get the efficiency, and strip out all the nice reusable interface guff in between.

In the end, there are some reusable hardware components, your caps, transistors and so on, and these are equivalent to your if and while in coding; you still have to use them to help join stuff together.

Thursday, August 18, 2005
Because it's very difficult to get reasonable performance from parallel architectures. You can see that we are now moving towards multi-chip designs, because of advancements in compiler technology and because power is now a major design issue. This change affects the only interface that matters -- the external one. It causes changes to the ISA and to the system architecture, so it wouldn't be as graceful as you suggest.

You want to make the common case fast, and the common case is single-threaded applications. That means trying to extract the most ILP possible, and then keeping the rest of the hardware busy. With the ever-growing number of transistors to play with, the next generation of chips can make different design choices. For 20 years we made great strides in ILP (e.g. RISC, OOE), leading up to Itanium, but these days that's changing: the low-hanging fruit is no longer ILP, power is a concern, and compiler technology has improved enough that parallel processors are reasonable. That's why we are seeing a major shift in design theory.

I'm not saying everything should be stock components. Hardware has very set interfaces and strong encapsulation. My example was the ALU not changing its interface despite major internal redesigns. Internally they still use common blocks, each of which could change seamlessly too. Just like in software, better algorithms and architectures lead to increased performance rather than hand-tuned optimization.

You still have to throw things away. My point is that there is a lot more reusability in the hardware world, and set interfaces are more common because they can be specified correctly very early. Each designer can optimize their black box, knowing that others won't change the interface on a whim. That's something that software engineers have dreamed about for years.
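
A hypothetical C++ sketch of that black-box property: the adder's interface is fixed like a pinout, while the implementation behind it can be redesigned freely (all names invented for illustration):

    // A fixed interface with swappable internals, like an ALU whose
    // pinout never changes across redesigns.
    #include <cstdint>
    #include <iostream>
    #include <memory>

    struct Adder {                      // the "pinout": fixed and relied upon
        virtual std::uint32_t add(std::uint32_t a, std::uint32_t b) const = 0;
        virtual ~Adder() = default;
    };

    // One internal design...
    struct RippleCarryAdder : Adder {
        std::uint32_t add(std::uint32_t a, std::uint32_t b) const override {
            return a + b;               // stand-in for the bit-serial design
        }
    };

    // ...can be swapped for another without touching any caller.
    struct CarryLookaheadAdder : Adder {
        std::uint32_t add(std::uint32_t a, std::uint32_t b) const override {
            return a + b;               // stand-in for the parallel-carry design
        }
    };

    int main() {
        std::unique_ptr<Adder> alu = std::make_unique<CarryLookaheadAdder>();
        std::cout << alu->add(2, 3) << "\n";   // callers see only the interface
        return 0;
    }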
Ben M
Friday, August 19, 2005

This topic is archived. No further replies will be accepted.
