The Joel on Software Discussion Group (CLOSED)

A place to discuss Joel on Software. Now closed.

"No Silver Bullet" and Functional Programming

Partly in response to this, I've written a short essay on how functional programming fits into the model of "No Silver Bullet".  It's at

http://cogito.blogthing.com/2006/12/06/no-silver-bullet-and-functional-programming/

I conclude that functional programming, and specifically Erlang and Haskell, is in fact a silver bullet as defined by Brooks.

Of course that doesn't mean it's like Lego.  It is still hard.  Just an order of magnitude easier than the same job in C++ or Java.
Paul Johnson Send private email
Wednesday, December 06, 2006
 
 
I commend your article -- well done, good summary of Brooks's point of view.

I was put off a little by your OP, though, because it's typical of "technology buffs" to say -- "Oh, yes there IS a Silver Bullet, and it's THIS!" -- 'this' being whatever their favorite technology happens to be.

Eventually, somebody is going to be right about this -- his thesis was "in the next 10 years", not that it was impossible ever to have SOME kind of 'silver bullet'.

To my knowledge (which frankly is not much where Haskell is concerned) I don't think Haskell "manages complexity" very well -- but I could easily be wrong there.  I look forward to your next blog entry.
AllanL5
Wednesday, December 06, 2006
 
 
I don't agree, and I admit I'm interested in reading the next blog post where Paul explains why Erlang and Haskell haven't been adopted.

Personally, I think Paul is conflating the difficulty of expressing an idea within the constraints of the language with the difficulty of conceiving the idea in terms of boolean rather than fuzzy logic.

Using Paul's specific examples, I sincerely doubt that the silver bullet for writing correct and robust concurrency-aware code is to hold the developer's hand. It certainly does help, but the real sea change is getting the developer to truly and effortlessly think in terms of concurrency instead. I certainly don't sit down and think about threads and non-state machines and so forth when I program a basic calculator, and I will wager the vast majority of programmers don't think about it either.

If there is a silver bullet, it will be when every programmer instinctively takes into consideration and deals with security issues, concurrency issues, environmental constraints, to wit, everything external to the program itself that can negatively impact the program.
TheDavid
Wednesday, December 06, 2006
 
 
To AllanL5:

It's difficult to explain how Haskell manages complexity, or in fact how it manages to be a programming language at all (no assignment statements), without just teaching the language.  This is a classic example of the Blub Paradox ( http://www.paulgraham.com/avg.html ).  All I can really say is that once you have learned the language you will understand.  But not until.  Sorry about that, it's just in the nature of things.
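
That said, here is the tiniest possible taste (a sketch, not a lesson): where an imperative program would repeatedly update an accumulator variable, a Haskell definition threads the running value through a fold.

-- summing a list with no assignment statement in sight:
-- the "accumulator" is threaded through the fold, never mutated
total :: [Int] -> Int
total = foldl (+) 0

-- at the interpreter prompt:  total [1,2,3,4]  evaluates to 10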

Paul.
Paul Johnson Send private email
Wednesday, December 06, 2006
 
 
To TheDavid:

Your arguments are remarkably similar to the arguments against GC I used to hear back in the early nineties.  The solution to memory allocation bugs is not to hold the developer's hand, it's to get them to keep track of the memory they allocate.  Or in other words, be man enough to handle all of that accidental complexity.

The Macho Theory of Good Engineering is fundamentally broken, and always has been.

Paul.
Paul Johnson Send private email
Wednesday, December 06, 2006
 
 
"All I can really say is that once you have learned the language you will understand.  But not until.  Sorry about that, its just in the nature of things."

I'd bet that that's one of the major roadblocks to widespread Haskell adoption: its proponents can't explain the advantages of the language to people who don't know the language.

Actually, it's worse than that. Some time back, I picked up one of the main Haskell tutorials and started to work through it. Many of the code samples wouldn't run. When I asked for help, I was told that I couldn't just type them in as described in the tutorial and that I needed to understand the Haskell execution model before I started the tutorial.
clcr
Wednesday, December 06, 2006
 
 
Clcr:

Which tutorial was that?  If it was the Wikibook then that problem has been fixed.

Paul.
Paul Johnson Send private email
Wednesday, December 06, 2006
 
 
"...are remarkably similar to the arguments against GC I used to hear back in the early nineties."

I agree they are remarkably similar, but they are not the same.  It's not a question of being man enough to handle the additional complexity; it's a question of being aware that the complexity exists, to the extent that it influences how you write code.

The string buffer overrun security issues are a better analogy. People have been trying for years now to get "novice programmers" to use the safer strcpy2() instead of strcpy(), and in some cases have resorted to removing strcpy() from the libraries entirely; if you use strcpy(), your program won't compile for that platform. I don't like that solution because it still permits said programmers to try expressing 70,000 as a signed integer.

Simply adopting Haskell and Erlang is, for better or worse, this type of illusory bullet. I realize I'm nitpicking the words we use here, but I truly believe there's a greater benefit to be gained from truly learning and understanding concurrency, and then using Haskell to express that understanding easily and conveniently.

I did see Paul's response to clcr's complaint about the Haskell environment, and in a roundabout way, I think the need to answer that complaint reinforces my point. You should be able to learn the concepts behind functional programming (which Paul's blog actually provides a decent, albeit brief, introduction to) without an actual, physical compiler or programming environment.

Otherwise you're just doing what the compiler allows you to do, and a better compiler alone does not reduce the "design time" by an order of magnitude.
TheDavid
Wednesday, December 06, 2006
 
 
Everyone these days knows LOC measurements are unreliable.  If you look at the development times in the referenced PDF, Ada comes in at 23 and 28 hours, Haskell comes in at 10.  That's not an order of magnitude increase in productivity.

Nice try though. ;-)
Grant
Wednesday, December 06, 2006
 
 
"No Silver bullet" link is broken.

And the "Silver Bullet" as defined by Brooks says that a true Silver Bullet would make programming easy.  But you say it's still hard.  Ergo, you've disproven your own argument. :)
Crimson Send private email
Wednesday, December 06, 2006
 
 
TheDavid:

It's possible that we are in violent agreement here.  Every language carries its own paradigm, and Haskell and Erlang are certainly not exceptions.  Merely adopting the language doesn't help: you have to adopt the whole paradigm or you are in for a world of pain.  That is why my article was headed "Functional Programming" rather than "Haskell and Erlang".  The languages are merely illustrations of the paradigm.  In both you still have to understand concurrency, but programming a large system doesn't require that you grok every bit of code in the whole of the system.

The problem Clcr had was (I think) just a technical issue with the way the Haskell command line interpreter works.  In Python you can basically paste a program in at the command line and it just works.  Haskell doesn't behave like that: for technical reasons to do with monads and the type system the command line behaves like it is inside a "do" block.  In particular a Haskell program might say "foo = 5", but in the command line you have to say "let foo = 5".  This is only a detail of course, but I'd hate readers who don't know Haskell to get the idea that this represents a fundamental problem with the language.
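
For example (assuming GHCi as the interpreter; older versions in particular insist on the 'let'):

-- in a source file, a top-level binding is written directly:
foo :: Int
foo = 5

-- but at the GHCi prompt the same binding needs 'let':
--   Prelude> let foo = 5
--   Prelude> foo + 1
--   6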
Paul Johnson Send private email
Wednesday, December 06, 2006
 
 
Grant:

The programming time figures are not comparable, partly because of possible reporting bias, and partly because they don't account for documentation.  The Haskell version was very well documented.  If you include documentation lines as well as code lines the correlation to reported time is rather better.

Large projects produce between 3 and 10 lines of code per person per day (across the whole lifecycle, for all staff from Project Manager down).  Brooks noted this in The Mythical Man-Month, and it's still true today.
Paul Johnson Send private email
Wednesday, December 06, 2006
 
 
Crimson:

The link is now fixed, thanks. 

I think you need to read Brooks's *actual* definition of a silver bullet.  It does not make programming absolutely easy, it just reduces the complexity by an order of magnitude.
Paul Johnson Send private email
Wednesday, December 06, 2006
 
 
Paul,

unfortunately, complexity is indeed what limits us. This was proved by Gregory Chaitin, who showed that a Turing machine of complexity N can compute only the first N bits of the Omega number. That makes Omega non-computable by conventional computation. And things don't stop there; in fact most irrational numbers are non-computable, i.e. there is no algorithm to compute them. The same goes for Church's undecidable equivalence problem and Turing's halting problem. They all say that there is no algorithm to resolve an infinitely complex problem. As a bonus, Chaitin gives us the maximum complexity that can be handled by a given algorithm.

However, one language may be more efficient than another in terms of computation time or memory. Moreover, quantum computers may yield better time complexity than conventional computers. But it is completely unknown/uncertain whether there is a computational device that can solve the halting problem / decide equivalence / calculate Omega.

From that perspective there is no silver bullet: no language can ever address complexity beyond its own complexity. Breaking that barrier requires a non-algorithmic, creative process.

PS. The undecidable equivalence problem is in fact a testing problem. Given enough complexity it becomes impossible to prove that a function does what it is supposed to do, or in other words it is impossible to test it. This is easily visible in a SOA system. For n services aggregating each other m times, the complexity of the testing process is O(n^2m).
Dino Send private email
Wednesday, December 06, 2006
 
 
But does it pay well... I'll learn Haskell or whatever you want me to learn, but again, does it pay well?  :)
Recce
Wednesday, December 06, 2006
 
 
It's hard to get a "weird" language to gain traction. Algol-like languages are part of our nature already. It's like doing basic mathematics. Haskell and other languages are more like advanced maths, where most people know nothing about that and will not learn in a year of training, let alone without training at all.

Since languages like Haskell don't get traction, their idiosyncrasies won't get fixed, that is, their sharp corners won't be softened.

Really. I program in Ruby in a "crazy" way, using some kind of Functional Programming alongside standard OO techniques. And I mean tens of thousands of lines across different projects. One of the first things I miss is the "RDoc" documentation tool: in the "FP files" it doesn't work, because there aren't methods or classes... I think people would hit problems like this in other languages too when writing documentation. In "OO files", by contrast, you can create as much related documentation as you want and it will be generated automatically for you as .html files.

I imagine there are other issues like this one. Like hard to debug errors, because the stack trace may get very wild. Dynamic typing. And on and on. :-)

I wonder what most folks here would make of advanced mathematics if they had to learn it "on the go".

It's like, while most folks want to "fill in the blanks", with FP you might have to do more, even though the reward is more as well. If this reward is not extra money, most folks don't care.
Lostacular
Wednesday, December 06, 2006
 
 
Then don't bitch that your job is hard, because you're choosing to use subpar tools.
well
Wednesday, December 06, 2006
 
 
From the study referenced:

> 1. The NSWC experiment was conducted in a very short time-frame with very little direct funding. Thus many corners had to be cut, including significant simplification of the problem itself; the largest of the prototypes was only 1200 lines of code.

> 2. The geo-server specification was by nature ambiguous and imprecise, thus leading to some variation in the functionalities of the developed prototypes. (On the other hand, the specification is probably typical of that found in practice, especially during requirements acquisition.)

> 3. The participants were trusted entirely to report their own development metrics. In addition, not all of the participants attended the first meeting at NSWC; those who did attend were advantaged.

> 4. No guidelines were given as to what exactly should be reported. For example, development times usually included documentation time, but there were major variations in the code-size/documentation-size ratios. In addition, different methods were used to count lines of code for each language, and it is not clear how one line of code for one language relates to that of another.

> 5. The review panel had very little time to conduct a really thorough review. For example, none of the code was actually run by the panel; they relied on the written reports and oral presentations.

The problem was grossly simplified, the code was rushed, the line count and time spent numbers questionable, and the final application never run.

This is your proof?

We don't even know whether those 85 lines of Haskell actually work with good inputs, let alone whether they correctly respond to bad data.
Jonathan Allen Send private email
Thursday, December 07, 2006
 
 
I've found that this guy's experiences with Haskell closely resemble my own:

http://wagerlabs.com/haskell

His recent site redesign has dorked the formatting, but there is good information there.
_
Thursday, December 07, 2006
 
 
@Paul:
I tried to comment directly on your post on your blog, but kept getting caught as spam, so am posting here. The intention is not to spam/flame you. Here's my comment:

While the arguments you present in the bulk of your post are convincing, the hard data presented hardly seems so. How does low LOC = high productivity, except in the weird world where CMM makes sense?

What gets put down as code is but the distillate of all the knowledge stored in a programmer's head, and functional languages do demand a higher overhead of that storage space than imperative ones (more for historical reasons, I admit, but there it is).

Moreover, code building tools reduce the effort to actually input those high LOCs with automation that is improving by the day.

Can you present a better quantitative metric than LOC counts for functional languages being an order of magnitude better?

Thursday, December 07, 2006
 
 
Some, slightly disjointed, thoughts:

While I am a fan of the concept of functional programming, it has always irked me that, as Paul points out, there are areas of commonly used functionality that are inherently not pure, most prominently I/O.  Yes, they can be cleverly wrapped in monads, but it highlights one of the key problems that most higher-level languages face: at some point they will need to interface with other systems that may be working on an entirely different basis that is at odds with their structure and methodology.
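
For illustration, a minimal sketch of the kind of wrapping I mean: the pure logic stays an ordinary function, and the impure part is confined to values of type IO.

import Data.Char (toUpper)

shout :: String -> String        -- pure: no side effects possible here
shout = map toUpper

main :: IO ()                    -- impure: all the I/O is confined to IO
main = do
  line <- getLine
  putStrLn (shout line)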

Brooks's original comparison of hardware to software is a strange one.  If you laid them side by side on a desk, it works, but he's not taken into account the differences in distribution and adoption: there is a (comparatively) small number of types of hardware in the world, generally mass produced, following reasonably strict standards, and adopted widely.  Software, on the other hand, is so amazingly splintered and reimplemented in a fascinatingly hideous way (let's call it product differentiation) that it's no wonder it's lagging behind.

It can be seen that when a software system provides a standard API that is rich and diverse, productivity increases.  Ruby on Rails is a good example of providing a high-level API for a specific use, namely developing web applications.  Does its increase in productivity come from the semantics of the language, or from the reduced line count (by far one of the most divisive and corrupt comparisons of languages)?  Some might argue that it does, but any brevity comes from a well-thought-out API, and the semantics are mostly only ever going to be a personal choice.

Why we programmers are looking for a silver bullet in the semantics of a language somehow escapes me.  They all have their place, they all have their use.  Personally I put far more effort into standardising the abstraction of the implementation, rather than the implementation itself.  This might sound a bit odd, but you can always fix, improve, and farm out the code at any time, whereas changing interfaces will more often than not create a massive haemorrhage of refactoring, if it is indeed possible at all (hey, let's call it strcpy2 or strcpy_s... nah, let's have both!)

Day to day I'm not generally interested in the code behind the interface, as long as it works, but I am interested in having a flexible language and API that lets me get a multitude of jobs done on time.  Unfortunately, most languages and APIs fall far short of this ideal, as they myopically focus too much on the specifics of implementation rather than the ecology of programming.

You can keep your Open Source - give me Open Interfaces :)
Alex May Send private email
Thursday, December 07, 2006
 
 
The reason imperative languages are worse than functional languages is not because they are imperative, but because they are bad languages:

http://www.warp9point99.blogspot.com/

Haskell is not a silver bullet, because there is no general algorithm that can prove the correctness of another algorithm; hence a program that compiles in Haskell can still contain logical errors not caught by the compiler.

Lambda functions, closures, garbage collection, algebraic types etc can exist in imperative languages too.

Developing proofs for each program is not a trivial task either, and it can take many months to formulate the appropriate theorems.

Finally, the benchmarks you present are biased. The getopt parser, for example, can be written in a few lines of C++ using a boost::Spirit-like framework:

rule cmdLine = *('-' << *(letter | digit));
Achilleas Margaritis Send private email
Thursday, December 07, 2006
 
 
All of the "no silver bullet" arguments in this thread are quite general, and apply to structured programming vs. spaghetti code goto programming or C vs. assembly language.

I think your definition of a silver bullet is meaningless if those two things aren’t silver bullets.
28/w
Thursday, December 07, 2006
 
 
Regardless of whether functional languages are a silver bullet or not (and I don't believe they are), they will become more and more important in the not-too-distant future (don't ask me which language, though).  Since processors are not speeding up much anymore but multi-core chips are becoming more and more common (with more and more cores), high-performance or resource-intensive applications will have to be parallelizable in order to take advantage of hardware improvements.  There are two main options for this:

1) Write/rewrite applications with a multi-threaded imperative model.  I haven't done this much, but I understand that it's hard because of unpredictable side effects and the need to isolate and synchronize threads.
2) Write/rewrite applications using functional programming.  Functional programs are inherently parallelizable because there are no side effects (see the sketch after this list).  However, functional programming is hard because it's a different (and less intuitive) model that most developers don't currently feel comfortable with.  Also, many of the languages are poorly developed and don't have good tools ( http://www.defmacro.org/ramblings/not-ready.html )
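
A minimal sketch of what option 2 can look like, assuming GHC with the parallel package (older versions spell rseq as rwhnf); built with -threaded and run with +RTS -N, the list elements can be evaluated on separate cores precisely because there are no side effects to order:

import Control.Parallel.Strategies (parMap, rseq)

-- a pure, side-effect-free stand-in for real work
expensive :: Int -> Integer
expensive n = sum [1 .. fromIntegral n * 10000]

-- because 'expensive' is pure, evaluating the elements in parallel
-- cannot change the result, only the wall-clock time
results :: [Integer]
results = parMap rseq expensive [1 .. 200]

main :: IO ()
main = print (sum results)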

So the future of programming will require one or both of these methods.  Personally, I think #1 will be more important for a long time because it's a modification of an existing paradigm and is well supported by big players.  However, as the tools for #2 become better and more people have good experience with it (and as problem sizes increase), I think it will become more prominent.
Peter Christensen Send private email
Thursday, December 07, 2006
 
 
+1 for Paul - don't know Haskell, used Common Lisp a bit for "fun".

But for production I've used languages that vary in expressive power from 8-bit assembly language (8080) to Perl and Python, and it's clear that one can work faster and more accurately in the more expressive languages.

Another thing that Brooks wrote about was that programmers tend to produce code at about the same rate regardless of language.

So the more expressive language wins that way too, especially if you pick one that shines at your particular need.
dot for this one
Thursday, December 07, 2006
 
 
Here's a link to Wikipedia's article on "Order of Magnitude", for those wondering what exactly that entails:
http://en.wikipedia.org/wiki/Orders_of_magnitude

i.e. even a 5-9x productivity increase or LoC decrease would not be considered an "order of magnitude" improvement.

I don't know about y'all, but ask any manager, business owner or entrepreneur if they'd like to increase their company's output / productivity / profit / etc. by even just 20%, and you can bet they'll be interested in taking a second look.

That's why, in this *real* world of ours, you don't need an order of magnitude difference, only a 1.5x, 2x, 3x and beyond difference.

That's why so many of us had to try out RoR to see what the hype was all about.

And even if frameworks are using some kind of sleight of hand to trick us all into thinking we're being more productive, who cares?  (Well, someone really should do a study at some point.)

But if you *feel* like you're getting 2x as much done, and having fun doing it, then of course you'll keep building apps in that language.
fez Send private email
Thursday, December 07, 2006
 
 
"Silvet Bullet" means a solution which covers every need.

I would like to see:

1) a sorting algorithm in Haskell that is as fast as in-place sorting in imperative languages (for contrast, the textbook Haskell sort is sketched after this list).

2) a GUI library built on top of Haskell (not Haskell using other libraries).

3) An application which uses the MVC pattern in Haskell and is as cleanly implemented as MVC in Java/Smalltalk/Qt.
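
For reference, here is the textbook Haskell quicksort that point 1 is implicitly contrasted with -- short and clear, but it allocates fresh lists at every step instead of sorting in place, which is exactly the performance gap in question:

-- the classic two-line quicksort: elegant, but not in-place
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]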
Achilleas Margaritis Send private email
Friday, December 08, 2006
 
 
> "Silvet Bullet" means a solution which covers every need.

I think "Silvet Bullet" might mean a solution to a well-known problem that cannot otherwise be solved (specifically, stopping a were-wolf); other, easier problems don't require silver bullets.

http://en.wikipedia.org/wiki/Silver_bullet suggests that penicillin is a "silver bullet": that implies that penicillin is an effective solution to a well-known problem, but not that it solves all problems.
Christopher Wells Send private email
Friday, December 08, 2006
 
 
The fundamental problem with functional languages is that they try to legislate away the hard/interesting part of programming - which is manipulation of state. Pure functions are interesting ideas, but the purpose of a program is to transform one state to another. Trying to ban side effects from programming is a strange effort.
victor yodaiken Send private email
Saturday, December 09, 2006
 
 
I agree that the manipulation of state often seems to be the main point of many programming projects.  That said, devotees of FP claim an incredible ability to solve general-purpose programming problems much faster than they had previously been able to.  I don't have enough knowledge of FP to evaluate these claims - it takes me hours to do the most trivial things in Haskell - but at least it is an interesting learning exercise so far.
Something I would like to see is either an analysis or an evaluation of, say, Haskell for some relatively simple task such as opening up a text file, executing some function on every Nth word, and writing the result to another file.  Or a simple DB interaction example.  The "compute the Nth Fibonacci number" and "factorial" examples are fine, but I would love to see something a little bit more general-purpose.
Does anyone know where a relatively simple example program might be found?
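
For concreteness, a minimal sketch of what I imagine such a program might look like (I can't vouch for it; the file names and the choice of toUpper as the "some function" are just placeholders):

import Data.Char (toUpper)

-- apply f to every nth word of the text
-- (note: this collapses the original line breaks, which is fine for a sketch)
everyNth :: Int -> (String -> String) -> String -> String
everyNth n f = unwords . zipWith pick [1 ..] . words
  where
    pick i w
      | i `mod` n == 0 = f w
      | otherwise      = w

main :: IO ()
main = do
  contents <- readFile "input.txt"
  writeFile "output.txt" (everyNth 3 (map toUpper) contents)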
WR
Saturday, December 09, 2006
 
 
The Fibonacci example is interesting. If you do the same example in C with, say, the gmp library, without any smarts (not using the built-in fib or some clever algorithm), you get a very fast implementation that can easily do fib(1000000), while the Haskell example dies with a "stack overflow" at 35000. To me, the Haskell idea of thinking of a function as a sequence is not so useful.
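
(For reference, the "function as a sequence" idiom being referred to is presumably the usual lazy-list definition, something like:)

-- the whole Fibonacci sequence defined as a single lazy list
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n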
victor yodaiken Send private email
Sunday, December 10, 2006
 
 
Oh dear God please stop using the ability to calculate the Nth Fibonacci number as the hallmark of a good programming language. If all you do all day is calculate fibonacci numbers then by all means base your language selection on how well this task gets accomplished.

Anybody who says that language "blah-de-blah" is good/bad/useful/not useful/great/terrible/excellent/put-in-your-favorite-adjective  because of how well/poorly it can calculate a fibonacci number using X algorithm should be shot, stabbed, burned alive, run over, fed to rabid dogs and then thrown into a woodchipper.
Bart Park
Monday, December 11, 2006
 
 

This topic is archived. No further replies will be accepted.
