A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
Two of the bigger productivity gains that text-based, high-level languages have conferred on us are abstraction and the related ability to say more with less. These two things let us get unnecessary clutter away from our eyes and implement software at a level closer to the real world. Thanks to today's languages, I haven't had to implement a basic data structure from scratch in over a decade. But each of these benefits comes with a price, and the farther we stretch these capabilities, the greater that price. Are we nearly at the point of diminishing returns? Here's why I ask:
* Abstraction--by definition--means hiding details that don't seem relevant at the moment. Yet the specifications we are given can call for functionality at any level of detail, and the detail we hide today is the one we need to change next month. We have to exhume these details in order to tune our code for performance, and the abstraction gets circumvented as systems undergo maintenance.
* As languages have gotten more powerful (expressive), our need to manipulate bits, bytes, characters and numbers has not changed. We interface with a messy world. To compete successfully as a general-purpose language, every new entrant has to be born with a majority of the features of the best extant languages. As further proof of the continued need to access low-level details, note that most graphical programming environments have a scripting language, to make them capable enough for the real world. And for debugging, the metal itself is the only limit on how far down we may have to go to find a problem.
Textual languages must grow in complexity in order to grow in power. You can't say more with less typing unless the underlying language features get more complex to compensate. But human beings have cognitive limits. OO has been around a while now, but few developers use more than 1/3 of the known OO technology. Same for templates/generics. One also has to consider that developers come in a range of capabilities, and the least capable person may end up maintaining the top dog's code.
In summary, I'm seeing a dearth of major new language features that make a _big_ impact for a _majority_ of developers. Languages like C# have new features, but nothing that I find earth-shattering; they are mostly rounding off the rough edges from previous languages (and renaming many of the keywords). So I repeat my question: Are we at a point where all the major, usable innovations in high-level languages are here, and any improvements are going to come slowly, and be incremental?
Probably not. There is an interesting trend of change based on annotations. Annotations instruct the compiler how to generate code. They are a simple case of programs writing programs.
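As a rough sketch of the idea (the `@Audited` annotation and `PayrollService` class below are made up for illustration, not from any framework), a Java annotation is metadata attached to code that a compiler plugin or framework can read back and act on:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// A hypothetical annotation; RUNTIME retention makes it visible via reflection.
@Retention(RetentionPolicy.RUNTIME)
@interface Audited {
    String level() default "INFO";
}

@Audited(level = "DEBUG")
class PayrollService { }

public class AnnotationDemo {
    // Reads the annotation back at runtime; a framework or compiler plugin
    // could use this metadata to generate logging or proxy code for the class.
    static String auditLevel(Class<?> c) {
        Audited a = c.getAnnotation(Audited.class);
        return a == null ? "NONE" : a.level();
    }

    public static void main(String[] args) {
        System.out.println(auditLevel(PayrollService.class)); // prints DEBUG
        System.out.println(auditLevel(String.class));         // prints NONE
    }
}
```

The "programs writing programs" part is whatever tool consumes this metadata and emits code you didn't write by hand.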
Friday, July 06, 2007
Sun's motto was "the network is the computer." With proof in facts such as the Internet, and now Google's infrastructure counted in millions of server computers, there's plenty yet to be invented in programming to handle the current and upcoming challenges.
Programming languages look huge or small depending on whether you stand right next to them or view them from an "airplane," so to speak.
BTW, I enjoyed your text. Nice one. It managed to stay relatively neutral, which is good, but there's a bias somewhere, right? :-)
Computers have become commodities, used in multiples to deliver a single service with backup, redundancy, or computation sharing, where once a single computer was good enough. I think we are in a transition period, where companies need to adapt their business models to the new times. On one hand, companies need to be profitable and extract as much profit as they can whenever they can; on the other hand, it's not always convenient to pay too much for software that isn't necessarily used in full production mode. So paying for redundant software might be expensive, while avoiding redundant computers altogether goes against "the network is the computer" and the demands of the Internet.
I mean this as an example of the challenges facing the companies who create new programming languages. Their interests don't necessarily coincide with demand, as things have changed, are changing, and will change. Companies don't change as fast.
It's not that I think change is easy, though. There's a lot at risk. The need for different companies' computers and programs to work together makes things even more difficult sometimes. And then there are the governments, universities, and NGOs, who are not necessarily in it for the business.
How can programming languages, single-handedly built and promoted, take on such issues which are beyond the scope of any single company/person?
Maybe the abstractions that programming languages allow can only do so much in the face of the greater needs required of them all the time. From this angle, it's clear that the innovations happen at a level above the programming language, even though that means these innovations are not easily reusable by everyone, since they are not necessarily even available.
Friday, July 06, 2007
"Textual languages must grow in complexity in order to grow in power."
No, not really. The rules of chess don't have to grow in complexity in order to enable better games of chess.
What a textual language buys you is an efficient (and unambiguous) way to express data structures and algorithms. And by 1978 we knew that "Structured Programming" (using sequence, decisions, loops, and subroutines) reduced spaghetti code.
Adding OO on top of this allows us to encapsulate data and methods, and enables polymorphism and inheritance. But I think textual languages are topping out at this point.
The next order of magnitude increase in productivity will probably be associated with visual metaphors. Though they've been saying that since 1985 or so. But if we could implement a simple (yet powerful) graphical environment, that allowed us to 'compile' down into the text files, that would 'compile' into executable code, that would do it.
Rational is trying that with UML. Which is ok, but is going to take a while, since UML was not originally designed as a 'compileable' graphic standard.
The big show-stoppers these days are the learning curves for dealing with API's and basic data structure libraries.
Friday, July 06, 2007
This post reminds me of Joel's article on "leaky abstractions": http://www.joelonsoftware.com/articles/LeakyAbstractions.html
Friday, July 06, 2007
I've always suspected that we're asking too much out of any one single language. The common metaphor is the toolbox. I distinctly remember a period of time when we were encouraged to use Fortran for calculations that required a high degree of precision, Perl for "write once, read never" quick and dirty scripts, Cobol for business logic, and so forth. Each of these languages had a strength that made it ideal for one type of task, and a weakness that made it less than ideal for others.
We've now reached the point where a recent college graduate is expected to know only one language, be it Java, Visual Basic, C++ or C#. The IDEs, libraries and tools available allow that graduate to do pretty much anything within the comfort of his chosen language, and they try with varying degrees of success to hide those strengths and weaknesses. If you now need to do mathematical calculations with a high level of precision, you can simply use the Boost library (I think); you're not forced by necessity to use Fortran, and you won't have an incentive to learn how to cope with double-precision values at the byte level.
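To sketch that point in Java (a library class standing in for what once required a specialized language): binary doubles can't represent many decimal fractions, but the standard library's `BigDecimal` handles them exactly without leaving your home language.

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    // Exact decimal addition via the standard library; no byte-level coping needed.
    static BigDecimal exactSum(String a, String b) {
        return new BigDecimal(a).add(new BigDecimal(b));
    }

    public static void main(String[] args) {
        // Binary doubles cannot represent 0.1 or 0.2 exactly:
        System.out.println(0.1 + 0.2 == 0.3);  // false
        // Decimal arithmetic gets it right:
        System.out.println(exactSum("0.1", "0.2")
            .compareTo(new BigDecimal("0.3")) == 0);  // true
    }
}
```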
I think this is an advantage in that "routine" programming is a lot easier and quicker to accomplish. I also think that this is a disadvantage preventing us from writing "novel" solutions tailor made to the problem at hand. It's like teaching an artist how to use charcoal pencils and forbidding him from ever using oils or watercolors.
Friday, July 06, 2007
Annotations/attributes are a way to modify the way a class, etc., is compiled. Interesting, and might even lead into Aspect-Oriented Programming someday. When I first used them in C#, they appeared to be a mish-mash of C's #pragmas and command-line options for controlling the compiler. Perhaps I should include them as a recent innovation.
What I meant by languages must grow in complexity to grow in power, is that if you want to invoke a particular programming language construct, someone has to do the work, be it you or the compiler. If it's the compiler, then the learning curve went up for you. If you do the work, then the amount of labor went up. Either way, something got harder.
And yes, I waited a long time for the visual programming tools to come along, and every time I used one, they always had to put in a way to type in scripts! Visual programming suffers from a lack of a good means to fully parameterize what you were doing before with text. Want to sort data? You still have to somehow tell the tool that it should sort on column X, then Y, then Z. Best way to do that always turned out to be textually. I have quit waiting for visual programming methods.
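The "sort on column X, then Y, then Z" instruction above is a good illustration of why text wins: in a modern textual language it is one expression. A sketch in Java (the `Row` record and its column names are made up to mirror the example):

```java
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    // An illustrative row type; the column names X, Y, Z come from the post.
    record Row(String x, String y, int z) { }

    // The whole "sort on column X, then Y, then Z" instruction, stated textually:
    static List<Row> sortRows(List<Row> rows) {
        return rows.stream()
            .sorted(Comparator.comparing(Row::x)
                              .thenComparing(Row::y)
                              .thenComparingInt(Row::z))
            .toList();
    }

    public static void main(String[] args) {
        List<Row> sorted = sortRows(List.of(
            new Row("b", "a", 2), new Row("a", "b", 1), new Row("a", "a", 3)));
        System.out.println(sorted.get(0)); // Row[x=a, y=a, z=3]
    }
}
```

Expressing that same three-key ordering with icons and wires takes far more clicks than characters.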
(answering the title question) No. The most common high-level languages are not much more advanced than FORTRAN was in the mid-eighties. There is still plenty of room to implement more smarts in the compilers and provide programmers with all sorts of syntactic tools to better address programming problems. Templates, contracts, aspects, closures, etc. are just in their infancy as far as implementation in popular programming languages goes. We are only just now getting garbage collection in non-boutique languages.
As for all the talk of graphical programming: don't hold your breath. First, pictures aren't very good for describing most of what must be specified for a program to work. You can do some data flow, and some control flow, but it's a pain in the ass constructing a complex formula from little icons, and heaven help you if your control flow or data flow gets the least bit complicated (try, for instance, drawing a state diagram that accurately reflects the lexical analysis of C++ tokens).
We have some good visual programming tools, but they only address part of the programming task (Apple's XCode uses the visual environment to model object communication networks, Rational Rose uses UML to model class hierarchies and state machines, LabVIEW and Helix used icons arranged in a two-dimensional network to represent program steps, Apple's Quartz Composer and Microsoft's Popfly use icons and 'hoses' to model data flow between functional blocks), and we still have to fall back to textual code to fill in the nitty-gritty details of any real program. Don't expect this to change significantly in your lifetime: anyone who tells you otherwise is trying to sell you something.
Friday, July 06, 2007
I think the major high-level languages still suffer greatly from making data access (eg. accounting stuff, customer lists, parts lists, etc.) way too cumbersome and error prone, which I believe is what a huge majority of developers do all day long (as opposed to, say, writing operating systems).
There are some nichey languages that do a much better job of this (FoxPro, Progress, and PowerBuilder, I think) and it's beyond me why the approach hasn't spread any further.
I mean that you should not have to do things like string together SQL at runtime and hope it works, or have to spend decades of your life mapping between the database types and the language types.
Yes, I know about Hibernate and so on. Yes, they help. No, they are not the complete solution, IMHO. Yes, I also know about LINQ, but it is not good enough either. IMHO.
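To make the "string together SQL at runtime and hope it works" complaint concrete, here's a minimal Java sketch (the `customers` table and lookup are hypothetical). The concatenated form breaks on ordinary data; the parameterized shape, as used with JDBC's `PreparedStatement`, keeps the SQL fixed and sends values separately:

```java
public class SqlDemo {
    // Fragile: the value is spliced into the SQL text, so quoting bugs and
    // injection problems only surface at runtime.
    static String concatenated(String name) {
        return "SELECT * FROM customers WHERE name = '" + name + "'";
    }

    // Safer shape: the query structure is constant; the value travels
    // separately, the way a PreparedStatement binds it.
    static String parameterized() {
        return "SELECT * FROM customers WHERE name = ?";
    }

    public static void main(String[] args) {
        // A perfectly normal name breaks the quoting in the concatenated version:
        System.out.println(concatenated("O'Brien"));
        System.out.println(parameterized());
    }
}
```

Even the safer shape still leaves the type mapping between database and language columns to you, which is exactly the drudgery being complained about.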
You don't need changes in your current language to be more productive, you need a framework with which you can build and then run your components with less effort.
As an example, I have been building CRUD applications for decades, first using COBOL then an obscure 4GL called UNIFACE. I designed and built frameworks in both these languages which drastically increased the productivity of the software house where I worked.
I decided that I wanted to produce this type of application for the web, so I started off by teaching myself a new language which was purpose built for the web - PHP.
While it is true that there is an enormous amount of crap PHP code out there written by novices who have no commercial programming experience, I do not suffer from this disadvantage. After teaching myself PHP, the first step I took was to replicate those previous frameworks in PHP, which I have now completed. You can check out the result at http://www.radicore.org
With this framework I can start with nothing more than a database schema and generate working transactions at the touch of a few buttons, all without writing any HTML, SQL or even PHP code. The resulting code is split into three layers - presentation, business, data access - so it is easy to maintain.
The amount of code that is generated is quite small as it mostly refers to pre-built components in the framework's extensive library. The only time when any PHP code has to be written is for any business rules, which go straight into the business objects. Everything else, including access control, dynamic menus, task switching, etc, is handled automatically by the framework.
Adding new features to the language will NOT make you as productive as using a well-designed framework. That is my experience of writing 3 frameworks in 3 different languages.
Saturday, July 07, 2007
A number of times I've visited these topics and seen people get highly defensive or market their own custom solution to the problem instead of discussing the topic.
A lot of these topics are simply rehashes of Joel's articles.
More and more I find myself agreeing with Joel and his articles because of developers inadvertently proving his point.
We have at least a hundred years to go before we stop making progress (not that it will be perfect then, but good enough that progress will no longer be made). For one thing, there aren't many languages aimed at replacing C++. D is. The language I am working on is; maybe there are a few others. Let's say that my language eventually replaces C++ in 50 years. My language isn't the best that a C++-like language can be. It is good, but there is room for something better.
Programming language adoption is very, very slow. How long did it take Java to replace COBOL? How long did it take C++ to replace C? Is either of those replacements complete? Nope.
Physical objects replace each other at a much faster rate. If I invent a new forklift, it won't take long to replace all of the old ones. Maybe a few companies buy my new forklift right away so they can have the best stuff. Maybe most companies buy them when they are scheduled to replace their old fleet. Maybe the penny-pincher buys them when he can no longer get replacement parts for his dinosaurs. Code is different. Code never wears out and never requires maintenance (maintenance coding is not actually maintenance). Code is personal. Much of the value of code is knowledge of the code, and that is stored in humans.
So since programming languages take so long to change we have a long, long, way to go.
Saturday, July 07, 2007
What do you think of the facebook api and other service oriented APIs and widgets? Don't they take your productivity to the next level? Assuming you want to do exactly what they do :-)
son of parnas
Saturday, July 07, 2007
Yes, APIs and widgets (components) are a great time-saver if they do exactly what you want, or if the requirements are sufficiently fuzzy that nobody will complain when the look-and-feel is inconsistent. However, the drawback of quickly wiring together components (or filling in a few dialogs in a wizard, etc.) is that your competition can do the same thing just as quickly. There is no competitive advantage in R&D if you don't do some truly hard work to create something truly original. So for all but simple internal applications, I'm required to write real code.
No. Read about LINQ from Microsoft and watch Anders Hejlsberg's video on that topic. The way LINQ bridges code and data transparently is, in my view, a big productivity booster. E.g., you can do a join query on in-memory objects, XML, or SQL using exactly the same statement. Having IntelliSense and type checking for SQL queries, stored procs, etc. is a leap forward too. His vision that one day concurrent programming will also be done seamlessly is something that will bring even more productivity.
See the video at
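For a flavor of the join-over-in-memory-objects idea outside C#, here's a hedged sketch in Java using streams (the `Customer`/`Order` data is invented, and Java lacks LINQ's uniform syntax across objects, XML, and SQL, which is precisely LINQ's selling point):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class JoinDemo {
    record Customer(int id, String name) { }
    record Order(int customerId, String item) { }

    // Join orders to customers on customer id, entirely in memory.
    static List<String> join(List<Customer> customers, List<Order> orders) {
        Map<Integer, String> nameById = customers.stream()
            .collect(Collectors.toMap(Customer::id, Customer::name));
        return orders.stream()
            .map(o -> nameById.get(o.customerId()) + " bought a " + o.item())
            .toList();
    }

    public static void main(String[] args) {
        List<String> joined = join(
            List.of(new Customer(1, "Ann"), new Customer(2, "Bob")),
            List.of(new Order(1, "book"), new Order(2, "pen")));
        System.out.println(joined); // [Ann bought a book, Bob bought a pen]
    }
}
```

In LINQ the same `join` syntax would also work against a database or an XML document; that single-notation property is what the post calls the productivity booster.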
Sunday, July 08, 2007
I find myself spending a lot of time on problems in which a certain variable contains incorrect data, but I'm not sure how it got there. Object-oriented programming has not solved that problem. For example, I can define a Person class having FirstName and LastName properties. But if at some point I find that these properties contain unexpected values, there is no way for the programming language, IDE, or debugger to tell me how those values got there. What code assigned the incorrect values?
I believe that I could be a lot more productive if it were easier for me to trace a particular bit of information backward in time (during the current task execution at least).
A factory worker who notices a defect in a part can walk back along the assembly line to the place where the defect occurred. With an assembly line the flow of physical materials is pretty obvious. But in complex software developed using the most popular programming languages, the flow of data is very difficult to reconstruct.
I believe that this is an area where big improvements are possible.
Monday, July 09, 2007
I would like to clarify my previous post by pointing out that incorrect data is not the same thing as invalid data. One could certainly prevent invalid property values simply by validating properties when they are assigned. I am referring instead to situations such as a particular person object having a valid name that is simply not the correct name for that person. Finding out what part of a large and complex codebase actually assigned that incorrect value is frequently very tedious, especially when this is occurring in the midst of hundreds or thousands of other assignments, most of which are correct. Being able to trace the flow of information from that one incorrect value back to its source would be immensely valuable.
If you're at the mouth of a river and you want to find out where the pollution is coming from, you can trace the pollution upstream until you arrive at its source. I'd like the ability to do the equivalent thing with data values at runtime. I believe that most of today's languages were not designed with that goal in mind.
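A crude approximation of this is possible today in library code: make the field record who assigned it. The sketch below (the `TrackedField` class and its names are invented for illustration) captures the caller's stack frame on every assignment, so a wrong value carries a trail back upstream:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ProvenanceDemo {
    // A field wrapper that logs the source location of every assignment.
    static class TrackedField<T> {
        private T value;
        private final Deque<String> history = new ArrayDeque<>();

        void set(T v) {
            value = v;
            // [0] = getStackTrace, [1] = this set() method, [2] = our caller.
            StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
            history.push(caller.getClassName() + "." + caller.getMethodName()
                         + ":" + caller.getLineNumber() + " -> " + v);
        }

        T get() { return value; }
        String lastAssignment() { return history.peek(); }
    }

    public static void main(String[] args) {
        TrackedField<String> lastName = new TrackedField<>();
        lastName.set("Snyder");
        lastName.set("Smith"); // the suspicious assignment
        // Prints the class, method, and line that made the last assignment.
        System.out.println(lastName.lastAssignment());
    }
}
```

It only tracks fields you wrap and costs a stack capture per write; the point of the post stands that the language and debugger give you nothing like this for ordinary assignments.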
Monday, July 09, 2007
Bob Snyder, what's happening in your situation is that you're noticing that identifying bugs within the software is more important than actually processing incorrect data that somehow got injected into the data structure. This causes other parts of the code to bloat with error checking on data or data structures that should not need to be checked within the core subsystems of the program.
Now if these core subsystems are corrupt or incorrect, then this is a bug within your software that can only be fixed by stopping execution exactly where the problem is detected and fixing the bug. The alternative is a bug appearing somewhere within 1 million lines of code, and programmers wishing they had a magical tool to detect these violations of their own data structures!
I don't think any technology will be invented to let you walk back in time through a program's execution to see where these errors actually occur, because no technology other than you yourself will understand your data structures and the correct behavior your routines should exhibit.
> The next order of magnitude increase in productivity will probably be associated with visual metaphors.
I have worked with an EAI tool (webMethods) that promotes a "charting" metaphor, i.e. you create logic by stringing together "blocks" which represent loops, decisions and so on.
Each block has a whole range of parameters, depending on what it does. Some can be set up as "local constants", or you can "wire" the output of a block to the input of another and so on.
Pretty visual, if this is what you meant.
* Mapping complex XML structures by saying X goes into R1.X, while Y is the lower value between Y1 and Y2, is a breeze.
* It's not easy to actually understand what really goes on without opening each block and check what you have "wired in".
* Forget about inheritance (this could be just a specific restriction of the product I used, rather than a problem with graphical environments, but I doubt it).
* Good luck diff-ing two versions of a complex piece of "code" to understand what changed in the last 2 months.
So, while powerful in itself, I have trouble seeing this as very promising for overcoming "complexity". I suppose that where it shines, it's because it has specialized blocks which serve you well in the domain (example: exposing a piece of "code" as a webservice is a 1-click operation... but similar stuff exists for text-based languages too... implementing an FTP area with automatic startup of a process when you receive a file is a 1-block element...)
On the other hand, most "trivial" logic requires you to use a lot of small primitive blocks, and sometimes you almost feel like you're working in Assembler, except you use pretty icons instead of mnemonics...
Monday, July 09, 2007
AOP via annotations is already done (guice from google labs).
Monday, July 09, 2007
Bob, some researchers of user-centric design of programming languages felt the same way; they called their approach the WhyLine. According to http://www.cs.cmu.edu/~marmalade/whyline.html, "programmer's average debugging time [was reduced] by a factor of 7.8."
The closest equivalent in a mainstream programming language is OCaml's time-travel debugger.
Wednesday, July 11, 2007
This topic is archived. No further replies will be accepted.