A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
I am a CS graduate student working part-time for a small software company. We have only three developers (one full-time, myself, and a fellow grad student from the same university, another part-timer), so we can (and often, against my better judgement, do) get away with sloppy source-control, testing, and coding practices.
I can live with the fact that we don't have testers. I can even live with the fact that we are leaving debugging symbols in builds that go to the clients.
But the latest piece of sloppiness in the service of convenience really bothers me.
The software we sell is specific to a certain industry, and due to the nature of that industry, we need to make customizations for each new client we take on. Generally, they go into the main build, since other clients simply don't use features they don't need.
But now we are building a version for a very different client, in a very different country, with a very different culture. This means we need to include things that domestic clients absolutely should not be seeing.
Well, to avoid forking the build, the decision has been made (not, I stress, by me) to read "use this"/"don't use this" settings from the client's database, instead of producing a different "Japanese" version.
Each feature has to read from the database what it should be doing, and respond accordingly. That's fine when there are only 6 differences, but what about when there are 60? 600?
I try to object when decisions like this get made, but I am a voice in the wilderness, since "it's only a few small changes" and doing things my way would take more precious, expensive time.
Not to mention that I am a computer scientist, a theoretician, rather than a software developer qua software developer.
Is it just me, or is this totally fucking bonkers?
Extracting out the behaviour of the application into runtime configuration parameters is called "Metaprogramming" and is usually a whole lot better than the alternative.
> That's fine when there are only 6 differences, but what about when there are 60? 600?
Well, how good is having two separate codebases when you are checking in 6, 60 or 600 changes applicable to both?
So, no, personally I don't see this as being bonkers. Hairy, yes, but not bonkers.
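To make the idea concrete, here's a minimal sketch of the flag-lookup approach — the settings table and flag names are illustrative, not from the original poster's codebase; in the real app the rows would come from the client's database at startup:

```cpp
#include <map>
#include <string>

// Stand-in for the per-client settings table read from the database.
using Settings = std::map<std::string, bool>;

// A feature asks whether it is enabled; unknown flags default to the
// domestic behaviour, so new flags are strictly opt-in per client.
bool featureEnabled(const Settings& s, const std::string& key) {
    auto it = s.find(key);
    return it != s.end() && it->second;
}
```

The point is that adding a 7th or 60th difference is one more row of data, not another branch of the source tree.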
Wednesday, March 29, 2006
+1 for Derek.
If you ever want to service clients outside one vertical model, the software has to be adaptable. It's far easier to set control flags in a database or config file than to manage code branches.
For instance, the primary package my company sells does revenue-sharing financials. It's in use in 6 very different industries. We use a combination of flags (yes/no, on/off etc) and metadata (data about data) to tell our data manipulation engines how to work in a particular installation. So the dead-tree publishers get what they want, the biotech firms get what they want, the computer hardware manufacturers get what they want, et cetera. From a single code base/distro.
This way, we leverage our IP and only need to learn the industry specific bits to sell into another vertical by knowing how to configure our package to suit. If we had to manage branched code for this, the company'd probably be out of business-the costs would be too high.
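As a sketch of the flags-plus-metadata idea (the struct fields and names here are invented for illustration; the real package's configuration is unknown):

```cpp
#include <string>

// Hypothetical per-installation profile: a yes/no flag plus a bit of
// metadata telling the engine how to present results in this vertical.
struct InstallProfile {
    std::string unitLabel;  // "titles", "compounds", "boards", ...
    bool accrualBasis;      // example on/off flag read at startup
};

// One code base: the engine consults its profile rather than a
// compile-time constant, so each industry is just different data.
std::string describeUnits(const InstallProfile& p, int n) {
    return std::to_string(n) + " " + p.unitLabel;
}
```

Selling into a new vertical then means writing a new profile, not branching the code.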
Agree with the previous two.
If you want a comp-sci (or, more accurately, software engineering in this case) explanation of why they are right: think about duplication of code. Massive duplication of code is always bad, for reasons you should know.
Here the fork approach requires duplicating almost all of the application, including the bits which are the same regardless of the 2 (or more) options... and those bits sound like they make up the majority of the app.
The goal in creating this type of app is to create one app which you can sell to multiple clients, without having to maintain a whole new code base for each.
Wednesday, March 29, 2006
>> Each feature has to read from the database what it should be doing, and respond accordingly. That's fine when there are only 6 differences, but what about when there are 60? 600?
Like, say, Windows reads from the registry [the parameters that control] what it should be doing, and responds accordingly? Works fine with literally thousands of differences...
Or, to put it another way, which is better, an ini file containing preferences, or a different build for each combination of preferences?
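For what an ini-style preference file amounts to in code, here's a minimal sketch — just "key=value" lines parsed into a map; real ini files also have [sections], comments, and whitespace rules that this deliberately ignores:

```cpp
#include <map>
#include <sstream>
#include <string>

// Parse "key=value" preference lines into a lookup table.
std::map<std::string, std::string> parsePrefs(const std::string& text) {
    std::map<std::string, std::string> prefs;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        auto eq = line.find('=');
        if (eq != std::string::npos)
            prefs[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return prefs;
}
```

Every new preference is a new line in the file, against one build — versus a combinatorial explosion of builds.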
As a software developer, each and every time that I get complacent about an app I'm creating and developing, I think about the following words that have been instilled in me from my upbringing.....
"There is NEVER time to do it right.....but there is ALWAYS time to do it over."
For me, this settles any apprehensions I may be having about a development issue.
Forking is an issue since you now have two (or more) code bases that need to be maintained. You will find very often that a bug fix for one client needs to be made for the others. This means a duplication of work or worse - forgetting to make the fix and then going through the whole rigmarole of tracking down a bug that has already been fixed once. On the other hand, staying with one code base does mean that you are constantly checking configuration settings.
There is another option besides soft configuration, and that is conditional compilation of the code. In C++ you can use the preprocessor to do conditional compiles. This means that you can have a separate config.h file for each client and only fork the code that is radically different from one client to the next. This may make things easier to maintain than using runtime switches, and it also stops clients fiddling in unexpected ways with the config files.
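The per-client-header idea looks roughly like this — the file name config.h matches the suggestion above, but the macro and feature names are hypothetical (here the macro is defined inline so the sketch is self-contained):

```cpp
#include <string>

// In practice this would come from the client-specific config.h
// chosen by the build system, e.g. #include "config_clientA.h".
#define FEATURE_LOCAL_TAX 1

// The preprocessor compiles one branch in and the other out, so the
// shipped binary contains no trace of the disabled behaviour.
std::string invoiceFooter() {
#if FEATURE_LOCAL_TAX
    return "Tax shown per local regulations";
#else
    return "";
#endif
}
```

Note the trade-off: unlike a runtime flag, changing the behaviour means rebuilding, but clients cannot toggle it themselves.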
Thursday, March 30, 2006
Of course, configuration is better than forking, but that's already been addressed by others. I want to object to your notion that leaving debugging symbols in builds that go to the clients is sloppy:
I do that all the time, and I'm quite happy with it. The extra weight of the binary does give you a size and speed penalty, but that's all but unnoticeable for most applications nowadays, and the value of shipping a product that is both _identical_ to your development version and can be examined in place if something unexpected happens makes this an easy trade-off.
>I do that all the time, and I'm quite happy with it. The extra weight of the binary does give you a size and speed penalty, but that's all but unnoticeable for most applications nowadays
That depends on your application. The difference might be significant if you do a lot of number crunching. Also a debug build must be a lot easier to crack.
Anyway, it is possible to generate a map file from a release build that you can use to look up the source line of a crash. And you don't send this map file to the customer.
Thursday, March 30, 2006
Forking the dev. tree of an application is one of those things that sounds good in a purist sense but really starts to suck once you are forced to live with it. I was in almost the exact same position you are in about 5 years ago, except that I was the only developer. I made the decision to fork the versions (based on similar circumstances and some others that do not pertain here) and ended up regretting it for years after. The problem, as others have stated, is when you need to make the same change to both versions. The change could be a bug fix, a feature addition, whatever. No matter how diligent you are about making your changes and updates, you will eventually end up with two *almost* identical products that are *almost* impossible to maintain.
And don't forget the slippery slope. Once you fork it for one client it will be much easier to do it for another. Then you have 3 *almost* identical versions to maintain.
Friday, March 31, 2006
The ONLY reason I can think of to fork is to be able to demonstrate to an auditor that the intellectual property contained in one code base is not contained in the other.
If it was just a matter of turning things on or off via configuration flags or even compile time options, then there's always the possibility - as slim as it may be - that a client can hack the binary and tease out a meaningful conclusion about your other clients. For example, if I notice you have a flag USE_AMERICAN, that's enough to make me wonder if you have a USE_CHINESE flag.
However, having said that, I should point out that if you do need to maintain that clear separation, it's probably best to maintain it at the business level. Somebody else should license a clean copy of the code from you and fork it.
Friday, March 31, 2006
I think it really depends on what the changes are...
For language (UI) changes, I definitely wouldn't fork.
Are there any other options besides a straight fork?
What percentage of the code base would be different? 5%? 50%?
Is it possible to have a vanilla build with the extra/different stuff put in? Are you automating your builds? That would make it pretty easy to see a break.
Interesting that there have been a lot of opinions about how forking and having 2 versions would be bad... but if you are setting flags, you have essentially placed another layer of 'state' in the app.
And EVERYTHING better play by the rules of the 'state' you are in... :)
My word of advice is to avoid at all costs the use of preprocessor directives (#ifdef, etc.) to block code sections in or out based on the target customer.
The technique sounds appealing and would seem to work well at first, but rapidly goes out of control and becomes difficult to maintain and debug.
I speak from experience after having to make mods to a large C project where this had been done.
If you go with the config file/database option...make sure your configurations are kept as simple as possible and clearly documented! Don't let your configuration info become another programming language!
Thursday, April 06, 2006
What I read into the first post is that the application constantly queries for its configuration/mode with every GUI change. If so, work around that, but as everyone has pointed out, that alone does not call for a fork.
Monday, April 17, 2006
This topic is archived. No further replies will be accepted.