A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
How do you go about doing a daily build and striving for a zero-defect environment? Does it mean I never get to go home until I've killed all the bugs in my new code? Or does it mean I just don't check my code back in until I've fully tested it, which leaves the code effectively branched for a much longer time?
I'm working with a handful of programmers for the first time (as opposed to working on my own, or with just one other coder), so I'm just wrestling with decisions like this for the first time.
I interpret it to mean that you resist the urge to forge ahead with adding the cool Bla control to your BlahBlah until you fix up the boring way your Menus don't desensitize properly sometimes.
As an aside, a daily build is a good idea because it's a simple test that's not open to interpretation or prioritization. 'Zero defect' is tougher because one person's defect is another's feature, etc.
It means you don't move on to new things until you're done fixing your daily problems.
Friday, September 24, 2004
Do you have a build manager? Typically they would have to find out which checkins are messing up the build, go back to a version that works and rebuild. Of course, the faulty checkins should be revisited and changes merged properly in.
You end up with inconsistent checkins because developers don't use the latest build when making their changes. Train your developers to check for updates and merge their changes rather than just check in.
It is worthwhile to run regression testing with the nightly builds. You want to know about build problems sooner rather than later (otherwise people code around bugs, locking problems into the product).
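The build-manager routine described above can be sketched as a small driver script. This is only an illustration: the step names and the "make" / "make test" commands are placeholders for whatever your real build and regression-test entry points are.

```python
import subprocess

# Sketch of a nightly-build driver. "make" and "make test" are
# placeholder commands standing in for your real build and regression
# suite; a cron job would run nightly() and mail the output to the team.

def run_step(name, cmd):
    """Run one build step; report and return False on failure."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"NIGHTLY BUILD FAILED at step '{name}':")
        print(result.stdout + result.stderr)
        return False
    return True

def nightly(steps=(("build", ["make"]),
                   ("regression tests", ["make", "test"]))):
    """Run every step in order; stop and report at the first failure."""
    for name, cmd in steps:
        if not run_step(name, cmd):
            return False
    print("nightly build OK")
    return True
```

The point of stopping at the first failure is that a broken build makes later test results meaningless; the morning report should name the first step that broke.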
Friday, September 24, 2004
To me, 'zero defect' implies a focus on software quality and bug resolution more than a particular check-in strategy. In general it's better to check in incremental, bite sized changes regularly than to complete a larger unit of work in isolation and only check it in when it's complete. By following that approach, changes you make without realising they'll cause trouble in other areas are more likely to be noticed sooner.
But a 'zero defect' policy can be applied with either model. 'Zero defect' would demand that, when such problems arise, they get immediate priority. The question is, do you want to drop everything for a short spell to resolve a couple of bugs on a regular basis, or resign yourself to a long and tedious bug hunt further down the road? :-)
You also need to take account of the relative cost of fixing defects vs. leaving them broken. If it's going to cost a lot of development and/or QA effort to resolve an obscure bug that nobody's ever going to notice, it may not be worth fixing. How important is software quality to your particular project, when viewed in commercial terms?
Daily builds, on the other hand, are a no-brainer. It's all too easy to accidentally miss one file during a checkin, and without a daily build, if your co-workers don't update their local code frequently, you may not realise anything's broken for some time. More importantly, if you have a daily build process you can incorporate automated testing into it. If you write comprehensive unit tests for your code, having them run as part of the daily build is another great way to get early notice of undetected regression bugs.
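As a minimal sketch of the kind of unit test that would run in such a daily build, here is a toy example (desensitize_menu is a made-up stand-in for real application code, echoing the menu-desensitizing bug mentioned earlier in the thread):

```python
import unittest

# Hypothetical application code under test.
def desensitize_menu(items, enabled):
    """Return only the menu items that should stay active."""
    return [item for item in items if enabled.get(item, False)]

class MenuTests(unittest.TestCase):
    def test_disabled_items_are_hidden(self):
        # Regression guard: a disabled item must never show up as active.
        active = desensitize_menu(["Open", "Save"],
                                  {"Open": True, "Save": False})
        self.assertEqual(active, ["Open"])

if __name__ == "__main__":
    # The daily build script would invoke this; any failure flags the
    # build as broken.
    unittest.main(argv=["menu_tests"], exit=False)
```

A failing assertion here shows up in the build log the same morning, instead of surfacing as a mystery bug weeks later.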
Friday, September 24, 2004
It means that you write smaller chunks of code and check them in as you go. Therefore, if you have to do a diff between the codebase and your code, there should be only small differences and if there's now a bug, it's highly limited on where it could be.
It's like if you're going on a long drive. You may only check the map once, but if it's a new route with shoddy/unclear directions, you'll check more often. Then you can realize that you missed your turn after 5 minutes instead of after an hour.... like my boss did yesterday.
Stop talking about me behind my back and get back to work!!!
Friday, September 24, 2004
I've normally heard "zero defect + daily build" used in conjunction with some sort of "test-driven-like" process, where the build includes running all unit and integration tests. By integrating these, you reduce the re-testing effort.
One of the important things is that you can do all the unit testing you want, but some bugs only show up because of unanticipated interactions between modules or misinterpretations of interfaces. You won't find these until you check in and run integration testing. However, by their nature these are the hardest bugs to track down. So by making as few and as small changes as possible, you increase your chance of tracking down these bugs; they have to be related to the new code. This gives you a beach-head from which to try figuring out where the interaction is.
Above, I said "test-driven-like" process, because of one caveat. In full test-driven development, you start with a clean build (including tests). Then you introduce a new test for functionality that will be added, and re-run the build and tests to verify you no longer have a "zero defect" build. Then you add just enough code to satisfy that test. So unless you can get each test+feature accomplished in one day, you may not be able to get both daily check-in and build. However, the intent is to keep each of these test+code pairs a small and simple change, gradually building up the full functionality of a feature. So you should be able to get several of these done in a single day.
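One such red/green micro-cycle can be shown in a few lines. This is purely illustrative; sales_total is a hypothetical feature, not anything from a real project:

```python
# One red/green TDD micro-cycle, in miniature.

# Step 1 (red): write the failing test first, deliberately breaking
# the "zero defect" state before any feature code exists.
def test_sales_total_applies_tax():
    assert sales_total(100.0, tax_rate=0.08) == 108.0

# Step 2 (green): add just enough code to satisfy that one test.
def sales_total(subtotal, tax_rate):
    return round(subtotal * (1 + tax_rate), 2)

# Step 3: re-run the whole suite. Once it passes, you are back at
# zero defects and the small test+code pair is safe to check in.
test_sales_total_applies_tax()
```

Several such pairs fit comfortably in a day, which is what keeps daily check-in and TDD compatible.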
Of course, part of the benefit comes from intelligent use of configuration and change control, so you know what has been checked in and when, and what the changes are (or were supposed to be).
As with all things, an agile process means valuing the results over the process. Occasionally one finds a need to do major surgery. Agile seeks to minimize this through techniques such as refactoring. If you talk it over with the team, and all agree it's necessary, then abandon the daily build temporarily, but start it up again as soon as it makes sense.
I don't interpret it to strictly mean the application is 100% bug-free. I see it as keeping the checked-in code fully functional and of release quality. Focusing on incremental improvement, rather than taking an app and pulling-apart and putting-together for each major release.
It gives you the opportunity to continue with major-point releases, but gives the added benefit that major-point release dates can be moved easily, as business dictates.
Tuesday, September 28, 2004
>>Does it mean I never get to go home until I've killed all the bugs in my new code
Or you can "kill all the bugs" when the deadline comes.
Wednesday, September 29, 2004
If you check in code that doesn't compile, nobody else can compile their code, so all code checked in should at minimum compile. It's also good not to break any required functionality, but new features don't really need to be bug-free. We have 5 people on our team, and we try not to keep certain files checked out for a long time, like the resource file (which can't be easily merged). Otherwise, a good source code control tool should take care of conflicts. Ours does, but then again, that's the product I work on. :-)
If you have checked in something that prevents the system from compiling, then maybe you do stay until you fix it :-) That's because breaking compilation can really slow down the rest of the team.
Regarding bugs, I think it makes sense to classify them into two groups: ones we will fix, and ones we won't fix (ever). The latter group is very small, and basically just includes minor cosmetic things. For the bugs in the first group, there are definite advantages to fixing them as you go. In particular, if you fix bugs as soon as they are found, then you get a much better indication of your true progress. As soon as you are tempted to fix bugs "later", in order to make "faster" progress, it becomes much harder to predict your true finish date -- the date where you finish all your features AND fix the backlog of bugs.
Friday, October 01, 2004
SCRUM uses the 24 hour rule on issues. If you run into an unanticipated issue, it gets resolved within 24 hours in one of three ways:
1) you get an answer and incorporate it.
2) you come up with a work-around, and add the issue to the post-sprint backlog, or
3) if it's a real killer, you cancel the current sprint, and regroup.
The idea is to avoid having the short increment/sprint thrown way off schedule, or shooting for an impossible schedule due to unforeseeable issues.
This is not just for "puff of smoke" errors, but includes "handles 80% of cases successfully, but not out-of-state sales" and "he says this way, but she says that way." Zero defect can cover a lot of ground. A "work around" may be to assume a flat 8% sales tax for now, make sure this is nicely modularized, and expand on this in a future iteration.
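The "nicely modularized" part of that work-around can be made concrete with a short sketch. The 8% figure comes from the example above; the function names are illustrative, not from any real codebase:

```python
# Interim work-around, isolated behind one small function so a future
# iteration can replace it without touching any callers.

FLAT_TAX_RATE = 0.08  # interim assumption; out-of-state rules come later

def sales_tax(amount, state=None):
    """Compute sales tax. For now, a flat rate regardless of state."""
    # Future iteration: look up the real rate for `state` here.
    return round(amount * FLAT_TAX_RATE, 2)

def invoice_total(amount, state=None):
    return round(amount + sales_tax(amount, state), 2)
```

Because every caller goes through sales_tax(), expanding it to handle out-of-state sales in a later iteration is a one-function change, which is exactly what makes the 24-hour work-around safe.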
If the bug will prevent you from building, testing and demonstrating the software, then you need to address it somehow. This can simply mean pulling the offending code out of the build (you do use some sort of version control, don't you?). Or you can fix it. If using test-driven development, you can pull the complaining test. Either way, this should be a team (including stakeholders) decision, particularly if this affects the scope of the current increment.
Unless you're preparing a major release, you need to define "zero defect" in terms of the goals of your current increment.
This topic is archived. No further replies will be accepted.
Powered by FogBugz