A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.
I'd like to get your opinion on code review processes and tools. I assume some of you are working in big globally dispersed teams developing products that share configurable components.
I see it as error-prone when developers of one product work on a shared component, even if it is just a bug fix, without having someone else verify the modification. Often it breaks another product; if you are lucky, it will be detected in the daily build or at least by QA in the release phase, but if not, the user will find out, and we know what that means.
My impression is that voluntary code reviews don't work these days, as all developers are under a heavy workload even without code reviews.
Could you please share your opinions about code review processes? Do you do them? What processes or tools are you using? What was the performance benefit, if there was one at all?
Here are some tips on how we run through code reviews.
1). Use NotePad2 (http://www.flos-freeware.ch/notepad2.html) to print out the code with line numbers.
2). Pass out the printouts to the developers maybe 2-3 days before the actual code review meeting, so they have enough time to review the code.
3). Create a table of contents page, so everyone doesn't get lost and they know what code will be covered.
4). During the code review, have a moderator go through the code asking, "Any problems with lines xxx-xxx?...okay page 2, Any problems with lines xxx-xxx?" (Yes, it does get monotonous, but you try to spice it up a bit) :-)
Hope this helps,
Monday, May 22, 2006
jdanylko gives some solid advice, to which I'll add:
1) Automate as much as possible. There are lots of code inspection type tools that check for things like compliance with coding standards and conventions, etc. One thing that really bogs down a code review is "discussion" over variable names and comment blocks ;) There are also lint-like tools that can check code for common errors. The submitter must run their code through these tools prior to distribution for review. On a recent Java project we used CheckStyle and JLint for this.
2) You can start with an informal off-line review; just distribute the review packet to the reviewers with a deadline at least three days from the publication date. Have reviewers send comments/corrections directly to the author. The author can then call a meeting to discuss if (a) they disagree with a comment/correction, (b) two or more reviewers gave conflicting comments/corrections.
3) If you're doing a formal review with a group in a room, you should have a Facilitator/Moderator, Scribe, Author and at least two Reviewers. The Facilitator doesn't review; he/she's there to keep the meeting on track. The Scribe keeps track of issues and action items.
4) Keep track of how long each participant spends on the process in preparation, review, and post-review activities. Also track how many issues are found, and their severity.
5) You can't review everything. If you have a feel for where the most critical code is, review those areas. If you can get a report from your version control system on checkin frequency, review the files that have a high rate of checkins.
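The automated pre-review checks mentioned in point 1 can be illustrated with a toy script. This is not what CheckStyle or JLint actually do (they check far more, and are configured differently); it is just a minimal sketch of the idea that mechanical style rules should be enforced by a tool before humans ever see the code, so the review itself can skip arguments over formatting:

```python
# Toy pre-review style gate: flag two sample rule violations.
# The rules (max line length, no tab indentation) are arbitrary
# examples, not any real tool's rule set.
MAX_LEN = 100

def style_issues(source: str):
    """Return a list of (line_number, rule) violations."""
    issues = []
    for n, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LEN:
            issues.append((n, "line too long"))
        if line.startswith("\t"):
            issues.append((n, "tab indentation"))
    return issues

code = "def f():\n\treturn 1\n"
print(style_issues(code))  # [(2, 'tab indentation')]
```

In practice the submitter would run the real checker over their changes and fix everything it reports before distributing the review packet.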
> your opinions about code review processes?
When I was writing a telco server in the 90s for an ISV selling shrinkwrap, software reliability was most important. I'd review the work of new hires. I (unlike my boss) would tell them that doing it well was more important than doing it quickly (after they'd learned to do it well, then they could start to do it more quickly). After a few months they would have learned what I wanted, I'd stop finding flaws in the code I was reviewing, and so I'd stop reviewing that person's work.
Thanks a lot so far.
I'm quite familiar with the formal inspection process, as I use it for design reviews and documentation reviews.
I know that many developers take coding style personally; that's why we have established one common style, to avoid ratholing on variable names ;-)
The idea of using the version control system to show the most active areas, identifying hotspots for the inspection, was quite interesting.
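That hotspot idea can be sketched in a few lines. The sketch below assumes a git-style `git log --name-only --pretty=format:` dump (any VCS whose log lists the files changed per commit would work the same way); the file names are made up for illustration:

```python
from collections import Counter

def checkin_hotspots(log: str, top: int = 5):
    """Count how often each file appears in a log dump that lists
    one changed file per line, and return the most-changed files."""
    files = [line.strip() for line in log.splitlines() if line.strip()]
    return Counter(files).most_common(top)

# Hypothetical output of `git log --name-only --pretty=format:`
sample_log = """
src/shared/config.c
src/shared/config.c
src/product_a/ui.c
src/shared/config.c
src/product_b/report.c
"""
print(checkin_hotspots(sample_log, top=1))  # [('src/shared/config.c', 3)]
```

Files with the highest churn are reasonable candidates for closer review, since frequent change correlates with both importance and risk.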
My experience is that problems often occur in parts that are touched very rarely, like interfaces. Developers tend to do stupid things like adding IDs in the middle of an enum or misusing interfaces.
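The enum pitfall is worth spelling out, since it is exactly the kind of thing a reviewer catches and a compiler doesn't. A minimal sketch (the message-type names are invented for illustration): if IDs in an interface enum are positional, inserting a member in the middle silently shifts every later ID, breaking any peer or stored data that uses the old numbering.

```python
from enum import IntEnum

# Version 1 of a hypothetical wire protocol.
class MsgType(IntEnum):
    CONNECT = 1
    DATA = 2
    DISCONNECT = 3

# Version 2: someone inserts KEEPALIVE "in the middle" and renumbers.
class MsgTypeV2(IntEnum):
    CONNECT = 1
    KEEPALIVE = 2   # new member takes over value 2...
    DATA = 3        # ...and every later ID silently shifts.
    DISCONNECT = 4

# An old peer sending DATA as 2 is now decoded as KEEPALIVE.
assert MsgType.DATA.value == MsgTypeV2.KEEPALIVE.value
```

The safe change is to append new members at the end (or assign explicit, never-reused values), which is precisely the kind of rule a review enforces on rarely-touched interface code.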
The best would be an integration between the bug tracking tool and the version control system, to identify the most common errors and the modules that caused the most trouble.
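A minimal sketch of such an integration, assuming commits reference tracker issues with a made-up "BUG-123" convention in their messages (the commit data below is invented): count, per file, how many bug-fix commits touched it, and the files with the worst records become review priorities.

```python
import re
from collections import Counter

def bugfix_hotspots(log_entries):
    """Given (commit_message, changed_files) pairs, count how many
    bug-fix commits touched each file. A commit counts as a bug fix
    if its message references a tracker issue like 'BUG-123'
    (a hypothetical convention, not any particular tool's format)."""
    bug_ref = re.compile(r"\bBUG-\d+\b")
    counts = Counter()
    for message, files in log_entries:
        if bug_ref.search(message):
            counts.update(files)
    return counts.most_common()

entries = [
    ("BUG-101: fix enum ordering in shared interface", ["shared/iface.h"]),
    ("add new report page", ["product_b/report.c"]),
    ("BUG-102: null check in config parser",
     ["shared/config.c", "shared/iface.h"]),
]
print(bugfix_hotspots(entries))
# [('shared/iface.h', 2), ('shared/config.c', 1)]
```

With a few months of data this also answers the other question: which *categories* of error recur, and therefore what the coding standard or training should target.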
What I'm looking for is a tool/process that eases the informal code review. It must be accepted by everyone, and the cost/value ratio must be good.
Monday, May 22, 2006
"My impression is that voluntary code reviews don't work these days as all developers are under heavy workload, even without code reviews."
You are right about that. Code review is, in my opinion, an excellent and cost-effective practice to reduce bugs and increase maintainability.
However, code review, like any other project task, is not free. So if management allocates zero budget in the project plan for code review, then code review will not occur. You get what you pay for. People aren't going to do it voluntarily, nor should they.
I'd recommend you educate management on the value of code review and for future projects ensure that time is budgeted for this worthwhile activity.
Tuesday, May 23, 2006
"People aren't going to do it [code review] voluntarily, nor should they."
Yes they should. All code should be reviewed prior to checking it in to source control (personal branches maybe excepted if they're not merged into the main source code or get reviewed when that happens). The formal type is overkill for many projects, in which case a simple process is fine (for instance, the person who wrote the code explains it to one other developer, while sitting in front of the computer, and answers any questions that come up and makes changes or records fixes to make).
Code reviews are such an easy way to make such a big improvement to your code that you'd be crazy not to request one, and also crazy not to do one for your co-workers. Take responsibility for your work. But maybe I'm just an idealist. :)
Exception Guy -- yes, all that you've said is true, with a big IF. IF you can keep the manager's paws out of the code review, IF people prepare before the review, and IF people can be convinced not to insist on the One Best Way (Their Way) in a code review.
All very very difficult tendencies to defeat. And any one of them can make a code review a schedule buster.
Tuesday, May 23, 2006
Apart from what others have said, I have some more tips:
1. If you have a predecessor document like design docs, coding standards, or other standards (e.g. the HTTP spec if you are writing a web server), bring them into the review to check whether the code meets the design, interfaces, etc.
2. Don't ever discuss fixes to issues you have found during the meeting. Note down the issue, assign it to the developer, let him fix and check in the code, mark the issue as fixed, and then have the moderator verify the fix by doing a diff.
3. Don't make it personal.
4. If you find that an issue was caused by a defect in the spec or design, investigate how the spec and design were approved in the first place, and why they were wrong.
If you have enough bandwidth you might start noting down what was done to fix each issue, what its impact would have been, and a severity. Gather these metrics over a period of time. If it turns out that most fixes were about proper initialization of variables, then you know to add that to your coding standards, or to give training to your devs on common mistakes, etc. You can also find systemic problems this way. This is called continuous improvement in CMM.
Wednesday, May 24, 2006
> IF people can be convinced not to insist on the One Best Way
There are two kinds of reviewer:
1) Team leader: if the team leader wants something changed, then change it.
2) Peer: peers make 'suggestions' for improvement - having heard the suggestion, it is then the author's privilege to decide whether (and how) to act on each suggestion.
We have a bug tracking (well, issue tracking really) system where code review is part of the process.
1. You get an issue assigned to you
2. You program what needs to be done.
3. You send the issue to another developer on your team. In the system he can then see *the diff* of what you did on that issue. That way he doesn't have to look at everything, just what you did on that particular issue. He can then enter comments and line numbers. Then he sends it back to you as either 'review - fix' or 'review - approved'. You then fix it according to the comments (or put in comments to explain why you did things that way), and send it back to 'review' until you get it back as 'review - approved'.
That seems to work well for us. We catch lots of subtle bugs and readability problems that way. The time you spend reviewing an issue is then put on that issue in the timetracking system, so it's understood that each issue will have some code review time spent on it.
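The review loop described above can be sketched as a small state machine. The state and action names here are my own invention, not taken from any particular issue tracker:

```python
from enum import Enum, auto

# Hypothetical issue states in the review workflow described above.
class State(Enum):
    IN_PROGRESS = auto()   # developer is working on the issue
    IN_REVIEW = auto()     # diff sent to another developer
    REVIEW_FIX = auto()    # reviewer asked for changes
    APPROVED = auto()      # reviewer signed off

# Legal transitions: (current state, action) -> next state.
TRANSITIONS = {
    (State.IN_PROGRESS, "send_for_review"): State.IN_REVIEW,
    (State.IN_REVIEW, "request_fixes"): State.REVIEW_FIX,
    (State.IN_REVIEW, "approve"): State.APPROVED,
    (State.REVIEW_FIX, "send_for_review"): State.IN_REVIEW,
}

def step(state, action):
    """Apply an action; raises KeyError for an illegal transition."""
    return TRANSITIONS[(state, action)]

# One round trip: review -> fixes requested -> re-review -> approved.
s = State.IN_PROGRESS
for action in ["send_for_review", "request_fixes",
               "send_for_review", "approve"]:
    s = step(s, action)
print(s)  # State.APPROVED
```

The important property is the cycle between review and fix: an issue can bounce back and forth any number of times, but can only close through an explicit approval.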
Wednesday, May 24, 2006
You propose that "peers make 'suggestions' for improvement - having heard the suggestion, it is then the author's privilege to decide whether (and how) to act on each suggestion."
I've used a different model, where the (sole) reviewer has to give permission to check in the change, since they're putting their name on it, literally and figuratively. If you go with the attitude that you've got to satisfy the reviewer's demands, then you'll be much less inclined to ignore their suggestions, and much more inclined to anticipate those suggestions and deal with them before the review.
Under this scheme, if a bug is found in code you've reviewed, you share the responsibility for it. In a good group, that means you and the author will acknowledge the mistake and analyze why it wasn't detected, and then someone will remedy it and all related issues the analysis revealed.
I understand what you're saying. I was taking it for granted that an author would fix any actual *bug*, if one was pointed out. The model I outlined was for a team of senior developers ... the kinds of changes that an author might legitimately decide not to act on are more like:
* Refactoring (reviewer thinks that the design could be a bit neater in some way)
* Scheduling (reviewer thinks that support for feature X would be better included in this check-in, whereas the author wants to defer implementing support for that feature until later)
Letting the decision-making vote default to the author helps to reduce 'gilding the lily': since it's the author who has to do the rework, and since the reviewer isn't the author's manager, the reviewer doesn't (shouldn't) have *too* much authority to dictate how the author will spend their time.
In practice, if the author and reviewer couldn't come to some consensus on a topic, then we'd usually bring a 3rd developer in on the discussion and then bow to the majority opinion.
It sounds like we're in agreement here:
Bugs introduced by the new code must get fixed before submission (this includes bugs that didn't used to get hit but do appear now). Existing bugs that were found at least get reported into the bug tracking system. Enhancements get done at the discretion of the author if they're small, or else get discussed with the team to decide if they're worth the time they'll take.
* Code reviews are a must-have for all software developers. That doesn't mean that they have to be long or involved, but the developer must expect to have their code reviewed as a basic practice. The presence of an effective code-review process with a fair, mentoring approach will *of itself* reduce the number of code-reviewable defects in your developers' code.
* Managers should keep out of code reviews (he said, having just agreed to do one because everyone else is busy). Management should no more be able to say 'don't do code reviews' than they should be able to say 'don't bother testing'. If you can't get that across, your organisation will not deliver quality software, and it's up to you whether you want to be part of that.
* Except for critical modules, n x people in a room going line by line is way too expensive. Treat the code review process as a training & mentoring exercise and, wherever possible, do it one-on-one.
* Use a good diff-tool to find the changes (when reviewing mods).
* Record the output of reviews and the defects found for later root-cause analysis & other metrics.
* Apply the same principles used in quality inspection in manufacturing these past 20 years: where you know or suspect an issue, inspect in detail; where you know that quality is usually acceptable, inspect in less detail; where you know that quality is usually impeccable, inspect on a sample basis. But never stop reviewing entirely.
* Always review patches and other fixes where the customer is already unhappy about the specific issue. Don't allow a second defect or a non-fix in these cases.
* Don't let the quality/process wonks tie you up in forms, checklists, inky-signoffs, etc. The process should be there because everyone knows that it is necessary, it shouldn't be a chore that 'they' impose.
* Don't get hung up on style. Unless the code is unreadable, don't allow it to be a blocker for releasing the code. The way to deal with this IMHO is to allow your style-fascist to do a style audit from time to time, outside of the flow of control into the build. That way he can have an occasional rant but isn't a bottleneck.
* Record errors and allow the developer to fix them. Don't stand over them or dictate fixes.
* If an error comes up, is agreed and isn't fixed at the re-review, that's a big deal.
This topic is archived. No further replies will be accepted.
Powered by FogBugz