The Standard of Code Review
- Approve the CL as long as it improves the overall health of the code base, even if it is not perfect.
- If it is hard to get CLs approved, developers will be less motivated to make improvements.
- Don’t require the author to polish every tiny piece of a CL before granting approval.
- This is the senior principle among all of the code review guidelines.
- Just keep in mind that if your comment is purely educational, but not critical to meeting the standards described in this document, prefix it with “Nit: “ or otherwise indicate that it’s not mandatory for the author to resolve it in this CL.
- Any purely style point (whitespace, etc.) that is not in the style guide is a matter of personal preference. The style should be consistent with what is there. If there is no previous style, accept the author’s.
- If the author can demonstrate (either through data or based on solid engineering principles) that several approaches are equally valid, then the reviewer should accept the preference of the author. Otherwise the choice is dictated by standard principles of software design.
- You can validate the CL if you want—the time when it’s most important for a reviewer to check a CL’s behavior is when it has a user-facing impact, such as a UI change. It’s hard to understand how some changes will impact a user when you’re just reading the code. For changes like that, you can have the developer give you a demo of the functionality if it’s too inconvenient to patch in the CL and try it yourself.
- Encourage developers to solve the problem they know needs to be solved now, not the problem that the developer speculates might need to be solved in the future. The future problem should be solved once it arrives and you can see its actual shape and requirements in the physical universe.
- Compliment people during code review, especially when they give a good answer or do something well.
- At Google, we optimize for the speed at which a team of developers can produce a product together, as opposed to optimizing for the speed at which an individual developer can write code.
- If you are in the middle of a focused task, such as writing code, don’t interrupt yourself to do a code review.
- Be sure that you are always making comments about the code and never making comments about the developer. You don’t always have to follow this practice, but you should definitely use it when saying something that might otherwise be upsetting or contentious. For example:
- Bad: “Why did you use threads here when there’s obviously no benefit to be gained from concurrency?”
- Good: “The concurrency model here is adding complexity to the system without any actual performance benefit that I can see. Because there’s no performance benefit, it’s best for this code to be single-threaded instead of using multiple threads.”
- If you ask a developer to explain a piece of code that you don’t understand, that should usually result in them rewriting the code more clearly. Occasionally, adding a comment in the code is also an appropriate response, as long as it’s not just explaining overly complex code.
- Explanations written only in the code review tool are not helpful to future code readers. They are acceptable only in a few circumstances, such as when you are reviewing an area you are not very familiar with and the developer explains something that normal readers of the code would have already known.
- Experience shows that as more time passes after a developer writes the original CL, the less likely this clean up is to happen. In fact, usually unless the developer does the clean up immediately after the present CL, it never happens. This isn’t because developers are irresponsible, but because they have a lot of work to do and the cleanup gets lost or forgotten in the press of other work. Thus, it is usually best to insist that the developer clean up their CL now, before the code is in the codebase and “done.” Letting people “clean things up later” is a common way for codebases to degenerate.
- If you previously had fairly lax code reviews and you switch to having strict reviews, some developers will complain very loudly. Improving the speed of your code reviews usually causes these complaints to fade away.
- Sometimes it can take months for these complaints to fade away, but eventually developers tend to see the value of strict code reviews as they see what great code they help generate. Sometimes the loudest protesters even become your strongest supporters once something happens that causes them to really see the value you’re adding by being strict.
- Review priorities, in order: message formats/schemas, then tests, then interfaces, then implementation.
- Claims that a change improves performance must be backed by micro-benchmarks (see the sketch after this list).
- Reviewers own and are responsible for the code they are reviewing.
- Favor the author’s approach when the author can support it with qualitative or quantitative evidence.
- If a CL is clearly going off track, send that feedback immediately rather than waiting for a full review.
- Review a CL as soon as you are not in the middle of a focused task, and in any case within one business day.
- Most deadlines are soft; favor code quality over meeting a soft deadline. Hard deadlines include:
- Contractual commitments
- Getting a product to market ahead of competitors
- An important conference (debatable)
- You can reject a CL simply because it is too large.
- Keep refactoring in a separate CL whenever possible.
- Small cleanups can go in the same CL.
- A refactoring CL may precede the implementation CL.
- If a review comment is not constructive or polite, explain in person what you wish had been done differently; if that isn’t possible, use a private email to explain it kindly.
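
As an illustration of the micro-benchmark point above, here is a minimal sketch in Python using the standard timeit module. The two join functions are hypothetical stand-ins for an old and a new implementation; they are not from any real CL, and the exact form of evidence a reviewer asks for may differ.

```python
# Hypothetical micro-benchmark backing a "this version is faster" claim.
# join_with_concat and join_with_builtin are illustrative stand-ins only.
import timeit

def join_with_concat(parts):
    out = ""
    for p in parts:
        out += p           # repeated string concatenation
    return out

def join_with_builtin(parts):
    return "".join(parts)  # single join call

parts = ["x"] * 10_000

for fn in (join_with_concat, join_with_builtin):
    seconds = timeit.timeit(lambda: fn(parts), number=200)
    print(f"{fn.__name__}: {seconds:.3f}s for 200 calls")
```
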
Common phrases in toxic comments
Excluding second person pronouns, which are the most common symptom.
- it is
- that is
- going to
- trying to
- to do
- do not
- it is not
- this is
- in the
- to be
- have to
- there is no
- to do with
- the problem is
- want to
- of these
- of our
- is to
- if we
- depend on
- we use
- the cl
- this case
- is of
- the project
- the linter
- read the
- as long as
- just wanted to
- we don’t want
Push back: the perception of unnecessary interpersonal conflict while a reviewer is blocking a change request
- Rounds of a review
- Active reviewing time
- not wall-clock time of start-to-finish reviews, but based on time specifically spent working on the code review and related actions.
- Active shepherding time
- time the author spent actively viewing, responding to reviewer comments, or working on the selected CR, between requesting the code review and actually merging the change into the code base.
- Thresholds for flagging potential push back (a sketch applying them follows this list):
- 48 minutes for active reviewing time
- 112 minutes for shepherding time
- 9 batches of comments for number of rounds of review
- Compared to very small code changes (fewer than 10 lines), changes of 250-999 lines were 4.8 times more likely to leave authors feeling that more change was requested than necessary, and changes of 10-49 lines were 4.7 times more likely to leave authors reporting negative feelings about submitting a similar change in the future.
- The number of reviewers is not predictive of any of our push back feelings.
- Some of the most commonly mentioned associated factors were delays (65), references to code style (46) and documentation (39), the tone in which feedback was delivered (43), familiarity with the codebase (35), and the CR being part of a readability review (34). The most common emergent themes which we classified as consequences were inefficient use of time (31), negative emotions (29), and offline conversations (28).
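
As an illustration only, the flagging thresholds above can be expressed as a simple heuristic. This is a minimal sketch that assumes the three metrics are already computed per review; the ReviewMetrics fields and the may_have_push_back function are hypothetical names, not part of any tool described in the study.

```python
# Sketch: flag a review for potential push back when any metric exceeds
# the thresholds reported above. Names and structure are illustrative only.
from dataclasses import dataclass

MAX_ACTIVE_REVIEW_MINUTES = 48   # active reviewing time threshold
MAX_SHEPHERDING_MINUTES = 112    # active shepherding time threshold
MAX_COMMENT_ROUNDS = 9           # batches of reviewer comments threshold

@dataclass
class ReviewMetrics:
    active_review_minutes: float  # reviewer time actively spent on the review
    shepherding_minutes: float    # author time spent shepherding the change
    comment_rounds: int           # batches of reviewer comments

def may_have_push_back(m: ReviewMetrics) -> bool:
    """Return True if any metric exceeds its reported threshold."""
    return (m.active_review_minutes > MAX_ACTIVE_REVIEW_MINUTES
            or m.shepherding_minutes > MAX_SHEPHERDING_MINUTES
            or m.comment_rounds > MAX_COMMENT_ROUNDS)

# Example: 60 minutes of active reviewing time exceeds the 48-minute threshold.
print(may_have_push_back(ReviewMetrics(60, 90, 3)))  # True
```
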
Code review at Google
- Expectations for code review at Google do not center around problem solving. Reviewing was introduced at Google to ensure code readability and maintainability. Today’s developers also perceive this educational aspect, in addition to maintaining norms, tracking history, gate-keeping, and accident prevention. Defect finding is welcomed but not the only focus.
- This does not align with the earlier view that code review is a group problem-solving activity
- Usually, only one reviewer is required to satisfy the aforementioned requirements of ownership and readability.
- This does not align with the previous finding that two reviewers find an optimal number of defects
- Over 80% of all changes involve at most one iteration of resolving comments.
- We find that the median developer authors about 3 changes a week, and 80 percent of authors make fewer than 7 changes a week. Similarly, the median for changes reviewed by developers per week is 4, and 80 percent of reviewers review fewer than 10 changes a week. In terms of speed, we find that developers have to wait for initial feedback on their change a median time of under an hour for small changes and about 5 hours for very large changes. The overall (all code sizes) median latency for the entire review process is under 4 hours. This is significantly lower than the median time to approval reported by Rigby and Bird [33], which is 17.5 hours for AMD, 15.7 hours for Chrome OS and 14.7, 19.8, and 18.9 hours for the three Microsoft projects. Another study found the median time to approval at Microsoft to be 24 hours
- At Google, over 35% of the changes under consideration modify only a single file and about 90% modify fewer than 10 files. Over 10% of changes modify only a single line of code, and the median number of lines modified is 24. The median change size is significantly lower than reported by Rigby and Bird for companies such as AMD (44 lines), Lucent (263 lines), and Bing, Office and SQL Server at Microsoft (somewhere between those boundaries), but in line for change sizes in open source projects
- fewer than 25% of changes have more than one reviewer, and over 99% have at most five reviewers with a median reviewer count of 1. Larger changes tend to have more reviewers on average. However, even very large changes on average require fewer than two reviewers.
- Developers spend an average of 3.2 hours a week (median 2.6 hours) reviewing changes. This is low compared to the 6.4 hours/week of self-reported review time for OSS projects
- The median change size in OSS projects varies from 11 to 32 lines changed
- During the week, 70% of changes are committed less than 24 hours after they are mailed out for an initial review.