This is an excerpt from an email I wrote to my colleagues at Mozilla a couple of weeks back.
Aside from the occasional gripe about how awful it was to land a particular patch, I rarely hear anyone talk about code review.
What processes do people in our team use to decide "I'm going to hit the green button"?
I hope people will share their thoughts. Besides being curious about how others approach code review, I have two self-serving goals. First, I want to become both faster and more thorough when performing reviews. Second, I want to know what I should be doing as a developer to ensure my code is reviewed quickly and with minimal fuss.
- How do you decide what level of scrutiny needs to be given to a patch?
- How do you gain an understanding of a complex patch and how it fits into the existing code base?
- How do you ensure localized changes do not cause unintended side effects elsewhere? Similarly, how do you ensure conceptual integrity?
- How do you give feedback, solicit changes, and ask for clarification?
- Do you ever add a commit to somebody else's patch?
- Are different standards applied to external contributors?
- Finally, what other heuristics do you use to decide whether to r+?
Since I'm asking, I'll say my bit first.
To determine the level of scrutiny
Is the dev someone I have worked with before and trust? I'll inherently trust a contribution from a colleague I work with every day more than one from a first-time contributor. Next, I look at how large the patch is. Large patches (which I am frequently guilty of) scare me. Changes to core pieces of functionality scare me. Changes to historically problematic areas raise my awareness. One-line CSS changes generally do not. As for the gray area in between, I have no idea how to quantify it.
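The size/trust heuristic above could be sketched as a tiny triage function. The thresholds and inputs here are invented for illustration only, not anything we actually use:

```python
def scrutiny_level(lines_changed, touches_core, author_trusted):
    """Rough triage of how much review scrutiny a patch deserves.

    The thresholds are made up for illustration -- tune to taste.
    """
    if touches_core or lines_changed > 500:
        return "high"    # core functionality or a large patch: read everything
    if lines_changed > 50 and not author_trusted:
        return "medium"  # mid-sized patch from someone new: careful pass
    return "low"         # small change from a trusted colleague: quick pass

# A one-line CSS tweak from a daily colleague vs. a big core change:
print(scrutiny_level(1, touches_core=False, author_trusted=True))    # low
print(scrutiny_level(800, touches_core=True, author_trusted=False))  # high
```

The point isn't that anyone runs code like this; it's that the inputs (size, blast radius, trust) are the knobs, and the gray area is everything the thresholds can't capture.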
To gain an understanding of a complex patch
I take a multi-phase approach. First, I scan the diff to see what code has changed, not yet trying to fit things together. Next, I try to pick out names of modules/functions that have changed. I then try to fit logical units together. After that, I work from the outside in, seeing how the patch interfaces with existing code, and finally, how the new parts fit together.
Ensuring localized changes do not cause unintended side effects
I rely on my own knowledge of the system, thinking through potential side effects, and hope the tests are sufficient to cover things I have not thought of. This is the portion I struggle with the most.
How I give feedback
I try to approach most patches as a multi-stage process, and how I give feedback depends on the stage of the patch. If it is an early-stage WIP (Work In Progress), I generally try to ensure the concept is sound and that the code is cohesive and fits in conceptually, but not much else. For mid-to-late-stage WIPs, I ask for code/name cleanup, comments, sufficient test coverage, and style nits. Late-stage WIPs are where I am the most pedantic. Final review is usually pretty easy because the author and I have gone over things several times.
I try to be direct but helpful with the feedback. I'm not afraid of offering suggestions or alternate approaches. If I am asking for clarification, I try to explain what I think the code does to see if I am on the right track.
Adding commits to other people's patches
Yes. Most frequently, I add tests. I rarely modify core functionality of somebody's patch unless the author and I have discussed it beforehand. I sometimes feel guilty about this, sometimes not.
How I handle external contributors
I try to do whatever it takes to land an external contributor's patch. The most important thing is to review early and often. Landing a contributor's patch quickly leads to repeat contributors. A contributor whose patch has sat for a month probably isn't coming back. Sometimes the contributor needs to be educated about the process and standards, but really, this is the same process used when onboarding a new employee.
Other random heuristics I use
- Does the code do what the author says it does?
- Does the patch address a real problem/need?
- Does the functionality duplicate functionality already found elsewhere? If not, is there similar functionality that can be generalized?
- Have pdehaan, jrgm, jbonacci, or edwong weighed in? If so, listen carefully. They have saved our bacon too many times to count.
- Are existing tests modified? If so, I give the patch more scrutiny than a patch that only adds new tests.
- Are new configuration items added that differ between prod and local dev? If so, Ops needs to be involved.
- Are there any major performance regressions?
- Does the patch use a large external client-side library? If so, can we recreate only the subset of functionality that we need?
- Is the styling "close enough"? I'm not super pedantic about styling, but I like a general feel of consistency.
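The modified-vs-new-tests heuristic is one of the few on this list that's mechanical enough to sketch. Here's a hypothetical helper that splits test-file changes out of `git diff --name-status` output; the assumption that tests live under a `tests/` directory is mine, not a project convention from the email:

```python
def flag_test_changes(name_status_output):
    """Split changed test files into modified vs. newly added.

    Expects the text produced by `git diff --name-status`: a status
    letter, a tab, then the path. Assumes tests live under `tests/`.
    """
    modified, added = [], []
    for line in name_status_output.strip().splitlines():
        status, path = line.split("\t", 1)
        if "tests/" not in path:
            continue
        if status == "M":
            modified.append(path)  # existing test changed: extra scrutiny
        elif status == "A":
            added.append(path)     # new test: the usual pass

    return modified, added

sample = "M\ttests/auth.js\nA\ttests/signup.js\nM\tlib/auth.js\n"
mod, new = flag_test_changes(sample)
print(mod)  # ['tests/auth.js']
print(new)  # ['tests/signup.js']
```

A modified test is a flag because it usually means the patch changed behavior the old test was pinning down, and that deserves a closer look than a patch that only adds coverage.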
What do you do?