The case against code reviews

Originally published May 2019

Code reviews are a de facto part of the development process for most teams, and most developers would argue they're best practice too. But I'm increasingly against them. While code reviews might spot a few bugs, I believe their wider impact is ultimately negative.

Code reviews are unhelpful on two fronts:

  • They slow the flow of code
  • They encourage hierarchical and unhealthy team dynamics

I'd argue they add minimal value to code quality too.

Code reviews prevent fast feedback

I'm a huge advocate for TBD (trunk-based development), CI/CD (continuous integration/delivery) and DevOps culture. One of the primary characteristics of these practices is fast feedback - my team currently aims to get code fully tested and into production within 20 minutes of a push. That's a full test suite and a deployment, running automatically on every push.
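For concreteness, here's a minimal sketch of what that kind of pipeline might look like - assuming GitHub Actions and a Node.js service purely for illustration, since nothing here depends on a particular CI tool, test framework or deployment target. Every push to trunk runs the full test suite and, only if it passes, deploys automatically:

    # Hypothetical push-triggered pipeline (GitHub Actions syntax): the only gate
    # between a push to trunk and production is the automated test suite.
    name: test-and-deploy
    on:
      push:
        branches: [main]               # trunk-based development: every push to trunk

    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
          - run: npm ci                # install dependencies
          - run: npm test              # full automated test suite

      deploy:
        needs: test                    # deploy runs only if the tests pass
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/deploy.sh   # hypothetical deploy step - substitute your own

The important property is that nothing in this flow waits for a human: push, test, deploy.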

If you put a code review into this process, that 20-minute feedback cycle becomes many times longer. And if the code review blocks any part of the test automation, you're preventing those tests from "shifting left" - losing much of the value associated with CI and TBD practices.

I've never encountered a team that reliably carries out high-quality code reviews within four hours of a request, and I've seen plenty of PRs that needed a couple of days or more, plus multiple prompts, before the review was done and the PR merged. This makes the overall pipeline unacceptably slow. In turn, it encourages people to batch up larger pieces of work. So code reviews actively discourage "small, safe" changes.

Code reviews act against healthy team dynamics

Code reviews create an inherently hierarchical team dynamic and propagate the idea of stage-gates in the delivery process.

I encourage senior developers and tech leads to take a coaching approach as much as possible - for example, a review comment might say "I'm not sure I like this variable name - can you think of a shorter, more descriptive name for it?" rather than "this variable needs to be named foo instead".

Nonetheless, the act of adding code reviews to the process is effectively saying "someone more senior than you needs to approve your code before it can be added to the codebase" and sets the wrong tone for the team. I say this as someone who has led many junior-weighted teams. It undermines the culture of trust, while taking ownership and responsibility away from the developers who are delivering the code.

It's hard not to see code reviews as criticism - they kind of are - so to keep a positive team environment, a team working with code reviews must work hard to reinforce psychological safety. That's difficult given the hierarchical dynamic code reviews create. Just as junior developers get held back from taking ownership, senior developers are put under pressure to be "perfect" and all-knowing. It is possible to have a team with code reviews and high psychological safety; it's just not obvious how.

And if you have a team with an unhealthy culture, code reviews become a place where toxic power dynamics can be used to undermine and criticise developers in a way that, on the surface, appears "objective". The traditional view that the senior developer has both the right and the obligation to cast judgement on the code of others is so ingrained in the role that it has become an integral part of software culture.

Code reviews don't improve code quality

Code reviews create a feeling of safety - which is why getting rid of them feels like heresy. But feelings of safety can be unhelpful. Having a safety net makes it more acceptable to think "I don't need to worry about whether this is properly tested, because any problems will be caught in the code review process..."

That, combined with the slow feedback and the inclination towards larger changes mentioned above, creates a dynamic where changes become slower, bigger, less safe and less well tested. None of this is conducive to high code quality.

Code reviews are often rushed, or done at times of low energy and concentration. They're frequently done without the full context of the problem being solved or the thought process along the way, and sometimes without in-depth knowledge of the frameworks or libraries being used (which doesn't completely remove their usefulness - it just diminishes it).

I've seen PRs bounce back and forth for days as developers quibble over minor details, caught up in discussions about semantics and design patterns, while the real-life problem that the slightly imperfect but working code fixes remains unfixed. In theory, a relatively stable team should see the code review burden diminish over time, but I've never actually seen that happen: as a way to learn better coding standards, code reviews are very slow.

There is a better way...

On my latest team we've had the good fortune to practise pair programming by default. Pair programming mostly makes code reviews obsolete - it's effectively a code review in real time. Good pair programming isn't about trying to get perfect code first time, but about building good judgement about when to get it right first and when to get it working first. A traditional code review can only review the outputs; pair programming can also review the process.

When points are raised during pair programming, they come up in a much more forward-looking and conversational way: you're collaborating to write good code together, first time. There's no expectation that the driver gets every line right in the moment, on the fly - and the navigator is there to help catch points of polish and foresight, not just to act as a final backstop against bad choices. (That's not to say pair programming isn't subject to its own challenges and stresses around power dynamics - but that's a topic for another post.)

A different way to carry out code reviews

When the test automation pipeline is the only thing standing between the developer and production, work gets delivered in smaller batches and with stronger incentives to be well tested. But we've adopted post-production code reviews too: reviewing working code that's already in production to consider its design choices. The interesting effect has been to encourage refactoring. We're prioritising working code over supposedly perfect (but potentially still buggy) code, and that in itself sends a clear message to the team: we don't expect you to get this perfect first time, done really is better than perfect, and we'll keep improving even after delivery.

Code reviews are such an expected part of software development that doing without them feels highly controversial. But assessing whether your team gets real value from code reviews is well worth doing. Why not discuss it in your next team retro?