Tips on reviewing code

When someone asks you (or assigns you 😬) to review code, it may look daunting at first. But no worries, you'll get used to it pretty quickly.

Here you'll find some quick tips that I use in most of my reviews.

review.ts
const codeReview = (): boolean => {
  throw new Error('Did you really check my code before approving this review?')
}

Comments

Comments in general have their pros and cons. For me, good comments explain why code was added and why a certain approach was taken. I agree that most code can be written without comments if you put great effort into naming your functions and splitting up code functionality. But in most cases, I think it's good practice to write down why the code exists (business perspective) and why you took approach A instead of approach B (developer perspective).

Calling out a developer for not writing these kinds of comments may seem like nitpicking, but it will help everyone in the long run. I've had many moments where I simply forgot why a certain approach was taken. If I had only written a comment back then (yes, I have sinned 🙈), I wouldn't have had to spend X minutes going through the code again to figure out the original reason.
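
To give an idea of what I mean, here's a made-up example. The Order type, the legacy system, and the two fetch functions are purely hypothetical, they only exist to show the two kinds of "why":

why-comments-example.ts
// Why this code exists (business perspective): orders placed before the
// 2020 migration still live in the legacy system, so we have to merge
// two sources to show a customer's full order history.
// Why this approach (developer perspective): we fetch both sources in
// parallel instead of sequentially, because the legacy API is slow and
// the two calls don't depend on each other.
const fetchAllOrders = async (customerId: string): Promise<Order[]> => {
  const [recent, legacy] = await Promise.all([
    fetchRecentOrders(customerId),
    fetchLegacyOrders(customerId),
  ]);
  return [...recent, ...legacy];
};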

Happy path / unhappy path

When reviewing code, it's good practice to look at both the happy and the unhappy path. When you're reading through the code, everything may seem logical. We have a tendency to only think of the happy path, as that's usually what the use case definition or requirement describes.

happy-unhappy-path-example.ts
const example = async () => {
  const dataA = await fetchDataA(); // What happens when this call fails or no data is present?
  const dataB = formatDataA(dataA); // What happens when this function gets data it can't use?
  await storeDataToDatabase(dataB); // What happens when this call fails?
}

An example function might be the following: "we fetch data A and use it to calculate data B, which will then be submitted to the database". However, what happens when the "fetch data A" call fails? How should the service react to it? Should there be a retry mechanism? Should it be logged? It's easy to forget these scenarios, as most of the time they aren't specified by the requirements. They are implicitly expected!

When we actively think about the unhappy path, we end up with a more robust solution.
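
As a rough sketch, the same example could handle the unhappy path explicitly. The logger and the decision to rethrow are just placeholders for whatever your service actually needs (retry, fallback, alerting, ...):

handling-the-unhappy-path.ts
const example = async (): Promise<void> => {
  let dataA;
  try {
    dataA = await fetchDataA();
  } catch (error) {
    // Unhappy path: decide explicitly how the service should react.
    logger.error('Failed to fetch data A', { error });
    throw error; // or retry, or fall back to a sensible default
  }

  if (!dataA) {
    // "No data" is also a scenario the requirements rarely mention.
    logger.warn('Fetch data A returned no data, skipping this run');
    return;
  }

  const dataB = formatDataA(dataA);
  await storeDataToDatabase(dataB);
};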

Sufficient logging

The previous point leads me to another tip: adding sufficient logging. In most cases, there will be some kind of logging service. In order to efficiently debug when things go wrong, it's necessary to have logs in your functions.

In general, log the flow of the code when things go right (happy path), so you can validate that everything works correctly. More importantly, log when things go wrong (unhappy path), and give a clear error message. When problems do occur, you'll have a much clearer picture of the cause and of what you need to do to fix it.
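
A minimal sketch of what that could look like, assuming some logging service is available as logger (the sync and fetch functions are made up for the example):

sufficient-logging-example.ts
const syncUsers = async (): Promise<void> => {
  // Happy path: trace the flow so you can validate that the sync ran.
  logger.info('User sync started');

  try {
    const users = await fetchUsers();
    await storeUsers(users);
    logger.info(`User sync finished, stored ${users.length} users`);
  } catch (error) {
    // Unhappy path: a clear error message tells you where to start looking.
    logger.error('User sync failed while fetching or storing users', { error });
    throw error;
  }
};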

Nitpicking allowed

A colleague of mine (shout-out to Jarre 👋) started some of his comments with "nit:", signaling that those comments were nitpicks. When processing the review, it wasn't necessary to fix those comments, but they do give developers insight into things that could be improved.

I actually really liked these nitpick comments, as they often point out small things you can change to improve.
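
For example, a nit comment (made up here) could look like:

nit: this variable is only used once, you could inline it.
nit: prefer const over let here, it's never reassigned.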

Functional testing

I think we've all had moments where we don't have a lot of time and just approve the code after checking that everything looks the way it should and the code itself seems valid. However, as a reviewer, you should always try to test the functionality. Deploying code is a shared responsibility, and actually testing the code should be a requirement for a good code review.

What do you take into account during code reviews?

I'm interested to hear what things you value the most when reviewing code.
Feel free to continue the discussion on Twitter.
