kvch.me

It has been more than a year since I started using Claude Code, GitHub Copilot, and ChatGPT to write code for me. In January 2025 I challenged myself to delegate as much coding as possible to generative AI. As expected, my coding workflow changed significantly. But I realized that my review process has changed as well.

My focus has shifted from checking whether the code works according to the requirements to judging its design. In this post I am sharing the three things that changed the way I review code.

Pair reviewing with an AI agent

The first major change is that I now review pull requests in parallel with an AI agent.

I prompt the agent to read the PR description, review the code, identify bugs and missing edge cases, and suggest alternative approaches to the problem. While the agent is working, I review the PR myself. I read the description, go through the code, and take notes in my physical notebook as I always have.

Once both reviews are complete, I compare my findings with the agent’s output.

Sometimes I need to run quick scripts to validate my ideas. In the past, I considered this relatively high effort and only did it for critical PRs. Now, I regularly generate and run short scripts to build confidence in the implementation.
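A typical throwaway script checks a single hunch about the code under review. Here is a minimal sketch of the idea; the `batch` helper is a hypothetical stand-in for whatever logic the PR actually contains:

```python
# Sanity-check a reviewer hunch: does the batching helper drop the
# final partial batch? batch() below is a stand-in reimplementation
# of the logic under review, not code from any real PR.

def batch(items, size):
    """Yield consecutive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

chunks = list(batch(list(range(7)), 3))
print(chunks)
# The last chunk is partial but present, so the hunch is disproved.
assert chunks == [[0, 1, 2], [3, 4, 5], [6]]
```

A script like this takes a minute to generate and run, which is exactly why the effort barrier is gone.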

Higher system design standards

In addition to using AI as a review companion, I have noticed that my tolerance for poor design has decreased. As I generated more and more code, I realized that agents produce subpar design most of the time, even when I spoon-feed them code examples. At this point anyone can generate code. The real challenge is producing code that fits well within an existing system and remains flexible and maintainable over time.

I find myself reading about design patterns more frequently. In the past, I consulted these best practices only when creating something new from scratch; rereading the principles helped me plan the architecture of new components.

I no longer consider shipping code quickly a sign of engineering excellence. On the contrary, I find it reckless to generate 10,000 lines of unmaintainable code and leave your colleagues to deal with the incidents and clean up the mess.

I have always valued strong design skills, but now I invest more time in keeping them sharp. I regularly read about system design and revisit well-designed open-source projects that have stood the test of time. This has made me far more sensitive to poor architectural decisions.

Cheap rewrites

I used to hesitate to request significant rewrites unless the code failed to meet basic standards such as correctness or maintainability.

Programming is a creative discipline, and most problems have multiple valid solutions. Some may be suboptimal, but if they work and are maintainable, they can be acceptable. It is often not worth the time to be pedantic about tiny details of the code.

Now my hesitation is gone. I am more likely to ask for a refactoring because the cost of iteration has dropped. I no longer feel that the engineer wasted their precious time if I reject their changes.

I have also changed the format of my feedback. When I suggest a refactoring, I provide example implementations. It is highly likely that my comment will be copy-pasted into an AI agent, so well-structured feedback can become a high-quality prompt.
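As an illustration, instead of a one-line "please simplify this", a review comment can carry a concrete sketch the author can hand straight to an agent. Everything below is hypothetical: the function name, the retry count, and the exception type stand in for whatever the PR actually uses:

```python
# Example implementation attached to a review comment such as:
# "Consider replacing the nested if/else retry branches with a loop,
#  along these lines (names and parameters are placeholders):"

import time

def fetch_with_retry(fetch, attempts=3, delay=0.1):
    """Call `fetch` up to `attempts` times, backing off between tries."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(delay * (attempt + 1))  # linear backoff
```

The point is not that this exact code ships, but that the comment is precise enough to work as a prompt without further clarification.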

Summary

Since adopting more AI tools in my workflow, my review process has changed as well. Because I can delegate identifying basic issues in the code to an AI agent, I am free to focus on software design. I have become more sensitive to bad architecture. I request changes more frequently because the cost of a new iteration is almost zero.

Thoughtful design has become more important than ever, so this is what I am looking for in PRs.