How to write a code review that helps the maintainer
Reviewing someone else’s open-source project is a skill, and like every skill it is mostly learned by doing it badly a few times and then doing it less badly. This post is the shortcut. If you’ve never written a substantive review of a public repo before, this template will keep you out of the four most common failure modes: vague praise, drive-by criticism, off-topic tangents, and the dreaded "looks great!" two-liner that is technically a review but tells the maintainer nothing.
RepoRanker requires reviews to be at least 800 characters. The minimum exists because anything shorter cannot say something useful. This post is also roughly 800 characters per section, which is a useful calibration of what a real review feels like to write.
The five-part review structure
Every good review answers five questions in order. Skip any of them and the review becomes either useless or unfair.
1. What does the project do?
Two to three sentences in your own words. This forces you to actually understand the project before reviewing it. If you can’t describe what it does, you haven’t earned the right to opine on whether it does it well. This is also useful to the maintainer: if your description differs from their pitch, that is a signal that the README is not landing.
2. Who is it for?
Project audience is harder to write about than project function. A library that solves a niche problem for senior infra engineers is not the same as a library that solves the same problem for junior front-end devs, and the review should treat them differently. Be specific about who you think this is for. The maintainer will tell you if you’re wrong.
3. What works well?
This is the part most reviewers either skip or pad with vague praise. Do neither. Pick two or three concrete things and say why they work. Examples:
- "The CLI prompts have sane defaults so I could ship a working install in under a minute."
- "The README has a copy-pasteable curl example that actually works on the first try."
- "The error messages include the file path and line number, which made debugging trivial."
Each of these tells the maintainer which decisions paid off, so they can do more of those things and not retire them in a refactor.
4. What needs work?
This is the part everyone gets wrong, in both directions. The two failure modes:
- Sycophancy. Refusing to say anything critical. Useless to the maintainer because it is indistinguishable from "I didn’t actually look at the code."
- Performative roasting. Listing every nit and personal preference as if the project were a job application. Reads as bad faith and gets the review disputed.
The middle path: pick the two or three rough edges that would block adoption, describe them specifically, and propose what "better" would look like. Examples:
- "The install instructions assume Node 20+ but don’t say so. I tried on Node 18 and got a cryptic error before figuring it out."
- "The default rate limit is 10 req/sec, which is fine for testing but blocks any real production use without an obvious way to raise it."
- "The TypeScript types in the public API are `any` in three places, which removes the type-safety benefit for downstream consumers."
Each one is concrete, falsifiable, and actionable. The maintainer can either fix it or explain why it’s intentional.
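To make the `any` example above concrete, here is a hedged sketch of what that kind of feedback points at. The names (`parseConfigLoose`, `parseConfig`, `Config`) are hypothetical and not from any real project; the point is that a declared return shape gives downstream consumers compiler checks that `any` silently erases.

```typescript
// Hypothetical illustration, not from a real project:
// a public API returning `any` vs. a declared shape.

// Before: `any` erases the contract. Callers can misuse the
// result and the compiler stays silent.
function parseConfigLoose(raw: string): any {
  return JSON.parse(raw);
}

// After: a declared interface lets the compiler catch misuse
// downstream, and the runtime check keeps the promise honest.
interface Config {
  host: string;
  port: number;
}

function parseConfig(raw: string): Config {
  const parsed = JSON.parse(raw) as Config;
  if (typeof parsed.host !== "string" || typeof parsed.port !== "number") {
    throw new Error("invalid config");
  }
  return parsed;
}
```

A review comment that names the three `any` sites and sketches the typed alternative, as above, is something the maintainer can act on in an afternoon.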
5. Would you use it / recommend it?
One sentence verdict. Skip the false neutrality. If you wouldn’t use this in your own work, say so and say why. If you would, say what you’d use it for. The verdict is the part future readers remember.
What to avoid
- Personal attacks. Review the work, not the maintainer. "This code is bad" is bad feedback. "The error handling in `src/api.ts:42` swallows exceptions silently and makes debugging hard" is good feedback.
- Off-topic content. If you spend three paragraphs talking about a competing project you like better, you are reviewing the competitor, not this one. Stay on the project at hand.
- Drive-by demands. "Add WebAssembly support" is not feedback on what exists. It is a feature request. Open an issue. The review is for what the project actually is.
- AI-generated filler. If your review reads like an LLM wrote it, it probably did, and it’s probably going to be disputed. Reviews must clear 800 characters of original thought, not 800 characters of paragraph-ese padded out to fit the minimum.
One more thing: be honest
The reason peer review works is that it is harder to fake than a star. A star takes a click. A review takes 20 minutes of actually engaging with the project. RepoRanker’s 48-hour dispute window exists so the maintainer can flag low-effort or dishonest reviews before they go live, in which case the reviewer earns no credits. Honest negative reviews are protected. Lazy positive reviews are not.
The best reviews are the ones the maintainer references in their next changelog: "Thanks to @reviewer for catching this in the public review." That happens when the review is specific, grounded, and useful enough to act on.
List your project and see what your peer reviews look like. Or browse the leaderboard and write one of your own.
Related: Trust & moderation · Content policy · How RepoRanker works.
