CodeGrader

AI-powered code evaluation, currently in beta testing

I built an automated code review pipeline that reads a GitHub repo and generates a full evaluation — structure, readability, patterns, the works. It uses AI to produce feedback that (hopefully) feels more like a teacher than a linter.


Right now I'm testing it in the open. If you have a public repo you'd like to throw at it, go for it — I'm genuinely curious whether the output is useful or if it's just noise. It can take a few minutes, depending on the size of the repo.


If you've already tried it, you'll get a quick feedback prompt on your next visit — even a one-line reaction helps me figure out what to fix.

Try it on a repo
Works with public GitHub repos
Usually takes under a minute, but be patient
4 / 200 trial evaluations used
WHAT HAPPENS
  1. You paste a link — any public GitHub repo: side projects, assignments, experiments, whatever
  2. The pipeline reads your code — it looks at structure, naming, patterns, complexity, and a few other things (no GitHub permissions needed, it just clones what's public; see the sketch after this list)
  3. You get a full evaluation — a report with a grade, a section-by-section breakdown, and specific per-line suggestions
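
For the curious, here's roughly what the "reads your code" step amounts to: a shallow clone of the public repo followed by a walk over its files. This is a minimal sketch assuming a plain git-based fetch (the repo URL and file-extension filter are just examples), not the actual CodeGrader pipeline:

```python
# Minimal sketch: clone a public repo without any GitHub token or OAuth,
# then collect the source files to analyze. Illustrative only.
import os
import subprocess
import tempfile

def fetch_public_repo(repo_url: str) -> list[str]:
    """Shallow-clone a public repo and return the paths of its source files."""
    workdir = tempfile.mkdtemp(prefix="codegrader-")
    # --depth 1 keeps the clone small: only the latest snapshot is needed.
    subprocess.run(["git", "clone", "--depth", "1", repo_url, workdir], check=True)

    source_files = []
    for root, dirs, files in os.walk(workdir):
        dirs[:] = [d for d in dirs if d != ".git"]  # skip git metadata
        for name in files:
            if name.endswith((".py", ".js", ".ts", ".java", ".go")):
                source_files.append(os.path.join(root, name))
    return source_files

# Example with a hypothetical URL:
# files = fetch_public_repo("https://github.com/user/project")
```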
A FEW THINGS WORTH KNOWING

This is a free beta test, so there are some limits: each GitHub account can run up to 5 evaluations, enough to get a feel for it. The tool works best on focused, single-purpose projects (think a specific module or assignment, not a massive monorepo).

I'm keeping the beta open for a limited batch of submissions so I can actually review how the pipeline performs, then I'll iterate based on what I find.


No account needed. No email collected.
Your code isn't stored after the evaluation runs.


If you have any questions,
feel free to email me


Giovanni