Core Philosophy
I'm fully on board with using AI to write code. I use Cursor, Claude, Copilot, and ChatGPT daily, and they've made me significantly faster. But I've learned — sometimes the hard way — that there's a critical line between AI-assisted development and simply being a copy-paste relay for model output.
The distinction matters: AI-assisted means I understand every line, I've tested the logic, and I can defend my decisions. AI-proxied means I hit "accept all" and hoped for the best. The first approach multiplies my productivity. The second is a liability I refuse to ship.
No matter what tools helped produce the code, I am the sole responsible author of everything I submit — the logic, the design choices, and the trade-offs.
The Five Rules I Follow
Rule 1 — Prove It Works
AI writes code that looks plausible but fails at runtime more often than you'd think. "It compiled" or "no red squiggles" is never enough for me.
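As a sketch of what that bar looks like in practice, here is a minimal pytest-style check (slugify is a hypothetical helper, not from any real project):

```python
import re

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens (hypothetical helper)."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# The kind of check I actually run before submitting -- not just "it compiled".
def test_slugify_handles_punctuation():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_empty_input():
    assert slugify("") == ""
```

Running `pytest` on this file and pasting the passing output into the PR is the whole point: the proof travels with the code.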
I run the code and capture proof: pytest, npm test, cargo test, or screenshots. No exceptions.
Rule 2 — No Hallucinations, No Reinventing
I've watched AI confidently explain code that was already deleted, reinvent utilities that exist three files away, and write comments about variables that don't exist. I actively guard against this.
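A contrived before/after sketch of the tells I'm describing, in Python (parse_port is a made-up helper for illustration):

```python
# Before: the kind of output I reject -- ghost comment, bare except, silent failure.
def parse_port_bad(value):
    # loop through the array   <- comment describes code that does not exist
    try:
        return int(value)
    except:  # bare except swallows every error, including typos in my own code
        return None

# After: explicit behavior I can actually defend in review.
def parse_port(value: str) -> int:
    port = int(value)  # raises ValueError on garbage input -- on purpose
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

The "bad" version looks tidy, which is exactly why it slips through when nobody reads it.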
Ghost comments like // returns the result or // loop through the array. If my code has these, it means I got lazy and accepted AI output without reading it.
Other tells: unwrap() in Rust library code, bare except: in Python, any everywhere in TypeScript. These are tell-tale signs of unreviewed output.
Rule 3 — If I Wrote It, I Can Explain It
This is the simplest rule and the most important one. If someone asks me about a function I submitted, I can walk through it line by line.
Rule 4 — I Own the Architecture
AI is great at writing individual functions, but it tends to over-architect. Module boundaries, API design, data modeling — those decisions come from me, not the model.
Rule 5 — Security Is My Responsibility
AI has no concept of my security context. It doesn't know what's sensitive, what's public, or what could go wrong. I treat security-touching code with extra care.
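One concrete habit, sketched with Python's sqlite3 module (the table and query are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # AI output often interpolates user input straight into SQL:
    #   conn.execute(f"SELECT * FROM users WHERE name = '{name}'")  # injectable
    # I rewrite it as a parameterized query before it ever reaches review.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With the placeholder, a probe like `find_user("' OR '1'='1")` returns nothing instead of dumping the table.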
How I Disclose AI Usage
I believe transparency builds trust. When I use AI in my work, I'm upfront about it — not because it's shameful, but because it gives reviewers the right context to evaluate my code.
My Code Review Checklist
When I review code — my own or someone else's — here's what I specifically look for to catch low-effort AI output:
📝 Suspiciously Polished PR Descriptions
👻 Hallucinated Patterns
Generic placeholder names like data or process_item where the codebase has established naming conventions.
✅ Missing Proof of Execution
🔍 Ghost Comments & Noise
⚙️ Broken Language Idioms
🛡️ Security-Sensitive Code
Anti-Patterns I've Learned to Avoid
🍝 The "Spaghetti Dump"
A massive PR that spans dozens of files, clearly generated in one long AI session with zero incremental thought. Usually full of duplicated logic and inconsistent naming. I've done this once — never again.
🏰 The "Castle in the Sky"
Over-engineered abstractions no one asked for — factory-of-factories, premature generalization, design patterns for their own sake. AI loves to architect. I've learned to keep it grounded.
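A caricature of the pattern in Python; both versions are invented for illustration:

```python
from abc import ABC, abstractmethod

# What the model often proposes: an abstract base and a factory
# for a problem with exactly one implementation.
class GreeterFactory(ABC):
    @abstractmethod
    def create(self): ...

class EnglishGreeterFactory(GreeterFactory):
    def create(self):
        return lambda name: f"Hello, {name}"

# What the codebase actually needed:
def greet(name: str) -> str:
    return f"Hello, {name}"
```

Both produce the same greeting; only one of them has to be maintained, documented, and explained to the next reader.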
🦜 The "Parrot"
Every function has a comment that restates its name. // Gets the user above getUser(). AI loves narrating the obvious — I strip this noise out before committing.
🎭 The "Confidence Mask"
Beautifully formatted code and eloquent commit messages that together hide a total lack of understanding; it breaks on the first edge case. If I can't answer basic questions about my own PR, something went wrong.