Personal Engineering Standard

My AI-Assisted Development Policy

I use AI coding tools every day — Cursor, Claude, Copilot, ChatGPT — and they've made me significantly more productive. But over the past year, I've developed a clear set of principles for using them responsibly. This is my personal framework.

✓ AI-Assisted: Yes · ✗ Blind Copy-Paste: Never · Standard: Non-Negotiable

01

Core Philosophy

I'm fully on board with using AI to write code, and these tools have made me significantly faster. But I've learned — sometimes the hard way — that there's a critical line between AI-assisted development and simply being a copy-paste relay for model output.

The distinction matters: AI-assisted means I understand every line, I've tested the logic, and I can defend my decisions. AI-proxied means I hit "accept all" and hope for the best. The first approach multiplies my productivity. The second is a liability I refuse to ship.

No matter what tools helped produce the code, I am the sole responsible author of everything I submit — the logic, the design choices, and the trade-offs.

02

The Five Rules I Follow

🔬

Rule 1 — Prove It Works

AI writes code that looks plausible but fails at runtime more often than you'd think. "It compiled" or "no red squiggles" is never enough for me.

Must
Every PR I submit with functional changes includes local test output — pytest, npm test, cargo test, or screenshots. No exceptions.
Must
Non-trivial changes get discussed first — in a GitHub issue, team chat, or design doc. The PR always references that discussion.
Must
Any algorithm I implement references an authoritative source — a paper, official docs, or a proven library. I don't trust AI to invent algorithms.
Red Flag
If I can't show test output for complex logic, the PR stays in draft until I can.
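What "proof it works" looks like in practice: the code under review plus the tests whose pytest output goes into the PR. This is a minimal illustrative sketch — `parse_duration` and its tests are hypothetical names, not from any real PR.

```python
import re

def parse_duration(text: str) -> int:
    """Parse strings like '2h30m' into total minutes."""
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?", text.strip())
    if match is None or not text.strip():
        raise ValueError(f"invalid duration: {text!r}")
    return int(match.group(1) or 0) * 60 + int(match.group(2) or 0)

def test_parse_duration():
    assert parse_duration("2h30m") == 150   # typical input
    assert parse_duration("45m") == 45      # minutes only
    assert parse_duration("1h") == 60       # hours only
    try:
        parse_duration("abc")               # garbage must raise, not return 0
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Running `pytest -v` on this and pasting the pass/fail output into the PR is the whole discipline: the reviewer sees the code was executed, not just generated.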
🚫

Rule 2 — No Hallucinations, No Reinventing

I've watched AI confidently explain code that was already deleted, reinvent utilities that exist three files away, and write comments about variables that don't exist. I actively guard against this.

Must
Use what already exists in the codebase — utilities, shared patterns, established conventions. I don't let AI reinvent the wheel.
Red Flag
Ghost comments — comments describing logic that was removed or never existed. I treat this as a sign the code wasn't properly reviewed.
Red Flag
Noise comments like // returns the result or // loop through the array. If my code has these, it means I got lazy and accepted AI output without reading it.
Red Flag
Ignoring language idioms: unwrap() in Rust library code, bare except: in Python, any everywhere in TypeScript. These are tell-tale signs of unreviewed output.
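Here's a contrived before/after showing two of these red flags in one place — the noise comment and the bare except — next to the version I'd actually commit. `load_config` is an illustrative name, not a real utility.

```python
import json

# --- What unreviewed AI output often looks like ---
# def load_config(path):
#     # open the file          <- noise comment: restates the next line
#     try:
#         ...
#     except:                  # bare except: hides typos and KeyboardInterrupt
#         return {}

# --- What I actually commit: no narration, one specific exception ---
def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # a missing config is expected; any other failure should raise
```

The fixed version is shorter, and the one comment it keeps explains a decision rather than narrating syntax.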
🧠

Rule 3 — If I Wrote It, I Can Explain It

This is the simplest rule and the most important one. If someone asks me about a function I submitted, I can walk through it line by line.

Must
I can explain the logic, justify the design, derive the math, and identify edge cases for every function I submit. Not from memory — but from genuine understanding.
Red Flag
"That's what the AI generated" or "I don't know, it passes tests" — if I ever catch myself thinking this, the code goes back for a rewrite.
Note
This doesn't mean memorizing every line. It means I've read, understood, and can discuss the code as confidently as if I'd written it by hand.
🏗️

Rule 4 — I Own the Architecture

AI is great at writing individual functions, but it tends to over-architect. Module boundaries, API design, data modeling — those decisions come from me, not the model.

Must
Structural decisions — module boundaries, API surfaces, data modeling, dependency choices — are mine. I don't blindly accept AI suggestions for architecture.
Must
When AI suggests a significant structural change, I document why I accepted or modified it in the PR description.
Red Flag
Unnecessary abstractions, factory-of-factories, over-engineered patterns with no justification — I reject these as "AI-driven complexity."
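A caricature of the pattern I reject most often, side by side with what ships. Both halves are invented for illustration — the point is the shape, not the names.

```python
# --- Rejected: factory-of-factories for a single implementation ---
# class NotifierFactoryProvider:
#     def get_factory(self):
#         return EmailNotifierFactory()
#
# class EmailNotifierFactory:
#     def create(self):
#         return EmailNotifier()

# --- Shipped: a plain function; indirection waits for a second backend ---
def notify(address: str, message: str) -> str:
    # stand-in for a real email call; returns the payload so tests can inspect it
    return f"to={address}: {message}"
```

If a second notification backend ever appears, that's the moment to introduce an interface — not before.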
🔒

Rule 5 — Security Is My Responsibility

AI has no concept of my security context. It doesn't know what's sensitive, what's public, or what could go wrong. I treat security-touching code with extra care.

Must
I never paste proprietary code, secrets, API keys, or user data into AI prompts unless I'm using enterprise-tier tools with data guarantees.
Must
Any AI-generated code touching authentication, authorization, cryptography, or financial logic gets a manual line-by-line review from me.
Red Flag
Submitting security-critical AI-generated code without explicit review notes is something I consider a personal violation of my own standards.
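One concrete bug this kind of line-by-line review catches: AI-generated auth code routinely compares secrets with ==, which returns as soon as the first byte differs and can leak the secret through timing. The function name here is hypothetical; `hmac.compare_digest` is the standard-library fix.

```python
import hmac

def token_matches(supplied: str, expected: str) -> bool:
    # compare_digest runs in time independent of where the strings differ;
    # a plain '==' short-circuits and leaks timing information
    return hmac.compare_digest(supplied.encode(), expected.encode())
```

It's a one-line difference that no test suite will catch — only a reviewer who reads the line and knows why it matters.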
03

How I Disclose AI Usage

I believe transparency builds trust. When I use AI in my work, I'm upfront about it — not because it's shameful, but because it gives reviewers the right context to evaluate my code.

Minimal AI
Autocompletion, typo fixes, boilerplate scaffolding. I wrote the core logic myself.
Significant AI
AI drafted major sections. I reviewed every line, understood the approach, tested it, and modified where needed.
Fully Generated
AI produced most of the code. I hold this to extra scrutiny and can still explain every piece of it.
04

My Code Review Checklist

When I review code — my own or someone else's — here's what I specifically look for to catch low-effort AI output:

📝 Suspiciously Polished PR Descriptions

If the PR description is 300+ words with generic H2 headers and no specific file references
Then I flag it — a good PR description is concise and references specific changes

👻 Hallucinated Patterns

If code reinvents existing utilities or uses names like data, process_item
Then I ask for a refactor using existing code and domain-specific naming

🧪 Missing Proof of Execution

If no test output, no linked issue, no reference for algorithms
Then I request the missing artifacts before continuing review

🔍 Ghost Comments & Noise

If comments reference nonexistent variables or just restate the function name
Then I flag it as AI hallucination and ask for cleanup

⚙️ Broken Language Idioms

If code ignores language best practices — sloppy error handling, unnecessary cloning, missing docs
Then I request fixes per the language's idiomatic style

🛡️ Security-Sensitive Code

If PR touches auth, crypto, validation, or financial logic without security review notes
Then I block merge until security review is documented
05

Anti-Patterns I've Learned to Avoid

🍝 The "Spaghetti Dump"

A massive PR that spans dozens of files, clearly generated in one long AI session with zero incremental thought. Usually full of duplicated logic and inconsistent naming. I've done this once — never again.

🏰 The "Castle in the Sky"

Over-engineered abstractions no one asked for — factory-of-factories, premature generalization, design patterns for their own sake. AI loves to architect. I've learned to keep it grounded.

🦜 The "Parrot"

Every function has a comment that restates its name. // Gets the user above getUser(). AI loves narrating the obvious — I strip this noise out before committing.

🎭 The "Confidence Mask"

Beautifully formatted code with eloquent commit messages that hides a total lack of understanding. Breaks on the first edge case. If I can't answer basic questions about my own PR, something went wrong.

06

My Best Practices

1
Read before I commit. I go through every line the AI generates. If I can't explain a block to a teammate, it doesn't get submitted.
2
AI accelerates, I decide. I let AI handle boilerplate, first drafts, and pattern matching. Architecture, edge cases, and security stay in my hands.
3
Test the unhappy paths. AI-generated code has a "happy path bias" — it writes for the ideal scenario. I specifically write tests for edge cases, error conditions, and boundary values.
4
Keep PRs focused. I don't dump an entire AI session into one PR. I break changes into logical, reviewable units — same discipline as hand-written code.
5
Review with fresh eyes. I take a break between generating and reviewing. AI output is convincing at first glance — distance helps me catch errors.
6
Write better prompts. The quality of AI output directly correlates with prompt quality. I invest time in providing context, constraints, and existing patterns.
7
Keep my fundamentals sharp. AI tools are powerful but they come and go. Debugging, system design, reading code, reasoning about complexity — these skills are permanent.
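To make practice 3 concrete, here's a sketch of the happy-path bias and how I counter it. `clamp` is a made-up example function; the first assertion is the kind of test AI writes for you, and the rest are the cases I add myself.

```python
def clamp(value: float, low: float, high: float) -> float:
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must be <= high")
    return max(low, min(value, high))

# Happy path (what AI-generated tests typically cover):
assert clamp(5, 0, 10) == 5

# Unhappy paths (what I add before the PR leaves draft):
assert clamp(-3, 0, 10) == 0      # below the range
assert clamp(99, 0, 10) == 10     # above the range
assert clamp(0, 0, 0) == 0        # degenerate range
try:
    clamp(1, 10, 0)
except ValueError:
    pass  # inverted bounds must raise, not silently clamp
```

The unhappy-path tests are where AI-generated code actually breaks, so that's where my own test-writing effort goes.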