Engineering Leverage with AI: Practical Workflows That Save Senior Dev Time
5/8/2026
Where I was wasting time before
I used to burn a lot of senior-dev time on work that looked small: orienting on large PRs, writing repetitive test setup, sanity-checking migration diffs, and patching docs after code already moved on.
AI didn’t make me smarter. It made repetitive setup cheaper. That’s the leverage.
Workflow 1: PR review prep
Before reviewing a big PR, I ask AI for a risk brief: what changed by subsystem, hotspots, likely regressions, and suspicious files (auth/state/migrations).
- Before: 20–30 minutes to build a map
- After: 8–12 minutes to find the high-risk zones
I still do the real review. AI just gets me to the right files first.
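Here's a minimal sketch of how I assemble that brief, assuming a `main` base branch. The `RISKY_HINTS` substrings are placeholders; tune them to whatever is actually dangerous in your repo:

```python
# Sketch: gather a PR's changed files, flag risk hotspots, and build the
# prompt for the risk brief. Assumes you're in a git repo with a "main" branch.
import subprocess

RISKY_HINTS = ("auth", "session", "migration", "state")  # placeholder list

def changed_files(base: str = "main") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def risk_brief_prompt(files: list[str]) -> str:
    # Anything matching a risky hint gets surfaced first.
    hot = [f for f in files if any(h in f.lower() for h in RISKY_HINTS)]
    return (
        "Summarize this PR by subsystem: hotspots, likely regressions, "
        "and anything touching auth, state, or migrations.\n"
        "Changed files:\n" + "\n".join(files) + "\n"
        "Review these high-risk files first:\n" + "\n".join(hot)
    )

if __name__ == "__main__":
    print(risk_brief_prompt(changed_files()))
```

The point isn't the script; it's that the risky-path list is mine, not the model's, so the brief starts from my priors.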
Workflow 2: test scaffolding
This is consistently useful. I ask for edge-case tables, candidate test names, and skeleton setup. Then I rewrite heavily.
- What worked for me: generate test names first, bodies second (see the sketch after this list)
- Rule: generated tests are drafts, never truth
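A sketch of what that first pass looks like in pytest. `parse_amount` is a hypothetical function under test, and the stub implementation exists only so the skeleton runs:

```python
# Sketch: edge-case table plus parametrized test names, the kind of draft
# I'd ask AI for and then rewrite by hand.
import pytest

def parse_amount(raw: str) -> int | None:
    # Stand-in implementation so the skeleton runs; replace with real code.
    try:
        return int(raw.strip())
    except ValueError:
        return None

EDGE_CASES = [
    ("empty string", "", None),
    ("leading whitespace", " 42", 42),
    ("negative value", "-7", -7),
    ("non-numeric", "abc", None),
]

@pytest.mark.parametrize("label,raw,expected", EDGE_CASES)
def test_parse_amount_edge_cases(label, raw, expected):
    # Draft body: generated tests are drafts, never truth.
    assert parse_amount(raw) == expected
```

Naming the cases before writing bodies forces the model (and me) to enumerate behavior instead of rubber-stamping whatever the implementation does.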
Workflow 3: migration diffs
Migrations are where confident nonsense is expensive. I use AI to summarize schema changes, backfill requirements, rollback risk, and irreversible operations.
- Must-have prompt line: “List destructive and irreversible operations explicitly.”
Then I manually validate each destructive step against actual SQL/migration files.
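For that manual check, a dumb keyword scan beats trusting the summary. A sketch, assuming migrations live as `.sql` files under `migrations/`; the keyword list is a starting point, not a complete catalog of destructive operations:

```python
# Sketch: scan migration files for destructive/irreversible SQL so I can
# cross-check the AI's list against what's actually in the diff.
import pathlib
import re

DESTRUCTIVE = re.compile(
    r"\b(DROP\s+(TABLE|COLUMN|INDEX)|TRUNCATE|DELETE\s+FROM"
    r"|ALTER\s+TABLE\s+\S+\s+DROP)\b",
    re.IGNORECASE,
)

def destructive_lines(migrations_dir: str = "migrations"):
    hits = []
    for path in pathlib.Path(migrations_dir).glob("**/*.sql"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if DESTRUCTIVE.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in destructive_lines():
        print(f"{path}:{lineno}: {line}")
```

If the AI's summary and the scan disagree, the scan wins, and I go read the migration.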
Workflow 4: docs sync
Docs drift is silent debt. After merge, I ask AI for doc patch suggestions: README changes, API examples, flag defaults, and runbook deltas.
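A sketch of how I assemble that request, assuming the merge commit's parent is `HEAD~1` and that public-facing code lives under paths like `api/` and `config/` (both assumptions; adjust to your layout):

```python
# Sketch: after merge, pull the diff for doc-sensitive paths and build a
# doc-patch prompt, so doc drift is a prompt away instead of silent.
import subprocess

DOC_SENSITIVE = ("api/", "cli/", "config/", "README")  # example paths

def doc_sync_prompt(merge_base: str = "HEAD~1") -> str:
    out = subprocess.run(
        ["git", "diff", merge_base, "HEAD", "--", *DOC_SENSITIVE],
        capture_output=True, text=True, check=True,
    )
    return (
        "Given this merged diff, suggest doc patches: README changes, "
        "API examples, flag defaults, and runbook deltas.\n\n" + out.stdout
    )

if __name__ == "__main__":
    print(doc_sync_prompt())
```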
If reliability discipline is your current pain point, this connects directly to my other post: /blog/side-project-to-reliable-product-observability-error-budgets-release-discipline
What breaks
- Hallucinated helpers and nonexistent APIs
- Shallow fixes that pass unit tests but fail integration paths
- Overconfident explanations that are subtly wrong
- Over-refactors when a surgical fix was needed
False confidence is the dangerous one.
My guardrails (verification checklist)
- Can I trace every claim to code, schema, or docs?
- Did I run tests at the right level (unit/integration)?
- Did I validate edge cases, not just happy paths?
- Does this follow our existing conventions?
- Can I explain this change plainly to another engineer?
If any answer is no, it’s not ready.
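If you want the "any no means not ready" rule to be mechanical rather than aspirational, it reduces to something this small. Illustrative only; the real gate is human judgment:

```python
# Sketch: the checklist as a merge gate. Answers come from the reviewer;
# one "no" (or one unanswered question) means not ready.
CHECKLIST = [
    "Can I trace every claim to code, schema, or docs?",
    "Did I run tests at the right level (unit/integration)?",
    "Did I validate edge cases, not just happy paths?",
    "Does this follow our existing conventions?",
    "Can I explain this change plainly to another engineer?",
]

def ready(answers: dict[str, bool]) -> bool:
    return all(answers.get(q, False) for q in CHECKLIST)
```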
What I’d adopt this week
- Start with one workflow (PR prep is easiest).
- Measure time saved for one week (a logging sketch follows this list).
- Track mistakes introduced vs mistakes avoided.
- Define one non-negotiable guardrail.
- Write your team’s AI no-fly zones.
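For the measurement step, I keep it as simple as possible: one CSV row per AI-assisted task, so "time saved" is a number instead of a feeling. The file name and fields here are just what I'd start with:

```python
# Sketch: one-line-per-task log for the one-week measurement.
import csv
import datetime

def log_task(workflow: str, minutes_before: int, minutes_after: int,
             mistakes_introduced: int, mistakes_avoided: int,
             path: str = "ai_leverage_log.csv") -> None:
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(), workflow,
            minutes_before, minutes_after,
            mistakes_introduced, mistakes_avoided,
        ])

# Example: PR prep that used to take 25 minutes took 10, no mistakes either way.
log_task("pr_prep", 25, 10, 0, 0)
```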
AI is best as power steering. You still own verification and final judgment.