The Moment Everything Changed
If you’ve been using AI coding assistants, you’ve probably felt this: you describe what you want, Claude or Copilot writes some code, then you spend the next hour fixing bugs, clarifying misunderstandings, and wondering why this feels harder than just writing the code yourself.
I was there six months ago. I’d spend 20 minutes explaining a feature, watch Claude generate code, then spend another 30 minutes debugging why it didn’t work with the existing codebase. The promise of “10x developer productivity” felt like a cruel joke.
Then I had a realization that changed everything: I was using a tech lead like a junior developer.
This is the story of how I went from frustrated prompt-fixing to consistently shipping 3-5 features daily—not by finding better AI tools, but by completely changing how I think about the work.
The Problem: We’re Using the Wrong Mental Model
The frustration most developers feel with AI assistants comes from a fundamental mismatch. We treat them like faster typewriters—feed them a prompt, get code back, repeat. When the code is wrong, we blame the AI for being “dumb” or “not understanding context.”
I’ve learned that the problem isn’t the AI. The problem is treating a tool capable of understanding and planning like a code generator that needs micromanagement.
Think about how you’d work with a skilled engineer on your team. You wouldn’t describe every implementation detail. You’d explain the goal, provide context about the codebase, answer clarifying questions, review their plan, then let them execute while you check in periodically. You’d focus on the “what” and “why,” trusting them with the “how.”
That’s exactly what AI coding assistants can do—if you set them up for success.
The Mental Model Shift: Developer to Tech Lead
Here’s what I’ve found works: think of yourself as a tech lead directing work, not a developer writing code.
As a tech lead, your job is to:
- Define clear requirements and success criteria
- Provide context about the codebase and architecture
- Review plans before implementation starts
- Unblock and redirect when things go wrong
- Ensure quality through verification
The AI assistant’s job is to:
- Ask clarifying questions
- Propose implementation approaches
- Write the actual code
- Execute repetitive tasks
- Adapt based on your feedback
This shift feels uncomfortable at first. For years, I defined my value by the code I wrote. Stepping back to let AI handle implementation felt like losing my identity as a developer.
But here’s what I discovered: directing work at a higher level is harder and more valuable than writing code. The thinking doesn’t go away—it just moves up the abstraction ladder. Instead of thinking about how to implement a form validator, I’m thinking about how three features interact, how to structure work for parallel execution, and how to maintain consistency across a growing codebase.
The code still needs to be right. But now I’m ensuring it’s right by providing clear direction and thorough review, not by typing every character myself.
The Foundation: Steering Documents
The single biggest breakthrough in my AI-assisted workflow came from solving the context problem.
Every conversation with an AI assistant starts fresh. It doesn’t know your tech stack, your architecture patterns, your code style, or your business logic. So you either re-explain everything each time (exhausting), or the AI makes assumptions that don’t match your codebase (frustrating).
I struggled with this for months. Each new feature meant copy-pasting context, and inevitably the AI would still make incorrect assumptions. The code would use the wrong state management approach, or violate our API conventions, or just feel inconsistent with the rest of the codebase.
Then I discovered steering documents—structured context files that the AI reads at the start of every conversation.
I created three core documents:
product.md: What we’re building and why
- Product vision and goals
- User personas and use cases
- Feature requirements and business logic
- Success metrics

tech.md: Technical standards and patterns
- Technology stack (languages, frameworks, databases)
- Architecture patterns (microservices, event-driven, etc.)
- Code organization and file structure
- API conventions and data models
- Testing requirements

structure.md: Codebase layout
- Directory structure with explanations
- Where to find key components
- Naming conventions
- File organization principles
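To make this concrete, here’s the kind of excerpt a tech.md might contain. The stack and conventions below are purely illustrative placeholders, not a recommendation—yours should describe whatever your codebase actually does:

```markdown
## Stack
- TypeScript, React 18, Vite; TanStack Query for server state
- Node + Express API, PostgreSQL via Prisma

## Conventions
- Feature code lives in src/features/<feature>/; shared UI in src/components/
- API responses use a { data, error } envelope; never leak raw errors to the client
- Every new endpoint ships with a unit test and an integration test
- No new dependencies without a note in this file explaining why
```

The exact contents matter far less than the fact that they’re written down once and loaded into every conversation.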
Building these documents the first time took about 3 hours. Now they save me 15 minutes on every single feature. That’s the investment paying for itself after 12 features—which happens in less than a week.
The results were immediate:
- No more re-explaining our React patterns every conversation
- No more incorrect assumptions about API structure
- Code that’s consistent with the existing codebase on the first try
- Faster implementation because the AI understands the context upfront
I’ve written over 700 lines of steering documentation. I wrote it once. The AI reads it before every feature. That’s the ultimate “write once, read forever” investment.
The Workflow: Systematic Delegation
With steering documents providing consistent context, I could finally implement a systematic workflow. It has three parts:
1. Plan Mode First
This was the hardest habit to build, but it’s the most valuable. Before any implementation, I have the AI create a detailed plan.
The workflow:
- I describe the feature at a high level
- The AI asks clarifying questions
- I review the proposed implementation approach
- We define success criteria together
- Only then do we start writing code
This felt inefficient at first. Why spend 10 minutes planning when you could just start coding?
Because those 10 minutes prevent 30 minutes of rework. When the AI understands the full scope upfront, it makes better architectural decisions, anticipates edge cases, and writes code that actually works with the existing system.
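To show what that kickoff looks like in practice, here’s the shape of a plan-mode request I might write. The feature, file paths, and criteria below are purely illustrative:

```markdown
Feature: CSV export of paid invoices for the finance team.

Context: reuse the existing invoices API; date-formatting helpers already
live in src/utils/dates.ts. Don't add new dependencies.

Before writing any code:
1. Ask me anything that's ambiguous (pagination? time zones? column order?)
2. Propose an implementation approach and list the files you'll touch

Success criteria:
- Columns match the invoice table in the admin UI
- Handles 10k+ rows without timing out
- Unit tests for the formatter, integration test for the endpoint
```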
2. Parallel Execution with Git Worktrees
Once I stopped writing code myself, I realized I could direct multiple AI assistants working on different features simultaneously.
Git worktrees let you have multiple branches checked out at once in different directories:
```bash
# Create 3 parallel workspaces
git worktree add ../feature-1-payment-form feature/payment-form
git worktree add ../feature-2-user-settings feature/user-settings
git worktree add ../feature-3-email-templates feature/email-templates

# Launch 3 Claude Code instances (one per worktree)
# My role: review plans, approve, unblock
# Result: 3 features in the time it used to take for 1
```

My typical day now:
- Morning: Review 3 feature requirements, have each AI create plans, approve all three
- Midday: Check-in on progress, unblock any questions, review initial implementations
- Afternoon: Code review, test, approve PRs, start next batch
I’m not coding faster. I’m orchestrating parallel work streams. It’s a fundamentally different activity.
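One housekeeping note: worktrees accumulate, so it’s worth tearing down the ones whose PRs have merged. The paths and branch names below just match the earlier example:

```bash
# Remove a finished workspace and its merged branch
git worktree remove ../feature-1-payment-form
git branch -d feature/payment-form

# See what's still checked out, and clean up stale metadata
git worktree list
git worktree prune
```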
3. MCP Servers for Tooling Access
One breakthrough was giving AI assistants direct access to development tools through MCP (Model Context Protocol) servers.
Instead of me running database queries and pasting results, or checking API docs manually, the AI can:
- Query the database directly to understand schema
- Access API documentation
- Run tests and interpret results
- Check logs and error messages
This closed the feedback loop. The AI can verify its own work, catch mistakes early, and iterate without waiting for me to run commands and paste output back.
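As one illustration of what the setup can look like with Claude Code: the commands below register a community Postgres MCP server so the assistant can inspect the schema itself. The package name and connection string are placeholders, and the exact CLI syntax may differ by version, so treat this as a sketch and check your client’s documentation:

```bash
# Register a Postgres MCP server with Claude Code (illustrative; verify the
# exact syntax and package name against your client's docs)
claude mcp add postgres -- npx -y @modelcontextprotocol/server-postgres \
  "postgresql://localhost:5432/myapp_dev"

# Confirm the server is registered and available to the assistant
claude mcp list
```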
Real Results: The Productivity Multiplier
I track my feature output carefully. Here’s what changed:
Before (traditional development):
- 1 feature per day on average
- 6-8 hours of coding
- 1-2 hours of planning and review
- Context switching killed momentum
After (AI-assisted with systematic workflow):
- 3-5 features per day on average
- 1-2 hours of directing and reviewing
- 6-7 hours of other high-value work (architecture, documentation, mentoring)
- Parallel execution eliminates context switching
The multiplier isn’t just about raw speed. It’s about:
- Better thinking time: I focus on architecture and edge cases, not syntax
- Consistent quality: Steering documents ensure every feature follows our patterns
- Reduced context switching: Parallel work means I batch similar activities
- Energy preservation: Directing is less mentally draining than coding
I’m shipping more features, and I’m less exhausted at the end of the day.
Getting Started: A Three-Week Ramp
If you want to try this approach, here’s what I’d recommend:
Week 1: Build Your Foundation
- Spend 3-5 hours creating steering documents
- Start with tech.md (your stack, patterns, conventions)
- Add product.md (what you’re building, business logic)
- Document structure.md (codebase layout)
- Pick one small feature to test the workflow
Week 1-2: Learn Plan Mode
- Force yourself to plan before coding
- Let the AI ask questions—resist the urge to jump to implementation
- Review plans carefully—this is where you catch issues
- Define success criteria before starting
- Iterate on your steering docs based on what the AI still gets wrong
Week 2-3: Try Parallel Work
- Set up git worktrees for two features
- Launch two AI sessions
- Practice context switching between them
- Notice how much faster this feels than serial work
- Refine your check-in cadence
Week 3+: Scale and Refine
- Add a third parallel workstream when comfortable
- Expand steering documents as patterns emerge
- Experiment with MCP servers for tooling
- Track your feature output to see the multiplier
The learning curve is real. The first week will feel slower than just coding. But by week three, you’ll start seeing the productivity gains. By month two, you’ll wonder how you ever worked any other way.
The Shift From Doing to Directing
The hardest part of this transition wasn’t learning new tools. It was accepting that my value as a developer was no longer measured by lines of code written.
I had to redefine what “being productive” meant. It’s not about typing faster or remembering more syntax. It’s about:
- Asking better questions upfront
- Providing clearer context
- Making smarter architectural decisions
- Catching issues in review before they become bugs
- Scaling my impact through systematic delegation
This is the new abstraction layer in software development. Just as high-level languages freed us from thinking about registers and memory addresses, AI assistants are freeing us from syntax and implementation details—letting us focus on architecture, requirements, and problem-solving.
The developers who make this shift aren’t replacing their skills. They’re amplifying them. The thinking that made you a good developer is exactly what makes you effective at directing AI assistants. You just need to apply it at a higher level.
Is there a learning curve? Absolutely. Does it feel weird at first? Completely. But the productivity gains are too significant to ignore.
If you’re frustrated with AI coding assistants, I’d encourage you to try this mental model shift. Start small—one steering document, one planned feature, one week of practice. See if thinking like a tech lead instead of a typist changes the experience.
It did for me. I’m shipping more, thinking better, and actually enjoying the work again.
The code still needs to be right. I’m just ensuring it’s right from a different level of abstraction—one that lets me scale my impact far beyond what I could achieve typing every line myself.