From Writing Code to Managing Agents: The AI-Native Engineer
The job title still says "software engineer." But the job itself? It's changing faster than most engineers realize.
In a recent talk on the EO channel, Mihail Eric — AI lead at a San Francisco startup and instructor at Stanford University — laid out a vision that should make every developer sit up. The era of the engineer who only writes code is ending. The era of the engineer who manages AI agents has begun.
The shift isn't hypothetical. It's happening in production codebases right now. And the engineers who adapt will define the next decade of software.
The AI-Native Engineer: A New Breed
Eric describes a new generation entering the workforce: the AI-native engineer. These aren't developers who occasionally use Copilot for autocomplete. They are engineers who think in agentic workflows from day one.
But here's what makes Eric's take refreshing — he's not saying traditional skills don't matter. The opposite. AI-native engineers need a strong foundation in programming, system design, and algorithmic thinking. Without it, you can't evaluate what an agent produces. You can't debug its failures. You can't architect systems that agents can actually navigate.
The difference is that on top of this foundation, AI-native engineers are fluent in a second language: orchestrating autonomous agents to do the work that used to require typing every line by hand.
Think of it as the difference between a musician who only plays and a conductor who shapes an entire orchestra. You still need to understand music. But the output scales dramatically.
From Coder to Manager: The Mindset Shift
Eric makes a provocative comparison: managing AI agents is a lot like managing human interns.
You wouldn't hand an intern your entire codebase on day one and say "refactor the authentication system." You'd start small. Give them a contained task. Review their output. Gradually expand their scope as trust builds.
The same applies to agents. Eric recommends an iterative delegation approach:
- Master single-agent workflows first. Get one agent working reliably on a well-scoped task before adding complexity.
- Establish clear boundaries. Define what the agent can and cannot touch. Scope its access to specific files, modules, or operations.
- Add agents incrementally. Only introduce a second or third agent for isolated, parallel tasks once the first is stable.
- Monitor constantly. Know when agents are stuck. Know when they're hallucinating. Know when to step in.
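The boundary-setting and review steps above can be made mechanical rather than left to discipline. Here is a minimal Python sketch of that idea — the agent, the shape of its proposed edits, and the allowed paths are all hypothetical, not any specific framework's API:

```python
# Illustrative sketch only: enforcing "clear boundaries" on a delegated task.
# The agent, its edit format, and these paths are hypothetical examples.
ALLOWED_PATHS = ("src/auth/", "tests/test_auth.py")  # the agent's explicit scope

def is_in_scope(path: str) -> bool:
    """Accept an edit only if it targets a path inside the agreed boundary."""
    return any(path.startswith(prefix) for prefix in ALLOWED_PATHS)

def review_edits(proposed_edits: dict[str, str]) -> dict[str, str]:
    """Review step: keep in-scope edits, flag everything else for a human."""
    accepted = {p: c for p, c in proposed_edits.items() if is_in_scope(p)}
    blocked = sorted(set(proposed_edits) - set(accepted))
    if blocked:
        print(f"Blocked out-of-scope edits: {blocked}")
    return accepted

# An agent proposes two edits; only the one inside its scope survives review.
edits = {
    "src/auth/login.py": "# refactored login flow",
    "src/billing/invoice.py": "# drive-by change outside the task",
}
print(sorted(review_edits(edits)))
```

Widening `ALLOWED_PATHS` over time is the code-level equivalent of expanding an intern's responsibilities as trust builds.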
The top 1% skill that separates great AI-native engineers from everyone else? According to Eric, it's context-switching — the ability to keep track of what multiple agents are doing simultaneously, spot when one goes off the rails, and course-correct in real time.
This is management. Not people management, but intelligence management. And it requires a fundamentally different way of thinking about your workday.
Building Agent-Friendly Codebases
Here's where Eric's talk gets deeply practical. AI agents are only as good as the environment they operate in. Drop an agent into a messy, inconsistent codebase and it will compound errors at machine speed.
The solution? Build your codebase like you're onboarding the world's most literal intern.
Tests Are Contracts
Agents don't have intuition. They can't look at code and think "this feels wrong." They need explicit contracts — and that means tests.
Your test suite isn't just quality assurance anymore. It's the specification that tells agents whether their code is correct. Without comprehensive test coverage, agents are flying blind. They'll generate code that looks right, passes a syntax check, and silently breaks your application.
If you want agents to contribute meaningful code, invest in tests first.
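A toy example makes the "tests as contracts" idea concrete. The `slugify` helper below is hypothetical; the point is that the test, not the implementation, defines what "correct" means for whoever — human or agent — writes the function:

```python
# Toy contract: the test pins down correct behavior for a hypothetical
# slugify helper, regardless of whether a human or an agent wrote it.
import re

def slugify(title: str) -> str:
    """Reference implementation an agent might be asked to produce or refactor."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_contract():
    # The contract: lowercase, hyphen-separated, no leading/trailing hyphens.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  AI-Native Engineers  ") == "ai-native-engineers"
    assert slugify("already-a-slug") == "already-a-slug"

test_slugify_contract()
print("contract satisfied")
```

An agent can rewrite `slugify` however it likes; the test suite is the fixed specification it must satisfy.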
Consistency Over Cleverness
If your codebase has two different patterns for the same operation — say, two different ways to make API calls, or two different error-handling conventions — a human developer will be confused. An AI agent will be even more confused.
Eric emphasizes that codebases must be robust, airtight, and self-consistent:
- Your README must match the actual code. If the docs say one thing and the code does another, the agent will pick one at random.
- Use consistent design patterns across modules. One way to handle auth. One way to structure API routes. One convention for error responses.
- Keep your dependencies clean. Conflicting or deprecated packages create ambiguity that agents handle poorly.
The principle is simple: if it would confuse a new hire, it will confuse an agent. Except the agent won't ask for clarification — it'll just write bad code confidently.
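As a sketch of what "one way to make API calls" can look like in practice: a single shared helper that every module imports, so an agent sees exactly one pattern to imitate. The base URL and the error convention here are illustrative choices, not prescriptions:

```python
# Sketch of "one way to do each thing": every module uses this single helper
# instead of rolling its own HTTP code. The base URL is a placeholder and the
# error convention is one illustrative choice among many.
import json
from urllib import error, request

API_BASE = "https://api.example.com"

def build_url(path: str) -> str:
    """The one convention for building request URLs."""
    return f"{API_BASE}/{path.lstrip('/')}"

def api_get(path: str, timeout: float = 5.0) -> dict:
    """The one convention for GETs: same URL building, same JSON decoding,
    same error shape everywhere."""
    try:
        with request.urlopen(build_url(path), timeout=timeout) as resp:
            return json.load(resp)
    except error.URLError as exc:
        # One error convention: a uniform RuntimeError with full context.
        raise RuntimeError(f"GET {build_url(path)} failed: {exc}") from exc
```

With a single sanctioned pattern like this, an agent asked to "add an endpoint call" has only one example to copy — and copies it correctly.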
Documentation as Interface
This extends beyond READMEs. Every module, every API, every configuration file is part of the interface that agents use to understand your system. Think of documentation not as something you write for future humans, but as something you write for the agents working alongside you right now.
The Value of Taste
Eric draws a line between functional software and incredible software. The gap? Developer taste.
AI agents can generate working code. They can scaffold features, write tests, and wire up APIs. But the last 10% — the polish, the edge cases, the UX details that make users love a product — that still comes from human judgment.
Eric calls this the "last mile" problem. Agents get you 90% of the way there quickly. But the developer who ships great software is the one who takes that output and refines it with taste: better naming, cleaner error messages, smoother interactions, tighter performance.
And there's a meta-lesson here too. Eric notes that even top AI labs like Anthropic are constantly rewriting and experimenting with their own tools. Nothing is settled. The best tools today will be replaced in six months.
AI-native developers must adopt the same posture: hack, test, iterate. Don't get married to a workflow. Don't assume your current agent setup is optimal. Keep experimenting. The developers who will thrive aren't the ones who found the right tool — they're the ones who never stop looking.
Why Junior Developers Have the Edge
This might be the most counterintuitive insight from Eric's talk. In a brutal job market for entry-level engineers, junior developers actually have a hidden advantage in the AI-native era.
Why? Because senior developers with 20 years of experience are often entrenched in their habits. They've built muscle memory around workflows that pre-date AI agents. Asking them to fundamentally change how they work meets real psychological resistance.
Junior developers, by contrast, are sponges. They have:
- No legacy habits to unlearn
- Healthy naivety that makes them willing to try approaches seniors would dismiss
- Nimbleness to rapidly adopt new tools and workflows
- Entrepreneurial bravery to tackle hard problems without overthinking
The playing field is being reset. Experience still matters — but adaptability matters more. A junior developer who masters agentic workflows in their first year may outproduce a senior who refuses to change.
If you're early in your career, this is your moment. Lean into it.
The Future: AI-Native Organizations
In Eric's talk, Rem Koning — Professor at Harvard Business School — zooms out to the organizational level. His insight reframes the entire conversation.
The future of business, Koning argues, depends on the ability to allocate intelligence. Not just human intelligence. Machine intelligence.
True AI-native organizations won't just use AI to help developers write code faster. They'll embed AI directly into the product — agents that interact with customers, process orders, handle support, and make decisions without a human in the loop.
And then comes the trillion-dollar question: what happens when AI agents start talking to each other?
Not human-to-agent. Agent-to-agent. Autonomous systems negotiating, collaborating, and making decisions at machine speed. This isn't science fiction — it's the logical next step once you have reliable agents operating in well-structured environments.
The companies that figure this out first won't just have a competitive advantage. They'll be operating on a fundamentally different plane.
What You Should Do Today
If you've read this far, you're probably wondering where to start. Here's a practical checklist based on Eric's framework:
For your codebase:
- Audit your test coverage. Gaps in tests are gaps in agent capability.
- Standardize your patterns. One way to do each thing.
- Update your documentation. Make it accurate, not aspirational.
- Remove dead code and conflicting conventions.
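The first checklist item — auditing test coverage — can start with something as simple as flagging source modules that have no matching test file. This sketch assumes a conventional layout (code under `src/`, tests named `tests/test_<module>.py`); adjust to your own structure:

```python
# Quick audit sketch: flag source modules with no matching test file.
# Layout assumptions: code in src/, tests named tests/test_<module>.py.
from pathlib import Path

def modules_missing_tests(src: Path, tests: Path) -> list[str]:
    """Return source module names that lack a tests/test_<name>.py companion."""
    missing = []
    for mod in sorted(src.rglob("*.py")):
        if mod.name == "__init__.py":
            continue
        if not (tests / f"test_{mod.name}").exists():
            missing.append(mod.name)
    return missing
```

Untested modules it reports are exactly the places where an agent has no contract to check its work against.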
For your workflow:
- Start with one agent on one contained task. Master that before scaling.
- Build feedback loops. Review agent output like you'd review a junior's PR.
- Track what works and what doesn't. Keep a log of agent successes and failures.
- Experiment weekly with new tools and approaches.
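Tracking agent successes and failures doesn't need tooling — an append-only log of one JSON line per delegated task is enough to start. The fields below are illustrative; record whatever you actually want to review:

```python
# Minimal sketch of an agent-outcome log: one JSON line per delegated task.
# The field names are illustrative; adapt them to what you want to track.
import json
from datetime import datetime, timezone
from pathlib import Path

def log_outcome(logfile: Path, task: str, success: bool, notes: str = "") -> None:
    """Append one record so successes and failures are reviewable later."""
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "task": task,
        "success": success,
        "notes": notes,
    }
    with logfile.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def failure_rate(logfile: Path) -> float:
    """Crude summary: fraction of logged tasks that failed."""
    entries = [json.loads(line) for line in logfile.read_text().splitlines()]
    if not entries:
        return 0.0
    return sum(not e["success"] for e in entries) / len(entries)
```

Even a crude failure rate per task type tells you which work is safe to delegate and which still needs a human at the keyboard.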
For your career:
- Invest in fundamentals (system design, algorithms) — they're your quality filter for agent output.
- Practice context-switching across multiple parallel tasks.
- Build taste through shipping and iterating, not just coding.
- Stay humble. The best workflow today won't be the best in six months.
The transition from writing code to managing agents isn't optional. It's already underway. The question is whether you'll be leading it or catching up.
This article is based on Mihail Eric's talk "From Writing Code to Managing Agents" on the EO channel, featuring insights from Harvard Business School Professor Rem Koning.