No serious developer still expects AI to magically do their work for them. We've settled into a more pragmatic, albeit still slightly uncomfortable, consensus: AI makes a great intern, not a replacement for a senior developer. And yet, if that is true, the corollary is also true: if AI is the intern, that makes you the manager.
Unfortunately, most developers aren't great managers.
We see this every day in how developers interact with tools like GitHub Copilot, Cursor, or ChatGPT. We toss out vague, half-baked instructions like "make the button blue" or "fix the database connection" and then act shocked when the AI hallucinates a library that hasn't existed since 2019 or refactors a critical authentication flow into an open security vulnerability. We blame the model. We say it isn't good enough yet.
But the problem usually isn't the model's intelligence. The problem is our lack of clarity. To get value out of these tools, we don't need better prompt-engineering tricks. We need better specs. We need to treat AI interaction less like a magic spell and more like a formal delegation process.
We need to be better managers, in other words.
The missing skill: specification
Google engineering manager Addy Osmani recently published a masterclass on this exact topic, titled simply "How to write a good spec for AI agents." It is one of the most practical blueprints I've seen for doing the job of AI manager well, and it's a natural extension of some core ideas I laid out recently.
Osmani isn't trying to sell you on the sci-fi future of autonomous coding. He's trying to keep your agent from wandering, forgetting, or drowning in context. His core point is simple but profound: throwing a massive, monolithic spec at an agent often fails because context windows and the model's attention budget get in the way.
The solution is what he calls "good specs." These are written to be useful to the agent, durable across sessions, and structured so the model can follow what matters most.
This is the missing skill in most "AI will 10x developers" discourse. The leverage doesn't come from the model. The leverage comes from the human who can translate intent into constraints and then translate output into working software. Generative AI raises the premium on being a senior engineer. It doesn't lower it.
From prompts to product management
If you have ever mentored a junior developer, you already know how this works. You don't simply say "Build authentication." You lay out the specifics: "Use OAuth, support Google and GitHub, keep session state server-side, don't touch payments, write integration tests, and document the endpoints." You provide examples. You call out landmines. You insist on a small pull request so you can inspect their work.
Osmani translates that same management discipline into an agent workflow. He suggests starting with a high-level vision, letting the model expand it into a fuller spec, and then editing that spec until it becomes the shared source of truth.
This "spec-first" approach is quickly becoming mainstream, moving from blog posts into tools. GitHub's AI team has been advocating spec-driven development and released Spec Kit to gate agent work behind a spec, a plan, and tasks. JetBrains makes the same argument, suggesting that you need review checkpoints before the agent starts making code changes.
Even Thoughtworks' Birgitta Böckeler has weighed in, asking an uncomfortable question that many teams are quietly dodging. She notes that spec-driven demos tend to assume the developer will do a bunch of requirements-analysis work up front, even when the problem is unclear or large enough that product and stakeholder processes normally dominate.
Translation: if your team already struggles to communicate requirements to humans, agents will not save you. They'll amplify the confusion, just at a higher token rate.
A spec template that actually works
A good AI spec isn't a request for comments (RFC). It's a tool that makes drift expensive and correctness cheap. Osmani's suggestion is to start with a concise product brief, let the agent draft a more detailed spec, and then correct it into a living reference you can reuse across sessions. That's great, but the real value comes from the specific components you include. Based on Osmani's work and my own observations of successful teams, a useful AI spec needs several non-negotiable elements.
First, you need objectives and non-goals. It isn't enough to write a paragraph about the goal. You must list what is explicitly out of scope. Non-goals prevent accidental rewrites and "helpful" scope creep, where the AI decides to refactor your entire CSS framework while fixing a typo.
Second, you need context the model won't infer. This includes architecture constraints, domain rules, security requirements, and integration points. If it matters to the business logic, you have to say it. The AI cannot guess your compliance boundaries.
Third, and perhaps most importantly, you need boundaries. You need explicit "don't touch" lists. These are the guardrails that keep the intern from deleting the production database config, committing secrets, or modifying the legacy vendor directories that hold the system together.
Finally, you need acceptance criteria. What does "done" mean? This should be expressed in checks: tests, invariants, and a few edge cases that tend to get missed. If you're thinking that this sounds like good engineering (or even good management), you're right. It is. We're rediscovering the discipline we had been letting slide, dressed up in new tools.
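To make the last element concrete: acceptance criteria translate naturally into executable checks. Here's a minimal sketch, under my own assumptions (the `slugify` function and its rules are hypothetical stand-ins for whatever the agent delivers, not anything from Osmani's template):

```python
import re

def slugify(title: str) -> str:
    """Turn an article title into a URL slug (hypothetical deliverable)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug[:80]  # spec invariant: slugs never exceed 80 characters

# Criterion 1: "done" means the happy path works.
assert slugify("Writing Good Specs for AI Agents") == "writing-good-specs-for-ai-agents"
# Criterion 2: invariant -- output contains only lowercase letters, digits, hyphens.
assert re.fullmatch(r"[a-z0-9-]*", slugify("Hello, World!"))
# Criterion 3: the edge case that tends to get missed -- empty input.
assert slugify("") == ""
```

Criteria in this form leave the agent no room to argue about what "done" means: the work is finished when the assertions pass.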
Context is a product, not a prompt
One reason developers get frustrated with agents is that we treat prompting like a one-shot activity, and it isn't. It's closer to setting up a work environment. Osmani points out that large prompts often fail not only because of raw context limits but because models perform worse when you pile on too many instructions at once. Anthropic describes this same discipline as "context engineering." You must structure background, instructions, constraints, tools, and required output so the model can reliably follow what matters most.
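One way to internalize "context as a product" is to stop concatenating strings and start treating the context as structured data with a fixed rendering order. The sketch below assumes nothing about any particular tool's API; the class and field names are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class AgentContext:
    """Structured context for an agent task -- assembled, reviewed, and reused
    like any other artifact, rather than retyped ad hoc in a chat box."""
    background: str          # what the system is and why the task exists
    instructions: list[str]  # ordered, specific directives
    constraints: list[str]   # hard boundaries ("don't touch" items)
    output_format: str       # what "done" looks like structurally

    def render(self) -> str:
        # Serialize the sections in a fixed, predictable order so nothing
        # important gets buried mid-prompt.
        return "\n\n".join([
            "## Background\n" + self.background,
            "## Instructions\n" + "\n".join(f"- {i}" for i in self.instructions),
            "## Output format\n" + self.output_format,
            "## Hard constraints\n" + "\n".join(f"- {c}" for c in self.constraints),
        ])

ctx = AgentContext(
    background="Flask API for invoicing; Postgres via SQLAlchemy.",
    instructions=["Add a /health endpoint returning JSON status."],
    constraints=["Do not modify migrations/", "Do not add new dependencies"],
    output_format="A single patch touching app/routes.py plus one test.",
)
prompt = ctx.render()
```

Because the context is data, it can live in the repo, be diffed in code review, and be replayed across sessions, which is exactly the durability Osmani is asking for.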
This shifts the developer's job description toward something like "context architect." A developer's value isn't in knowing the syntax for a particular API call (the AI knows that better than we do), but rather in knowing which API call is relevant to the business problem and ensuring the AI knows it, too.
It's worth noting that Ethan Mollick's post "On-boarding your AI intern" puts this in plain language. He says you have to learn where the intern is useful, where it's annoying, and where you shouldn't delegate because the cost of errors is too high. That is a fancy way of saying you need judgment. Which is another way of saying you need expertise.
The code ownership trap
There is a danger here, of course. If we offload the implementation to the AI and focus only on the spec, we risk losing touch with the reality of the software. Charity Majors, CTO of Honeycomb, has been sounding the alarm on this specific risk. She distinguishes between "code authorship" and "code ownership." AI makes authorship cheap, near zero. But ownership (the ability to debug, maintain, and understand that code in production) is becoming expensive.
Majors argues that when you overly rely on AI tools, when you supervise rather than do, your own expertise decays quite rapidly. This creates a paradox for the "developer as manager" model. To write a good spec, as Osmani advises, you need deep technical understanding. If you spend all your time writing specs and letting the AI write the code, you may slowly lose that deep technical understanding. The solution is likely a hybrid approach.
Developer Sankalp Shubham calls this "driving in lower gears." Shubham uses the analogy of a manual-transmission car. For simple, boilerplate tasks, you can shift into a high gear and let the AI drive fast (high automation, low control). But for complex, novel problems, you need to downshift. You might write the pseudocode yourself. You might write the rough algorithm by hand and ask the AI only to write the test cases.
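In practice, downshifting might look like the sketch below: the core logic is hand-written because it's the novel, risky part, while the repetitive test cases at the bottom are the kind of work you'd delegate to the AI. The rate-limiter example is my own illustration, not Shubham's:

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most max_calls within any trailing window of window_s seconds.
    Hand-written "low gear" code: the part you keep direct control over."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self._stamps: deque = deque()  # timestamps of allowed calls

    def allow(self, now: float) -> bool:
        # Evict timestamps that have aged out of the window.
        while self._stamps and now - self._stamps[0] >= self.window_s:
            self._stamps.popleft()
        if len(self._stamps) < self.max_calls:
            self._stamps.append(now)
            return True
        return False

# "High gear" work you could delegate: boilerplate test cases like these.
lim = SlidingWindowLimiter(max_calls=2, window_s=10.0)
assert lim.allow(0.0) is True
assert lim.allow(1.0) is True
assert lim.allow(2.0) is False   # window is full
assert lim.allow(11.0) is True   # the first call has aged out
```

The division of labor is the point: the invariant lives in code you wrote and understand, so delegating the test scaffolding costs you no ownership.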
You remain the driver. The AI is the engine, not the chauffeur.
The future is spec-driven
The irony in all this is that many developers chose their career specifically to avoid becoming managers. They like code because it's deterministic. Computers do what they're told (mostly). Humans (and by extension, interns) are messy, ambiguous, and require guidance.
Now, developers' primary tool has become messy and ambiguous.
To succeed in this new environment, developers need to develop soft skills that are actually quite hard. You need to learn how to articulate a vision clearly. You need to learn how to break complex problems into isolated, modular tasks that an AI can handle without losing context. The developers who thrive in this era won't necessarily be the ones who can type the fastest or memorize the most standard libraries. They will be the ones who can translate business requirements into technical constraints so clearly that even a stochastic parrot cannot mess it up.
