Sorry I’m so late in updating my Claude Code series. If you’ve been following the news, you’ve probably seen a ton of articles over the last couple of weeks about Claude Code and what a revolutionary piece of software it is for programmers.
The thing I think is worth noting is that these pieces are written more by software developers than by empirical social scientists or economists. Honestly, I think very little of what I’ve seen even comes close to addressing the kind of worker I see as the target audience and typical reader of my Substack. And I think that’s because so far, if you read closely between the lines of all the alleged productivity gains from AI for programmers, it really has mostly been the computer science tribe.
Which isn’t to say that empirical social scientists aren’t using AI, because they certainly are. I just mean that there is enough of a gap between the kind of use you see presented at large and the type of worker and work in the realm of social science that it warrants a separate explanation, if only to translate what use cases (beyond trivial ones) there are. So I’m going to try to do that more.
This will be a rambling post. I keep trying to think of a way to organize it, but it’s too much work. So I’m just going to write little sections.
Boris Cherny, an Economics Major, Invented Claude Code in Late 2024
Before I dig into the actual workflow stuff, let me tell you what I’ve learned about the creator of Claude Code. Yes, it was created at Anthropic, but it was created by accident, too. The person who built Claude Code is named Boris Cherny. Here’s what I’ve learned about him.
- Boris wasn’t an AI researcher.
- He studied economics at UC San Diego, graduating in 2011.
- He taught himself to program, began working at startups when he was 18, and eventually wrote a well-regarded O’Reilly book on TypeScript.
- He spent eight years at Meta, rising to Principal Engineer, a senior individual contributor role.
- He led engineering for Facebook Groups.
- He joined Anthropic in September 2024, and not to build Claude Code. Rather, he joined to work on the Claude chatbot more generally.
If he wasn’t hired to make Claude Code, and he made Claude Code, then what happened? Well, that’s an interesting story in and of itself. From what I’ve been able to gather, what happened next came from a habit Boris has talked about in interviews: he builds side projects. He’s said that most of his career growth came from tinkering on things outside his main job. When he hires people, he looks for the same pattern: people with hobbies, side quests, passion projects. “It shows curiosity and drive,” he’s said.
First, let me just say that this was genuinely encouraging to hear, because I also build side projects. Mixtape Sessions is a side project. My podcast is a side project. This Substack is a side project. I have way too many side projects to list. When people ask me what my hobbies are, I basically say, sheepishly, something like “I’m trying to build an academic genealogy of Orley Ashenfelter, a labor economist at Princeton’s Industrial Relations Section …” Many of these I just want to work on, otherwise I’ll die. So it’s nice to know that some people think it’s actually a good thing.
Anyway, when Boris got to Anthropic, he immediately started tinkering with Claude. He wanted to learn the Claude API, so he built a little terminal tool that connected to Claude. And initially, the first version of Claude Code could tell him what song was playing on his computer.
Then he had a conversation with a PM at Anthropic named Cat Wu, who was researching AI agents. And that conversation sparked an idea. What if he gave Claude access to more than just the music player? What if he gave it access to the filesystem? To bash?
So he tried it. I’ll paraphrase and dramatize what happened next.
“The result was astonishing. … Claude began exploring my codebase on its own. I’d ask a question, and Claude would autonomously open a file, discover that it imported other modules, then open those files too. It went on until it found a good answer. … Claude exploring the filesystem was mind-blowing to me because I’d never used any tool like this before.”
Look at that closely. He was shocked by what he had done. Claude shocked him. Why? Because he didn’t teach Claude how to navigate his codebase. He didn’t program anything algorithmic at all. He didn’t write “when you see this import statement, open that file.” Rather, he simply gave Claude access to the filesystem, which gave Claude the ability to read files, and Claude immediately knew what to do with it.
So, how does Claude know how to read the files in the filesystem if Claude was not designed to do that, and no one had ever programmed him to do that? That’s the million-dollar question. And the answer turns out to be hidden in plain sight.
Claude was trained on billions of lines of code. But it isn’t just the code as syntax. This is the key, and it’s related to something David Autor has written about concerning the computerization of work: computers outperform humans when the work can be written down as a series of steps, and yet AI (or LLMs, rather) can’t do that kind of algorithmic work well at all.
However, it can do well the kind of work that can’t be written down, the kind based on knowledge that is latent but cannot be communicated between humans. Autor calls this Polanyi’s Paradox: we know more than we can tell.
Well, here’s the deal: LLMs can’t follow algorithms at all well. Which is why, when people ask one to do tasks that are more or less algorithmic in nature, it sucks at them. “Find me the cites for this,” and it comes back with hallucinated texts. But ask it to try to uncover the meaning in something, and it can. Why?
Because embedded in human speech are several things: there’s the syntax, but there’s also the inchoate meaning behind the words. Humans pick that up, and apparently so does Claude, and so does ChatGPT. Many of us knew that from the chatbots, which is what made them all seem so human-like, but apparently, because Claude was trained on billions of lines of code, something similar is going on when it comes to projects as well.
Code is more than just syntax. It’s not merely documentation for Stata and R. Rather, code comes in context. It’s tutorials, documentation, Stack Overflow posts, Stata listserv posts, GitHub repositories with their full histories. Claude has seen all of it: countless examples of how programmers actually work. In fact, parts of the work that even the programmers themselves may not really recognize as the work. Claude sees them opening files, noticing imports, following those imports, understanding the various dependencies, then returning. Back and forth a hundred times. Claude saw all of it.
He saw not just the syntax of the code. He saw the project. Code is never the goal in anything. The project is the goal. And Claude has reviewed code, but more important than that, Claude has reviewed the projects.
This is the knowledge that Autor has emphasized AI, and LLMs in particular, can access: the latent knowledge contained in human speech. And if you have the latent knowledge, and you also have the syntax, whatever it is, whatever the medium, then you have a very large share of what’s required to complete a project.
Conclusion
I’m going to stop there. I think these posts need to be digestible, and this is an easy history piece as well as a conceptual piece about Claude Code, but I want to stop for now so that the next posts can focus more on my own particular workflow. I want to continue to emphasize to readers, though, that Claude Code is not merely the chatbot Claude, even though the chatbot Claude and Claude Code are both based on 4.5, which is a very powerful LLM.
I also want to emphasize that Claude Code is not just another version of GitHub Copilot, nor of Cursor AI, both of which some of you have probably heard of but didn’t want to invest your own time into. So you’ve been doing more of the copy-paste method, using ChatGPT and Claude to “do stuff”. If the AI agent isn’t rummaging around the files on your computer “doing stuff”, like reading things, writing things, even running regressions, then you haven’t experienced this yet.
Claude Code is an experience good. Until you experience it, you will not appreciate how revolutionary it is. But you will experience it; trust me, you will, and most likely very soon. Once you do, you’ll realize, as I did, that there is no turning back. And all the complaining about how AI is destroying the world will become something you are mildly curious about and mostly resigned to. You’ll switch. You have to experience it first to know that I’m right, though. But if all you have as a conceptual mental model of what Claude Code is and can do is a chatbot, and you’ve been particularly skeptical about chatbots’ capacity to do creative work, first of all I’ll just say I think you are confusing user error with chatbot error in general. I’ve rarely heard someone say they could not get a chatbot to do something that I’ve found I’ve had it do a hundred times over. Usually it’s just complaining for the sake of complaining.
But put that aside. It doesn’t matter. Until you see Claude Code fire up a directory of one of your projects and run around in it, you won’t know. The real killer app, though, is the decks Claude Code will make for you. I’m positive that for many people, it will be when they see it make a deck in beamer for them, with them only describing the deck they want in words like,
“I want you to make the most original, beautiful deck, with beautiful figures and beautiful tables, following an unknown latent theory of the rhetoric of decks themselves, which I know you know because you have literally read every single deck written in the history of humanity, about my paper and my code and my tables and my figures. I want this to be a deck that anyone, an intelligent layperson, would want to pay attention to. You can use whatever theme you want, but I want the final product to be so original and unique to this project that no one could even detect what that original theme was.”
When you see the deck that comes out of that, you’ll say, “Anthropic, take all my money.”
I’ll talk more about this later, and show some decks I feel comfortable sharing, but trust me: 2026 is going to be the year of Claude Code for you.
