From Second Brain to Shared Brain
The hottest idea in AI this week is something a German sociologist figured out with index cards in 1953.
Andrej Karpathy published a gist last week describing a pattern he calls “LLM Wiki.” The idea: an LLM that builds and maintains a personal knowledge base — interlinked markdown files, structured summaries, cross-references updated automatically. You curate the sources and ask the questions. The LLM does the bookkeeping. Five thousand stars in four days.
His key analogy: “Obsidian is the IDE; the LLM is the programmer; the wiki is the codebase.”
It’s a sharp proposal. It’s also one that a German sociologist named Niklas Luhmann would have recognized immediately — from the system of 90,000 handwritten index cards he maintained for forty-five years, starting in 1953. Luhmann’s Zettelkasten is similar to what Karpathy describes: interlinked notes, cross-references, a persistent structure where knowledge compounds over time. He produced seventy books from it. The architecture wasn’t the hard part. The filing was.
Between Luhmann and Karpathy, an entire community spent decades working on this problem — Vannevar Bush’s Memex in 1945, the tools-for-thought movement, Roam Research and Obsidian, Tiago Forte’s Building a Second Brain in 2022. Karpathy walked into a room that’s been full for decades. But he brought something new through the door.
What he got right
The thing that kills every personal knowledge system isn’t the architecture — it’s the upkeep. Updating cross-references when your thinking evolves, keeping summaries current, noticing when something you wrote six months ago now contradicts what you learned last week. Most people who start a second brain quit within months. The bookkeeping grows faster than the value, and one day you stop opening the app.
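That bookkeeping is mechanical enough to sketch. Here is a minimal illustration, assuming an Obsidian-style vault of markdown files with `[[wikilink]]` cross-references; the function names and vault layout are mine, not Karpathy's:

```python
# A sketch of the upkeep a wiki demands, assuming an Obsidian-style
# vault: markdown files cross-referenced with [[wikilinks]].
# Names here are illustrative, not from Karpathy's gist.
import re
from pathlib import Path

# Matches [[Note]], [[Note|alias]], and [[Note#heading]], capturing "Note".
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def backlink_index(vault: Path) -> dict:
    """Map each link target to the set of notes that link to it."""
    index = {}
    for note in vault.rglob("*.md"):
        for target in WIKILINK.findall(note.read_text(encoding="utf-8")):
            index.setdefault(target.strip(), set()).add(note.stem)
    return index

def broken_links(vault: Path) -> set:
    """Link targets with no corresponding note -- the stale
    cross-references a human maintainer slowly stops fixing."""
    existing = {p.stem for p in vault.rglob("*.md")}
    return set(backlink_index(vault)) - existing
```

Nothing here requires an LLM; that is the point. The maintenance is tedious rather than hard, which is exactly the kind of work you can delegate to an agent.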
Karpathy’s solution — hand the maintenance to an LLM — is pragmatic. If you were never going to build the habit yourself, having an agent build a compounding knowledge base for you is vastly better than having nothing at all.
But his framing reveals an assumption: “You never write the wiki yourself.” The human curates sources and asks questions. The LLM writes, files, and maintains. You read it; the LLM writes it.
For people starting from zero, that division makes sense. But there’s another group — people who already built the habit, who already have a second brain they’ve been writing in for years. For them, the opportunity looks different.
The shared surface
I’ve kept a second brain in Obsidian for nearly five years — around 2,400 notes, wikilinked together, semantically searchable. I started building it before AI entered the picture, for the same reason people have always built these systems: I wanted to externalize my thinking so I could work with it.
When I read Karpathy’s proposal, I recognized the architecture immediately — but from the other direction. He was asking how to give an agent persistent memory. I’d already answered that question by accident, by handing the agent the memory system I’d built for myself.
The move wasn’t “LLM builds a wiki.” It was “come work where I already work.”
Karpathy’s framing is that you read the wiki and the LLM writes it. In my vault, we both write to it. The agent logs entries to my notes, and I read them the next morning. I update a project file with notes from a call, and the agent picks up the new context in its next session without my having to explain anything. When a family member emails an update about a medical appointment, the agent processes it into my file in the vault — the same file I open when I’m at the next appointment.
The project file carries the chronology — who did what, when, what’s next — and both of us contribute to it. Neither of us has to re-explain context over chat because the context lives in the notes we’re both already using.
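The shared project file can be sketched concretely. This is my illustration of the pattern, not the author's actual tooling: both parties append dated, attributed entries to one markdown log, and the agent reads the tail of that log at the start of a session instead of being re-briefed in chat.

```python
# A hedged sketch of the "shared surface": one project file that both
# human and agent append to. Format and function names are illustrative.
from datetime import date
from pathlib import Path

def append_entry(project_file: Path, author: str, text: str, on=None):
    """Add a dated, attributed bullet to the project's running log.
    Either party -- human or agent -- calls this the same way."""
    stamp = (on or date.today()).isoformat()
    with project_file.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} ({author}): {text}\n")

def session_context(project_file: Path, last_n: int = 5) -> list:
    """What the agent loads at session start: the most recent entries
    from the shared log, in place of a pasted-in briefing."""
    if not project_file.exists():
        return []
    lines = project_file.read_text(encoding="utf-8").splitlines()
    return [l for l in lines if l.startswith("- ")][-last_n:]
```

The design choice that matters is that there is one file, in the human's own vault, with one append path: whoever writes last, the other reads next, and mistakes are visible where they will actually be seen.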
This only works because the vault is my actual workspace, not a reference library I consult occasionally. If the agent wrote to a separate folder I never opened — the way most AI memory systems work — I’d never see what it got wrong, never catch a stale fact or a misunderstood priority. The errors would compound quietly into the agent’s model of my life, looking authoritative. A shared brain only stays honest when both parties are working in it.
What changes
A wiki the LLM maintains for you is an archive — historical and well-organized. But it’s the LLM’s understanding of your world, written in the LLM’s voice, looking backward. You visit it when you have a question about what happened.
A vault you both work in is a whiteboard. I start my morning and the agent already knows what I was working on yesterday, what’s blocked, what changed overnight — not because I briefed it, but because it was there. I don’t re-explain context. I don’t paste in background. I pick up a thread and the agent picks up the other end — like a teammate who was in the room yesterday and will be in the room tomorrow.

