To Think Is to Forget
How my agent and I learned to throw things away.
In 1942, Borges wrote a short story about a man who couldn’t forget anything.
Ireneo Funes, nineteen, gets thrown from a horse. He wakes up paralyzed — and unable to lose a single detail of anything he sees. He can recite Pliny in Latin from one reading, describe the exact shape of every cloud he saw on a particular morning in April.
It sounds like a gift. It isn’t. Funes can’t tell that the dog he saw at 3:14 pm, in profile, is the same dog he saw at 3:15 pm, from the front. To the rest of us they’re the same dog; to him they’re two completely different sights, both equally real, both equally vivid. He can’t generalize, can’t decide what matters, can’t act.
Borges puts it in one line:
“To think is to forget differences, generalize, make abstractions.”
Funes has lost the ability to throw things away. And without that, every decision becomes impossible.
Funes was fiction in 1942. He’s the agentic engineering problem of 2026.
Two years ago we had the opposite problem. Agents had no memory at all — every session a clean reset, every conversation tattooed onto sticky notes pasted into the next prompt. I wrote about it then and called them digital versions of Leonard Shelby from Memento. The fix was obvious: give them somewhere to put things. Vector stores, memory tools, persistent profiles — somewhere the conversation could land and stay.
But we didn’t really fix the problem. We just started remembering everything. And remembering everything is its own kind of broken.
We built our way out of Leonard Shelby and straight into Ireneo Funes.
Funes at the terminal
It started for me with my CLAUDE.md files. For the past year I’ve kept them at both the project level and the user level — long-ish text files that tell the agent how I work, what conventions I follow, what I care about on which project. The whole point is so I don’t have to re-explain myself every session. The agent reads the file, and it knows me.
For a while this worked. Then things started shifting — a project I’d been deep in for months wrapped up, a convention I used to care about stopped serving me, new things took their place. I’d add the new lines but forget to remove the old ones.
The agent kept reading both. It would ask me about the finished project as if it were still live. Push me toward the convention I’d quietly abandoned. Treat the old me and the new me as equally current. The answers started feeling a half-step off — not wrong exactly, just from a version of me that no longer existed.
It wasn’t wrong in the way an amnesiac agent is wrong. It wasn’t missing context. It had every piece of context I’d ever given it, sitting there at full fidelity, and that was the problem.
The dog at three-fourteen and the dog at three-fifteen.
The pass that decides
What I needed wasn’t a bigger file. I needed to do something with the file — a pass that read everything and decided what was still true.
There’s a name for that operation, and we run it every night. Sleep isn’t really rest; it’s compression. The brain replays the day, throws most of it away, and writes a shorter version of what was worth keeping. You don’t wake up with yesterday’s full transcript. You wake up reoriented — slightly different from who you were when you closed your eyes.
Funes never gets that. The horse fell on him and he hasn’t slept properly since — every detail still there, undimmed, accumulating.
So I built a thin version for the agent. A skill called /sleep. I broke the monolithic CLAUDE.md into smaller files, one for each part of my life — me, my work, my projects, my systems — each carrying its own last-reviewed date. When I run /sleep, the agent reads the files, flags stale lines, contradictions, things I’ve outgrown, and asks me about each one before changing anything. The goal is to keep them small.
Last week, Anthropic shipped the industrial version of the same idea. Their feature is called Dreams: point it at a memory store and up to a hundred past session transcripts, and it runs a separate job — minutes, sometimes tens of minutes — that reads everything and writes a new memory store. You review the output and decide whether to adopt it.
I’m not adopting it. The shape is the same as /sleep; the mechanics are different. Dreams batches everything asynchronously and produces a memory store you accept wholesale. /sleep is interactive, partitioned, and slow on purpose — the work of deciding what to keep is the part I still want to do myself.
But the striking thing isn’t that I built one version and Anthropic built another. It’s that the same shape keeps getting invented. Stanford’s Generative Agents paper called it reflection in 2023 — agents pausing to synthesize higher-level abstractions from accumulated memories. Letta shipped sleep-time compute in 2025 — a second agent rewriting the primary’s memory in the background; I borrowed the name from them. Anthropic now calls it dreaming. The names change. The operation doesn’t: read everything, decide what was worth keeping, throw the rest away.
Forgetting is where memory architectures go when they mature.
My agent isn’t smarter than it was last week. The weights haven’t moved; the model is the same. But after /sleep, the next session starts with a slightly different agent — and a slightly different me.
Funes never gets to be slightly different. The horse threw him in 1882, and he’s been the same person, holding the same teeming day, ever since.
The rest of us get to wake up reoriented — to find some things have shrunk overnight, some have stayed, some are gone entirely.

