The Promotion I Missed
What managing a new hire taught me about working with AI
I was writing a skill file for Claude Code when a line in Anthropic’s documentation stopped me. It said to explain why the skill was needed — not just what it should do, but why it existed in the first place.
I’d been writing prompts as step-by-step procedures — edge cases spelled out, fallback behaviors for when things went wrong. The kind of detail you give someone when you don’t trust them to figure it out on their own.
But here was the documentation saying: just tell it why.
I knew this moment. I’d been on the other side of it before — not with software.
Kate
I hired Kate (not her real name) straight out of college. Computer science degree, disciplined, the kind of work ethic where you’d assign something Friday afternoon and find it done by Monday morning. But she needed the task spelled out — not because she wasn’t smart, but because she was new. No different than I had been at her age. So I gave her detailed specs, walked through edge cases, flagged where things could go wrong. She executed well within those boundaries, and the boundaries were tight.
Anyone who’s managed people knows what happened next. The instructions loosened. Her questions changed — not “how should I do this?” but “are we solving the right problem?” The projects changed too: early on it was “implement this feature, here are the requirements,” and later it was “this part of the system isn’t working for users — figure out why.”
At some point my 1:1s with Kate stopped being about what to do and started being about why we were doing it. Context, priorities, the business problem behind the technical one. I was managing her the way my best managers had managed me — with intent, not instruction.
And then Kate started managing others. She onboarded new hires with the same careful detail I’d once given her — spelling out the specs, walking through edge cases, flagging where things could go wrong. The promotion, when it came, was a formality. She’d been operating at that level for months. We just hadn’t updated the title yet.
Claude
I started working with Claude early on, back when the model couldn’t stay focused for long. Its context window was small and it would lose the thread of a conversation halfway through. So I learned to spell things out. I’d provide examples of the output I wanted so it could mimic the pattern. I’d write “let’s think step by step” because it couldn’t reason unless you told it to. I’d open with “act as a senior software engineer” to narrow the field of possible responses. The boundaries were tight — just like they’d been with Kate. And like Kate, it executed well within them.
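For anyone who came to these tools later, the scaffolding looked something like this. A composite sketch, not a prompt pulled from my actual history:

```
Act as a senior software engineer. Let's think step by step.

Task: write a function that validates a US phone number.
1. Strip spaces, dashes, and parentheses.
2. Check that exactly ten digits remain.
3. Return true or false, nothing else.

Example input:  (555) 867-5309
Example output: true
```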
Anyone who’s been using these tools knows what happened next. The instructions loosened. “Let’s think step by step,” the technique that defined an era of prompting, the skill companies once paid $335,000 a year for someone to get right, now barely improves performance; on some models it actually hurts. The model internalized the technique the way Kate internalized project management. It just does it now, without being told. My prompts changed the way Kate’s questions had changed: less “here’s exactly what to do” and more “here’s what’s going wrong and why it matters.”
And then Claude started managing others. I used to open multiple sessions myself: one for writing, one for review, one for research. Now Claude spins up subagents on its own, briefing each one with the same careful detail I once gave it.
The Recognition
Since December, I’ve found myself in new territory with the models. The latest step up feels like a promotion, but as with Kate, it’s taking some reorientation.
With Kate, the hard part wasn’t her readiness — it was mine. I kept handing her detailed specs after she’d outgrown them, still seeing the new hire I’d onboarded instead of the senior engineer she’d become. The unlock wasn’t just her leveling up. It was me noticing that she had.
I went back to my skill file. This time, instead of spelling out procedures and edge cases, I found myself doing what every good manager eventually learns to do: starting with the why. What problem I was trying to solve. What good looked like. The kind of context you give someone when you trust them to figure out the how.
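The rewrite ended up looking roughly like this. The skill is a hypothetical stand-in, but the shape matches what Claude Code reads: a SKILL.md with a short YAML frontmatter, and a body that now opens with the problem instead of a procedure.

```markdown
---
name: release-notes
description: Drafts release notes from merged pull requests
---

Release notes exist so users can decide whether an upgrade is worth
their time. Lead with changes that affect behavior, not internal
refactors. Good output reads like a colleague explaining what changed
and why it matters, not a changelog dump. If you're unsure whether
something matters to users, include it and say why.
```

The old body was the procedure and the edge cases. The new one fits in a paragraph.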
Ethan Mollick wrote in 2023 that we should treat AI like an intern. It was good advice at the time. But interns don’t stay interns.
Kate went on to become a director. She’s worked at companies I’ve only read about, led teams bigger than any I’ve built. If I’m being honest, I could see myself working for her.

