The New Colleagues: How to Build and Lead an AI Team
What I've learned about turning AI models into productive collaborators

Microsoft recently made a bold prediction: in the future, everyone will be a boss—of AI employees. In an April 2025 blog post, Microsoft executive Jared Spataro described "the rise of the agent boss" who builds, delegates to, and manages AI agents to amplify their impact.
It's a compelling vision. But not an original one. Since the beginning of the current AI wave, people have been talking about this future. Two years ago, I began to wonder how to prepare for it. What skills would I need?
You see, for most of my career, my core skill has been building digital products. This work centers on communicating with technical teams, helping them see the vision of what we're trying to achieve. It involves countless hours in meetings as we wrestle with that vision, identify blockers, develop consensus, and push toward shipping the product. The skills I've honed are convincing, compelling, and cajoling teams toward completing products in reasonable timeframes.
I found myself wondering how these skills would translate. The future promised by AI hype is one where we work with AI coworkers that operate faster and without complaints. A world where one-person billion-dollar companies become possible. But what does it actually look like to work with an AI team to deliver products? Are my skills in cajoling and compelling still relevant? Or would I find myself wrestling with overly compliant, sycophantic coworkers that don't provide the resistance and feedback that polish rough ideas into diamond-like products?
Economist Tyler Cowen once posed a thought-provoking question: "What do you practice as regularly and diligently as a pianist practices their scales?" This has been an orienting question I've asked periodically in my life as I figure out where to spend my attention. As the AI transition accelerated, I realized that to effectively use these new tools, I would need to deliberately practice working with them—not just occasionally, but consistently and with purpose.
This newsletter became my laboratory—a place where I could practice the craft of aligning AI models through prompts and experiment with the process. A clear weekly deadline and a concrete goal helped set the cadence and define the finished product. In my previous article, I offered a behind-the-scenes look at my AI-assisted writing workshop. Now, I want to take you deeper into the journey itself—what I've learned from two years of deliberately practicing collaboration with non-human intelligence.
The Engine Keeps Getting Swapped Out
Imagine learning to drive a car while the engineers are constantly swapping out the engine, upgrading the transmission, and redesigning the dashboard. That's what writing with AI has felt like. Each week, I've had my hands on the "steering wheel," feeling the subtle (and sometimes not-so-subtle) shifts in AI's capabilities, its quirks, and its potential.
One aggravating aspect of AI models is their memory loss. Models are like the main character in Memento—they wake up having lost all memory. As our editing sessions grew longer, models would lose the plot, suddenly generating text unrelated to our task. Like an engine that's strong out of the gate but falters on cross-country trips, they needed regular maintenance.
The problem was especially acute as the context (chat) grew longer. Rather than helping, the expanded context often meant models didn't know what to pay attention to. Eventually, I discovered that tools like BoltAI allowed me to fork a chat. If the model ever lost coherence, I could return to the last stable point and branch off from there.
Today's models are significantly better at maintaining context and following instructions in long threads. But we've also developed better maintenance techniques. I now use prompts that summarize the context of the thread so far, essentially giving the model a refresher on what we're trying to do.
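To give a flavor of what I mean (an illustrative paraphrase, not my exact wording), a mid-session refresher prompt can be as simple as:
"Before we continue, summarize what we're working on, the key decisions we've made so far, and what remains to be done. Wait for my confirmation before making further edits."
The model's answer doubles as a compact briefing I can paste into a fresh thread if the current one has drifted too far.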
Reduce Workflow Friction
Adopting a new technology requires working through the incredible friction of change. I've seen this firsthand as a product manager, watching users prefer familiar methods over supposedly "faster" or "better" ones. The early days of working with AI models highlighted this problem acutely.
Initially, using AI meant copy-pasting text into a clunky chat window and then copy-pasting the results back into my work product. This constant shuttling created enough friction that I often found myself avoiding AI altogether—the cognitive cost of context switching outweighed the benefits.
The AI user experience has improved, with companies building AI directly into existing workflows. Gmail and Apple Mail now suggest completions using AI models. But these integrations are often too generic to be useful. It's frequently easier to write something yourself than to take their generic suggestions and make them sound like you. They also miss crucial context about what you're trying to accomplish.
My breakthrough came when I discovered tools that brought AI to my work rather than forcing me to bring my work to AI. First, I found a Chrome plugin called Sider that enabled me to use AI directly in the browser. Later, I transitioned to BoltAI, which works anywhere on my Mac and provides better transcription capabilities (via Whisper) than Apple's native service.
This shift—from shuttling work back and forth to having AI available right where I'm working—dramatically reduced the friction of collaboration. It's like the difference between scheduling a formal meeting and swiveling your chair to ask a quick question. When collaboration costs less, it happens more often and yields better results.
Work Alongside AI
The real lightbulb moment came when I started using tools like Cline. My approach until then had been to shuttle work to or from AI models—even with better tools, I was still thinking in terms of discrete handoffs. But software engineers have been collaborating on complex knowledge products for decades, and they've developed sophisticated tools to support this work.
What developers have taken for granted for years is foreign to most other knowledge workers. Version control systems track changes and allow multiple people to work on the same codebase. Pull requests and code reviews create structured ways to suggest and incorporate changes. These workflows aren't just about the code—they're about enabling effective collaboration.
The acceptance of markdown as a text-based format for capturing content came at a perfect time. AI models understand markdown well, and it strikes the right balance between structure and simplicity. With markdown, I can create everything from articles to graphs (using Mermaid) to presentations—all in a format that both humans and AI can easily work with.
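For instance, a few lines of Mermaid source (a toy example, not from an actual piece) such as

flowchart LR
    Idea --> Draft
    Draft --> Edit
    Edit --> Publish

render as a flowchart in any markdown viewer that supports Mermaid, and the model can read and revise those same lines as easily as it reads prose.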
The tools that enable this side-by-side work are reshaping how we think about collaborating with AI. Cursor, a popular "agentic" software development environment, has seen its revenue and valuation soar as developers have flocked to it. When we can both look at the same document, make changes in real-time, and build on each other's ideas, we're no longer just using AI—we're collaborating with it.
Teach the Model to Fish
"We waste hours not willing to waste minutes."
— Amos Tversky
This quote perfectly captures my experience with AI collaboration. We expect more from models than from human colleagues. When onboarding team members, I invest hours in one-on-one meetings to establish shared context and get buy-in to the project vision. Yet with AI, I demanded perfect work from vague instructions.
I now use prompt files to provide background for each type of task, giving models the context they need when they "boot up." These files contain information about my preferences, writing style, project background, and specific requirements. It's like having an onboarding document for a new team member. But humans learn through osmosis, watching others, and past interactions. Models, like the character in Memento, wake up each morning with a clean slate. So take the time to write these context documents and keep them up to date.
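To make that concrete, here is a hypothetical excerpt of what such a prompt file might contain (the headings and details are illustrative, not my actual file):

Role: You are the developmental editor for a weekly newsletter about working with AI.
Voice: First person, conversational, concrete examples over abstractions.
Process: Propose edits as suggestions and explain the reasoning behind major changes.
Never: Invent quotes or statistics, or change my conclusions.

A new teammate would absorb most of this in their first week; the model needs it restated at the start of every session.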
This approach requires patience and a willingness to "waste minutes" setting context. But those minutes save hours of back-and-forth, corrections, and frustration.
Deliberate Practice Requires Self-Reflection
Once again, there's much we can learn from the world of software development. One practice that has improved my AI collaboration is the retrospective—taking time after completing a project to reflect on what worked, what didn't, and what I could improve next time.
This kind of structured self-reflection is critical for teams, whether they're made up of humans, AI, or both. Without it, we risk repeating the same mistakes and missing opportunities to refine our process. With AI collaboration, this becomes even more important because the technology is evolving so rapidly.
I've developed a simple but effective technique that has vastly improved my workflow. At the end of a productive session with an AI model, I'll ask:
"Based on the conversations we've had to craft the final work product, what would you change in your initial instructions so you have a clearer view of what is needed?"
This question turns the AI into a partner in improving our collaboration. I use the feedback to improve the prompt files, which helps my team get better the next time. This doesn't mean we no longer have problems, but getting a little bit better each week has compounded over the years.
A New Kind of Team-Building
My career skills—convincing, compelling, and cajoling teams—haven't disappeared; they've evolved. I still communicate vision, but with collaborators who think and work fundamentally differently than humans.
This new journey of human-AI collaboration has only just begun.