Software That Sees Before It Asks
How software becomes bespoke when interfaces materialize from data
I’ve been building software for over two decades. The process always follows the same pattern: design the interface, ship it, then users bring their data.
This sequence—interface first, data second—is why most software serves everyone adequately but no one perfectly. You build one form for thousands of users with thousands of different situations. Product managers make decisions: we’ll ask these questions in this order, offer these options, support these common scenarios. Users arrive and squeeze their specific context into pre-built categories.
What if that could invert? What if the data came first, and the interface materialized around it?
How We Build Software Today
Take landscape design software, for example. You want to help homeowners design their own yards, so you study how landscape architects work—the questions they ask, the factors they consider, the sequence of their analysis. Then you build that knowledge into forms.
But you’re designing for thousands of users with thousands of different yards. The product manager makes decisions: we’ll ask about trees first, then budget, then design preferences. We’ll offer these five style templates. We’ll support these common scenarios.
The form launches. A user opens it.
Question 1: “How many trees are in your yard?”
Dropdown: None, 1-2, 3-5, 5+
She has one tree. But it’s a massive oak that’s been there for forty years. It dominates the entire yard—provides afternoon shade to the patio, drops acorns in the fall, has roots that spread thirty feet. Is that “1-2 trees”? Technically yes, but the dropdown doesn’t capture what matters.
She selects “1-2” and continues.
Question 2: “What is your primary goal for this project?”
Radio buttons: Aesthetics, Low Maintenance, Privacy, Entertaining, Increase Property Value
She wants to build a deck for entertaining. But she also wants it designed around that oak tree—she loves the shade it provides. The tree isn’t just a variable; it’s the centerpiece. There’s no option for “build around existing tree.”
She selects “Entertaining” and continues, wondering if the software understands what she’s actually trying to do.
Question 3: “Preferred deck material?”
Dropdown: Wood, Composite, PVC
She’s not sure. Wood might look better, but the oak drops leaves and acorns. Does that matter? The form doesn’t ask about what’s above the deck, only what it’s made of.
She picks “Wood” and submits.
The software generates recommendations. A deck design for entertaining. Some notes about tree placement—generic advice about maintaining clearance from trunks. The output is reasonable. Workable, even. But it’s designed for someone with “1-2 trees” who wants “Entertaining,” not for someone with a forty-year-old oak tree they want to make the centerpiece of their outdoor living space.
The complexity the software couldn’t capture became work she had to do—translating her specific oak tree into “1-2 trees,” her nuanced goals into “Entertaining,” her material decision into a blind choice without context.
When the Software Sees First
Now imagine a different flow.
You open the landscape software. The first screen asks you to upload a photo of your yard.
You take a photo from your back door—the oak tree is prominent, the patio visible beneath it, the lawn spreading out beyond.
You upload it.
A few seconds pass. Then the interface materializes.
The software generates a form, specific to what it sees:
“I see a mature oak tree—looks like a Valley Oak, probably 40+ years old based on the canopy spread. That’s a beautiful centerpiece. Are you looking to design around it, or would you consider other layouts?”
Radio buttons: Design around the tree / Open to alternatives / Not sure yet
“I notice the tree provides afternoon shade to your patio area. Would you want a deck in that shaded zone, or are you thinking a different location?”
Clickable image map of your yard with zones marked
“Given the oak’s canopy, you’ll get significant leaf drop in fall. For a deck in that location, I’d typically recommend composite or PVC over wood for lower maintenance. Does that align with your preferences?”
Radio buttons: Composite (recommended), PVC, Wood (requires more maintenance), Show me trade-offs
The software isn’t asking generic questions hoping they’ll apply to your situation. It’s asking specific questions about your oak tree, your patio, your shade patterns. The form materialized from understanding your yard first.
The data came first. The interface came second.
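To make the inversion concrete, here’s a minimal sketch in TypeScript of what a generated form payload might look like once a model has analyzed the photo. The type names and fields are hypothetical, chosen for illustration rather than taken from A2UI or any other framework:

```typescript
// Hypothetical types for a data-first form flow. The shape is illustrative
// only, not the A2UI schema or any real protocol's message format.

type FormComponent =
  | { kind: "radio"; id: string; label: string; options: string[] }
  | { kind: "imageMap"; id: string; label: string; zones: string[] };

interface GeneratedForm {
  title: string;
  components: FormComponent[];
}

// What a model might emit after analyzing the uploaded yard photo.
// The analysis drives the questions, not a pre-built flow.
const yardForm: GeneratedForm = {
  title: "Let's plan around what's already in your yard",
  components: [
    {
      kind: "radio",
      id: "oak-strategy",
      label:
        "I see a mature oak, roughly 40+ years old. Design around it, or consider other layouts?",
      options: ["Design around the tree", "Open to alternatives", "Not sure yet"],
    },
    {
      kind: "imageMap",
      id: "deck-zone",
      label: "The oak shades your patio in the afternoon. Where should the deck go?",
      zones: ["Shaded patio area", "Open lawn", "Side yard"],
    },
    {
      kind: "radio",
      id: "deck-material",
      label: "Given heavy leaf drop in fall, which material do you prefer?",
      options: ["Composite (recommended)", "PVC", "Wood (more maintenance)", "Show me trade-offs"],
    },
  ],
};

console.log(JSON.stringify(yardForm, null, 2));
```

The specific fields don’t matter; what matters is that this payload can only exist after the photo has been understood.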
When AI Learned to Speak UI
This wasn’t possible before GenAI.
Software had to be designed, built, and shipped before users arrived. You could customize experiences based on data—Google Search shows you local weather, Amazon recommends products based on your browsing—but those customizations were pre-programmed. Google couldn’t look at your upcoming travel booking and generate a new weather interface showing your destination’s forecast with a packing checklist based on the climate. The panels were pre-built. The logic was pre-defined. You couldn’t generate interfaces on the fly because nothing could reason about data and decide what UI components made sense in the moment.
When generative AI first arrived, it only spoke text. You could have conversations, get explanations, even generate images, but every interaction ran through text. “Make it brighter.” How much brighter? You wanted a slider. “Add more contrast to the left side.” Which part of the left side? You wanted to click and drag. Text forces you to describe what you want to point at. The gap between AI’s understanding and your ability to act on it was the missing interface layer.
AI needed to speak UI.
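What that could look like, concretely: instead of replying with another sentence, the model returns a component the client can render. A hypothetical sketch in TypeScript, with made-up message shapes rather than any real protocol’s format:

```typescript
// A hypothetical contrast between a text-only reply and a reply that "speaks UI"
// for the prompt "Make it brighter." The shapes and names are illustrative.

interface TextReply {
  kind: "text";
  message: string;
}

interface SliderReply {
  kind: "slider";
  label: string;
  min: number;
  max: number;
  value: number;   // the model's suggested starting point
  bindsTo: string; // the parameter the control adjusts as the user drags
}

// Text can only describe the change and wait for another round trip.
const textOnly: TextReply = {
  kind: "text",
  message: "I've increased the brightness a bit. Tell me if you want more.",
};

// A UI reply hands the user a direct control instead of another sentence.
const speaksUi: SliderReply = {
  kind: "slider",
  label: "Brightness",
  min: 0,
  max: 100,
  value: 65,
  bindsTo: "image.brightness",
};

console.log(textOnly.message, speaksUi.label);
```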
In 2023, I wrote about how AI would make us rethink UX. The tension was clear: AI could understand context in ways pre-built software never could, but if it could only respond in text, it was stuck explaining rather than enabling. The insight was that interfaces wouldn’t need to be pre-built navigation flows. They could materialize based on what users needed, when they needed it. But the tools to build this didn’t exist yet.
That’s changing now.
Over the past few months, infrastructure has emerged that lets AI generate interface elements, not just text. Google released A2UI in December 2025—an open-source framework that lets AI agents compose interfaces from trusted component catalogs, with the same payload rendering across Flutter, web, and native platforms. CopilotKit introduced AG-UI, a protocol for keeping agents and interfaces synchronized in real time. Anthropic and OpenAI collaborated on the MCP Apps extension in November 2025, bringing standardized interactive UI capabilities to the Model Context Protocol—letting MCP servers present visual information and collect complex user input through sandboxed interfaces.
These are early approaches to a problem everyone’s trying to solve. The patterns are still being worked out, the standards still taking shape. But the direction is clear: AI needs to speak in interface components, not just text.
The landscape software example isn’t hypothetical anymore. A2UI’s demos include exactly this—a landscape architect application where users upload photos, and Gemini generates custom forms specific to what it sees in the yard. The interface materializes from the data.
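The cross-platform piece is worth pausing on. The idea, roughly: the agent doesn’t ship raw markup, it ships references to components the host already trusts, and each surface maps those references to its own widgets. A rough sketch of that idea in TypeScript, with invented names rather than A2UI’s actual types:

```typescript
// Illustrative only: a host-side mapping from trusted catalog components to a
// concrete surface (HTML here). A Flutter or native host would take the same
// payload and map each kind to its own widgets.

type CatalogComponent =
  | { kind: "radio"; label: string; options: string[] }
  | { kind: "note"; text: string };

function renderToHtml(c: CatalogComponent): string {
  if (c.kind === "radio") {
    const inputs = c.options
      .map((o) => `<label><input type="radio" name="${c.label}" /> ${o}</label>`)
      .join("");
    return `<fieldset><legend>${c.label}</legend>${inputs}</fieldset>`;
  }
  return `<p>${c.text}</p>`;
}

// The agent's payload is just data; the host decides how each kind appears.
const payload: CatalogComponent[] = [
  { kind: "note", text: "Heavy leaf drop expected under the oak's canopy." },
  { kind: "radio", label: "Deck material", options: ["Composite", "PVC", "Wood"] },
];

console.log(payload.map(renderToHtml).join("\n"));
```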
Software That Sees
Now that AI can speak UI, we’re starting to see what becomes possible. But this raises questions we’re just beginning to answer.
When interfaces materialize from data rather than pre-built designs, how do we ensure they’re correct? What are the guardrails? A form asking about your oak tree feels personalized and helpful. But what if the AI misidentifies the tree species, or suggests a deck placement that’s structurally unsound, or generates interface elements that mislead rather than guide?
Pre-built software had clear accountability—the product manager who designed the form, the engineer who coded the logic, the QA team that tested the flows. When software generates interfaces on the fly, those lines blur. We’re figuring out how to maintain safety and reliability when the interface isn’t predetermined.
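One guardrail the catalog approach already implies: the host refuses to render anything outside a small, vetted set of component kinds, and sanity-checks the values inside them before they reach the screen. A hypothetical validation pass might look like this; the rules and limits are illustrative, not any framework’s API:

```typescript
// Hypothetical check run over an agent-generated form spec before rendering.
// Only known component kinds pass through, and their contents are bounded.

type Component =
  | { kind: "radio"; label: string; options: string[] }
  | { kind: "note"; text: string };

const MAX_LABEL_LENGTH = 200;
const MAX_OPTIONS = 8;

function validate(raw: unknown): Component[] {
  if (!Array.isArray(raw)) throw new Error("payload must be a list of components");

  return raw.map((item, i): Component => {
    const c = item as { kind?: unknown; label?: unknown; options?: unknown; text?: unknown };

    if (c.kind === "radio") {
      const label = typeof c.label === "string" ? c.label : "";
      if (!label || label.length > MAX_LABEL_LENGTH) {
        throw new Error(`component ${i}: invalid label`);
      }
      if (!Array.isArray(c.options) || c.options.length === 0 || c.options.length > MAX_OPTIONS) {
        throw new Error(`component ${i}: invalid options`);
      }
      return { kind: "radio", label, options: c.options.map(String) };
    }

    if (c.kind === "note") {
      if (typeof c.text !== "string") throw new Error(`component ${i}: invalid note text`);
      return { kind: "note", text: c.text };
    }

    // Anything outside the vetted catalog is rejected, never rendered.
    throw new Error(`component ${i}: unknown kind`);
  });
}

// Example: a payload trying to smuggle in arbitrary markup never reaches the screen.
// validate([{ kind: "html", html: "<script>...</script>" }]); // throws
```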
But here’s what’s exciting: we’re at the beginning of a new way to design, build, and ship software.
For two decades, I’ve watched product teams wrestle with the same constraint—design for scale, which means design for everyone, which means design for no one specifically. The homeowner with the forty-year-old oak got the same dropdowns as someone with a new subdivision lot. The software couldn’t see the difference.
That changes when AI can speak UI. The homeowner uploads a photo, and the software sees her oak tree, her patio, her afternoon shade patterns. The interface that materializes isn’t a garden path designed for thousands of users. It’s a conversation about her specific yard.
One-size-fits-all starts to fade. Bespoke software becomes possible at scale.
We have a lot to figure out—the guardrails, the accountability, the patterns that work and those that don’t. The infrastructure is early, the standards still taking shape. But the direction is set.
Software can finally see what it’s working with before deciding what to show you.


The landscape example nails why pre-built forms always feel off. When I've used design software before, the disconnect between what the dropdown asks and what actually matters in my space was frustrating, but I couldn't articulate why. Framing it as inverting the sequence makes the breakthrough clear and shows how A2UI-style tools aren't just convenience but a fundamental shift in how software adapts to context.