A2UI: Agent‑to‑UI explained
A2UI sits between intent and the UI users see. It introduces a new way for agents and systems to generate interfaces by returning structured render trees instead of code.
- A2UI describes how AI agents interface with user interfaces through structure, not prompts.
- Agents don’t generate UIs directly. They operate on encoded intent and return renderable UI trees.
- A2UI turns interface structure into a stable contract between humans, agents, and systems.
What A2UI actually means
A2UI stands for Agent-to-UI.
The official protocol lives at a2ui.org.
It describes a system boundary: how an AI agent translates intent into user interfaces *without* directly designing or coding screens.
The source of intent doesn’t matter — a product team, a workflow engine, or a conversational agent. The boundary stays the same.
In an A2UI setup, agents don’t produce pixels. They don’t improvise layouts. And they don’t “design”.
They operate on **structured intent** and return **render instructions** that a UI system can reliably execute.
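As a sketch of what "render instructions" means in practice (all field and component names below are invented for illustration, not taken from the a2ui.org spec): the agent's output is plain structured data, and the UI layer walks it.

```python
# Illustrative sketch: an agent returns render instructions as plain data,
# not code. Field names are hypothetical, not part of the a2ui.org spec.
agent_output = {
    "intent": "collect_shipping_address",
    "render": {
        "component": "Form",
        "children": [
            {"component": "TextField", "props": {"label": "Street", "required": True}},
            {"component": "TextField", "props": {"label": "City", "required": True}},
            {"component": "Button", "props": {"label": "Continue", "variant": "primary"}},
        ],
    },
}

# The UI system walks the tree and maps component names to real implementations.
def component_names(node):
    yield node["component"]
    for child in node.get("children", []):
        yield from component_names(child)

print(list(component_names(agent_output["render"])))
# ['Form', 'TextField', 'TextField', 'Button']
```

Nothing in that payload is executable; it is a description that the rendering layer is free to validate, reject, or execute.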
Why this distinction matters
Most conversations around AI and UI assume a direct relationship:
Prompt → UI output
That model breaks at scale.
It leads to:
- inconsistent layouts
- fragile patterns
- UI drift across features
- endless review and correction
Not because agents are weak, but because intent is underspecified.
A2UI exists to separate **decision‑making** from **rendering**.
The A2UI boundary
In an A2UI system, the flow looks like this:
Intent → Intent map → Render tree → Components
Each step has a clear responsibility:
- **Intent** expresses *what* should happen, not how it looks
- **Intent map** resolves ambiguity and constraints
- **Render tree** encodes layout, hierarchy, and variation rules
- **Components** execute predefined behavior
Agents are only allowed to operate *above* the render tree.
This is the critical constraint.
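One way to picture the four steps and the boundary between them (a minimal sketch; every name and shape here is an assumption, not the protocol itself): the agent's work ends once intent has been resolved into a render tree, and a separate renderer executes it against predefined components.

```python
# Minimal pipeline sketch: Intent -> Intent map -> Render tree -> Components.
# All names are illustrative assumptions.

# 1. Intent: expresses *what* should happen, with no layout information.
intent = {"action": "confirm_deletion", "target": "invoice #1042"}

# 2. Intent map: resolves the intent to an allowed pattern and constraints.
INTENT_MAP = {
    "confirm_deletion": {"pattern": "ConfirmDialog", "destructive": True},
}

# 3. Render tree: encodes structure; the agent fills parameters only.
def to_render_tree(intent):
    rule = INTENT_MAP[intent["action"]]
    return {
        "component": rule["pattern"],
        "props": {
            "message": f"Delete {intent['target']}?",
            "variant": "danger" if rule["destructive"] else "default",
        },
    }

# 4. Components: predefined behavior, executed below the boundary.
def render(tree):
    props = tree["props"]
    return f"<{tree['component']} variant={props['variant']}>{props['message']}</{tree['component']}>"

tree = to_render_tree(intent)
print(render(tree))
```

Everything above `render` is where agents may operate; `render` itself belongs to the UI system and never changes per request.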
Why agents don’t design
Design decisions are expensive. They encode trade-offs. They shape behavior and trust.
Letting agents improvise those decisions introduces silent variance.
A2UI prevents this by freezing decisions at the right layer.
Agents:
- select patterns
- fill parameters
- choose among allowed variants
They do **not** invent structure.
A2UI deliberately avoids code generation — agents return a structured render tree, so even agents that cannot write code can produce UI without owning rendering decisions.
That’s how consistency survives automation.
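That constraint can be enforced mechanically. A sketch (the component vocabulary below is invented for illustration): before rendering, the system rejects any agent output whose components or variants fall outside the allowed set.

```python
# Hypothetical guardrail: agents choose among allowed variants; anything
# outside the vocabulary is rejected before it can reach the screen.
ALLOWED = {
    "Button": {"primary", "secondary"},
    "Banner": {"info", "warning"},
}

def validate(node):
    variants = ALLOWED.get(node["component"])
    if variants is None:
        raise ValueError(f"unknown component: {node['component']}")
    if node["props"].get("variant") not in variants:
        raise ValueError(f"variant not allowed: {node['props'].get('variant')}")
    for child in node.get("children", []):
        validate(child)

validate({"component": "Button", "props": {"variant": "primary"}})   # passes
# validate({"component": "Button", "props": {"variant": "neon"}})    # would raise
```

The agent can still be creative within the vocabulary; it simply cannot invent structure the system never agreed to render.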
A2UI vs generative UI
These terms are often confused.
**Generative UI** focuses on *output*:
- generating screens
- synthesizing layouts
- composing UI on the fly
**A2UI** focuses on *contracts*:
- what agents are allowed to decide
- what must stay stable
- where variation is safe
Generative UI without A2UI creates novelty.
A2UI enables **repeatability**.
A2UI and design systems
A2UI does not replace design systems.
It depends on them.
Design systems provide:
- the component vocabulary
- the behavioral constraints
- the visual and interaction rules
A2UI provides:
- a machine‑readable interface to that system
- a way to apply decisions at runtime
- a guardrail against drift
Together, they turn interface structure into an executable asset.
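Concretely, the "machine-readable interface" can be as simple as a registry that maps the render tree's vocabulary onto the design system's real components (a sketch with invented names):

```python
# Sketch of a component registry: the design system exposes its vocabulary
# as callables, and a render tree can only reference what is registered.
# Component names here are illustrative.
REGISTRY = {}

def register(name):
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("Card")
def card(props, children):
    return "<div class='card'>" + "".join(children) + "</div>"

@register("Text")
def text(props, children):
    return f"<p>{props['value']}</p>"

def render(node):
    fn = REGISTRY[node["component"]]  # a missing key surfaces drift immediately
    children = [render(c) for c in node.get("children", [])]
    return fn(node.get("props", {}), children)

tree = {"component": "Card", "children": [{"component": "Text", "props": {"value": "Hello"}}]}
print(render(tree))
# <div class='card'><p>Hello</p></div>
```

The design system stays the single source of truth for behavior and appearance; the registry just makes that truth addressable by machines.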
Why A2UI becomes critical with better agents
As agents improve, they don't converge. They amplify whatever structure already exists in a product's interface layer.
They explore more. They assume more. They vary more.
Without a boundary, that variance leaks into the UI.
With A2UI, variance is absorbed before rendering.
The better the agent, the more important the boundary becomes.
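"Absorbed" can be literal: out-of-contract values are normalized to the nearest allowed option instead of reaching the screen. A hypothetical sketch, with invented variant names:

```python
# Hypothetical normalization step: agent variance is clamped to the contract
# before the tree reaches the renderer.
ALLOWED_VARIANTS = {"primary", "secondary"}
DEFAULT_VARIANT = "secondary"

def absorb(node):
    props = dict(node.get("props", {}))
    if props.get("variant") not in ALLOWED_VARIANTS:
        props["variant"] = DEFAULT_VARIANT  # variance stops here
    return {**node, "props": props}

# An over-creative agent output is silently brought back into contract.
print(absorb({"component": "Button", "props": {"variant": "glassmorphic"}}))
```

Whether to normalize silently or reject loudly is a policy choice; the point is that the decision lives in the system, not in the agent.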
The strategic implication
A2UI is not a framework. It’s not a tool. And it’s not a spec you install.
It’s a way of deciding:
Which interface decisions are permanent — and which are delegated.
Organizations that answer this explicitly can use AI aggressively.
Organizations that don’t will spend their time correcting output.
The quiet conclusion
Agents will keep getting better.
Interfaces won’t standardize themselves.
A2UI exists to make intent executable *without* making interfaces fragile.
It’s the missing layer between AI capability and product consistency.