MLOps Community

Finding the Holy Grail of AI Agent UIs: From AI-Orchestrated Development to A2UI

Tags: Artificial Intelligence, UI, Generative AI Tools, AI Agent, Software Development

Moving Beyond Chatbots: A New Standard for Dynamic AI Agent Interfaces with A2UI

February 10, 2026
Médéric Hurier
In my previous article, I argued that the real bottleneck for AI agents is the User Interface (UI). We are stapling rocket engines to bicycles by forcing advanced agents to communicate through basic markdown chatbots.
Since then, I’ve been on a journey to find the solution. I didn’t want just a theoretical answer; I wanted to build it. I explored everything from “AI-Orchestrated Development” to Python wrappers to new AI protocols, searching for a scalable way to give agents a native, rich, and dynamic interface.
I dedicated time to building a concrete implementation to verify my hypotheses. Here is what I found, what failed, and why I believe A2UI is the protocol we’ve been waiting for to solve this problem.

Source: Gemini App

The Exploration: A Graveyard of “Almost” Solutions

My goal was simple: Build a custom frontend for an agent application without spending weeks on boilerplate. I tried multiple approaches, and most of them hit a wall.

1. The “Heavy” Approach: Angular & Flutter

My first instinct was to build a real app. I tried both Angular and Flutter. These are standards for enterprise application development, offering robust ecosystems and pixel-perfect control.
The Result: It works, but at what cost? In 2026, setting up a full frontend project is still painful. You have to configure build tools, set up linters, manage complex state stores (Redux, Bloc), and synchronize data models with your backend. This overhead is acceptable for a static, long-term product like a banking dashboard, but for a dynamic Agent? It’s overkill.
Agents need to be able to transmit their UI and adapt on the fly. Hardcoding a heavy client defeats the purpose of an autonomous agent. If every new agent capability requires a sprint of frontend changes, the agent isn’t truly autonomous. It’s just a backend API with a very expensive chat interface.

2. “AI-Orchestrated Development” (AI-Generated UIs)

I tried what I call “AI-Orchestrated Development”: a more structured approach where the AI is front and center in generating application code, popularized in early 2026 by tools like GitHub Spec Kit, Gemini Conductor, or Antigravity. This is distinct from “vibe coding” (using AI intuitively without understanding the output). AI-Orchestrated Development aims for a systematic process where AI handles implementation under developer guidance.
The Verdict: While promising long-term, it still generates lots of code. Code that you have to maintain, test, and debug. And I’m not confident in either maintaining AI-generated codebases or letting AI be the sole responsible party for production systems.
We already spend more time maintaining applications than building them. AI-Orchestrated Development risks accelerating this accumulation. We need to reduce the amount of application-specific code we generate, not increase it.

3. HTMX: The Backend-Driven UI

I went back to my roots (PHP/AJAX) and tried HTMX. It’s a productive methodology that keeps logic in one place by streaming HTML fragments from the server.
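To make the pattern concrete, here is a minimal sketch of the HTMX loop, assuming a Flask backend. The endpoint names and fragment markup are my own illustration, not code from my project:

```python
# Minimal HTMX sketch, assuming a Flask backend. Endpoint names and
# markup are illustrative placeholders, not code from the actual project.
from flask import Flask

app = Flask(__name__)

@app.get("/")
def index():
    # The page loads HTMX and declares where server fragments should land.
    return """
    <script src="https://unpkg.com/htmx.org@1.9.12"></script>
    <button hx-get="/agent/answer" hx-target="#result">Ask the agent</button>
    <div id="result"></div>
    """

@app.get("/agent/answer")
def agent_answer():
    # The server answers with an HTML fragment that HTMX swaps into #result.
    # Note how the agent's output is welded to one visual representation.
    return '<div class="card"><p>Here is the agent response.</p></div>'

if __name__ == "__main__":
    app.run(debug=True)
```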
The Problem: HTMX couples the agent too tightly to a specific visual implementation. If you want to render the same agent response on a mobile app, a web dashboard, and a desktop client, you can’t reuse the HTML stream — you’re locked into one presentation layer.
More fundamentally, HTML is too low-level for an agent to reason about. An agent shouldn’t be worrying about CSS classes, DOM nesting, or accessibility attributes. It should focus on intent and logic, not pixels. Sending declarative data is more efficient, more universal, and can be consumed by different types of clients.

4. Python Wrappers (Streamlit, Gradio, Chainlit)

These are great for prototypes. Tools like Streamlit, Gradio, and Chainlit offer a small code surface and instant deployment.
The Flaw: The “Glue Code” Hell. You inevitably hit a wall where the library doesn’t support the specific interaction or component you need. Maybe you need a custom drag-and-drop interface or a specific data visualization. You lose control over style, and you end up writing hacky workarounds (custom HTML injection, iframe bridges) to connect the agent’s state to the UI components.
They are also not truly dynamic — they are rigid templates filled with data, not fluid interfaces generated by the agent’s needs. You are still building a form; you are just doing it in Python instead of React.
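As an illustration, here is roughly how small the happy path looks in Streamlit (the agent reply is a placeholder echo standing in for any backend call). The trouble starts the moment you need a component the library doesn’t ship:

```python
# Minimal Streamlit sketch using its real chat API; the "agent" reply is a
# placeholder echo standing in for any backend call.
import streamlit as st

st.title("Agent Demo")

if prompt := st.chat_input("Ask the agent"):
    with st.chat_message("user"):
        st.write(prompt)
    with st.chat_message("assistant"):
        # Fine until the agent needs a widget Streamlit doesn't provide:
        # then you reach for st.components.v1.html and the glue code begins.
        st.write(f"Echo: {prompt}")
```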

5. Chat Extensions (Slack/Teams/Workspace)

Building into existing workflows seems smart. Why build a new UI when you can just deploy a bot to Slack or Google Chat?
The Limit: It doesn’t scale. You end up building a specific adapter for Slack, another for Teams, another for Google Chat. Each platform has its own proprietary UI kit (Block Kit, Adaptive Cards) with different limitations.
You want to build your agent once and have it project its UI anywhere, not rewrite the presentation layer for every host app. This fragmentation increases the maintenance burden and prevents you from creating a consistent user experience across platforms.
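To see the fragmentation in code, here is the same hypothetical confirmation message expressed twice, once as Slack Block Kit and once as a Teams Adaptive Card. The content is invented, but the two payload shapes follow the real formats:

```python
# The same message, written twice for two hosts. Payload shapes follow the
# real Block Kit and Adaptive Cards formats; the content itself is invented.
slack_blocks = {
    "blocks": [
        {"type": "section",
         "text": {"type": "mrkdwn", "text": "*Deployment approved*"}},
        {"type": "actions",
         "elements": [{"type": "button",
                       "text": {"type": "plain_text", "text": "View logs"},
                       "action_id": "view_logs"}]},
    ]
}

teams_card = {
    "type": "AdaptiveCard",
    "version": "1.5",
    "body": [{"type": "TextBlock",
              "text": "Deployment approved",
              "weight": "Bolder"}],
    "actions": [{"type": "Action.OpenUrl",
                 "title": "View logs",
                 "url": "https://example.com/logs"}],
}
```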

The Epiphany: Separation of Concerns

I realized something fundamental during this process: Everything is disposable.
We shouldn’t be precious about the UI code. We should focus on the declarative side. Just as humans use HTML not because we love drawing pixels, but because we want to say “Here is a link” or “Here is an image,” agents need a high-level language to describe what needs to be shown, not how to draw it.
The Agent should be responsible for the Data and the Logic. The Client should be responsible for the Style and the Rendering.
This separation allows the agent to be “brain-heavy” and “UI-light,” deferring the complex rendering logic to the client, which is what clients are best at.
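As a toy illustration of this split (independent of any particular protocol), the agent emits intent as plain data, and each client maps that intent to its own widgets:

```python
# Toy illustration of the split, independent of any specific protocol:
# the agent describes *what* to show; the client decides *how* to draw it.
agent_intent = {
    "component": "card",
    "title": "Quarterly report",
    "actions": ["download", "share"],
}

def render_as_text(node: dict) -> str:
    # One possible client: a terminal UI that owns every styling decision.
    buttons = " ".join(f"[{label}]" for label in node["actions"])
    return f"== {node['title']} ==\n{buttons}"

print(render_as_text(agent_intent))
```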

The Solution: A2UI (Agent-to-User Interface)

Enter A2UI.
I built a demo app using this protocol, and I was genuinely impressed by its elegance. A2UI is a JSONL-based declarative protocol that creates a standard contract between the AI and the user interface.

How it works

Instead of streaming markdown tokens like a traditional LLM, the agent streams structured JSON objects representing UI components.

The client then uses a renderer (Lit, Angular, or Flutter) to turn those objects into native components progressively.
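Here is a sketch of that streaming idea. The field names below are placeholders I chose for readability, not the official A2UI schema; the spec defines the actual message format:

```python
# Sketch of the JSONL streaming idea. Field names are placeholders, not the
# official A2UI schema; see the spec for the real message format.
import json
from typing import Iterator

def stream_ui() -> Iterator[str]:
    components = [
        {"id": "title", "component": "Heading", "text": "Flight options"},
        {"id": "opt-1", "component": "Card", "text": "LUX to LIS, 09:40"},
        {"id": "book", "component": "Button", "label": "Book", "action": "book_flight"},
    ]
    for component in components:
        # One complete JSON object per line: the client can render each
        # component as it arrives instead of waiting for the full payload.
        yield json.dumps(component) + "\n"

for line in stream_ui():
    print(line, end="")
```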

Why it wins

Production-Ready at Google: A2UI isn’t vaporware — it’s already integrated into Google products like Opal, Gemini Enterprise, and the Flutter GenUI SDK.
Transport Agnostic: It works over HTTP (via the A2A protocol), WebSockets, or carrier pigeons. The protocol doesn’t care how the JSON gets there.
Progressive Rendering: The UI appears as the agent “thinks” it. Components stream in one by one, making the interface feel alive and responsive, much like text streaming but for rich UI elements (see the client-side sketch after this list).
Framework Agnostic: The client implementation (React, Angular, Lit) decides how a “Card” looks. The agent just says “I need a Card”. This means you can have a “Material Design” client and an “iOS Cupertino” client rendering the exact same agent response natively.
Secure: No arbitrary JavaScript execution. It’s just declarative data, mitigating injection risks. This is critical for enterprise adoption where security reviews block “dynamic code generation.”
LLM-Friendly: Flat, streaming JSON structure designed for easy generation. LLMs can build UIs incrementally without perfect JSON in one shot.
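To make the progressive-rendering and security points concrete, here is a client-side sketch that consumes such a stream. Again, the field names are placeholders rather than the official schema:

```python
# Client-side sketch: parse each JSONL line as it arrives and update the UI
# immediately. Field names are placeholders, not the official A2UI schema.
import json

SAMPLE_STREAM = [
    '{"id": "title", "component": "Heading", "text": "Flight options"}',
    '{"id": "book", "component": "Button", "label": "Book"}',
]

def on_component(node: dict) -> None:
    # A real client (Lit, Angular, Flutter) would mount a native widget here.
    # Crucially, it only interprets data: no agent-supplied code ever executes.
    print(f"render <{node['component']}>: {node.get('text') or node.get('label')}")

for line in SAMPLE_STREAM:
    on_component(json.loads(line))
```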
Note: A2UI is currently at v0.8 and still in active development. The protocol has some rough edges, so for production use, the best approach is to wait for native integration in tools like Gemini Enterprise or the Agent Development Kit (ADK).

A2UI vs AG-UI: Two Philosophies

I also looked at AG-UI, another emerging standard in this space.
AG-UI aims to blend the frontend and backend deeply, creating “AI-First” apps from the ground up with a focus on real-time event loops. It’s powerful but requires you to rethink your entire application architecture regarding state synchronization and event handling.
A2UI focuses on extending chat-based interaction to be richer. It’s a bridge that lets agents “speak UI” using standard components. It feels more like an evolution of the chat interface into a command center than a complete replacement of the application stack.
I believe A2UI is the scalable path forward for most agent implementations. It respects the separation of concerns and integrates seamlessly with existing systems via protocols like A2A (Agent-to-Agent).

Conclusion: The 2026 Shift

We are moving towards a schism in frontend technology, and it’s happening faster than we think:
Static Apps (the stock): Dashboards, retail sites, and specialized tools. These will still be built with efficient frameworks for speed, precise control, and specific user journeys where the path is known. They represent the bulk of existing applications.
Dynamic Agent Interfaces (the flow): Powered by new protocols like A2UI. These will replace the “Chatbot” with something far more powerful — interactive, component-based, and generated on the fly. This is where the new growth is happening. These interfaces will emerge when the user’s intent is ambiguous or highly variable, like in Agentic Commerce.
I am convinced that 2026 is the year we stop building UIs for agents and start letting agents project their UIs to us. We shouldn’t spend too much time on UI. Let it be personalized by the agent so we can focus on what truly matters: integration and instruction.
In the next article, I will share the source code and a full demo of the application I built using A2UI. Stay tuned!

Source: Gemini App


