If you've ever built multi-agent systems, you know the pain: juggling YAML files, mentally tracking agent hierarchies, debugging configuration errors, and constantly referencing documentation to get the syntax right. Google's Agent Development Kit (ADK) just changed that game.
Google ADK v1.18.0 introduced the Visual Agent Builder - a browser-based interface that lets you design, configure, and test complex multi-agent systems through drag-and-drop interactions and natural language conversations. No more hand-crafting YAML. No more syntax errors. Just pure agent architecture.
I've spent the last few days exploring this feature, and I'm genuinely excited about what it enables. Let me show you how to build a research agent system from scratch using the Visual Builder.
The Visual Agent Builder is a web-based IDE for creating ADK agents. Think of it as a visual workflow designer, configuration editor, and AI assistant working together in one place.
What makes it powerful is that everything you do in the Visual Builder generates proper ADK YAML configurations under the hood. You can export them, version control them, and deploy them just like code-based agents.
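To give a feel for that output, here is a minimal sketch of the kind of single-agent YAML the builder produces. The agent name is hypothetical and the field names reflect my reading of the ADK Agent Config format rather than the builder's literal output, so treat the files it actually writes as the source of truth.

```yaml
# example_agent.yaml - illustrative sketch of a builder-generated LlmAgent config
# (field names approximate the ADK Agent Config schema; verify against your generated files)
name: example_agent
model: gemini-2.5-flash
description: A simple assistant that answers questions with Google Search.
instruction: |
  You are a helpful assistant. Use the google_search tool to look up
  current information before answering the user's question.
tools:
  - name: google_search
```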
The Visual Builder shines when you're designing, iterating on, and testing agent architectures interactively. That said, there are times when code-first development makes more sense: infrastructure-as-code workflows, CI/CD pipelines, or when you need programmatic agent generation. The Visual Builder and the Python-based approach complement each other.
Before diving in, make sure you have a working Python environment and access to Gemini models (a Google AI Studio API key or Vertex AI credentials).
Install the latest version of Google ADK:
# Install or upgrade to ADK 1.18.0+
pip install --upgrade google-adk
# Verify installation
adk --version
ADK includes a built-in web server with the Visual Builder:
# Launch ADK web interface
adk web
# By default, this starts a server at http://localhost:8000
# The Visual Builder is accessible at http://localhost:8000/dev-ui/
Once the server starts, open http://localhost:8000/dev-ui/ in your browser. You'll see the ADK Dev UI landing page.

The Visual Builder interface is divided into three main panels. The configuration panel is where you set individual agent properties:
The canvas provides a visual representation of your agent hierarchy:
The canvas updates in real-time as you make changes in the configuration panel.
The Agent Builder Assistant is powered by Gemini and can generate entire agent architectures, refine existing configurations, and answer ADK questions as you build.
You interact with it through a chat interface - just describe what you want to build.
graph LR
A[Configuration Panel<br/>Agent Properties<br/>Tools & Sub-agents<br/>Callbacks] --> B[Visual Canvas<br/>Agent Hierarchy<br/>Real-time Updates<br/>Interactive Nodes]
B --> C[AI Assistant<br/>Natural Language<br/>Architecture Generation<br/>Q&A Support]
C --> A
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
style C fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
Diagram: The three integrated panels of the Visual Agent Builder work together to provide a seamless development experience
Let me walk you through building a complete research agent system. This agent will accept research topics from users, use Google Search in an iterative refinement loop, and synthesize what it finds into a well-structured summary.
The workflow we'll follow demonstrates the AI-first approach:
graph TD
A[Create Project] --> B[Describe to AI Assistant]
B --> C[AI Generates Architecture]
C --> D[Review on Canvas]
D --> E{Satisfied?}
E -->|No| F[Refine with AI]
F --> C
E -->|Yes| G[Save Configuration]
G --> H[Test with Real Queries]
H --> I{Works Well?}
I -->|No| F
I -->|Yes| J[Deploy]
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
style B fill:#fff3e0,stroke:#f57c00,stroke-width:2px
style C fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
style D fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
style G fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
style H fill:#fce4ec,stroke:#c2185b,stroke-width:2px
style J fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px
Diagram: AI-first development workflow - describe intent, let AI generate, review visually, test iteratively
From the ADK Dev UI landing page, create a new agent and name it research_agent (the name must be a valid Python identifier).
The Visual Builder opens in a new view with mode=builder in the URL. You'll see the default configuration for a new LlmAgent.
Here's where the Visual Builder truly shines. Instead of manually configuring each agent, tool, and parameter, you'll describe what you want to the AI Assistant in plain English, and it will generate the entire architecture for you.
In the AI Assistant panel on the right side, type the following prompt:

Create a research agent that uses Google Search with an iterative refinement pattern. The agent should:
1. Accept research topics from users
2. Use a Loop agent pattern to iteratively improve search queries and gather information
3. Have specialized sub-agents:
- One for analyzing and refining search queries (use gemini-2.5-pro for better reasoning)
- One for executing searches and extracting insights (use gemini-2.5-flash for speed)
4. Include proper loop termination with exit_loop tool
5. Use Google Search as the primary tool
6. Limit to 3 iterations maximum
The architecture should follow ADK best practices with proper agent hierarchy and tool assignments.
The AI Assistant will ask clarifying questions to ensure it understands your requirements:

After you confirm the details (in this case, specifying model choices for each agent), the AI Assistant will propose a complete architecture:

The AI Assistant generates:
1. Complete Project Structure:
research_agent/
├── root_agent.yaml
├── research_loop_agent.yaml
├── query_refinement_agent.yaml
└── search_execution_agent.yaml
2. Detailed YAML Configurations: one YAML file per agent, with model, instruction, and tool settings filled in (see the sketch after this list)
3. Proper Instructions: Each agent gets role-specific instructions explaining its purpose and behavior
4. Tool Assignments: google_search and exit_loop tools added where appropriate
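To make the second item concrete, here is a hedged sketch of how the root agent's file might wire in the loop agent. The sub_agents/config_path keys follow my understanding of the ADK Agent Config format and may differ slightly from what the builder actually writes.

```yaml
# root_agent.yaml - illustrative sketch, not the builder's literal output
name: root_research_agent
model: gemini-2.5-flash
description: Root agent that hands research topics to the iterative research loop.
instruction: |
  Accept a research topic from the user, delegate it to the research loop,
  and return the synthesized findings.
sub_agents:
  - config_path: research_loop_agent.yaml   # assumed reference style for sub-agents
```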
Once you approve the proposal, the AI Assistant creates all the agents and updates the visual canvas.
After the AI Assistant creates the agents, the visual canvas updates to show your complete multi-agent system:

You can see the full hierarchy: the root agent at the top, the research loop beneath it, and the two specialized sub-agents inside the loop.
Click on any agent in the canvas to inspect its configuration. For example, clicking on research_loop_agent shows:

Key configuration highlights:
- max_iterations: 3 (caps the refinement loop)
- tool: exit_loop (allows the agent to terminate when satisfied)

Let's examine one of the sub-agents. Click on query_refinement_agent in the canvas:

Notice how the AI Assistant assigned gemini-2.5-pro to the reasoning-heavy query refinement agent and gemini-2.5-flash to the faster search execution agent, exactly as requested in the prompt.
The complete architecture generated looks like this:
root_research_agent (ROOT)
└── research_loop_agent (LoopAgent, max_iterations=3)
    ├── query_refinement_agent (LlmAgent, gemini-2.5-pro)
    │   └── tool: google_search
    └── search_execution_agent (LlmAgent, gemini-2.5-flash)
        └── tool: google_search
graph TD
A[root_research_agent<br/>ROOT - LlmAgent<br/>gemini-2.5-flash] --> B[research_loop_agent<br/>LoopAgent<br/>max_iterations: 3<br/>tool: exit_loop]
B --> C[query_refinement_agent<br/>LlmAgent<br/>Model: gemini-2.5-pro<br/>Tool: google_search]
B --> D[search_execution_agent<br/>LlmAgent<br/>Model: gemini-2.5-flash<br/>Tool: google_search]
style A fill:#e1f5ff,stroke:#01579b,stroke-width:3px
style B fill:#fff9c4,stroke:#f57f17,stroke-width:2px
style C fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
style D fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
Diagram: Research Agent Architecture showing the hierarchical structure generated by the AI Assistant
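Expressed as config files, the loop portion of this architecture might look roughly like the two sketches below. This is an assumption-laden sketch: agent_class, max_iterations, and referencing exit_loop by name all follow my reading of the ADK Agent Config format, and the builder may place the exit_loop tool differently than shown here.

```yaml
# research_loop_agent.yaml - illustrative sketch of the LoopAgent config
agent_class: LoopAgent          # assumed key for selecting a workflow agent type
name: research_loop_agent
description: Runs query refinement and search execution for up to three passes.
max_iterations: 3
sub_agents:
  - config_path: query_refinement_agent.yaml
  - config_path: search_execution_agent.yaml
```

```yaml
# query_refinement_agent.yaml - illustrative sketch of one sub-agent inside the loop
name: query_refinement_agent
model: gemini-2.5-pro
description: Analyzes the research question, spots gaps, and refines search queries.
instruction: |
  Review the findings gathered so far, identify gaps, and propose sharper
  search queries. When the research is sufficient, call exit_loop.
tools:
  - name: google_search
  - name: exit_loop   # assumption: built-in tools can be referenced by name here
```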
Before testing, click the "Save" button in the left configuration panel. The Visual Builder validates your setup and writes all YAML files to disk in your project directory:
research_agent_demo/
├── root_agent.yaml
├── research_loop_agent.yaml
├── query_refinement_agent.yaml
└── search_execution_agent.yaml
All the agent configurations, tool assignments, and instructions from the AI Assistant conversation are now saved as code. You can version control these files, edit them manually if needed, or share them with your team.
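A manual edit can be as small as raising the loop's iteration cap for deeper (and slower) research runs. The excerpt below is a hypothetical hand edit; the max_iterations key is assumed from the ADK Agent Config format.

```yaml
# research_loop_agent.yaml (excerpt) - hypothetical hand edit after saving
agent_class: LoopAgent
name: research_loop_agent
max_iterations: 5   # raised from the 3 we requested, to allow deeper research
```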
Now for the exciting part - seeing your multi-agent system in action. Click the "Exit Builder Mode" button to switch to the test interface.
Type a research query in the chat:
Research the latest developments in quantum computing error correction in 2024.

The agent executes through multiple iterations:
Iteration 1:
- query_refinement_agent analyzes the question and generates refined search queries
- search_execution_agent performs Google searches and synthesizes initial findings about Google's Willow processor, Microsoft/Quantinuum's achievements, and IBM's qLDPC codes

Iteration 2:
- query_refinement_agent identifies gaps in the research (need for deeper technical comparison)
- search_execution_agent provides detailed comparisons of Surface Code vs qLDPC vs Color Code error correction approaches

Iteration 3:
- query_refinement_agent determines sufficient information has been gathered
- search_execution_agent provides a final synthesis covering breakthroughs, technical comparisons, and future challenges

Final Output: a comprehensive research summary covering the breakthroughs, technical comparisons, and open challenges identified across the iterations.
This demonstrates the power of the iterative refinement pattern - the agent didn't just perform one search and call it done. It analyzed gaps, refined its approach, and synthesized information across multiple iterations to produce a thorough, well-structured answer.
Execution Notes:
You can view the complete execution trace in the Events tab, showing all LLM calls, tool invocations, agent transfers, and loop iteration boundaries.
The Visual Agent Builder transforms how we build AI agent systems. The real breakthrough isn't just the visual canvas or configuration panels - it's the AI Assistant-first approach that lets you describe what you want in natural language and get a complete, working multi-agent architecture in seconds.
Instead of wrestling with YAML syntax, manually configuring each agent, and debugging nested hierarchies, you can describe the system you want in plain English, review the generated architecture on the canvas, and test it with real queries right away.
Key Takeaways:
- AI Assistant is the killer feature: describe requirements in natural language and get complete agent architectures with proper model selection, tool assignments, and instructions
- Visual feedback accelerates development: the canvas makes complex hierarchies tangible - you can see exactly what the AI generated and how agents relate to each other
- Iterative refinement pattern works: as demonstrated with the research agent, loop agents can intelligently refine their approach across multiple iterations
- Production-ready output: everything generates proper ADK YAML that you can version control, deploy, and share with your team
Next Steps:
- Launch the Visual Builder yourself (adk web) and point the AI Assistant at your own use case

The Visual Builder doesn't replace code-based agent development - it accelerates it. Use the AI Assistant to prototype architectures, the visual canvas to understand complex systems, and the testing interface to validate behavior. Then export the YAML for production deployment.
The complete research agent from this tutorial is available as open-source YAML configurations:
GitHub Repository: google-adk-visual-agent-builder-demo

The repo includes the YAML configurations for all four agents built in this tutorial.
Clone it, modify it, and build your own multi-agent systems!
Have you built anything with the Visual Agent Builder? I'd love to hear about your use cases and architectures.
Connect with me: