Building AI Agents Visually with Google ADK Visual Agent Builder

THOMAS CHONG
11 NOV 2025
πŸ“– BLOG

If you've ever built multi-agent systems, you know the pain: juggling YAML files, mentally tracking agent hierarchies, debugging configuration errors, and constantly referencing documentation to get the syntax right. Google's Agent Development Kit (ADK) just changed that game.

Google ADK v1.18.0 introduced the Visual Agent Builder - a browser-based interface that lets you design, configure, and test complex multi-agent systems through drag-and-drop interactions and natural language conversations. No more hand-crafting YAML. No more syntax errors. Just pure agent architecture.

I've spent the last few days exploring this feature, and I'm genuinely excited about what it enables. Let me show you how to build a research agent system from scratch using the Visual Builder.

What is Google ADK Visual Agent Builder?

The Visual Agent Builder is a web-based IDE for creating ADK agents. Think of it as a combination of a visual workflow designer, configuration editor, and AI assistant all working together. Here's what makes it powerful: a canvas that renders your agent hierarchy and updates in real time, a configuration editor for agent properties, tools, sub-agents, and callbacks, and a Gemini-powered assistant that can generate entire architectures from a plain-English description.

The beauty is that everything you do in the Visual Builder generates proper ADK YAML configurations under the hood. You can export them, version control them, and deploy them just like code-based agents.
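To make that concrete, here's roughly what one generated file looks like. This is a sketch based on ADK's agent-config schema - the exact fields the builder emits may vary by version:

# root_agent.yaml - illustrative sketch, not the builder's exact output
agent_class: LlmAgent
name: root_agent
model: gemini-2.5-flash
description: Entry point that coordinates the research workflow.
instruction: You coordinate research requests and delegate to sub-agents.
tools:
  - name: google_search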

When Should You Use the Visual Builder?

The Visual Builder shines in specific scenarios: rapidly prototyping multi-agent architectures, learning ADK by seeing hierarchies rendered visually, and understanding or debugging complex systems where a diagram beats a directory of YAML files.

That said, there are times when code-first development makes more sense: infrastructure-as-code workflows, CI/CD pipelines, or when you need programmatic agent generation. The Visual Builder and Python-based approach complement each other.

Getting Started with ADK and the Visual Builder

Prerequisites

Before diving in, make sure you have a recent Python environment with pip, and a Gemini API key (or Vertex AI credentials) for the models used in this tutorial.
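If you're using Gemini through an API key, ADK's quickstart convention is to supply credentials via environment variables (shown here for a Unix shell; the key value is a placeholder):

# Tell ADK to use the Gemini API rather than Vertex AI
export GOOGLE_GENAI_USE_VERTEXAI=FALSE
# Your Gemini API key from Google AI Studio
export GOOGLE_API_KEY="your-api-key-here"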

Installation

Install the latest version of Google ADK:

# Install or upgrade to ADK 1.18.0+
pip install --upgrade google-adk
# Verify installation
adk --version

Launching the Web Interface

ADK includes a built-in web server with the Visual Builder:

# Launch ADK web interface
adk web
# By default, this starts a server at http://localhost:8000
# The Visual Builder is accessible at http://localhost:8000/dev-ui/

Once the server starts, open http://localhost:8000/dev-ui/ in your browser. You'll see the ADK Dev UI landing page.
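If port 8000 is taken, or your agent configs live in a specific directory, the CLI takes options for both. (Assumption: these flags match recent ADK versions - check adk web --help for yours.)

# Serve agents from a specific directory on a different port
adk web --port 8080 path/to/your/agents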

Screenshot: The Visual Builder interface showing the three main panels - Configuration Editor (left), Agent Canvas (center), and AI Assistant (right)

Left Panel: Configuration Editor

This is where you configure individual agent properties:

- Agent name, model, and description
- The instruction that defines the agent's behavior
- Tool and sub-agent assignments
- Callbacks

Center Panel: Agent Canvas

The canvas provides a visual representation of your agent hierarchy:

- Each agent appears as a node, with parent-child relationships drawn as edges
- Tool assignments are shown alongside their agents
- Nodes are interactive - click one to load its configuration in the left panel

The canvas updates in real-time as you make changes in the configuration panel.

Right Panel: AI Assistant

The Agent Builder Assistant is powered by Gemini and can:

- Generate complete multi-agent architectures from a natural-language description
- Ask clarifying questions before committing to a design
- Refine or modify an existing configuration on request
- Answer questions about ADK concepts and best practices

You interact with it through a chat interface - just describe what you want to build.

graph LR
    A[Configuration Panel<br/>Agent Properties<br/>Tools & Sub-agents<br/>Callbacks] --> B[Visual Canvas<br/>Agent Hierarchy<br/>Real-time Updates<br/>Interactive Nodes]
    B --> C[AI Assistant<br/>Natural Language<br/>Architecture Generation<br/>Q&A Support]
    C --> A
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style C fill:#e8f5e9,stroke:#388e3c,stroke-width:2px

Diagram: The three integrated panels of the Visual Agent Builder work together to provide a seamless development experience

Step-by-Step Tutorial: Building a Research Agent

Let me walk you through building a complete research agent system. This agent will:

  1. Accept research topics from users
  2. Search Google for relevant information
  3. Use a Loop pattern for iterative refinement
  4. Present synthesized findings

The workflow we'll follow demonstrates the AI-first approach:

graph TD
    A[Create Project] --> B[Describe to AI Assistant]
    B --> C[AI Generates Architecture]
    C --> D[Review on Canvas]
    D --> E{Satisfied?}
    E -->|No| F[Refine with AI]
    F --> C
    E -->|Yes| G[Save Configuration]
    G --> H[Test with Real Queries]
    H --> I{Works Well?}
    I -->|No| F
    I -->|Yes| J[Deploy]
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style B fill:#fff3e0,stroke:#f57c00,stroke-width:2px
    style C fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
    style D fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style G fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    style H fill:#fce4ec,stroke:#c2185b,stroke-width:2px
    style J fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px

Diagram: AI-first development workflow - describe intent, let AI generate, review visually, test iteratively

Step 1: Creating a New Agent Project

From the ADK Dev UI landing page:

  1. Click the dropdown that says "Select an agent"
  2. Click the "+" button (or "add" icon) next to the dropdown
  3. A dialog appears: "Create a new app"
  4. Enter a name: research_agent (the name must be a valid Python identifier)
  5. Click Create

Screenshot: The "Create a new app" dialog for creating a new agent project

The Visual Builder opens in a new view with mode=builder in the URL. You'll see the default configuration for a new LlmAgent.

Step 2: Using the AI Assistant to Design Your Agent

Here's where the Visual Builder truly shines. Instead of manually configuring each agent, tool, and parameter, you'll describe what you want to the AI Assistant in plain English, and it will generate the entire architecture for you.

In the AI Assistant panel on the right side, type the following prompt:

Screenshot: Typing a comprehensive natural language prompt to the AI Assistant describing the research agent requirements

Create a research agent that uses Google Search with an iterative refinement pattern. The agent should:

1. Accept research topics from users
2. Use a Loop agent pattern to iteratively improve search queries and gather information
3. Have specialized sub-agents:
   - One for analyzing and refining search queries (use gemini-2.5-pro for better reasoning)
   - One for executing searches and extracting insights (use gemini-2.5-flash for speed)
4. Include proper loop termination with exit_loop tool
5. Use Google Search as the primary tool
6. Limit to 3 iterations maximum

The architecture should follow ADK best practices with proper agent hierarchy and tool assignments.

The AI Assistant will ask clarifying questions to ensure it understands your requirements:

Screenshot: AI Assistant asking for confirmation about model choices before generating the architecture

After you confirm the details (in this case, specifying model choices for each agent), the AI Assistant will propose a complete architecture:

Screenshot: AI Assistant proposing a complete 4-file YAML architecture including root agent, loop agent, and two specialized sub-agents with detailed instructions and tool assignments

The AI Assistant generates:

1. Complete Project Structure:

research_agent/
β”œβ”€β”€ root_agent.yaml
β”œβ”€β”€ research_loop_agent.yaml
β”œβ”€β”€ query_refinement_agent.yaml
└── search_execution_agent.yaml

2. Detailed YAML Configurations: one file per agent, with models, tools, and sub-agent references wired up (see the sketch after this list)

3. Proper Instructions: Each agent gets role-specific instructions explaining its purpose and behavior

4. Tool Assignments: google_search and exit_loop tools added where appropriate
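As a sketch of what the loop agent's file might contain (field names follow ADK's agent-config schema; treat the exact shape as illustrative rather than the assistant's verbatim output):

# research_loop_agent.yaml - illustrative sketch
agent_class: LoopAgent
name: research_loop_agent
description: Iteratively refines queries and gathers research findings.
max_iterations: 3
sub_agents:
  - config_path: query_refinement_agent.yaml
  - config_path: search_execution_agent.yaml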

Once you approve the proposal, the AI Assistant creates all the agents and updates the visual canvas.

Step 3: Reviewing the Generated Architecture

After the AI Assistant creates the agents, the visual canvas updates to show your complete multi-agent system:

Screenshot: Visual canvas showing the complete agent hierarchy with root agent, loop agent, and two specialized sub-agents, along with tool assignments

You can see the full hierarchy: the root agent at the top, the research loop agent beneath it, and the two specialized sub-agents as leaves, with their tool assignments alongside.

Click on any agent in the canvas to inspect its configuration. For example, clicking on research_loop_agent shows:

Screenshot: LoopAgent configuration panel showing max_iterations set to 3, with exit_loop tool and sub-agent assignments

Key configuration highlights:

- max_iterations is set to 3, matching the prompt's three-iteration cap
- The exit_loop tool is wired in so the loop can terminate early once findings are sufficient
- Both specialized sub-agents are assigned to run inside the loop

Let's examine one of the sub-agents. Click on query_refinement_agent in the canvas:

Screenshot: Query refinement agent configuration showing gemini-2.5-pro model selection, google_search tool, and detailed instructions

Notice how the AI Assistant:

- Selected gemini-2.5-pro for the reasoning-heavy query refinement work
- Attached the google_search tool
- Wrote detailed, role-specific instructions for the agent

The complete architecture generated looks like this:

root_research_agent (ROOT)
└── research_loop_agent (LoopAgent, max_iterations=3)
    β”œβ”€β”€ query_refinement_agent (LlmAgent, gemini-2.5-pro)
    β”‚   └── tool: google_search
    └── search_execution_agent (LlmAgent, gemini-2.5-flash)
        └── tool: google_search

graph TD
    A[root_research_agent<br/>ROOT - LlmAgent<br/>gemini-2.5-flash] --> B[research_loop_agent<br/>LoopAgent<br/>max_iterations: 3<br/>tool: exit_loop]

    B --> C[query_refinement_agent<br/>LlmAgent<br/>Model: gemini-2.5-pro<br/>Tool: google_search]

    B --> D[search_execution_agent<br/>LlmAgent<br/>Model: gemini-2.5-flash<br/>Tool: google_search]

    style A fill:#e1f5ff,stroke:#01579b,stroke-width:3px
    style B fill:#fff9c4,stroke:#f57f17,stroke-width:2px
    style C fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
    style D fill:#f3e5f5,stroke:#4a148c,stroke-width:2px

Diagram: Research Agent Architecture showing the hierarchical structure generated by the AI Assistant
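For reference, a sub-agent file in this layout might look like the following sketch (again, hedged against the exact schema of your ADK version; the instruction text is paraphrased, not the assistant's actual output):

# query_refinement_agent.yaml - illustrative sketch
agent_class: LlmAgent
name: query_refinement_agent
model: gemini-2.5-pro
description: Analyzes findings so far and refines the next search query.
instruction: >
  Review the research gathered in previous iterations, identify gaps,
  and propose a sharper search query for the next pass.
tools:
  - name: google_search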

Step 4: Saving the Configuration

Before testing, click the "Save" button in the left configuration panel. The Visual Builder validates your setup and writes all YAML files to disk in your project directory:

research_agent_demo/
β”œβ”€β”€ root_agent.yaml
β”œβ”€β”€ research_loop_agent.yaml
β”œβ”€β”€ query_refinement_agent.yaml
└── search_execution_agent.yaml

All the agent configurations, tool assignments, and instructions from the AI Assistant conversation are now saved as code. You can version control these files, edit them manually if needed, or share them with your team.
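Because the files are now on disk, you can also exercise the saved agent from the terminal without the web UI. (Assumption: adk run accepts the agent project directory, per the standard ADK CLI.)

# Chat with the saved agent from the command line
adk run research_agent_demo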

Step 5: Testing Your Agent

Now for the exciting part - seeing your multi-agent system in action. Click the "Exit Builder Mode" button to switch to the test interface.

Type a research query in the chat:

Research the latest developments in quantum computing error correction in 2024.

Screenshot: Complete test execution showing all 3 loop iterations and the final comprehensive research summary about quantum computing error correction

The agent executes through multiple iterations. In each pass, query_refinement_agent reviews what has been gathered so far and sharpens the search query, then search_execution_agent runs the search and extracts new insights. Once the findings are sufficient - or the three-iteration cap is reached - the loop terminates via exit_loop.

Final Output: a comprehensive, well-structured research summary of the topic.
This demonstrates the power of the iterative refinement pattern - the agent didn't just perform one search and call it done. It analyzed gaps, refined its approach, and synthesized information across multiple iterations to produce a thorough, well-structured answer.

Execution Notes:

You can view the complete execution trace in the Events tab, showing all LLM calls, tool invocations, agent transfers, and loop iteration boundaries.

Conclusion

The Visual Agent Builder transforms how we build AI agent systems. The real breakthrough isn't just the visual canvas or configuration panels - it's the AI Assistant-first approach that lets you describe what you want in natural language and get a complete, working multi-agent architecture in seconds.

Instead of wrestling with YAML syntax, manually configuring each agent, and debugging nested hierarchies, you can describe the system in plain English, review what the AI generates on the canvas, and validate it against real queries - all in minutes.

Key Takeaways:

βœ… AI Assistant is the killer feature: Describe requirements in natural language, get complete agent architectures with proper model selection, tool assignments, and instructions

βœ… Visual feedback accelerates development: The canvas makes complex hierarchies tangible - you can see exactly what the AI generated and how agents relate to each other

βœ… Iterative refinement pattern works: As demonstrated with the research agent, loop agents can intelligently refine their approach across multiple iterations

βœ… Production-ready output: Everything generates proper ADK YAML that you can version control, deploy, and share with your team

Next Steps:

  1. Install ADK 1.18.0+ and launch the Visual Builder (adk web)
  2. Start with the AI Assistant - describe an agent system and let it generate the architecture
  3. Review the generated config on the canvas and in the configuration panels
  4. Test immediately using real queries to see how your agents perform
  5. Save and iterate - refine the architecture based on test results

The Visual Builder doesn't replace code-based agent development - it accelerates it. Use the AI Assistant to prototype architectures, the visual canvas to understand complex systems, and the testing interface to validate behavior. Then export the YAML for production deployment.
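When you reach that last step, ADK's CLI includes a deploy command. As a sketch (project and region values are placeholders; confirm the flags with adk deploy --help):

# Deploy the agent to Cloud Run
adk deploy cloud_run \
  --project your-gcp-project \
  --region us-central1 \
  research_agent_demo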

Try It Yourself

The complete research agent from this tutorial is available as open-source YAML configurations:

πŸ“ GitHub Repository: google-adk-visual-agent-builder-demo

The repo includes the complete YAML configurations for the research agent built in this tutorial.

Clone it, modify it, and build your own multi-agent systems!

Have you built anything with the Visual Agent Builder? I'd love to hear about your use cases and architectures.
