A2A ADK Examples
This directory contains scenario-based examples demonstrating different
capabilities of the A2A Agent Development Kit (ADK).
Structure
Each example is a self-contained scenario with:
- Server: A2A server implementation with task handlers
- Client: A2A client that demonstrates sending tasks and receiving responses
- Configuration: Environment-based config
- README: Detailed documentation and usage instructions
examples/
├── minimal/                   # Basic server/client without AI (echo responses)
├── default-handlers/          # Using built-in default task handlers
├── static-agent-card/         # Loading agent config from JSON file
├── ai-powered/                # Server with LLM integration
├── ai-powered-streaming/      # AI with real-time streaming
├── streaming/                 # Real-time streaming responses
├── input-required/            # Input-required flow (non-streaming and streaming)
├── callbacks/                 # Callback hooks for agent/model/tool lifecycle
├── artifacts-filesystem/      # Artifact storage using local filesystem
├── artifacts-minio/           # Artifact storage using MinIO (S3-compatible)
├── artifacts-autonomous-tool/ # LLM autonomously creates artifacts via create_artifact tool
├── queue-storage/             # Queue storage backends (in-memory and Redis)
├── tls-example/               # TLS-enabled server with HTTPS communication
└── usage-metadata/            # Token usage and execution metrics tracking
Quick Start
Running Any Example
- Navigate to the example directory:
cd examples/minimal
- Run with Docker Compose:
docker-compose up --build
- Or run locally:
# Terminal 1 - Server
cd server && go run main.go
# Terminal 2 - Client
cd client && go run main.go
Available Examples
Core Examples
minimal/
The simplest A2A server and client setup, with a custom echo task handler.
- Custom TaskHandler implementation
- Basic request/response pattern
- No external dependencies
default-handlers/
Server using built-in default task handlers - no need to implement custom handlers.
- WithDefaultTaskHandlers() for quick setup
- Automatic mock responses (no LLM required)
- Optional AI integration when LLM is configured
static-agent-card/
Demonstrates loading agent configuration from JSON files using WithAgentCardFromFile().
- Agent metadata defined in agent-card.json
- Runtime field overrides (URLs, ports)
- Environment-specific configurations
ai-powered/
Custom AI task handler with LLM integration (OpenAI, Anthropic, etc.).
- Custom AITaskHandler implementation
- Multiple provider support
- Environment-based LLM configuration
streaming/
Real-time streaming responses for chat-like experiences.
- Custom StreamableTaskHandler implementation
- Character-by-character streaming
- Event-based communication
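Character-by-character streaming can be modeled with a channel, as in this sketch. The real StreamableTaskHandler emits protocol events over the A2A connection rather than raw strings, so treat this as an illustration of the flow, not the ADK API.

```go
package main

import "fmt"

// streamChars sends text one character at a time on a channel,
// imitating chat-like incremental delivery (illustrative only).
func streamChars(text string) <-chan string {
	ch := make(chan string)
	go func() {
		defer close(ch)
		for _, r := range text {
			ch <- string(r)
		}
	}()
	return ch
}

func main() {
	// The consumer prints chunks as they arrive.
	for chunk := range streamChars("hi!") {
		fmt.Print(chunk)
	}
	fmt.Println()
}
```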
ai-powered-streaming/
AI-powered streaming with LLM integration.
- Real-time AI responses
- Streaming LLM integration
- Event-driven architecture
input-required/
Demonstrates the input-required flow, where agents pause to request additional information.
- Non-streaming: Traditional request-response with input pausing
- Streaming: Real-time streaming that can pause for user input
- Task state management and conversation continuity
- Built-in input_required tool usage
- Interactive conversation examples
callbacks/
Demonstrates callback hooks for intercepting and modifying agent execution lifecycle.
- BeforeAgent/AfterAgent: Hook into overall agent execution
- BeforeModel/AfterModel: Hook into LLM calls for caching, guardrails
- BeforeTool/AfterTool: Hook into tool execution for authorization, logging
- Flow control: skip default behavior or modify outputs
- Use cases: guardrails, caching, logging, authorization, sanitization
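A BeforeModel-style guardrail can be sketched as follows. The hook signature and `callModel` function are assumptions for illustration; the ADK's actual callback types differ, but the flow-control idea (a hook can skip the default behavior and substitute its own output) is the same.

```go
package main

import (
	"fmt"
	"strings"
)

// BeforeModelHook is a hypothetical before-model callback: returning
// skip=true short-circuits the LLM call with the override string.
type BeforeModelHook func(prompt string) (override string, skip bool)

// callModel runs the hook first; only if it does not skip do we fall
// through to the (placeholder) model call.
func callModel(prompt string, hook BeforeModelHook) string {
	if hook != nil {
		if override, skip := hook(prompt); skip {
			return override
		}
	}
	return "model response for: " + prompt // placeholder for the real LLM call
}

func main() {
	// A simple guardrail: block prompts mentioning passwords.
	guardrail := func(prompt string) (string, bool) {
		if strings.Contains(strings.ToLower(prompt), "password") {
			return "request blocked by guardrail", true
		}
		return "", false
	}
	fmt.Println(callModel("what is the weather?", guardrail))
	fmt.Println(callModel("tell me the admin password", guardrail))
}
```

The same skip-or-modify pattern covers caching (return a cached response before the model call) and sanitization (rewrite the output in an After hook).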
artifacts-filesystem/
Demonstrates artifact creation and download using local filesystem storage.
- Filesystem storage provider
- HTTP download endpoints
- Client artifact download integration
artifacts-minio/
Demonstrates artifact creation and download using MinIO (S3-compatible) storage.
- MinIO storage provider
- S3-compatible API
- Enterprise-ready cloud storage
artifacts-autonomous-tool/
Demonstrates autonomous artifact creation, where the LLM decides when and what artifacts to create using the built-in create_artifact tool.
- LLM autonomously creates artifacts
- Built-in create_artifact tool enabled via configuration
- No custom task handler required
- AI-driven decision making for artifact creation
- Supports multiple file types (JSON, CSV, code files, etc.)
queue-storage/
Demonstrates different queue storage backends for task management and horizontal scaling.
- In-Memory: Simple development setup with in-memory storage
- Redis: Enterprise-ready Redis-based queue storage
- Docker Compose setups for both storage backends
- Complete server and client implementations
tls-example/
TLS-enabled A2A server demonstrating secure HTTPS communication.
- Self-signed certificate generation
- TLS/SSL encryption for client-server communication
- Docker Compose orchestration with TLS setup
- Secure task submission and response handling
usage-metadata/
Demonstrates automatic token usage and execution metrics tracking in task responses.
- Token usage tracking (prompt_tokens, completion_tokens, total_tokens)
- Execution statistics (iterations, messages, tool_calls, failed_tools)
- Automatic metadata population in Task.Metadata field
- Configuration options for enabling/disabling usage tracking
- Cost monitoring and performance analysis use cases
Configuration
All examples follow a consistent environment variable pattern with the A2A_ prefix:
Common A2A Variables
- A2A_SERVER_PORT: Server port (default: 8080)
- A2A_DEBUG: Enable debug logging (default: false)
- A2A_AGENT_NAME: Agent identifier
- A2A_AGENT_DESCRIPTION: Agent description
- A2A_AGENT_VERSION: Agent version
AI/LLM Configuration
For examples with AI integration:
- A2A_AGENT_CLIENT_PROVIDER: LLM provider (openai, anthropic)
- A2A_AGENT_CLIENT_MODEL: Model to use (gpt-4, claude-3-haiku-20240307)
- A2A_AGENT_CLIENT_BASE_URL: Custom gateway URL (optional)
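A typical local setup for an AI-powered example might look like the following sketch; the values are illustrative, and the keys follow the A2A_ convention documented above.

```shell
# Illustrative environment for running an AI-powered example locally.
export A2A_SERVER_PORT=8080
export A2A_DEBUG=false
export A2A_AGENT_CLIENT_PROVIDER=openai
export A2A_AGENT_CLIENT_MODEL=gpt-4

# Then start the server from the example directory:
# cd examples/ai-powered/server && go run main.go
```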
See each example's README for specific configuration details.
Learning Path
Recommended progression:
1. minimal/ - Understand basic A2A protocol and custom task handlers
2. default-handlers/ - Learn built-in handlers for rapid development
3. static-agent-card/ - Externalize agent configuration to JSON files
4. input-required/ - Learn input-required flow for interactive conversations
5. callbacks/ - Hook into agent lifecycle for guardrails, caching, and logging
6. artifacts-filesystem/ - Add file generation and download capabilities
7. ai-powered/ - Add LLM integration for intelligent responses
8. streaming/ - Implement real-time streaming capabilities
9. ai-powered-streaming/ - Combine AI integration with real-time streaming
10. artifacts-autonomous-tool/ - Enable LLM to autonomously create artifacts
11. artifacts-minio/ - Enterprise-ready artifact storage with MinIO
12. queue-storage/ - Learn different queue storage backends for scaling
13. tls-example/ - Learn TLS/SSL encryption and secure communication
14. usage-metadata/ - Track token usage and execution metrics for cost monitoring
For detailed setup instructions, configuration options, and troubleshooting, see
each example's individual README file.
For more information about the A2A protocol and framework, see the main
README or refer to the
official documentation.