# A2A ADK Examples
This directory contains scenario-based examples demonstrating different capabilities of the A2A Agent Development Kit (ADK).
## Structure
Each example is a self-contained scenario with:
- **Server**: A2A server implementation with task handlers
- **Client**: A2A client that demonstrates sending tasks and receiving responses
- **Configuration**: Environment-based config
- **README**: Detailed documentation and usage instructions
```
examples/
├── minimal/                    # Basic server/client without AI (echo responses)
├── default-handlers/           # Using built-in default task handlers
├── static-agent-card/          # Loading agent config from JSON file
├── ai-powered/                 # Server with LLM integration
├── ai-powered-streaming/       # AI with real-time streaming
├── streaming/                  # Real-time streaming responses
├── input-required/             # Input-required flow (non-streaming and streaming)
├── callbacks/                  # Callback hooks for agent/model/tool lifecycle
├── artifacts-filesystem/       # Artifact storage using local filesystem
├── artifacts-minio/            # Artifact storage using MinIO (S3-compatible)
├── artifacts-autonomous-tool/  # LLM autonomously creates artifacts via create_artifact tool
├── queue-storage/              # Queue storage backends (in-memory and Redis)
├── tls-example/                # TLS-enabled server with HTTPS communication
└── usage-metadata/             # Token usage and execution metrics tracking
```
## Quick Start
### Running Any Example
- Navigate to the example directory:

  ```bash
  cd examples/minimal
  ```

- Run with Docker Compose:

  ```bash
  docker-compose up --build
  ```

- Or run locally:

  ```bash
  # Terminal 1 - Server
  cd server && go run main.go

  # Terminal 2 - Client
  cd client && go run main.go
  ```
## Available Examples
### Core Examples
#### minimal/
The simplest A2A server and client setup, with a custom echo task handler (sketched below).
- Custom `TaskHandler` implementation
- Basic request/response pattern
- No external dependencies
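
A rough sketch of the echo pattern. The types below are hypothetical stand-ins for the ADK's real `TaskHandler` interface; see `minimal/server/main.go` for the actual signatures.

```go
// Illustrative sketch only: Task, Message, and HandleTask are stand-ins for
// the ADK's real types, shown to convey the request/response shape.
package main

import (
	"context"
	"fmt"
)

type Message struct{ Text string }

type Task struct {
	ID     string
	Input  Message
	Output Message
}

// EchoHandler is a custom handler that echoes the input back unchanged.
type EchoHandler struct{}

func (EchoHandler) HandleTask(ctx context.Context, t *Task) (*Task, error) {
	t.Output = Message{Text: "echo: " + t.Input.Text}
	return t, nil
}

func main() {
	out, _ := EchoHandler{}.HandleTask(context.Background(),
		&Task{ID: "1", Input: Message{Text: "hello"}})
	fmt.Println(out.Output.Text) // echo: hello
}
```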
#### default-handlers/
Server using the built-in default task handlers, so no custom handlers need to be implemented (see the sketch below).
- `WithDefaultTaskHandlers()` for quick setup
- Automatic mock responses (no LLM required)
- Optional AI integration when an LLM is configured
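
`WithDefaultTaskHandlers()` suggests a functional-options constructor. The sketch below shows that general pattern with invented stand-ins (`NewServer`, the `Server` struct), not the ADK's actual API.

```go
// Hypothetical sketch of the functional-options pattern implied above.
package main

import "fmt"

type Server struct{ useDefaults bool }

type Option func(*Server)

// WithDefaultTaskHandlers enables the built-in handlers: mock responses when
// no LLM is configured, AI-backed responses when one is.
func WithDefaultTaskHandlers() Option {
	return func(s *Server) { s.useDefaults = true }
}

func NewServer(opts ...Option) *Server {
	s := &Server{}
	for _, o := range opts {
		o(s)
	}
	return s
}

func main() {
	s := NewServer(WithDefaultTaskHandlers())
	fmt.Println("default handlers enabled:", s.useDefaults)
}
```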
#### static-agent-card/
Demonstrates loading agent configuration from JSON files using `WithAgentCardFromFile()`.
- Agent metadata defined in `agent-card.json`
- Runtime field overrides (URLs, ports)
- Environment-specific configurations
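
A sketch of what loading a card from JSON with a runtime override can look like. The `AgentCard` fields and the `A2A_AGENT_URL` variable are assumptions for illustration, not the ADK's schema.

```go
// Hypothetical sketch: load agent metadata from agent-card.json, then
// override environment-specific fields at runtime.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type AgentCard struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	Version     string `json:"version"`
	URL         string `json:"url"`
}

func loadCard(path string) (*AgentCard, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var card AgentCard
	if err := json.Unmarshal(data, &card); err != nil {
		return nil, err
	}
	return &card, nil
}

func main() {
	card, err := loadCard("agent-card.json")
	if err != nil {
		panic(err) // sketch-level error handling
	}
	// Runtime override, e.g. an environment-specific URL (hypothetical variable).
	if u := os.Getenv("A2A_AGENT_URL"); u != "" {
		card.URL = u
	}
	fmt.Printf("%+v\n", *card)
}
```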
#### ai-powered/
Custom AI task handler with LLM integration (OpenAI, Anthropic, etc.).
- Custom `AITaskHandler` implementation
- Multiple provider support
- Environment-based LLM configuration
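
A small sketch of environment-based LLM configuration using the `A2A_*` variables documented in the Configuration section below; the `LLMConfig` struct itself is illustrative.

```go
// Sketch: read the LLM settings from the documented A2A_* environment
// variables. LLMConfig is an invented type, not the ADK's real config struct.
package main

import (
	"fmt"
	"os"
)

type LLMConfig struct {
	Provider string // e.g. "openai" or "anthropic"
	Model    string // e.g. "gpt-4"
	BaseURL  string // optional custom gateway
}

func configFromEnv() LLMConfig {
	return LLMConfig{
		Provider: os.Getenv("A2A_AGENT_CLIENT_PROVIDER"),
		Model:    os.Getenv("A2A_AGENT_CLIENT_MODEL"),
		BaseURL:  os.Getenv("A2A_AGENT_CLIENT_BASE_URL"),
	}
}

func main() {
	cfg := configFromEnv()
	fmt.Printf("provider=%s model=%s baseURL=%s\n", cfg.Provider, cfg.Model, cfg.BaseURL)
}
```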
#### streaming/
Real-time streaming responses for chat-like experiences.
- Custom `StreamableTaskHandler` implementation
- Character-by-character streaming
- Event-based communication
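
A sketch of character-by-character streaming over a channel. `Event` and the function signature are illustrative, not the ADK's `StreamableTaskHandler` contract.

```go
// Hypothetical sketch: the handler emits events on a channel instead of
// returning one complete response.
package main

import (
	"context"
	"fmt"
)

type Event struct{ Chunk string }

// streamReply pushes the reply one rune at a time, stopping early if the
// client disconnects (context cancellation).
func streamReply(ctx context.Context, reply string, events chan<- Event) {
	defer close(events)
	for _, r := range reply {
		select {
		case <-ctx.Done():
			return
		case events <- Event{Chunk: string(r)}:
		}
	}
}

func main() {
	events := make(chan Event)
	go streamReply(context.Background(), "hello", events)
	for ev := range events {
		fmt.Print(ev.Chunk) // prints "hello" one character at a time
	}
	fmt.Println()
}
```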
#### ai-powered-streaming/
AI-powered streaming with LLM integration.
- Real-time AI responses
- Streaming LLM integration
- Event-driven architecture
#### input-required/
Demonstrates the input-required flow, where agents pause to request additional information (a simplified sketch follows below).
- **Non-streaming**: Traditional request-response with input pausing
- **Streaming**: Real-time streaming that can pause for user input
- Task state management and conversation continuity
- Built-in `input_required` tool usage
- Interactive conversation examples
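
A toy sketch of the state transitions behind the input-required flow. The state names and trigger logic here are simplified stand-ins for the protocol's actual task states.

```go
// Hypothetical sketch: a task pauses in an input-required state and resumes
// once the user supplies the missing information.
package main

import "fmt"

type TaskState string

const (
	StateWorking       TaskState = "working"
	StateInputRequired TaskState = "input-required"
	StateCompleted     TaskState = "completed"
)

type Conversation struct {
	State   TaskState
	History []string
}

// Advance pauses the task when the agent needs more information and
// resumes it once the user answers.
func (c *Conversation) Advance(userInput string) {
	c.History = append(c.History, userInput)
	if c.State == StateInputRequired {
		c.State = StateWorking // user answered; resume the task
		return
	}
	if userInput == "book a flight" { // toy trigger: agent needs a destination
		c.State = StateInputRequired
	} else {
		c.State = StateCompleted
	}
}

func main() {
	c := &Conversation{State: StateWorking}
	c.Advance("book a flight")
	fmt.Println(c.State) // input-required: agent asks a follow-up question
	c.Advance("to Berlin")
	fmt.Println(c.State) // working: task resumes with the new input
}
```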
#### callbacks/
Demonstrates callback hooks for intercepting and modifying the agent execution lifecycle (see the guardrail sketch below).
- `BeforeAgent`/`AfterAgent`: Hook into overall agent execution
- `BeforeModel`/`AfterModel`: Hook into LLM calls for caching and guardrails
- `BeforeTool`/`AfterTool`: Hook into tool execution for authorization and logging
- Flow control: skip default behavior or modify outputs
- Use cases: guardrails, caching, logging, authorization, sanitization
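
A sketch of the short-circuiting idea behind a `BeforeModel` guardrail: returning a non-nil response skips the real LLM call. All types and signatures here are invented for illustration.

```go
// Hypothetical sketch of a before-model hook with flow control.
package main

import (
	"fmt"
	"strings"
)

type ModelRequest struct{ Prompt string }
type ModelResponse struct{ Text string }

// BeforeModel returns a non-nil response to skip the real LLM call.
type BeforeModel func(req *ModelRequest) *ModelResponse

func callModel(req *ModelRequest, before BeforeModel) *ModelResponse {
	if before != nil {
		if resp := before(req); resp != nil {
			return resp // guardrail or cache hit: default behavior skipped
		}
	}
	return &ModelResponse{Text: "real LLM answer"} // placeholder for the LLM call
}

func main() {
	guardrail := func(req *ModelRequest) *ModelResponse {
		if strings.Contains(req.Prompt, "password") {
			return &ModelResponse{Text: "request blocked by guardrail"}
		}
		return nil // nil means: proceed with the normal LLM call
	}
	fmt.Println(callModel(&ModelRequest{Prompt: "what's the admin password?"}, guardrail).Text)
	fmt.Println(callModel(&ModelRequest{Prompt: "hello"}, guardrail).Text)
}
```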
#### artifacts-filesystem/
Demonstrates artifact creation and download using local filesystem storage.
- Filesystem storage provider
- HTTP download endpoints
- Client artifact download integration
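
A sketch of a filesystem-backed artifact store served over HTTP. `FSStore` and the `/artifacts/` route are assumptions for illustration, not the ADK's storage-provider API.

```go
// Hypothetical sketch: write artifacts under a base directory and expose
// them via an HTTP download endpoint.
package main

import (
	"fmt"
	"net/http"
	"os"
	"path/filepath"
)

type FSStore struct{ BaseDir string }

// Save writes the artifact to disk and returns a download path.
func (s FSStore) Save(name string, data []byte) (string, error) {
	path := filepath.Join(s.BaseDir, name)
	if err := os.WriteFile(path, data, 0o644); err != nil {
		return "", err
	}
	return "/artifacts/" + name, nil
}

func main() {
	store := FSStore{BaseDir: os.TempDir()}
	url, err := store.Save("report.txt", []byte("hello artifact"))
	if err != nil {
		panic(err)
	}
	fmt.Println("download at:", url)

	// Serve the artifacts directory as an HTTP download endpoint.
	http.Handle("/artifacts/", http.StripPrefix("/artifacts/",
		http.FileServer(http.Dir(store.BaseDir))))
	// http.ListenAndServe(":8080", nil) // uncomment to serve downloads
}
```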
#### artifacts-minio/
Demonstrates artifact creation and download using MinIO (S3-compatible) storage.
- MinIO storage provider
- S3-compatible API
- Enterprise-ready cloud storage
#### artifacts-autonomous-tool/
Demonstrates autonomous artifact creation, where the LLM decides when and what artifacts to create using the built-in `create_artifact` tool (see the sketch below).
- LLM autonomously creates artifacts
- Built-in `create_artifact` tool enabled via configuration
- No custom task handler required
- AI-driven decision making for artifact creation
- Supports multiple file types (JSON, CSV, code files, etc.)
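
A sketch of the server side of an autonomous tool call: the LLM emits a `create_artifact` call and the server materializes it. The `ToolCall` shape and argument names are illustrative.

```go
// Hypothetical sketch: dispatching a create_artifact tool call emitted by the LLM.
package main

import "fmt"

type ToolCall struct {
	Name string
	Args map[string]string
}

// handleToolCall handles the create_artifact tool. No custom task handler is
// involved: the LLM decides on its own when to emit this call.
func handleToolCall(call ToolCall) string {
	if call.Name != "create_artifact" {
		return "unknown tool"
	}
	// In the real server this would persist the content via the artifact store.
	return fmt.Sprintf("artifact %q created (%d bytes)",
		call.Args["filename"], len(call.Args["content"]))
}

func main() {
	// A call the LLM might emit after being asked for a CSV of results.
	call := ToolCall{
		Name: "create_artifact",
		Args: map[string]string{"filename": "results.csv", "content": "id,score\n1,0.9\n"},
	}
	fmt.Println(handleToolCall(call))
}
```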
#### queue-storage/
Demonstrates different queue storage backends for task management and horizontal scaling (a sketch of the pluggable-backend idea follows below).
- **In-Memory**: Simple development setup with in-memory storage
- **Redis**: Enterprise-ready Redis-based queue storage
- Docker Compose setups for both storage backends
- Complete server and client implementations
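
A sketch of the pluggable-backend idea: the server depends on a small queue interface, and in-memory or Redis implementations can be swapped in. The interface itself is invented for illustration; only the in-memory variant is shown, and a Redis variant would implement the same two methods against a Redis list.

```go
// Hypothetical sketch of a swappable queue-storage interface.
package main

import "fmt"

type TaskQueue interface {
	Enqueue(taskID string) error
	Dequeue() (string, bool)
}

// MemoryQueue is the simple development backend.
type MemoryQueue struct{ items []string }

func (q *MemoryQueue) Enqueue(taskID string) error {
	q.items = append(q.items, taskID)
	return nil
}

func (q *MemoryQueue) Dequeue() (string, bool) {
	if len(q.items) == 0 {
		return "", false
	}
	head := q.items[0]
	q.items = q.items[1:]
	return head, true
}

func main() {
	var q TaskQueue = &MemoryQueue{} // swap in a Redis-backed queue here
	q.Enqueue("task-1")
	if id, ok := q.Dequeue(); ok {
		fmt.Println("processing", id)
	}
}
```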
#### tls-example/
TLS-enabled A2A server demonstrating secure HTTPS communication.
- Self-signed certificate generation
- TLS/SSL encryption for client-server communication
- Docker Compose orchestration with TLS setup
- Secure task submission and response handling
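
A minimal sketch of HTTPS serving with Go's standard library, assuming `cert.pem`/`key.pem` were generated beforehand (the example uses self-signed certificates; the file names and port here are illustrative).

```go
// Sketch: encrypt all client-server traffic with TLS using net/http.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "secure A2A endpoint")
	})
	// ListenAndServeTLS serves HTTPS with the given certificate and key.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```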
#### usage-metadata/
Demonstrates automatic token usage and execution metrics tracking in task responses (see the sketch below).
- Token usage tracking (`prompt_tokens`, `completion_tokens`, `total_tokens`)
- Execution statistics (`iterations`, `messages`, `tool_calls`, `failed_tools`)
- Automatic metadata population in the `Task.Metadata` field
- Configuration options for enabling/disabling usage tracking
- Cost monitoring and performance analysis use cases
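
A sketch of reading usage metadata off a completed task. The metadata keys mirror the bullets above; the `Task` struct is an illustrative stand-in.

```go
// Hypothetical sketch: inspect the usage metrics populated on a task.
package main

import "fmt"

type Task struct {
	ID       string
	Metadata map[string]any
}

func main() {
	// What a populated response might look like after an AI-backed task.
	task := Task{
		ID: "task-42",
		Metadata: map[string]any{
			"prompt_tokens":     120,
			"completion_tokens": 48,
			"total_tokens":      168,
			"iterations":        2,
			"tool_calls":        1,
		},
	}
	if total, ok := task.Metadata["total_tokens"]; ok {
		fmt.Printf("task %s used %v tokens\n", task.ID, total) // cost monitoring hook
	}
}
```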
## Configuration
All examples follow a consistent environment variable pattern with the `A2A_` prefix:
### Common A2A Variables
- `A2A_SERVER_PORT`: Server port (default: `8080`)
- `A2A_DEBUG`: Enable debug logging (default: `false`)
- `A2A_AGENT_NAME`: Agent identifier
- `A2A_AGENT_DESCRIPTION`: Agent description
- `A2A_AGENT_VERSION`: Agent version
### AI/LLM Configuration
For examples with AI integration:
- `A2A_AGENT_CLIENT_PROVIDER`: LLM provider (`openai`, `anthropic`)
- `A2A_AGENT_CLIENT_MODEL`: Model to use (`gpt-4`, `claude-3-haiku-20240307`)
- `A2A_AGENT_CLIENT_BASE_URL`: Custom gateway URL (optional)
See each example's README for specific configuration details.
## Learning Path
Recommended progression:
1. `minimal/` - Understand the basic A2A protocol and custom task handlers
2. `default-handlers/` - Learn built-in handlers for rapid development
3. `static-agent-card/` - Externalize agent configuration to JSON files
4. `input-required/` - Learn the input-required flow for interactive conversations
5. `callbacks/` - Hook into the agent lifecycle for guardrails, caching, and logging
6. `artifacts-filesystem/` - Add file generation and download capabilities
7. `ai-powered/` - Add LLM integration for intelligent responses
8. `streaming/` - Implement real-time streaming capabilities
9. `ai-powered-streaming/` - Combine AI integration with real-time streaming
10. `artifacts-autonomous-tool/` - Enable the LLM to autonomously create artifacts
11. `artifacts-minio/` - Enterprise-ready artifact storage with MinIO
12. `queue-storage/` - Learn different queue storage backends for scaling
13. `tls-example/` - Learn TLS/SSL encryption and secure communication
14. `usage-metadata/` - Track token usage and execution metrics for cost monitoring
For detailed setup instructions, configuration options, and troubleshooting, see each example's individual README file.
For more information about the A2A protocol and framework, see the main README or refer to the official documentation.