Published: Dec 30, 2025 License: MIT

A2A ADK Examples

This directory contains scenario-based examples demonstrating different capabilities of the A2A Agent Development Kit (ADK).

πŸ“ Structure

Each example is a self-contained scenario with:

  • Server: A2A server implementation with task handlers
  • Client: A2A client that demonstrates sending tasks and receiving responses
  • Configuration: Environment-based config
  • README: Detailed documentation and usage instructions
examples/
├── minimal/                   # Basic server/client without AI (echo responses)
├── default-handlers/          # Using built-in default task handlers
├── static-agent-card/         # Loading agent config from JSON file
├── ai-powered/                # Server with LLM integration
├── ai-powered-streaming/      # AI with real-time streaming
├── streaming/                 # Real-time streaming responses
├── input-required/            # Input-required flow (non-streaming and streaming)
├── callbacks/                 # Callback hooks for agent/model/tool lifecycle
├── artifacts-filesystem/      # Artifact storage using local filesystem
├── artifacts-minio/           # Artifact storage using MinIO (S3-compatible)
├── artifacts-autonomous-tool/ # LLM autonomously creates artifacts via create_artifact tool
├── queue-storage/             # Queue storage backends (in-memory and Redis)
├── tls-example/               # TLS-enabled server with HTTPS communication
└── usage-metadata/            # Token usage and execution metrics tracking

🚀 Quick Start

Running Any Example
  1. Navigate to the example directory:
cd examples/minimal
  2. Run with Docker Compose:
docker-compose up --build
  3. Or run locally:
# Terminal 1 - Server
cd server && go run main.go

# Terminal 2 - Client
cd client && go run main.go

📚 Available Examples

Core Examples
minimal/

The simplest A2A server and client setup, with a custom echo task handler.

  • Custom TaskHandler implementation
  • Basic request/response pattern
  • No external dependencies
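The echo pattern can be sketched in plain Go. Note that `Task`, the `TaskHandler` interface shape, and `EchoHandler` below are simplified stand-ins for illustration, not the ADK's actual types; see the example's source for the real signatures.

```go
package main

import "fmt"

// Task is a simplified stand-in for the ADK's task type.
type Task struct {
	ID    string
	Input string
}

// TaskHandler mirrors the handler concept from the minimal example:
// receive a task, return a response (or an error).
type TaskHandler interface {
	Handle(task Task) (string, error)
}

// EchoHandler responds with the task input unchanged, as in examples/minimal.
type EchoHandler struct{}

func (EchoHandler) Handle(task Task) (string, error) {
	return "echo: " + task.Input, nil
}

func main() {
	var h TaskHandler = EchoHandler{}
	resp, err := h.Handle(Task{ID: "t-1", Input: "hello"})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp) // echo: hello
}
```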
default-handlers/

Server using built-in default task handlers - no need to implement custom handlers.

  • WithDefaultTaskHandlers() for quick setup
  • Automatic mock responses (no LLM required)
  • Optional AI integration when LLM is configured
static-agent-card/

Demonstrates loading agent configuration from JSON files using WithAgentCardFromFile().

  • Agent metadata defined in agent-card.json
  • Runtime field overrides (URLs, ports)
  • Environment-specific configurations
ai-powered/

Custom AI task handler with LLM integration (OpenAI, Anthropic, etc.).

  • Custom AITaskHandler implementation
  • Multiple provider support
  • Environment-based LLM configuration
streaming/

Real-time streaming responses for chat-like experiences.

  • Custom StreamableTaskHandler implementation
  • Character-by-character streaming
  • Event-based communication
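Character-by-character delivery can be mimicked with a channel of chunks. This is a toy illustration of the streaming idea, not the ADK's `StreamableTaskHandler` API or its event types.

```go
package main

import (
	"fmt"
	"time"
)

// streamChars sends a response one character at a time over a channel,
// loosely mimicking event-based streaming to a client.
func streamChars(text string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for _, r := range text {
			out <- string(r)
			time.Sleep(time.Millisecond) // pace the stream
		}
	}()
	return out
}

func main() {
	// The client renders each chunk as it arrives.
	for chunk := range streamChars("hello") {
		fmt.Print(chunk)
	}
	fmt.Println()
}
```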
ai-powered-streaming/

AI-powered streaming with LLM integration.

  • Real-time AI responses
  • Streaming LLM integration
  • Event-driven architecture
input-required/

Demonstrates input-required flow where agents pause to request additional information.

  • Non-streaming: Traditional request-response with input pausing
  • Streaming: Real-time streaming that can pause for user input
  • Task state management and conversation continuity
  • Built-in input_required tool usage
  • Interactive conversation examples
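The pause/resume state management can be sketched as a small state machine. The state names below follow the bullet points above but are assumptions, not the ADK's actual constants.

```go
package main

import "fmt"

// TaskState is a simplified version of the states involved in the
// input-required flow (hypothetical names).
type TaskState string

const (
	StateWorking       TaskState = "working"
	StateInputRequired TaskState = "input-required"
	StateCompleted     TaskState = "completed"
)

// step advances a toy task: when required input is missing, the task
// pauses in input-required until the client supplies it.
func step(state TaskState, haveInput bool) TaskState {
	switch state {
	case StateWorking:
		if !haveInput {
			return StateInputRequired
		}
		return StateCompleted
	case StateInputRequired:
		if haveInput {
			return StateWorking
		}
		return StateInputRequired
	default:
		return state
	}
}

func main() {
	s := StateWorking
	s = step(s, false) // agent needs more info -> pauses
	fmt.Println(s)     // input-required
	s = step(s, true)  // client replies -> resumes
	s = step(s, true)  // finishes
	fmt.Println(s)     // completed
}
```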
callbacks/

Demonstrates callback hooks for intercepting and modifying agent execution lifecycle.

  • BeforeAgent/AfterAgent: Hook into overall agent execution
  • BeforeModel/AfterModel: Hook into LLM calls for caching, guardrails
  • BeforeTool/AfterTool: Hook into tool execution for authorization, logging
  • Flow control: skip default behavior or modify outputs
  • Use cases: guardrails, caching, logging, authorization, sanitization
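The Before/After hook pattern can be illustrated by wrapping a model call: the before hook can short-circuit (cache hit, guardrail block) and the after hook can rewrite the output. The function shapes here are illustrative only; the ADK's callback signatures will differ.

```go
package main

import (
	"fmt"
	"strings"
)

// ModelFunc is a stand-in for an LLM call.
type ModelFunc func(prompt string) string

// withModelHooks wraps a model call with before/after hooks, mirroring
// the BeforeModel/AfterModel idea: the before hook may skip the model
// entirely (returning its own response), the after hook modifies output.
func withModelHooks(
	model ModelFunc,
	before func(prompt string) (string, bool), // (response, skip model?)
	after func(output string) string,
) ModelFunc {
	return func(prompt string) string {
		if resp, skip := before(prompt); skip {
			return resp
		}
		return after(model(prompt))
	}
}

func main() {
	model := func(p string) string { return "answer to: " + p }

	guarded := withModelHooks(model,
		// Guardrail: block prompts containing a forbidden term.
		func(p string) (string, bool) {
			if strings.Contains(p, "forbidden") {
				return "request blocked by guardrail", true
			}
			return "", false
		},
		// Post-processing: normalize the output.
		func(out string) string { return strings.ToUpper(out) },
	)

	fmt.Println(guarded("hello"))
	fmt.Println(guarded("forbidden topic"))
}
```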
artifacts-filesystem/

Demonstrates artifact creation and download using local filesystem storage.

  • Filesystem storage provider
  • HTTP download endpoints
  • Client artifact download integration
artifacts-minio/

Demonstrates artifact creation and download using MinIO (S3-compatible) storage.

  • MinIO storage provider
  • S3-compatible API
  • Enterprise-ready cloud storage
artifacts-autonomous-tool/

Demonstrates autonomous artifact creation where the LLM decides when and what artifacts to create using the built-in create_artifact tool.

  • LLM autonomously creates artifacts
  • Built-in create_artifact tool enabled via configuration
  • No custom task handler required
  • AI-driven decision making for artifact creation
  • Supports multiple file types (JSON, CSV, code files, etc.)
queue-storage/

Demonstrates different queue storage backends for task management and horizontal scaling.

  • In-Memory: Simple development setup with in-memory storage
  • Redis: Enterprise-ready Redis-based queue storage
  • Docker Compose setups for both storage backends
  • Complete server and client implementations
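The idea of swappable queue backends can be captured with a small interface: an in-memory FIFO for development, with Redis as a drop-in replacement behind the same interface. This is a conceptual sketch, not the ADK's storage interface.

```go
package main

import "fmt"

// Queue abstracts over storage backends: memQueue here for development,
// with a Redis-backed implementation slotting in behind the same interface.
type Queue interface {
	Enqueue(taskID string)
	Dequeue() (string, bool)
}

// memQueue is a simple in-memory FIFO.
type memQueue struct{ items []string }

func (q *memQueue) Enqueue(taskID string) { q.items = append(q.items, taskID) }

func (q *memQueue) Dequeue() (string, bool) {
	if len(q.items) == 0 {
		return "", false
	}
	head := q.items[0]
	q.items = q.items[1:]
	return head, true
}

func main() {
	var q Queue = &memQueue{}
	q.Enqueue("task-1")
	q.Enqueue("task-2")
	id, ok := q.Dequeue()
	fmt.Println(id, ok) // task-1 true
}
```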
tls-example/

TLS-enabled A2A server demonstrating secure HTTPS communication.

  • Self-signed certificate generation
  • TLS/SSL encryption for client-server communication
  • Docker Compose orchestration with TLS setup
  • Secure task submission and response handling
usage-metadata/

Demonstrates automatic token usage and execution metrics tracking in task responses.

  • Token usage tracking (prompt_tokens, completion_tokens, total_tokens)
  • Execution statistics (iterations, messages, tool_calls, failed_tools)
  • Automatic metadata population in Task.Metadata field
  • Configuration options for enabling/disabling usage tracking
  • Cost monitoring and performance analysis use cases
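The shape of the tracked metadata can be sketched from the bullet points above. The field and key names below are inferred from this README's list, not copied from the ADK's actual types; the framework populates something like this into Task.Metadata automatically.

```go
package main

import "fmt"

// Usage mirrors the token metrics listed above (assumed field names).
type Usage struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
}

func main() {
	u := Usage{PromptTokens: 120, CompletionTokens: 80}
	u.TotalTokens = u.PromptTokens + u.CompletionTokens

	// Assemble the kind of metadata map a Task response might carry,
	// combining token usage with execution statistics.
	metadata := map[string]any{
		"prompt_tokens":     u.PromptTokens,
		"completion_tokens": u.CompletionTokens,
		"total_tokens":      u.TotalTokens,
		"iterations":        2,
		"tool_calls":        1,
		"failed_tools":      0,
	}
	fmt.Println(metadata["total_tokens"]) // 200
}
```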

🔧 Configuration

All examples follow a consistent environment variable pattern with the A2A_ prefix:

Common A2A Variables
  • A2A_SERVER_PORT: Server port (default: 8080)
  • A2A_DEBUG: Enable debug logging (default: false)
  • A2A_AGENT_NAME: Agent identifier
  • A2A_AGENT_DESCRIPTION: Agent description
  • A2A_AGENT_VERSION: Agent version
AI/LLM Configuration

For examples with AI integration:

  • A2A_AGENT_CLIENT_PROVIDER: LLM provider (openai, anthropic)
  • A2A_AGENT_CLIENT_MODEL: Model to use (gpt-4, claude-3-haiku-20240307)
  • A2A_AGENT_CLIENT_BASE_URL: Custom gateway URL (optional)
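Reading these variables with fallbacks to the documented defaults might look like this standard-library sketch (the ADK presumably has its own config loading; this just shows the pattern):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envOr returns the value of an environment variable, or a default
// when it is unset or empty.
func envOr(key, def string) string {
	if v, ok := os.LookupEnv(key); ok && v != "" {
		return v
	}
	return def
}

func main() {
	port, err := strconv.Atoi(envOr("A2A_SERVER_PORT", "8080"))
	if err != nil {
		panic(err)
	}
	debug := envOr("A2A_DEBUG", "false") == "true"
	provider := envOr("A2A_AGENT_CLIENT_PROVIDER", "openai")

	fmt.Printf("port=%d debug=%v provider=%s\n", port, debug, provider)
}
```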

See each example's README for specific configuration details.

📖 Learning Path

Recommended progression:

  1. minimal/ - Understand basic A2A protocol and custom task handlers
  2. default-handlers/ - Learn built-in handlers for rapid development
  3. static-agent-card/ - Externalize agent configuration to JSON files
  4. input-required/ - Learn input-required flow for interactive conversations
  5. callbacks/ - Hook into agent lifecycle for guardrails, caching, and logging
  6. artifacts-filesystem/ - Add file generation and download capabilities
  7. ai-powered/ - Add LLM integration for intelligent responses
  8. streaming/ - Implement real-time streaming capabilities
  9. ai-powered-streaming/ - Combine AI integration with real-time streaming
  10. artifacts-autonomous-tool/ - Enable LLM to autonomously create artifacts
  11. artifacts-minio/ - Enterprise-ready artifact storage with MinIO
  12. queue-storage/ - Learn different queue storage backends for scaling
  13. tls-example/ - Learn TLS/SSL encryption and secure communication
  14. usage-metadata/ - Track token usage and execution metrics for cost monitoring

For detailed setup instructions, configuration options, and troubleshooting, see each example's individual README file.

For more information about the A2A protocol and framework, see the main README or refer to the official documentation.
