# Go Agent
A reusable AI agent framework for Go implementing the observe → decide → act → update loop pattern for LLM-based task execution.
## Overview
go-agent provides a clean, production-ready foundation for building AI agents with tool use capabilities in Go. It features:
- **Hexagonal Architecture** – Clean separation of domain logic and infrastructure
- **Agent Loop Pattern** – Observe → Decide → Act → Update cycle for autonomous task execution
- **Event-Driven** – Observable task lifecycle via domain events
- **Memory System** – Long-term context storage with text search and embedding-based semantic search
- **Resilience Patterns** – Circuit breaker, debounce, retry, throttle, and timeout
- **Tool Use** – Extensible tool system with type-safe definitions
Works with any OpenAI-compatible API (LM Studio, Ollama, OpenAI, vLLM, etc.).
## Table of Contents
- [Architecture](#architecture)
- [CLI Usage](#cli-usage)
- [Configuration](#configuration)
- [Contributing](#contributing)
- [Creating Custom Tools](#creating-custom-tools)
- [Docker](#docker)
- [Features](#features)
- [Installation](#installation)
- [License](#license)
- [Project Structure](#project-structure)
- [Quick Start](#quick-start)
- [Testing](#testing)
## Installation
```bash
go get github.com/andygeiss/go-agent
```
Requirements: Go 1.25.5+
## Quick Start
### Run the CLI Demo
```bash
# Clone the repository
git clone https://github.com/andygeiss/go-agent.git
cd go-agent

# Start LM Studio (or any OpenAI-compatible server) on localhost:1234

# Run the CLI
go run ./cmd/cli -chatting-model <your-model-name>
```
### Use as a Library
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/andygeiss/cloud-native-utils/messaging"
	"github.com/andygeiss/go-agent/internal/adapters/outbound"
	"github.com/andygeiss/go-agent/internal/domain/agent"
	"github.com/andygeiss/go-agent/internal/domain/tooling"
)

func main() {
	// Create infrastructure
	dispatcher := messaging.NewExternalDispatcher()
	llmClient := outbound.NewOpenAIClient("http://localhost:1234", "your-model")
	toolExecutor := outbound.NewToolExecutor()
	publisher := outbound.NewEventPublisher(dispatcher)
	memoryStore := outbound.NewMemoryStore()

	// Register tools
	idGen := func() string { return fmt.Sprintf("note-%d", time.Now().UnixNano()) }
	memoryToolSvc := tooling.NewMemoryToolService(memoryStore, idGen)
	memoryGetTool := tooling.NewMemoryGetTool(memoryToolSvc)
	toolExecutor.RegisterTool(string(memoryGetTool.ID), memoryGetTool.Func)
	toolExecutor.RegisterToolDefinition(memoryGetTool.Definition)

	// Create agent
	ag := agent.NewAgent("my-agent", "You are a helpful assistant.",
		agent.WithMaxIterations(10),
		agent.WithMaxMessages(50),
	)

	// Create task service and run
	taskService := agent.NewTaskService(llmClient, toolExecutor, publisher)
	task := agent.NewTask("task-1", "chat", "What do you remember about my preferences?")
	result, err := taskService.RunTask(context.Background(), &ag, task)
	if err != nil {
		panic(err)
	}
	fmt.Println(result.Output)
}
```
## Architecture
The project follows hexagonal architecture (ports and adapters) with domain-driven design:
```
┌────────────────────────────────────────────────────────────────┐
│                            cmd/cli                             │
│                      (Application Entry)                       │
└───────────────────────────────┬────────────────────────────────┘
                                │
┌───────────────────────────────▼────────────────────────────────┐
│                        internal/domain                         │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │ agent/       Core aggregate, task service, types         │  │
│  │ chatting/    Use cases: SendMessage, ClearConversation   │  │
│  │ indexing/    Use cases: Scan, ChangedSince, DiffSnapshots│  │
│  │ memorizing/  Use cases: WriteNote, SearchNotes           │  │
│  │ tooling/     Tool implementations                        │  │
│  │ openai/      OpenAI API types (request, response, tool)  │  │
│  └──────────────────────────────────────────────────────────┘  │
└───────────────────────────────┬────────────────────────────────┘
                                │ depends on interfaces (ports)
┌───────────────────────────────▼────────────────────────────────┐
│                       internal/adapters                        │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │ inbound/   FSWalker (file system traversal)              │  │
│  │ outbound/  LLMClient, ToolExecutor, EventPublisher,      │  │
│  │            MemoryStore, IndexStore implementations       │  │
│  └──────────────────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────────────────┘
```
### Agent Loop
The core agent implements an iterative loop:
- **Observe** – Receive user input, build message context
- **Decide** – Call LLM with messages and available tools
- **Act** – Execute any tool calls requested by the LLM
- **Update** – Add results to conversation, check termination
- **Repeat** – Continue until task completes or max iterations reached (see the sketch below)
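In pseudocode form the cycle is compact. The sketch below is illustrative only: all types and helpers are hypothetical stand-ins, and the real implementation lives in `internal/domain/agent`.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the framework's real types (illustration only).
type message struct{ role, content string }
type toolCall struct{ name, args string }

// decide stands in for the LLM call; a real client would hit the chat API.
func decide(ctx context.Context, msgs []message) (string, []toolCall, error) {
	return "done", nil, nil
}

// act stands in for tool execution via the tool executor.
func act(ctx context.Context, tc toolCall) message {
	return message{role: "tool", content: "result of " + tc.name}
}

// runLoop shows the observe → decide → act → update cycle.
func runLoop(ctx context.Context, input string, maxIterations int) (string, error) {
	msgs := []message{{role: "user", content: input}} // Observe
	for i := 0; i < maxIterations; i++ {
		reply, calls, err := decide(ctx, msgs) // Decide
		if err != nil {
			return "", err
		}
		if len(calls) == 0 { // Update: no tool calls means the task is done
			return reply, nil
		}
		for _, tc := range calls { // Act
			msgs = append(msgs, act(ctx, tc)) // Update
		}
	}
	return "", errors.New("max iterations reached")
}

func main() {
	out, err := runLoop(context.Background(), "hello", 10)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```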
For detailed architecture documentation, see [CONTEXT.md](CONTEXT.md).
## Features
### Built-in Tools (alphabetically sorted)
| Tool | Description |
|---|---|
| `index.changed_since` | Find files modified after a given timestamp |
| `index.diff_snapshot` | Compare two snapshots to find added/changed/removed files |
| `index.scan` | Scan directories and create a file system snapshot |
| `memory_get` | Retrieve a specific memory note by ID |
| `memory_search` | Search memory notes with query, source types, and importance filters |
| `memory_write` | Store a typed memory note with metadata and importance |
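Continuing the Quick Start example, registering the whole memory tool set might look like the sketch below. Note that `NewMemorySearchTool` and `NewMemoryWriteTool` are assumed to mirror the `NewMemoryGetTool` constructor shown earlier; verify the actual names in `internal/domain/tooling`.

```go
// Sketch: register the memory tool set with the executor.
// NewMemorySearchTool and NewMemoryWriteTool are assumed constructors
// (mirroring NewMemoryGetTool); check internal/domain/tooling for the real API.
svc := tooling.NewMemoryToolService(memoryStore, idGen)
for _, tool := range []agent.Tool{
	tooling.NewMemoryGetTool(svc),
	tooling.NewMemorySearchTool(svc), // assumed
	tooling.NewMemoryWriteTool(svc),  // assumed
} {
	toolExecutor.RegisterTool(string(tool.ID), tool.Func)
	toolExecutor.RegisterToolDefinition(tool.Definition)
}
```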
### Typed Memory System
Memory notes are categorized by source type for schema-aware storage and retrieval:
| Source Type | Use Case | Default Importance |
|---|---|---|
| `decision` | Architectural or design decisions | 4 |
| `experiment` | Hypotheses and experimental results | 3 |
| `external_source` | URLs and external references | 2 |
| `fact` | Verified information about the system | 3 |
| `issue` | Problems, bugs, or blockers | 4 |
| `plan_step` | Steps in a task plan | 3 |
| `preference` | User preferences and settings | 4 |
| `requirement` | Must-have requirements | 5 |
| `retrospective` | Lessons learned | 3 |
| `summary` | Condensed information from sources | 3 |
The `memory_search` tool supports filtering by `source_types` and `min_importance`, enabling precise retrieval of relevant context.
Helper constructors for schema-aware note creation:
```go
// Create typed notes with appropriate defaults
decision := agent.NewDecisionNote("note-1", "Use PostgreSQL", "database", "architecture")
requirement := agent.NewRequirementNote("note-2", "Must support 1000 concurrent users")
experiment := agent.NewExperimentNote("note-3", "Caching improves latency", "50% reduction")
```
Programmatic search by type:
```go
svc := memorizing.NewService(store)

// Search specific types
decisions, _ := svc.SearchDecisions(ctx, "database", 10)
requirements, _ := svc.SearchRequirements(ctx, "scalability", 10)

// Search with combined filters
opts := &agent.MemorySearchOptions{
	SourceTypes:   []agent.SourceType{agent.SourceTypeDecision},
	MinImportance: 4,
	Tags:          []string{"architecture"},
}
notes, _ := store.Search(ctx, "query", 10, opts)
```
### Embedding-Based Semantic Search
Memory notes support vector embeddings for semantic similarity search using cosine similarity:
```go
// Store a note with an embedding (embeddings are generated by an external model)
embedding := getEmbedding("Go is a statically typed language") // your embedding function
note := agent.NewMemoryNote("note-1", agent.SourceTypeFact).
	WithRawContent("Go is a statically typed language").
	WithEmbedding(embedding)
_ = store.Write(ctx, note)

// Search with cosine similarity ranking
queryEmbedding := getEmbedding("programming languages")
results, _ := store.SearchWithEmbedding(ctx, "languages", queryEmbedding, 10, nil)
```
Key features:
- `Embedding` type – `[]float32` vector representation
- `WithEmbedding()` – Builder method to attach an embedding to a note
- `SearchWithEmbedding()` – Ranks results by cosine similarity
- Falls back to importance-based sorting when no query embedding is provided
- Supports common embedding dimensions (128, 512, 1536 for OpenAI ada-002)
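The ranking metric itself is standard cosine similarity. A minimal sketch under that assumption (not the store's actual code; uses only the standard `math` package):

```go
import "math"

// cosineSimilarity computes dot(a, b) / (|a| * |b|), the cosine of the
// angle between two vectors. Sketch of the ranking metric, not the
// store's actual implementation.
func cosineSimilarity(a, b []float32) float32 {
	if len(a) != len(b) || len(a) == 0 {
		return 0
	}
	var dot, normA, normB float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		normA += float64(a[i]) * float64(a[i])
		normB += float64(b[i]) * float64(b[i])
	}
	if normA == 0 || normB == 0 {
		return 0 // avoid division by zero for zero vectors
	}
	return float32(dot / (math.Sqrt(normA) * math.Sqrt(normB)))
}
```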
### Domain Events (alphabetically sorted)
Subscribe to task lifecycle events:
- `agent.task.completed` – Task finishes successfully
- `agent.task.failed` – Task terminates with an error
- `agent.task.started` – Task begins execution
- `agent.toolcall.executed` – Tool call completes
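Subscription goes through the `messaging` dispatcher from `cloud-native-utils`. The sketch below assumes a hypothetical topic-based `Subscribe(topic, handler)` method purely for illustration; check the library (and VENDOR.md) for the actual API.

```go
// Hypothetical subscription sketch; Subscribe's signature is an assumption,
// not the confirmed cloud-native-utils API.
dispatcher.Subscribe("agent.task.completed", func(payload []byte) {
	log.Println("task completed:", string(payload))
})
```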
### Lifecycle Hooks (alphabetically sorted)
```go
hooks := agent.NewHooks().
	WithAfterLLMCall(func(ctx context.Context, ag *agent.Agent, t *agent.Task) error {
		log.Println("LLM response received")
		return nil
	}).
	WithAfterTask(func(ctx context.Context, ag *agent.Agent, t *agent.Task) error {
		log.Println("Task finished:", t.Status)
		return nil
	}).
	WithAfterToolCall(func(ctx context.Context, ag *agent.Agent, tc *agent.ToolCall) error {
		log.Println("Tool executed:", tc.Name, "→", tc.Result)
		return nil
	}).
	WithBeforeLLMCall(func(ctx context.Context, ag *agent.Agent, t *agent.Task) error {
		log.Println("Calling LLM...")
		return nil
	}).
	WithBeforeTask(func(ctx context.Context, ag *agent.Agent, t *agent.Task) error {
		log.Println("Starting task:", t.Name)
		return nil
	}).
	WithBeforeToolCall(func(ctx context.Context, ag *agent.Agent, tc *agent.ToolCall) error {
		log.Println("Executing tool:", tc.Name)
		return nil
	})

taskService.WithHooks(hooks)
```
### Resilience Patterns
The `OpenAIClient` includes configurable resilience (alphabetically sorted):
- Circuit Breaker: Opens after 5 consecutive failures (configurable)
- Debounce: Coalesces rapid calls (disabled by default)
- Retry: 3 attempts with 2s delay (configurable)
- Throttling: Rate limiting via token bucket (disabled by default)
- Timeout: HTTP (60s) and LLM call (120s) timeouts (configurable)
## CLI Usage
```bash
go run ./cmd/cli [flags]
```
### Commands (during chat, alphabetically sorted)
| Command | Description |
|---|---|
| `clear` | Reset conversation history |
| `help` | Show available commands |
| `index changed [since]` | Find files changed since timestamp/duration (default: 24h) |
| `index diff <from> <to>` | Compare two snapshots |
| `index scan [paths...]` | Scan directories (default: current directory) |
| `memory delete <id>` | Delete a memory note by ID |
| `memory get <id>` | Retrieve a memory note by ID |
| `memory search [opts] <query>` | Search memory notes (opts: `--source-type`, `--min-importance`, `--tags`) |
| `memory write [opts] <content>` | Store a memory note (opts: `--source-type`, `--importance`, `--tags`) |
| `quit` / `exit` | Exit the CLI |
| `stats` | Show agent statistics |
### Flags (alphabetically sorted)
| Flag | Default | Description |
|---|---|---|
| `-chatting-model` | `$OPENAI_CHAT_MODEL` | Model name |
| `-chatting-url` | `http://localhost:1234` | OpenAI-compatible API base URL |
| `-embedding-model` | `$OPENAI_EMBED_MODEL` | Embedding model name (empty = no embeddings) |
| `-embedding-url` | `$OPENAI_EMBED_URL` or `http://localhost:1234` | Embedding API URL |
| `-index-file` | `""` | JSON file for persistent indexing (empty = in-memory) |
| `-max-iterations` | `10` | Max iterations per task |
| `-max-messages` | `50` | Max messages to retain (0 = unlimited) |
| `-memory-file` | `""` | JSON file for persistent memory (empty = in-memory) |
| `-parallel-tools` | `false` | Execute tools in parallel |
| `-verbose` | `false` | Show detailed metrics |
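As an example, an invocation combining several of these flags might look like this (the model name and file path are placeholders):

```bash
go run ./cmd/cli \
  -chatting-model your-model-name \
  -chatting-url http://localhost:1234 \
  -memory-file ./memory.json \
  -max-iterations 20 \
  -verbose
```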
## Creating Custom Tools
```go
package tooling

import (
	"context"

	"github.com/andygeiss/go-agent/internal/domain/agent"
)

// NewMyTool defines the tool.
func NewMyTool() agent.Tool {
	return agent.Tool{
		ID: "my_tool",
		Definition: agent.NewToolDefinition("my_tool", "Description of what it does").
			WithParameter("input", "The input parameter"),
		Func: MyToolFunc,
	}
}

// MyToolFunc implements the tool function.
func MyToolFunc(ctx context.Context, arguments string) (string, error) {
	var args struct {
		Input string `json:"input"`
	}
	if err := agent.DecodeArgs(arguments, &args); err != nil {
		return "", err
	}
	// Your tool logic here
	return "result", nil
}
```
Register the tool:
```go
myTool := tooling.NewMyTool()
executor.RegisterTool("my_tool", myTool.Func)
executor.RegisterToolDefinition(myTool.Definition)
```
## Configuration
### Agent Options (alphabetically sorted)
```go
agent.NewAgent("id", "system prompt",
	agent.WithMaxIterations(20), // Max loop iterations per task
	agent.WithMaxMessages(100),  // Message history limit (0 = unlimited)
	agent.WithMetadata(agent.Metadata{
		"model": "gpt-4",
		"user":  "alice",
	}),
)
```
### LLM Client Options (alphabetically sorted)
```go
client := outbound.NewOpenAIClient(baseURL, model).
	WithCircuitBreaker(10).               // Open after 10 failures
	WithDebounce(500 * time.Millisecond). // Coalesce rapid calls
	WithHTTPClient(customClient).         // Custom HTTP client
	WithLLMTimeout(180 * time.Second).    // LLM call timeout
	WithLogger(slog.Default()).           // Structured logging
	WithRetry(5, 3*time.Second).          // 5 attempts, 3s delay
	WithThrottle(100, 10, time.Second)    // tokens, refill, period
```
### Task Service Options
```go
taskService := agent.NewTaskService(llm, executor, publisher).
	WithHooks(hooks).           // Lifecycle hooks
	WithParallelToolExecution() // Enable parallel tool calls
```
## Docker
### Build
```bash
docker build -t go-agent .
```
### Run with Docker Compose
```bash
# Create .env file with required variables
echo "APP_SHORTNAME=go-agent" > .env
echo "USER=$(whoami)" >> .env

# Start services
docker-compose up -d
```
## Testing
```bash
# Run all tests (~2s)
go test ./...

# Run with coverage
go test -cover ./...

# Run integration tests (requires LM Studio or compatible server)
OPENAI_CHAT_MODEL="your-model" OPENAI_CHAT_URL="http://localhost:1234" \
  go test -tags=integration ./...

# Run all benchmarks (PGO profiling)
go test -bench=. ./cmd/cli/...

# Run specific benchmark categories
go test -bench=Memory ./cmd/cli/...      # Memory system benchmarks
go test -bench=FullStack ./cmd/cli/...   # End-to-end benchmarks
go test -bench=TaskService ./cmd/cli/... # Task service benchmarks

# Run benchmarks with custom time
go test -bench=. -benchtime=1s ./cmd/cli/...
```
### Integration Tests
Integration tests are guarded by the `//go:build integration` build tag and require:

- A running LM Studio (or compatible) server
- Environment variables: `OPENAI_CHAT_MODEL`, `OPENAI_CHAT_URL`, `OPENAI_EMBED_MODEL`, `OPENAI_EMBED_URL`

Without `-tags=integration`, these tests are excluded from the build.
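The guard is Go's standard build-constraint mechanism: a constrained test file starts with the tag line, as in the snippet below (the package name is illustrative, not taken from the repository).

```go
//go:build integration

// This file only compiles when `go test -tags=integration` is used.
package outbound_test // illustrative package name
```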
### Benchmark Categories
The CLI benchmarks (`cmd/cli/main_test.go`) cover all domain contexts:
| Category | Description |
|----------|-------------|
| `Benchmark_FSWalker_*` | Real file system walking |
| `Benchmark_FullStack_*` | End-to-end agent with tools |
| `Benchmark_IndexingService_*` | Scan/ChangedSince/DiffSnapshots at 100/1000/10000 files |
| `Benchmark_IndexStore_*` | Snapshot save/get operations |
| `Benchmark_IndexToolService_*` | Tool-based indexing operations |
| `Benchmark_MemoryNote_*` | MemoryNote object creation/methods |
| `Benchmark_MemoryStore_*` | Raw store ops at 100/1000/10000 notes |
| `Benchmark_MemoryTools_*` | Tool-based memory operations |
| `Benchmark_MemorizingService_*` | Complete memory workflow |
| `Benchmark_*NoteUseCase` | Domain use case benchmarks |
| `Benchmark_NewFileInfo` | FileInfo object creation |
| `Benchmark_NewSnapshot` | Snapshot object creation |
| `Benchmark_SendMessageUseCase_*` | Chat use case execution |
| `Benchmark_Snapshot_*` | Snapshot method performance |
| `Benchmark_TaskService_*` | Task service with hooks/parallelism |
---
## Project Structure
```
go-agent/
├── cmd/cli/                  # CLI application
├── internal/
│   ├── adapters/
│   │   ├── inbound/              # Inbound adapters (data sources)
│   │   │   └── file_walker.go    # FileWalker → filesystem traversal
│   │   └── outbound/             # Outbound adapters (infrastructure)
│   │       ├── conversation_store.go            # ConversationStore → resource.Access
│   │       ├── encrypted_conversation_store.go  # AES-GCM encrypted variant
│   │       ├── event_publisher.go               # EventPublisher → messaging.Dispatcher
│   │       ├── index_store.go                   # IndexStore → resource.Access
│   │       ├── memory_store.go                  # MemoryStore → resource.Access
│   │       ├── openai_client.go                 # LLMClient → OpenAI-compatible API
│   │       └── tool_executor.go                 # ToolExecutor → tool registry
│   └── domain/
│       ├── agent/        # Core domain (Agent, Task, Message, Hooks, Events)
│       ├── chatting/     # Chat use cases (SendMessage, ClearConversation, GetAgentStats)
│       ├── indexing/     # File indexing (Scan, ChangedSince, DiffSnapshots)
│       ├── memorizing/   # Memory use cases (WriteNote, GetNote, SearchNotes, DeleteNote)
│       ├── openai/       # OpenAI API types (Request, Response, Tool)
│       └── tooling/      # Tool implementations (memory, index)
├── AGENTS.md             # AI agent definitions
├── CONTEXT.md            # Architecture documentation
├── Dockerfile
├── docker-compose.yml
├── README.md             # This file
└── VENDOR.md             # Vendor library documentation
```
---
## Contributing
1. Read [CONTEXT.md](CONTEXT.md) for architecture and conventions
2. Check [VENDOR.md](VENDOR.md) for approved vendor patterns
3. Follow the hexagonal architecture pattern
4. Add tests for new functionality
5. Run `go fmt ./...` and `go vet ./...` before committing
---
## License
[MIT License](LICENSE)