langgraph-go (module) v0.4.0-beta
Published: Nov 18, 2025 License: MIT

LangGraph-Go

A Go-native orchestration framework for building stateful, graph-based LLM and tool workflows.

⚠️ Alpha Software Warning

LangGraph-Go is currently in alpha stage. The API is not yet stable and may change significantly between releases. While the core functionality is complete and tested, we recommend:

  • Not using in production without thorough testing
  • Pinning to specific versions in your go.mod
  • Expecting breaking changes until v1.0.0
  • Reporting issues to help us improve stability

We welcome early adopters and contributors! Please see CONTRIBUTING.md for how to get involved.

Overview

LangGraph-Go enables you to build complex AI agent systems with:

  • Deterministic replay - Resume workflows from any checkpoint
  • Type-safe state - Strongly-typed state management using Go generics
  • Flexible routing - Conditional logic, loops, and parallel execution
  • LLM integration - Unified interface for OpenAI, Anthropic, Google Gemini, AWS Bedrock, and Ollama
  • Production-ready - Built-in persistence, observability, and error handling

Quick Start

package main

import (
    "context"
    "fmt"
    "os"
    "time"

    "github.com/dshills/langgraph-go/graph"
    "github.com/dshills/langgraph-go/graph/emit"
    "github.com/dshills/langgraph-go/graph/store"
)

// Define your state type
type Session struct {
    Query  string
    Answer string
    Steps  int
}

// Create a reducer for state merging
func reduce(prev, delta Session) Session {
    if delta.Query != "" {
        prev.Query = delta.Query
    }
    if delta.Answer != "" {
        prev.Answer = delta.Answer
    }
    prev.Steps += delta.Steps
    return prev
}

func main() {
    // Create nodes
    process := graph.NodeFunc[Session](func(ctx context.Context, s Session) graph.NodeResult[Session] {
        return graph.NodeResult[Session]{
            Delta: Session{Answer: "Processed: " + s.Query, Steps: 1},
            Route: graph.Stop(),
        }
    })

    // Build the workflow with functional options (recommended)
    st := store.NewMemStore[Session]()
    emitter := emit.NewLogEmitter(os.Stdout, false)

    engine := graph.New(
        reduce, st, emitter,
        graph.WithMaxConcurrent(8),
        graph.WithDefaultNodeTimeout(10*time.Second),
    )
    engine.Add("process", process)
    engine.StartAt("process")

    // Execute
    ctx := context.Background()
    final, err := engine.Run(ctx, "run-001", Session{Query: "Hello LangGraph!"})
    if err != nil {
        panic(err)
    }

    fmt.Println(final.Answer) // Output: Processed: Hello LangGraph!
}

Installation

# Install latest version (may include breaking changes)
go get github.com/dshills/langgraph-go

Note: Since this is alpha software, we recommend pinning to a specific version in your go.mod to avoid unexpected breaking changes.
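As a concrete illustration of pinning, you can require an explicit version in go.mod (the version shown is the one listed at the top of this page; substitute whichever release you have tested against):

```
// go.mod
require github.com/dshills/langgraph-go v0.4.0-beta
```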

LLM Provider Support

LangGraph-Go provides unified adapters for multiple LLM providers through the ChatModel interface:

AWS Bedrock

Access Claude and Amazon Nova models through AWS Bedrock:

import (
    "context"
    "log"

    "github.com/dshills/langgraph-go/graph/model"
    "github.com/dshills/langgraph-go/graph/model/bedrock"
)

ctx := context.Background()

config := bedrock.Config{
    Region:          "us-east-1",
    ModelID:         "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    MaxTokens:       4096,
    FallbackRegions: []string{"us-west-2"}, // Multi-region failover
}

adapter, err := bedrock.NewAdapter(ctx, config)
if err != nil {
    log.Fatal(err)
}

messages := []model.Message{
    {Role: model.RoleUser, Content: "Explain quantum computing"},
}

response, err := adapter.Chat(ctx, messages, nil)

Features:

  • Multi-region failover for high availability
  • Streaming responses via ChatStream()
  • Tool calling support
  • Inference profiles for cross-region routing
  • Support for Claude and Amazon Nova model families

See examples/bedrock_nova for complete examples.

Ollama (Local Models)

Run open-source models locally with Ollama:

import (
    "context"
    "log"

    "github.com/dshills/langgraph-go/graph/model"
    "github.com/dshills/langgraph-go/graph/model/ollama"
)

config := ollama.Config{
    Endpoint: "http://localhost:11434", // Default local Ollama endpoint
    Model:    "llama3.2",
}

adapter, err := ollama.NewChatModel(config)
if err != nil {
    log.Fatal(err)
}

ctx := context.Background()
messages := []model.Message{
    {Role: model.RoleUser, Content: "What is Go programming?"},
}

response, err := adapter.Chat(ctx, messages, nil)

Features:

  • Local model execution (no API costs)
  • Support for all Ollama-compatible models
  • Deterministic generation with seed control
  • Tool calling support (model-dependent)
  • Custom temperature, TopP, and token limits

See examples/ollama for complete examples.

Other Supported Providers
  • OpenAI - GPT-4, GPT-3.5-turbo (graph/model/openai)
  • Anthropic - Claude models via direct API (graph/model/anthropic)
  • Google - Gemini models (graph/model/google)

All providers implement the same ChatModel interface, making it easy to switch between providers or use multiple providers in the same workflow.

Core Concepts

Nodes

Nodes are processing units that receive state, perform computation, and return state modifications:

type Node[S any] interface {
    Run(ctx context.Context, state S) NodeResult[S]
}
State & Reducers

State flows through the workflow and is merged using reducer functions:

type Reducer[S any] func(prev S, delta S) S
Routing

Control flow with conditional routing:

// Explicit routing
graph.Goto("next-node")
graph.Stop()

// Conditional edges
engine.Connect("nodeA", "nodeB", func(s Session) bool {
    return s.Confidence > 0.8
})
Persistence

Automatic checkpointing enables workflow resumption:

// Save checkpoint
engine.SaveCheckpoint("after-step-1")

// Resume from checkpoint
engine.LoadCheckpoint("after-step-1")
final, err := engine.Run(ctx, runID, initialState)
Concurrent Execution

New in v0.2.0: Execute independent nodes in parallel with deterministic results:

// Enable concurrent execution with functional options (recommended in v0.3.0)
engine := graph.New(
    reducer, store, emitter,
    graph.WithMaxConcurrent(10),   // Execute up to 10 nodes in parallel
    graph.WithQueueDepth(1000),     // Frontier queue capacity
)

// Or use the Options struct (backward compatible)
opts := graph.Options{
    MaxConcurrentNodes: 10,
    QueueDepth:         1000,
}
engine = graph.New(reducer, store, emitter, opts)

// Graph with parallel branches executes concurrently
// Fan-out: one node spawns multiple parallel branches
engine.Add("start", startNode)
engine.Add("branchA", branchANode) // Executes in parallel
engine.Add("branchB", branchBNode) // Executes in parallel
engine.Add("branchC", branchCNode) // Executes in parallel
engine.Add("merge", mergeNode)     // Results merge deterministically

// Results are deterministic regardless of completion order
final, err := engine.Run(ctx, runID, initialState)

Key Features:

  • ⚡ Performance: 2-5x speedup for workflows with independent nodes
  • 🎯 Deterministic: Same results regardless of execution order
  • 🔄 Replay: Record and replay executions exactly for debugging
  • 🛡️ Backpressure: Automatic queue management prevents resource exhaustion
  • 📊 Observability: Built-in metrics for queue depth and active workers

Deterministic Replay for debugging production issues:

// Original execution automatically recorded
final, err := engine.Run(ctx, "prod-run-123", initialState)

// Load checkpoint from production
checkpoint, _ := store.LoadCheckpointV2(ctx, "prod-run-123", "")

// Replay execution exactly (no external API calls);
// ReplayMode is enabled via Options when constructing the engine
opts := graph.Options{ReplayMode: true}
replayEngine := graph.New(reducer, store, emitter, opts)
replayResult, _ := replayEngine.ReplayRun(ctx, "debug-replay", checkpoint)

// Identical results, full debugging context

See Concurrency Guide and Replay Guide for details.

Architecture

/graph
  ├── engine.go      # Workflow execution engine
  ├── node.go        # Node abstractions
  ├── edge.go        # Edge and predicate types
  ├── state.go       # State management
  ├── store/         # Persistence layer
  │   ├── store.go   # Store interface
  │   ├── memory.go  # In-memory store (for testing)
  │   ├── sqlite.go  # SQLite store (dev/small prod)
  │   ├── mysql.go   # MySQL/Aurora store (production)
  │   └── mysql/     # MySQL implementation details
  ├── emit/          # Event system
  │   ├── emitter.go # Emitter interface
  │   ├── event.go   # Event types
  │   ├── log.go     # Logging emitter
  │   ├── buffered.go # Buffered emitter
  │   └── null.go    # Null emitter
  ├── model/         # LLM integrations
  │   ├── chat.go    # ChatModel interface
  │   ├── openai/    # OpenAI adapter
  │   ├── anthropic/ # Anthropic Claude adapter
  │   ├── google/    # Google Gemini adapter
  │   ├── bedrock/   # AWS Bedrock adapter (Claude, Nova, etc.)
  │   ├── ollama/    # Ollama local models adapter
  │   └── mock.go    # Mock for testing
  └── tool/          # Tool abstractions
      ├── tool.go    # Tool interface
      ├── http.go    # HTTP tool implementation
      └── mock.go    # Mock tool for testing

Features

Core Features
  • ✅ Stateful Execution - Checkpoint and resume workflows
  • ✅ Conditional Routing - Dynamic control flow based on state
  • ✅ Parallel Execution - Fan-out to concurrent nodes
  • ✅ LLM Integration - OpenAI, Anthropic, Google Gemini, AWS Bedrock, Ollama
  • ✅ Tool Support - HTTP tools and custom tool integration
  • ✅ Type Safety - Go generics for compile-time safety
Performance & Reliability (v0.2.0)
  • ✅ Concurrent Execution - Worker pool with deterministic ordering
  • ✅ Deterministic Replay - Record and replay executions exactly
  • ✅ Backpressure Control - Automatic queue management
  • ✅ Retry Policies - Exponential backoff with jitter
Production Hardening (v0.3.0)
  • ✅ SQLite Store - Zero-config persistence for development
  • ✅ Prometheus Metrics - Production monitoring (6 metrics)
  • ✅ Cost Tracking - LLM token and cost accounting
  • ✅ Functional Options - Ergonomic API with backward compatibility
  • ✅ Typed Errors - errors.Is() compatible error handling
  • ✅ Formal Guarantees - Documented determinism and exactly-once semantics
  • ✅ Contract Tests - CI-validated correctness proofs

Examples

See the examples/ directory for complete, runnable examples:

  • sqlite_quickstart/ - ⭐ Start here! Zero-config persistence
  • chatbot/ - Customer support chatbot with intent detection
  • checkpoint/ - Checkpoint and resume workflows
  • routing/ - Conditional routing based on state
  • parallel/ - Parallel execution with fan-out/fan-in
  • llm/ - Multi-provider LLM integration (OpenAI, Anthropic, Google)
  • bedrock_nova/ - AWS Bedrock integration with Amazon Nova models
  • ollama/ - Local LLM integration with Ollama
  • tools/ - Tool calling and integration
  • data-pipeline/ - Data processing pipeline
  • research-pipeline/ - Multi-stage research workflow
  • interactive-workflow/ - Interactive user input workflow
  • tracing/ - Event tracing and observability
  • benchmarks/ - Performance benchmarking

All examples can be built with:

make examples
./build/<example-name>

Documentation

📚 Complete Documentation Index - Organized guide to all documentation

  • Getting Started
  • Core Concepts (v0.2.0)
  • Production Features (v0.3.0)
  • Migration Guides
  • Project Documentation

Development

The project includes a comprehensive Makefile for common development tasks:

# Build the library
make build

# Build all examples
make examples

# Run tests
make test

# Run tests with verbose output
make test-verbose

# Run tests with coverage report
make test-cover

# Run benchmarks
make bench

# Format code
make fmt

# Run go vet
make vet

# Run linter (if golangci-lint is available)
make lint

# Clean build artifacts
make clean

# Install dependencies
make install

# Build everything
make all

# Show all available targets
make help

Or use standard Go commands:

# Run tests
go test ./...

# Run tests with coverage
go test -cover ./...

# Run linter
golangci-lint run

# Build
go build ./...

Testing

LangGraph-Go has comprehensive test coverage including contract tests that validate production guarantees.

Run All Tests
# Run all tests
go test ./...

# Run with race detector (recommended)
go test -race ./...

# Run with coverage
go test -cover -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
Contract Tests

Contract tests validate critical system guarantees:

# Determinism contracts
go test -v -race -run TestReplayMismatchDetection ./graph/
go test -v -race -run TestMergeOrderingWithRandomDelays ./graph/

# Exactly-once semantics
go test -v -race -run TestConcurrentStateUpdates ./graph/
go test -v -race -run TestIdempotencyAcrossStores ./graph/store/

# Backpressure
go test -v -race -run TestBackpressureBlocking ./graph/

# RNG determinism
go test -v -race -run TestRNGDeterminism ./graph/
MySQL Integration Tests

Some tests require MySQL for persistence validation:

# Start MySQL (Docker)
docker run -d -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=testpassword \
  -e MYSQL_DATABASE=langgraph_test \
  mysql:8.0

# Run with MySQL
export TEST_MYSQL_DSN="root:testpassword@tcp(127.0.0.1:3306)/langgraph_test?parseTime=true"
go test -v -race ./graph/...
What Contract Tests Prove
  • Determinism: Same inputs produce same outputs across replays
  • Exactly-Once: State updates happen exactly once, never duplicated or lost
  • Backpressure: System handles overload gracefully without crashes
  • RNG Determinism: Random workflows are replayable for debugging
  • Cross-Store Consistency: All Store implementations uphold the same contracts

See docs/testing-contracts.md for detailed test documentation.

CI/CD

All tests run automatically on push/PR via GitHub Actions:

  • Platforms: Linux, macOS, Windows
  • Go versions: 1.21, 1.22, 1.23
  • Race detector enabled
  • Coverage uploaded to Codecov

License

MIT License

Project Status

Current Version: v0.1.0-alpha

✅ Core Framework Complete - All 5 user stories have been implemented and tested:

  • ✅ US1: Stateful workflow with checkpointing
  • ✅ US2: Conditional routing and dynamic control flow
  • ✅ US3: Parallel execution with fan-out/fan-in
  • ✅ US4: Multi-provider LLM integration
  • ✅ US5: Comprehensive event tracing and observability

Current Phase: Alpha release - API stabilization and community feedback

Roadmap to v1.0.0:

  • Gather feedback from early adopters
  • Stabilize public API based on real-world usage
  • Address any critical bugs or issues
  • Complete API documentation
  • Add more examples and use cases

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for development guidelines including our Test-Driven Development workflow.

Please note that this project is released with a Code of Conduct. By participating in this project you agree to abide by its terms.

Security

For security concerns, please see our Security Policy.

Acknowledgments

This project is inspired by LangGraph by LangChain AI, a powerful framework for building stateful, multi-actor applications with LLMs. LangGraph-Go brings these concepts to the Go ecosystem, redesigned from the ground up to leverage Go's type system, generics, and concurrency primitives.

Key differences from the original LangGraph:

  • Type-safe - Uses Go generics for compile-time type safety
  • Go-native - Idiomatic Go design patterns and error handling
  • Concurrency-first - Built on goroutines and channels for efficient parallelism
  • Minimal dependencies - Pure Go core with optional external adapters

We are grateful to the LangChain AI team for pioneering the graph-based workflow approach for LLM applications. Check out the original Python implementation at https://github.com/langchain-ai/langgraph.

Directories

Path Synopsis
examples
ai_research_assistant command
Package main demonstrates an AI research assistant workflow using LangGraph-Go.
bedrock_nova command
Package main demonstrates AWS Bedrock Nova model integration with LangGraph-Go.
benchmarks command
Package main demonstrates usage of the LangGraph-Go framework.
chatbot command
Package main demonstrates a customer service chatbot with intent routing using LangGraph-Go.
checkpoint command
Package main demonstrates usage of the LangGraph-Go framework.
concurrent_workflow command
Package main demonstrates usage of the LangGraph-Go framework.
data-pipeline command
Package main demonstrates a data processing pipeline with error handling using LangGraph-Go.
human_in_the_loop command
Package main demonstrates usage of the LangGraph-Go framework.
interactive-workflow command
Package main demonstrates an interactive approval workflow using LangGraph-Go.
llm command
Package main demonstrates basic LLM integration with LangGraph-Go.
mcp_server command
Package main implements an MCP server that exposes a weather lookup tool to external LLM applications via the Model Context Protocol.
mcp_server_stateful command
Package main implements an MCP server that demonstrates resource exposure capabilities for stateful workflows.
ollama command
parallel command
Package main demonstrates usage of the LangGraph-Go framework.
prometheus_monitoring command
Package main demonstrates usage of the LangGraph-Go framework.
replay_demo command
Package main demonstrates deterministic replay capabilities of LangGraph-Go.
research-pipeline command
Package main demonstrates usage of the LangGraph-Go framework.
routing command
Package main demonstrates conditional routing and decision-making in LangGraph-Go.
sqlite_quickstart command
Package main demonstrates usage of the LangGraph-Go framework.
tools command
Package main demonstrates usage of the LangGraph-Go framework.
tracing command
Package main demonstrates OpenTelemetry tracing integration with LangGraph-Go.
graph
Package graph provides the core graph execution engine for LangGraph-Go.
emit
Package emit provides event emission and observability for graph execution.
mcp
Package mcp provides MCP (Model Context Protocol) server implementation for LangGraph workflows.
model
Package model provides LLM integration adapters.
model/anthropic
Package anthropic provides ChatModel adapter for Anthropic Claude API.
model/bedrock
Package bedrock provides AWS Bedrock LLM integration for LangGraph-Go.
model/google
Package google provides ChatModel adapter for Google Gemini API.
model/ollama
Package ollama provides ChatModel adapter for Ollama API.
model/openai
Package openai provides ChatModel adapter for OpenAI GPT API.
store
Package store provides persistence implementations for graph state.
tool
Package tool provides tool interfaces for graph nodes.
specs
004-multi-llm-code-review/contracts
Package contracts defines the interfaces and types for the Multi-LLM Code Review workflow.
