langgraphgo

module
v0.1.0
Published: Dec 1, 2025 License: MIT

README

๐Ÿฆœ๏ธ๐Ÿ”— LangGraphGo


English | 简体中文

🔀 Forked from paulnegz/langgraphgo - Enhanced with streaming, visualization, observability, and production-ready features.

This fork aims for feature parity with the Python LangGraph library, adding support for parallel execution, persistence, advanced state management, pre-built agents, and human-in-the-loop workflows.

📦 Installation

go get github.com/smallnest/langgraphgo

🚀 Features

  • Core Runtime:

    • Parallel Execution: Concurrent node execution (fan-out) with thread-safe state merging.
    • Runtime Configuration: Propagate callbacks, tags, and metadata via RunnableConfig.
    • LangChain Compatible: Works seamlessly with langchaingo.
  • Persistence & Reliability:

    • Checkpointers: Redis, Postgres, and SQLite implementations for durable state.
    • State Recovery: Pause and resume execution from checkpoints.
  • Advanced Capabilities:

    • State Schema: Granular state updates with custom reducers (e.g., AppendReducer).
    • Smart Messages: Intelligent message merging with ID-based upserts (AddMessages).
    • Command API: Dynamic control flow and state updates directly from nodes.
    • Ephemeral Channels: Temporary state values that clear automatically after each step.
    • Subgraphs: Compose complex agents by nesting graphs within graphs.
    • Enhanced Streaming: Real-time event streaming with multiple modes (updates, values, messages).
    • Pre-built Agents: Ready-to-use ReAct and Supervisor agent factories.
  • Developer Experience:

    • Visualization: Export graphs to Mermaid, DOT, and ASCII with conditional edge support.
    • Human-in-the-loop (HITL): Interrupt execution, inspect state, edit history (UpdateState), and resume.
    • Observability: Built-in tracing and metrics support.
    • Tools: Integrated Tavily and Exa search tools.
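The "Smart Messages" behavior listed above (ID-based upserts via AddMessages) can be sketched library-independently. The Message type and mergeMessages helper below are hypothetical illustrations of the idea, not the library's actual API:

```go
package main

import "fmt"

// Message is a hypothetical stand-in for a chat message with a stable ID.
type Message struct {
	ID      string
	Role    string
	Content string
}

// mergeMessages appends new messages, but a message whose ID already
// exists replaces (upserts) the earlier entry instead of duplicating it.
func mergeMessages(existing, updates []Message) []Message {
	merged := append([]Message(nil), existing...)
	index := make(map[string]int, len(merged))
	for i, m := range merged {
		index[m.ID] = i
	}
	for _, m := range updates {
		if i, ok := index[m.ID]; ok {
			merged[i] = m // upsert: same ID overwrites in place
		} else {
			index[m.ID] = len(merged)
			merged = append(merged, m)
		}
	}
	return merged
}

func main() {
	state := []Message{{ID: "1", Role: "human", Content: "Hi"}}
	state = mergeMessages(state, []Message{
		{ID: "1", Role: "human", Content: "Hi (edited)"}, // same ID: replace
		{ID: "2", Role: "ai", Content: "Hello!"},         // new ID: append
	})
	fmt.Println(len(state), state[0].Content) // 2 Hi (edited)
}
```

The upsert rule is what lets a node re-emit a corrected message without the history growing a duplicate entry.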

🎯 Quick Start

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/smallnest/langgraphgo/graph"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	ctx := context.Background()
	model, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	// 1. Create Graph
	g := graph.NewMessageGraph()

	// 2. Add Nodes
	g.AddNode("generate", func(ctx context.Context, state interface{}) (interface{}, error) {
		messages := state.([]llms.MessageContent)
		response, err := model.GenerateContent(ctx, messages)
		if err != nil {
			return nil, err
		}
		return append(messages, llms.TextParts("ai", response.Choices[0].Content)), nil
	})

	// 3. Define Edges
	g.AddEdge("generate", graph.END)
	g.SetEntryPoint("generate")

	// 4. Compile
	runnable, err := g.Compile()
	if err != nil {
		log.Fatal(err)
	}

	// 5. Invoke
	initialState := []llms.MessageContent{
		llms.TextParts("human", "Hello, LangGraphGo!"),
	}
	result, err := runnable.Invoke(ctx, initialState)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(result)
}

📚 Examples

See the examples directory for runnable programs covering each feature.

🔧 Key Concepts

Parallel Execution

LangGraphGo automatically executes nodes in parallel when they share the same starting node. Results are merged using the graph's state merger or schema.

g.AddEdge("start", "branch_a")
g.AddEdge("start", "branch_b")
// branch_a and branch_b run concurrently
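Conceptually, this fan-out behaves like the following library-independent sketch: each branch runs in its own goroutine and its result is merged into shared state under a lock. The runBranches helper is an illustration of the mechanism, not the library's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// runBranches executes every branch concurrently and merges the results
// into a single state map under a mutex (thread-safe state merging).
func runBranches(branches map[string]func() string) map[string]string {
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		state = make(map[string]string)
	)
	for name, fn := range branches {
		wg.Add(1)
		go func(name string, fn func() string) {
			defer wg.Done()
			result := fn() // node work runs outside the lock
			mu.Lock()
			state[name] = result // merge happens under the lock
			mu.Unlock()
		}(name, fn)
	}
	wg.Wait()
	return state
}

func main() {
	state := runBranches(map[string]func() string{
		"branch_a": func() string { return "result A" },
		"branch_b": func() string { return "result B" },
	})
	fmt.Println(state["branch_a"], state["branch_b"]) // result A result B
}
```

Because each branch only holds the lock while writing its result, branches do real work in parallel while the merged state stays consistent.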

Human-in-the-loop (HITL)

Pause execution to allow for human approval or input.

config := &graph.Config{
    InterruptBefore: []string{"human_review"},
}

// Execution stops before "human_review" node
state, err := runnable.InvokeWithConfig(ctx, input, config)

// Resume execution
resumeConfig := &graph.Config{
    ResumeFrom: []string{"human_review"},
}
runnable.InvokeWithConfig(ctx, state, resumeConfig)
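Pausing and resuming like this depends on persisted state snapshots. A minimal in-memory sketch of what a checkpointer does (the MemorySaver type here is hypothetical; the library ships Redis, Postgres, and SQLite checkpointers for durable storage):

```go
package main

import "fmt"

// MemorySaver is a hypothetical in-memory checkpointer: it keeps one
// state snapshot per thread ID so execution can pause and later resume.
type MemorySaver struct {
	snapshots map[string]map[string]any
}

func NewMemorySaver() *MemorySaver {
	return &MemorySaver{snapshots: map[string]map[string]any{}}
}

// Save records a copy of the state for a thread before an interrupt.
func (s *MemorySaver) Save(threadID string, state map[string]any) {
	copied := make(map[string]any, len(state))
	for k, v := range state {
		copied[k] = v
	}
	s.snapshots[threadID] = copied
}

// Load restores the last saved state so execution can resume from it.
func (s *MemorySaver) Load(threadID string) (map[string]any, bool) {
	state, ok := s.snapshots[threadID]
	return state, ok
}

func main() {
	saver := NewMemorySaver()
	saver.Save("thread-1", map[string]any{"step": "human_review", "draft": "v1"})

	// ...later, after the human has approved...
	state, ok := saver.Load("thread-1")
	fmt.Println(ok, state["step"]) // true human_review
}
```

A durable backend replaces the in-process map with external storage, which is what makes state recovery survive a crash or restart.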

Pre-built Agents

Quickly create complex agents using factory functions.

// Create a ReAct agent
agent, err := prebuilt.CreateReactAgent(model, tools)

// Create a Supervisor agent
supervisor, err := prebuilt.CreateSupervisor(model, agents)

🎨 Graph Visualization

exporter := runnable.GetGraph()
fmt.Println(exporter.DrawMermaid()) // Generates Mermaid flowchart

📈 Performance

  • Graph Operations: ~14-94μs depending on format
  • Tracing Overhead: ~4μs per execution
  • Event Processing: 1000+ events/second
  • Streaming Latency: <100ms

🧪 Testing

go test ./... -v

๐Ÿค Contributing

This project is open for contributions! Please check TASKS.md for the roadmap and TODOs.md for specific items.

📄 License

MIT License - see original repository for details.

Directories

Path Synopsis
checkpoint
examples
basic_example command
basic_llm command
checkpointing command
command_api command
configuration command
custom_reducer command
listeners command
memory_basic command
memory_chatbot command
rag_advanced command
rag_basic command
rag_conditional command
rag_pipeline command
react_agent command
smart_messages command
state_schema command
streaming_modes command
subgraph command
subgraphs command
supervisor command
swarm command
time_travel command
tool_exa command
tool_tavily command
visualization command
showcases
