swarmgo

v0.0.0-...-9be5384
Published: Oct 3, 2025 License: MIT Imports: 17 Imported by: 0

README


SwarmGo (agents-sdk-go)

SwarmGo is a Go package that allows you to create AI agents capable of interacting, coordinating, and executing tasks. Inspired by OpenAI's Swarm framework, SwarmGo focuses on making agent coordination and execution lightweight, highly controllable, and easily testable.

It achieves this through two primitive abstractions: Agents and handoffs. An Agent encompasses instructions and tools (functions it can execute), and can at any point choose to hand off a conversation to another Agent.

These primitives are powerful enough to express rich dynamics between tools and networks of agents, allowing you to build scalable, real-world solutions while avoiding a steep learning curve.


Why SwarmGo

SwarmGo explores patterns that are lightweight, scalable, and highly customizable by design. It's best suited for situations dealing with a large number of independent capabilities and instructions that are difficult to encode into a single prompt.

SwarmGo runs (almost) entirely on the client and, much like the Chat Completions API, does not store state between calls.
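Because no state is kept between calls, a multi-turn conversation is handled by resending the accumulated history on every call. A minimal sketch, assuming the Run signature shown in the Quick Start below:

```go
// First turn.
messages := []openai.ChatCompletionMessage{
	{Role: "user", Content: "Hello!"},
}
response, err := client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
if err != nil {
	log.Fatalf("Error: %v", err)
}

// Carry the full exchange forward: the client owns the history.
messages = append(messages, response.Messages...)
messages = append(messages, openai.ChatCompletionMessage{
	Role: "user", Content: "Can you elaborate on that?",
})
response, err = client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
```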

Installation

go get github.com/mohan2020coder/swarmgo

Quick Start

Here's a simple example to get you started:

package main

import (
	"context"
	"fmt"
	"log"

	swarmgo "github.com/mohan2020coder/swarmgo"
	llm "github.com/mohan2020coder/swarmgo/llm"
	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := swarmgo.NewSwarm("YOUR_OPENAI_API_KEY", llm.OpenAI)

	agent := &swarmgo.Agent{
		Name:         "Agent",
		Instructions: "You are a helpful assistant.",
		Model:        "gpt-3.5-turbo",
	}

	messages := []openai.ChatCompletionMessage{
		{Role: "user", Content: "Hello!"},
	}

	ctx := context.Background()
	response, err := client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
	if err != nil {
		log.Fatalf("Error: %v", err)
	}

	fmt.Println(response.Messages[len(response.Messages)-1].Content)
}

Usage

Creating an Agent

An Agent represents an AI assistant with specific instructions and functions (tools) it can use.

agent := &swarmgo.Agent{
	Name:         "Agent",
	Instructions: "You are a helpful assistant.",
	Model:        "gpt-4o",
}
  • Name: Identifier for the agent (no spaces or special characters).
  • Instructions: The system prompt or instructions for the agent.
  • Model: The OpenAI model to use (e.g., "gpt-4o").
Running the Agent

To interact with the agent, use the Run method:

messages := []openai.ChatCompletionMessage{
	{Role: "user", Content: "Hello!"},
}

ctx := context.Background()
response, err := client.Run(ctx, agent, messages, nil, "", false, false, 5, true)
if err != nil {
	log.Fatalf("Error: %v", err)
}

fmt.Println(response.Messages[len(response.Messages)-1].Content)
Adding Functions (Tools)

Agents can use functions to perform specific tasks. Functions are defined and then added to an agent.

Defining a Function
func getWeather(args map[string]interface{}, contextVariables map[string]interface{}) swarmgo.Result {
	// Guard the type assertion so a malformed argument cannot panic.
	location, ok := args["location"].(string)
	if !ok {
		return swarmgo.Result{Value: `{"error": "location must be a string"}`}
	}
	// Simulate fetching weather data
	return swarmgo.Result{
		Value: fmt.Sprintf(`{"temp": 67, "unit": "F", "location": "%s"}`, location),
	}
}
Adding the Function to an Agent
agent.Functions = []swarmgo.AgentFunction{
	{
		Name:        "getWeather",
		Description: "Get the current weather in a given location.",
		Parameters: map[string]interface{}{
			"type": "object",
			"properties": map[string]interface{}{
				"location": map[string]interface{}{
					"type":        "string",
					"description": "The city to get the weather for.",
				},
			},
			"required": []interface{}{"location"},
		},
		Function: getWeather,
	},
}
Using Context Variables

Context variables allow you to pass information between function calls and agents.

Using Context Variables in Instructions
func instructions(contextVariables map[string]interface{}) string {
	name, ok := contextVariables["name"].(string)
	if !ok {
		name = "User"
	}
	return fmt.Sprintf("You are a helpful assistant. Greet the user by name (%s).", name)
}

agent.InstructionsFunc = instructions
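Context variables are supplied as the fourth argument to Run (per the Quick Start signature), and any updates made by functions come back on the response. A sketch with an illustrative name value:

```go
contextVariables := map[string]interface{}{"name": "Alice"}

response, err := client.Run(ctx, agent, messages, contextVariables, "", false, false, 5, true)
if err != nil {
	log.Fatalf("Error: %v", err)
}

// Functions may modify context variables; the merged map is returned.
fmt.Println(response.ContextVariables)
```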
Agent Handoff

Agents can hand off conversations to other agents. This is useful for delegating tasks or escalating when an agent is unable to handle a request.

func transferToAnotherAgent(args map[string]interface{}, contextVariables map[string]interface{}) swarmgo.Result {
	anotherAgent := &swarmgo.Agent{
		Name:         "AnotherAgent",
		Instructions: "You are another agent.",
		Model:        "gpt-3.5-turbo",
	}
	return swarmgo.Result{
		Agent: anotherAgent,
		Value: "Transferring to AnotherAgent.",
	}
}
Adding Handoff Functions to Agents
agent.Functions = append(agent.Functions, swarmgo.AgentFunction{
	Name:        "transferToAnotherAgent",
	Description: "Transfer the conversation to AnotherAgent.",
	Parameters: map[string]interface{}{
		"type":       "object",
		"properties": map[string]interface{}{},
	},
	Function: transferToAnotherAgent,
})

Streaming Support

SwarmGo now includes built-in support for streaming responses, allowing real-time processing of AI responses and tool calls. This is particularly useful for long-running operations or when you want to provide immediate feedback to users.

Using Streaming

To use streaming, implement the StreamHandler interface:

type StreamHandler interface {
    OnStart()
    OnToken(token string)
    OnToolCall(toolCall openai.ToolCall)
    OnComplete(message openai.ChatCompletionMessage)
    OnError(err error)
}

A default implementation (DefaultStreamHandler) is provided, but you can create your own handler for custom behavior:

type CustomStreamHandler struct {
    totalTokens int
}

func (h *CustomStreamHandler) OnStart() {
    fmt.Println("Starting stream...")
}

func (h *CustomStreamHandler) OnToken(token string) {
    h.totalTokens++
    fmt.Print(token)
}

func (h *CustomStreamHandler) OnComplete(msg openai.ChatCompletionMessage) {
    fmt.Printf("\nComplete! Total tokens: %d\n", h.totalTokens)
}

func (h *CustomStreamHandler) OnError(err error) {
    fmt.Printf("Error: %v\n", err)
}

func (h *CustomStreamHandler) OnToolCall(tool openai.ToolCall) {
    fmt.Printf("\nUsing tool: %s\n", tool.Function.Name)
}
Streaming Example

Here's an example of using streaming with a file analyzer:

client := swarmgo.NewSwarm("YOUR_OPENAI_API_KEY", llm.OpenAI)

agent := &swarmgo.Agent{
    Name:         "FileAnalyzer",
    Instructions: "You are an assistant that analyzes files.",
    Model:        "gpt-4",
}

handler := &CustomStreamHandler{}
err := client.StreamingResponse(
    context.Background(),
    agent,
    messages,
    nil,
    "",
    handler,
    true,
)
if err != nil {
    log.Fatalf("Streaming error: %v", err)
}

For a complete example of file analysis with streaming, see examples/file_analyzer_stream/main.go.

Concurrent Agent Execution

SwarmGo supports running multiple agents concurrently using the ConcurrentSwarm type. This is particularly useful when you need to parallelize agent tasks or run multiple analyses simultaneously.

// Create a concurrent swarm
cs := swarmgo.NewConcurrentSwarm(apiKey, llm.OpenAI)

// Configure multiple agents
configs := map[string]swarmgo.AgentConfig{
    "agent1": {
        Agent: agent1,
        Messages: []openai.ChatCompletionMessage{
            {Role: openai.ChatMessageRoleUser, Content: "Task 1"},
        },
        MaxTurns: 1,
        ExecuteTools: true,
    },
    "agent2": {
        Agent: agent2,
        Messages: []openai.ChatCompletionMessage{
            {Role: openai.ChatMessageRoleUser, Content: "Task 2"},
        },
        MaxTurns: 1,
        ExecuteTools: true,
    },
}

// Run agents concurrently with a timeout
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()

results := cs.RunConcurrent(ctx, configs)

// Process results
for _, result := range results {
    if result.Error != nil {
        log.Printf("Error in %s: %v\n", result.AgentName, result.Error)
        continue
    }
    // Handle successful response
    fmt.Printf("Result from %s: %s\n", result.AgentName, result.Response)
}

Key features of concurrent execution:

  • Run multiple agents in parallel with independent configurations
  • Context-based timeout and cancellation support
  • Thread-safe result collection
  • Support for both ordered and unordered execution
  • Error handling for individual agent failures
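When result order matters, RunConcurrentOrdered takes an explicit list and returns results in the same order. A sketch reusing the configs map above (signature per the API documentation):

```go
ordered := []struct {
	Name   string
	Config swarmgo.AgentConfig
}{
	{Name: "agent1", Config: configs["agent1"]},
	{Name: "agent2", Config: configs["agent2"]},
}

orderedResults := cs.RunConcurrentOrdered(ctx, ordered)
for _, result := range orderedResults {
	if result.Error != nil {
		log.Printf("Error in %s: %v\n", result.AgentName, result.Error)
	}
}
```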

See the examples/concurrent_analyzer/main.go for a complete example of concurrent code analysis using multiple specialized agents.

Memory Management

SwarmGo includes a built-in memory management system that allows agents to store and recall information across conversations. The memory system supports both short-term and long-term memory, with features for organizing and retrieving memories based on type and context.

// Create a new agent with memory capabilities
agent := swarmgo.NewAgent("MyAgent", "gpt-4", llm.OpenAI)

// Memory is automatically managed for conversations and tool calls
// You can also explicitly store memories:
memory := swarmgo.Memory{
    Content:    "User prefers dark mode",
    Type:       "preference",
    Context:    map[string]interface{}{"setting": "ui"},
    Timestamp:  time.Now(),
    Importance: 0.8,
}
agent.Memory.AddMemory(memory)

// Retrieve recent memories
recentMemories := agent.Memory.GetRecentMemories(5)

// Search specific types of memories
preferences := agent.Memory.SearchMemories("preference", nil)

Key features of the memory system:

  • Automatic Memory Management: Conversations and tool interactions are automatically stored
  • Memory Types: Organize memories by type (conversation, fact, tool_result, etc.)
  • Context Association: Link memories with relevant context
  • Importance Scoring: Assign priority levels to memories (0-1)
  • Memory Search: Search by type and context
  • Persistence: Save and load memories to/from disk
  • Thread Safety: Concurrent memory access is protected
  • Short-term Buffer: Recent memories are kept in a FIFO buffer
  • Long-term Storage: Organized storage by memory type
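The persistence bullet can be sketched with the documented SerializeMemories and LoadMemories methods; the file path here is illustrative:

```go
// Save memories to disk.
data, err := agent.Memory.SerializeMemories()
if err != nil {
	log.Fatalf("serialize: %v", err)
}
if err := os.WriteFile("memories.json", data, 0o644); err != nil {
	log.Fatalf("write: %v", err)
}

// Restore them in a later session.
saved, err := os.ReadFile("memories.json")
if err != nil {
	log.Fatalf("read: %v", err)
}
if err := agent.Memory.LoadMemories(saved); err != nil {
	log.Fatalf("load: %v", err)
}
```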

See the memory_demo example for a complete demonstration of memory capabilities.

LLM Interface

SwarmGo provides a flexible LLM (Large Language Model) interface that supports multiple providers, currently OpenAI and Gemini.

To initialize a new Swarm with a specific provider:

// Initialize with OpenAI
client := swarmgo.NewSwarm("YOUR_API_KEY", llm.OpenAI)

// Initialize with Gemini
client := swarmgo.NewSwarm("YOUR_API_KEY", llm.Gemini)

Workflows

Workflows in SwarmGo provide structured patterns for organizing and coordinating multiple agents. They help manage complex interactions between agents, define communication paths, and establish clear hierarchies or collaboration patterns. Think of workflows as the orchestration layer that determines how your agents work together to accomplish tasks.

Each workflow type serves a different organizational need:

1. Supervisor Workflow

A hierarchical pattern where a supervisor agent oversees and coordinates tasks among worker agents. This is ideal for:

  • Task delegation and monitoring
  • Quality control and oversight
  • Centralized decision making
  • Resource allocation across workers
workflow := swarmgo.NewWorkflow(apiKey, llm.OpenAI, swarmgo.SupervisorWorkflow)

// Add agents to teams
workflow.AddAgentToTeam(supervisorAgent, swarmgo.SupervisorTeam)
workflow.AddAgentToTeam(workerAgent1, swarmgo.WorkerTeam)
workflow.AddAgentToTeam(workerAgent2, swarmgo.WorkerTeam)

// Set supervisor as team leader
workflow.SetTeamLeader(supervisorAgent.Name, swarmgo.SupervisorTeam)

// Connect agents
workflow.ConnectAgents(supervisorAgent.Name, workerAgent1.Name)
workflow.ConnectAgents(supervisorAgent.Name, workerAgent2.Name)
2. Hierarchical Workflow

A tree-like structure where tasks flow from top to bottom through multiple levels. This pattern is best for:

  • Complex task decomposition
  • Specialized agent roles at each level
  • Clear reporting structures
  • Sequential processing pipelines
workflow := swarmgo.NewWorkflow(apiKey, llm.OpenAI, swarmgo.HierarchicalWorkflow)

// Add agents to teams
workflow.AddAgentToTeam(managerAgent, swarmgo.SupervisorTeam)
workflow.AddAgentToTeam(researchAgent, swarmgo.ResearchTeam)
workflow.AddAgentToTeam(analysisAgent, swarmgo.AnalysisTeam)

// Connect agents in hierarchy
workflow.ConnectAgents(managerAgent.Name, researchAgent.Name)
workflow.ConnectAgents(researchAgent.Name, analysisAgent.Name)
3. Collaborative Workflow

A peer-based pattern where agents work together as equals, passing tasks between them as needed. This approach excels at:

  • Team-based problem solving
  • Parallel processing
  • Iterative refinement
  • Dynamic task sharing
workflow := swarmgo.NewWorkflow(apiKey, llm.OpenAI, swarmgo.CollaborativeWorkflow)

// Add agents to document team
workflow.AddAgentToTeam(editor, swarmgo.DocumentTeam)
workflow.AddAgentToTeam(reviewer, swarmgo.DocumentTeam)
workflow.AddAgentToTeam(writer, swarmgo.DocumentTeam)

// Connect agents in collaborative pattern
workflow.ConnectAgents(editor.Name, reviewer.Name)
workflow.ConnectAgents(reviewer.Name, writer.Name)
workflow.ConnectAgents(writer.Name, editor.Name)

Key workflow features:

  • Team Management: Organize agents into functional teams
  • Leadership Roles: Designate team leaders for coordination
  • Flexible Routing: Dynamic task routing between agents
  • Cycle Detection: Built-in cycle detection and handling
  • State Management: Share state between agents in a workflow
  • Error Handling: Robust error handling and recovery

Examples

For more examples, see the examples directory.

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature/YourFeature).
  3. Commit your changes (git commit -am 'Add a new feature').
  4. Push to the branch (git push origin feature/YourFeature).
  5. Open a Pull Request.

License

This project is licensed under the MIT License. See the LICENSE file for details.


Documentation


Constants

This section is empty.

Variables

var (
	// DemoLoop specific errors
	ErrDemoLoopInterrupted  = errors.New("demo loop interrupted by user")
	ErrAgentExecutionFailed = errors.New("agent execution failed")
)
var (
	// Define common errors for better error handling
	ErrNilAgent          = errors.New("agent cannot be nil")
	ErrEmptyMessages     = errors.New("message history cannot be empty")
	ErrLLMClientNotReady = errors.New("LLM client is not initialized")
	ErrInvalidProvider   = errors.New("invalid LLM provider specified")
	ErrNoChoicesInResp   = errors.New("no choices in LLM response")
	ErrMessageTooLong    = errors.New("message exceeds maximum token limit")
)

Functions

func FunctionToDefinition

func FunctionToDefinition(af AgentFunction) llm.Function

FunctionToDefinition converts an AgentFunction to an llm.Function

func ProcessAndPrintResponse

func ProcessAndPrintResponse(response Response)

ProcessAndPrintResponse processes and prints the response from the LLM. It uses different colors for different roles: blue for "assistant" and magenta for "function" or "tool".

func RunDemoLoop

func RunDemoLoop(client *Swarm, agent *Agent)

RunDemoLoop starts an interactive CLI loop for conversing with an agent

func RunDemoLoopWithConfig

func RunDemoLoopWithConfig(client *Swarm, agent *Agent, config *DemoLoopConfig)

RunDemoLoopWithConfig starts an interactive CLI loop with custom configuration

Types

type Agent

type Agent struct {
	Name              string                                               // The name of the agent.
	Model             string                                               // The model identifier.
	Provider          llm.LLMProvider                                      // The LLM provider to use.
	Config            *ClientConfig                                        // Provider-specific configuration.
	Instructions      string                                               // Static instructions for the agent.
	InstructionsFunc  func(contextVariables map[string]interface{}) string // Function to generate dynamic instructions based on context.
	Functions         []AgentFunction                                      // A list of functions the agent can perform.
	Memory            *MemoryStore                                         // Memory store for the agent.
	ParallelToolCalls bool                                                 // Whether to allow parallel tool calls.
}

Agent represents an entity with specific attributes and behaviors.

func NewAgent

func NewAgent(name, model string, provider llm.LLMProvider) *Agent

NewAgent creates a new agent with an initialized memory store

func (*Agent) WithConfig

func (a *Agent) WithConfig(config *ClientConfig) *Agent

WithConfig sets the configuration for the agent

func (*Agent) WithFunctions

func (a *Agent) WithFunctions(functions []AgentFunction) *Agent

WithFunctions sets the functions available to the agent

func (*Agent) WithInstructions

func (a *Agent) WithInstructions(instructions string) *Agent

WithInstructions sets the static instructions for the agent

func (*Agent) WithInstructionsFunc

func (a *Agent) WithInstructionsFunc(f func(map[string]interface{}) string) *Agent

WithInstructionsFunc sets the dynamic instructions function for the agent

func (*Agent) WithParallelToolCalls

func (a *Agent) WithParallelToolCalls(enabled bool) *Agent

WithParallelToolCalls enables or disables parallel tool calls

type AgentConfig

type AgentConfig struct {
	Agent            *Agent
	Messages         []llm.Message
	ContextVariables map[string]interface{}
	ModelOverride    string
	Stream           bool
	Debug            bool
	MaxTurns         int
	ExecuteTools     bool
}

AgentConfig holds the configuration for a single agent execution

type AgentFunction

type AgentFunction struct {
	Name        string                                                                            // The name of the function.
	Description string                                                                            // Description of what the function does.
	Parameters  map[string]interface{}                                                            // Parameters for the function.
	Function    func(args map[string]interface{}, contextVariables map[string]interface{}) Result // The actual function implementation.
}

AgentFunction represents a function that can be performed by an agent

type AgentSpec

type AgentSpec struct {
	Name         string   `json:"name"`
	Role         string   `json:"role"`
	Instructions string   `json:"instructions"`
	Model        string   `json:"model"`
	Connections  []string `json:"connections"`
}

AgentSpec represents a specification for an agent

type ClientConfig

type ClientConfig struct {
	Provider           llm.LLMProvider
	AuthToken          string
	BaseURL            string
	OrgID              string
	APIVersion         string
	AssistantVersion   string
	ModelMapperFunc    func(model string) string // replace model to provider-specific deployment name
	HTTPClient         *http.Client
	EmptyMessagesLimit uint
	Options            map[string]interface{} // Additional provider-specific options
}

ClientConfig represents the configuration for an LLM client

type ConcurrentResult

type ConcurrentResult struct {
	AgentName string
	Response  Response
	Error     error
}

ConcurrentResult represents the result from a single agent's execution

type ConcurrentSwarm

type ConcurrentSwarm struct {
	*Swarm
}

ConcurrentSwarm manages concurrent execution of multiple agents

func NewConcurrentSwarm

func NewConcurrentSwarm(apiKey string, provider llm.LLMProvider) *ConcurrentSwarm

NewConcurrentSwarm creates a new ConcurrentSwarm instance

func (*ConcurrentSwarm) RunConcurrent

func (cs *ConcurrentSwarm) RunConcurrent(ctx context.Context, configs map[string]AgentConfig) []ConcurrentResult

RunConcurrent executes multiple agents concurrently and returns their results

func (*ConcurrentSwarm) RunConcurrentOrdered

func (cs *ConcurrentSwarm) RunConcurrentOrdered(ctx context.Context, orderedConfigs []struct {
	Name   string
	Config AgentConfig
}) []ConcurrentResult

RunConcurrentOrdered executes multiple agents concurrently and returns their results in the order specified

type ConditionFunc

type ConditionFunc func(state GraphState) (NodeID, error)

ConditionFunc determines which edge to follow from a node

type Config

type Config struct {
	MaxRetries        int
	RetryBackoff      time.Duration
	RequestTimeout    time.Duration
	MaxTokens         int
	DefaultModel      string
	Debug             bool
	LogLevel          LogLevel
	TokenLimits       map[string]int // Model-specific token limits
	FailureHandlers   []FailureHandler
	RateLimitStrategy RateLimitStrategy
}

Config holds configuration options for Swarm

func DefaultConfig

func DefaultConfig() *Config

DefaultConfig returns default configuration values

type CycleHandling

type CycleHandling int

CycleHandling represents how to handle detected cycles

const (
	StopOnCycle CycleHandling = iota
	ContinueOnCycle
)

type DataFlowSpec

type DataFlowSpec struct {
	From        string `json:"from"`
	To          string `json:"to"`
	Description string `json:"description"`
}

DataFlowSpec represents a data flow between agents

type DefaultStreamHandler

type DefaultStreamHandler struct{}

DefaultStreamHandler provides a basic implementation of StreamHandler

func (*DefaultStreamHandler) OnComplete

func (h *DefaultStreamHandler) OnComplete(message llm.Message)

func (*DefaultStreamHandler) OnError

func (h *DefaultStreamHandler) OnError(err error)

func (*DefaultStreamHandler) OnStart

func (h *DefaultStreamHandler) OnStart()

func (*DefaultStreamHandler) OnToken

func (h *DefaultStreamHandler) OnToken(token string)

func (*DefaultStreamHandler) OnToolCall

func (h *DefaultStreamHandler) OnToolCall(toolCall llm.ToolCall)

type DemoLoopConfig

type DemoLoopConfig struct {
	Timeout             time.Duration // Timeout for each agent execution
	MaxHistoryMessages  int           // Maximum number of messages to keep in history
	MaxInputLength      int           // Maximum length of user input
	ShowFunctionResults bool          // Whether to display function results
	ColorOutput         bool          // Whether to use color in output
	Debug               bool          // Whether to show debug information
	SaveHistory         bool          // Whether to save conversation history to file
	HistoryFile         string        // Path to file for saving history
}

DemoLoopConfig contains configuration options for the demo loop

func DefaultDemoLoopConfig

func DefaultDemoLoopConfig() *DemoLoopConfig

DefaultDemoLoopConfig returns default configuration for demo loop

type DynamicWorkflowCreator

type DynamicWorkflowCreator struct {
	// contains filtered or unexported fields
}

DynamicWorkflowCreator helps construct workflows dynamically based on user tasks

func NewDynamicWorkflowCreator

func NewDynamicWorkflowCreator(apiKey string, provider llm.LLMProvider) *DynamicWorkflowCreator

NewDynamicWorkflowCreator creates a new workflow creator

func (*DynamicWorkflowCreator) BuildWorkflow

func (dwc *DynamicWorkflowCreator) BuildWorkflow(spec *WorkflowSpec) (*Workflow, error)

BuildWorkflow creates a concrete Workflow instance from a WorkflowSpec

func (*DynamicWorkflowCreator) CreateAndExecuteWorkflow

func (dwc *DynamicWorkflowCreator) CreateAndExecuteWorkflow(ctx context.Context, userTask string) (*WorkflowResult, error)

CreateAndExecuteWorkflow is a convenience method to create and execute a workflow in one step

func (*DynamicWorkflowCreator) CreateWorkflowFromTask

func (dwc *DynamicWorkflowCreator) CreateWorkflowFromTask(ctx context.Context, userTask string) (*WorkflowSpec, error)

CreateWorkflowFromTask generates a workflow specification based on a user task

func (*DynamicWorkflowCreator) RegisterBaseAgent

func (dwc *DynamicWorkflowCreator) RegisterBaseAgent(name string, agent *Agent)

RegisterBaseAgent adds a pre-defined agent template

type Edge

type Edge struct {
	From      NodeID
	To        NodeID
	Type      EdgeType
	Condition ConditionFunc // For conditional edges
	Metadata  map[string]interface{}
}

Edge represents a connection between nodes

type EdgeType

type EdgeType string

EdgeType represents the type of relationship between nodes

const (
	// Edge types
	StandardEdge  EdgeType = "standard"  // Regular flow between nodes
	ConditionEdge EdgeType = "condition" // Flow based on condition
	FallbackEdge  EdgeType = "fallback"  // Used when other edges fail
	CallbackEdge  EdgeType = "callback"  // Used for callbacks
)

type FailureHandler

type FailureHandler func(error) (bool, error)

FailureHandler defines a function to handle specific failures

type Graph

type Graph struct {
	ID          string
	Name        string
	Description string
	Nodes       map[NodeID]*Node
	Edges       map[NodeID][]Edge
	EntryPoint  NodeID
	ExitPoints  []NodeID // Optional exit points
	// contains filtered or unexported fields
}

Graph represents the workflow graph

func NewGraph

func NewGraph(name string, description string) *Graph

NewGraph creates a new workflow graph

func (*Graph) AddAgentNode

func (g *Graph) AddAgentNode(id NodeID, name string, agent *Agent) *Node

AddAgentNode adds a node with an associated agent

func (*Graph) AddConditionalEdge

func (g *Graph) AddConditionalEdge(from NodeID, to NodeID, condition ConditionFunc) error

AddConditionalEdge adds an edge with a condition

func (*Graph) AddDirectedEdge

func (g *Graph) AddDirectedEdge(from NodeID, to NodeID) error

AddDirectedEdge adds a simple directed edge between nodes

func (*Graph) AddEventHook

func (g *Graph) AddEventHook(event string, hook func(state GraphState))

AddEventHook adds a hook for graph events

func (*Graph) AddExitPoint

func (g *Graph) AddExitPoint(nodeID NodeID) error

AddExitPoint adds an exit point to the graph

func (*Graph) AddNode

func (g *Graph) AddNode(id NodeID, name string, process NodeFunc) *Node

AddNode adds a node to the graph

func (*Graph) ExecuteGraph

func (g *Graph) ExecuteGraph(ctx context.Context, initialState GraphState) (GraphState, error)

ExecuteGraph runs the workflow graph from the entry point

func (*Graph) SetEntryPoint

func (g *Graph) SetEntryPoint(nodeID NodeID) error

SetEntryPoint sets the entry point for the graph

type GraphBuilder

type GraphBuilder struct {
	// contains filtered or unexported fields
}

GraphBuilder provides a fluent interface for building graphs

func NewGraphBuilder

func NewGraphBuilder(name, description string) *GraphBuilder

NewGraphBuilder creates a new graph builder

func (*GraphBuilder) Build

func (b *GraphBuilder) Build() *Graph

Build returns the constructed graph

func (*GraphBuilder) WithAgent

func (b *GraphBuilder) WithAgent(id NodeID, name string, agent *Agent) *GraphBuilder

WithAgent adds an agent node to the graph

func (*GraphBuilder) WithConditionalEdge

func (b *GraphBuilder) WithConditionalEdge(from NodeID, to NodeID, condition ConditionFunc) *GraphBuilder

WithConditionalEdge adds a conditional edge between nodes

func (*GraphBuilder) WithEdge

func (b *GraphBuilder) WithEdge(from NodeID, to NodeID) *GraphBuilder

WithEdge adds a directed edge between nodes

func (*GraphBuilder) WithEntryPoint

func (b *GraphBuilder) WithEntryPoint(nodeID NodeID) *GraphBuilder

WithEntryPoint sets the entry point for the graph

func (*GraphBuilder) WithExitPoint

func (b *GraphBuilder) WithExitPoint(nodeID NodeID) *GraphBuilder

WithExitPoint adds an exit point to the graph

func (*GraphBuilder) WithNode

func (b *GraphBuilder) WithNode(id NodeID, name string, process NodeFunc) *GraphBuilder

WithNode adds a generic node to the graph

type GraphRunner

type GraphRunner struct {
	// contains filtered or unexported fields
}

GraphRunner handles the execution of workflow graphs

func NewGraphRunner

func NewGraphRunner() *GraphRunner

NewGraphRunner creates a new graph runner

func (*GraphRunner) ExecuteGraph

func (r *GraphRunner) ExecuteGraph(ctx context.Context, graphID string, initialState GraphState) (GraphState, error)

ExecuteGraph runs a graph with the given initial state

func (*GraphRunner) RegisterGraph

func (r *GraphRunner) RegisterGraph(graph *Graph)

RegisterGraph adds a graph to the runner
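Putting the builder, a conditional edge, and the runner together, a sketch (the node IDs, the workerAgent variable, and the routing condition are illustrative):

```go
// Route to the exit node once the state is marked done.
routeFn := func(state swarmgo.GraphState) (swarmgo.NodeID, error) {
	if done, ok := state.GetBool("done"); ok && done {
		return "exit", nil
	}
	return "worker", nil
}

graph := swarmgo.NewGraphBuilder("pipeline", "worker with conditional exit").
	WithAgent("worker", "Worker", workerAgent).
	WithNode("exit", "Exit", func(ctx context.Context, state swarmgo.GraphState) (swarmgo.GraphState, error) {
		return state, nil
	}).
	WithConditionalEdge("worker", "exit", routeFn).
	WithEntryPoint("worker").
	WithExitPoint("exit").
	Build()

runner := swarmgo.NewGraphRunner()
runner.RegisterGraph(graph)
finalState, err := runner.ExecuteGraph(context.Background(), graph.ID, swarmgo.GraphState{})
```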

type GraphState

type GraphState map[StateKey]interface{}

GraphState represents the current state of the workflow

func (GraphState) Clone

func (s GraphState) Clone() GraphState

Clone creates a deep copy of the state

func (GraphState) Get

func (s GraphState) Get(key StateKey) interface{}

Get retrieves a value from state, with type assertion

func (GraphState) GetBool

func (s GraphState) GetBool(key StateKey) (bool, bool)

GetBool gets a boolean value from state

func (GraphState) GetString

func (s GraphState) GetString(key StateKey) (string, bool)

GetString gets a string value from state

func (GraphState) UpdateState

func (s GraphState) UpdateState(updates GraphState)

UpdateState updates the state with new values
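The state helpers compose as follows; the keys and values are illustrative:

```go
state := swarmgo.GraphState{}
state.UpdateState(swarmgo.GraphState{"step": "analyze", "done": false})

if step, ok := state.GetString("step"); ok {
	fmt.Println("current step:", step)
}

// Clone before mutating inside a node so the caller's map is untouched.
next := state.Clone()
next["done"] = true
```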

type LogLevel

type LogLevel int

LogLevel represents the level of logging

const (
	LogSilent LogLevel = iota
	LogError
	LogWarning
	LogInfo
	LogDebug
	LogTrace
)

type Memory

type Memory struct {
	Content    string                 `json:"content"`    // The actual memory content
	Type       string                 `json:"type"`       // Type of memory (e.g., "conversation", "fact", "task")
	Context    map[string]interface{} `json:"context"`    // Associated context
	Timestamp  time.Time              `json:"timestamp"`  // When the memory was created
	Importance float64                `json:"importance"` // Importance score (0-1)
	References []string               `json:"references"` // References to related memories
}

Memory represents a single memory entry

type MemoryStore

type MemoryStore struct {
	// contains filtered or unexported fields
}

MemoryStore manages agent memories

func NewMemoryStore

func NewMemoryStore(maxShortTerm int) *MemoryStore

NewMemoryStore creates a new memory store with default settings

func (*MemoryStore) AddMemory

func (ms *MemoryStore) AddMemory(memory Memory)

AddMemory adds a new memory to both short and long-term storage

func (*MemoryStore) GetRecentMemories

func (ms *MemoryStore) GetRecentMemories(n int) []Memory

GetRecentMemories retrieves the n most recent memories

func (*MemoryStore) LoadMemories

func (ms *MemoryStore) LoadMemories(data []byte) error

LoadMemories loads memories from JSON data

func (*MemoryStore) SearchMemories

func (ms *MemoryStore) SearchMemories(memoryType string, context map[string]interface{}) []Memory

SearchMemories searches for memories based on type and context

func (*MemoryStore) SerializeMemories

func (ms *MemoryStore) SerializeMemories() ([]byte, error)

SerializeMemories serializes all memories to JSON

type Node

type Node struct {
	ID          NodeID
	Name        string
	Description string
	Process     NodeFunc
	Agent       *Agent // Optional agent associated with this node
	Metadata    map[string]interface{}
}

Node represents a node in the workflow graph

func CreateAgentNode

func CreateAgentNode(g *Graph, id NodeID, name string, instructions string, model string, functions []AgentFunction, provider llm.LLMProvider) *Node

CreateAgentNode is a helper function to create common agent node types

func CreateHumanInputNode

func CreateHumanInputNode(g *Graph, id NodeID, prompt string) *Node

CreateHumanInputNode creates a node that collects input from a human

func CreateParallelNode

func CreateParallelNode(g *Graph, id NodeID, parallelProcesses []NodeFunc) *Node

CreateParallelNode creates a node that processes tasks in parallel

func CreateRouterNode

func CreateRouterNode(g *Graph, id NodeID, destinations map[string]NodeID) *Node

CreateRouterNode creates a node that routes to different destinations based on content

type NodeFunc

type NodeFunc func(ctx context.Context, state GraphState) (GraphState, error)

NodeFunc is a function that processes state and returns updates

type NodeID

type NodeID string

NodeID represents a unique identifier for a node in the workflow graph

type RateLimitStrategy

type RateLimitStrategy int

RateLimitStrategy defines how rate limits are handled

const (
	RateLimitRetry RateLimitStrategy = iota
	RateLimitFail
	RateLimitQueue
)

type Response

type Response struct {
	Messages         []llm.Message
	Agent            *Agent
	ContextVariables map[string]interface{}
	ToolResults      []ToolResult // Results from tool calls
}

Response represents the response from an agent

type Result

type Result struct {
	Success bool        // Whether the function execution was successful
	Data    interface{} // Any data returned by the function
	Error   error       // Any error that occurred during execution
	Agent   *Agent      // Active agent
}

Result represents the result of a function execution

type StateKey

type StateKey string

StateKey represents a key in the state map

const MessageKey StateKey = "messages"

MessageKey is the default key for storing messages in state

type StepResult

type StepResult struct {
	AgentName  string
	Input      []llm.Message
	Output     []llm.Message
	Error      error
	StartTime  time.Time
	EndTime    time.Time
	NextAgent  string
	StepNumber int
}

StepResult represents the outcome of a single workflow step

type StreamHandler

type StreamHandler interface {
	OnStart()
	OnToken(token string)
	OnToolCall(toolCall llm.ToolCall)
	OnComplete(message llm.Message)
	OnError(err error)
}

StreamHandler represents a handler for streaming responses

type Swarm

type Swarm struct {
	// contains filtered or unexported fields
}

Swarm is the main structure that manages agents and their interactions with the underlying LLM client

func NewSwarm

func NewSwarm(apiKey string, provider llm.LLMProvider) *Swarm

NewSwarm initializes a new Swarm instance with an LLM client

func NewSwarmWithConfig

func NewSwarmWithConfig(apiKey string, provider llm.LLMProvider, config *Config) *Swarm

NewSwarmWithConfig initializes a new Swarm with custom configuration

func NewSwarmWithCustomProvider

func NewSwarmWithCustomProvider(providerImpl llm.LLM, config *Config) *Swarm

NewSwarmWithCustomProvider creates a Swarm with a custom LLM provider implementation

func NewSwarmWithHost

func NewSwarmWithHost(apiKey, host string, provider llm.LLMProvider) *Swarm

NewSwarmWithHost creates a Swarm with a custom host

func (*Swarm) IsInitialized

func (s *Swarm) IsInitialized() bool

IsInitialized returns whether the Swarm is properly initialized

func (*Swarm) Run

func (s *Swarm) Run(
	ctx context.Context,
	agent *Agent,
	messages []llm.Message,
	contextVariables map[string]interface{},
	modelOverride string,
	stream bool,
	debug bool,
	maxTurns int,
	executeTools bool,
) (Response, error)

Run is the main entry point for agent execution

func (*Swarm) SetTokenCounter

func (s *Swarm) SetTokenCounter(counter func(string) int)

SetTokenCounter sets a function to count tokens in messages

func (*Swarm) StreamingResponse

func (s *Swarm) StreamingResponse(
	ctx context.Context,
	agent *Agent,
	messages []llm.Message,
	contextVariables map[string]interface{},
	modelOverride string,
	handler StreamHandler,
	debug bool,
) error

StreamingResponse handles streaming chat completions

func (*Swarm) ValidateConnection

func (s *Swarm) ValidateConnection(ctx context.Context) error

ValidateConnection tests the LLM connection with a simple request

type TeamType

type TeamType string

TeamType represents a type of agent team

const (
	ResearchTeam   TeamType = "research"
	DocumentTeam   TeamType = "document"
	SupervisorTeam TeamType = "supervisor"
	AnalysisTeam   TeamType = "analysis"
	DeveloperTeam  TeamType = "developer"
)

type ToolResult

type ToolResult struct {
	ToolName string      // Name of the tool that was called
	Args     interface{} // Arguments passed to the tool
	Result   Result      // Result returned by the tool
}

ToolResult represents the result of a tool call

type Workflow

type Workflow struct {
	// contains filtered or unexported fields
}

Workflow represents a collection of agents and their connections.

func NewWorkflow

func NewWorkflow(apikey string, provider llm.LLMProvider, workflowType WorkflowType) *Workflow

NewWorkflow initializes a new Workflow instance.

func (*Workflow) AddAgent

func (wf *Workflow) AddAgent(agent *Agent)

AddAgent adds an agent to the workflow.

func (*Workflow) AddAgentToTeam

func (wf *Workflow) AddAgentToTeam(agent *Agent, team TeamType)

AddAgentToTeam adds an agent to a specific team

func (*Workflow) ConnectAgents

func (wf *Workflow) ConnectAgents(fromAgent, toAgent string) error

ConnectAgents creates a connection between two agents.

func (*Workflow) Execute

func (wf *Workflow) Execute(startAgent string, userRequest string) (*WorkflowResult, error)

Execute runs the workflow and returns detailed results including step outcomes

func (*Workflow) GetAgents

func (wf *Workflow) GetAgents() map[string]*Agent

GetAgents returns all agents in the workflow

func (*Workflow) GetAllStepResults

func (wf *Workflow) GetAllStepResults() []StepResult

GetAllStepResults returns all step results

func (*Workflow) GetConnections

func (wf *Workflow) GetConnections() map[string][]string

GetConnections returns all connections in the workflow

func (*Workflow) GetCurrentAgent

func (wf *Workflow) GetCurrentAgent() string

GetCurrentAgent returns the currently active agent

func (*Workflow) GetLastStepResult

func (wf *Workflow) GetLastStepResult() (*StepResult, error)

GetLastStepResult returns the result of the last executed step

func (*Workflow) GetRoutingLog

func (wf *Workflow) GetRoutingLog() []string

GetRoutingLog returns the routing history

func (*Workflow) GetStepResult

func (wf *Workflow) GetStepResult(stepNumber int) (*StepResult, error)

GetStepResult returns the result of a specific step

func (*Workflow) GetTeamLeaders

func (wf *Workflow) GetTeamLeaders() map[TeamType]string

GetTeamLeaders returns all team leaders in the workflow

func (*Workflow) GetTeams

func (wf *Workflow) GetTeams() map[TeamType][]*Agent

GetTeams returns all teams in the workflow

func (*Workflow) SetCycleCallback

func (wf *Workflow) SetCycleCallback(callback func(from, to string) (bool, error))

SetCycleCallback sets a callback function to be called when a cycle is detected

func (*Workflow) SetCycleHandling

func (wf *Workflow) SetCycleHandling(handling CycleHandling)

SetCycleHandling sets how cycles should be handled

func (*Workflow) SetTeamLeader

func (wf *Workflow) SetTeamLeader(agentName string, team TeamType) error

SetTeamLeader designates an agent as the leader of a team

type WorkflowResult

type WorkflowResult struct {
	Steps       []StepResult
	FinalOutput []llm.Message
	Error       error
	StartTime   time.Time
	EndTime     time.Time
}

WorkflowResult represents the complete workflow execution result

type WorkflowSpec

type WorkflowSpec struct {
	MainGoal     string         `json:"mainGoal"`
	WorkflowType string         `json:"workflowType"`
	Agents       []AgentSpec    `json:"agents"`
	DataFlow     []DataFlowSpec `json:"dataFlow"`
	EntryPoint   string         `json:"entryPoint"`
}

WorkflowSpec represents the specification for a dynamic workflow

type WorkflowType

type WorkflowType int

WorkflowType defines the type of agent interaction pattern

const (
	CollaborativeWorkflow WorkflowType = iota
	SupervisorWorkflow
	HierarchicalWorkflow
)

Directories

Path Synopsis
examples
agenthandoff command
basic command
collaborative command
customproivder command
deepseekagent command
dynamicworkflow command
fileanalyzer2 command
functioncall command
gemini command
hierarchical command
memory_demo command
ollama command
openrouter command
personalshopper command
sdlc-agents command
supervisor command
taskmanager command
toolresults command
triageagent command
weatheragent command
