tool

package v0.0.0-...-7871f83
Published: Dec 23, 2025 License: Apache-2.0 Imports: 18 Imported by: 0

Documentation


Package tool provides abstractions for defining and executing tools that agents can use to perform actions and retrieve information.

Overview

Tools are functions or objects that agents can invoke to interact with external systems, perform computations, or retrieve data. The tool package provides:

  • Automatic JSON schema generation for parameters
  • Type-safe function wrapping
  • Structured tool interfaces

Quick Start

Create a tool from a function:

import (
	"context"
	"github.com/hupe1980/agentmesh/pkg/tool"
)

weatherTool, err := tool.NewFuncTool(
	"get_weather",
	"Get current weather for a given location",
	func(ctx context.Context, args struct {
		Location string `json:"location" description:"City name"`
		Units    string `json:"units,omitempty" description:"Temperature units (celsius/fahrenheit)"`
	}) (any, error) {
		// Implementation...
		return map[string]any{
			"temperature": 72,
			"conditions":  "Sunny",
			"location":    args.Location,
		}, nil
	},
)

Tool Interface

Implement the Interface to create custom tools:

type Interface interface {
	Name() string
	Description() string
	JSONSchema() (map[string]any, error)
	Run(ctx context.Context, input string) (any, error)
}

type CustomTool struct{}

func (t *CustomTool) Name() string {
	return "custom"
}

func (t *CustomTool) Description() string {
	return "A custom tool"
}

func (t *CustomTool) JSONSchema() (map[string]any, error) {
	return map[string]any{
		"type": "object",
		"properties": map[string]any{
			"query": map[string]string{"type": "string"},
		},
	}, nil
}

func (t *CustomTool) Run(ctx context.Context, input string) (any, error) {
	// Parse the input JSON (uses encoding/json) and execute.
	var args struct{ Query string `json:"query"` }
	if err := json.Unmarshal([]byte(input), &args); err != nil {
		return nil, err
	}
	return "results for " + args.Query, nil
}

Function Tools

NewFuncTool automatically generates JSON schemas from struct tags:

type SearchArgs struct {
	Query      string   `json:"query" description:"Search query"`
	MaxResults int      `json:"max_results,omitempty" description:"Maximum number of results"`
	Filters    []string `json:"filters,omitempty" description:"Filter categories"`
}

searchTool, _ := tool.NewFuncTool(
	"search",
	"Search the knowledge base",
	func(ctx context.Context, args SearchArgs) ([]string, error) {
		// Implementation... (placeholder result for illustration)
		return []string{"result for " + args.Query}, nil
	},
)

Supported Types

Function tools support various parameter types:

  • Primitives: string, int, float64, bool
  • Structs: Nested objects with JSON tags
  • Slices: Arrays of any supported type
  • Maps: map[string]any for flexible schemas
  • Pointers: Optional fields with omitempty

Error Handling

Tool errors are returned to the agent:

func(ctx context.Context, args Args) (any, error) {
	if args.Query == "" {
		return nil, fmt.Errorf("query cannot be empty")
	}
	// Any returned error is surfaced to the agent as the tool result.
	return doQuery(ctx, args) // doQuery stands in for the actual implementation
}

Context Handling

Tools receive context for cancellation and timeouts:

func(ctx context.Context, args Args) (any, error) {
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case result := <-performLongOperation(): // returns a result channel
		return result, nil
	}
}

Best Practices

  • Keep tools focused (single responsibility)
  • Use descriptive names and descriptions
  • Provide detailed JSON schema descriptions
  • Handle errors gracefully
  • Support context cancellation
  • Return structured data when possible

Package tool provides sentinel errors for the tool package.

Package tool provides tool execution with lifecycle management.

The Executor pattern separates tool execution from graph orchestration:

  • Executor: Handles execution lifecycle (observability, error handling)
  • Tool: Core tool logic (actual work)
  • ToolNode: Graph orchestration (message extraction, routing)

This separation enables:

  • Reusable execution logic across different contexts
  • Multiple execution strategies (sequential, parallel)
  • Custom executor implementations (rate limiting, caching, batching)
  • Clean testing boundaries
  • Centralized observability handling

Architecture:

┌─────────────┐
│  ToolNode   │  Graph layer: message extraction, routing
└──────┬──────┘
       │ delegates to
┌──────▼──────┐
│  Executor   │  Execution layer: lifecycle, parallelism, observability
└──────┬──────┘
       │ calls
┌──────▼──────┐
│    Tool     │  Core layer: actual work
└─────────────┘

Execution Strategies:

  • SequentialExecutor: Executes tools one by one in order. Use when tools have dependencies or side effects.

  • ParallelExecutor: Executes tools concurrently with optional concurrency limits. Use when tools are independent, for better performance.

Arguments as JSON String:

Tool arguments are passed as JSON strings (not maps) to eliminate wasteful marshal/unmarshal cycles. Arguments flow as JSON from LLM → Executor → Tool:

LLM generates: {"location": "Berlin", "unit": "celsius"}
    ↓
ToolCall.Arguments: "{\"location\": \"Berlin\", \"unit\": \"celsius\"}"
    ↓
Tool receives: "{\"location\": \"Berlin\", \"unit\": \"celsius\"}"

This avoids: JSON string → map → JSON string → tool unmarshal

Example (basic usage):

executor := tool.NewSequentialExecutor(registry)
calls := []tool.Call{{
    ID: "call_1",
    Name: "weather",
    Arguments: `{"location":"Berlin"}`,
}}
results, err := executor.Execute(ctx, calls)

Example (parallel execution):

executor := tool.NewParallelExecutor(registry,
    tool.WithMaxConcurrency(5))

Example (custom executor):

type CachedExecutor struct {
    wrapped tool.Executor
    cache   map[string]tool.ExecutionResult
}

func (e *CachedExecutor) Execute(ctx context.Context, calls []Call) ([]ExecutionResult, error) {
    // Implement caching logic here; delegate cache misses to the wrapped executor.
    return e.wrapped.Execute(ctx, calls)
}

Index

Constants

This section is empty.

Variables

var (
	// ErrNilOutputSchema is returned when an output schema is nil.
	ErrNilOutputSchema = errors.New("tool/set_model_response: nil output schema")

	// ErrNilOutputSchemaPointer is returned when an output schema pointer is nil.
	ErrNilOutputSchemaPointer = errors.New("tool/set_model_response: nil output schema pointer")

	// ErrEmptyQuery is returned when a query is empty.
	ErrEmptyQuery = errors.New("tool: query cannot be empty")

	// ErrInvalidAgentResult is returned when an agent returns an invalid result.
	ErrInvalidAgentResult = errors.New("tool/handoff: agent returned invalid result")

	// ErrNoAgentMessages is returned when an agent produces no messages.
	ErrNoAgentMessages = errors.New("tool/handoff: agent produced no messages")
)

Functions

func CollectInstructions

func CollectInstructions(tools []Tool) string

CollectInstructions gathers instructions from all tools that implement InstructionProvider. Returns a combined string with all instructions separated by double newlines. Returns empty string if no tools provide instructions.

func WithInstruction

func WithInstruction(instruction string) func(*SetModelResponseToolOptions)

WithInstruction sets a custom instruction text for the SetModelResponseTool. This overrides the default instruction that tells the model how to use the tool.

func WithSetModelResponseDescription

func WithSetModelResponseDescription(description string) func(*SetModelResponseToolOptions)

WithSetModelResponseDescription sets a custom description for the SetModelResponseTool.

func WithSetModelResponseName

func WithSetModelResponseName(name string) func(*SetModelResponseToolOptions)

WithSetModelResponseName sets a custom name for the SetModelResponseTool.

Types

type Call

type Call struct {
	ID        string // Unique identifier for this call
	Name      string // Tool name to execute
	Arguments string // Tool arguments as JSON string (not map[string]any)
}

Call represents a single tool invocation request.

The Arguments field is a JSON string (not a map) to avoid wasteful marshal/unmarshal cycles. This design keeps arguments as JSON throughout the pipeline from LLM generation to tool execution.

Example:

call := tool.Call{
    ID:        "call_123",
    Name:      "get_weather",
    Arguments: `{"location":"Berlin","unit":"celsius"}`,
}

type CompositeToolset

type CompositeToolset struct {
	// contains filtered or unexported fields
}

CompositeToolset combines multiple toolsets into one.

func Combine

func Combine(toolsets ...Toolset) *CompositeToolset

Combine creates a composite toolset from multiple toolsets.

func (*CompositeToolset) Close

func (c *CompositeToolset) Close() error

Close releases resources from all contained toolsets.

func (*CompositeToolset) ListTools

func (c *CompositeToolset) ListTools(ctx context.Context, scope graph.ReadOnlyScope) ([]Tool, error)

ListTools returns tools from all contained toolsets.

type Definition

type Definition struct {
	Type     string             `json:"type"` // "function"
	Function FunctionDefinition `json:"function"`
}

Definition declaratively exposes a callable function to the model.

type ExecutionResult

type ExecutionResult struct {
	ToolCallID string        // ID of the tool call
	ToolName   string        // Name of the tool executed
	Result     any           // Tool result (nil if error)
	Error      error         // Execution error (nil if success)
	Duration   time.Duration // Execution time
}

ExecutionResult contains the outcome of a tool execution.

type Executor

type Executor interface {
	// Execute runs one or more tool calls with full lifecycle.
	// Returns execution results for each tool call in the same order.
	// The executor handles observability and error recovery.
	Execute(ctx context.Context, calls []Call) ([]ExecutionResult, error)
}

Executor handles the complete lifecycle of tool executions.

This interface allows users to provide custom executor implementations for specialized behavior (e.g., rate limiting, caching, custom parallelism).

Example custom implementations:

  • RateLimitedExecutor: Wraps with rate limiting
  • CachedExecutor: Caches deterministic tool results
  • CircuitBreakerExecutor: Implements circuit breaker pattern
  • BatchedExecutor: Batches multiple calls for efficiency

func Chain

func Chain(executor Executor, middleware ...Middleware) Executor

Chain applies multiple middleware to an executor in order. Middleware are applied in the order given, so the first middleware in the list is the outermost layer.

Example:

executor := tool.Chain(
    tool.NewSequentialExecutor(registry),
    middleware.NewCacheMiddleware(),
    middleware.NewTimeoutMiddleware(30*time.Second),
    middleware.NewCircuitBreakerMiddleware(5, time.Minute),
)

This produces cache(timeout(circuitBreaker(executor))); execution flows cache → timeout → circuitBreaker → executor.

func NewExecutor

func NewExecutor(registry map[string]Tool, opts ...ExecutorOption) Executor

NewExecutor creates a tool executor with the recommended default (sequential). For parallel execution, use NewParallelExecutor explicitly.

Example:

executor := tool.NewExecutor(registry,
    tool.WithErrorPrefix("my-agent"))

func NewSequentialExecutor

func NewSequentialExecutor(registry map[string]Tool, opts ...ExecutorOption) Executor

NewSequentialExecutor creates a sequential tool executor. Tools are executed one by one in the order provided.

Use this when:

  • Tools have dependencies on each other
  • Tools have side effects that must be ordered
  • You want deterministic execution order

Example:

executor := tool.NewSequentialExecutor(registry,
    tool.WithContinueOnError(false),
    tool.WithErrorPrefix("react-agent"))

func WrapFunc

func WrapFunc(fn func(ctx context.Context, calls []Call) ([]ExecutionResult, error)) Executor

WrapFunc creates an executor from a function. This is a convenience function for middleware implementations.

Example:

return tool.WrapFunc(func(ctx context.Context, calls []tool.Call) ([]tool.ExecutionResult, error) {
        // Pre-processing
        start := time.Now()
        results, err := next.Execute(ctx, calls)
        // Post-processing
        log.Printf("Tool execution took: %v", time.Since(start))
        return results, err
})

type ExecutorOption

type ExecutorOption interface {
	// contains filtered or unexported methods
}

ExecutorOption configures an executor.

This interface-based design (rather than function types) provides:

  • Full compile-time type safety for both SequentialExecutor and ParallelExecutor
  • Shared options that work with multiple executor types via sharedExecutorOption
  • No runtime type switches or silent failures from invalid option types

Options work with executorConfig to provide consistent behavior across executor variants.

type ExecutorWrapper

type ExecutorWrapper struct {
	// contains filtered or unexported fields
}

ExecutorWrapper wraps a function as an Executor. This is useful for creating ad-hoc executors or for middleware implementations.

func (*ExecutorWrapper) Execute

func (w *ExecutorWrapper) Execute(ctx context.Context, calls []Call) ([]ExecutionResult, error)

Execute implements the Executor interface.

type Func

type Func[T any, R any] func(ctx context.Context, args T) (R, error)

Func is the signature for tool implementation functions with typed arguments and results.

type FuncTool

type FuncTool[T any, R any] struct {
	// contains filtered or unexported fields
}

FuncTool wraps a Go function as a callable tool with JSON Schema validation. It provides type-safe tool implementations with automatic argument parsing.

func HandoffToAgent

func HandoffToAgent(
	agentName string,
	agentDescription string,
	agentGraph *graph.Graph,
	options ...HandoffOption,
) (*FuncTool[HandoffArgs, string], error)

HandoffToAgent creates a tool that delegates work to a worker agent graph. This is the core building block for supervisor patterns with tool-based handoffs.

The tool automatically handles:

  • Message history control (only passes task + optional context)
  • Retry logic on failures
  • Result validation
  • Error handling

Example:

researchAgent := createResearchAgentGraph(ctx)
researchTool, err := tool.HandoffToAgent(
    "research_agent",
    "Use this to find information, research papers, or gather data on any topic",
    researchAgent,
    tool.WithContext(true),
    tool.WithRetries(2),
)

The supervisor agent can then use this tool:

supervisor, err := agent.NewReAct(
    llm,
    agent.WithTools(researchTool, codeTool),
)

func NewFuncTool

func NewFuncTool[T any, R any](
	name, description string,
	fn Func[T, R],
	opts ...FuncToolOption,
) (*FuncTool[T, R], error)

NewFuncTool creates a FuncTool with automatic JSON Schema generation from the argument type. The schema is inferred from the type parameter T using struct tags and field types.

Example:

type SearchArgs struct {
    Query string `json:"query" jsonschema:"description=Search query"`
    Limit int    `json:"limit" jsonschema:"description=Max results"`
}

tool, err := tool.NewFuncTool("search", "Search for documents",
    func(ctx context.Context, args SearchArgs) (string, error) {
        return performSearch(args.Query, args.Limit)
    },
)

func NewFuncToolFromMap

func NewFuncToolFromMap[T any, R any](name, description string, parameters map[string]any, fn Func[T, R], opts ...FuncToolOption) *FuncTool[T, R]

NewFuncToolFromMap creates a FuncTool with an explicit JSON Schema provided as a map. Use this when you need fine-grained control over the schema or when the automatic schema generation from NewFuncTool doesn't meet your needs.

Example:

schema := map[string]any{
    "type": "object",
    "properties": map[string]any{
        "query": map[string]any{
            "type": "string",
            "description": "Search query",
        },
    },
    "required": []string{"query"},
}
tool := tool.NewFuncToolFromMap("search", "Search documents", schema, searchFunc)

func NewRetrievalTool

func NewRetrievalTool(name, description string, retriever retrieval.Retriever) (*FuncTool[RetrievalArgs, []retrieval.Document], error)

NewRetrievalTool creates a new retrieval tool that queries the given retriever.

func (*FuncTool[T, R]) Call

func (t *FuncTool[T, R]) Call(ctx context.Context, args string) (any, error)

Call executes the tool function with JSON-serialized arguments. It validates arguments against the JSON Schema, deserializes them to type T, and invokes the wrapped function. Returns an error if validation fails or the function returns an error.

func (*FuncTool[T, R]) Definition

func (t *FuncTool[T, R]) Definition() *Definition

Definition returns the tool definition with schema.

func (*FuncTool[T, R]) Description

func (t *FuncTool[T, R]) Description() string

Description returns the short natural language description exposed to models.

func (*FuncTool[T, R]) Name

func (t *FuncTool[T, R]) Name() string

Name returns the unique tool name used in function call declarations and routing.

type FuncToolOption

type FuncToolOption func(*FuncToolOptions)

FuncToolOption configures FuncTool options.

type FuncToolOptions

type FuncToolOptions struct{}

FuncToolOptions configures a FuncTool.

type FunctionDefinition

type FunctionDefinition struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  map[string]any `json:"parameters"` // JSON Schema
}

FunctionDefinition describes an individual function (tool) exposed to the model. Parameters is a JSON Schema object (draft agnostic, minimal subset expected).

type HandoffArgs

type HandoffArgs struct {
	Task string `json:"task" jsonschema:"required,description=The specific task to delegate to the agent"`
}

HandoffArgs defines the arguments for agent handoff operations.

type HandoffConfig

type HandoffConfig struct {
	RetryAttempts   int
	ValidateResults bool
}

HandoffConfig configures handoff behavior.

type HandoffOption

type HandoffOption func(*HandoffConfig)

HandoffOption configures handoff behavior.

func WithRetries

func WithRetries(attempts int) HandoffOption

WithRetries sets the number of retry attempts on failure.

func WithValidation

func WithValidation(validate bool) HandoffOption

WithValidation enables/disables result validation.

type HandoffResult

type HandoffResult struct {
	Output   string
	Messages []message.Message
}

HandoffResult represents the result of a handoff operation.

type InstructionProvider

type InstructionProvider interface {
	// Instruction returns additional instruction text to append to the system prompt.
	// Return an empty string if no additional instructions are needed.
	Instruction() string
}

InstructionProvider is an optional interface that tools can implement to provide additional instructions that should be appended to the model's system prompt. This is useful for tools that need to explain special usage patterns to the model.

Example use cases:

  • SetModelResponseTool: Instructs the model to use this tool for final responses
  • Search tools: Provide query formatting guidelines
  • API tools: Explain rate limits or authentication requirements

type Middleware

type Middleware interface {
	// Wrap takes the next executor in the chain and returns a wrapped version.
	// The wrapped executor should call next.Execute() to continue the chain.
	Wrap(next Executor) Executor
}

Middleware intercepts and extends tool execution. Middleware can add cross-cutting concerns like timeouts, circuit breakers, caching, etc. without modifying the tool executor implementation.

Example:

type TimeoutMiddleware struct {
    timeout time.Duration
}

func (m *TimeoutMiddleware) Wrap(next tool.Executor) tool.Executor {
    return tool.WrapFunc(func(ctx context.Context, calls []tool.Call) ([]tool.ExecutionResult, error) {
        ctx, cancel := context.WithTimeout(ctx, m.timeout)
        defer cancel()
        return next.Execute(ctx, calls)
    })
}

type MiddlewareFunc

type MiddlewareFunc func(next Executor) Executor

MiddlewareFunc is a function adapter for Middleware. It allows using functions as middleware without defining a type.

Example:

loggingMiddleware := tool.MiddlewareFunc(func(next tool.Executor) tool.Executor {
    return tool.WrapFunc(func(ctx context.Context, calls []tool.Call) ([]tool.ExecutionResult, error) {
        log.Printf("Executing %d tools", len(calls))
        return next.Execute(ctx, calls)
    })
})

func (MiddlewareFunc) Wrap

func (f MiddlewareFunc) Wrap(next Executor) Executor

Wrap implements the Middleware interface.

type ParallelExecutor

type ParallelExecutor struct {
	// contains filtered or unexported fields
}

ParallelExecutor executes tools concurrently using goroutines. This provides better performance when tools are independent.

func NewParallelExecutor

func NewParallelExecutor(registry map[string]Tool, opts ...ParallelExecutorOption) *ParallelExecutor

NewParallelExecutor creates a parallel tool executor. Tools are executed concurrently using goroutines.

Use this when:

  • Tools are independent of each other
  • You want maximum performance
  • Tools can safely run concurrently

Example:

executor := tool.NewParallelExecutor(registry,
    tool.WithContinueOnError(true),
    tool.WithMaxConcurrency(10))

func (*ParallelExecutor) Execute

func (e *ParallelExecutor) Execute(ctx context.Context, calls []Call) ([]ExecutionResult, error)

Execute implements Executor for ParallelExecutor.

func (*ParallelExecutor) WithParallelOptions

func (e *ParallelExecutor) WithParallelOptions(opts ...ParallelExecutorOption) *ParallelExecutor

WithParallelOptions applies ParallelExecutor-specific options. This method is provided for compatibility but is not required; ParallelExecutorOption can be passed directly to NewParallelExecutor.

Example:

executor := tool.NewParallelExecutor(registry,
    tool.WithMaxConcurrency(5))

type ParallelExecutorOption

type ParallelExecutorOption interface {
	// contains filtered or unexported methods
}

ParallelExecutorOption configures a ParallelExecutor. These options are specific to parallel execution and don't apply to sequential executors.

func WithMaxConcurrency

func WithMaxConcurrency(maxConcurrency int) ParallelExecutorOption

WithMaxConcurrency limits concurrent tool executions. A value of 0 means unlimited concurrency (default).

Example:

executor := tool.NewParallelExecutor(registry,
    tool.WithMaxConcurrency(5)) // Max 5 concurrent tools

type RetrievalArgs

type RetrievalArgs struct {
	Query string `json:"query" jsonschema:"title=The query to retrieve.,required"`
}

RetrievalArgs defines the arguments for the retrieval tool.

type SequentialExecutor

type SequentialExecutor struct {
	// contains filtered or unexported fields
}

SequentialExecutor executes tools one by one in order. This is the safest option when tools have dependencies or side effects.

func (*SequentialExecutor) Execute

func (e *SequentialExecutor) Execute(ctx context.Context, calls []Call) ([]ExecutionResult, error)

Execute implements Executor for SequentialExecutor.

type SetModelResponseTool

type SetModelResponseTool struct {
	// contains filtered or unexported fields
}

SetModelResponseTool is an internal tool used when structured output is configured alongside other tools. It lets the model provide its final structured response using tool calling, enabling structured output on models without native support.

This is the "tool trick" - converting a schema into a tool that the model must call to provide its final response.

func NewSetModelResponseTool

func NewSetModelResponseTool(outputSchema any, optFns ...func(*SetModelResponseToolOptions)) (*SetModelResponseTool, error)

NewSetModelResponseTool creates a new tool with the given schema. The schema parameter can be:

  • A struct type (generates schema via reflection)
  • A map[string]any (uses directly as schema)
  • A *schema.OutputSchema (extracts Schema field)

Example with struct:

type AnalysisResult struct {
    Category   string  `json:"category" jsonschema:"required"`
    Confidence float64 `json:"confidence" jsonschema:"required"`
}
tool, err := tool.NewSetModelResponseTool(AnalysisResult{})

Example with OutputSchema:

outputSchema, _ := schema.NewOutputSchema("result", MyStruct{})
tool, err := tool.NewSetModelResponseTool(&outputSchema)

Example with custom instruction:

tool, err := tool.NewSetModelResponseTool(&outputSchema,
    tool.WithInstruction("Always use set_model_response for your final answer."),
)

The tool automatically:

  • Validates the response against the schema
  • Adds instructions to the model request
  • Returns the validated response for further processing

func (*SetModelResponseTool) Call

func (t *SetModelResponseTool) Call(ctx context.Context, args string) (any, error)

Call validates the provided arguments against the schema and returns them. The arguments are expected to be a JSON string matching the output schema.

func (*SetModelResponseTool) Definition

func (t *SetModelResponseTool) Definition() *Definition

Definition returns the tool definition including name, description, and parameters.

func (*SetModelResponseTool) Description

func (t *SetModelResponseTool) Description() string

Description returns the tool's description.

func (*SetModelResponseTool) Instruction

func (t *SetModelResponseTool) Instruction() string

Instruction returns the instruction text to be added to model requests. This explains to the model how and when to use the set_model_response tool. Returns the custom instruction if configured, otherwise returns the default.

func (*SetModelResponseTool) Name

func (t *SetModelResponseTool) Name() string

Name returns the tool's unique identifier.

func (*SetModelResponseTool) Parameters

func (t *SetModelResponseTool) Parameters() map[string]any

Parameters returns the tool's parameter schema. Extracts properties and required fields from the output schema.

type SetModelResponseToolOptions

type SetModelResponseToolOptions struct {
	// Name overrides the default tool name ("set_model_response").
	Name string
	// Description overrides the default tool description.
	Description string
	// Instruction overrides the default instruction text added to model requests.
	Instruction string
}

SetModelResponseToolOptions configures the SetModelResponseTool.

type SharedExecutorOption

type SharedExecutorOption func(*executorConfig)

SharedExecutorOption implements both ExecutorOption and ParallelExecutorOption interfaces.

This allows common options (like WithContinueOnError, WithErrorPrefix) to work with both sequential and parallel executors without code duplication or type prefixes.

The pattern uses embedded executorConfig to achieve this:

  • Both SequentialExecutor and ParallelExecutor embed executorConfig
  • SharedExecutorOption modifies the embedded executorConfig fields
  • Interface methods delegate to the embedded struct

Example usage:

// Same option works for both executor types
seq := tool.NewSequentialExecutor(registry, tool.WithErrorPrefix("agent"))
par := tool.NewParallelExecutor(registry, tool.WithErrorPrefix("agent"))

func WithContinueOnError

func WithContinueOnError(continueOnError bool) SharedExecutorOption

WithContinueOnError configures error handling behavior. If true, execution continues even when individual tools fail. Errors are still returned in ExecutionResult.Error for each failed tool.

Works with both SequentialExecutor and ParallelExecutor.

Example:

executor := tool.NewSequentialExecutor(registry,
    tool.WithContinueOnError(true))

func WithErrorPrefix

func WithErrorPrefix(prefix string) SharedExecutorOption

WithErrorPrefix sets the error message prefix. This prefix is added to all error messages from the executor.

Works with both SequentialExecutor and ParallelExecutor.

Example:

executor := tool.NewSequentialExecutor(registry,
    tool.WithErrorPrefix("my-agent"))

type StaticToolset

type StaticToolset struct {
	// contains filtered or unexported fields
}

StaticToolset wraps a static list of tools.

func NewStaticToolset

func NewStaticToolset(tools ...Tool) *StaticToolset

NewStaticToolset creates a toolset from a static list of tools.

func (*StaticToolset) Close

func (s *StaticToolset) Close() error

Close is a no-op for static toolsets.

func (*StaticToolset) ListTools

func (s *StaticToolset) ListTools(_ context.Context, _ graph.ReadOnlyScope) ([]Tool, error)

ListTools returns the static list of tools. The scope parameter is ignored for static toolsets.

type Tool

type Tool interface {
	Name() string
	Description() string
	Definition() *Definition
	Call(ctx context.Context, args string) (any, error)
}

Tool defines the interface for executable functions that can be called by LLMs.

type Toolset

type Toolset interface {
	// ListTools returns available tools.
	// The scope parameter provides read access to the current graph state,
	// enabling context-aware tool selection.
	// If scope is nil, returns all available tools (static discovery).
	ListTools(ctx context.Context, scope graph.ReadOnlyScope) ([]Tool, error)

	// Close releases any resources held by the toolset.
	Close() error
}

Toolset defines a collection of tools that can be managed together. The interface supports both static and dynamic tool discovery.

Directories

Path Synopsis
Package a2a provides tools for integrating external A2A agents into AgentMesh workflows.
Package a2a provides tools for integrating external A2A agents into AgentMesh workflows.
Package docker provides Docker-based tool sandboxing for secure execution of containerized commands with resource limits and network isolation.
Package docker provides Docker-based tool sandboxing for secure execution of containerized commands with resource limits and network isolation.
Package langchaingo provides adapters for using langchaingo tools (github.com/tmc/langchaingo/tools) within AgentMesh workflows.
Package langchaingo provides adapters for using langchaingo tools (github.com/tmc/langchaingo/tools) within AgentMesh workflows.
Package mcp provides integration with the Model Context Protocol (MCP).
Package mcp provides integration with the Model Context Protocol (MCP).
Package middleware provides reusable middleware for tool executors.
Package middleware provides reusable middleware for tool executors.
Package wasm provides WebAssembly-based tool sandboxing for secure execution of untrusted code with memory isolation and resource limits.
Package wasm provides WebAssembly-based tool sandboxing for secure execution of untrusted code with memory isolation and resource limits.
