Documentation ¶
Index ¶
- func ParseModelName(modelString string) (provider, model string)
- func StreamWithCallback(ctx context.Context, reader *schema.StreamReader[*schema.Message], ...) (*schema.Message, error)
- type Agent
- func (a *Agent) Close() error
- func (a *Agent) GenerateWithLoop(ctx context.Context, messages []*schema.Message, onToolCall ToolCallHandler, ...) (*GenerateWithLoopResult, error)
- func (a *Agent) GenerateWithLoopAndStreaming(ctx context.Context, messages []*schema.Message, onToolCall ToolCallHandler, ...) (*GenerateWithLoopResult, error)
- func (a *Agent) GetLoadedServerNames() []string
- func (a *Agent) GetLoadingMessage() string
- func (a *Agent) GetTools() []tool.BaseTool
- type AgentConfig
- type AgentCreationOptions
- type GenerateWithLoopResult
- type ResponseHandler
- type SpinnerFunc
- type StreamingResponseHandler
- type ToolCallContentHandler
- type ToolCallHandler
- type ToolExecutionHandler
- type ToolResultHandler
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ParseModelName ¶ added in v0.19.0
func ParseModelName(modelString string) (provider, model string)
ParseModelName extracts the provider and model name from a model string. Model strings are formatted as "provider:model" (e.g., "anthropic:claude-3-5-sonnet-20241022"). If the string doesn't contain a colon, returns "unknown" for both provider and model.
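The documented contract can be illustrated with a minimal, self-contained reimplementation (this sketch mirrors the described behavior; it is not the package's actual source):

```go
package main

import (
	"fmt"
	"strings"
)

// parseModelName mirrors the documented contract of ParseModelName:
// split "provider:model" at the first colon; if there is no colon,
// return "unknown" for both values. Illustrative sketch only.
func parseModelName(modelString string) (provider, model string) {
	parts := strings.SplitN(modelString, ":", 2)
	if len(parts) != 2 {
		return "unknown", "unknown"
	}
	return parts[0], parts[1]
}

func main() {
	p, m := parseModelName("anthropic:claude-3-5-sonnet-20241022")
	fmt.Println(p, m) // anthropic claude-3-5-sonnet-20241022

	p, m = parseModelName("gpt-4o")
	fmt.Println(p, m) // unknown unknown
}
```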
func StreamWithCallback ¶ added in v0.18.0
func StreamWithCallback(ctx context.Context, reader *schema.StreamReader[*schema.Message], callback func(string)) (*schema.Message, error)
StreamWithCallback streams content with real-time callbacks and returns the complete response. It accumulates content and tool calls from the stream, invoking the callback for each content chunk. IMPORTANT: Tool calls are only processed after EOF is reached to ensure we have the complete and final tool call information. This prevents premature tool execution on partial data.
Handles different provider streaming patterns:
- Anthropic: Text content first, then tool calls streamed incrementally
- OpenAI/Others: Tool calls first or alone
- Mixed: Tool calls and content interleaved
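The accumulate-then-finalize pattern described above can be sketched with mock types standing in for `*schema.Message` and `schema.StreamReader` (assumptions: real frames also carry partial tool-call deltas, which would be finalized at the EOF point marked below):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// chunk is a stand-in for a streamed *schema.Message frame.
type chunk struct {
	content string
}

// mockReader simulates a stream reader's Recv loop.
type mockReader struct {
	chunks []chunk
	i      int
}

func (r *mockReader) Recv() (chunk, error) {
	if r.i >= len(r.chunks) {
		return chunk{}, io.EOF
	}
	c := r.chunks[r.i]
	r.i++
	return c, nil
}

// streamWithCallback sketches the documented pattern: invoke the
// callback per content chunk, accumulate the full text, and only
// finalize tool calls after EOF so partial data is never executed.
func streamWithCallback(r *mockReader, callback func(string)) (string, error) {
	var b strings.Builder
	for {
		c, err := r.Recv()
		if err == io.EOF {
			// Tool calls would be assembled and returned here,
			// only after the stream is fully drained.
			break
		}
		if err != nil {
			return "", err
		}
		if c.content != "" {
			callback(c.content)
			b.WriteString(c.content)
		}
	}
	return b.String(), nil
}

func main() {
	r := &mockReader{chunks: []chunk{{"Hel"}, {"lo"}}}
	full, _ := streamWithCallback(r, func(s string) { fmt.Print(s) })
	fmt.Println("\nfull:", full)
}
```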
Types ¶
type Agent ¶ added in v0.10.0
type Agent struct {
// contains filtered or unexported fields
}
Agent represents an AI agent with MCP tool integration and real-time tool call display. It manages the interaction between an LLM and various tools through the MCP protocol.
func CreateAgent ¶ added in v0.19.0
func CreateAgent(ctx context.Context, opts *AgentCreationOptions) (*Agent, error)
CreateAgent creates an agent with optional spinner for Ollama models. It shows a loading spinner for Ollama models if ShowSpinner is true and not in quiet mode. Returns the created agent or an error if creation fails.
func NewAgent ¶ added in v0.10.0
func NewAgent(ctx context.Context, config *AgentConfig) (*Agent, error)
NewAgent creates a new Agent with MCP tool integration and streaming support. It initializes the LLM provider, loads MCP tools, and configures the agent based on the provided configuration. Returns an error if provider creation or tool loading fails.
func (*Agent) Close ¶ added in v0.10.0
func (a *Agent) Close() error
Close closes the agent and cleans up resources. It ensures all MCP connections are properly closed and resources are released.
func (*Agent) GenerateWithLoop ¶ added in v0.10.0
func (a *Agent) GenerateWithLoop(ctx context.Context, messages []*schema.Message, onToolCall ToolCallHandler, onToolExecution ToolExecutionHandler, onToolResult ToolResultHandler, onResponse ResponseHandler, onToolCallContent ToolCallContentHandler) (*GenerateWithLoopResult, error)
GenerateWithLoop processes messages with a custom loop that displays tool calls in real-time. It handles the conversation flow, executing tools as needed and invoking callbacks for various events. This method does not support streaming responses; use GenerateWithLoopAndStreaming for streaming support.
func (*Agent) GenerateWithLoopAndStreaming ¶ added in v0.18.0
func (a *Agent) GenerateWithLoopAndStreaming(ctx context.Context, messages []*schema.Message, onToolCall ToolCallHandler, onToolExecution ToolExecutionHandler, onToolResult ToolResultHandler, onResponse ResponseHandler, onToolCallContent ToolCallContentHandler, onStreamingResponse StreamingResponseHandler) (*GenerateWithLoopResult, error)
GenerateWithLoopAndStreaming processes messages with a custom loop that displays tool calls in real-time and supports streaming callbacks. It handles the conversation flow, executing tools as needed and invoking callbacks for various events including streaming chunks. The onStreamingResponse callback is invoked for each content chunk during streaming if streaming is enabled.
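Callers typically wire the callbacks up as closures. The following self-contained sketch mirrors the documented handler signatures locally and simulates one loop iteration (tool call, streamed chunks, final response); `runLoop` and its event order are illustrative, not the package's actual control flow:

```go
package main

import "fmt"

// Handler types mirrored from the package docs for illustration.
type ToolCallHandler func(toolName, toolArgs string)
type StreamingResponseHandler func(content string)
type ResponseHandler func(content string)

// runLoop simulates the documented flow: announce a tool call,
// stream content chunks, then deliver the complete response.
func runLoop(onToolCall ToolCallHandler, onStream StreamingResponseHandler, onResponse ResponseHandler) {
	onToolCall("search", `{"query":"weather"}`)
	for _, chunk := range []string{"It is ", "sunny."} {
		onStream(chunk)
	}
	onResponse("It is sunny.")
}

func main() {
	runLoop(
		func(name, args string) { fmt.Printf("tool: %s %s\n", name, args) },
		func(c string) { fmt.Print(c) },
		func(full string) { fmt.Printf("\nfinal: %s\n", full) },
	)
}
```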
func (*Agent) GetLoadedServerNames ¶ added in v0.19.0
func (a *Agent) GetLoadedServerNames() []string
GetLoadedServerNames returns the names of successfully loaded MCP servers. This includes both builtin servers and external MCP server configurations.
func (*Agent) GetLoadingMessage ¶ added in v0.18.0
func (a *Agent) GetLoadingMessage() string
GetLoadingMessage returns the loading message from provider creation. This may contain information about GPU fallback or other provider-specific initialization details.
type AgentConfig ¶ added in v0.10.0
type AgentConfig struct {
// ModelConfig specifies the LLM provider and model to use
ModelConfig *models.ProviderConfig
// MCPConfig contains MCP server configurations
MCPConfig *config.Config
// SystemPrompt is the initial system message for the agent
SystemPrompt string
// MaxSteps limits the number of tool calls (0 for unlimited)
MaxSteps int
// StreamingEnabled controls whether responses are streamed
StreamingEnabled bool
// DebugLogger is an optional logger for debugging MCP communications
DebugLogger tools.DebugLogger
}
AgentConfig holds configuration options for creating a new Agent. It includes model configuration, MCP settings, and various behavioral options.
type AgentCreationOptions ¶ added in v0.19.0
type AgentCreationOptions struct {
// ModelConfig specifies the LLM provider and model to use
ModelConfig *models.ProviderConfig
// MCPConfig contains MCP server configurations
MCPConfig *config.Config
// SystemPrompt is the initial system message for the agent
SystemPrompt string
// MaxSteps limits the number of tool calls (0 for unlimited)
MaxSteps int
// StreamingEnabled controls whether responses are streamed
StreamingEnabled bool
// ShowSpinner indicates whether to show a spinner for Ollama models during loading
ShowSpinner bool
// Quiet suppresses the spinner even if ShowSpinner is true
Quiet bool
// SpinnerFunc is the function used to show the spinner, provided by the caller
SpinnerFunc SpinnerFunc
// DebugLogger is an optional logger for debugging MCP communications
DebugLogger tools.DebugLogger
}
AgentCreationOptions contains options for creating an agent. It extends AgentConfig with UI-related options for showing progress during creation.
type GenerateWithLoopResult ¶ added in v0.17.0
type GenerateWithLoopResult struct {
// FinalResponse is the last message generated by the model
FinalResponse *schema.Message
// ConversationMessages contains all messages in the conversation including tool calls and results
ConversationMessages []*schema.Message
}
GenerateWithLoopResult contains the result and conversation history from an agent interaction. It includes both the final response and the complete message history with tool interactions.
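Because ConversationMessages includes tool calls and tool results, carrying it forward preserves tool context across turns. A minimal sketch of that continuation pattern, with a mock `message` type standing in for `*schema.Message`:

```go
package main

import "fmt"

// message is a stand-in for *schema.Message.
type message struct {
	role, content string
}

// nextTurn seeds the next request with the prior result's full
// conversation history plus the new user input, so the model keeps
// the context of earlier tool calls and results.
func nextTurn(history []message, userInput string) []message {
	return append(history, message{role: "user", content: userInput})
}

func main() {
	// history plays the role of a result's ConversationMessages.
	history := []message{
		{"system", "You are helpful."},
		{"user", "What is 2+2?"},
		{"assistant", "4"},
	}
	history = nextTurn(history, "And times 3?")
	fmt.Println(len(history)) // 4
}
```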
type ResponseHandler ¶ added in v0.10.0
type ResponseHandler func(content string)
ResponseHandler is a function type for handling LLM responses. It receives the complete response content from the model.
type SpinnerFunc ¶ added in v0.19.0
SpinnerFunc is a function type for showing spinners during agent creation. It executes the provided function while displaying a spinner with the given message.
type StreamingResponseHandler ¶ added in v0.18.0
type StreamingResponseHandler func(content string)
StreamingResponseHandler is a function type for handling streaming LLM responses. It receives content chunks as they are streamed from the model.
type ToolCallContentHandler ¶ added in v0.10.0
type ToolCallContentHandler func(content string)
ToolCallContentHandler is a function type for handling content that accompanies tool calls. It receives any text content that the model generates alongside tool calls.
type ToolCallHandler ¶ added in v0.10.0
type ToolCallHandler func(toolName, toolArgs string)
ToolCallHandler is a function type for handling tool calls as they happen. It receives the tool name and its arguments when a tool is about to be invoked.
type ToolExecutionHandler ¶ added in v0.11.0
ToolExecutionHandler is a function type for handling tool execution start/end events. The isStarting parameter indicates whether the tool is starting (true) or finished (false).
type ToolResultHandler ¶ added in v0.10.0
ToolResultHandler is a function type for handling tool results. It receives the tool name, arguments, result, and whether the result is an error.