cogito

Published: Dec 3, 2025 License: Apache-2.0 Imports: 15 Imported by: 2

README


Cogito is a powerful Go library for building intelligent, co-operative agentic software and LLM-powered workflows. It focuses on improving results for small, open-source language models while scaling to any LLM.

🧪 Tested on Small Models! Our test suite runs on Qwen 0.6B (not fine-tuned), proving effectiveness even with minimal resources.

📝 Working on Official Paper
I am currently working on the official academic/white paper for Cogito. The paper will provide detailed theoretical foundations, experimental results, and comprehensive analysis of the framework's capabilities.

🏗️ Architecture

Cogito is the result of lessons learned while building LocalAI, LocalAGI and LocalOperator (yet to be released).

Cogito uses an internal pipeline that first forces the LLM to reason about the task at hand, then extracts exact data structures from the LLM's output using BNF grammars. This pipeline is applied to every primitive exposed by the framework.

It provides a comprehensive framework for creating conversational AI systems with advanced reasoning, tool execution, goal-oriented planning, and iterative content refinement, plus seamless integration with external tools, including via the Model Context Protocol (MCP).

🔧 Composable Primitives
Cogito primitives can be combined to form more complex pipelines, enabling sophisticated AI workflows.

🚀 Quick Start

Installation

go get github.com/mudler/cogito

Basic Usage

package main

import (
    "context"
    "fmt"
    "github.com/mudler/cogito"
)

func main() {
    // Create an LLM client
    llm := cogito.NewOpenAILLM("your-model", "api-key", "https://api.openai.com")
    
    // Create a conversation fragment
    fragment := cogito.NewEmptyFragment().
        AddMessage("user", "Tell me about artificial intelligence")
    
    // Get a response
    newFragment, err := llm.Ask(context.Background(), fragment)
    if err != nil {
        panic(err)
    }
    
    fmt.Println(newFragment.LastMessage().Content)
}

Using Tools

Creating Custom Tools

To create a custom tool, implement the Tool[T] interface:

type MyToolArgs struct {
    Param string `json:"param" description:"A parameter"`
}

type MyTool struct{}

// Implement the Tool interface
func (t *MyTool) Run(args MyToolArgs) (string, error) {
    // Your tool logic here
    return fmt.Sprintf("Processed: %s", args.Param), nil
}

// Create a ToolDefinition using NewToolDefinition helper
myTool := cogito.NewToolDefinition(
    &MyTool{},
    MyToolArgs{},
    "my_tool",
    "A custom tool",
)

Tools in Cogito are created by calling NewToolDefinition on your tool, which automatically generates an openai.Tool via the Tool() method. Tools are then passed to cogito.WithTools:

// Define tool argument types
type WeatherArgs struct {
    City string `json:"city" description:"The city to get weather for"`
}

type SearchArgs struct {
    Query string `json:"query" description:"The search query"`
}

// Create tool definitions - these automatically generate openai.Tool
weatherTool := cogito.NewToolDefinition(
    &WeatherTool{},
    WeatherArgs{},
    "get_weather",
    "Get the current weather for a city",
)

searchTool := cogito.NewToolDefinition(
    &SearchTool{},
    SearchArgs{},
    "search",
    "Search for information",
)

// Create a fragment with user input
fragment := cogito.NewFragment(openai.ChatCompletionMessage{
    Role:    "user",
    Content: "What's the weather in San Francisco?",
})

// Execute with tools - you can pass multiple tools with different types
result, err := cogito.ExecuteTools(llm, fragment, 
    cogito.WithTools(weatherTool, searchTool))
if err != nil {
    panic(err)
}

// result.Status.ToolsCalled will contain all the tools that were called

Configuring Sink State

When the LLM determines that no tool is needed to respond to the user, Cogito uses a "sink state" tool to handle the response. By default, Cogito uses a built-in reply tool, but you can customize or disable this behavior.

Disable Sink State:

// Disable sink state entirely - the LLM will return an error if no tool is selected
result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithTools(weatherTool, searchTool),
    cogito.DisableSinkState)

Custom Sink State Tool:

// Define a custom sink state tool
type CustomReplyArgs struct {
    Reasoning string `json:"reasoning" description:"The reasoning for the reply"`
}

type CustomReplyTool struct{}

func (t *CustomReplyTool) Run(args CustomReplyArgs) (string, error) {
    // Custom logic to process the reasoning and generate a response
    return fmt.Sprintf("Based on: %s", args.Reasoning), nil
}

// Create a custom sink state tool
customSinkTool := cogito.NewToolDefinition(
    &CustomReplyTool{},
    CustomReplyArgs{},
    "custom_reply",
    "Custom tool for handling responses when no other tool is needed",
)

// Use the custom sink state tool
result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithTools(weatherTool, searchTool),
    cogito.WithSinkState(customSinkTool))

Notes:

  • The sink state tool is enabled by default with a built-in reply tool
  • When enabled, the sink state tool appears as an option in the tool selection enum
  • The sink state tool receives a reasoning parameter containing the LLM's reasoning about why no tool is needed
  • Custom sink state tools must accept a reasoning parameter in their arguments
Field Annotations for Tool Arguments

Cogito supports several struct field annotations to control how tool arguments are defined in the generated JSON schema:

Available Annotations:

  • json:"field_name" - Required. Defines the JSON field name for the parameter.
  • description:"text" - Provides a description for the field that helps the LLM understand what the parameter is for.
  • enum:"value1,value2,value3" - Restricts the field to a specific set of allowed values (comma-separated).
  • required:"false" - Makes the field optional. By default, all fields are required unless marked with required:"false".

Examples:

// Basic required field with description
type BasicArgs struct {
    Query string `json:"query" description:"The search query"`
}

// Optional field
type OptionalArgs struct {
    Query string `json:"query" required:"false" description:"Optional search query"`
    Limit int    `json:"limit" required:"false" description:"Maximum number of results"`
}

// Field with enum values
type EnumArgs struct {
    Action string `json:"action" enum:"create,read,update,delete" description:"The action to perform"`
}

// Field with enum and description
type WeatherArgs struct {
    City        string `json:"city" description:"The city name"`
    Unit        string `json:"unit" enum:"celsius,fahrenheit" description:"Temperature unit"`
    Format      string `json:"format" enum:"short,detailed" required:"false" description:"Output format"`
}

// Complete example with multiple field types
type AdvancedSearchArgs struct {
    // Required field with description
    Query string `json:"query" description:"The search query"`
    
    // Optional field with enum
    SortBy string `json:"sort_by" enum:"relevance,date,popularity" required:"false" description:"Sort order"`
    
    // Optional numeric field
    Limit int `json:"limit" required:"false" description:"Maximum number of results"`
    
    // Optional boolean field
    IncludeImages bool `json:"include_images" required:"false" description:"Include images in results"`
}

// Create tool with advanced arguments
searchTool := cogito.NewToolDefinition(
    &AdvancedSearchTool{},
    AdvancedSearchArgs{},
    "advanced_search",
    "Advanced search with sorting and filtering options",
)

Notes:

  • Fields without required:"false" are automatically marked as required in the JSON schema
  • Enum values are case-sensitive and should match exactly what you expect in Run()
  • The json tag is required for all fields that should be included in the tool schema
  • Descriptions help the LLM understand the purpose of each parameter, leading to better tool calls

Alternatively, you can implement ToolDefinitionInterface directly if you prefer more control:

type CustomTool struct{}

func (t *CustomTool) Tool() openai.Tool {
    return openai.Tool{
        Type: openai.ToolTypeFunction,
        Function: &openai.FunctionDefinition{
            Name:        "custom_tool",
            Description: "A custom tool",
            Parameters: jsonschema.Definition{
                // Define your schema
            },
        },
    }
}

func (t *CustomTool) Execute(args map[string]any) (string, error) {
    // Your execution logic
    return "result", nil
}

Guidelines for Intelligent Tool Selection

Guidelines provide a powerful way to define conditional rules for tool usage. The LLM intelligently selects which guidelines are relevant based on the conversation context, enabling dynamic and context-aware tool selection.

// Create tool definitions
searchTool := cogito.NewToolDefinition(
    &SearchTool{},
    SearchArgs{},
    "search",
    "Search for information",
)

weatherTool := cogito.NewToolDefinition(
    &WeatherTool{},
    WeatherArgs{},
    "get_weather",
    "Get weather information",
)

// Define guidelines with conditions and associated tools
guidelines := cogito.Guidelines{
    cogito.Guideline{
        Condition: "User asks about information or facts",
        Action:    "Use the search tool to find information",
        Tools: cogito.Tools{
            searchTool,
        },
    },
    cogito.Guideline{
        Condition: "User asks for the weather in a city",
        Action:    "Use the weather tool to find the weather",
        Tools: cogito.Tools{
            weatherTool,
        },
    },
}

// Get relevant guidelines for the current conversation
fragment := cogito.NewEmptyFragment().
    AddMessage("user", "When was Isaac Asimov born?")

// Execute tools with guidelines
result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithGuidelines(guidelines),
    cogito.EnableStrictGuidelines) // Only use tools from relevant guidelines
if err != nil {
    panic(err)
}

Goal-Oriented Planning

// Extract a goal from conversation
goal, err := cogito.ExtractGoal(llm, fragment)
if err != nil {
    panic(err)
}

// Create tool definition
searchTool := cogito.NewToolDefinition(
    &SearchTool{},
    SearchArgs{},
    "search",
    "Search for information",
)

// Create a plan to achieve the goal
plan, err := cogito.ExtractPlan(llm, fragment, goal, 
    cogito.WithTools(searchTool))
if err != nil {
    panic(err)
}

// Execute the plan
result, err := cogito.ExecutePlan(llm, fragment, plan, goal,
    cogito.WithTools(searchTool))
if err != nil {
    panic(err)
}

Content Refinement

// Create tool definition
searchTool := cogito.NewToolDefinition(
    &SearchTool{},
    SearchArgs{},
    "search",
    "Search for information",
)

// Refine content through iterative improvement
refined, err := cogito.ContentReview(llm, fragment,
    cogito.WithIterations(3),
    cogito.WithTools(searchTool))
if err != nil {
    panic(err)
}

Iterative Content Improvement

An example of how to iteratively improve content by using two separate models:

llm := cogito.NewOpenAILLM("your-model", "api-key", "https://api.openai.com")
reviewerLLM := cogito.NewOpenAILLM("your-reviewer-model", "api-key", "https://api.openai.com")

// Create content to review
initial := cogito.NewEmptyFragment().
    AddMessage("user", "Write about climate change")

response, _ := llm.Ask(ctx, initial)

// Create tool definitions
searchTool := cogito.NewToolDefinition(
    &SearchTool{},
    SearchArgs{},
    "search",
    "Search for information",
)

factCheckTool := cogito.NewToolDefinition(
    &FactCheckTool{},
    FactCheckArgs{},
    "fact_check",
    "Verify facts",
)

// Iteratively improve with tool support
improvedResponse, _ := cogito.ContentReview(reviewerLLM, response,
    cogito.WithIterations(3),
    cogito.WithTools(searchTool, factCheckTool),
    cogito.EnableToolReasoner)

Model Context Protocol (MCP) Integration

Cogito supports the Model Context Protocol (MCP) for seamless integration with external tools and services. MCP allows you to connect to remote tool providers and use their capabilities directly within your Cogito workflows.

import (
    "github.com/modelcontextprotocol/go-sdk/mcp"
)

// Create MCP client sessions
command := exec.Command("docker", "run", "-i", "--rm", "ghcr.io/mudler/mcps/weather:master")
transport := &mcp.CommandTransport{Command: command}

client := mcp.NewClient(&mcp.Implementation{Name: "test", Version: "v1.0.0"}, nil)
mcpSession, _ := client.Connect(context.Background(), transport, nil)

// Use MCP tools in your workflows
result, _ := cogito.ExecuteTools(llm, fragment,
    cogito.WithMCPs(mcpSession))

MCP with Guidelines

// Define guidelines that include MCP tools
guidelines := cogito.Guidelines{
    cogito.Guideline{
        Condition: "User asks about information or facts",
        Action:    "Use the MCP search tool to find information",
    },
}

// Execute with MCP tools and guidelines
result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithMCPs(searchSession),
    cogito.WithGuidelines(guidelines),
    cogito.EnableStrictGuidelines)

Custom Prompts

customPrompt := cogito.NewPrompt(`Your custom prompt template with {{.Context}}`)

result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithPrompt(cogito.ToolReasonerType, customPrompt))

🎮 Examples

Interactive Chat Bot

# Run the example chat application
make example-chat

This starts an interactive chat session with tool support including web search capabilities.

Custom Tool Implementation

See examples/internal/search/search.go for a complete example of implementing a DuckDuckGo search tool.

🧪 Testing

The library includes comprehensive test coverage using Ginkgo and Gomega. Tests use containerized LocalAI for integration testing.

Running Tests

# Run all tests
make test

# Run with specific log level
LOG_LEVEL=debug make test

# Run with custom arguments
GINKGO_ARGS="--focus=Fragment" make test

📄 License

© Ettore Di Giacinto, 2025-present. Cogito is released under the Apache 2.0 License.

📚 Citation

If you use Cogito in your research or academic work, please cite our paper:

@article{cogito2025,
  title={Cogito: A Framework for Building Intelligent Agentic Software with LLM-Powered Workflows},
  author={Ettore Di Giacinto <mudler@localai.io>},
  journal={https://github.com/mudler/cogito},
  year={2025},
  note={}
}

Documentation

Index

Constants

This section is empty.

Variables

var (
	ErrNoToolSelected error = errors.New("no tool selected by the LLM")
	ErrLoopDetected   error = errors.New("loop detected: same tool called repeatedly with same parameters")
)
var (
	ErrGoalNotAchieved error = errors.New("goal not achieved")
)

Functions

func ExtractBoolean

func ExtractBoolean(llm LLM, f Fragment, opts ...Option) (*structures.Boolean, error)

ExtractBoolean extracts a boolean from a conversation

func ExtractGoal

func ExtractGoal(llm LLM, f Fragment, opts ...Option) (*structures.Goal, error)

ExtractGoal extracts a goal from a conversation

func ExtractKnowledgeGaps

func ExtractKnowledgeGaps(llm LLM, f Fragment, opts ...Option) ([]string, error)

func ExtractPlan

func ExtractPlan(llm LLM, f Fragment, goal *structures.Goal, opts ...Option) (*structures.Plan, error)

ExtractPlan extracts a plan from a conversation. To override the prompt, define PromptPlanType, PromptReEvaluatePlanType and PromptSubtaskExtractionType

func IsGoalAchieved

func IsGoalAchieved(llm LLM, f Fragment, goal *structures.Goal, opts ...Option) (*structures.Boolean, error)

IsGoalAchieved checks if a goal has been achieved

func ReEvaluatePlan

func ReEvaluatePlan(llm LLM, f, subtaskFragment Fragment, goal *structures.Goal, toolStatuses []ToolStatus, subtask string, opts ...Option) (*structures.Plan, error)

ReEvaluatePlan re-evaluates a plan for a conversation after tool execution. To override the prompt, define PromptReEvaluatePlanType and PromptSubtaskExtractionType

func WithContext

func WithContext(ctx context.Context) func(o *Options)

WithContext sets the execution context for the agent

func WithFeedbackCallback

func WithFeedbackCallback(fn func() *Fragment) func(o *Options)

WithFeedbackCallback sets a callback to get continuous feedback during execution of plans

func WithForceReasoning added in v0.4.0

func WithForceReasoning() func(o *Options)

WithForceReasoning enables forcing the LLM to reason before selecting tools

func WithGaps

func WithGaps(gaps ...string) func(o *Options)

WithGaps adds knowledge gaps that the agent should address

func WithGuidelines

func WithGuidelines(guidelines ...Guideline) func(o *Options)

WithGuidelines adds behavioral guidelines for the agent to follow. Guidelines allow a more curated selection of tools: only the relevant ones are shown to the LLM during tool selection.

func WithIterations

func WithIterations(i int) func(o *Options)

WithIterations sets the number of refinement iterations

func WithLoopDetection added in v0.4.0

func WithLoopDetection(steps int) func(o *Options)

WithLoopDetection enables loop detection to prevent repeated tool calls. If the same tool with the same parameters is called more than 'steps' times, it will be detected

func WithMCPArgs added in v0.3.0

func WithMCPArgs(args map[string]string) func(o *Options)

WithMCPArgs sets the arguments for the MCP prompts

func WithMCPs added in v0.2.0

func WithMCPs(sessions ...*mcp.ClientSession) func(o *Options)

WithMCPs adds Model Context Protocol client sessions for external tool integration. When specified, the tools available in the MCPs will be available to the cogito pipelines

func WithMaxAttempts

func WithMaxAttempts(i int) func(o *Options)

WithMaxAttempts sets the maximum number of execution attempts

func WithMaxRetries added in v0.4.0

func WithMaxRetries(retries int) func(o *Options)

WithMaxRetries sets the maximum number of retries for LLM calls

func WithPrompt

func WithPrompt(t prompt.PromptType, p prompt.StaticPrompt) func(o *Options)

WithPrompt sets a custom prompt for a given PromptType

func WithReasoningCallback added in v0.4.1

func WithReasoningCallback(fn func(string)) func(o *Options)

WithReasoningCallback sets a callback function to receive reasoning updates during execution

func WithSinkState added in v0.6.0

func WithSinkState(tool ToolDefinitionInterface) func(o *Options)

func WithStatusCallback

func WithStatusCallback(fn func(string)) func(o *Options)

WithStatusCallback sets a callback function to receive status updates during execution

func WithToolCallBack

func WithToolCallBack(fn func(*ToolChoice) bool) func(o *Options)

WithToolCallBack sets a callback that decides whether a selected tool should run, e.g. by prompting the user

func WithToolCallResultCallback

func WithToolCallResultCallback(fn func(ToolStatus)) func(o *Options)

WithToolCallResultCallback runs the callback on every tool result

func WithTools

func WithTools(tools ...ToolDefinitionInterface) func(o *Options)

WithTools sets the tools available to the Agent. Pass *ToolDefinition[T] instances - they will automatically generate openai.Tool via their Tool() method. Example: WithTools(&ToolDefinition[SearchArgs]{...}, &ToolDefinition[WeatherArgs]{...})

Types

type Fragment

type Fragment struct {
	Messages       []openai.ChatCompletionMessage
	ParentFragment *Fragment
	Status         *Status
	Multimedia     []Multimedia
}

func ContentReview

func ContentReview(llm LLM, originalFragment Fragment, opts ...Option) (Fragment, error)

ContentReview refines an LLM response for a fixed number of iterations, or until the LLM no longer finds gaps

func ExecutePlan

func ExecutePlan(llm LLM, conv Fragment, plan *structures.Plan, goal *structures.Goal, opts ...Option) (Fragment, error)

ExecutePlan executes an already-defined plan with a set of options. To override its prompt, configure PromptPlanExecutionType, PromptPlanType, PromptReEvaluatePlanType and PromptSubtaskExtractionType

func ExecuteTools

func ExecuteTools(llm LLM, f Fragment, opts ...Option) (Fragment, error)

ExecuteTools runs a fragment through an LLM and executes tools. It returns a new fragment with the tool result appended. The result is guaranteed to be usable afterwards with llm.Ask() to explain the result to the user.

func NewEmptyFragment

func NewEmptyFragment() Fragment

func NewFragment

func NewFragment(messages ...openai.ChatCompletionMessage) Fragment

func ToolReasoner

func ToolReasoner(llm LLM, f Fragment, opts ...Option) (Fragment, error)

ToolReasoner forces the LLM to reason about available tools in a fragment

func (Fragment) AddLastMessage

func (f Fragment) AddLastMessage(f2 Fragment) Fragment

func (Fragment) AddMessage

func (r Fragment) AddMessage(role, content string, mm ...Multimedia) Fragment

func (Fragment) AddStartMessage

func (r Fragment) AddStartMessage(role, content string, mm ...Multimedia) Fragment

func (Fragment) AddToolMessage added in v0.5.1

func (r Fragment) AddToolMessage(content, toolCallID string) Fragment

AddToolMessage adds a tool result message with the specified tool_call_id

func (Fragment) AllFragmentsStrings

func (f Fragment) AllFragmentsStrings() string

AllFragmentsStrings walks through all parent fragments to retrieve the full conversation and represents it as a string. This is particularly useful when chaining different fragments while still feeding the whole conversation as context to the LLM.

func (Fragment) ExtractStructure

func (r Fragment) ExtractStructure(ctx context.Context, llm LLM, s structures.Structure) error

ExtractStructure extracts a structure from the result using the provided JSON schema definition and unmarshals it into the provided destination

func (Fragment) GetMessages added in v0.4.0

func (f Fragment) GetMessages() []openai.ChatCompletionMessage

GetMessages returns the chat completion messages from this fragment, automatically prepending a force-text-reply system message if tool calls are detected. This ensures LLMs provide natural language responses instead of JSON tool syntax when Ask() is called after ExecuteTools().

func (Fragment) LastAssistantAndToolMessages added in v0.3.0

func (f Fragment) LastAssistantAndToolMessages() []openai.ChatCompletionMessage

func (Fragment) LastMessage

func (f Fragment) LastMessage() *openai.ChatCompletionMessage

func (Fragment) SelectTool

func (f Fragment) SelectTool(ctx context.Context, llm LLM, availableTools Tools, forceTool string) (Fragment, *ToolChoice, error)

SelectTool allows the LLM to select a tool from the fragment of conversation

func (Fragment) String

func (f Fragment) String() string

type Guideline

type Guideline struct {
	Condition string
	Action    string
	Tools     Tools
}

type GuidelineMetadata

type GuidelineMetadata struct {
	Condition string
	Action    string
	Tools     []string
}

type GuidelineMetadataList

type GuidelineMetadataList []GuidelineMetadata

type Guidelines

type Guidelines []Guideline

func GetRelevantGuidelines

func GetRelevantGuidelines(llm LLM, guidelines Guidelines, fragment Fragment, opts ...Option) (Guidelines, error)

func (Guidelines) ToMetadata

func (g Guidelines) ToMetadata() GuidelineMetadataList

type IntentionResponse added in v0.4.0

type IntentionResponse struct {
	Tool      string `json:"tool"`
	Reasoning string `json:"reasoning"`
}

IntentionResponse is used to extract the tool choice from the intention tool

type LLM

type LLM interface {
	Ask(ctx context.Context, f Fragment) (Fragment, error)
	CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (openai.ChatCompletionResponse, error)
}

type Multimedia

type Multimedia interface {
	URL() string
}

TODO: Video, Audio, Image input

type OpenAIClient

type OpenAIClient struct {
	// contains filtered or unexported fields
}

func NewOpenAILLM

func NewOpenAILLM(model, apiKey, baseURL string) *OpenAIClient

func (*OpenAIClient) Ask

func (llm *OpenAIClient) Ask(ctx context.Context, f Fragment) (Fragment, error)

Ask prompts the LLM with the provided messages and returns a Fragment containing the response. The Fragment.GetMessages() method automatically handles force-text-reply when tool calls are present in the conversation history.

func (*OpenAIClient) CreateChatCompletion

func (llm *OpenAIClient) CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (openai.ChatCompletionResponse, error)

type Option

type Option func(*Options)
var (
	// EnableDeepContext enables full context to the LLM when chaining conversations
	// It might yield better results at the cost of bigger context use.
	EnableDeepContext Option = func(o *Options) {
		o.deepContext = true
	}

	// EnableToolReasoner enables the reasoning about the need to call other tools
	// before each tool call, preventing calling more tools than necessary.
	EnableToolReasoner Option = func(o *Options) {
		o.toolReasoner = true
	}

	// DisableToolReEvaluator disables the re-evaluation of the need to call other tools
	// after each tool call. It saves LLM calls, possibly at the cost of
	// worse results.
	DisableToolReEvaluator Option = func(o *Options) {
		o.toolReEvaluator = false
	}

	// DisableSinkState disables the use of a sink state
	// when the LLM decides that no tool is needed
	DisableSinkState Option = func(o *Options) {
		o.sinkState = false
	}

	// EnableInfiniteExecution enables infinite, long-term execution on Plans
	EnableInfiniteExecution Option = func(o *Options) {
		o.infiniteExecution = true
	}

	// EnableStrictGuidelines enforces cogito to pick tools only from the guidelines
	EnableStrictGuidelines Option = func(o *Options) {
		o.strictGuidelines = true
	}

	// EnableAutoPlan enables cogito to automatically use planning if needed
	EnableAutoPlan Option = func(o *Options) {
		o.autoPlan = true
	}

	// EnableAutoPlanReEvaluator enables cogito to automatically re-evaluate the need to use planning
	EnableAutoPlanReEvaluator Option = func(o *Options) {
		o.planReEvaluator = true
	}

	// EnableMCPPrompts enables the use of MCP prompts
	EnableMCPPrompts Option = func(o *Options) {
		o.mcpPrompts = true
	}
)

type Options

type Options struct {
	// contains filtered or unexported fields
}

Options contains all configuration options for the Cogito agent. It allows customization of behavior, tools, prompts, and execution parameters

func (*Options) Apply

func (o *Options) Apply(opts ...Option)

type PlanStatus added in v0.3.2

type PlanStatus struct {
	Plan  structures.Plan
	Tools []ToolStatus
}

type Status

type Status struct {
	Iterations   int
	ToolsCalled  Tools
	ToolResults  []ToolStatus
	Plans        []PlanStatus
	PastActions  []ToolStatus // Track past actions for loop detections
	ReasoningLog []string     // Track reasoning for each iteration
}

type Tool

type Tool[T any] interface {
	Run(args T) (string, error)
}

type ToolChoice

type ToolChoice struct {
	Name      string
	Arguments map[string]any
	ID        string
	Reasoning string
}

func ToolReEvaluator added in v0.4.0

func ToolReEvaluator(llm LLM, f Fragment, previousTool ToolStatus, tools Tools, guidelines Guidelines, opts ...Option) (*ToolChoice, string, error)

ToolReEvaluator evaluates the conversation after a tool execution and determines next steps. It calls pickAction/toolSelection with reEvaluationTemplate and the conversation that already has tool results

type ToolDefinition added in v0.5.0

type ToolDefinition[T any] struct {
	ToolRunner        Tool[T]
	InputArguments    any
	Name, Description string
}

func (*ToolDefinition[T]) Execute added in v0.5.0

func (t *ToolDefinition[T]) Execute(args map[string]any) (string, error)

Execute implements ToolDefinitionInterface.Execute by marshaling the arguments map to type T and calling ToolRunner.Run

func (ToolDefinition[T]) Tool added in v0.5.0

func (t ToolDefinition[T]) Tool() openai.Tool

type ToolDefinitionInterface added in v0.5.0

type ToolDefinitionInterface interface {
	Tool() openai.Tool
	// Execute runs the tool with the given arguments (as JSON map) and returns the result
	Execute(args map[string]any) (string, error)
}

func NewToolDefinition added in v0.5.0

func NewToolDefinition[T any](toolRunner Tool[T], inputArguments any, name, description string) ToolDefinitionInterface

type ToolStatus

type ToolStatus struct {
	Executed      bool
	ToolArguments ToolChoice
	Result        string
	Name          string
}

type Tools

func (Tools) Definitions

func (t Tools) Definitions() []*openai.FunctionDefinition

func (Tools) Find

func (t Tools) Find(name string) ToolDefinitionInterface

func (Tools) ToOpenAI

func (t Tools) ToOpenAI() []openai.Tool

Directories

Path Synopsis
examples
chat command
pkg
tests
