cogito

package module
v0.3.3
Published: Oct 21, 2025 License: Apache-2.0 Imports: 12 Imported by: 2

README


Cogito is a powerful Go library for building intelligent, co-operative agentic software and LLM-powered workflows. It focuses on improving results for small, open-source language models, and scales to any LLM.

🧪 Tested on Small Models! Our test suite runs on a 0.6B Qwen model (not fine-tuned), proving effectiveness even with minimal resources.

📝 Working on Official Paper
I am currently working on the official academic/white paper for Cogito. The paper will provide detailed theoretical foundations, experimental results, and comprehensive analysis of the framework's capabilities.

🏗️ Architecture

Cogito grew out of building LocalAI, LocalAGI, and LocalOperator (yet to be released).

Cogito uses an internal pipeline that first makes the LLM reason about a specific task, then uses BNF grammars to extract exact data structures from the model's output. This applies to every primitive exposed by the framework.
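
As a loose, stdlib-only illustration of the extraction stage (simplified: cogito constrains generation with BNF grammars, while this sketch only parses an already well-formed output; every name here is hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WeatherQuery is a hypothetical target structure the pipeline wants back.
type WeatherQuery struct {
	City string `json:"city"`
}

// extractStructure mimics the second pipeline stage: turning the model's
// (already grammar-constrained) output into an exact Go data structure.
func extractStructure(llmOutput string, dest any) error {
	return json.Unmarshal([]byte(llmOutput), dest)
}

func main() {
	// Pretend the LLM first reasoned about the task, then emitted
	// constrained output guaranteed to match the structure.
	out := `{"city": "San Francisco"}`

	var q WeatherQuery
	if err := extractStructure(out, &q); err != nil {
		panic(err)
	}
	fmt.Println(q.City) // Prints: San Francisco
}
```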

It provides a comprehensive framework for creating conversational AI systems with advanced reasoning, tool execution, goal-oriented planning, and iterative content refinement, as well as seamless integration with external tools, including via the Model Context Protocol (MCP).

🔧 Composable Primitives
Cogito primitives can be combined to form more complex pipelines, enabling sophisticated AI workflows.

🚀 Quick Start

Installation

go get github.com/mudler/cogito

Basic Usage

package main

import (
    "context"
    "fmt"
    "github.com/mudler/cogito"
)

func main() {
    // Create an LLM client
    llm := cogito.NewOpenAILLM("your-model", "api-key", "https://api.openai.com")
    
    // Create a conversation fragment
    fragment := cogito.NewEmptyFragment().
        AddMessage("user", "Tell me about artificial intelligence")
    
    // Get a response
    newFragment, err := llm.Ask(context.Background(), fragment)
    if err != nil {
        panic(err)
    }
    
    fmt.Println(newFragment.LastMessage().Content)
}

Using Tools

// Create a fragment with user input
fragment := cogito.NewFragment(openai.ChatCompletionMessage{
    Role:    "user",
    Content: "What's the weather in San Francisco?",
})

// Execute with tools
result, err := cogito.ExecuteTools(llm, fragment, 
    cogito.WithTools(&WeatherTool{}))
if err != nil {
    panic(err)
}

// result.Status.ToolsCalled will contain all the tools being called

Guidelines for Intelligent Tool Selection

Guidelines provide a powerful way to define conditional rules for tool usage. The LLM intelligently selects which guidelines are relevant based on the conversation context, enabling dynamic and context-aware tool selection.

// Define guidelines with conditions and associated tools
guidelines := cogito.Guidelines{
    cogito.Guideline{
        Condition: "User asks about information or facts",
        Action:    "Use the search tool to find information",
        Tools: cogito.Tools{
            &SearchTool{},
        },
    },
    cogito.Guideline{
        Condition: "User asks for the weather in a city",
        Action:    "Use the weather tool to find the weather",
        Tools: cogito.Tools{
            &WeatherTool{},
        },
    },
}

// Get relevant guidelines for the current conversation
fragment := cogito.NewEmptyFragment().
    AddMessage("user", "When was Isaac Asimov born?")

// Execute tools with guidelines
result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithGuidelines(guidelines),
    cogito.EnableStrictGuidelines) // Only use tools from relevant guidelines
if err != nil {
    panic(err)
}

Goal-Oriented Planning

// Extract a goal from conversation
goal, err := cogito.ExtractGoal(llm, fragment)
if err != nil {
    panic(err)
}

// Create a plan to achieve the goal
plan, err := cogito.ExtractPlan(llm, fragment, goal, 
    cogito.WithTools(&SearchTool{}))
if err != nil {
    panic(err)
}

// Execute the plan
result, err := cogito.ExecutePlan(llm, fragment, plan, goal,
    cogito.WithTools(&SearchTool{}))
if err != nil {
    panic(err)
}

Content Refinement

// Refine content through iterative improvement
refined, err := cogito.ContentReview(llm, fragment,
    cogito.WithIterations(3),
    cogito.WithTools(&SearchTool{}))
if err != nil {
    panic(err)
}

Iterative Content Improvement

An example of how to iteratively improve content using two separate models:

llm := cogito.NewOpenAILLM("your-model", "api-key", "https://api.openai.com")
reviewerLLM := cogito.NewOpenAILLM("your-reviewer-model", "api-key", "https://api.openai.com")

// Create content to review
initial := cogito.NewEmptyFragment().
    AddMessage("user", "Write about climate change")

ctx := context.Background()
response, _ := llm.Ask(ctx, initial)

// Iteratively improve with tool support
improvedResponse, _ := cogito.ContentReview(reviewerLLM, response,
    cogito.WithIterations(3),
    cogito.WithTools(&SearchTool{}, &FactCheckTool{}),
    cogito.EnableToolReasoner)

Model Context Protocol (MCP) Integration

Cogito supports the Model Context Protocol (MCP) for seamless integration with external tools and services. MCP allows you to connect to remote tool providers and use their capabilities directly within your Cogito workflows.

import (
    "context"
    "os/exec"

    "github.com/modelcontextprotocol/go-sdk/mcp"
)

// Create MCP client sessions
command := exec.Command("docker", "run", "-i", "--rm", "ghcr.io/mudler/mcps/weather:master")
transport := &mcp.CommandTransport{Command: command}

client := mcp.NewClient(&mcp.Implementation{Name: "test", Version: "v1.0.0"}, nil)
mcpSession, _ := client.Connect(context.Background(), transport, nil)

// Use MCP tools in your workflows
result, _ := cogito.ExecuteTools(llm, fragment,
    cogito.WithMCPs(mcpSession))

MCP with Guidelines

// Define guidelines that include MCP tools
guidelines := cogito.Guidelines{
    cogito.Guideline{
        Condition: "User asks about information or facts",
        Action:    "Use the MCP search tool to find information",
    },
}

// Execute with MCP tools and guidelines
result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithMCPs(searchSession),
    cogito.WithGuidelines(guidelines),
    cogito.EnableStrictGuidelines)

Custom Prompts

customPrompt := cogito.NewPrompt(`Your custom prompt template with {{.Context}}`)

result, err := cogito.ExecuteTools(llm, fragment,
    cogito.WithPrompt(cogito.ToolSelectorType, customPrompt))

🎮 Examples

Interactive Chat Bot

# Run the example chat application
make example-chat

This starts an interactive chat session with tool support including web search capabilities.

Custom Tool Implementation

See examples/internal/search/search.go for a complete example of implementing a DuckDuckGo search tool.

🧪 Testing

The library includes comprehensive test coverage using Ginkgo and Gomega. Tests use containerized LocalAI for integration testing.

Running Tests

# Run all tests
make test

# Run with specific log level
LOG_LEVEL=debug make test

# Run with custom arguments
GINKGO_ARGS="--focus=Fragment" make test

📄 License

© Ettore Di Giacinto, 2025-present. Cogito is released under the Apache 2.0 License.

📚 Citation

If you use Cogito in your research or academic work, please cite our paper:

@article{cogito2025,
  title={Cogito: A Framework for Building Intelligent Agentic Software with LLM-Powered Workflows},
  author={Ettore Di Giacinto <mudler@localai.io>},
  journal={https://github.com/mudler/cogito},
  year={2025},
  note={}
}

Documentation

Index

Constants

This section is empty.

Variables

var (
	ErrGoalNotAchieved error = errors.New("goal not achieved")
)

var (
	ErrNoToolSelected error = errors.New("no tool selected by the LLM")
)

Functions

func ExtractBoolean

func ExtractBoolean(llm LLM, f Fragment, opts ...Option) (*structures.Boolean, error)

ExtractBoolean extracts a boolean from a conversation

func ExtractGoal

func ExtractGoal(llm LLM, f Fragment, opts ...Option) (*structures.Goal, error)

ExtractGoal extracts a goal from a conversation

func ExtractKnowledgeGaps

func ExtractKnowledgeGaps(llm LLM, f Fragment, opts ...Option) ([]string, error)

func ExtractPlan

func ExtractPlan(llm LLM, f Fragment, goal *structures.Goal, opts ...Option) (*structures.Plan, error)

ExtractPlan extracts a plan from a conversation. To override the prompt, define a PromptPlanType, PromptReEvaluatePlanType, and PromptSubtaskExtractionType

func IsGoalAchieved

func IsGoalAchieved(llm LLM, f Fragment, goal *structures.Goal, opts ...Option) (*structures.Boolean, error)

IsGoalAchieved checks if a goal has been achieved

func ReEvaluatePlan

func ReEvaluatePlan(llm LLM, f, subtaskFragment Fragment, goal *structures.Goal, toolStatuses []ToolStatus, subtask string, opts ...Option) (*structures.Plan, error)

ReEvaluatePlan re-evaluates a plan from a conversation. To override the prompt, define a PromptReEvaluatePlanType and PromptSubtaskExtractionType

func WithContext

func WithContext(ctx context.Context) func(o *Options)

WithContext sets the execution context for the agent

func WithFeedbackCallback

func WithFeedbackCallback(fn func() *Fragment) func(o *Options)

WithFeedbackCallback sets a callback to get continuous feedback during execution of plans

func WithGaps

func WithGaps(gaps ...string) func(o *Options)

WithGaps adds knowledge gaps that the agent should address

func WithGuidelines

func WithGuidelines(guidelines ...Guideline) func(o *Options)

WithGuidelines adds behavioral guidelines for the agent to follow. Guidelines allow a more curated selection of the tools to use; only the relevant ones are shown to the LLM during tool selection.

func WithIterations

func WithIterations(i int) func(o *Options)

WithIterations sets the number of refinement iterations

func WithMCPArgs added in v0.3.0

func WithMCPArgs(args map[string]string) func(o *Options)

WithMCPArgs sets the arguments for the MCP prompts

func WithMCPs added in v0.2.0

func WithMCPs(sessions ...*mcp.ClientSession) func(o *Options)

WithMCPs adds Model Context Protocol client sessions for external tool integration. When specified, the tools exposed by the MCP sessions become available to the cogito pipelines

func WithMaxAttempts

func WithMaxAttempts(i int) func(o *Options)

WithMaxAttempts sets the maximum number of execution attempts

func WithPrompt

func WithPrompt(t prompt.PromptType, p prompt.StaticPrompt) func(o *Options)

WithPrompt sets a custom prompt for a given PromptType

func WithStatusCallback

func WithStatusCallback(fn func(string)) func(o *Options)

WithStatusCallback sets a callback function to receive status updates during execution

func WithToolCallBack

func WithToolCallBack(fn func(*ToolChoice) bool) func(o *Options)

WithToolCallBack sets a callback used to ask the user whether or not to run the selected tool

func WithToolCallResultCallback

func WithToolCallResultCallback(fn func(ToolStatus)) func(o *Options)

WithToolCallResultCallback runs the callback on every tool result

func WithTools

func WithTools(tools ...Tool) func(o *Options)

WithTools sets the tools available to the Agent

Types

type Fragment

type Fragment struct {
	Messages       []openai.ChatCompletionMessage
	ParentFragment *Fragment
	Status         Status
	Multimedia     []Multimedia
}

func ContentReview

func ContentReview(llm LLM, originalFragment Fragment, opts ...Option) (Fragment, error)

ContentReview refines an LLM response for a fixed number of iterations, or until the LLM no longer finds gaps

func ExecutePlan

func ExecutePlan(llm LLM, conv Fragment, plan *structures.Plan, goal *structures.Goal, opts ...Option) (Fragment, error)

ExecutePlan executes an already-defined plan with a set of options. To override its prompt, configure PromptPlanExecutionType, PromptPlanType, PromptReEvaluatePlanType, and PromptSubtaskExtractionType

func ExecuteTools

func ExecuteTools(llm LLM, f Fragment, opts ...Option) (Fragment, error)

ExecuteTools runs a fragment through an LLM and executes Tools. It returns a new fragment with the tool result appended. The result is guaranteed to be usable with a subsequent llm.Ask() call to explain the result to the user.

func NewEmptyFragment

func NewEmptyFragment() Fragment

func NewFragment

func NewFragment(messages ...openai.ChatCompletionMessage) Fragment

func ToolReasoner

func ToolReasoner(llm LLM, f Fragment, opts ...Option) (Fragment, error)

ToolReasoner forces the LLM to reason about available tools in a fragment

func (Fragment) AddLastMessage

func (f Fragment) AddLastMessage(f2 Fragment) Fragment

func (Fragment) AddMessage

func (r Fragment) AddMessage(role, content string, mm ...Multimedia) Fragment

func (Fragment) AddStartMessage

func (r Fragment) AddStartMessage(role, content string, mm ...Multimedia) Fragment

func (Fragment) AllFragmentsStrings

func (f Fragment) AllFragmentsStrings() string

AllFragmentsStrings walks through all the fragment's parents to retrieve all the conversations and represents them as a single string. This is particularly useful when chaining different fragments while still feeding the whole conversation as context to the LLM.
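
A stdlib-only sketch of the parent-walking idea, using simplified stand-in types rather than cogito's actual Fragment:

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified stand-ins: a fragment keeps its own messages plus a pointer
// to the fragment it was derived from.
type Message struct {
	Role, Content string
}

type Fragment struct {
	Messages []Message
	Parent   *Fragment
}

// allFragmentsString walks the parent chain from oldest to newest and
// flattens every conversation into a single string.
func allFragmentsString(f *Fragment) string {
	if f == nil {
		return ""
	}
	var sb strings.Builder
	sb.WriteString(allFragmentsString(f.Parent))
	for _, m := range f.Messages {
		fmt.Fprintf(&sb, "%s: %s\n", m.Role, m.Content)
	}
	return sb.String()
}

func main() {
	parent := &Fragment{Messages: []Message{{"user", "hello"}}}
	child := &Fragment{Messages: []Message{{"assistant", "hi"}}, Parent: parent}
	// Parent messages come first, then the child's.
	fmt.Print(allFragmentsString(child))
}
```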

func (Fragment) ExtractStructure

func (r Fragment) ExtractStructure(ctx context.Context, llm LLM, s structures.Structure) error

ExtractStructure extracts a structure from the result using the provided JSON schema definition and unmarshals it into the provided destination

func (Fragment) LastAssistantAndToolMessages added in v0.3.0

func (f Fragment) LastAssistantAndToolMessages() []openai.ChatCompletionMessage

func (Fragment) LastMessage

func (f Fragment) LastMessage() *openai.ChatCompletionMessage

func (Fragment) SelectTool

func (f Fragment) SelectTool(ctx context.Context, llm LLM, availableTools Tools, forceTool string) (Fragment, *ToolChoice, error)

SelectTool allows the LLM to select a tool from the fragment of conversation

func (Fragment) String

func (f Fragment) String() string

type Guideline

type Guideline struct {
	Condition string
	Action    string
	Tools     Tools
}

type GuidelineMetadata

type GuidelineMetadata struct {
	Condition string
	Action    string
	Tools     []string
}

type GuidelineMetadataList

type GuidelineMetadataList []GuidelineMetadata

type Guidelines

type Guidelines []Guideline

func GetRelevantGuidelines

func GetRelevantGuidelines(llm LLM, guidelines Guidelines, fragment Fragment, opts ...Option) (Guidelines, error)

func (Guidelines) ToMetadata

func (g Guidelines) ToMetadata() GuidelineMetadataList

type LLM

type LLM interface {
	Ask(ctx context.Context, f Fragment) (Fragment, error)
	CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (openai.ChatCompletionResponse, error)
}

type Multimedia

type Multimedia interface {
	URL() string
}

TODO: Video, Audio, Image input

type OpenAIClient

type OpenAIClient struct {
	// contains filtered or unexported fields
}

func NewOpenAILLM

func NewOpenAILLM(model, apiKey, baseURL string) *OpenAIClient

func (*OpenAIClient) Ask

func (llm *OpenAIClient) Ask(ctx context.Context, f Fragment) (Fragment, error)

Ask prompts the LLM with the provided messages and returns a Fragment containing the response

func (*OpenAIClient) CreateChatCompletion

func (llm *OpenAIClient) CreateChatCompletion(ctx context.Context, request openai.ChatCompletionRequest) (openai.ChatCompletionResponse, error)

type Option

type Option func(*Options)
var (
	// EnableDeepContext enables full context to the LLM when chaining conversations
	// It might yield better results at the cost of a larger context.
	EnableDeepContext Option = func(o *Options) {
		o.deepContext = true
	}

	// EnableToolReasoner enables the reasoning about the need to call other tools
	// before each tool call, preventing calling more tools than necessary.
	EnableToolReasoner Option = func(o *Options) {
		o.toolReasoner = true
	}

	// EnableToolReEvaluator enables the re-evaluation of the need to call other tools
	// after each tool call. It might yield better results at the cost of more
	// LLM calls.
	EnableToolReEvaluator Option = func(o *Options) {
		o.toolReEvaluator = true
	}

	// EnableInfiniteExecution enables infinite, long-term execution on Plans
	EnableInfiniteExecution Option = func(o *Options) {
		o.infiniteExecution = true
	}

	// EnableStrictGuidelines forces cogito to pick tools only from the guidelines
	EnableStrictGuidelines Option = func(o *Options) {
		o.strictGuidelines = true
	}

	// EnableAutoPlan enables cogito to automatically use planning if needed
	EnableAutoPlan Option = func(o *Options) {
		o.autoPlan = true
	}

	// EnableAutoPlanReEvaluator enables cogito to automatically re-evaluate the need to use planning
	EnableAutoPlanReEvaluator Option = func(o *Options) {
		o.planReEvaluator = true
	}

	// EnableMCPPrompts enables the use of MCP prompts
	EnableMCPPrompts Option = func(o *Options) {
		o.mcpPrompts = true
	}
)

type Options

type Options struct {
	// contains filtered or unexported fields
}

Options contains all configuration options for the Cogito agent. It allows customization of behavior, tools, prompts, and execution parameters.

func (*Options) Apply

func (o *Options) Apply(opts ...Option)

type PlanStatus added in v0.3.2

type PlanStatus struct {
	Plan  structures.Plan
	Tools []ToolStatus
}

type Status

type Status struct {
	Iterations  int
	ToolsCalled Tools
	ToolResults []ToolStatus
	Plans       []PlanStatus
}

type Tool

type Tool interface {
	Tool() openai.Tool
	Run(args map[string]any) (string, error)
}

type ToolChoice

type ToolChoice struct {
	Name      string
	Arguments map[string]any
}

type ToolStatus

type ToolStatus struct {
	Executed      bool
	ToolArguments ToolChoice
	Result        string
	Name          string
}

type Tools

type Tools []Tool

func (Tools) Definitions

func (t Tools) Definitions() []*openai.FunctionDefinition

func (Tools) Find

func (t Tools) Find(name string) Tool

func (Tools) ToOpenAI

func (t Tools) ToOpenAI() []openai.Tool

Directories

Path Synopsis
examples
chat command
pkg
tests
