rlm

package
v0.75.0
Published: Jan 6, 2026 License: MIT Imports: 17 Imported by: 0

Documentation

Overview

Package rlm provides a native Recursive Language Model implementation for dspy-go. RLM enables LLMs to explore large contexts programmatically through a Go REPL, making iterative queries to sub-LLMs until a final answer is reached.

Index

Constants

This section is empty.

Variables

var (
	ErrMissingContext = errors.New("missing required input: context")
	ErrMissingQuery   = errors.New("missing or invalid required input: query")
)

Functions

func ChunkAnalysisSignature

func ChunkAnalysisSignature() core.Signature

ChunkAnalysisSignature returns the signature for analyzing individual chunks of large contexts.

func ContextMetadata

func ContextMetadata(payload any) string

ContextMetadata returns a string describing the context.

func FindCodeBlocks

func FindCodeBlocks(text string) []string

FindCodeBlocks extracts all ```go or ```repl code blocks from the LLM response. Returns an empty slice if no code blocks are found.

func FormatExecutionResult

func FormatExecutionResult(result *ExecutionResult) string

FormatExecutionResult formats an execution result for display.

func IterationDemos

func IterationDemos() []core.Example

IterationDemos provides few-shot examples for the iteration module.

func IterationSignature

func IterationSignature() core.Signature

IterationSignature defines the signature for each RLM iteration. This powers the inner loop where the LLM decides what to do next.

func RLMSignature

func RLMSignature() core.Signature

RLMSignature creates the main RLM module signature. This is the outer interface: takes context + query, returns answer.

func SubQueryDemos

func SubQueryDemos() []core.Example

SubQueryDemos provides few-shot examples for sub-LLM queries.

func SubQuerySignature

func SubQuerySignature() core.Signature

SubQuerySignature defines the signature for sub-LLM queries. This is used by Query() and QueryBatched() internally.

func SynthesisSignature

func SynthesisSignature() core.Signature

SynthesisSignature returns the signature for combining results from multiple chunk analyses.

Types

type CodeBlock

type CodeBlock struct {
	Code   string
	Result ExecutionResult
}

CodeBlock represents an extracted and executed code block.

type CompletionResult

type CompletionResult struct {
	Response   string
	Iterations int
	Duration   time.Duration
	Usage      core.TokenUsage
}

CompletionResult represents the final result of an RLM completion.

type Config

type Config struct {
	// MaxIterations is the maximum number of iteration loops (default: 30).
	MaxIterations int

	// Verbose enables verbose logging.
	Verbose bool

	// Timeout is the maximum duration for the entire RLM completion.
	// Zero means no timeout (default).
	Timeout time.Duration

	// TraceDir is the directory for RLM trace logs (JSONL format compatible with rlm-viewer).
	// Empty string disables tracing.
	TraceDir string
}

Config holds RLM configuration.

func DefaultConfig

func DefaultConfig() Config

DefaultConfig returns the default RLM configuration.

type ExecutionResult

type ExecutionResult struct {
	Stdout   string
	Stderr   string
	Duration time.Duration
}

ExecutionResult represents the result of executing code in the REPL.

type FinalAnswer

type FinalAnswer struct {
	Type    FinalAnswerType
	Content string
}

FinalAnswer represents a detected FINAL or FINAL_VAR signal.

func FindFinalAnswer

func FindFinalAnswer(text string) *FinalAnswer

FindFinalAnswer detects FINAL() or FINAL_VAR() signals in the LLM response. Returns nil if no final answer is found. Note: Code blocks are filtered out first to avoid false positives when FINAL appears in code examples.
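The filter-then-match behavior can be sketched as follows. This is a hypothetical mirror of the documented logic, not the package's code: code fences are stripped first so a `FINAL(...)` inside an example block is not mistaken for the real signal. The regexes and `findFinalAnswer` name are assumptions.

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	fenceRe    = regexp.MustCompile("(?s)```.*?```") // fenced code blocks
	finalVarRe = regexp.MustCompile(`FINAL_VAR\((\w+)\)`)
	finalRe    = regexp.MustCompile(`FINAL\((.*?)\)`)
)

// findFinalAnswer strips code fences, then looks for FINAL_VAR(name)
// or FINAL(value). Returns ok=false when neither signal is present.
func findFinalAnswer(text string) (kind, content string, ok bool) {
	stripped := fenceRe.ReplaceAllString(text, "")
	if m := finalVarRe.FindStringSubmatch(stripped); m != nil {
		return "FINAL_VAR", m[1], true
	}
	if m := finalRe.FindStringSubmatch(stripped); m != nil {
		return "FINAL", m[1], true
	}
	return "", "", false
}

func main() {
	// The FINAL inside the fence is ignored; only the trailing signal counts.
	kind, content, _ := findFinalAnswer("```go\nFINAL(ignored)\n```\nFINAL_VAR(answer)")
	fmt.Println(kind, content)
}
```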

type FinalAnswerType

type FinalAnswerType string

FinalAnswerType indicates whether the answer is direct or a variable reference.

const (
	// FinalTypeDirect indicates a direct value like FINAL(42).
	FinalTypeDirect FinalAnswerType = "FINAL"
	// FinalTypeVariable indicates a variable reference like FINAL_VAR(answer).
	FinalTypeVariable FinalAnswerType = "FINAL_VAR"
)

type LLMCall

type LLMCall struct {
	Prompt           string        `json:"prompt"`
	Response         string        `json:"response"`
	Duration         time.Duration `json:"duration"`
	PromptTokens     int           `json:"prompt_tokens"`
	CompletionTokens int           `json:"completion_tokens"`
}

LLMCall represents a sub-LLM call made from within the REPL.

type LLMSubClient

type LLMSubClient struct {
	// contains filtered or unexported fields
}

LLMSubClient adapts a core.LLM to the SubLLMClient interface. This allows any dspy-go LLM to be used for sub-queries in RLM.

func NewLLMSubClient

func NewLLMSubClient(llm core.LLM) *LLMSubClient

NewLLMSubClient creates a SubLLMClient from a core.LLM.

func (*LLMSubClient) Query

func (c *LLMSubClient) Query(ctx context.Context, prompt string) (QueryResponse, error)

Query implements SubLLMClient.

func (*LLMSubClient) QueryBatched

func (c *LLMSubClient) QueryBatched(ctx context.Context, prompts []string) ([]QueryResponse, error)

QueryBatched implements SubLLMClient with concurrent queries.

type Option

type Option func(*Config)

Option configures the RLM.

func WithMaxIterations

func WithMaxIterations(n int) Option

WithMaxIterations sets the maximum number of iterations. Values <= 0 are ignored and the default is used.

func WithTimeout

func WithTimeout(d time.Duration) Option

WithTimeout sets the maximum duration for the completion.

func WithTraceDir

func WithTraceDir(dir string) Option

WithTraceDir enables JSONL tracing to the specified directory. The trace files are compatible with rlm-go's rlm-viewer command.

func WithVerbose

func WithVerbose(v bool) Option

WithVerbose enables verbose logging.

type QueryResponse

type QueryResponse struct {
	Response         string
	PromptTokens     int
	CompletionTokens int
}

QueryResponse contains the LLM response with usage metadata.

type REPLEnvironment

type REPLEnvironment interface {
	// LoadContext loads the context payload into the REPL environment.
	LoadContext(payload any) error

	// Execute runs Go code in the interpreter and returns the result.
	Execute(ctx context.Context, code string) (*ExecutionResult, error)

	// GetVariable retrieves a variable value from the interpreter.
	GetVariable(name string) (string, error)

	// Reset clears the interpreter state.
	Reset() error

	// ContextInfo returns metadata about the loaded context.
	ContextInfo() string

	// GetLocals extracts commonly used variables from the interpreter.
	GetLocals() map[string]any
}

REPLEnvironment defines the interface for a REPL that can execute code and make LLM queries.

type RLM

type RLM struct {
	core.BaseModule
	// contains filtered or unexported fields
}

RLM is the main Recursive Language Model module implementation. It enables LLMs to explore large contexts programmatically through a Go REPL, making iterative queries to sub-LLMs until a final answer is reached.

func New

func New(rootLLM core.LLM, subLLMClient SubLLMClient, opts ...Option) *RLM

New creates a new RLM module instance with separate LLMs. rootLLM is used for the main orchestration loop. subLLMClient is used for Query/QueryBatched calls from within the REPL. For most cases, use NewFromLLM instead which uses the same LLM for both.

func NewFromLLM

func NewFromLLM(llm core.LLM, opts ...Option) *RLM

NewFromLLM creates a new RLM module using a single core.LLM for both root orchestration and sub-queries. This is the recommended constructor for most use cases.

func (*RLM) Clone

func (r *RLM) Clone() core.Module

Clone creates a copy of the RLM module.

func (*RLM) Complete

func (r *RLM) Complete(ctx context.Context, contextPayload any, query string) (*CompletionResult, error)

Complete runs an RLM completion. contextPayload is the context data (string, map, or slice). query is the user's question.

func (*RLM) GetTokenTracker

func (r *RLM) GetTokenTracker() *TokenTracker

GetTokenTracker returns the token tracker for inspecting usage.

func (*RLM) Process

func (r *RLM) Process(ctx context.Context, inputs map[string]any, opts ...core.Option) (map[string]any, error)

Process implements the core.Module interface. It takes inputs with "context" and "query" fields and returns the answer.

func (*RLM) ProcessWithInterceptors

func (r *RLM) ProcessWithInterceptors(ctx context.Context, inputs map[string]any, interceptors []core.ModuleInterceptor, opts ...core.Option) (map[string]any, error)

ProcessWithInterceptors executes the RLM module's logic with interceptor support.

func (*RLM) SetLLM

func (r *RLM) SetLLM(llm core.LLM)

SetLLM sets the root LLM for orchestration and updates the iteration module.

func (*RLM) WithOptions

func (r *RLM) WithOptions(opts ...Option) *RLM

WithOptions applies additional options to the RLM module.

type SubLLMClient

type SubLLMClient interface {
	// Query makes a single LLM query.
	Query(ctx context.Context, prompt string) (QueryResponse, error)
	// QueryBatched makes concurrent LLM queries.
	QueryBatched(ctx context.Context, prompts []string) ([]QueryResponse, error)
}

SubLLMClient defines the interface for making LLM calls from within the REPL.

type TokenTracker

type TokenTracker struct {
	// contains filtered or unexported fields
}

TokenTracker aggregates token usage across root LLM and sub-LLM calls.

func NewTokenTracker

func NewTokenTracker() *TokenTracker

NewTokenTracker creates a new token tracker.

func (*TokenTracker) AddRootUsage

func (t *TokenTracker) AddRootUsage(promptTokens, completionTokens int)

AddRootUsage adds token usage from a root LLM call.

func (*TokenTracker) AddSubCall

func (t *TokenTracker) AddSubCall(call LLMCall)

AddSubCall adds a sub-LLM call with its token usage.

func (*TokenTracker) AddSubCalls

func (t *TokenTracker) AddSubCalls(calls []LLMCall)

AddSubCalls adds multiple sub-LLM calls.

func (*TokenTracker) ClearSubCalls

func (t *TokenTracker) ClearSubCalls()

ClearSubCalls clears the recorded sub-LLM calls but preserves the counts.

func (*TokenTracker) GetRootUsage

func (t *TokenTracker) GetRootUsage() core.TokenUsage

GetRootUsage returns token usage from root LLM calls only.

func (*TokenTracker) GetSubCalls

func (t *TokenTracker) GetSubCalls() []LLMCall

GetSubCalls returns a copy of all sub-LLM calls.

func (*TokenTracker) GetSubUsage

func (t *TokenTracker) GetSubUsage() core.TokenUsage

GetSubUsage returns token usage from sub-LLM calls only.

func (*TokenTracker) GetTotalUsage

func (t *TokenTracker) GetTotalUsage() core.TokenUsage

GetTotalUsage returns the total aggregated token usage.

func (*TokenTracker) Reset

func (t *TokenTracker) Reset()

Reset clears all tracked usage.

type YaegiREPL

type YaegiREPL struct {
	// contains filtered or unexported fields
}

YaegiREPL is a Yaegi-based Go interpreter with RLM capabilities.

SECURITY NOTE: The interpreter is sandboxed by restricting imports to a safe subset of the standard library (no os, net, syscall, etc.). However, it does NOT protect against resource exhaustion attacks. LLM-generated code could potentially allocate large amounts of memory or create infinite loops that exceed the execution timeout. If running untrusted code in production, consider additional OS-level resource limits (e.g., cgroups, containers) or running the interpreter in a separate process with strict memory limits.

func NewYaegiREPL

func NewYaegiREPL(client SubLLMClient) (*YaegiREPL, error)

NewYaegiREPL creates a new YaegiREPL instance. Returns an error if initialization fails (e.g., stdlib loading or builtin injection).

func (*YaegiREPL) ClearLLMCalls

func (r *YaegiREPL) ClearLLMCalls()

ClearLLMCalls clears the recorded LLM calls.

func (*YaegiREPL) ContextInfo

func (r *YaegiREPL) ContextInfo() string

ContextInfo returns metadata about the loaded context.

func (*YaegiREPL) Execute

func (r *YaegiREPL) Execute(ctx context.Context, code string) (result *ExecutionResult, err error)

Execute runs Go code in the interpreter. Execution errors are captured in stderr rather than returned, allowing the caller to inspect all output. The mutex is held for the entire duration to ensure thread safety, as the yaegi interpreter is not safe for concurrent use. Panics from the Yaegi interpreter are recovered and reported as stderr errors.
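The capture-everything error discipline described above, errors and recovered panics land in stderr rather than being returned, can be sketched like this. The `run` helper and `executionResult` struct are illustrative stand-ins, not the package's code.

```go
package main

import "fmt"

// executionResult mirrors the documented ExecutionResult shape.
type executionResult struct {
	Stdout string
	Stderr string
}

// run wraps an interpreter call so that both returned errors and
// recovered panics end up in Stderr, never as a Go error or crash.
// The named return lets the deferred recover mutate the result.
func run(eval func() (string, error)) (res *executionResult) {
	res = &executionResult{}
	defer func() {
		if r := recover(); r != nil {
			res.Stderr = fmt.Sprintf("panic: %v", r)
		}
	}()
	out, err := eval()
	if err != nil {
		res.Stderr = err.Error()
		return res
	}
	res.Stdout = out
	return res
}

func main() {
	ok := run(func() (string, error) { return "hello", nil })
	bad := run(func() (string, error) { panic("interpreter blew up") })
	fmt.Println(ok.Stdout, "/", bad.Stderr)
}
```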

func (*YaegiREPL) GetLLMCalls

func (r *YaegiREPL) GetLLMCalls() []LLMCall

GetLLMCalls returns and clears the recorded LLM calls. Returns a copy of the calls slice to prevent external modification.

func (*YaegiREPL) GetLocals

func (r *YaegiREPL) GetLocals() map[string]any

GetLocals extracts commonly used variables from the interpreter.

func (*YaegiREPL) GetVariable

func (r *YaegiREPL) GetVariable(name string) (string, error)

GetVariable retrieves a variable value from the interpreter.

func (*YaegiREPL) LoadContext

func (r *YaegiREPL) LoadContext(payload any) error

LoadContext injects the context payload into the interpreter as the `context` variable.

func (*YaegiREPL) Reset

func (r *YaegiREPL) Reset() error

Reset clears the interpreter state and creates a fresh instance.

func (*YaegiREPL) SetContext

func (r *YaegiREPL) SetContext(ctx context.Context)

SetContext sets the execution context for LLM calls.
