Documentation ¶
Overview ¶
Package rlm provides a native Recursive Language Model implementation for dspy-go. RLM enables LLMs to explore large contexts programmatically through a Go REPL, making iterative queries to sub-LLMs until a final answer is reached.
Index ¶
- Variables
- func ChunkAnalysisSignature() core.Signature
- func ContextMetadata(payload any) string
- func FindCodeBlocks(text string) []string
- func FormatExecutionResult(result *ExecutionResult) string
- func IterationDemos() []core.Example
- func IterationSignature() core.Signature
- func RLMSignature() core.Signature
- func SubQueryDemos() []core.Example
- func SubQuerySignature() core.Signature
- func SynthesisSignature() core.Signature
- type CodeBlock
- type CompletionResult
- type Config
- func DefaultConfig() Config
- type ExecutionResult
- type FinalAnswer
- func FindFinalAnswer(text string) *FinalAnswer
- type FinalAnswerType
- type LLMCall
- type LLMSubClient
- func NewLLMSubClient(llm core.LLM) *LLMSubClient
- func (c *LLMSubClient) Query(ctx context.Context, prompt string) (QueryResponse, error)
- func (c *LLMSubClient) QueryBatched(ctx context.Context, prompts []string) ([]QueryResponse, error)
- type Option
- type QueryResponse
- type REPLEnvironment
- type RLM
- func (r *RLM) Clone() core.Module
- func (r *RLM) Complete(ctx context.Context, contextPayload any, query string) (*CompletionResult, error)
- func (r *RLM) GetTokenTracker() *TokenTracker
- func (r *RLM) Process(ctx context.Context, inputs map[string]any, opts ...core.Option) (map[string]any, error)
- func (r *RLM) ProcessWithInterceptors(ctx context.Context, inputs map[string]any, ...) (map[string]any, error)
- func (r *RLM) SetLLM(llm core.LLM)
- func (r *RLM) WithOptions(opts ...Option) *RLM
- type SubLLMClient
- type TokenTracker
- func NewTokenTracker() *TokenTracker
- func (t *TokenTracker) AddRootUsage(promptTokens, completionTokens int)
- func (t *TokenTracker) AddSubCall(call LLMCall)
- func (t *TokenTracker) AddSubCalls(calls []LLMCall)
- func (t *TokenTracker) ClearSubCalls()
- func (t *TokenTracker) GetRootUsage() core.TokenUsage
- func (t *TokenTracker) GetSubCalls() []LLMCall
- func (t *TokenTracker) GetSubUsage() core.TokenUsage
- func (t *TokenTracker) GetTotalUsage() core.TokenUsage
- func (t *TokenTracker) Reset()
- type YaegiREPL
- func NewYaegiREPL(client SubLLMClient) (*YaegiREPL, error)
- func (r *YaegiREPL) ClearLLMCalls()
- func (r *YaegiREPL) ContextInfo() string
- func (r *YaegiREPL) Execute(ctx context.Context, code string) (result *ExecutionResult, err error)
- func (r *YaegiREPL) GetLLMCalls() []LLMCall
- func (r *YaegiREPL) GetLocals() map[string]any
- func (r *YaegiREPL) GetVariable(name string) (string, error)
- func (r *YaegiREPL) LoadContext(payload any) error
- func (r *YaegiREPL) Reset() error
- func (r *YaegiREPL) SetContext(ctx context.Context)
Constants ¶
This section is empty.
Variables ¶
var (
	ErrMissingContext = errors.New("missing required input: context")
	ErrMissingQuery   = errors.New("missing or invalid required input: query")
)
Functions ¶
func ChunkAnalysisSignature ¶
func ChunkAnalysisSignature() core.Signature
ChunkAnalysisSignature returns the signature for analyzing individual chunks of large contexts.
func ContextMetadata ¶
func ContextMetadata(payload any) string
ContextMetadata returns a string describing the context.
func FindCodeBlocks ¶
func FindCodeBlocks(text string) []string
FindCodeBlocks extracts all ```go or ```repl code blocks from the LLM response. Returns an empty slice if no code blocks are found.
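The extraction described above can be sketched with a regular expression over fenced blocks. This is a standalone illustration of the idea, not the package's actual implementation; `findCodeBlocks` and `fenceRe` are hypothetical names:

```go
package main

import (
	"fmt"
	"regexp"
)

// fenceRe matches ```go or ```repl fenced blocks, capturing the body.
// (?s) lets . span newlines; .*? keeps the match non-greedy per block.
var fenceRe = regexp.MustCompile("(?s)```(?:go|repl)\\n(.*?)```")

// findCodeBlocks is an illustrative sketch of how fenced code blocks
// might be pulled out of an LLM response.
func findCodeBlocks(text string) []string {
	var blocks []string
	for _, m := range fenceRe.FindAllStringSubmatch(text, -1) {
		blocks = append(blocks, m[1])
	}
	return blocks
}

func main() {
	resp := "Let me check.\n```go\nx := len(context)\n```\nDone."
	fmt.Println(len(findCodeBlocks(resp))) // prints 1
}
```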
func FormatExecutionResult ¶
func FormatExecutionResult(result *ExecutionResult) string
FormatExecutionResult formats an execution result for display.
func IterationDemos ¶
func IterationDemos() []core.Example
IterationDemos provides few-shot examples for the iteration module.
func IterationSignature ¶
func IterationSignature() core.Signature
IterationSignature defines the signature for each RLM iteration. This powers the inner loop where the LLM decides what to do next.
func RLMSignature ¶
func RLMSignature() core.Signature
RLMSignature creates the main RLM module signature. This is the outer interface: it takes context + query and returns an answer.
func SubQueryDemos ¶
func SubQueryDemos() []core.Example
SubQueryDemos provides few-shot examples for sub-LLM queries.
func SubQuerySignature ¶
func SubQuerySignature() core.Signature
SubQuerySignature defines the signature for sub-LLM queries. This is used by Query() and QueryBatched() internally.
func SynthesisSignature ¶
func SynthesisSignature() core.Signature
SynthesisSignature returns the signature for combining results from multiple chunk analyses.
Types ¶
type CodeBlock ¶
type CodeBlock struct {
Code string
Result ExecutionResult
}
CodeBlock represents an extracted and executed code block.
type CompletionResult ¶
type CompletionResult struct {
Response string
Iterations int
Duration time.Duration
Usage core.TokenUsage
}
CompletionResult represents the final result of an RLM completion.
type Config ¶
type Config struct {
// MaxIterations is the maximum number of iteration loops (default: 30).
MaxIterations int
// Verbose enables verbose logging.
Verbose bool
// Timeout is the maximum duration for the entire RLM completion.
// Zero means no timeout (default).
Timeout time.Duration
// TraceDir is the directory for RLM trace logs (JSONL format compatible with rlm-viewer).
// Empty string disables tracing.
TraceDir string
}
Config holds RLM configuration.
func DefaultConfig ¶
func DefaultConfig() Config
DefaultConfig returns the default RLM configuration.
type ExecutionResult ¶
ExecutionResult represents the result of executing code in the REPL.
type FinalAnswer ¶
type FinalAnswer struct {
Type FinalAnswerType
Content string
}
FinalAnswer represents a detected FINAL or FINAL_VAR signal.
func FindFinalAnswer ¶
func FindFinalAnswer(text string) *FinalAnswer
FindFinalAnswer detects FINAL() or FINAL_VAR() signals in the LLM response. Returns nil if no final answer is found. Note: Code blocks are filtered out first to avoid false positives when FINAL appears in code examples.
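The detection logic just described, stripping code blocks first and then scanning for FINAL() or FINAL_VAR(), can be sketched as follows. This is an illustrative stand-in, not the package's implementation; `finalAnswer` and `findFinalAnswer` are local hypothetical names mirroring the documented types:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// finalAnswer mirrors the documented FinalAnswer shape.
type finalAnswer struct {
	Type    string // "FINAL" or "FINAL_VAR"
	Content string
}

// FINAL_VAR is listed first so the longer keyword wins the alternation.
var finalRe = regexp.MustCompile(`(FINAL_VAR|FINAL)\(([^)]*)\)`)

// findFinalAnswer strips fenced code blocks first, to avoid false
// positives when FINAL appears inside code examples, then scans the
// remaining text for a final-answer signal.
func findFinalAnswer(text string) *finalAnswer {
	stripped := regexp.MustCompile("(?s)```.*?```").ReplaceAllString(text, "")
	m := finalRe.FindStringSubmatch(stripped)
	if m == nil {
		return nil
	}
	return &finalAnswer{Type: m[1], Content: strings.TrimSpace(m[2])}
}

func main() {
	fa := findFinalAnswer("The count is clear. FINAL(42)")
	fmt.Println(fa.Type, fa.Content) // prints: FINAL 42
}
```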
type FinalAnswerType ¶
type FinalAnswerType string
FinalAnswerType indicates whether the answer is direct or a variable reference.
const (
	// FinalTypeDirect indicates a direct value like FINAL(42).
	FinalTypeDirect FinalAnswerType = "FINAL"
	// FinalTypeVariable indicates a variable reference like FINAL_VAR(answer).
	FinalTypeVariable FinalAnswerType = "FINAL_VAR"
)
type LLMCall ¶
type LLMCall struct {
Prompt string `json:"prompt"`
Response string `json:"response"`
Duration time.Duration `json:"duration"`
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
}
LLMCall represents a sub-LLM call made from within the REPL.
type LLMSubClient ¶
type LLMSubClient struct {
// contains filtered or unexported fields
}
LLMSubClient adapts a core.LLM to the SubLLMClient interface. This allows any dspy-go LLM to be used for sub-queries in RLM.
func NewLLMSubClient ¶
func NewLLMSubClient(llm core.LLM) *LLMSubClient
NewLLMSubClient creates a SubLLMClient from a core.LLM.
func (*LLMSubClient) Query ¶
func (c *LLMSubClient) Query(ctx context.Context, prompt string) (QueryResponse, error)
Query implements SubLLMClient.
func (*LLMSubClient) QueryBatched ¶
func (c *LLMSubClient) QueryBatched(ctx context.Context, prompts []string) ([]QueryResponse, error)
QueryBatched implements SubLLMClient with concurrent queries.
type Option ¶
type Option func(*Config)
Option configures the RLM.
func WithMaxIterations ¶
WithMaxIterations sets the maximum number of iterations. Values <= 0 are ignored and the default is used.
func WithTimeout ¶
WithTimeout sets the maximum duration for the completion.
func WithTraceDir ¶
WithTraceDir enables JSONL tracing to the specified directory. The trace files are compatible with rlm-go's rlm-viewer command.
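The Option type and the With* constructors above follow the standard Go functional-options pattern. A minimal self-contained sketch, with lowercase stand-ins for the documented names and assumed option bodies (the real implementations may differ):

```go
package main

import (
	"fmt"
	"time"
)

// config mirrors the documented Config fields.
type config struct {
	MaxIterations int
	Verbose       bool
	Timeout       time.Duration
	TraceDir      string
}

// option mutates a config, as in the documented Option type.
type option func(*config)

// withMaxIterations ignores values <= 0, matching the documented behavior.
func withMaxIterations(n int) option {
	return func(c *config) {
		if n > 0 {
			c.MaxIterations = n
		}
	}
}

func withTimeout(d time.Duration) option {
	return func(c *config) { c.Timeout = d }
}

// defaultConfig assumes the documented default of 30 iterations.
func defaultConfig() config { return config{MaxIterations: 30} }

func newConfig(opts ...option) config {
	c := defaultConfig()
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	// An invalid max-iterations value falls back to the default.
	c := newConfig(withMaxIterations(0), withTimeout(30*time.Second))
	fmt.Println(c.MaxIterations, c.Timeout) // prints: 30 30s
}
```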
type QueryResponse ¶
QueryResponse contains the LLM response with usage metadata.
type REPLEnvironment ¶
type REPLEnvironment interface {
// LoadContext loads the context payload into the REPL environment.
LoadContext(payload any) error
// Execute runs Go code in the interpreter and returns the result.
Execute(ctx context.Context, code string) (*ExecutionResult, error)
// GetVariable retrieves a variable value from the interpreter.
GetVariable(name string) (string, error)
// Reset clears the interpreter state.
Reset() error
// ContextInfo returns metadata about the loaded context.
ContextInfo() string
// GetLocals extracts commonly used variables from the interpreter.
GetLocals() map[string]any
}
REPLEnvironment defines the interface for a REPL that can execute code and make LLM queries.
type RLM ¶
type RLM struct {
core.BaseModule
// contains filtered or unexported fields
}
RLM is the main Recursive Language Model module implementation. It enables LLMs to explore large contexts programmatically through a Go REPL, making iterative queries to sub-LLMs until a final answer is reached.
func New ¶
func New(rootLLM core.LLM, subLLMClient SubLLMClient, opts ...Option) *RLM
New creates a new RLM module instance with separate LLMs. rootLLM is used for the main orchestration loop; subLLMClient is used for Query/QueryBatched calls from within the REPL. For most cases, use NewFromLLM instead, which uses the same LLM for both.
func NewFromLLM ¶
NewFromLLM creates a new RLM module using a single core.LLM for both root orchestration and sub-queries. This is the recommended constructor for most use cases.
func (*RLM) Complete ¶
func (r *RLM) Complete(ctx context.Context, contextPayload any, query string) (*CompletionResult, error)
Complete runs an RLM completion. contextPayload is the context data (string, map, or slice). query is the user's question.
func (*RLM) GetTokenTracker ¶
func (r *RLM) GetTokenTracker() *TokenTracker
GetTokenTracker returns the token tracker for inspecting usage.
func (*RLM) Process ¶
func (r *RLM) Process(ctx context.Context, inputs map[string]any, opts ...core.Option) (map[string]any, error)
Process implements the core.Module interface. It takes inputs with "context" and "query" fields and returns the answer.
func (*RLM) ProcessWithInterceptors ¶
func (r *RLM) ProcessWithInterceptors(ctx context.Context, inputs map[string]any, interceptors []core.ModuleInterceptor, opts ...core.Option) (map[string]any, error)
ProcessWithInterceptors executes the RLM module's logic with interceptor support.
func (*RLM) WithOptions ¶
func (r *RLM) WithOptions(opts ...Option) *RLM
WithOptions applies additional options to the RLM module.
type SubLLMClient ¶
type SubLLMClient interface {
// Query makes a single LLM query.
Query(ctx context.Context, prompt string) (QueryResponse, error)
// QueryBatched makes concurrent LLM queries.
QueryBatched(ctx context.Context, prompts []string) ([]QueryResponse, error)
}
SubLLMClient defines the interface for making LLM calls from within the REPL.
type TokenTracker ¶
type TokenTracker struct {
// contains filtered or unexported fields
}
TokenTracker aggregates token usage across root LLM and sub-LLM calls.
func NewTokenTracker ¶
func NewTokenTracker() *TokenTracker
NewTokenTracker creates a new token tracker.
func (*TokenTracker) AddRootUsage ¶
func (t *TokenTracker) AddRootUsage(promptTokens, completionTokens int)
AddRootUsage adds token usage from a root LLM call.
func (*TokenTracker) AddSubCall ¶
func (t *TokenTracker) AddSubCall(call LLMCall)
AddSubCall adds a sub-LLM call with its token usage.
func (*TokenTracker) AddSubCalls ¶
func (t *TokenTracker) AddSubCalls(calls []LLMCall)
AddSubCalls adds multiple sub-LLM calls.
func (*TokenTracker) ClearSubCalls ¶
func (t *TokenTracker) ClearSubCalls()
ClearSubCalls clears the recorded sub-LLM calls but preserves the counts.
func (*TokenTracker) GetRootUsage ¶
func (t *TokenTracker) GetRootUsage() core.TokenUsage
GetRootUsage returns token usage from root LLM calls only.
func (*TokenTracker) GetSubCalls ¶
func (t *TokenTracker) GetSubCalls() []LLMCall
GetSubCalls returns a copy of all sub-LLM calls.
func (*TokenTracker) GetSubUsage ¶
func (t *TokenTracker) GetSubUsage() core.TokenUsage
GetSubUsage returns token usage from sub-LLM calls only.
func (*TokenTracker) GetTotalUsage ¶
func (t *TokenTracker) GetTotalUsage() core.TokenUsage
GetTotalUsage returns the total aggregated token usage.
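The root/sub/total split above suggests a simple aggregation scheme. The tracker below is a hedged sketch of how such bookkeeping might work, with a simplified tokenUsage standing in for core.TokenUsage; the real TokenTracker's internals are unexported and may differ:

```go
package main

import (
	"fmt"
	"sync"
)

// tokenUsage is a simplified stand-in for core.TokenUsage.
type tokenUsage struct{ PromptTokens, CompletionTokens int }

// tracker keeps separate root and sub tallies behind a mutex, since
// root-loop and REPL sub-calls may report usage concurrently.
type tracker struct {
	mu   sync.Mutex
	root tokenUsage
	sub  tokenUsage
}

func (t *tracker) AddRootUsage(p, c int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.root.PromptTokens += p
	t.root.CompletionTokens += c
}

func (t *tracker) AddSubCall(p, c int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.sub.PromptTokens += p
	t.sub.CompletionTokens += c
}

// GetTotalUsage sums the root and sub tallies.
func (t *tracker) GetTotalUsage() tokenUsage {
	t.mu.Lock()
	defer t.mu.Unlock()
	return tokenUsage{
		PromptTokens:     t.root.PromptTokens + t.sub.PromptTokens,
		CompletionTokens: t.root.CompletionTokens + t.sub.CompletionTokens,
	}
}

func main() {
	var t tracker
	t.AddRootUsage(100, 20)
	t.AddSubCall(40, 10)
	fmt.Println(t.GetTotalUsage()) // prints: {140 30}
}
```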
type YaegiREPL ¶
type YaegiREPL struct {
// contains filtered or unexported fields
}
YaegiREPL is a Yaegi-based Go interpreter with RLM capabilities.
SECURITY NOTE: The interpreter is sandboxed by restricting imports to a safe subset of the standard library (no os, net, syscall, etc.). However, it does NOT protect against resource exhaustion: LLM-generated code could allocate large amounts of memory or spin in loops until the execution timeout fires. If running untrusted code in production, consider additional OS-level resource limits (e.g., cgroups, containers) or running the interpreter in a separate process with strict memory limits.
func NewYaegiREPL ¶
func NewYaegiREPL(client SubLLMClient) (*YaegiREPL, error)
NewYaegiREPL creates a new YaegiREPL instance. Returns an error if initialization fails (e.g., stdlib loading or builtin injection).
func (*YaegiREPL) ClearLLMCalls ¶
func (r *YaegiREPL) ClearLLMCalls()
ClearLLMCalls clears the recorded LLM calls.
func (*YaegiREPL) ContextInfo ¶
func (r *YaegiREPL) ContextInfo() string
ContextInfo returns metadata about the loaded context.
func (*YaegiREPL) Execute ¶
func (r *YaegiREPL) Execute(ctx context.Context, code string) (result *ExecutionResult, err error)
Execute runs Go code in the interpreter. Execution errors are captured in stderr rather than returned, allowing the caller to inspect all output. The mutex is held for the entire duration because the yaegi interpreter is not safe for concurrent use. Panics from the interpreter are recovered and reported as stderr errors.
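The recover-to-stderr behavior described above can be sketched with a deferred recover and named return values. This is an illustration of the pattern only; `runRecovered` is a hypothetical helper, not part of the package:

```go
package main

import "fmt"

// runRecovered executes f, converting any panic into stderr text instead
// of crashing the caller, mirroring the documented Execute behavior.
func runRecovered(f func() string) (stdout, stderr string) {
	defer func() {
		if r := recover(); r != nil {
			// The named return lets the deferred func report the panic.
			stderr = fmt.Sprintf("panic during execution: %v", r)
		}
	}()
	return f(), ""
}

func main() {
	_, errOut := runRecovered(func() string { panic("interpreter blew up") })
	fmt.Println(errOut) // prints: panic during execution: interpreter blew up
}
```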
func (*YaegiREPL) GetLLMCalls ¶
func (r *YaegiREPL) GetLLMCalls() []LLMCall
GetLLMCalls returns and clears the recorded LLM calls. Returns a copy of the calls slice to prevent external modification.
func (*YaegiREPL) GetVariable ¶
func (r *YaegiREPL) GetVariable(name string) (string, error)
GetVariable retrieves a variable value from the interpreter.
func (*YaegiREPL) LoadContext ¶
func (r *YaegiREPL) LoadContext(payload any) error
LoadContext injects the context payload into the interpreter as the `context` variable.
func (*YaegiREPL) SetContext ¶
func (r *YaegiREPL) SetContext(ctx context.Context)
SetContext sets the execution context for LLM calls.