modules

package

Documentation

Overview

Package modules provides the Parallel execution wrapper for DSPy-Go.

The Parallel module enables concurrent execution of any DSPy module across multiple inputs, providing significant performance improvements for batch processing.

Example usage:

predict := modules.NewPredict(signature)
parallel := modules.NewParallel(predict,
	modules.WithMaxWorkers(4),
	modules.WithReturnFailures(true))

batchInputs := []map[string]interface{}{
	{"input": "first example"},
	{"input": "second example"},
	{"input": "third example"},
}

result, err := parallel.Process(ctx, map[string]interface{}{
	"batch_inputs": batchInputs,
})
if err != nil {
	// handle error
}

results := result["results"].([]map[string]interface{})

The parallel module automatically manages worker pools, error handling, and result collection while maintaining the order of inputs in outputs.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ProcessTyped

func ProcessTyped[TInput, TOutput any](ctx context.Context, predict *Predict, inputs TInput, opts ...core.Option) (TOutput, error)

ProcessTyped provides type-safe processing with compile-time type validation.
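
A minimal sketch of calling ProcessTyped, assuming hypothetical QAInput and QAOutput struct types whose fields map onto the signature's input and output fields (the mapping convention is an assumption, not documented here):

type QAInput struct {
	Question string
}

type QAOutput struct {
	Answer string
}

predict := modules.NewPredict(signature) // signature constructed elsewhere
answer, err := modules.ProcessTyped[QAInput, QAOutput](ctx, predict,
	QAInput{Question: "What does the Parallel module do?"})
if err != nil {
	// handle error
}
fmt.Println(answer.Answer)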

func ProcessTypedDirect added in v0.0.13

func ProcessTypedDirect[TInput, TOutput any](ctx context.Context, predict *Predict, inputs TInput, opts ...core.Option) (TOutput, error)

func ProcessTypedWithValidation

func ProcessTypedWithValidation[TInput, TOutput any](ctx context.Context, predict *Predict, inputs TInput, opts ...core.Option) (TOutput, error)

ProcessTypedWithValidation provides type-safe processing with input and output validation.

Types

type ChainOfThought

type ChainOfThought struct {
	Predict *Predict
}

func NewChainOfThought

func NewChainOfThought(signature core.Signature) *ChainOfThought

func NewTypedChainOfThought

func NewTypedChainOfThought[TInput, TOutput any]() *ChainOfThought

NewTypedChainOfThought creates a new type-safe ChainOfThought module from a typed signature. Typed modules use text-based parsing by default since they typically rely on prefixes.

func (*ChainOfThought) ClearInterceptors

func (c *ChainOfThought) ClearInterceptors()

ClearInterceptors removes all interceptors from this module.

func (*ChainOfThought) Clone

func (c *ChainOfThought) Clone() core.Module

Clone creates a deep copy of the ChainOfThought module.

func (*ChainOfThought) Compose

func (c *ChainOfThought) Compose(next core.Module) core.Module

Compose creates a new module that chains this module with the next module.

func (*ChainOfThought) GetDisplayName

func (c *ChainOfThought) GetDisplayName() string

GetDisplayName returns the display name for this ChainOfThought module.

func (*ChainOfThought) GetInterceptors

func (c *ChainOfThought) GetInterceptors() []core.ModuleInterceptor

GetInterceptors returns the current interceptors for this module.

func (*ChainOfThought) GetModuleType

func (c *ChainOfThought) GetModuleType() string

GetModuleType returns "ChainOfThought".

func (*ChainOfThought) GetSignature

func (c *ChainOfThought) GetSignature() core.Signature

GetSignature returns the signature from the internal Predict module.

func (*ChainOfThought) GetSubModules

func (c *ChainOfThought) GetSubModules() []core.Module

GetSubModules returns the sub-modules of this ChainOfThought.

func (*ChainOfThought) Process

func (c *ChainOfThought) Process(ctx context.Context, inputs map[string]any, opts ...core.Option) (map[string]any, error)

func (*ChainOfThought) ProcessWithInterceptors

func (c *ChainOfThought) ProcessWithInterceptors(ctx context.Context, inputs map[string]any, interceptors []core.ModuleInterceptor, opts ...core.Option) (map[string]any, error)

ProcessWithInterceptors executes the module's logic with interceptor support.

func (*ChainOfThought) SetInterceptors

func (c *ChainOfThought) SetInterceptors(interceptors []core.ModuleInterceptor)

SetInterceptors sets the default interceptors for this module instance.

func (*ChainOfThought) SetLLM

func (c *ChainOfThought) SetLLM(llm core.LLM)

SetLLM sets the LLM on the internal Predict module.

func (*ChainOfThought) SetSignature

func (c *ChainOfThought) SetSignature(signature core.Signature)

SetSignature sets the signature on the internal Predict module with rationale field.

func (*ChainOfThought) SetSubModules

func (c *ChainOfThought) SetSubModules(modules []core.Module)

SetSubModules sets the sub-modules (expects exactly one Predict module).

func (*ChainOfThought) WithDefaultOptions

func (c *ChainOfThought) WithDefaultOptions(opts ...core.Option) *ChainOfThought

WithDefaultOptions sets default options by configuring the underlying Predict module.

func (*ChainOfThought) WithName

func (c *ChainOfThought) WithName(name string) *ChainOfThought

WithName sets a semantic name for this ChainOfThought instance.

func (*ChainOfThought) WithStructuredOutput

func (c *ChainOfThought) WithStructuredOutput() *ChainOfThought

WithStructuredOutput enables native JSON structured output for ChainOfThought. This uses the LLM's GenerateWithJSON capability to produce structured responses that include both reasoning and output fields, eliminating parsing errors.

The output will include a "reasoning" field containing the step-by-step thought process.

Usage:

cot := modules.NewChainOfThought(signature).WithStructuredOutput()

func (*ChainOfThought) WithStructuredOutputConfig

func (c *ChainOfThought) WithStructuredOutputConfig(config interceptors.ChainOfThoughtStructuredConfig) *ChainOfThought

WithStructuredOutputConfig enables structured output with custom CoT configuration.

type Module

type Module interface {
	Forward(inputs map[string]interface{}) (Predict, error)
}

type MultiChainComparison

type MultiChainComparison struct {
	core.BaseModule
	M int // Number of attempts to compare
	// contains filtered or unexported fields
}

MultiChainComparison implements the multi-chain comparison module that compares multiple reasoning attempts and produces a holistic evaluation.

func NewMultiChainComparison

func NewMultiChainComparison(signature core.Signature, M int, temperature float64, opts ...core.Option) *MultiChainComparison

NewMultiChainComparison creates a new MultiChainComparison module.
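
A minimal sketch of wiring up the module, assuming a signature and llm constructed elsewhere; the "completions" input key is illustrative only, not a documented contract:

mcc := modules.NewMultiChainComparison(signature, 3, 0.7) // compare 3 attempts at temperature 0.7
mcc.SetLLM(llm)
result, err := mcc.Process(ctx, map[string]interface{}{
	"completions": completions, // prior reasoning attempts; key name assumed
})
if err != nil {
	// handle error
}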

func NewTypedMultiChainComparison

func NewTypedMultiChainComparison[TInput, TOutput any](M int, temperature float64, opts ...core.Option) *MultiChainComparison

NewTypedMultiChainComparison creates a new type-safe MultiChainComparison module from a typed signature. Typed modules use text-based parsing by default since they typically rely on prefixes.

func (*MultiChainComparison) Clone

func (m *MultiChainComparison) Clone() core.Module

Clone creates a deep copy of the MultiChainComparison module.

func (*MultiChainComparison) GetSignature

func (m *MultiChainComparison) GetSignature() core.Signature

GetSignature returns the signature of the module.

func (*MultiChainComparison) Process

func (m *MultiChainComparison) Process(ctx context.Context, inputs map[string]interface{}, opts ...core.Option) (map[string]interface{}, error)

Process implements the core.Module interface. It takes completions and processes them into reasoning attempts for comparison.

func (*MultiChainComparison) SetLLM

func (m *MultiChainComparison) SetLLM(llm core.LLM)

SetLLM sets the LLM for the internal predict module.

func (*MultiChainComparison) WithName

func (m *MultiChainComparison) WithName(name string) *MultiChainComparison

WithName sets a semantic name for this MultiChainComparison instance.

type OfferFeedback

type OfferFeedback struct {
	core.BaseModule
	// contains filtered or unexported fields
}

OfferFeedback represents a module for generating advice to improve module performance. This is a simplified version of the Python implementation's OfferFeedback signature.

func NewOfferFeedback

func NewOfferFeedback() *OfferFeedback

NewOfferFeedback creates a new OfferFeedback module.

func (*OfferFeedback) Clone

func (of *OfferFeedback) Clone() core.Module

Clone creates a deep copy of the OfferFeedback module.

func (*OfferFeedback) Process

func (of *OfferFeedback) Process(ctx context.Context, inputs map[string]interface{}, opts ...core.Option) (map[string]interface{}, error)

Process generates feedback and advice for improving module performance.

func (*OfferFeedback) SetLLM

func (of *OfferFeedback) SetLLM(llm core.LLM)

SetLLM sets the language model for feedback generation.

type Parallel

type Parallel struct {
	core.BaseModule
	// contains filtered or unexported fields
}

Parallel executes a module against multiple inputs concurrently.

func NewParallel

func NewParallel(module core.Module, opts ...ParallelOption) *Parallel

NewParallel creates a new parallel execution wrapper around a module.

func NewTypedParallel

func NewTypedParallel[TInput, TOutput any](innerModule core.Module, opts ...ParallelOption) *Parallel

NewTypedParallel creates a new type-safe Parallel module from a typed signature. Typed modules use text-based parsing by default since they typically rely on prefixes.

func (*Parallel) Clone

func (p *Parallel) Clone() core.Module

Clone creates a deep copy of the Parallel module.

func (*Parallel) GetInnerModule

func (p *Parallel) GetInnerModule() core.Module

GetInnerModule returns the wrapped module.

func (*Parallel) Process

func (p *Parallel) Process(ctx context.Context, inputs map[string]interface{}, opts ...core.Option) (map[string]interface{}, error)

Process executes the inner module against multiple inputs in parallel.

func (*Parallel) SetLLM

func (p *Parallel) SetLLM(llm core.LLM)

SetLLM sets the LLM for both this module and the inner module.

func (*Parallel) WithName

func (p *Parallel) WithName(name string) *Parallel

WithName sets a semantic name for this Parallel instance.

type ParallelOption

type ParallelOption func(*ParallelOptions)

ParallelOption is a function that configures ParallelOptions.

func WithMaxWorkers

func WithMaxWorkers(count int) ParallelOption

WithMaxWorkers sets the maximum number of concurrent workers.

func WithReturnFailures

func WithReturnFailures(returnFailures bool) ParallelOption

WithReturnFailures configures whether to return failed results.

func WithStopOnFirstError

func WithStopOnFirstError(stopOnError bool) ParallelOption

WithStopOnFirstError configures whether to stop on first error.

type ParallelOptions

type ParallelOptions struct {
	// MaxWorkers sets the maximum number of concurrent workers.
	// If 0, defaults to 100 workers (optimized for I/O-bound remote API calls).
	// For local models (CPU-bound), set to runtime.NumCPU() for optimal performance.
	MaxWorkers int
	// ReturnFailures determines if failed results should be included in output
	ReturnFailures bool
	// StopOnFirstError stops execution on first error encountered
	StopOnFirstError bool
}

ParallelOptions configures parallel execution behavior.
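
For example, when wrapping a local, CPU-bound model rather than a remote API, the worker count can be tuned down as the MaxWorkers field comment suggests; a short sketch using the options defined in this package:

parallel := modules.NewParallel(predict,
	modules.WithMaxWorkers(runtime.NumCPU()), // CPU-bound: one worker per core
	modules.WithStopOnFirstError(true),
)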

type ParallelResult

type ParallelResult struct {
	Index   int                    // Original index in the input batch
	Success bool                   // Whether execution succeeded
	Output  map[string]interface{} // The actual output
	Error   error                  // Error if execution failed
}

ParallelResult contains the result of a parallel execution.
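
A sketch of inspecting ParallelResult values; exactly how Process surfaces them (for example when WithReturnFailures is enabled) is an assumption here, not documented behavior:

for _, r := range parallelResults { // parallelResults: a []modules.ParallelResult obtained elsewhere
	if !r.Success {
		log.Printf("input %d failed: %v", r.Index, r.Error)
		continue
	}
	fmt.Printf("input %d succeeded: %v\n", r.Index, r.Output)
}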

type Predict

type Predict struct {
	core.BaseModule
	Demos []core.Example
	LLM   core.LLM
	// contains filtered or unexported fields
}

func NewPredict

func NewPredict(signature core.Signature) *Predict

func NewTypedPredict

func NewTypedPredict[TInput, TOutput any]() *Predict

NewTypedPredict creates a new type-safe Predict module from a typed signature. Typed modules use text-based parsing by default since they typically rely on prefixes.
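
A minimal sketch of the typed constructor, reusing the hypothetical QAInput and QAOutput types from the ProcessTyped example above:

predict := modules.NewTypedPredict[QAInput, QAOutput]()
predict.SetLLM(llm)
answer, err := modules.ProcessTyped[QAInput, QAOutput](ctx, predict,
	QAInput{Question: "What does WithStructuredOutput change?"})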

func (*Predict) Clone

func (p *Predict) Clone() core.Module

func (*Predict) FormatOutputs

func (p *Predict) FormatOutputs(outputs map[string]interface{}) map[string]interface{}

func (*Predict) GetDemos

func (p *Predict) GetDemos() []core.Example

func (*Predict) GetLLMIdentifier

func (p *Predict) GetLLMIdentifier() map[string]string

GetLLMIdentifier implements the LMConfigProvider interface.

func (*Predict) GetSignature

func (p *Predict) GetSignature() core.Signature

func (*Predict) GetXMLConfig

func (p *Predict) GetXMLConfig() *interceptors.XMLConfig

GetXMLConfig returns the XML configuration if XML mode is enabled.

func (*Predict) IsXMLModeEnabled

func (p *Predict) IsXMLModeEnabled() bool

IsXMLModeEnabled returns true if XML mode is enabled for this module.

func (*Predict) Process

func (p *Predict) Process(ctx context.Context, inputs map[string]interface{}, opts ...core.Option) (map[string]interface{}, error)

func (*Predict) ProcessWithInterceptors

func (p *Predict) ProcessWithInterceptors(ctx context.Context, inputs map[string]any, interceptors []core.ModuleInterceptor, opts ...core.Option) (map[string]any, error)

ProcessWithInterceptors executes the Predict module's logic with interceptor support.

func (*Predict) SetDemos

func (p *Predict) SetDemos(demos []core.Example)

func (*Predict) SetLLM

func (p *Predict) SetLLM(llm core.LLM)

func (*Predict) ValidateInputs

func (p *Predict) ValidateInputs(inputs map[string]interface{}) error

func (*Predict) WithDefaultOptions

func (p *Predict) WithDefaultOptions(opts ...core.Option) *Predict

func (*Predict) WithName

func (p *Predict) WithName(name string) *Predict

WithName sets a semantic name for this module instance.

func (*Predict) WithStructuredOutput

func (p *Predict) WithStructuredOutput() *Predict

WithStructuredOutput enables native JSON structured output instead of text parsing. This uses the LLM's GenerateWithJSON capability to produce structured responses that map directly to the signature's output fields, eliminating parsing errors.

Benefits:

  • No parsing errors from malformed prefixes
  • Strongly typed output from the LLM
  • Works with any signature without custom prefix configuration
  • More reliable extraction of multiple output fields

Requirements:

  • The LLM must support CapabilityJSON
  • Falls back to text-based parsing if not supported

Usage:

predict := modules.NewPredict(signature).WithStructuredOutput()

func (*Predict) WithStructuredOutputConfig

func (p *Predict) WithStructuredOutputConfig(config interceptors.StructuredOutputConfig) *Predict

WithStructuredOutputConfig enables structured output with custom configuration.

func (*Predict) WithTextOutput

func (p *Predict) WithTextOutput() *Predict

WithTextOutput disables XML output and uses traditional text-based parsing. This is an escape hatch for users who prefer the original behavior.

IMPORTANT: This method currently removes ALL interceptors from the module, not just XML-related ones. This means any custom interceptors you've configured (such as logging, caching, or metrics) will also be removed.

TODO(#interceptor-preservation): Implement selective removal of only XML interceptors. This requires an interceptor identification mechanism since interceptors are function types. Possible solutions:

  1. Wrap interceptors in a struct with metadata
  2. Maintain a separate list of XML interceptor indices
  3. Use a registry pattern for interceptor management

Until this is fixed, if you need to preserve custom interceptors:

  1. Save your interceptors before calling WithTextOutput()
  2. Re-add them after calling WithTextOutput()

Example workaround:

customInterceptors := predict.GetInterceptors()[:2] // Save first 2 custom interceptors
predict.WithTextOutput()
predict.SetInterceptors(customInterceptors) // Re-add them

func (*Predict) WithXMLOutput

func (p *Predict) WithXMLOutput(config interceptors.XMLConfig) *Predict

WithXMLOutput enables XML interceptor-based output formatting. This provides structured XML output with validation, security features, and error handling.

type ReAct

type ReAct struct {
	core.BaseModule
	Predict   *Predict
	Extract   *ChainOfThought // Fallback extraction module for when loop ends without Finish
	Registry  *tools.InMemoryToolRegistry
	MaxIters  int
	XMLConfig *interceptors.XMLConfig // Optional XML config for enhanced parsing
}

ReAct implements the ReAct agent loop (Reason, Action, Observation). It uses a Predict module to generate thoughts and actions, and executes tools. If the loop ends without a "Finish" action (e.g., max iterations reached), a fallback Extract module attempts to produce an answer from the gathered trajectory.

func NewReAct

func NewReAct(signature core.Signature, registry *tools.InMemoryToolRegistry, maxIters int) *ReAct

NewReAct creates a new ReAct module. It takes a signature (which it modifies), a tool registry pointer, and max iterations.
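
A minimal sketch of driving the ReAct loop, written as a helper that accepts an already-populated tool registry so that no registry constructor is assumed; the "question" and "answer" keys are illustrative only:

func runReAct(ctx context.Context, sig core.Signature, registry *tools.InMemoryToolRegistry, llm core.LLM) (string, error) {
	react := modules.NewReAct(sig, registry, 5) // allow up to 5 reasoning iterations
	react.SetLLM(llm)
	out, err := react.Process(ctx, map[string]any{"question": "Which tool answers this?"})
	if err != nil {
		return "", err
	}
	answer, _ := out["answer"].(string)
	return answer, nil
}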

func NewTypedReAct

func NewTypedReAct[TInput, TOutput any](registry *tools.InMemoryToolRegistry, maxIters int) *ReAct

NewTypedReAct creates a new type-safe ReAct module from a typed signature. Typed modules use text-based parsing by default since they typically rely on prefixes.

func (*ReAct) Clone

func (r *ReAct) Clone() core.Module

Clone creates a copy of the ReAct module. Note: Predict and Extract modules are cloned, but the LLM instance and ToolRegistry are shared. Cloning the registry itself might be complex and depends on the registry implementation. Sharing the registry is usually acceptable.

func (*ReAct) Process

func (r *ReAct) Process(ctx context.Context, inputs map[string]any, opts ...core.Option) (map[string]any, error)

Process executes the ReAct loop.

func (*ReAct) SetLLM

func (r *ReAct) SetLLM(llm core.LLM)

SetLLM sets the language model for the base module and all internal modules (Predict and Extract).

func (*ReAct) WithDefaultOptions

func (r *ReAct) WithDefaultOptions(opts ...core.Option) *ReAct

WithDefaultOptions sets default options by configuring the underlying Predict module.

func (*ReAct) WithNativeFunctionCalling

func (r *ReAct) WithNativeFunctionCalling() *ReAct

WithNativeFunctionCalling enables native LLM function calling for action selection. This bypasses text-based XML parsing entirely by using the LLM's built-in function/tool calling capabilities (e.g., OpenAI function calling, Gemini tools).

Benefits:

  • Eliminates parsing errors and hallucinated observations
  • Strongly typed tool arguments from the LLM
  • More reliable tool selection

Requirements:

  • The LLM must support CapabilityToolCalling
  • Falls back to text-based parsing if not supported

Usage:

react := modules.NewReAct(signature, registry, maxIters)
react.WithNativeFunctionCalling() // Enable native function calling

func (*ReAct) WithNativeFunctionCallingConfig

func (r *ReAct) WithNativeFunctionCallingConfig(config interceptors.FunctionCallingConfig) *ReAct

WithNativeFunctionCallingConfig enables native function calling with custom configuration.

func (*ReAct) WithXMLParsing

func (r *ReAct) WithXMLParsing(config interceptors.XMLConfig) *ReAct

WithXMLParsing enables XML interceptor-based parsing for tool actions. This replaces the hardcoded XML parsing with configurable XML interceptors.

type Refine

type Refine struct {
	core.BaseModule
	// contains filtered or unexported fields
}

Refine implements a refinement module that runs predictions multiple times with varying temperatures to improve quality based on a reward function.

func NewRefine

func NewRefine(module core.Module, config RefineConfig) *Refine

NewRefine creates a new Refine module with the specified configuration.

func NewTypedRefine

func NewTypedRefine[TInput, TOutput any](module core.Module, config RefineConfig) *Refine

NewTypedRefine creates a new type-safe Refine module from a typed signature. Typed modules use text-based parsing by default since they typically rely on prefixes.

func (*Refine) Clone

func (r *Refine) Clone() core.Module

Clone creates a deep copy of the Refine module.

func (*Refine) GetConfig

func (r *Refine) GetConfig() RefineConfig

GetConfig returns the current refinement configuration.

func (*Refine) GetSignature

func (r *Refine) GetSignature() core.Signature

GetSignature returns the module's signature.

func (*Refine) GetWrappedModule

func (r *Refine) GetWrappedModule() core.Module

GetWrappedModule returns the underlying module being refined.

func (*Refine) Process

func (r *Refine) Process(ctx context.Context, inputs map[string]interface{}, opts ...core.Option) (map[string]interface{}, error)

Process executes the refinement logic by running the module multiple times with different temperatures and selecting the best result based on the reward function.

func (*Refine) SetLLM

func (r *Refine) SetLLM(llm core.LLM)

SetLLM sets the language model for the wrapped module.

func (*Refine) SetSignature

func (r *Refine) SetSignature(signature core.Signature)

SetSignature updates the signature for both this module and the wrapped module.

func (*Refine) UpdateConfig

func (r *Refine) UpdateConfig(config RefineConfig) *Refine

UpdateConfig allows updating the refinement configuration.

func (*Refine) WithDefaultOptions

func (r *Refine) WithDefaultOptions(opts ...core.Option) *Refine

WithDefaultOptions sets default options for the module.

func (*Refine) WithName

func (r *Refine) WithName(name string) *Refine

WithName sets a semantic name for this Refine instance.

type RefineConfig

type RefineConfig struct {
	// Number of refinement attempts
	N int
	// Reward function to evaluate predictions
	RewardFn RewardFunction
	// Minimum threshold for acceptable predictions
	Threshold float64
	// Number of failed attempts before giving up (optional)
	FailCount *int
}

RefineConfig holds configuration options for the Refine module.

type RewardFunction

type RewardFunction func(inputs map[string]interface{}, outputs map[string]interface{}) float64

RewardFunction represents a function that evaluates the quality of a prediction. It takes the inputs used and the outputs produced, and returns a reward score. Higher scores indicate better predictions.
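
Putting Refine and RewardFunction together: a minimal sketch in which the reward function scores answers by length, purely for illustration, and the "question" and "answer" keys are assumptions:

rewardFn := func(inputs map[string]interface{}, outputs map[string]interface{}) float64 {
	answer, _ := outputs["answer"].(string)
	return float64(len(answer)) // toy reward: prefer longer answers
}

refine := modules.NewRefine(predict, modules.RefineConfig{
	N:         5,  // up to 5 refinement attempts
	RewardFn:  rewardFn,
	Threshold: 10, // accept the first result scoring at least 10
})
result, err := refine.Process(ctx, map[string]interface{}{"question": "Explain refinement in one sentence."})
if err != nil {
	// handle error
}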
