Documentation ¶
Overview ¶
Package chains provides composable chains for LLM workflows.
Chains combine prompts, LLMs, retrievers, and other components into reusable pipelines for tasks like RAG, validation, and map-reduce.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type DocumentMapReduceChain ¶ added in v0.15.0
type DocumentMapReduceChain struct {
// contains filtered or unexported fields
}
DocumentMapReduceChain implements the MapReduce pattern for document processing. It transforms inputs into documents via a mapper, optionally stores them in a vector store for persistence/caching, and synthesizes them via a reducer.
The chain is safe for concurrent use as each call to Execute receives its own input and produces its own output. The chain struct itself is read-only after construction.
Use cases:
- Code-warden: Directory summaries → Project context
- Wiki-warden: Page summaries → Space overview
- Document summarization: Section summaries → Document summary
Design Notes ¶
The vector store is intended for write-only persistence/caching of mapped documents. The reducer operates on the raw documents returned by the mapper, not retrieved from the store. If retrieval-augmented reduction is needed, implement it in a custom MapFunc that performs the retrieval itself.
func NewDocumentMapReduceChain ¶ added in v0.15.0
func NewDocumentMapReduceChain(mapper MapFunc, reducer ReduceFunc, opts ...DocumentMapReduceOption) (*DocumentMapReduceChain, error)
NewDocumentMapReduceChain creates a new DocumentMapReduce chain. Returns an error if mapper or reducer is nil.
The chain is safe for concurrent use.
func (*DocumentMapReduceChain) Execute ¶ added in v0.15.0
Execute runs the full MapReduce pipeline:
- MAP phase: Transform input into documents via mapper
- STORE phase: Optionally store documents in vector store
- REDUCE phase: Synthesize documents into output via reducer
The chain is safe for concurrent use - each call operates independently.
type DocumentMapReduceOption ¶ added in v0.15.0
type DocumentMapReduceOption func(*DocumentMapReduceChain)
DocumentMapReduceOption configures a DocumentMapReduceChain.
func WithBatchSize ¶ added in v0.15.0
func WithBatchSize(size int) DocumentMapReduceOption
WithBatchSize sets the batch size for storing documents. Defaults to 100 if not set or set to a value <= 0.
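The batching arithmetic implied by this option can be sketched as follows; `batches` is a hypothetical helper for illustration, not part of the package, but it reflects the documented default of 100 for non-positive sizes.

```go
package main

import "fmt"

// batches splits docs into chunks of at most size items, mirroring how
// a batch size bounds each store call. Non-positive sizes fall back to
// the documented default of 100.
func batches(docs []string, size int) [][]string {
	if size <= 0 {
		size = 100 // documented default
	}
	var out [][]string
	for start := 0; start < len(docs); start += size {
		end := start + size
		if end > len(docs) {
			end = len(docs)
		}
		out = append(out, docs[start:end])
	}
	return out
}

func main() {
	// 250 documents at batch size 100 -> batches of 100, 100, and 50.
	fmt.Println(len(batches(make([]string, 250), 100)))
}
```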
func WithStore ¶ added in v0.15.0
func WithStore(store vectorstores.VectorStore) DocumentMapReduceOption
WithStore sets the vector store for storing mapped documents. The store is used for write-only persistence. Documents are stored after the map phase, but the reducer operates on the original mapped documents, not retrieved ones.
func WithStoreOptions ¶ added in v0.15.0
func WithStoreOptions(opts ...vectorstores.Option) DocumentMapReduceOption
WithStoreOptions sets options applied to each document during storage. These are typically metadata filters or other storage-level options. Note: These are applied to ALL documents, not per-document metadata.
type LLMChain ¶ added in v0.15.0
type LLMChain[T any] struct {
	// LLM is the language model to use for generation.
	LLM llms.Model
	// Prompt is the template to render before calling the LLM.
	Prompt prompts.PromptTemplate
	// Parser converts the LLM output to type T.
	Parser schema.OutputParser[T]
	// CallOptions are options passed to the LLM call.
	CallOptions []llms.CallOption
}
LLMChain combines a prompt template, an LLM, and an output parser into a single callable unit. It renders the prompt, calls the LLM, and parses the output into a typed result.
func NewLLMChain ¶ added in v0.15.0
func NewLLMChain[T any](llm llms.Model, prompt prompts.PromptTemplate, opts ...LLMChainOption[T]) (*LLMChain[T], error)
NewLLMChain creates a new LLMChain. If no parser is provided via options, a StringParser is used (only valid when T is string). Returns an error if llm or prompt is nil.
type LLMChainOption ¶ added in v0.15.0
type LLMChainOption[T any] func(*LLMChain[T])
LLMChainOption configures an LLMChain.
func WithLLMCallOptions ¶ added in v0.15.0
func WithLLMCallOptions[T any](opts ...llms.CallOption) LLMChainOption[T]
WithLLMCallOptions sets LLM call options (temperature, max tokens, etc.).
func WithOutputParser ¶ added in v0.15.0
func WithOutputParser[T any](parser schema.OutputParser[T]) LLMChainOption[T]
WithOutputParser sets a custom output parser for the chain.
type MapReduceChain ¶ added in v0.15.0
type MapReduceChain[In, Mid, Out any] struct {
	// MapFunc processes each input item.
	MapFunc func(ctx context.Context, input In) (Mid, error)
	// ReduceFunc combines all map results into the final output.
	ReduceFunc func(ctx context.Context, results []Mid) (Out, error)
	// MaxConcurrency limits concurrent map operations. 0 means unlimited.
	MaxConcurrency int
	// Timeout is the per-task timeout. 0 means no timeout.
	Timeout time.Duration
	// QuorumFraction is the fraction of tasks that must succeed (e.g., 0.66).
	QuorumFraction float64
}
MapReduceChain runs a map function concurrently over inputs, then reduces the collected results into a final output. Supports concurrency limits, per-task timeouts, and quorum-based early return.
func NewMapReduceChain ¶ added in v0.15.0
func NewMapReduceChain[In, Mid, Out any](
	mapFn func(ctx context.Context, input In) (Mid, error),
	reduceFn func(ctx context.Context, results []Mid) (Out, error),
	opts ...MapReduceOption[In, Mid, Out],
) *MapReduceChain[In, Mid, Out]
NewMapReduceChain creates a new MapReduceChain with the given map and reduce functions.
type MapReduceOption ¶ added in v0.15.0
type MapReduceOption[In, Mid, Out any] func(*MapReduceChain[In, Mid, Out])
MapReduceOption configures a MapReduceChain.
func WithMapTimeout ¶ added in v0.15.0
func WithMapTimeout[In, Mid, Out any](d time.Duration) MapReduceOption[In, Mid, Out]
WithMapTimeout sets a timeout for each individual map task.
func WithMaxConcurrency ¶ added in v0.15.0
func WithMaxConcurrency[In, Mid, Out any](n int) MapReduceOption[In, Mid, Out]
WithMaxConcurrency limits the number of map tasks running in parallel.
func WithQuorum ¶ added in v0.15.0
func WithQuorum[In, Mid, Out any](fraction float64) MapReduceOption[In, Mid, Out]
WithQuorum sets the fraction of map tasks that must succeed before the chain proceeds to the reduce step. For example, 0.66 means at least 2 out of 3 tasks must succeed.
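The 0.66-of-3 example implies the required count rounds up; `quorumThreshold` below is a hypothetical helper illustrating that arithmetic, not the package's internal function.

```go
package main

import (
	"fmt"
	"math"
)

// quorumThreshold computes how many of n map tasks must succeed for a
// given fraction, rounding up: 0.66 of 3 tasks requires 2 successes.
func quorumThreshold(fraction float64, n int) int {
	return int(math.Ceil(fraction * float64(n)))
}

func main() {
	fmt.Println(quorumThreshold(0.66, 3)) // ceil(1.98) = 2
}
```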
type ReduceFunc ¶ added in v0.15.0
ReduceFunc synthesizes multiple documents into a single output.
type RetrievalQA ¶
type RetrievalQA struct {
// Retriever fetches relevant documents for the query.
Retriever schema.Retriever
// LLM generates the final answer.
LLM llms.Model
// PromptBuilder formats the query and documents into a prompt.
// If nil, uses DefaultRAGPrompt.
PromptBuilder func(query string, docs []schema.Document) (string, error)
}
RetrievalQA is a simple RAG chain that retrieves documents and generates an answer. It retrieves relevant documents using the Retriever, builds a prompt with the documents as context, and uses the LLM to generate an answer.
func NewRetrievalQA ¶
func NewRetrievalQA(retriever schema.Retriever, llm llms.Model, opts ...RetrievalQAOption) (RetrievalQA, error)
NewRetrievalQA creates a new RetrievalQA chain. Returns an error if retriever or llm is nil.
type RetrievalQAOption ¶ added in v0.15.0
type RetrievalQAOption func(*RetrievalQA)
RetrievalQAOption configures a RetrievalQA chain.
func WithPromptBuilder ¶ added in v0.15.0
func WithPromptBuilder(pb func(query string, docs []schema.Document) (string, error)) RetrievalQAOption
WithPromptBuilder sets a custom prompt builder function. The builder receives the query and retrieved documents and returns the formatted prompt string.
type ValidatingRetrievalQA ¶ added in v0.2.0
type ValidatingRetrievalQA struct {
// Retriever fetches relevant documents for the query.
Retriever schema.Retriever
// GeneratorLLM generates the final answer.
GeneratorLLM llms.Model
// ValidatorLLM validates context relevance.
ValidatorLLM llms.Model
// contains filtered or unexported fields
}
ValidatingRetrievalQA validates the relevance of retrieved context before generation. It uses a separate validator LLM to check if the retrieved documents are relevant to the query before using them for generation.
func NewValidatingRetrievalQA ¶ added in v0.2.0
func NewValidatingRetrievalQA(retriever schema.Retriever, generator llms.Model, opts ...ValidatingRetrievalQAOption) (ValidatingRetrievalQA, error)
NewValidatingRetrievalQA creates a new ValidatingRetrievalQA chain. Returns an error if retriever, generator, or validator is nil.
type ValidatingRetrievalQAOption ¶ added in v0.2.0
type ValidatingRetrievalQAOption func(*ValidatingRetrievalQA)
ValidatingRetrievalQAOption configures a ValidatingRetrievalQA chain.
func WithLogger ¶ added in v0.2.0
func WithLogger(logger *slog.Logger) ValidatingRetrievalQAOption
WithLogger sets the logger for the chain.
func WithValidator ¶ added in v0.2.0
func WithValidator(llm llms.Model) ValidatingRetrievalQAOption
WithValidator sets the LLM used for context validation.