llms

package
v0.1.2

Warning

This package is not in the latest version of its module.
Published: Apr 12, 2026 License: MIT Imports: 2 Imported by: 0

Documentation

Overview

Package llms provides the interfaces for language model integrations.

Index

Constants

const (
	ConfigKeyTemperature = "temperature"
	ConfigKeyMaxTokens   = "max_tokens"
	ConfigKeyTopP        = "top_p"
	ConfigKeyModel       = "model"
)

Common ChatModel option keys used in RunnableConfig.Configurable.

Variables

This section is empty.

Functions

func WithMaxTokens

func WithMaxTokens(n int) core.Option

WithMaxTokens sets the maximum number of tokens to generate.

func WithModel

func WithModel(model string) core.Option

WithModel sets the model name.

func WithTemperature

func WithTemperature(temp float64) core.Option

WithTemperature sets the temperature for generation.

func WithTopP

func WithTopP(p float64) core.Option

WithTopP sets the top-p (nucleus sampling) parameter.

Types

type ChatGeneration

type ChatGeneration struct {
	// Message is the generated AI message.
	Message *core.AIMessage `json:"message"`

	// GenerationInfo contains generation-specific metadata.
	GenerationInfo map[string]any `json:"generation_info,omitempty"`
}

ChatGeneration represents a single generated message.

type ChatModel

type ChatModel interface {
	core.Runnable[[]core.Message, *core.AIMessage]

	// Generate performs a chat completion and returns detailed results
	// including token usage.
	Generate(ctx context.Context, messages []core.Message, opts ...core.Option) (*ChatResult, error)

	// BindTools returns a new ChatModel that will use the given tool definitions
	// when generating responses.
	BindTools(tools ...ToolDefinition) ChatModel

	// WithStructuredOutput configures the model to return structured output
	// matching the given JSON schema.
	WithStructuredOutput(schema map[string]any) ChatModel
}

ChatModel is the interface that all chat model implementations must satisfy. It extends the Runnable interface with chat-specific methods.

type ChatResult

type ChatResult struct {
	// Generations contains the generated messages.
	Generations []*ChatGeneration `json:"generations"`

	// LLMOutput contains provider-specific output data.
	LLMOutput map[string]any `json:"llm_output,omitempty"`
}

ChatResult holds the complete result of a chat model invocation.

type TokenUsage

type TokenUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

TokenUsage tracks token consumption.

type ToolDefinition

type ToolDefinition struct {
	// Name of the tool.
	Name string `json:"name"`

	// Description of what the tool does.
	Description string `json:"description"`

	// Parameters is a JSON Schema describing the tool's parameters.
	Parameters map[string]any `json:"parameters"`
}

ToolDefinition describes a tool that can be bound to a chat model.
