Documentation ¶
Overview ¶
Package llms provides the interfaces for language model integrations.
Index ¶
Constants ¶
const (
	ConfigKeyTemperature = "temperature"
	ConfigKeyMaxTokens   = "max_tokens"
	ConfigKeyTopP        = "top_p"
	ConfigKeyModel       = "model"
)
Common ChatModel option keys used in RunnableConfig.Configurable.
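As a sketch of how these keys might be used, assuming RunnableConfig.Configurable is a plain map[string]any (the constants are copied from the package; the helper function and model name are illustrative, not part of the API):

```go
package main

import "fmt"

// Constants copied from the package for illustration.
const (
	ConfigKeyTemperature = "temperature"
	ConfigKeyMaxTokens   = "max_tokens"
	ConfigKeyTopP        = "top_p"
	ConfigKeyModel       = "model"
)

// newConfigurable builds a hypothetical Configurable map keyed by the
// option constants above.
func newConfigurable(model string, temperature float64, maxTokens int) map[string]any {
	return map[string]any{
		ConfigKeyModel:       model,
		ConfigKeyTemperature: temperature,
		ConfigKeyMaxTokens:   maxTokens,
	}
}

func main() {
	cfg := newConfigurable("gpt-4o", 0.2, 1024)
	fmt.Println(cfg[ConfigKeyModel]) // gpt-4o
}
```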
Variables ¶
This section is empty.
Functions ¶
func WithMaxTokens ¶
WithMaxTokens sets the maximum number of tokens to generate.
func WithTemperature ¶
WithTemperature sets the temperature for generation.
Types ¶
type ChatGeneration ¶
type ChatGeneration struct {
	// Message is the generated AI message.
	Message *core.AIMessage `json:"message"`
	// GenerationInfo contains generation-specific metadata.
	GenerationInfo map[string]any `json:"generation_info,omitempty"`
}
ChatGeneration represents a single generated message.
type ChatModel ¶
type ChatModel interface {
	core.Runnable[[]core.Message, *core.AIMessage]

	// Generate performs a chat completion and returns detailed results
	// including token usage.
	Generate(ctx context.Context, messages []core.Message, opts ...core.Option) (*ChatResult, error)

	// BindTools returns a new ChatModel that will use the given tool definitions
	// when generating responses.
	BindTools(tools ...ToolDefinition) ChatModel

	// WithStructuredOutput configures the model to return structured output
	// matching the given JSON schema.
	WithStructuredOutput(schema map[string]any) ChatModel
}
ChatModel is the interface that all chat model implementations must satisfy. It extends the Runnable interface with chat-specific methods.
type ChatResult ¶
type ChatResult struct {
	// Generations contains the generated messages.
	Generations []*ChatGeneration `json:"generations"`
	// LLMOutput contains provider-specific output data.
	LLMOutput map[string]any `json:"llm_output,omitempty"`
}
ChatResult holds the complete result of a chat model invocation.
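A common pattern is to pull the first generation's text out of a result. The sketch below mirrors the two structs with a minimal stand-in for core.AIMessage; the firstContent helper and the field values are illustrative, not part of the package:

```go
package main

import "fmt"

// AIMessage is a minimal stand-in for core.AIMessage.
type AIMessage struct {
	Content string
}

// ChatGeneration and ChatResult mirror the package structs, with the
// core.AIMessage reference replaced by the stand-in above.
type ChatGeneration struct {
	Message        *AIMessage     `json:"message"`
	GenerationInfo map[string]any `json:"generation_info,omitempty"`
}

type ChatResult struct {
	Generations []*ChatGeneration `json:"generations"`
	LLMOutput   map[string]any    `json:"llm_output,omitempty"`
}

// firstContent extracts the first generated message's text, a common
// convenience when only one candidate is requested.
func firstContent(r *ChatResult) (string, bool) {
	if r == nil || len(r.Generations) == 0 || r.Generations[0].Message == nil {
		return "", false
	}
	return r.Generations[0].Message.Content, true
}

func main() {
	res := &ChatResult{
		Generations: []*ChatGeneration{
			{
				Message:        &AIMessage{Content: "Hello!"},
				GenerationInfo: map[string]any{"finish_reason": "stop"},
			},
		},
		LLMOutput: map[string]any{"model": "gpt-4o"},
	}
	if text, ok := firstContent(res); ok {
		fmt.Println(text) // Hello!
	}
}
```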
type TokenUsage ¶
type TokenUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}
TokenUsage tracks token consumption.
type ToolDefinition ¶
type ToolDefinition struct {
	// Name of the tool.
	Name string `json:"name"`
	// Description of what the tool does.
	Description string `json:"description"`
	// Parameters is a JSON Schema describing the tool's parameters.
	Parameters map[string]any `json:"parameters"`
}
ToolDefinition describes a tool that can be bound to a chat model.