Documentation ¶
Overview ¶
Package model defines the ChatModel component interface for interacting with large language models (LLMs).
A ChatModel takes a slice of schema.Message as input and returns a response message — either in full ([BaseChatModel.Generate]) or incrementally as a stream ([BaseChatModel.Stream]). It is the most fundamental building block in an eino pipeline: every application that talks to an LLM goes through this interface.
Concrete implementations (OpenAI, Ark, Ollama, …) live in eino-ext:
github.com/cloudwego/eino-ext/components/model/
Interface Hierarchy ¶
BaseChatModel — Generate + Stream (all implementations)
├── ToolCallingChatModel — preferred; WithTools returns a new instance (concurrency-safe)
└── ChatModel — deprecated; BindTools mutates state (avoid in new code)
Choosing Generate vs Stream ¶
Use [BaseChatModel.Generate] when the full response is needed before proceeding (e.g. structured extraction, classification). Use [BaseChatModel.Stream] when output should be forwarded to the caller incrementally (e.g. chat UI, long-form generation). Always close the schema.StreamReader returned by Stream — failing to do so leaks the underlying connection:
reader, err := model.Stream(ctx, messages)
if err != nil { ... }
defer reader.Close()
Implementing a ChatModel ¶
Implementations must call GetCommonOptions to extract standard options and GetImplSpecificOptions to extract their own options from the Option list. Expose implementation-specific options via WrapImplSpecificOptFn.
See https://www.cloudwego.io/docs/eino/core_modules/components/chat_model_guide/ for the full component guide.
Index ¶
- func GetImplSpecificOptions[T any](base *T, opts ...Option) *T
- type AgenticCallbackInput
- type AgenticCallbackOutput
- type AgenticConfig
- type AgenticModel
- type BaseChatModel
- type CallbackInput
- type CallbackOutput
- type ChatModel deprecated
- type CompletionTokensDetails
- type Config
- type Option
- func WithAgenticToolChoice(toolChoice *schema.AgenticToolChoice) Option
- func WithMaxTokens(maxTokens int) Option
- func WithModel(name string) Option
- func WithStop(stop []string) Option
- func WithTemperature(temperature float32) Option
- func WithToolChoice(toolChoice schema.ToolChoice, allowedToolNames ...string) Option
- func WithTools(tools []*schema.ToolInfo) Option
- func WithTopP(topP float32) Option
- func WrapImplSpecificOptFn[T any](optFn func(*T)) Option
- type Options
- type PromptTokenDetails
- type TokenUsage
- type ToolCallingChatModel
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GetImplSpecificOptions ¶
func GetImplSpecificOptions[T any](base *T, opts ...Option) *T
GetImplSpecificOptions extracts implementation-specific options from an Option list, merging them onto base. If base is nil, a zero-value T is used.
Call this alongside GetCommonOptions to support both standard and custom options in your implementation:
type MyOptions struct { MyParam string }
func (m *MyModel) Generate(ctx context.Context, input []*schema.Message, opts ...model.Option) (*schema.Message, error) {
common := model.GetCommonOptions(nil, opts...)
myOpts := model.GetImplSpecificOptions(&MyOptions{MyParam: "default"}, opts...)
// use common.Temperature, myOpts.MyParam, etc.
}
Types ¶
type AgenticCallbackInput ¶
type AgenticCallbackInput struct {
// Messages holds the agentic messages sent to the agentic model.
Messages []*schema.AgenticMessage
// Tools lists the tools available to the agentic model.
Tools []*schema.ToolInfo
// Config is the config for the agentic model.
Config *AgenticConfig
// Extra carries extra information for the callback.
Extra map[string]any
}
AgenticCallbackInput is the input for the agentic model callback.
func ConvAgenticCallbackInput ¶
func ConvAgenticCallbackInput(src callbacks.CallbackInput) *AgenticCallbackInput
ConvAgenticCallbackInput converts the callback input to the agentic model callback input.
type AgenticCallbackOutput ¶
type AgenticCallbackOutput struct {
// Message is the agentic message generated by the agentic model.
Message *schema.AgenticMessage
// Config is the config for the agentic model.
Config *AgenticConfig
// TokenUsage is the token usage of this request.
TokenUsage *TokenUsage
// Extra is the extra information for the callback.
Extra map[string]any
}
AgenticCallbackOutput is the output for the agentic model callback.
func ConvAgenticCallbackOutput ¶
func ConvAgenticCallbackOutput(src callbacks.CallbackOutput) *AgenticCallbackOutput
ConvAgenticCallbackOutput converts the callback output to the agentic model callback output.
type AgenticConfig ¶
type AgenticConfig struct {
// Model is the model name.
Model string
// MaxTokens is the maximum number of output tokens; once it is reached, the model stops generating.
MaxTokens int
// Temperature controls the randomness of the agentic model's output.
Temperature float32
// TopP is the nucleus sampling parameter, which controls the diversity of the output.
TopP float32
}
AgenticConfig is the config for the agentic model.
type AgenticModel ¶
type AgenticModel interface {
Generate(ctx context.Context, input []*schema.AgenticMessage, opts ...Option) (*schema.AgenticMessage, error)
Stream(ctx context.Context, input []*schema.AgenticMessage, opts ...Option) (*schema.StreamReader[*schema.AgenticMessage], error)
// WithTools returns a new Model instance with the specified tools bound.
// This method does not modify the current instance, making it safer for concurrent use.
WithTools(tools []*schema.ToolInfo) (AgenticModel, error)
}
AgenticModel defines the interface for agentic models that support AgenticMessage. It provides methods for generating complete and streaming outputs, and supports tool calling via the WithTools method.
type BaseChatModel ¶ added in v0.3.23
type BaseChatModel interface {
Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
*schema.StreamReader[*schema.Message], error)
}
BaseChatModel defines the core interface for all chat model implementations.
It exposes two modes of interaction:
- [BaseChatModel.Generate]: blocks until the model returns a complete response.
- [BaseChatModel.Stream]: returns a schema.StreamReader that yields message chunks incrementally as the model generates them.
The input is a slice of schema.Message representing the conversation history. Messages carry a role (system, user, assistant, tool) and support multimodal content (text, images, audio, video). Tool messages must include a ToolCallID that correlates them with a prior assistant tool-call message.
Stream usage — the caller is responsible for closing the reader:
reader, err := m.Stream(ctx, messages)
if err != nil { ... }
defer reader.Close()
for {
chunk, err := reader.Recv()
if errors.Is(err, io.EOF) { break }
if err != nil { ... }
// handle chunk
}
Note: a schema.StreamReader can only be read once. If multiple consumers need the stream, it must be copied before reading.
type CallbackInput ¶
type CallbackInput struct {
// Messages holds the messages sent to the model.
Messages []*schema.Message
// Tools lists the tools available to the model.
Tools []*schema.ToolInfo
// ToolChoice controls which tool the model may call.
ToolChoice *schema.ToolChoice
// Config is the config for the model.
Config *Config
// Extra carries extra information for the callback.
Extra map[string]any
}
CallbackInput is the input for the model callback.
func ConvCallbackInput ¶
func ConvCallbackInput(src callbacks.CallbackInput) *CallbackInput
ConvCallbackInput converts the callback input to the model callback input.
type CallbackOutput ¶
type CallbackOutput struct {
// Message is the message generated by the model.
Message *schema.Message
// Config is the config for the model.
Config *Config
// TokenUsage is the token usage of this request.
TokenUsage *TokenUsage
// Extra is the extra information for the callback.
Extra map[string]any
}
CallbackOutput is the output for the model callback.
func ConvCallbackOutput ¶
func ConvCallbackOutput(src callbacks.CallbackOutput) *CallbackOutput
ConvCallbackOutput converts the callback output to the model callback output.
type ChatModel ¶ deprecated
type ChatModel interface {
BaseChatModel
// BindTools binds tools to the model in place.
// Call BindTools before requesting the ChatModel.
// Note that BindTools and Generate are not atomic with respect to each
// other, so concurrent use can race.
BindTools(tools []*schema.ToolInfo) error
}
Deprecated: Use ToolCallingChatModel instead.
ChatModel extends BaseChatModel with tool binding via [ChatModel.BindTools]. BindTools mutates the instance in place, which causes a race condition when the same instance is used concurrently: one goroutine's tool list can overwrite another's. Prefer [ToolCallingChatModel.WithTools], which returns a new immutable instance and is safe for concurrent use.
type CompletionTokensDetails ¶ added in v0.7.10
type CompletionTokensDetails struct {
// ReasoningTokens tokens generated by the model for reasoning.
// This is currently supported by OpenAI, Gemini, ARK and Qwen chat models.
// For other models, this field will be 0.
ReasoningTokens int `json:"reasoning_tokens,omitempty"`
}
type Config ¶
type Config struct {
// Model is the model name.
Model string
// MaxTokens is the maximum number of tokens; once it is reached, the model stops generating, typically with a finish reason of "length".
MaxTokens int
// Temperature controls the randomness of the model's output.
Temperature float32
// TopP is the nucleus sampling parameter, which controls the diversity of the output.
TopP float32
// Stop is the list of stop words that terminate generation.
Stop []string
}
Config is the config for the model.
type Option ¶
type Option struct {
// contains filtered or unexported fields
}
Option is a call-time option for a ChatModel. Options are immutable and composable: each Option carries either a common-option setter (applied via GetCommonOptions) or an implementation-specific setter (applied via GetImplSpecificOptions), never both.
func WithAgenticToolChoice ¶
func WithAgenticToolChoice(toolChoice *schema.AgenticToolChoice) Option
WithAgenticToolChoice is the option to set tool choice for the agentic model. Only available for AgenticModel.
func WithMaxTokens ¶
func WithMaxTokens(maxTokens int) Option
WithMaxTokens sets the maximum number of output tokens for the model.
func WithTemperature ¶
func WithTemperature(temperature float32) Option
WithTemperature sets the sampling temperature for the model.
func WithToolChoice ¶ added in v0.3.8
func WithToolChoice(toolChoice schema.ToolChoice, allowedToolNames ...string) Option
WithToolChoice sets the tool choice for the model. It also allows for providing a list of tool names to constrain the model to a specific subset of the available tools. Only available for ChatModel.
func WrapImplSpecificOptFn ¶
func WrapImplSpecificOptFn[T any](optFn func(*T)) Option
WrapImplSpecificOptFn wraps an implementation-specific option function into an Option so it can be passed alongside standard options.
This is intended for ChatModel implementors, not callers. Define a typed setter for your own config struct and expose it as an Option:
// In your implementation package:
func WithMyParam(v string) model.Option {
return model.WrapImplSpecificOptFn(func(o *MyOptions) {
o.MyParam = v
})
}
Callers can then mix standard and implementation-specific options freely:
model.Generate(ctx, msgs,
model.WithTemperature(0.7),
mypkg.WithMyParam("value"),
)
type Options ¶
type Options struct {
// Temperature controls the randomness of the model's output.
Temperature *float32
// Model is the model name.
Model *string
// TopP is the nucleus sampling parameter, which controls the diversity of the output.
TopP *float32
// Tools is a list of tools the model may call.
Tools []*schema.ToolInfo
// MaxTokens is the maximum number of tokens; once it is reached, the model stops generating, typically with a finish reason of "length".
MaxTokens *int
// Stop is the list of stop words that terminate generation.
Stop []string
// ToolChoice controls which tool is called by the model.
ToolChoice *schema.ToolChoice
// AllowedToolNames specifies a list of tool names that the model is allowed to call.
// This allows for constraining the model to a specific subset of the available tools.
AllowedToolNames []string
// AgenticToolChoice controls how the agentic model calls tools.
AgenticToolChoice *schema.AgenticToolChoice
}
Options is the common options for the model.
func GetCommonOptions ¶
func GetCommonOptions(base *Options, opts ...Option) *Options
GetCommonOptions extracts standard Options from an Option list, merging them onto base. If base is nil, a zero-value Options is used.
Implementors must call this to honour options passed by callers:
func (m *MyModel) Generate(ctx context.Context, input []*schema.Message, opts ...model.Option) (*schema.Message, error) {
options := model.GetCommonOptions(&model.Options{Temperature: &m.defaultTemp}, opts...)
// use options.Temperature, options.Tools, etc.
}
type PromptTokenDetails ¶ added in v0.4.2
type PromptTokenDetails struct {
// Cached tokens present in the prompt.
CachedTokens int
}
PromptTokenDetails provides a breakdown of prompt token usage.
type TokenUsage ¶
type TokenUsage struct {
// PromptTokens is the number of prompt tokens, including all the input tokens of this request.
PromptTokens int
// PromptTokenDetails is a breakdown of the prompt tokens.
PromptTokenDetails PromptTokenDetails
// CompletionTokens is the number of completion tokens.
CompletionTokens int
// TotalTokens is the total number of tokens.
TotalTokens int
// CompletionTokensDetails is a breakdown of the completion tokens.
CompletionTokensDetails CompletionTokensDetails `json:"completion_token_details"`
}
TokenUsage is the token usage for the model.
type ToolCallingChatModel ¶ added in v0.3.23
type ToolCallingChatModel interface {
BaseChatModel
// WithTools returns a new ToolCallingChatModel instance with the specified tools bound.
// This method does not modify the current instance, making it safer for concurrent use.
WithTools(tools []*schema.ToolInfo) (ToolCallingChatModel, error)
}
ToolCallingChatModel extends BaseChatModel with safe tool binding.
Unlike the deprecated [ChatModel.BindTools], [ToolCallingChatModel.WithTools] does not mutate the receiver — it returns a new instance with the given tools attached. This makes it safe to share a base model instance across goroutines and derive per-request variants with different tool sets:
base, _ := openai.NewChatModel(ctx, cfg) // shared, no tools
withSearch, _ := base.WithTools([]*schema.ToolInfo{searchTool})
withCalc, _ := base.WithTools([]*schema.ToolInfo{calcTool})