Documentation ¶
Index ¶
- func GetImplSpecificOptions[T any](base *T, opts ...Option) *T
- type BaseChatModel
- type CallbackInput
- func ConvCallbackInput(src callbacks.CallbackInput) *CallbackInput
- type CallbackOutput
- func ConvCallbackOutput(src callbacks.CallbackOutput) *CallbackOutput
- type ChatModel deprecated
- type CompletionTokensDetails
- type Config
- type Option
- func WithMaxTokens(maxTokens int) Option
- func WithModel(name string) Option
- func WithStop(stop []string) Option
- func WithTemperature(temperature float32) Option
- func WithToolChoice(toolChoice schema.ToolChoice, allowedToolNames ...string) Option
- func WithTools(tools []*schema.ToolInfo) Option
- func WithTopP(topP float32) Option
- func WrapImplSpecificOptFn[T any](optFn func(*T)) Option
- type Options
- func GetCommonOptions(base *Options, opts ...Option) *Options
- type PromptTokenDetails
- type TokenUsage
- type ToolCallingChatModel
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GetImplSpecificOptions ¶
func GetImplSpecificOptions[T any](base *T, opts ...Option) *T
GetImplSpecificOptions extracts the implementation-specific options from an Option list, optionally providing a base options struct with default values. e.g.

myOption := &MyOption{
	Field1: "default_value",
}

myOption = model.GetImplSpecificOptions(myOption, opts...)
Types ¶
type BaseChatModel ¶ added in v0.3.23
type BaseChatModel interface {
	Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
	Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
		*schema.StreamReader[*schema.Message], error)
}
BaseChatModel defines the basic interface for chat models. It provides methods for generating complete outputs and streaming outputs. This interface serves as the foundation for all chat model implementations.
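A minimal usage sketch (cm stands for any BaseChatModel implementation and ctx for a context.Context; imports and error handling are abbreviated):

// Generate returns the complete assistant message in a single call.
msg, err := cm.Generate(ctx, []*schema.Message{
	schema.UserMessage("What is the capital of France?"),
})
if err != nil {
	return err
}
fmt.Println(msg.Content)

// Stream yields the answer incrementally; Recv returns io.EOF when the stream ends.
sr, err := cm.Stream(ctx, []*schema.Message{
	schema.UserMessage("Tell me a short story."),
})
if err != nil {
	return err
}
defer sr.Close()
for {
	chunk, recvErr := sr.Recv()
	if errors.Is(recvErr, io.EOF) {
		break
	}
	if recvErr != nil {
		return recvErr
	}
	fmt.Print(chunk.Content)
}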
type CallbackInput ¶
type CallbackInput struct {
	// Messages is the list of messages sent to the model.
	Messages []*schema.Message
	// Tools is the list of tools available to the model.
	Tools []*schema.ToolInfo
	// ToolChoice controls which tool (if any) the model is to use.
	ToolChoice *schema.ToolChoice
	// Config is the config for the model.
	Config *Config
	// Extra is the extra information for the callback.
	Extra map[string]any
}
CallbackInput is the input for the model callback.
func ConvCallbackInput ¶
func ConvCallbackInput(src callbacks.CallbackInput) *CallbackInput
ConvCallbackInput converts the callback input to the model callback input.
type CallbackOutput ¶
type CallbackOutput struct {
	// Message is the message generated by the model.
	Message *schema.Message
	// Config is the config for the model.
	Config *Config
	// TokenUsage is the token usage of this request.
	TokenUsage *TokenUsage
	// Extra is the extra information for the callback.
	Extra map[string]any
}
CallbackOutput is the output for the model callback.
func ConvCallbackOutput ¶
func ConvCallbackOutput(src callbacks.CallbackOutput) *CallbackOutput
ConvCallbackOutput converts the callback output to the model callback output.
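A sketch of using both converters inside callback hooks (the handler wiring is omitted, and these function signatures are simplified assumptions rather than the callbacks package API):

func onModelStart(ctx context.Context, src callbacks.CallbackInput) context.Context {
	in := model.ConvCallbackInput(src) // nil if src is not a model CallbackInput
	if in != nil && in.Config != nil {
		log.Printf("model %q called with %d messages", in.Config.Model, len(in.Messages))
	}
	return ctx
}

func onModelEnd(ctx context.Context, src callbacks.CallbackOutput) context.Context {
	out := model.ConvCallbackOutput(src)
	if out != nil && out.TokenUsage != nil {
		log.Printf("tokens used: %d", out.TokenUsage.TotalTokens)
	}
	return ctx
}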
type ChatModel ¶ deprecated
type ChatModel interface {
	BaseChatModel

	// BindTools binds tools to the model.
	// Generally, call BindTools before requesting the ChatModel.
	// Note that BindTools and Generate are not atomic with respect to each other.
	BindTools(tools []*schema.ToolInfo) error
}
Deprecated: Please use ToolCallingChatModel interface instead, which provides a safer way to bind tools without the concurrency issues and tool overwriting problems that may arise from the BindTools method.
type CompletionTokensDetails ¶ added in v0.7.10
type CompletionTokensDetails struct {
	// ReasoningTokens is the number of tokens generated by the model for reasoning.
	// This is currently supported by the OpenAI, Gemini, ARK and Qwen chat models.
	// For other models, this field will be 0.
	ReasoningTokens int `json:"reasoning_tokens,omitempty"`
}
type Config ¶
type Config struct {
	// Model is the model name.
	Model string
	// MaxTokens is the maximum number of tokens; once it is reached, the model
	// stops generating, usually with a finish reason of "length".
	MaxTokens int
	// Temperature controls the randomness of the model's output.
	Temperature float32
	// TopP is the top-p sampling value, which controls the diversity of the model's output.
	TopP float32
	// Stop is the list of stop words, which controls when the model stops generating.
	Stop []string
}
Config is the config for the model.
type Option ¶
type Option struct {
	// contains filtered or unexported fields
}
Option is the call option for ChatModel component.
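For example, a caller might combine several of the options below in a single request (a sketch; cm is any chat model implementation, messages a prepared []*schema.Message, toolInfos a prepared []*schema.ToolInfo, and schema.ToolChoiceForced is assumed to be one of the schema package's tool-choice values):

msg, err := cm.Generate(ctx, messages,
	model.WithModel("my-model-name"),
	model.WithTemperature(0.7),
	model.WithMaxTokens(1024),
	model.WithTools(toolInfos),
	// Force a tool call, restricted to the hypothetical "search_web" tool.
	model.WithToolChoice(schema.ToolChoiceForced, "search_web"),
)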
func WithMaxTokens ¶
func WithMaxTokens(maxTokens int) Option
WithMaxTokens is the option to set the max number of tokens for the model.
func WithModel ¶
func WithModel(name string) Option
WithModel is the option to set the model name.
func WithStop ¶
func WithStop(stop []string) Option
WithStop is the option to set the stop words for the model.
func WithTemperature ¶
func WithTemperature(temperature float32) Option
WithTemperature is the option to set the temperature for the model.
func WithToolChoice ¶ added in v0.3.8
func WithToolChoice(toolChoice schema.ToolChoice, allowedToolNames ...string) Option
WithToolChoice sets the tool choice for the model. It also allows for providing a list of tool names to constrain the model to a specific subset of the available tools.
func WithTools ¶
func WithTools(tools []*schema.ToolInfo) Option
WithTools is the option to set the tools the model may call.
func WithTopP ¶
func WithTopP(topP float32) Option
WithTopP is the option to set the top-p sampling value for the model.
func WrapImplSpecificOptFn ¶
func WrapImplSpecificOptFn[T any](optFn func(*T)) Option
WrapImplSpecificOptFn wraps an implementation-specific option function into a unified Option.
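A sketch of the implementation side, pairing WrapImplSpecificOptFn with GetImplSpecificOptions (MyOption, WithSeed and resolveOpts are hypothetical names, not part of this package):

// MyOption is a hypothetical implementation-specific option struct.
type MyOption struct {
	Seed int
}

// WithSeed wraps a MyOption mutator into a generic model.Option.
func WithSeed(seed int) model.Option {
	return model.WrapImplSpecificOptFn(func(o *MyOption) {
		o.Seed = seed
	})
}

// resolveOpts recovers MyOption (with defaults) inside the implementation.
func resolveOpts(opts ...model.Option) *MyOption {
	return model.GetImplSpecificOptions(&MyOption{Seed: 42}, opts...)
}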
type Options ¶
type Options struct {
	// Temperature controls the randomness of the model's output.
	Temperature *float32
	// MaxTokens is the maximum number of tokens; once it is reached, the model
	// stops generating, usually with a finish reason of "length".
	MaxTokens *int
	// Model is the model name.
	Model *string
	// TopP is the top-p sampling value, which controls the diversity of the model's output.
	TopP *float32
	// Stop is the list of stop words, which controls when the model stops generating.
	Stop []string
	// Tools is a list of tools the model may call.
	Tools []*schema.ToolInfo
	// ToolChoice controls which tool is called by the model.
	ToolChoice *schema.ToolChoice
	// AllowedToolNames specifies a list of tool names that the model is allowed to call.
	// This allows for constraining the model to a specific subset of the available tools.
	AllowedToolNames []string
}
Options is the common options for the model.
func GetCommonOptions ¶
func GetCommonOptions(base *Options, opts ...Option) *Options
GetCommonOptions extracts the common model Options from an Option list, optionally providing a base Options with default values.
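A sketch of how an implementation might overlay call options on its configured defaults (defaultTemperature and defaultMaxTokens are assumed local variables of type float32 and int):

// The base acts as the default; any option passed by the caller overrides it.
commonOpts := model.GetCommonOptions(&model.Options{
	Temperature: &defaultTemperature,
	MaxTokens:   &defaultMaxTokens,
}, opts...)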
type PromptTokenDetails ¶ added in v0.4.2
type PromptTokenDetails struct {
	// CachedTokens is the number of cached tokens present in the prompt.
	CachedTokens int
}
type TokenUsage ¶
type TokenUsage struct {
	// PromptTokens is the number of prompt tokens, including all the input tokens of this request.
	PromptTokens int
	// PromptTokenDetails is a breakdown of the prompt tokens.
	PromptTokenDetails PromptTokenDetails
	// CompletionTokens is the number of completion tokens.
	CompletionTokens int
	// TotalTokens is the total number of tokens.
	TotalTokens int
	// CompletionTokensDetails is a breakdown of the completion tokens.
	CompletionTokensDetails CompletionTokensDetails `json:"completion_token_details"`
}
TokenUsage is the token usage for the model.
type ToolCallingChatModel ¶ added in v0.3.23
type ToolCallingChatModel interface {
	BaseChatModel

	// WithTools returns a new ToolCallingChatModel instance with the specified tools bound.
	// This method does not modify the current instance, making it safer for concurrent use.
	WithTools(tools []*schema.ToolInfo) (ToolCallingChatModel, error)
}
ToolCallingChatModel extends BaseChatModel with tool calling capabilities. It provides a WithTools method that returns a new instance with the specified tools bound, avoiding state mutation and concurrency issues.
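A usage sketch (tcm stands for some ToolCallingChatModel implementation, toolInfos for a prepared tool list):

// WithTools leaves tcm untouched and returns a new bound instance,
// so tcm can keep serving other goroutines concurrently.
boundCM, err := tcm.WithTools(toolInfos)
if err != nil {
	return err
}
msg, err := boundCM.Generate(ctx, messages)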