Documentation
Index
- type ChatModel
- func NewChatModel(ctx context.Context, cfg *Config) (*ChatModel, error)
- func (cm *ChatModel) BindForcedTools(tools []*schema.ToolInfo) error
- func (cm *ChatModel) BindTools(tools []*schema.ToolInfo) error
- func (cm *ChatModel) Generate(ctx context.Context, input []*schema.Message, opts ...model.Option) (message *schema.Message, err error)
- func (cm *ChatModel) GetType() string
- func (cm *ChatModel) IsCallbacksEnabled() bool
- func (cm *ChatModel) Stream(ctx context.Context, input []*schema.Message, opts ...model.Option) (result *schema.StreamReader[*schema.Message], err error)
- func (cm *ChatModel) WithTools(tools []*schema.ToolInfo) (model.ToolCallingChatModel, error)
- type Config
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type ChatModel
type ChatModel struct {
    // contains filtered or unexported fields
}
ChatModel implements the Gemini chat model for the eino framework. It provides integration with Google's Gemini API, supporting both text generation and tool calling capabilities.
func NewChatModel
func NewChatModel(ctx context.Context, cfg *Config) (*ChatModel, error)
NewChatModel creates a new Gemini chat model instance. It initializes a Google Gemini model with the specified configuration, supporting both text generation and tool calling capabilities.
Parameters:
- ctx: The context for the operation (currently unused but kept for interface consistency)
- cfg: Configuration for the Gemini model including client, model name, and parameters
Returns:
- *ChatModel: A Gemini chat model instance implementing ToolCallingChatModel
- error: Any error that occurred during creation
Example:
    maxTokens := 100
    client, _ := genai.NewClient(ctx, &genai.ClientConfig{
        APIKey: "your-api-key",
    })
    cm, err := gemini.NewChatModel(ctx, &gemini.Config{
        Client:    client,
        Model:     "gemini-pro",
        MaxTokens: &maxTokens,
    })
    if err != nil {
        log.Fatal(err)
    }
func (*ChatModel) BindForcedTools
func (cm *ChatModel) BindForcedTools(tools []*schema.ToolInfo) error
BindForcedTools binds tools to the current model instance in forced mode. This ensures the model will always use one of the provided tools rather than generating a text response.
Parameters:
- tools: A slice of tool definitions to bind to the model
Returns:
- error: Returns an error if no tools are provided or the conversion fails
func (*ChatModel) BindTools
func (cm *ChatModel) BindTools(tools []*schema.ToolInfo) error
BindTools binds tools to the current model instance. Unlike WithTools, this modifies the current instance rather than creating a new one. Tools are set to "allowed" mode by default.
Parameters:
- tools: A slice of tool definitions to bind to the model
Returns:
- error: Returns an error if no tools are provided or the conversion fails
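Example (a minimal sketch; cm is the *ChatModel from the NewChatModel example above, and the get_weather tool with its single city parameter is illustrative only; BindForcedTools accepts the same slice when the model must always call a tool):
    // Define an illustrative tool using the eino schema helpers.
    weatherTool := &schema.ToolInfo{
        Name: "get_weather",
        Desc: "Get the current weather for a city",
        ParamsOneOf: schema.NewParamsOneOfByParams(map[string]*schema.ParameterInfo{
            "city": {
                Type:     schema.String,
                Desc:     "Name of the city",
                Required: true,
            },
        }),
    }
    // Bind the tool to the current model instance in "allowed" mode.
    if err := cm.BindTools([]*schema.ToolInfo{weatherTool}); err != nil {
        log.Fatal(err)
    }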
func (*ChatModel) Generate
func (cm *ChatModel) Generate(ctx context.Context, input []*schema.Message, opts ...model.Option) (message *schema.Message, err error)
Generate generates a single response from the Gemini model. It processes the input messages and returns a complete response.
Parameters:
- ctx: Context for the operation, supporting cancellation and callbacks
- input: The conversation history as a slice of messages
- opts: Optional configuration options for the generation
Returns:
- *schema.Message: The generated response message with content and metadata
- error: Any error that occurred during generation
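Example (a minimal sketch, assuming cm and ctx from the NewChatModel example above; SystemMessage and UserMessage are message helpers from the eino schema package):
    resp, err := cm.Generate(ctx, []*schema.Message{
        schema.SystemMessage("You are a concise assistant."),
        schema.UserMessage("Summarize the Gemini API in one sentence."),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Content) // generated text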
func (*ChatModel) GetType
func (cm *ChatModel) GetType() string
GetType returns the type identifier for this model. This is used for logging and debugging purposes to identify which model implementation is being used.
Returns:
- string: Returns "Gemini" as the model type
func (*ChatModel) IsCallbacksEnabled
func (cm *ChatModel) IsCallbacksEnabled() bool
IsCallbacksEnabled indicates whether this model supports callbacks. For the Gemini model, callbacks are always enabled to support token usage tracking and other monitoring features.
Returns:
- bool: Always returns true for Gemini models
func (*ChatModel) Stream
func (cm *ChatModel) Stream(ctx context.Context, input []*schema.Message, opts ...model.Option) (result *schema.StreamReader[*schema.Message], err error)
Stream generates a streaming response from the Gemini model. It allows incremental processing of the model's output as it's generated.
Parameters:
- ctx: Context for the operation, supporting cancellation and callbacks
- input: The conversation history as a slice of messages
- opts: Optional configuration options for the generation
Returns:
- *schema.StreamReader[*schema.Message]: A reader for the streaming response
- error: Any error that occurred during stream setup
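Example (a minimal sketch, assuming cm and ctx as above; the reader is drained until io.EOF and closed when the caller is done):
    sr, err := cm.Stream(ctx, []*schema.Message{
        schema.UserMessage("Tell me a short story."),
    })
    if err != nil {
        log.Fatal(err)
    }
    defer sr.Close()
    for {
        chunk, recvErr := sr.Recv()
        if errors.Is(recvErr, io.EOF) {
            break // stream finished
        }
        if recvErr != nil {
            log.Fatal(recvErr)
        }
        fmt.Print(chunk.Content) // print each incremental piece
    }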
func (*ChatModel) WithTools
func (cm *ChatModel) WithTools(tools []*schema.ToolInfo) (model.ToolCallingChatModel, error)
WithTools creates a new model instance with the specified tools available. It returns a new ChatModel with tools configured for function calling. The original model instance remains unchanged.
Parameters:
- tools: A slice of tool definitions that the model can use
Returns:
- model.ToolCallingChatModel: A new model instance with tools enabled
- error: Returns an error if no tools are provided or the conversion fails
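Example (a minimal sketch reusing the illustrative weatherTool from the BindTools example above; inspecting resp.ToolCalls assumes the tool-call layout of the eino schema.Message type):
    // Create a new instance with the tool enabled; cm itself is left unchanged.
    toolModel, err := cm.WithTools([]*schema.ToolInfo{weatherTool})
    if err != nil {
        log.Fatal(err)
    }
    resp, err := toolModel.Generate(ctx, []*schema.Message{
        schema.UserMessage("What is the weather in Paris?"),
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, tc := range resp.ToolCalls {
        fmt.Println(tc.Function.Name, tc.Function.Arguments) // requested tool calls, if any
    }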
type Config
type Config struct {
    // Client is the Gemini API client instance
    // Required for making API calls to Gemini
    Client *genai.Client

    // Model specifies which Gemini model to use
    // Examples: "gemini-pro", "gemini-pro-vision", "gemini-1.5-flash"
    Model string

    // MaxTokens limits the maximum number of tokens in the response
    // Optional. Example: maxTokens := 100
    MaxTokens *int

    // Temperature controls randomness in responses
    // Range: [0.0, 1.0], where 0.0 is more focused and 1.0 is more creative
    // Optional. Example: temperature := float32(0.7)
    Temperature *float32

    // TopP controls diversity via nucleus sampling
    // Range: [0.0, 1.0], where 1.0 disables nucleus sampling
    // Optional. Example: topP := float32(0.95)
    TopP *float32

    // TopK controls diversity by limiting the top K tokens to sample from
    // Optional. Example: topK := int32(40)
    TopK *int32

    // ResponseSchema defines the structure for JSON responses
    // Optional. Used when you want structured output in JSON format
    ResponseSchema *openapi3.Schema

    // EnableCodeExecution allows the model to execute code
    // Warning: Be cautious with code execution in production
    // Optional. Default: false
    EnableCodeExecution bool

    // SafetySettings configures content filtering for different harm categories
    // Controls the model's filtering behavior for potentially harmful content
    // Optional.
    SafetySettings []*genai.SafetySetting
}
Config contains the configuration options for the Gemini model.
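Example (a hedged configuration sketch; the sampling values are illustrative, and client and ctx are the same as in the NewChatModel example):
    maxTokens := 1024
    temperature := float32(0.7)
    topP := float32(0.95)
    topK := int32(40)

    cm, err := gemini.NewChatModel(ctx, &gemini.Config{
        Client:      client,
        Model:       "gemini-1.5-flash",
        MaxTokens:   &maxTokens,
        Temperature: &temperature,
        TopP:        &topP,
        TopK:        &topK,
    })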