Documentation ¶
Overview ¶
Package llms provides unified support for interacting with different Language Models (LLMs) from various providers. Designed with an extensible architecture, the package facilitates seamless integration of LLMs with a focus on modularity, encapsulation, and easy configurability.
Each subpackage includes provider-specific LLM implementations and helper files for communication with supported LLM providers. The internal directories within these subpackages contain provider-specific client and API implementations.
The `llms.go` file contains the types and interfaces for interacting with different LLMs.
The `options.go` file provides various options and functions to configure the LLMs.
Index ¶
- Constants
- Variables
- func GenerateFromSinglePrompt(ctx context.Context, llm Model, prompt string, options ...CallOption) (string, error)
- type BinaryContent
- type BinaryContentJSON
- type BinaryJSON
- type CallOption
- func WithCandidateCount(c int) CallOption
- func WithFrequencyPenalty(frequencyPenalty float64) CallOption
- func WithMaxLength(maxLength int) CallOption
- func WithMaxTokens(maxTokens int) CallOption
- func WithMetadata(metadata map[string]any) CallOption
- func WithMinLength(minLength int) CallOption
- func WithModel(model string) CallOption
- func WithN(n int) CallOption
- func WithOptions(options CallOptions) CallOption
- func WithPresencePenalty(presencePenalty float64) CallOption
- func WithPromptCacheKey(promptCacheKey string) CallOption
- func WithPromptCacheMode(promptCacheMode PromptCacheMode) CallOption
- func WithReasoningEffort(reasoningEffort ReasoningEffort) CallOption
- func WithRepetitionPenalty(repetitionPenalty float64) CallOption
- func WithResponseFormat(responseFormat *schema.ResponseFormat) CallOption
- func WithSeed(seed int) CallOption
- func WithStopWords(stopWords []string) CallOption
- func WithStreamingFunc(streamingFunc func(ctx context.Context, chunk []byte) error) CallOption
- func WithStreamingReasoningFunc(...) CallOption
- func WithTemperature(temperature float64) CallOption
- func WithToolChoice(choice any) CallOption
- func WithTools(tools []Tool) CallOption
- func WithTopK(topK int) CallOption
- func WithTopP(topP float64) CallOption
- type CallOptions
- type Capability
- type ContentChoice
- type ContentPart
- type ContentPartJSON
- type ContentResponse
- type Embedder
- type FunctionCall
- type FunctionCallBehavior
- type FunctionDefinition
- type FunctionReference
- type ImageURLContent
- type ImageURLContentJSON
- type ImageURLJSON
- type Message
- type MessageContentJSON
- type MessageContentWithPartsJSON
- type Model
- type PromptCacheMode
- type PromptValue
- type ProviderType
- type ReasoningEffort
- type Role
- type TextContent
- type TextContentJSON
- type Tool
- type ToolCall
- type ToolCallContentJSON
- type ToolCallJSON
- type ToolCallJSONOrdered
- type ToolCallResponse
- type ToolChoice
- type ToolResponseContentJSON
- type ToolResponseJSON
- type ToolResponseJSONOrdered
- type WebSearchOptions
Constants ¶
const (
ReasoningEffortDefault = iota
ReasoningEffortNone
ReasoningEffortLow
ReasoningEffortMedium
ReasoningEffortHigh
)
Variables ¶
var ErrUnexpectedRole = errors.New("unexpected role")
ErrUnexpectedRole is returned when a message role is of an unexpected type.
Functions ¶
func GenerateFromSinglePrompt ¶
func GenerateFromSinglePrompt(ctx context.Context, llm Model, prompt string, options ...CallOption) (string, error)
GenerateFromSinglePrompt is a convenience function for calling an LLM with a single string prompt, expecting a single string response. It's useful for simple, string-only interactions and provides a slightly more ergonomic API than the more general llms.Model.GenerateContent.
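For example, a minimal sketch of a prompt-in, string-out call. The import path and the way the concrete model is constructed are assumptions; any value implementing llms.Model (from one of the provider subpackages) can be passed in.

```go
package example

import (
	"context"

	"github.com/tmc/langchaingo/llms" // import path assumed; adjust to this module's path
)

// ask wraps a single prompt-in, string-out call against any llms.Model,
// e.g. a model constructed from one of the provider subpackages.
func ask(ctx context.Context, model llms.Model, question string) (string, error) {
	return llms.GenerateFromSinglePrompt(ctx, model, question,
		llms.WithTemperature(0.2), // low temperature for a focused answer
		llms.WithMaxTokens(64),
	)
}
```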
Types ¶
type BinaryContent ¶
BinaryContent is content holding some binary data with a MIME type.
func BinaryPart ¶
func BinaryPart(mime string, data []byte) BinaryContent
BinaryPart creates a new BinaryContent from the given MIME type (e.g. "image/png") and binary data.
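As a hedged sketch, pairing a text part with image bytes in a single message; the file path, MIME type, and import path are illustrative assumptions.

```go
package example

import (
	"os"

	"github.com/tmc/langchaingo/llms" // import path assumed
)

// imageMessage attaches raw PNG bytes to a human message next to a text prompt.
func imageMessage(path string) (llms.Message, error) {
	data, err := os.ReadFile(path) // raw image bytes
	if err != nil {
		return llms.Message{}, err
	}
	return llms.MessageFromParts(llms.RoleHuman,
		llms.TextPart("Describe this image."),
		llms.BinaryPart("image/png", data),
	), nil
}
```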
func (BinaryContent) MarshalJSON ¶
func (bc BinaryContent) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler for BinaryContent
func (BinaryContent) String ¶
func (bc BinaryContent) String() string
func (*BinaryContent) UnmarshalJSON ¶
func (bc *BinaryContent) UnmarshalJSON(data []byte) error
UnmarshalJSON implements json.Unmarshaler for BinaryContent
type BinaryContentJSON ¶ added in v0.10.55
type BinaryContentJSON struct {
Type string `json:"type"`
Binary BinaryJSON `json:"binary"`
}
BinaryContentJSON represents the JSON structure for binary content
type BinaryJSON ¶ added in v0.10.55
BinaryJSON represents the JSON structure for binary content
type CallOption ¶
type CallOption func(*CallOptions)
CallOption is a function that configures a CallOptions.
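Because a CallOption is just a function over *CallOptions, ad-hoc options can be composed alongside the built-in With* helpers. A sketch (the model name and the capTokens helper are made up for illustration; import path assumed):

```go
package main

import (
	"fmt"

	"github.com/tmc/langchaingo/llms" // import path assumed
)

// capTokens is a hypothetical custom option that caps MaxTokens.
func capTokens(limit int) llms.CallOption {
	return func(o *llms.CallOptions) {
		if o.MaxTokens == 0 || o.MaxTokens > limit {
			o.MaxTokens = limit
		}
	}
}

func main() {
	// Apply options in order, the same way model implementations do.
	var opts llms.CallOptions
	for _, opt := range []llms.CallOption{
		llms.WithModel("example-model"), // model name is a placeholder
		llms.WithTemperature(0.7),
		capTokens(256),
	} {
		opt(&opts)
	}
	fmt.Printf("model=%s temperature=%.1f maxTokens=%d\n", opts.Model, opts.Temperature, opts.MaxTokens)
}
```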
func WithCandidateCount ¶
func WithCandidateCount(c int) CallOption
WithCandidateCount specifies the number of response candidates to generate.
func WithFrequencyPenalty ¶
func WithFrequencyPenalty(frequencyPenalty float64) CallOption
WithFrequencyPenalty will add an option to set the frequency penalty for sampling.
func WithMaxLength ¶
func WithMaxLength(maxLength int) CallOption
WithMaxLength will add an option to set the maximum length of the generated text.
func WithMaxTokens ¶
func WithMaxTokens(maxTokens int) CallOption
WithMaxTokens specifies the max number of tokens to generate.
func WithMetadata ¶
func WithMetadata(metadata map[string]any) CallOption
WithMetadata will add an option to set metadata to include in the request. The meaning of this field is specific to the backend in use.
func WithMinLength ¶
func WithMinLength(minLength int) CallOption
WithMinLength will add an option to set the minimum length of the generated text.
func WithModel ¶
func WithModel(model string) CallOption
WithModel specifies which model name to use.
func WithN ¶
func WithN(n int) CallOption
WithN will add an option to set how many chat completion choices to generate for each input message.
func WithPresencePenalty ¶
func WithPresencePenalty(presencePenalty float64) CallOption
WithPresencePenalty will add an option to set the presence penalty for sampling.
func WithPromptCacheKey ¶ added in v0.14.105
func WithPromptCacheKey(promptCacheKey string) CallOption
WithPromptCacheKey allows setting the prompt cache key.
func WithPromptCacheMode ¶ added in v0.14.105
func WithPromptCacheMode(promptCacheMode PromptCacheMode) CallOption
WithPromptCacheMode allows setting the prompt cache mode.
func WithReasoningEffort ¶ added in v0.14.100
func WithReasoningEffort(reasoningEffort ReasoningEffort) CallOption
WithReasoningEffort allows setting the reasoning effort.
func WithRepetitionPenalty ¶
func WithRepetitionPenalty(repetitionPenalty float64) CallOption
WithRepetitionPenalty will add an option to set the repetition penalty for sampling.
func WithResponseFormat ¶ added in v0.10.55
func WithResponseFormat(responseFormat *schema.ResponseFormat) CallOption
WithResponseFormat allows setting a custom response format. If it is not set, the response MIME type is text/plain; otherwise, JSON mode is derived from the response format.
func WithSeed ¶
func WithSeed(seed int) CallOption
WithSeed will add an option to use deterministic sampling.
func WithStopWords ¶
func WithStopWords(stopWords []string) CallOption
WithStopWords specifies a list of words to stop generation on.
func WithStreamingFunc ¶
func WithStreamingFunc(streamingFunc func(ctx context.Context, chunk []byte) error) CallOption
WithStreamingFunc specifies the streaming function to use.
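For example, a small sketch of an option that prints streamed chunks as they arrive (import path assumed):

```go
package example

import (
	"context"
	"os"

	"github.com/tmc/langchaingo/llms" // import path assumed
)

// streamToStdout builds a CallOption that writes each streamed chunk to stdout.
// Returning a non-nil error from the callback stops streaming early.
func streamToStdout() llms.CallOption {
	return llms.WithStreamingFunc(func(_ context.Context, chunk []byte) error {
		_, err := os.Stdout.Write(chunk)
		return err
	})
}
```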
func WithStreamingReasoningFunc ¶
func WithStreamingReasoningFunc(streamingReasoningFunc func(ctx context.Context, reasoningChunk, chunk []byte) error) CallOption
WithStreamingReasoningFunc specifies the streaming reasoning function to use.
func WithTemperature ¶
func WithTemperature(temperature float64) CallOption
WithTemperature specifies the model temperature, a hyperparameter that regulates the randomness, or creativity, of the AI's responses.
func WithToolChoice ¶
func WithToolChoice(choice any) CallOption
WithToolChoice will add an option to set the choice of tool to use. It can either be "none", "auto" (the default behavior), or a specific tool as described in the ToolChoice type.
func WithTools ¶
func WithTools(tools []Tool) CallOption
WithTools will add an option to set the tools to use.
func WithTopK ¶
func WithTopK(topK int) CallOption
WithTopK will add an option to use top-k sampling.
func WithTopP ¶
func WithTopP(topP float64) CallOption
WithTopP will add an option to use top-p sampling.
type CallOptions ¶
type CallOptions struct {
// Model is the model to use.
Model string
// CandidateCount is the number of response candidates to generate.
CandidateCount int
// MaxTokens is the maximum number of tokens to generate.
MaxTokens int
// Temperature is the temperature for sampling, between 0 and 1.
Temperature float64
// StopWords is a list of words to stop on.
StopWords []string
// StreamingFunc is a function to be called for each chunk of a streaming response.
// Return an error to stop streaming early.
StreamingFunc func(ctx context.Context, chunk []byte) error
// StreamingReasoningFunc is a function to be called for each chunk of a streaming response.
// Return an error to stop streaming early.
StreamingReasoningFunc func(ctx context.Context, reasoningChunk, chunk []byte) error
// TopK is the number of tokens to consider for top-k sampling.
TopK int
// TopP is the cumulative probability for top-p sampling.
TopP float64
// Seed is a seed for deterministic sampling.
Seed int
// MinLength is the minimum length of the generated text.
MinLength int
// MaxLength is the maximum length of the generated text.
MaxLength int
// N is how many chat completion choices to generate for each input message.
N int
// RepetitionPenalty is the repetition penalty for sampling.
RepetitionPenalty float64
// FrequencyPenalty is the frequency penalty for sampling.
FrequencyPenalty float64
// PresencePenalty is the presence penalty for sampling.
PresencePenalty float64
// Tools is a list of tools to use. Each tool can be a specific tool or a function.
Tools []Tool
// ToolChoice is the choice of tool to use, it can either be "none", "auto" (the default behavior), or a specific tool as described in the ToolChoice type.
ToolChoice any
// Metadata is a map of metadata to include in the request.
// The meaning of this field is specific to the backend in use.
Metadata map[string]any
// ResponseFormat is a custom response format.
// If it is not set, the response MIME type is text/plain.
// Otherwise, JSON mode is derived from the response format.
ResponseFormat *schema.ResponseFormat
// ReasoningEffort is the reasoning effort to use, for providers and models that support it.
ReasoningEffort ReasoningEffort
// PromptCacheMode controls whether and how prompt caching is used.
// Prompt caching allows storing and retrieving model responses for identical prompts,
// which can improve performance and reduce costs. The mode determines the caching strategy.
PromptCacheMode PromptCacheMode
// PromptCacheKey is the key used to identify a cached prompt/response pair.
// If set, it overrides the default cache key derived from the prompt and options.
PromptCacheKey string
}
CallOptions is a set of options for calling models. Not all models support all options.
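One way to reuse a bundle of defaults across calls is WithOptions; a sketch with illustrative names, assuming WithOptions assigns the full CallOptions value so it should come before any per-call overrides:

```go
package example

import "github.com/tmc/langchaingo/llms" // import path assumed

// defaults is a reusable option bundle; field names come from CallOptions above.
var defaults = llms.CallOptions{
	Temperature: 0.3,
	MaxTokens:   512,
	StopWords:   []string{"\n\n"},
}

// withDefaults prepends the shared bundle so that per-call options passed in
// extra are applied afterwards and can override individual fields.
func withDefaults(extra ...llms.CallOption) []llms.CallOption {
	return append([]llms.CallOption{llms.WithOptions(defaults)}, extra...)
}
```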
type Capability ¶ added in v0.10.54
type Capability uint64
Capability is a bitmask indicating supported features of an LLM provider.
const (
// Basic text or chat generation
CapabilityText Capability = 1 << iota
// Structured response formats
CapabilityJSONResponse
CapabilityJSONSchema
CapabilityJSONSchemaStrict
// Function/tool calling
CapabilityFunctionCalling
CapabilityMultiToolCalling
CapabilityToolCallStreaming
// Multimodal (images, audio, etc.)
CapabilityVision
CapabilityImageGeneration
CapabilityAudioTranscription
// Open weight models / self-hosted
CapabilitySelfHosted
// System prompt support
CapabilitySystemPrompt
// Web Search tool support, used by models that support web search grounding.
CapabilityWebSearchTool
// Prompt Caching
CapabilityPromptCaching
)
func ProviderCapabilities ¶ added in v0.10.54
func ProviderCapabilities(pt ProviderType) Capability
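A sketch of a capability check before enabling tool calling; the identifiers are the ones listed in this package, and the import path is assumed:

```go
package main

import (
	"fmt"

	"github.com/tmc/langchaingo/llms" // import path assumed
)

func main() {
	pt := llms.ProviderOpenAI
	if pt.Supports(llms.CapabilityFunctionCalling) {
		fmt.Println(pt, "supports function calling")
	}
	// ProviderCapabilities exposes the raw bitmask for inspection.
	fmt.Printf("capability bits: %b\n", llms.ProviderCapabilities(pt))
}
```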
type ContentChoice ¶
type ContentChoice struct {
// Content is the textual content of a response
Content string `json:"content"`
// StopReason is the reason the model stopped generating output.
StopReason string `json:"stop_reason"`
// GenerationInfo is arbitrary information the model adds to the response.
GenerationInfo map[string]any `json:"generation_info"`
// FuncCall is non-nil when the model asks to invoke a function/tool.
// If a model invokes more than one function/tool, this field will only
// contain the first one.
FuncCall *FunctionCall `json:"func_call"`
// ToolCalls is a list of tool calls the model asks to invoke.
ToolCalls []ToolCall `json:"tool_calls"`
// This field is only used with the deepseek-reasoner model and represents the reasoning contents of the assistant message before the final answer.
ReasoningContent string `json:"reasoning_content"`
}
ContentChoice is one of the response choices returned by GenerateContent calls.
type ContentPart ¶
type ContentPart interface {
// contains filtered or unexported methods
}
ContentPart is an interface all parts of content have to implement.
type ContentPartJSON ¶ added in v0.10.55
type ContentPartJSON struct {
Type string `json:"type"`
Text string `json:"text,omitempty"`
ImageURL *ImageURLJSON `json:"image_url,omitempty"`
Binary *BinaryJSON `json:"binary,omitempty"`
ToolCall *ToolCallJSON `json:"tool_call,omitempty"`
ToolResponse *ToolResponseJSON `json:"tool_response,omitempty"`
}
ContentPartJSON represents the JSON structure for content parts
type ContentResponse ¶
type ContentResponse struct {
Choices []*ContentChoice
}
ContentResponse is the response returned by a GenerateContent call. It can potentially return multiple content choices.
type Embedder ¶ added in v0.10.63
type Embedder interface {
// CreateEmbedding creates embeddings for the given input texts.
CreateEmbedding(ctx context.Context, texts []string) ([][]float32, error)
}
Embedder is an interface for models that can create embeddings.
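A sketch that works against any Embedder implementation; the concrete embedder would come from a provider subpackage (import path assumed):

```go
package example

import (
	"context"
	"fmt"

	"github.com/tmc/langchaingo/llms" // import path assumed
)

// embedDocs embeds a batch of texts and reports the dimensionality of each vector.
func embedDocs(ctx context.Context, e llms.Embedder, docs []string) error {
	vectors, err := e.CreateEmbedding(ctx, docs)
	if err != nil {
		return err
	}
	for i, v := range vectors {
		fmt.Printf("doc %d -> %d dimensions\n", i, len(v))
	}
	return nil
}
```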
type FunctionCall ¶
type FunctionCall struct {
// The name of the function to call.
Name string `json:"name"`
// The arguments to pass to the function, as a JSON string.
Arguments string `json:"arguments"`
}
FunctionCall is the name and arguments of a function call.
type FunctionCallBehavior ¶
type FunctionCallBehavior string
FunctionCallBehavior is the behavior to use when calling functions.
const (
// FunctionCallBehaviorNone will not call any functions.
FunctionCallBehaviorNone FunctionCallBehavior = "none"
// FunctionCallBehaviorAuto will call functions automatically.
FunctionCallBehaviorAuto FunctionCallBehavior = "auto"
)
type FunctionDefinition ¶
type FunctionDefinition struct {
// Name is the name of the function.
Name string `json:"name"`
// Description is a description of the function.
Description string `json:"description"`
// Parameters is a list of parameters for the function.
Parameters *jsonschema.Schema `json:"parameters,omitempty"`
// Strict is a flag to indicate if the function should be called strictly. Only used for openai llm structured output.
Strict bool `json:"strict,omitempty"`
}
FunctionDefinition is a definition of a function that can be called by the model.
type FunctionReference ¶
type FunctionReference struct {
// Name is the name of the function.
Name string `json:"name"`
}
FunctionReference is a reference to a function.
type ImageURLContent ¶
type ImageURLContent struct {
URL string `json:"url"`
Detail string `json:"detail,omitempty"` // Detail is the detail of the image, e.g. "low", "high".
}
ImageURLContent is content with a URL pointing to an image.
func ImageURLPart ¶
func ImageURLPart(url string) ImageURLContent
ImageURLPart creates a new ImageURLContent from the given URL.
func ImageURLWithDetailPart ¶
func ImageURLWithDetailPart(url string, detail string) ImageURLContent
ImageURLWithDetailPart creates a new ImageURLContent from the given URL and detail.
func (ImageURLContent) MarshalJSON ¶
func (iuc ImageURLContent) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler for ImageURLContent
func (ImageURLContent) String ¶
func (iuc ImageURLContent) String() string
func (*ImageURLContent) UnmarshalJSON ¶
func (iuc *ImageURLContent) UnmarshalJSON(data []byte) error
UnmarshalJSON implements json.Unmarshaler for ImageURLContent
type ImageURLContentJSON ¶ added in v0.10.55
type ImageURLContentJSON struct {
Type string `json:"type"`
ImageURL ImageURLJSON `json:"image_url"`
}
ImageURLContentJSON represents the JSON structure for image URL content
type ImageURLJSON ¶ added in v0.10.55
ImageURLJSON represents the JSON structure for image URL content
type Message ¶ added in v0.11.64
type Message struct {
Role Role `json:"role"`
Parts []ContentPart `json:"parts"`
}
Message is the message sent to an LLM. It has a role and a sequence of parts. For example, it can represent one message in a chat session sent by the user, in which case Role will be RoleHuman and Parts will be the sequence of items sent in this specific message.
func MessageFromParts ¶ added in v0.11.64
func MessageFromParts(role Role, parts ...ContentPart) Message
MessageFromParts is a helper function to create a Message with a role and a list of parts.
func MessageFromTextParts ¶ added in v0.11.64
MessageFromTextParts is a helper function to create a Message with a role and a list of text parts.
func MessageFromToolCalls ¶ added in v0.11.64
MessageFromToolCalls is a helper function to create a Message with a role and a list of tool calls.
func MessageFromToolResponse ¶ added in v0.11.64
func MessageFromToolResponse(role Role, toolResponse ToolCallResponse) Message
MessageFromToolResponse is a helper function to create a Message with a role and a tool response.
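Putting the helpers together, a sketch of a small chat history built from roles and text parts (import path assumed):

```go
package example

import "github.com/tmc/langchaingo/llms" // import path assumed

// chatHistory builds a system instruction followed by the user's question,
// using only constructors documented in this package.
func chatHistory(question string) []llms.Message {
	return []llms.Message{
		llms.MessageFromParts(llms.RoleSystem, llms.TextPart("You are a terse assistant.")),
		llms.MessageFromParts(llms.RoleHuman, llms.TextPart(question)),
	}
}
```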
func (Message) GetContent ¶ added in v0.11.64
func (Message) MarshalJSON ¶ added in v0.11.64
MarshalJSON implements json.Marshaler for MessageContent
func (*Message) ToMessageContentWithPartsJSON ¶ added in v0.11.64
func (mc *Message) ToMessageContentWithPartsJSON() *MessageContentWithPartsJSON
ToMessageContentWithPartsJSON converts MessageContent to MessageContentWithPartsJSON
func (*Message) UnmarshalJSON ¶ added in v0.11.64
UnmarshalJSON implements json.Unmarshaler for MessageContent
type MessageContentJSON ¶ added in v0.10.55
MessageContentJSON represents the JSON structure for MessageContent
type MessageContentWithPartsJSON ¶ added in v0.10.55
type MessageContentWithPartsJSON struct {
Role Role `json:"role"`
Parts []ContentPart `json:"parts"`
}
MessageContentWithPartsJSON represents the JSON structure for MessageContent with parts
type Model ¶
type Model interface {
// GetName returns the name of the model.
GetName() string
// GetProviderType returns the type of provider.
GetProviderType() ProviderType
// GenerateContent asks the model to generate content from a sequence of
// messages. It's the most general interface for multi-modal LLMs that support
// chat-like interactions.
GenerateContent(ctx context.Context, messages []Message, options ...CallOption) (*ContentResponse, error)
}
Model is an interface multi-modal models implement.
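A sketch of one chat turn against any Model implementation, printing each returned choice; the import path is assumed and the model value comes from a provider subpackage:

```go
package example

import (
	"context"
	"fmt"

	"github.com/tmc/langchaingo/llms" // import path assumed
)

// generate sends a single human message and prints every choice in the response.
func generate(ctx context.Context, model llms.Model, prompt string) error {
	resp, err := model.GenerateContent(ctx,
		[]llms.Message{llms.MessageFromParts(llms.RoleHuman, llms.TextPart(prompt))},
		llms.WithMaxTokens(256),
	)
	if err != nil {
		return err
	}
	for _, choice := range resp.Choices {
		fmt.Printf("%s (stop reason: %s)\n", choice.Content, choice.StopReason)
	}
	return nil
}
```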
type PromptCacheMode ¶ added in v0.14.105
type PromptCacheMode int
const (
PromptCacheModeNone PromptCacheMode = iota
PromptCacheModeInMemory
PromptCacheModeStore
)
type PromptValue ¶
PromptValue is the interface that all prompt values must implement.
type ProviderType ¶ added in v0.10.54
type ProviderType string
ProviderType is the type of provider.
const (
// ProviderAnthropic is the type of provider.
ProviderAnthropic ProviderType = "ANTHROPIC"
// ProviderAzure is the type of provider.
ProviderAzure ProviderType = "AZURE"
// ProviderAzureAD is the type of provider.
ProviderAzureAD ProviderType = "AZURE_AD"
// ProviderBedrock is the type of provider.
ProviderBedrock ProviderType = "BEDROCK"
// ProviderCloudflare is the type of provider.
ProviderCloudflare ProviderType = "CLOUDFLARE"
// ProviderGoogleAI is the type of provider.
ProviderGoogleAI ProviderType = "GOOGLEAI"
// ProviderOpenAI is the type of provider.
ProviderOpenAI ProviderType = "OPENAI"
// ProviderPerplexity is the type of provider.
ProviderPerplexity ProviderType = "PERPLEXITY"
)
func (ProviderType) Supports ¶ added in v0.10.54
func (p ProviderType) Supports(cap Capability) bool
type ReasoningEffort ¶ added in v0.14.100
type ReasoningEffort int
type Role ¶ added in v0.11.64
type Role string
Role is the type of chat message.
const (
// RoleAI is a message sent by an AI.
RoleAI Role = "ai"
// RoleHuman is a message sent by a human.
RoleHuman Role = "human"
// RoleSystem is a message sent by the system.
RoleSystem Role = "system"
// RoleGeneric is a message sent by a generic user.
RoleGeneric Role = "generic"
// RoleTool is a message sent by a tool.
RoleTool Role = "tool"
)
type TextContent ¶
type TextContent struct {
Text string `json:"text"`
}
TextContent is content with some text.
func TextPart ¶
func TextPart(s string) TextContent
TextPart creates TextContent from a given string.
func (TextContent) MarshalJSON ¶
func (tc TextContent) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler for TextContent
func (TextContent) String ¶
func (tc TextContent) String() string
func (*TextContent) UnmarshalJSON ¶
func (tc *TextContent) UnmarshalJSON(data []byte) error
UnmarshalJSON implements json.Unmarshaler for TextContent
type TextContentJSON ¶ added in v0.10.55
TextContentJSON represents the JSON structure for text content
type Tool ¶
type Tool struct {
// Type is the type of the tool.
Type string `json:"type"`
// Function is the function to call.
Function *FunctionDefinition `json:"function,omitempty"`
// WebSearchOptions are the options for the web search tool,
// for providers and models that support Web Search grounding.
WebSearchOptions *WebSearchOptions `json:"-"`
}
Tool is a tool that can be used by the model.
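A sketch of declaring a function tool. The Parameters field takes a *jsonschema.Schema describing the arguments; its construction depends on the jsonschema package this module uses, so it is left as a commented placeholder, and the tool name is made up:

```go
package example

import "github.com/tmc/langchaingo/llms" // import path assumed

// weatherTool declares a hypothetical "get_weather" function tool. Passing it
// via WithTools lets the model request the call; see ToolCall below.
func weatherTool() llms.Tool {
	return llms.Tool{
		Type: "function",
		Function: &llms.FunctionDefinition{
			Name:        "get_weather",
			Description: "Return the current weather for a city.",
			// Parameters: a *jsonschema.Schema describing {"city": string}
			// would go here; omitted because its construction is module specific.
		},
	}
}
```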
type ToolCall ¶
type ToolCall struct {
// ID is the unique identifier of the tool call.
ID string `json:"id"`
// Type is the type of the tool call. Typically, this would be "function".
Type string `json:"type"`
// FunctionCall is the function call to be executed.
FunctionCall *FunctionCall `json:"function,omitempty"`
}
ToolCall is a call to a tool (as requested by the model) that should be executed.
func (ToolCall) GetFunctionCallArguments ¶ added in v0.14.88
func (ToolCall) GetFunctionCallName ¶ added in v0.14.88
func (ToolCall) MarshalJSON ¶
MarshalJSON implements json.Marshaler for ToolCall
func (*ToolCall) UnmarshalJSON ¶
UnmarshalJSON implements json.Unmarshaler for ToolCall
type ToolCallContentJSON ¶ added in v0.10.55
type ToolCallContentJSON struct {
Type string `json:"type"`
ToolCall ToolCallJSON `json:"tool_call"`
}
ToolCallContentJSON represents the JSON structure for tool call content
type ToolCallJSON ¶ added in v0.10.55
type ToolCallJSON struct {
ID string `json:"id"`
Type string `json:"type"`
FunctionCall *FunctionCall `json:"function"`
}
ToolCallJSON represents the JSON structure for tool call content
type ToolCallJSONOrdered ¶ added in v0.10.55
type ToolCallJSONOrdered struct {
FunctionCall *FunctionCall `json:"function"`
ID string `json:"id"`
Type string `json:"type"`
}
ToolCallJSONOrdered matches the expected field order for marshaling: function, id, type. This is only used for marshaling (UnmarshalJSON still uses ToolCallJSON for flexibility).
type ToolCallResponse ¶
type ToolCallResponse struct {
// ToolCallID is the ID of the tool call this response is for.
ToolCallID string `json:"tool_call_id"`
// Name is the name of the tool that was called.
Name string `json:"name"`
// Content is the textual content of the response.
Content string `json:"content"`
}
ToolCallResponse is the response returned by a tool call.
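A sketch of turning a model-requested ToolCall into a RoleTool message for the next turn; the runTool executor is hypothetical and the import path is assumed:

```go
package example

import "github.com/tmc/langchaingo/llms" // import path assumed

// toolReply executes a requested tool call with a caller-supplied (hypothetical)
// executor and wraps the result as a tool-response message.
func toolReply(call llms.ToolCall, runTool func(name, args string) string) llms.Message {
	if call.FunctionCall == nil {
		return llms.Message{} // nothing to execute
	}
	result := runTool(call.FunctionCall.Name, call.FunctionCall.Arguments)
	return llms.MessageFromToolResponse(llms.RoleTool, llms.ToolCallResponse{
		ToolCallID: call.ID,
		Name:       call.FunctionCall.Name,
		Content:    result,
	})
}
```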
func (ToolCallResponse) MarshalJSON ¶
func (tc ToolCallResponse) MarshalJSON() ([]byte, error)
MarshalJSON implements json.Marshaler for ToolCallResponse
func (ToolCallResponse) String ¶ added in v0.11.64
func (tc ToolCallResponse) String() string
func (*ToolCallResponse) ToToolResponseJSONOrdered ¶ added in v0.10.55
func (tc *ToolCallResponse) ToToolResponseJSONOrdered() *ToolResponseJSONOrdered
ToToolResponseJSONOrdered converts ToolCallResponse to ToolResponseJSONOrdered
func (*ToolCallResponse) UnmarshalJSON ¶
func (tc *ToolCallResponse) UnmarshalJSON(data []byte) error
UnmarshalJSON implements json.Unmarshaler for ToolCallResponse
type ToolChoice ¶
type ToolChoice struct {
// Type is the type of the tool.
Type string `json:"type"`
// Function is the function to call (if the tool is a function).
Function *FunctionReference `json:"function,omitempty"`
}
ToolChoice is a specific tool to use.
type ToolResponseContentJSON ¶ added in v0.10.55
type ToolResponseContentJSON struct {
Type string `json:"type"`
ToolResponse ToolResponseJSON `json:"tool_response"`
}
ToolResponseContentJSON represents the JSON structure for tool response content
type ToolResponseJSON ¶ added in v0.10.55
type ToolResponseJSON struct {
ToolCallID string `json:"tool_call_id"`
Name string `json:"name"`
Content string `json:"content"`
}
ToolResponseJSON represents the JSON structure for tool response content
type ToolResponseJSONOrdered ¶ added in v0.10.55
type ToolResponseJSONOrdered struct {
ToolCallID string `json:"tool_call_id"`
Name string `json:"name"`
Content string `json:"content"`
}
ToolResponseJSONOrdered matches the expected field order for marshaling: tool_call_id, name, content. This is only used for marshaling (UnmarshalJSON still uses ToolResponseJSON for flexibility).
type WebSearchOptions ¶ added in v0.14.88
type WebSearchOptions struct {
// AllowedDomains is a list of domains to search on.
// Supported by OpenAI, Anthropic, and Azure.
AllowedDomains []string
// ExcludedDomains is a list of domains to exclude from search.
// Supported by Google AI.
ExcludedDomains []string
// MaxUses is the maximum number of times the tool can be used.
// Supported by OpenAI, Anthropic, and Azure.
MaxUses int
}
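A sketch of enabling web search for providers that advertise CapabilityWebSearchTool. The Type string for a web-search tool is provider-dependent, so the value below is only an assumption; the import path is assumed as well:

```go
package example

import "github.com/tmc/langchaingo/llms" // import path assumed

// webSearchTool builds a Tool carrying WebSearchOptions; pass it with WithTools.
func webSearchTool() llms.Tool {
	return llms.Tool{
		Type: "web_search", // assumed type string; check the provider's documentation
		WebSearchOptions: &llms.WebSearchOptions{
			AllowedDomains: []string{"go.dev"}, // honored by OpenAI, Anthropic, Azure
			MaxUses:        3,
		},
	}
}
```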
Source Files ¶
Directories ¶
| Path | Synopsis |
|---|---|
| googleai | package googleai implements a langchaingo provider for Google AI LLMs. |
| googleai/internal/cmd (command) | Code generator for vertex.go from googleai.go nolint |