Documentation ¶
Overview ¶
Package google implements an API client for the Google Gemini REST API (https://ai.google.dev/gemini-api/docs).
Index ¶
- func GenerateRequest(model string, session *schema.Conversation, opts ...opt.Opt) (any, error)
- func WithFrequencyPenalty(value float64) opt.Opt
- func WithJSONOutput(schema *jsonschema.Schema) opt.Opt
- func WithMaxTokens(value uint) opt.Opt
- func WithOutputDimensionality(d uint) opt.Opt
- func WithPresencePenalty(value float64) opt.Opt
- func WithSeed(value int) opt.Opt
- func WithStopSequences(values ...string) opt.Opt
- func WithSystemPrompt(value string) opt.Opt
- func WithTaskType(taskType string) opt.Opt
- func WithTemperature(value float64) opt.Opt
- func WithThinking() opt.Opt
- func WithThinkingBudget(budgetTokens uint) opt.Opt
- func WithTitle(title string) opt.Opt
- func WithTopK(value uint) opt.Opt
- func WithTopP(value float64) opt.Opt
- type Client
- func (c *Client) BatchEmbedding(ctx context.Context, model schema.Model, texts []string, opts ...opt.Opt) ([][]float64, error)
- func (c *Client) Embedding(ctx context.Context, model schema.Model, text string, opts ...opt.Opt) ([]float64, error)
- func (c *Client) GetModel(ctx context.Context, name string, opts ...opt.Opt) (*schema.Model, error)
- func (c *Client) ListModels(ctx context.Context, opts ...opt.Opt) ([]schema.Model, error)
- func (*Client) Name() string
- func (c *Client) WithSession(ctx context.Context, model schema.Model, session *schema.Conversation, ...) (*schema.Message, *schema.Usage, error)
- func (c *Client) WithoutSession(ctx context.Context, model schema.Model, message *schema.Message, ...) (*schema.Message, *schema.Usage, error)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GenerateRequest ¶
func GenerateRequest(model string, session *schema.Conversation, opts ...opt.Opt) (any, error)
GenerateRequest builds a generate request from options without sending it. Useful for testing and debugging.
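The options in this package follow Go's functional-options pattern: each `With…` function returns an `opt.Opt` that mutates the request being built. A minimal self-contained sketch of that pattern (the `request` struct, option type, and defaults below are illustrative, not the package's actual internals):

```go
package main

import "fmt"

// request mirrors the shape of a generation request (illustrative only).
type request struct {
	Temperature float64
	MaxTokens   uint
}

// Opt mutates a request; analogous to opt.Opt in this package.
type Opt func(*request)

func WithTemperature(v float64) Opt { return func(r *request) { r.Temperature = v } }
func WithMaxTokens(v uint) Opt      { return func(r *request) { r.MaxTokens = v } }

// generateRequest builds a request from options without sending it,
// mirroring what GenerateRequest does for testing and debugging.
func generateRequest(opts ...Opt) request {
	r := request{Temperature: 1.0} // assumed default
	for _, o := range opts {
		o(&r)
	}
	return r
}

func main() {
	r := generateRequest(WithTemperature(0.2), WithMaxTokens(1024))
	fmt.Printf("temp=%.1f max=%d\n", r.Temperature, r.MaxTokens)
}
```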
func WithFrequencyPenalty ¶
func WithFrequencyPenalty(value float64) opt.Opt
WithFrequencyPenalty sets the frequency penalty (-2.0 to 2.0). Positive values penalise tokens proportionally to how often they have appeared so far, reducing repetition.
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
func WithJSONOutput ¶
func WithJSONOutput(schema *jsonschema.Schema) opt.Opt
WithJSONOutput constrains the model to produce JSON conforming to the given schema. Sets responseMimeType to "application/json" and responseJsonSchema on the request.
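The resulting request body carries the schema under `generationConfig`, per the Gemini REST API's field names. A sketch of that shape (the `buildBody` helper is illustrative; only the JSON field names come from the API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildBody assembles the generationConfig fields the Gemini REST API
// expects when structured JSON output is requested (helper name is
// illustrative; field names follow the REST API).
func buildBody(schema map[string]any) map[string]any {
	return map[string]any{
		"generationConfig": map[string]any{
			"responseMimeType":   "application/json",
			"responseJsonSchema": schema,
		},
	}
}

func main() {
	schema := map[string]any{
		"type": "object",
		"properties": map[string]any{
			"name": map[string]any{"type": "string"},
		},
		"required": []string{"name"},
	}
	out, _ := json.MarshalIndent(buildBody(schema), "", "  ")
	fmt.Println(string(out))
}
```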
func WithMaxTokens ¶
func WithMaxTokens(value uint) opt.Opt
WithMaxTokens sets the maximum number of tokens to generate (minimum 1).
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
func WithOutputDimensionality ¶
func WithOutputDimensionality(d uint) opt.Opt
WithOutputDimensionality sets the output dimensionality for the embedding. The default is 3072. Recommended values are 768, 1536, or 3072. Smaller dimensions save storage space with minimal quality loss.
See: https://ai.google.dev/gemini-api/docs/embeddings#controlling-embedding-size
func WithPresencePenalty ¶
func WithPresencePenalty(value float64) opt.Opt
WithPresencePenalty sets the presence penalty (-2.0 to 2.0). Positive values penalise tokens that have already appeared, encouraging the model to talk about new topics.
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
func WithSeed ¶
func WithSeed(value int) opt.Opt
WithSeed sets the seed for deterministic generation. Using the same seed with the same inputs should produce repeatable results.
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
func WithStopSequences ¶
func WithStopSequences(values ...string) opt.Opt
WithStopSequences sets custom stop sequences for the request. Generation stops when any of the specified sequences is encountered.
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
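The stop check happens server-side, but its semantics can be sketched client-side: output ends at the first occurrence of any stop sequence, with the sequence itself excluded (the helper below is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// truncateAtStop mimics the server-side behaviour: generation ends at
// the first occurrence of any stop sequence, which is not included in
// the output.
func truncateAtStop(text string, stops ...string) string {
	cut := len(text)
	for _, s := range stops {
		if i := strings.Index(text, s); i >= 0 && i < cut {
			cut = i
		}
	}
	return text[:cut]
}

func main() {
	fmt.Println(truncateAtStop("Answer: 42\nEND\nmore", "END", "STOP"))
}
```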
func WithSystemPrompt ¶
func WithSystemPrompt(value string) opt.Opt
WithSystemPrompt sets the system instruction for the request.
See: https://ai.google.dev/gemini-api/docs/system-instructions
func WithTaskType ¶
func WithTaskType(taskType string) opt.Opt
WithTaskType sets the task type for the embedding request. Supported values: SEMANTIC_SIMILARITY, CLASSIFICATION, CLUSTERING, RETRIEVAL_DOCUMENT, RETRIEVAL_QUERY, CODE_RETRIEVAL_QUERY, QUESTION_ANSWERING, FACT_VERIFICATION.
See: https://ai.google.dev/gemini-api/docs/embeddings#specify-task-type
func WithTemperature ¶
func WithTemperature(value float64) opt.Opt
WithTemperature sets the temperature for the request (0.0 to 2.0). Higher values produce more random output, lower values more deterministic.
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
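Conceptually, temperature divides the model's logits before the softmax: a low temperature sharpens the distribution toward the most likely token, a high one flattens it. A self-contained sketch of that effect (illustrative, not the server's implementation):

```go
package main

import (
	"fmt"
	"math"
)

// softmax converts logits to probabilities at temperature t. Lower t
// sharpens the distribution (more deterministic); higher t flattens it
// (more random).
func softmax(logits []float64, t float64) []float64 {
	out := make([]float64, len(logits))
	var sum float64
	for i, l := range logits {
		out[i] = math.Exp(l / t)
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}

func main() {
	logits := []float64{2, 1, 0}
	fmt.Printf("p(top) at t=0.5: %.3f\n", softmax(logits, 0.5)[0])
	fmt.Printf("p(top) at t=2.0: %.3f\n", softmax(logits, 2.0)[0])
}
```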
func WithThinking ¶
func WithThinking() opt.Opt
WithThinking enables the model's extended thinking/reasoning capability. When enabled, the model may include internal reasoning in its response. Only supported by models that have thinking capabilities (e.g. gemini-2.5-flash).
func WithThinkingBudget ¶
func WithThinkingBudget(budgetTokens uint) opt.Opt
WithThinkingBudget enables extended thinking with a specific token budget.
See: https://ai.google.dev/gemini-api/docs/thinking
func WithTitle ¶
func WithTitle(title string) opt.Opt
WithTitle sets the title for the embedding request. Only applicable when TaskType is RETRIEVAL_DOCUMENT.
See: https://ai.google.dev/gemini-api/docs/embeddings#specify-task-type
func WithTopK ¶
func WithTopK(value uint) opt.Opt
WithTopK sets the top-K sampling parameter (minimum 1). Limits token selection to the K most probable tokens.
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
func WithTopP ¶
func WithTopP(value float64) opt.Opt
WithTopP sets the top-P (nucleus) sampling parameter (0.0 to 1.0). Limits token selection to the smallest set of tokens whose cumulative probability reaches P.
See: https://ai.google.dev/gemini-api/docs/text-generation#configuration-parameters
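Top-K and top-P (nucleus) filtering both restrict the candidate set the model samples from. A sketch of how the two combine (illustrative only; the real filtering happens server-side):

```go
package main

import (
	"fmt"
	"sort"
)

// candidate pairs a token with its probability.
type candidate struct {
	token string
	prob  float64
}

// filter applies top-K then top-P filtering: keep the K most probable
// tokens, then the smallest prefix whose cumulative probability
// reaches p.
func filter(cands []candidate, k int, p float64) []candidate {
	sort.Slice(cands, func(i, j int) bool { return cands[i].prob > cands[j].prob })
	if k < len(cands) {
		cands = cands[:k]
	}
	var cum float64
	for i, c := range cands {
		cum += c.prob
		if cum >= p {
			return cands[:i+1]
		}
	}
	return cands
}

func main() {
	cands := []candidate{{"the", 0.5}, {"a", 0.3}, {"cat", 0.15}, {"dog", 0.05}}
	for _, c := range filter(cands, 3, 0.9) {
		fmt.Println(c.token)
	}
}
```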
Types ¶
type Client ¶
type Client struct {
*client.Client
*modelcache.ModelCache
}
func (*Client) BatchEmbedding ¶
func (c *Client) BatchEmbedding(ctx context.Context, model schema.Model, texts []string, opts ...opt.Opt) ([][]float64, error)
BatchEmbedding generates embedding vectors for multiple texts using the specified model.
func (*Client) Embedding ¶
func (c *Client) Embedding(ctx context.Context, model schema.Model, text string, opts ...opt.Opt) ([]float64, error)
Embedding generates an embedding vector for a single text using the specified model.
func (*Client) ListModels ¶
ListModels returns all available models from the Gemini API.