Documentation
Overview ¶
Package gemini provides HTTP client functionality for the Gemini API.
Index ¶
- Constants
- func BytesToFloat32Slice(b []byte) []float32
- func CallLLM(ctx context.Context, apiKey string, req LLMRequest) (LLMResult, *LLMError)
- func EmbedQuery(ctx context.Context, apiKey string, query string) []float32
- func Float32SliceToBytes(v []float32) []byte
- func ResetBaseURL()
- func SetBaseURL(url string)
- func StreamLLM(ctx context.Context, apiKey string, req LLMRequest, onChunk func(string)) (LLMResult, *LLMError)
- type APIKeyValidationResult
- type BatchEmbedResult
- type EmbedError
- type EmbedRequest
- type EmbedResult
- type LLMError
- type LLMRequest
- type LLMResult
Constants ¶
const (
	// EmbeddingDimensions is the expected embedding vector size.
	EmbeddingDimensions = 768
	// EmbeddingModel is the Gemini embedding model to use.
	EmbeddingModel = "gemini-embedding-2-preview"
	// EmbeddingBatchSize is the maximum chunks per batch request.
	EmbeddingBatchSize = 20
)
const (
	// LLMModel is the default Gemini LLM model to use.
	// gemini-2.5-flash is the recommended stable model as of March 2026.
	LLMModel = "gemini-2.5-flash"
	// LLMModelFallback is used when the primary model returns 404.
	LLMModelFallback = "gemini-3.0-flash"
)
Variables ¶
This section is empty.
Functions ¶
func BytesToFloat32Slice ¶
func BytesToFloat32Slice(b []byte) []float32
BytesToFloat32Slice converts little-endian bytes back to a float32 slice.
func CallLLM ¶
func CallLLM(ctx context.Context, apiKey string, req LLMRequest) (LLMResult, *LLMError)
CallLLM makes a non-streaming generateContent request. Returns LLMResult on success, LLMError on failure — never panics. Falls back to LLMModelFallback if the primary model returns 404.
func EmbedQuery ¶
func EmbedQuery(ctx context.Context, apiKey string, query string) []float32
EmbedQuery embeds a single query string with taskType RETRIEVAL_QUERY. Returns nil on any error — never panics.
func Float32SliceToBytes ¶
func Float32SliceToBytes(v []float32) []byte
Float32SliceToBytes converts a float32 slice to little-endian bytes for sqlite-vec storage.
func StreamLLM ¶
func StreamLLM(ctx context.Context, apiKey string, req LLMRequest, onChunk func(string)) (LLMResult, *LLMError)
StreamLLM makes a streaming generateContent request. Calls onChunk for each text chunk as it arrives via SSE. Returns LLMResult with assembled text on completion. Returns LLMError on failure — never panics. Falls back to LLMModelFallback if the primary model returns 404.
Types ¶
type APIKeyValidationResult ¶
APIKeyValidationResult contains the result of validating a Gemini API key.
func ValidateAPIKey ¶
func ValidateAPIKey(ctx context.Context, apiKey string) APIKeyValidationResult
ValidateAPIKey makes a lightweight embedContent call to verify the key.
type BatchEmbedResult ¶
type BatchEmbedResult struct {
Results []EmbedResult
Errors []EmbedError
}
BatchEmbedResult contains results from a batch embedding operation.
func EmbedChunks ¶
func EmbedChunks(ctx context.Context, apiKey string, chunks []EmbedRequest) BatchEmbedResult
EmbedChunks embeds up to 20 chunks in a single batchEmbedContents request. Uses taskType RETRIEVAL_DOCUMENT. Returns all chunks as errors if the API call fails entirely — never panics.
type EmbedError ¶
EmbedError represents a failed embedding attempt.
type EmbedRequest ¶
EmbedRequest represents a single chunk to embed.
type EmbedResult ¶
EmbedResult represents a successful embedding result.