Documentation ¶
Overview ¶
Package googleai provides a provider implementation for Google AI's Gemini API.
Google AI uses a different API format than OpenAI, so this package implements custom parsing, enrichment, and extraction logic.
Key differences from OpenAI:
- Endpoint: /v1beta/models/{model}:generateContent (the model name is part of the path)
- Auth: an x-goog-api-key header rather than a Bearer token
- Requests use a contents array of parts instead of messages
- The system prompt goes in the systemInstruction field
- Responses use candidates instead of choices
- Token usage fields: promptTokenCount/candidatesTokenCount
Basic usage:

	provider, err := googleai.New("your-api-key")
	// handle err before use
	proxy := llmproxy.NewProxy(provider)
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Candidate ¶
type Candidate struct {
	Content       *Content       `json:"content,omitempty"`
	FinishReason  string         `json:"finishReason,omitempty"`
	SafetyRatings []SafetyRating `json:"safetyRatings,omitempty"`
}
Candidate represents a single completion candidate.
type Enricher ¶
type Enricher struct {
	// APIKey is the Google AI API key.
	APIKey string
}
Enricher implements llmproxy.RequestEnricher for Google AI's API. It sets the required x-goog-api-key header.
func NewEnricher ¶
NewEnricher creates a new Google AI enricher with the given API key.
type Extractor ¶
type Extractor struct{}
Extractor implements llmproxy.ResponseExtractor for Google AI responses.
func NewExtractor ¶
func NewExtractor() *Extractor
NewExtractor creates a new Google AI response extractor.
type GenerationConfig ¶
type GenerationConfig struct {
	Temperature     float64  `json:"temperature,omitempty"`
	TopP            float64  `json:"topP,omitempty"`
	TopK            int      `json:"topK,omitempty"`
	MaxOutputTokens int      `json:"maxOutputTokens,omitempty"`
	StopSequences   []string `json:"stopSequences,omitempty"`
}
GenerationConfig contains generation parameters.
type InlineData ¶
InlineData represents binary data in a part.
type Parser ¶
type Parser struct{}
Parser implements llmproxy.BodyParser for Google AI's request format.
func (*Parser) Parse ¶
func (p *Parser) Parse(body io.ReadCloser) (llmproxy.BodyMetadata, []byte, error)
Parse reads a Google AI request body and extracts metadata. It handles Google AI-specific fields like contents with parts.
type Part ¶
type Part struct {
	Text       string                 `json:"text,omitempty"`
	InlineData *InlineData            `json:"inlineData,omitempty"`
	Custom     map[string]interface{} `json:"-"`
}
Part represents a single part of content (text, inline_data, etc.).
func (*Part) UnmarshalJSON ¶
func (p *Part) UnmarshalJSON(data []byte) error
UnmarshalJSON handles flexible part content.
type PromptFeedback ¶
type PromptFeedback struct {
	BlockReason   string         `json:"blockReason,omitempty"`
	SafetyRatings []SafetyRating `json:"safetyRatings,omitempty"`
}
PromptFeedback contains feedback about the prompt.
type Provider ¶
type Provider struct {
	*llmproxy.BaseProvider
}
Provider is a Google AI provider implementation.
type Request ¶
type Request struct {
	Model             string                 `json:"-"` // Extracted from path
	Contents          []Content              `json:"contents,omitempty"`
	SystemInstruction *Content               `json:"systemInstruction,omitempty"`
	GenerationConfig  GenerationConfig       `json:"generationConfig,omitempty"`
	SafetySettings    []SafetySetting        `json:"safetySettings,omitempty"`
	Custom            map[string]interface{} `json:"-"`
}
Request represents a Google AI generateContent request.
func (*Request) UnmarshalJSON ¶
func (r *Request) UnmarshalJSON(data []byte) error
UnmarshalJSON captures unknown fields into Custom.
type Resolver ¶
Resolver implements llmproxy.URLResolver for Google AI's API. It constructs the generateContent endpoint URL with the model in the path.
func NewResolver ¶
NewResolver creates a new resolver with the given base URL.
type Response ¶
type Response struct {
	Candidates     []Candidate     `json:"candidates,omitempty"`
	PromptFeedback *PromptFeedback `json:"promptFeedback,omitempty"`
	UsageMetadata  UsageMetadata   `json:"usageMetadata,omitempty"`
	ModelName      string          `json:"model,omitempty"`
}
Response represents a Google AI generateContent response.
type SafetyRating ¶
type SafetyRating struct {
	Category    string `json:"category"`
	Probability string `json:"probability"`
}
SafetyRating represents safety assessment for a candidate.
type SafetySetting ¶
SafetySetting represents a safety configuration.
type StreamingExtractor ¶ added in v0.0.6
type StreamingExtractor struct {
	*Extractor
}
func NewStreamingExtractor ¶ added in v0.0.6
func NewStreamingExtractor() *StreamingExtractor
func (*StreamingExtractor) ExtractStreamingWithController ¶ added in v0.0.6
func (e *StreamingExtractor) ExtractStreamingWithController(resp *http.Response, w http.ResponseWriter, rc *http.ResponseController) (llmproxy.ResponseMetadata, error)
func (*StreamingExtractor) IsStreamingResponse ¶ added in v0.0.6
func (e *StreamingExtractor) IsStreamingResponse(resp *http.Response) bool
type UsageMetadata ¶
type UsageMetadata struct {
	PromptTokenCount     int `json:"promptTokenCount"`
	CandidatesTokenCount int `json:"candidatesTokenCount"`
	TotalTokenCount      int `json:"totalTokenCount"`
}
UsageMetadata tracks token usage in a Google AI response.