Documentation
Overview
Package openai provides an OpenAI LLM implementation using the Responses API.
This implementation is strictly aligned with ADK-Go's model architecture:
- Uses OpenAI's Responses API (/v1/responses)
- Exposes a single GenerateContent method with a stream boolean parameter
- Returns iter.Seq2[*model.Response, error]
- Uses StreamingAggregator to mark streamed chunks with the Partial flag
- Handles reasoning/thinking output for o1/o3/o4/gpt-5 models
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type Client
type Client struct {
// contains filtered or unexported fields
}
Client is an OpenAI LLM implementation using the Responses API. It implements the model.LLM interface aligned with ADK-Go.
func (*Client) GenerateContent
func (c *Client) GenerateContent(ctx context.Context, req *model.Request, stream bool) iter.Seq2[*model.Response, error]
GenerateContent produces responses for the given request; this is the ADK-Go-aligned interface.
When stream=false:
- Yields exactly one Response with complete content, Partial=false
When stream=true:
- Yields multiple partial Responses (Partial=true) for real-time UI updates
- Finally yields aggregated Response (Partial=false) for session persistence
type Config
type Config struct {
APIKey string
Model string
MaxTokens int
Temperature *float64
BaseURL string
Timeout time.Duration
MaxRetries int
MaxToolOutputLength int
EnableReasoning bool
ReasoningBudget int // Maps to reasoning.effort: low/medium/high
}
Config configures the OpenAI client.
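Since ReasoningBudget is documented as mapping to reasoning.effort (low/medium/high), the translation from a numeric budget to an effort level might look like the sketch below. The function name and thresholds are illustrative assumptions, not the package's actual cutoffs.

```go
package main

import "fmt"

// effortFromBudget sketches how a numeric ReasoningBudget could map to
// the Responses API's reasoning.effort levels. The thresholds here are
// illustrative assumptions, not the package's real boundaries.
func effortFromBudget(budget int) string {
	switch {
	case budget <= 0:
		return "" // reasoning disabled
	case budget < 1024:
		return "low"
	case budget < 8192:
		return "medium"
	default:
		return "high"
	}
}

func main() {
	for _, b := range []int{0, 512, 4096, 32768} {
		fmt.Printf("budget=%d -> effort=%q\n", b, effortFromBudget(b))
	}
}
```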
type Option
type Option func(*Config)
Option configures the OpenAI client.
func WithMaxTokens
WithMaxTokens sets the maximum number of output tokens.
func WithReasoning
WithReasoning enables reasoning with the given budget.
func WithTemperature
WithTemperature sets the sampling temperature.
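The Option type is a standard Go functional-options pattern over Config. The sketch below rebuilds it with local stand-ins so it is self-contained; the apply helper and the default values are assumptions, since this page does not document the package's constructor (the Functions section is empty).

```go
package main

import "fmt"

// Config and Option mirror the package's functional-options pattern
// using local stand-ins, so this example compiles on its own.
type Config struct {
	Model       string
	MaxTokens   int
	Temperature *float64
}

type Option func(*Config)

// WithMaxTokens sets the maximum number of output tokens.
func WithMaxTokens(n int) Option {
	return func(c *Config) { c.MaxTokens = n }
}

// WithTemperature sets the sampling temperature.
func WithTemperature(t float64) Option {
	return func(c *Config) { c.Temperature = &t }
}

// apply builds a Config from defaults plus options, as a constructor
// presumably would. The defaults here are illustrative assumptions.
func apply(opts ...Option) Config {
	c := Config{Model: "gpt-5", MaxTokens: 4096}
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	cfg := apply(WithMaxTokens(1024), WithTemperature(0.2))
	fmt.Println(cfg.Model, cfg.MaxTokens, *cfg.Temperature)
}
```

Taking a pointer to Config lets each option mutate only the field it owns, so new options can be added without breaking existing call sites.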