Documentation
Index ¶
- Constants
- func AddTypeFieldToInputItems(data []byte) ([]byte, error)
- func GetInputValue(input responses.ResponseNewParamsInputUnion) any
- func IsContextCanceled(err error) bool
- type APIStyle
- type AnthropicBetaMessagesRequest
- type AnthropicMessagesRequest
- type BetaRound
- type Client
- type ErrorDetail
- type ErrorResponse
- type Grouper
- type HandleContext
- func (hc *HandleContext) CallOnStreamComplete()
- func (hc *HandleContext) ProcessStream(nextFunc func() (bool, error, interface{}), handleFunc func(interface{}) error) error
- func (hc *HandleContext) SendError(err error, errorType, code string)
- func (hc *HandleContext) SetupSSEHeaders()
- func (hc *HandleContext) WithOnStreamComplete(hook func()) *HandleContext
- func (hc *HandleContext) WithOnStreamError(hook func(error)) *HandleContext
- func (hc *HandleContext) WithOnStreamEvent(hook func(interface{}) error) *HandleContext
- type OpenAIChatCompletionRequest
- type OpenAIConfig
- type Response
- type ResponseCreateRequest
- type ResponseInputItemUnionParam
- type ResponseNewParams
- type ResponseNewParamsInputUnion
- type RoundStats
- type TokenUsage
- type Transformer
- type V1Round
Constants ¶
const CodexAPIBase = "https://chatgpt.com/backend-api"
CodexAPIBase is the API base URL for ChatGPT/Codex OAuth provider
Variables ¶
This section is empty.
Functions ¶
func AddTypeFieldToInputItems ¶
func AddTypeFieldToInputItems(data []byte) ([]byte, error)
AddTypeFieldToInputItems preprocesses the JSON to add "type": "message" to input items that don't have a type field. This is necessary because the OpenAI SDK's union deserializer requires the type field to correctly match variants.
func GetInputValue ¶
func GetInputValue(input responses.ResponseNewParamsInputUnion) any
GetInputValue extracts the raw input value from ResponseNewParamsInputUnion. Returns the underlying string, array, or nil.
func IsContextCanceled ¶
func IsContextCanceled(err error) bool
IsContextCanceled checks if the error is due to context cancellation.
Types ¶
type AnthropicBetaMessagesRequest ¶
type AnthropicBetaMessagesRequest struct {
	Stream bool `json:"stream"`
	anthropic.BetaMessageNewParams
}
AnthropicBetaMessagesRequest embeds the official Anthropic beta SDK request type directly, adding a Stream flag.
func (*AnthropicBetaMessagesRequest) UnmarshalJSON ¶
func (r *AnthropicBetaMessagesRequest) UnmarshalJSON(data []byte) error
type AnthropicMessagesRequest ¶
type AnthropicMessagesRequest struct {
	Stream bool `json:"stream"`
	anthropic.MessageNewParams
}
AnthropicMessagesRequest wraps the SDK's MessageNewParams request type, adding a Stream flag.
func (*AnthropicMessagesRequest) UnmarshalJSON ¶
func (r *AnthropicMessagesRequest) UnmarshalJSON(data []byte) error
type BetaRound ¶
type BetaRound struct {
	Messages       []anthropic.BetaMessageParam
	IsCurrentRound bool
	Stats          *RoundStats // Optional metadata about the round structure
}
BetaRound represents a conversation round for v1beta API.
type Client ¶
type Client interface {
	// APIStyle returns the type of provider this client implements
	APIStyle() APIStyle
	// Close closes any resources held by the client
	Close() error
}
Client is the unified interface for AI provider clients
type ErrorDetail ¶
type ErrorDetail struct {
	Message string `json:"message"`
	Type    string `json:"type"`
	Code    string `json:"code,omitempty"`
}
ErrorDetail represents error details
type ErrorResponse ¶
type ErrorResponse struct {
	Error ErrorDetail `json:"error"`
}
ErrorResponse represents an error response
type Grouper ¶
type Grouper struct{}
Grouper provides methods to group messages into conversation rounds.
func (*Grouper) GroupBeta ¶
func (g *Grouper) GroupBeta(messages []anthropic.BetaMessageParam) []BetaRound
GroupBeta groups beta messages into conversation rounds.
func (*Grouper) GroupV1 ¶
func (g *Grouper) GroupV1(messages []anthropic.MessageParam) []V1Round
GroupV1 groups v1 messages into conversation rounds. A round starts with a pure user message and includes all subsequent messages (assistant with tool use, tool results) until the next pure user message (exclusive).
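The grouping rule above can be illustrated with a stand-in message type (the real code operates on anthropic.MessageParam; the `msg` struct and `groupRounds` function here are hypothetical simplifications):

```go
package main

import "fmt"

// msg is a simplified stand-in for anthropic.MessageParam: a role plus a
// flag marking whether a user message carries a tool result rather than a
// pure instruction.
type msg struct {
	Role       string
	ToolResult bool
}

// groupRounds sketches the documented rule: a round starts at a pure user
// message and runs until (exclusive) the next pure user message, sweeping
// up assistant tool-use messages and tool results in between.
func groupRounds(msgs []msg) [][]msg {
	var rounds [][]msg
	for _, m := range msgs {
		pureUser := m.Role == "user" && !m.ToolResult
		if pureUser || len(rounds) == 0 {
			rounds = append(rounds, nil) // open a new round
		}
		i := len(rounds) - 1
		rounds[i] = append(rounds[i], m)
	}
	return rounds
}

func main() {
	conv := []msg{
		{Role: "user"},                   // round 1 starts
		{Role: "assistant"},              // tool use
		{Role: "user", ToolResult: true}, // tool result, stays in round 1
		{Role: "assistant"},
		{Role: "user"},      // round 2 starts
		{Role: "assistant"},
	}
	fmt.Println(len(groupRounds(conv))) // 2
}
```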
func (*Grouper) IsPureBetaUserMessage ¶
func (g *Grouper) IsPureBetaUserMessage(msg anthropic.BetaMessageParam) bool
IsPureBetaUserMessage checks if a beta message is a pure user instruction.
func (*Grouper) IsPureUserMessage ¶
func (g *Grouper) IsPureUserMessage(msg anthropic.MessageParam) bool
IsPureUserMessage checks if a v1 message is a pure user instruction (not a tool result).
type HandleContext ¶
type HandleContext struct {
	// Gin context
	GinContext *gin.Context

	// Model info
	ResponseModel string

	// Hooks for stream processing (chainable - multiple hooks can be added)
	OnStreamEventHooks    []func(event interface{}) error
	OnStreamCompleteHooks []func()
	OnStreamErrorHooks    []func(err error)

	// Stream configuration flags
	DisableStreamUsage bool // Don't include usage in streaming chunks
}
HandleContext provides dependencies for handle functions. It uses the builder pattern for optional configuration and hooks.
func NewHandleContext ¶
func NewHandleContext(c *gin.Context, responseModel string) *HandleContext
NewHandleContext creates a new HandleContext with required dependencies.
func (*HandleContext) CallOnStreamComplete ¶
func (hc *HandleContext) CallOnStreamComplete()
CallOnStreamComplete calls all OnStreamComplete hooks. This is useful for non-streaming handlers that still need to invoke complete hooks.
func (*HandleContext) ProcessStream ¶
func (hc *HandleContext) ProcessStream(nextFunc func() (bool, error, interface{}), handleFunc func(interface{}) error) error
ProcessStream provides a generic framework for processing streaming responses. It handles context cancellation, error checking, and event processing.
nextFunc should return (true, nil, event) to continue, (false, nil, nil) to stop, or (false, err, nil) on error. handleFunc is called for each event after OnStreamEventHooks are invoked. It can be used to send the event to the client.
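The nextFunc/handleFunc contract can be exercised with a standalone loop that mirrors the documented behavior (a sketch; the real method also handles context cancellation and hooks, which are omitted here):

```go
package main

import "fmt"

// processStream is a simplified sketch of the ProcessStream loop: pull
// events from nextFunc until it signals stop or an error, passing each
// event to handleFunc.
func processStream(nextFunc func() (bool, error, interface{}), handleFunc func(interface{}) error) error {
	for {
		ok, err, event := nextFunc()
		if err != nil {
			return err // (false, err, nil): propagate the stream error
		}
		if !ok {
			return nil // (false, nil, nil): stream finished cleanly
		}
		if err := handleFunc(event); err != nil {
			return err
		}
	}
}

func main() {
	events := []string{"delta-1", "delta-2", "done"}
	i := 0
	err := processStream(
		func() (bool, error, interface{}) {
			if i >= len(events) {
				return false, nil, nil
			}
			e := events[i]
			i++
			return true, nil, e
		},
		func(event interface{}) error {
			fmt.Println(event) // a real handler would write an SSE chunk here
			return nil
		},
	)
	fmt.Println(err == nil) // true
}
```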
func (*HandleContext) SendError ¶
func (hc *HandleContext) SendError(err error, errorType, code string)
SendError sends an error response to the client.
func (*HandleContext) SetupSSEHeaders ¶
func (hc *HandleContext) SetupSSEHeaders()
SetupSSEHeaders sets the standard SSE (Server-Sent Events) headers.
func (*HandleContext) WithOnStreamComplete ¶
func (hc *HandleContext) WithOnStreamComplete(hook func()) *HandleContext
WithOnStreamComplete adds a hook that is called when stream completes successfully. Multiple hooks can be added and will be called in order.
func (*HandleContext) WithOnStreamError ¶
func (hc *HandleContext) WithOnStreamError(hook func(error)) *HandleContext
WithOnStreamError adds a hook that is called when stream encounters an error. Multiple hooks can be added and will be called in order.
func (*HandleContext) WithOnStreamEvent ¶
func (hc *HandleContext) WithOnStreamEvent(hook func(interface{}) error) *HandleContext
WithOnStreamEvent adds a hook that is called for each stream event. Multiple hooks can be added and will be called in order.
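The builder-style chaining these With* methods enable can be shown with a stripped-down stand-in (the `handleCtx` type here is hypothetical; the real HandleContext also carries the gin context and model info):

```go
package main

import "fmt"

// handleCtx is a stripped-down sketch of HandleContext's hook chaining:
// each With* method appends a hook and returns the receiver, so calls chain.
type handleCtx struct {
	onEvent    []func(interface{}) error
	onComplete []func()
}

func (hc *handleCtx) WithOnStreamEvent(h func(interface{}) error) *handleCtx {
	hc.onEvent = append(hc.onEvent, h)
	return hc
}

func (hc *handleCtx) WithOnStreamComplete(h func()) *handleCtx {
	hc.onComplete = append(hc.onComplete, h)
	return hc
}

func main() {
	hc := (&handleCtx{}).
		WithOnStreamEvent(func(e interface{}) error {
			fmt.Println("event:", e)
			return nil
		}).
		WithOnStreamComplete(func() { fmt.Println("complete") })
	fmt.Println(len(hc.onEvent), len(hc.onComplete)) // 1 1
}
```

Hooks accumulate rather than replace, which is why the docs stress that multiple hooks run in registration order.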
type OpenAIChatCompletionRequest ¶
type OpenAIChatCompletionRequest struct {
	openai.ChatCompletionNewParams
	Stream bool `json:"stream"`
}
OpenAIChatCompletionRequest wraps the OpenAI chat completion request type with extra fields.
func (*OpenAIChatCompletionRequest) UnmarshalJSON ¶
func (r *OpenAIChatCompletionRequest) UnmarshalJSON(data []byte) error
type OpenAIConfig ¶
type OpenAIConfig struct {
	// HasThinking indicates whether the request contains thinking content
	// This can be used by providers like DeepSeek to handle reasoning_content
	HasThinking bool

	// ReasoningEffort specifies the reasoning effort level for OpenAI-compatible APIs
	// Valid values: "none", "minimal", "low", "medium", "high", "xhigh"
	// Defaults to "low" when HasThinking is true
	ReasoningEffort shared.ReasoningEffort
}
OpenAIConfig contains additional metadata that may be used by provider transforms
type ResponseCreateRequest ¶
type ResponseCreateRequest struct {
	// Stream indicates whether to stream the response.
	// This is not part of ResponseNewParams as streaming is controlled
	// by using the NewStreaming() method on the SDK client.
	Stream bool `json:"stream"`

	// Embed the native SDK type for all other fields
	responses.ResponseNewParams
}
ResponseCreateRequest wraps the native ResponseNewParams with additional fields for proxy-specific handling like the `stream` parameter.
func (*ResponseCreateRequest) UnmarshalJSON ¶
func (r *ResponseCreateRequest) UnmarshalJSON(data []byte) error
UnmarshalJSON implements custom JSON unmarshaling for ResponseCreateRequest. It handles both the custom Stream field and the embedded ResponseNewParams.
type ResponseInputItemUnionParam ¶
type ResponseInputItemUnionParam = responses.ResponseInputItemUnionParam
ResponseInputItemUnionParam is an alias to the native OpenAI SDK type
type ResponseNewParams ¶
type ResponseNewParams = responses.ResponseNewParams
ResponseNewParams is an alias to the native OpenAI SDK type
type ResponseNewParamsInputUnion ¶
type ResponseNewParamsInputUnion = responses.ResponseNewParamsInputUnion
ResponseNewParamsInputUnion is an alias to the native OpenAI SDK type
type RoundStats ¶
type RoundStats struct {
	UserMessageCount int  // Number of pure user messages in this round (should be 1)
	AssistantCount   int  // Number of assistant messages
	ToolResultCount  int  // Number of tool result messages
	TotalMessages    int  // Total messages in the round
	HasThinking      bool // Whether any assistant message contains thinking blocks
}
RoundStats contains metadata about a round's message composition.
type TokenUsage ¶
type TokenUsage struct {
	// InputTokens is the number of input/prompt tokens consumed (excluding cache)
	InputTokens int `json:"input_tokens"`

	// OutputTokens is the number of output/completion tokens consumed
	OutputTokens int `json:"output_tokens"`

	// CacheInputTokens is the number of cache-related tokens consumed
	// (includes both cache creation and cache read operations)
	CacheInputTokens int `json:"cache_input_tokens,omitempty"`

	// SystemTokens represents tokens consumed by system-level operations
	// such as prompt templates, system instructions, or framework overhead
	SystemTokens int `json:"system_tokens,omitempty"`
}
TokenUsage represents comprehensive token usage statistics. This structure provides a unified interface for tracking token usage across all supported protocols (OpenAI, Anthropic, Google).
func NewTokenUsage ¶
func NewTokenUsage(inputTokens, outputTokens int) *TokenUsage
NewTokenUsage creates a new TokenUsage with the given token counts.
func NewTokenUsageWithCache ¶
func NewTokenUsageWithCache(inputTokens, outputTokens, cacheTokens int) *TokenUsage
NewTokenUsageWithCache creates a new TokenUsage with cache token count.
func ZeroTokenUsage ¶
func ZeroTokenUsage() *TokenUsage
ZeroTokenUsage returns a TokenUsage with zero values.
func (*TokenUsage) HasCacheUsage ¶
func (u *TokenUsage) HasCacheUsage() bool
HasCacheUsage returns true if cache tokens are present.
func (*TokenUsage) HasUsage ¶
func (u *TokenUsage) HasUsage() bool
HasUsage returns true if any token count is non-zero.
func (*TokenUsage) TotalTokens ¶
func (u *TokenUsage) TotalTokens() int
TotalTokens returns the total tokens consumed (input + output, excluding cache). Cache tokens are tracked separately for cost calculation purposes.
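The cache-exclusion rule can be shown with a local mirror of the type (a sketch reproducing the documented accounting, not the package's source):

```go
package main

import "fmt"

// tokenUsage mirrors the TokenUsage shape documented above.
type tokenUsage struct {
	InputTokens      int
	OutputTokens     int
	CacheInputTokens int
}

// TotalTokens follows the documented rule: input + output, with cache
// tokens excluded because they are priced separately.
func (u *tokenUsage) TotalTokens() int { return u.InputTokens + u.OutputTokens }

// HasCacheUsage reports whether any cache tokens were consumed.
func (u *tokenUsage) HasCacheUsage() bool { return u.CacheInputTokens > 0 }

func main() {
	u := &tokenUsage{InputTokens: 120, OutputTokens: 45, CacheInputTokens: 300}
	fmt.Println(u.TotalTokens())   // 165: the 300 cache tokens are not counted
	fmt.Println(u.HasCacheUsage()) // true
}
```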
type Transformer ¶
type Transformer interface {
	// HandleV1 handles compacting for Anthropic v1 requests.
	HandleV1(req *anthropic.MessageNewParams) error
	// HandleV1Beta handles compacting for Anthropic v1beta requests.
	HandleV1Beta(req *anthropic.BetaMessageNewParams) error
}
Transformer defines the interface for request compacting transformations. Each handler method is responsible for a different request model type.
type V1Round ¶
type V1Round struct {
	Messages       []anthropic.MessageParam
	IsCurrentRound bool
	Stats          *RoundStats // Optional metadata about the round structure
}
V1Round represents a conversation round for v1 API.