Documentation
¶
Index ¶
- Constants
- func GetInputValue(input responses.ResponseNewParamsInputUnion) any
- func IsContextCanceled(err error) bool
- func PreprocessInputData(data []byte) ([]byte, error)
- type APIStyle
- type APIType
- type AnthropicBetaMessagesRequest
- type AnthropicMessagesRequest
- type BetaRound
- type Client
- type ErrorDetail
- type ErrorResponse
- type GoogleRequest
- type Grouper
- type GuardrailsBufferedEvent
- type GuardrailsStreamState
- type HandleContext
- func (hc *HandleContext) CallOnStreamComplete()
- func (hc *HandleContext) EnsureGuardrails() *HandleGuardrails
- func (hc *HandleContext) EnsureGuardrailsStream() *GuardrailsStreamState
- func (hc *HandleContext) ProcessStream(nextFunc func() (bool, error, interface{}), handleFunc func(interface{}) error) error
- func (hc *HandleContext) SendError(err error, errorType, code string)
- func (hc *HandleContext) SetupSSEHeaders()
- func (hc *HandleContext) WithOnStreamComplete(hook func()) *HandleContext
- func (hc *HandleContext) WithOnStreamError(hook func(error)) *HandleContext
- func (hc *HandleContext) WithOnStreamEvent(hook func(interface{}) error) *HandleContext
- type HandleGuardrails
- type OpenAIChatCompletionRequest
- type OpenAIConfig
- type Response
- type ResponseCreateRequest
- type ResponseInputItemUnionParam
- type ResponseNewParams
- type ResponseNewParamsInputUnion
- type RoundStats
- type TokenUsage
- type V1Round
Constants ¶
const CodexAPIBase = "https://chatgpt.com/backend-api"
CodexAPIBase is the API base URL for the ChatGPT/Codex OAuth provider.
Variables ¶
This section is empty.
Functions ¶
func GetInputValue ¶
func GetInputValue(input responses.ResponseNewParamsInputUnion) any
GetInputValue extracts the raw input value from ResponseNewParamsInputUnion. Returns the underlying string, array, or nil.
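The result is typically consumed with a type switch. The sketch below is illustrative only; the concrete slice type carried in the array case depends on the SDK version, so it is handled in the default arm:

func describeInput(input responses.ResponseNewParamsInputUnion) string {
    switch v := GetInputValue(input).(type) {
    case string:
        return fmt.Sprintf("plain-text input (%d bytes)", len(v))
    case nil:
        return "no input"
    default:
        // Structured input: a slice of SDK input items.
        return fmt.Sprintf("structured input of type %T", v)
    }
}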
func IsContextCanceled ¶
func IsContextCanceled(err error) bool
IsContextCanceled checks if the error is due to context cancellation.
func PreprocessInputData ¶ added in v0.260409.1540
func PreprocessInputData(data []byte) ([]byte, error)
PreprocessInputData preprocesses the JSON data before unmarshaling. It performs two preprocessing steps:
1. Adds "type": "message" to input items that don't have a type field
2. Flattens output_text content blocks into single strings
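As an illustration of the first step only, a rough equivalent over already-decoded items might look like this (the real function operates on the raw JSON bytes before unmarshaling; this helper is not part of the package):

func addDefaultType(items []map[string]any) {
    for _, item := range items {
        if _, ok := item["type"]; !ok {
            // Untyped input items are treated as plain messages.
            item["type"] = "message"
        }
    }
}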
Types ¶
type APIType ¶
type APIType string
APIType represents the target API style for protocol conversion
const (
    // TypeOpenAIChat converts requests to OpenAI Chat Completions format
    TypeOpenAIChat APIType = "openai_chat"
    // TypeOpenAIResponses converts requests to OpenAI Responses API format
    TypeOpenAIResponses APIType = "openai_responses"
    // TypeAnthropicV1 converts requests to Anthropic v1 Messages API format
    TypeAnthropicV1 APIType = "anthropic_v1"
    // TypeAnthropicBeta converts requests to Anthropic v1beta Messages API format
    TypeAnthropicBeta APIType = "anthropic_beta"
    // TypeGoogle converts requests to Google Gemini API format
    TypeGoogle APIType = "google"
)
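As an example, a caller-side helper (not part of this package) could validate a configured conversion target against these constants:

func isKnownAPIType(s string) bool {
    switch APIType(s) {
    case TypeOpenAIChat, TypeOpenAIResponses, TypeAnthropicV1, TypeAnthropicBeta, TypeGoogle:
        return true
    }
    return false
}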
type AnthropicBetaMessagesRequest ¶
type AnthropicBetaMessagesRequest struct {
    Stream bool `json:"stream"`
    anthropic.BetaMessageNewParams
}
AnthropicBetaMessagesRequest wraps the official Anthropic SDK BetaMessageNewParams directly, adding a Stream flag.
func (*AnthropicBetaMessagesRequest) UnmarshalJSON ¶
func (r *AnthropicBetaMessagesRequest) UnmarshalJSON(data []byte) error
type AnthropicMessagesRequest ¶
type AnthropicMessagesRequest struct {
    Stream bool `json:"stream"`
    anthropic.MessageNewParams
}
AnthropicMessagesRequest wraps the official Anthropic SDK MessageNewParams, adding a Stream flag.
func (*AnthropicMessagesRequest) UnmarshalJSON ¶
func (r *AnthropicMessagesRequest) UnmarshalJSON(data []byte) error
type BetaRound ¶
type BetaRound struct {
    Messages       []anthropic.BetaMessageParam
    IsCurrentRound bool
    Stats          *RoundStats // Optional metadata about the round structure
}
BetaRound represents a conversation round for v1beta API.
type Client ¶
type Client interface {
    // APIStyle returns the type of provider this client implements
    APIStyle() APIStyle
    // Close closes any resources held by the client
    Close() error
}
Client is the unified interface for AI provider clients
type ErrorDetail ¶
type ErrorDetail struct {
    Message string `json:"message"`
    Type    string `json:"type"`
    Code    string `json:"code,omitempty"`
}
ErrorDetail represents error details
type ErrorResponse ¶
type ErrorResponse struct {
    Error ErrorDetail `json:"error"`
}
ErrorResponse represents an error response
type GoogleRequest ¶
type GoogleRequest struct {
    Model    string
    Contents []*genai.Content
    Config   *genai.GenerateContentConfig
}
GoogleRequest wraps Google API request parameters. Google's SDK uses separate parameters rather than a single request struct.
type Grouper ¶
type Grouper struct{}
Grouper provides methods to group messages into conversation rounds.
func (*Grouper) GroupBeta ¶
func (g *Grouper) GroupBeta(messages []anthropic.BetaMessageParam) []BetaRound
GroupBeta groups beta messages into conversation rounds.
func (*Grouper) GroupV1 ¶
func (g *Grouper) GroupV1(messages []anthropic.MessageParam) []V1Round
GroupV1 groups v1 messages into conversation rounds. A round starts with a pure user message and includes all subsequent messages (assistant with tool use, tool results) until the next pure user message (exclusive).
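A simplified, hypothetical analogue of that rule (not the package implementation), parameterized over a purity predicate such as IsPureUserMessage:

func groupRounds(msgs []anthropic.MessageParam, isPure func(anthropic.MessageParam) bool) [][]anthropic.MessageParam {
    var rounds [][]anthropic.MessageParam
    for _, m := range msgs {
        // Each pure user message opens a new round; everything else
        // (assistant turns, tool results) joins the round in progress.
        if isPure(m) || len(rounds) == 0 {
            rounds = append(rounds, nil)
        }
        last := len(rounds) - 1
        rounds[last] = append(rounds[last], m)
    }
    return rounds
}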
func (*Grouper) IsPureBetaUserMessage ¶
func (g *Grouper) IsPureBetaUserMessage(msg anthropic.BetaMessageParam) bool
IsPureBetaUserMessage checks if a beta message is a pure user instruction.
func (*Grouper) IsPureUserMessage ¶
func (g *Grouper) IsPureUserMessage(msg anthropic.MessageParam) bool
IsPureUserMessage checks if a v1 message is a pure user instruction (not a tool result).
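Conceptually, the check requires the user role and the absence of tool_result content blocks. The sketch below is illustrative only; the role constant and the union field name are assumptions about the Anthropic SDK's parameter types:

func isPureUser(msg anthropic.MessageParam) bool {
    if msg.Role != anthropic.MessageParamRoleUser {
        return false
    }
    for _, block := range msg.Content {
        // OfToolResult is an assumed union field name.
        if block.OfToolResult != nil {
            return false
        }
    }
    return true
}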
type GuardrailsBufferedEvent ¶ added in v0.260418.2200
type GuardrailsStreamState ¶ added in v0.260418.2200
type GuardrailsStreamState struct {
    // PendingBlockMessages stores early hook verdicts keyed by tool_use id.
    PendingBlockMessages map[string]string
    // PendingBlockedIndex tracks which content block index is currently blocked.
    PendingBlockedIndex map[int]string
    // AnthropicToolEvents buffers one tool_use block from start -> delta -> stop
    // so the rewrite layer can either flush the original events or replace them.
    AnthropicToolEvents map[int][]GuardrailsBufferedEvent
    // AnthropicToolIDs links the buffered block index back to the provider tool id.
    AnthropicToolIDs map[int]string
}
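These fields support a buffer-then-decide pattern: tool events are held per content block index until a guardrails verdict arrives, then either replayed or replaced. A minimal sketch with illustrative helpers (not part of this package):

// bufferToolEvent holds a tool_use event until a verdict is known.
func bufferToolEvent(s *GuardrailsStreamState, idx int, ev GuardrailsBufferedEvent) {
    if s.AnthropicToolEvents == nil {
        s.AnthropicToolEvents = make(map[int][]GuardrailsBufferedEvent)
    }
    s.AnthropicToolEvents[idx] = append(s.AnthropicToolEvents[idx], ev)
}

// flushToolEvents replays the buffered events for a block and clears them.
func flushToolEvents(s *GuardrailsStreamState, idx int, emit func(GuardrailsBufferedEvent)) {
    for _, ev := range s.AnthropicToolEvents[idx] {
        emit(ev)
    }
    delete(s.AnthropicToolEvents, idx)
}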
type HandleContext ¶
type HandleContext struct {
    // Gin context
    GinContext *gin.Context
    // Model info
    ResponseModel string
    // Guardrails runtime state shared across request/response/stream phases for
    // one proxied conversation.
    Guardrails *HandleGuardrails
    // Hooks for stream processing (chainable - multiple hooks can be added)
    OnStreamEventHooks    []func(event interface{}) error
    OnStreamCompleteHooks []func()
    OnStreamErrorHooks    []func(err error)
    // Stream configuration flags
    DisableStreamUsage bool // Don't include usage in streaming chunks
}
HandleContext provides dependencies for handle functions. It uses the builder pattern for optional configuration and hooks.
func NewHandleContext ¶
func NewHandleContext(c *gin.Context, responseModel string) *HandleContext
NewHandleContext creates a new HandleContext with required dependencies.
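A hypothetical usage sketch of the chainable builder, assuming c is the incoming *gin.Context (the model name and hook bodies are illustrative):

hc := NewHandleContext(c, "example-model").
    WithOnStreamEvent(func(ev interface{}) error {
        // Inspect or rewrite each event before it is handled.
        return nil
    }).
    WithOnStreamComplete(func() {
        // Record usage, flush metrics, release resources.
    }).
    WithOnStreamError(func(err error) {
        // Log the stream failure.
    })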
func (*HandleContext) CallOnStreamComplete ¶
func (hc *HandleContext) CallOnStreamComplete()
CallOnStreamComplete calls all OnStreamComplete hooks. This is useful for non-streaming handlers that still need to invoke complete hooks.
func (*HandleContext) EnsureGuardrails ¶ added in v0.260418.2200
func (hc *HandleContext) EnsureGuardrails() *HandleGuardrails
func (*HandleContext) EnsureGuardrailsStream ¶ added in v0.260418.2200
func (hc *HandleContext) EnsureGuardrailsStream() *GuardrailsStreamState
func (*HandleContext) ProcessStream ¶
func (hc *HandleContext) ProcessStream(nextFunc func() (bool, error, interface{}), handleFunc func(interface{}) error) error
ProcessStream provides a generic framework for processing streaming responses. It handles context cancellation, error checking, and event processing.
nextFunc should return (true, nil, event) to continue, (false, nil, nil) to stop, or (false, err, nil) on error. handleFunc is called for each event after OnStreamEventHooks are invoked. It can be used to send the event to the client.
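A sketch of wiring ProcessStream to a provider stream; stream.Next, stream.Err, stream.Current, and writeSSE are assumptions standing in for a concrete SDK stream and an SSE writer:

err := hc.ProcessStream(
    func() (bool, error, interface{}) {
        if !stream.Next() {
            // stream.Err() is nil on a normal end of stream, which maps
            // to the documented (false, nil, nil) stop case.
            return false, stream.Err(), nil
        }
        return true, nil, stream.Current()
    },
    func(event interface{}) error {
        // Deliver the event to the client, e.g. as an SSE chunk.
        return writeSSE(hc.GinContext, event)
    },
)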
func (*HandleContext) SendError ¶
func (hc *HandleContext) SendError(err error, errorType, code string)
SendError sends an error response to the client.
func (*HandleContext) SetupSSEHeaders ¶
func (hc *HandleContext) SetupSSEHeaders()
SetupSSEHeaders sets the standard SSE (Server-Sent Events) headers.
func (*HandleContext) WithOnStreamComplete ¶
func (hc *HandleContext) WithOnStreamComplete(hook func()) *HandleContext
WithOnStreamComplete adds a hook that is called when stream completes successfully. Multiple hooks can be added and will be called in order.
func (*HandleContext) WithOnStreamError ¶
func (hc *HandleContext) WithOnStreamError(hook func(error)) *HandleContext
WithOnStreamError adds a hook that is called when stream encounters an error. Multiple hooks can be added and will be called in order.
func (*HandleContext) WithOnStreamEvent ¶
func (hc *HandleContext) WithOnStreamEvent(hook func(interface{}) error) *HandleContext
WithOnStreamEvent adds a hook that is called for each stream event. Multiple hooks can be added and will be called in order.
type HandleGuardrails ¶ added in v0.260418.2200
type HandleGuardrails struct {
    Enabled        bool
    CredentialMask *guardrailscore.CredentialMaskState
    Stream         *GuardrailsStreamState
}
type OpenAIChatCompletionRequest ¶
type OpenAIChatCompletionRequest struct {
    openai.ChatCompletionNewParams
    Stream bool `json:"stream"`
}
OpenAIChatCompletionRequest wraps the OpenAI chat completion request with extra fields.
func (*OpenAIChatCompletionRequest) UnmarshalJSON ¶
func (r *OpenAIChatCompletionRequest) UnmarshalJSON(data []byte) error
type OpenAIConfig ¶
type OpenAIConfig struct {
    // HasThinking indicates whether the request contains thinking content
    // This can be used by providers like DeepSeek to handle reasoning_content
    HasThinking bool
    // ReasoningEffort specifies the reasoning effort level for OpenAI-compatible APIs
    // Valid values: "none", "minimal", "low", "medium", "high", "xhigh"
    // Defaults to "low" when HasThinking is true
    ReasoningEffort shared.ReasoningEffort
    // CursorCompat indicates Cursor compatibility handling is enabled for this request.
    CursorCompat bool
}
OpenAIConfig contains additional metadata that may be used by provider transforms
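For example, the documented default could be applied while assembling the config (a sketch; shared.ReasoningEffortLow assumes the openai-go shared package's constant naming):

cfg := OpenAIConfig{HasThinking: true}
if cfg.ReasoningEffort == "" {
    // Documented default when thinking content is present.
    cfg.ReasoningEffort = shared.ReasoningEffortLow
}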
type ResponseCreateRequest ¶
type ResponseCreateRequest struct {
    // Stream indicates whether to stream the response.
    // This is not part of ResponseNewParams as streaming is controlled
    // by using the NewStreaming() method on the SDK client.
    Stream bool `json:"stream"`
    // Embed the native SDK type for all other fields.
    responses.ResponseNewParams
}
ResponseCreateRequest wraps the native ResponseNewParams with additional fields for proxy-specific handling like the `stream` parameter.
func (*ResponseCreateRequest) UnmarshalJSON ¶
func (r *ResponseCreateRequest) UnmarshalJSON(data []byte) error
UnmarshalJSON implements custom JSON unmarshaling for ResponseCreateRequest. It handles both the custom Stream field and the embedded ResponseNewParams.
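A minimal sketch of one common way to write such a wrapper's unmarshaling (an illustration of the pattern, not necessarily the package's actual implementation):

func (r *ResponseCreateRequest) UnmarshalJSON(data []byte) error {
    // Extract the proxy-only stream flag first.
    var aux struct {
        Stream bool `json:"stream"`
    }
    if err := json.Unmarshal(data, &aux); err != nil {
        return err
    }
    r.Stream = aux.Stream
    // Then let the embedded SDK type decode the remaining fields.
    return json.Unmarshal(data, &r.ResponseNewParams)
}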
type ResponseInputItemUnionParam ¶
type ResponseInputItemUnionParam = responses.ResponseInputItemUnionParam
ResponseInputItemUnionParam is an alias to the native OpenAI SDK type
type ResponseNewParams ¶
type ResponseNewParams = responses.ResponseNewParams
ResponseNewParams is an alias to the native OpenAI SDK type
type ResponseNewParamsInputUnion ¶
type ResponseNewParamsInputUnion = responses.ResponseNewParamsInputUnion
ResponseNewParamsInputUnion is an alias to the native OpenAI SDK type
type RoundStats ¶
type RoundStats struct {
    UserMessageCount int  // Number of pure user messages in this round (should be 1)
    AssistantCount   int  // Number of assistant messages
    ToolResultCount  int  // Number of tool result messages
    TotalMessages    int  // Total messages in the round
    HasThinking      bool // Whether any assistant message contains thinking blocks
}
RoundStats contains metadata about a round's message composition.
type TokenUsage ¶
type TokenUsage struct {
    // InputTokens is the number of input/prompt tokens consumed (excluding cache)
    InputTokens int `json:"input_tokens"`
    // OutputTokens is the number of output/completion tokens consumed
    OutputTokens int `json:"output_tokens"`
    // CacheInputTokens is the number of cache-related tokens consumed
    // (includes both cache creation and cache read operations)
    CacheInputTokens int `json:"cache_input_tokens,omitempty"`
    // SystemTokens represents tokens consumed by system-level operations
    // such as prompt templates, system instructions, or framework overhead
    SystemTokens int `json:"system_tokens,omitempty"`
}
TokenUsage represents comprehensive token usage statistics. This structure provides a unified interface for tracking token usage across all supported protocols (OpenAI, Anthropic, Google).
func NewTokenUsage ¶
func NewTokenUsage(inputTokens, outputTokens int) *TokenUsage
NewTokenUsage creates a new TokenUsage with the given token counts.
func NewTokenUsageWithCache ¶
func NewTokenUsageWithCache(inputTokens, outputTokens, cacheTokens int) *TokenUsage
NewTokenUsageWithCache creates a new TokenUsage with cache token count.
func ZeroTokenUsage ¶
func ZeroTokenUsage() *TokenUsage
ZeroTokenUsage returns a TokenUsage with zero values.
func (*TokenUsage) HasCacheUsage ¶
func (u *TokenUsage) HasCacheUsage() bool
HasCacheUsage returns true if cache tokens are present.
func (*TokenUsage) HasUsage ¶
func (u *TokenUsage) HasUsage() bool
HasUsage returns true if any token count is non-zero.
func (*TokenUsage) TotalTokens ¶
func (u *TokenUsage) TotalTokens() int
TotalTokens returns the total tokens consumed (input + output, excluding cache). Cache tokens are tracked separately for cost calculation purposes.
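A small illustration of the accounting rule:

u := NewTokenUsageWithCache(1200, 300, 8000)
total := u.TotalTokens()      // 1500: input + output; cache tracked separately
hasCache := u.HasCacheUsage() // true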
type V1Round ¶
type V1Round struct {
    Messages       []anthropic.MessageParam
    IsCurrentRound bool
    Stats          *RoundStats // Optional metadata about the round structure
}
V1Round represents a conversation round for v1 API.