openai_compatible

package
v0.0.2
Published: Apr 13, 2026 License: MIT Imports: 6 Imported by: 0

Documentation

Overview

Package openai_compatible provides a reusable implementation for LLM providers that use OpenAI-compatible APIs.

This includes providers like OpenAI, Groq, Together AI, Fireworks AI, and others that implement the OpenAI chat completions API format.

Basic usage:

provider, err := openai_compatible.New("groq", "gsk_xxx", "https://api.groq.com")
if err != nil {
    log.Fatal(err)
}
proxy := llmproxy.NewProxy(provider)

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ParseOpenAIRequest

func ParseOpenAIRequest(body io.ReadCloser) (llmproxy.BodyMetadata, []byte, error)

ParseOpenAIRequest is a convenience function that parses an OpenAI-compatible request body and returns the metadata and raw bytes.

func ParseOpenAIRequestBody

func ParseOpenAIRequestBody(data []byte) (llmproxy.BodyMetadata, error)

ParseOpenAIRequestBody parses raw JSON bytes as an OpenAI-compatible request. It returns only the metadata, not the raw bytes.

Types

type Enricher

type Enricher struct {
	// APIKey is the API key for authentication.
	APIKey string
}

Enricher implements llmproxy.RequestEnricher for OpenAI-compatible APIs. It sets the required Authorization header with a Bearer token.

func NewEnricher

func NewEnricher(apiKey string) *Enricher

NewEnricher creates a new enricher with the given API key.

func (*Enricher) Enrich

func (e *Enricher) Enrich(req *http.Request, meta llmproxy.BodyMetadata, rawBody []byte) error

Enrich adds the Authorization and Content-Type headers to the request. It sets:

  • Authorization: Bearer <APIKey>
  • Content-Type: application/json

type Extractor

type Extractor struct{}

Extractor implements llmproxy.ResponseExtractor for OpenAI-compatible responses. It parses the response JSON and extracts token usage, choices, and other metadata.

func NewExtractor

func NewExtractor() *Extractor

NewExtractor creates a new OpenAI-compatible response extractor.

func (*Extractor) Extract

func (e *Extractor) Extract(resp *http.Response) (llmproxy.ResponseMetadata, []byte, error)

Extract reads the response body and parses it as an OpenAI-compatible response. It extracts the ID, model, usage statistics, and completion choices.

Returns:

  • metadata: Parsed response metadata
  • rawBody: The original response body bytes (preserved for forwarding)
  • error: Any parsing error

The raw body is returned so it can be re-attached to the response for the caller, preserving any custom/unsupported fields in the original JSON.

type OpenAIRequest

type OpenAIRequest struct {
	// Model is the model identifier (e.g., "gpt-4", "llama-2-70b").
	Model string `json:"model"`
	// Messages is the conversation history.
	Messages []llmproxy.Message `json:"messages"`
	// MaxTokens limits the generation length.
	MaxTokens int `json:"max_tokens,omitempty"`
	// Stream enables streaming responses.
	Stream bool `json:"stream"`
	// Custom holds provider-specific parameters not in the standard schema.
	Custom map[string]interface{} `json:"-"`
}

OpenAIRequest represents an OpenAI-compatible chat completion request. It includes standard fields and captures custom fields for provider extensions.

func (*OpenAIRequest) UnmarshalJSON

func (r *OpenAIRequest) UnmarshalJSON(data []byte) error

UnmarshalJSON implements custom JSON unmarshaling to capture unknown fields.

type OpenAIResponse

type OpenAIResponse struct {
	// ID is the unique response identifier.
	ID string `json:"id"`
	// Object is the object type (e.g., "chat.completion").
	Object string `json:"object"`
	// Created is the Unix timestamp of creation.
	Created int64 `json:"created"`
	// Model is the model used for completion.
	Model string `json:"model"`
	// Usage contains token consumption statistics.
	Usage UsageInfo `json:"usage"`
	// Choices contains the completion choices.
	Choices []ResponseChoice `json:"choices"`
}

OpenAIResponse represents an OpenAI-compatible chat completion response.

type Parser

type Parser struct{}

Parser implements llmproxy.BodyParser for OpenAI-compatible request formats. It extracts model, messages, and other fields into a unified BodyMetadata structure.

func (*Parser) Parse

func (p *Parser) Parse(body io.ReadCloser) (llmproxy.BodyMetadata, []byte, error)

Parse reads an OpenAI-compatible request body and extracts metadata. It returns both the parsed metadata and the raw body bytes for later use.

The parser handles standard OpenAI fields and captures unknown fields in the Custom map for provider-specific extensions.

type Provider

type Provider struct {
	*llmproxy.BaseProvider
}

Provider is an OpenAI-compatible provider implementation. It embeds llmproxy.BaseProvider and can be further customized.

func New

func New(name, apiKey, baseURL string) (*Provider, error)

New creates a new OpenAI-compatible provider with the given configuration.

Parameters:

  • name: A unique identifier for the provider (e.g., "openai", "groq")
  • apiKey: The API key for authentication
  • baseURL: The provider's API base URL (e.g., "https://api.openai.com")

Example:

provider, err := openai_compatible.New("groq", "gsk_xxx", "https://api.groq.com")
if err != nil {
    log.Fatal(err)
}

func NewWithProvider

func NewWithProvider(name string, p *llmproxy.BaseProvider) *Provider

NewWithProvider creates a Provider that wraps an existing BaseProvider. Use this when you need to customize individual components before creating the provider.

Example:

base := llmproxy.NewBaseProvider("custom",
    llmproxy.WithBodyParser(&Parser{}),
    llmproxy.WithRequestEnricher(customEnricher),
)
provider := openai_compatible.NewWithProvider("custom", base)

type Resolver

type Resolver struct {
	// BaseURL is the provider's API base URL (e.g., "https://api.openai.com").
	BaseURL *url.URL
}

Resolver implements llmproxy.URLResolver for OpenAI-compatible APIs. It constructs the chat completions endpoint URL from a base URL.

func NewResolver

func NewResolver(baseURL string) (*Resolver, error)

NewResolver creates a new resolver with the given base URL. The baseURL should be the provider's API domain (e.g., "https://api.openai.com").

func (*Resolver) Resolve

func (r *Resolver) Resolve(meta llmproxy.BodyMetadata) (*url.URL, error)

Resolve returns the full URL for the chat completions endpoint. It appends "/v1/chat/completions" to the base URL.
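
The path join described above can be sketched with net/url's JoinPath (Go 1.19+); `resolve` is an illustrative stand-in for the Resolver's method:

```go
package main

import (
	"fmt"
	"net/url"
)

// resolve builds the chat completions endpoint by joining the base URL
// with the fixed path, handling any trailing slash on the base.
func resolve(baseURL string) (string, error) {
	return url.JoinPath(baseURL, "v1", "chat", "completions")
}

func main() {
	u, err := resolve("https://api.groq.com")
	if err != nil {
		panic(err)
	}
	fmt.Println(u) // https://api.groq.com/v1/chat/completions
}
```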

type ResponseChoice

type ResponseChoice struct {
	// Index is the choice position.
	Index int `json:"index"`
	// Message contains the completed message (non-streaming).
	Message *ResponseMessage `json:"message,omitempty"`
	// Delta contains the partial message (streaming).
	Delta *ResponseMessage `json:"delta,omitempty"`
	// FinishReason indicates why completion stopped.
	FinishReason string `json:"finish_reason"`
}

ResponseChoice represents a single completion choice.

type ResponseMessage

type ResponseMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

ResponseMessage represents a message in a completion choice.

type UsageInfo

type UsageInfo struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}

UsageInfo tracks token usage in an OpenAI-compatible response.
