Documentation ¶
Overview ¶
Package providers contains LLM provider implementations for Iris.
Each provider is implemented in its own subpackage (e.g., providers/openai, providers/anthropic). Providers implement the core.Provider interface.
Provider Interface ¶
All providers must implement core.Provider:
type Provider interface {
	ID() string
	Models() []ModelInfo
	Supports(feature Feature) bool
	Chat(ctx context.Context, req *ChatRequest) (*ChatResponse, error)
	StreamChat(ctx context.Context, req *ChatRequest) (*ChatStream, error)
}
Concurrency ¶
Providers SHOULD be safe for concurrent calls. If a provider cannot be concurrent-safe, it MUST document this limitation.
Streaming ¶
StreamChat returns a *ChatStream (not a raw channel) to carry errors and final tool calls consistently. Providers MUST:
- Close all channels (Ch, Err, Final) when finished
- Terminate promptly on context cancellation
- Send at most one error on Err
- Send exactly one response on Final (or zero on setup failure)
Feature Detection ¶
Use Supports() to check provider capabilities before making requests:
if p.Supports(core.FeatureToolCalling) {
	// Safe to use tools
}
Index ¶
- Constants
- Variables
- func Create(name, apiKey string) (core.Provider, error)
- func IsRegistered(name string) bool
- func List() []string
- func Register(name string, factory ProviderFactory)
- type ChatChunk
- type ChatRequest
- type ChatResponse
- type ChatStream
- type Feature
- type Message
- type ModelID
- type ModelInfo
- type Provider
- type ProviderError
- type ProviderFactory
- type Role
- type TokenUsage
- type ToolCall
Constants ¶
const (
	FeatureChat          = core.FeatureChat
	FeatureChatStreaming = core.FeatureChatStreaming
	FeatureToolCalling   = core.FeatureToolCalling
)
Re-export feature constants.
const (
	RoleSystem    = core.RoleSystem
	RoleUser      = core.RoleUser
	RoleAssistant = core.RoleAssistant
)
Re-export role constants.
Variables ¶
var (
	ErrRateLimited   = core.ErrRateLimited
	ErrBadRequest    = core.ErrBadRequest
	ErrServer        = core.ErrServer
	ErrNetwork       = core.ErrNetwork
	ErrDecode        = core.ErrDecode
	ErrModelRequired = core.ErrModelRequired
	ErrNoMessages    = core.ErrNoMessages
)
Re-export sentinel errors.
Functions ¶
func Create ¶
func Create(name, apiKey string) (core.Provider, error)
Create creates a new provider instance by name with the given API key. It returns an error if the provider is not registered.
func IsRegistered ¶
func IsRegistered(name string) bool
IsRegistered reports whether a provider with the given name is registered.
func List ¶
func List() []string
List returns the names of all registered providers in sorted order.
func Register ¶
func Register(name string, factory ProviderFactory)
Register adds a provider factory to the registry. It is typically called from a provider's init() function. If a provider with the same name is already registered, it will be overwritten.
Example usage in a provider package:
func init() {
	providers.Register("openai", func(apiKey string) core.Provider {
		return New(apiKey)
	})
}
Types ¶
type ChatRequest ¶
type ChatRequest = core.ChatRequest
ChatRequest represents a request to a chat model.
type ChatResponse ¶
type ChatResponse = core.ChatResponse
ChatResponse represents a response from a chat model.
type ChatStream ¶
type ChatStream = core.ChatStream
ChatStream represents a streaming response from a provider.
type ProviderError ¶
type ProviderError = core.ProviderError
ProviderError represents an error returned by a provider.
type ProviderFactory ¶
type ProviderFactory func(apiKey string) core.Provider
ProviderFactory creates a provider instance with the given API key. Some providers (like Ollama) may ignore the key parameter.
func Get ¶
func Get(name string) ProviderFactory
Get retrieves a provider factory by name. Returns nil if the provider is not registered.
type TokenUsage ¶
type TokenUsage = core.TokenUsage
TokenUsage tracks token consumption for a request.
Directories ¶
| Path | Synopsis |
|---|---|
| anthropic | Package anthropic provides an Anthropic API provider implementation for Iris. |
| gemini | Package gemini provides a Google Gemini API provider implementation for Iris. |
| huggingface | Package huggingface provides an LLM provider implementation for Hugging Face Inference Providers. |
| internal | |
| internal/normalize | Package normalize provides shared provider error normalization helpers. |
| internal/toolcalls | Package toolcalls provides shared streaming tool-call assembly utilities. |
| ollama | Package ollama provides an Ollama provider implementation for the Iris SDK. |
| openai | Package openai provides an OpenAI API provider implementation for Iris. |
| perplexity | Package perplexity provides a Perplexity Search API provider implementation for Iris. |
| voyageai | Package voyageai provides a Voyage AI API provider implementation for Iris. |
| xai | Package xai provides an xAI Grok API provider implementation for Iris. |
| zai | Package zai provides a Z.ai GLM API provider implementation for Iris. |