omnillm

package module
v0.15.3
Published: Apr 26, 2026 License: MIT Imports: 5 Imported by: 0

README

OmniLLM

Batteries-included LLM client that bundles omnillm-core with all thick providers.

Installation

go get github.com/plexusone/omnillm

Quick Start

import "github.com/plexusone/omnillm"

client, _ := omnillm.NewClient(omnillm.ClientConfig{
    Provider: omnillm.ProviderNameOpenAI,
    APIKey:   os.Getenv("OPENAI_API_KEY"),
})
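
A request can then be issued against the client. The following is a minimal sketch: the CreateChatCompletion method name, the Model field, and the Choices access on the response are assumptions inferred from the re-exported request/response types, not API confirmed by this page.

// Hypothetical request sketch: method and field names are assumed.
resp, err := client.CreateChatCompletion(context.Background(), omnillm.ChatCompletionRequest{
    Model: "gpt-4o", // hypothetical model name
    Messages: []omnillm.Message{
        {Role: omnillm.RoleUser, Content: "Hello!"},
    },
})
if err != nil {
    // handle error
}
fmt.Println(resp.Choices[0].Message.Content) // field access is assumed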

Provider Support

Thin Providers (omnillm-core)

Lightweight implementations using stdlib net/http:

Provider         Streaming  Tools  JSON Mode
OpenAI           Yes        Yes    Yes
Anthropic        Yes        Yes    No
Gemini           Yes        No     No
X.AI (Grok)      Yes        Yes    Yes
GLM (Zhipu)      Yes        Yes    No
Kimi (Moonshot)  Yes        No     No
Qwen (Alibaba)   Yes        Yes    No
Ollama           Yes        Yes    Yes

Thick Providers (Official SDKs)

Full-featured implementations using official vendor SDKs:

Provider   Module             Streaming  Tools  JSON Mode
OpenAI     omni-openai        Yes        Yes    Yes
Anthropic  omnillm-anthropic  Yes        Yes    No
Gemini     omnillm-gemini     Yes        No     No
Bedrock    omnillm-bedrock    Yes        Yes    No

Thick providers automatically override thin providers when imported.

Thin vs Thick

Aspect        Thin (omnillm-core)  Thick (omnillm-*)
Dependencies  Minimal (stdlib)     Official SDK
API Coverage  Core features        Full coverage
Retries       Manual               SDK-managed
Auth          API key              SDK-managed

Selective Import

Import only what you need:

import (
    omnillm "github.com/plexusone/omnillm-core"
    _ "github.com/plexusone/omni-openai/omnillm" // Only OpenAI thick provider
)
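
The blank import matters: the thick provider module registers itself with the shared provider registry as a side effect of being imported (conventionally in an init function), which is how it overrides the thin OpenAI implementation bundled in omnillm-core.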

License

MIT

Documentation

Overview

Package omnillm provides a batteries-included LLM client with all official SDK providers.

This package imports omnillm-core plus all thick (SDK-based) providers, giving you full official SDK support for OpenAI, Anthropic, and Gemini out of the box.

The thick providers automatically override the thin (native HTTP) implementations in omnillm-core via the priority-based provider registry.
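
The re-exported registry helpers make this override observable. A minimal sketch, assuming GetProviderPriority accepts a ProviderName and returns a value comparable to the Priority constants (only the names, not the signatures, are confirmed on this page):

// Sketch: check which implementation is currently registered for OpenAI.
// GetProviderPriority's signature is an assumption.
if omnillm.GetProviderPriority(omnillm.ProviderNameOpenAI) == omnillm.PriorityThick {
    // the official-SDK (thick) provider won the registry
}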

Usage:

import "github.com/plexusone/omnillm"

client, err := omnillm.NewClient(omnillm.ClientConfig{
    Providers: []omnillm.ProviderConfig{
        {Provider: omnillm.ProviderNameOpenAI, APIKey: os.Getenv("OPENAI_API_KEY")},
        {Provider: omnillm.ProviderNameAnthropic, APIKey: os.Getenv("ANTHROPIC_API_KEY")},
        {Provider: omnillm.ProviderNameGemini, APIKey: os.Getenv("GEMINI_API_KEY")},
    },
})
if err != nil {
    // handle error
}

For a lightweight alternative with minimal dependencies (thin providers only), use github.com/plexusone/omnillm-core directly.

Constants

const (
	ProviderNameOpenAI    = core.ProviderNameOpenAI
	ProviderNameAnthropic = core.ProviderNameAnthropic
	ProviderNameGemini    = core.ProviderNameGemini
	ProviderNameXAI       = core.ProviderNameXAI
	ProviderNameGLM       = core.ProviderNameGLM
	ProviderNameKimi      = core.ProviderNameKimi
	ProviderNameQwen      = core.ProviderNameQwen
	ProviderNameOllama    = core.ProviderNameOllama
	ProviderNameBedrock   = core.ProviderNameBedrock
)

Re-export provider name constants.

const (
	RoleSystem    = core.RoleSystem
	RoleUser      = core.RoleUser
	RoleAssistant = core.RoleAssistant
	RoleTool      = core.RoleTool
)

Re-export role constants.

const (
	PriorityThin  = core.PriorityThin
	PriorityThick = core.PriorityThick
)

Re-export priority constants.

Variables

var (
	ErrUnsupportedProvider  = core.ErrUnsupportedProvider
	ErrInvalidConfiguration = core.ErrInvalidConfiguration
	ErrNoProviders          = core.ErrNoProviders
	ErrEmptyAPIKey          = core.ErrEmptyAPIKey
	ErrInvalidAPIKey        = core.ErrInvalidAPIKey
	ErrEmptyModel           = core.ErrEmptyModel
	ErrEmptyMessages        = core.ErrEmptyMessages
	ErrStreamClosed         = core.ErrStreamClosed
	ErrInvalidResponse      = core.ErrInvalidResponse
	ErrRateLimitExceeded    = core.ErrRateLimitExceeded
	ErrQuotaExceeded        = core.ErrQuotaExceeded
	ErrInvalidRequest       = core.ErrInvalidRequest
	ErrModelNotFound        = core.ErrModelNotFound
	ErrServerError          = core.ErrServerError
	ErrNetworkError         = core.ErrNetworkError
)

Re-export common errors.
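
These sentinels compose with the standard errors package. A short sketch, assuming IsRetryableError takes an error and reports whether a retry is worthwhile (plausible from the name, but the signature is not shown here):

// Sketch: triage a failed call. errors.Is works because the sentinels
// are ordinary error values; IsRetryableError's signature is assumed.
if errors.Is(err, omnillm.ErrRateLimitExceeded) || omnillm.IsRetryableError(err) {
    // back off and retry
} else if errors.Is(err, omnillm.ErrInvalidAPIKey) {
    // fail fast: credentials problem
}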

var (
	NewClient               = core.NewClient
	NewAPIError             = core.NewAPIError
	NewAPIErrorFull         = core.NewAPIErrorFull
	RegisterProvider        = core.RegisterProvider
	GetProviderFactory      = core.GetProviderFactory
	ListRegisteredProviders = core.ListRegisteredProviders
	GetProviderPriority     = core.GetProviderPriority
	ClassifyError           = core.ClassifyError
	IsRetryableError        = core.IsRetryableError
	IsNonRetryableError     = core.IsNonRetryableError
)

Re-export constructor, registry, and error-classification functions.
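
These helpers also allow inspecting what a given import set produced. A brief sketch, assuming ListRegisteredProviders returns a slice of ProviderName (the return type is not shown on this page):

// Sketch: enumerate every provider the current imports registered.
// Both helpers' signatures are assumptions based on their names.
for _, name := range omnillm.ListRegisteredProviders() {
    fmt.Println(name, omnillm.GetProviderPriority(name))
}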

Functions

This section is empty.

Types

type APIError

type APIError = core.APIError

APIError represents an API error.

type Capabilities added in v0.15.1

type Capabilities = core.Capabilities

Capabilities describes provider features.

type ChatClient

type ChatClient = core.ChatClient

ChatClient is the multi-provider LLM client.

type ChatCompletionChoice

type ChatCompletionChoice = core.ChatCompletionChoice

ChatCompletionChoice is a single choice in a response.

type ChatCompletionChunk

type ChatCompletionChunk = core.ChatCompletionChunk

ChatCompletionChunk is a streaming response chunk.

type ChatCompletionRequest

type ChatCompletionRequest = core.ChatCompletionRequest

ChatCompletionRequest is the request for chat completion.

type ChatCompletionResponse

type ChatCompletionResponse = core.ChatCompletionResponse

ChatCompletionResponse is the response from chat completion.

type ChatCompletionStream

type ChatCompletionStream = core.ChatCompletionStream

ChatCompletionStream is the interface for streaming responses.
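
The page names the stream interface but not its methods. A hypothetical consumption loop, in the shape such streams commonly take; the Recv method and its termination behavior are assumptions, not documented API:

// Hypothetical: Recv and its error-on-completion semantics are assumed.
for {
    chunk, err := stream.Recv()
    if err != nil {
        break // end of stream, or e.g. ErrStreamClosed
    }
    _ = chunk // process the ChatCompletionChunk
}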

type ClientConfig

type ClientConfig = core.ClientConfig

ClientConfig holds configuration for creating a multi-provider client.

type LLMCallInfo

type LLMCallInfo = core.LLMCallInfo

LLMCallInfo provides metadata about the LLM call for observability.

type Message

type Message = core.Message

Message represents a chat message.

type ObservabilityHook

type ObservabilityHook = core.ObservabilityHook

ObservabilityHook allows external packages to observe LLM calls.

type Provider

type Provider = core.Provider

Provider is the interface for LLM providers.

type ProviderConfig

type ProviderConfig = core.ProviderConfig

ProviderConfig holds configuration for a single provider.

type ProviderFactory added in v0.15.1

type ProviderFactory = core.ProviderFactory

ProviderFactory creates providers from config.

type ProviderName

type ProviderName = core.ProviderName

ProviderName identifies a provider.

type ResponseFormat added in v0.15.1

type ResponseFormat = core.ResponseFormat

ResponseFormat specifies the response format.

type Role

type Role = core.Role

Role represents the role of a message sender.

type Tool

type Tool = core.Tool

Tool represents a tool/function definition.

type ToolCall

type ToolCall = core.ToolCall

ToolCall represents a tool call from the model.

type ToolFunction

type ToolFunction = core.ToolFunction

ToolFunction represents the function details of a tool call.

type ToolSpec

type ToolSpec = core.ToolSpec

ToolSpec defines a tool's function specification.

type Usage

type Usage = core.Usage

Usage tracks token usage.

Directories

Path      Synopsis
provider  Package provider re-exports the provider types from omnillm-core for API compatibility.
