providers

package v0.0.11

Published: Mar 16, 2025 License: Apache-2.0 Imports: 3 Imported by: 0

README


AI Providers

This guide provides an overview of the AI providers supported by CentralMind Gateway, along with configuration options and examples.

Supported Providers

We support the following AI providers:

  • OpenAI
  • Anthropic
  • Amazon Bedrock
  • Google Gemini
  • Anthropic on Google Vertex AI

We've tested with OpenAI o3-mini, Anthropic Claude 3.7, and Gemini 2.0 Flash Thinking, and recommend them for optimal performance:

  • OpenAI: o3-mini
  • Anthropic: Claude 3.7
  • Google: Gemini 2.0 Flash Thinking (generous free tier available)

These models provide a good balance of performance, speed, and cost for most use cases.
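
For example, a discovery run against Gemini's free tier might look like this (the key value is a placeholder; see the configuration schema below for all available flags):

./gateway discover \
  --ai-provider gemini \
  --ai-api-key your-gemini-api-key \
  --config connection.yaml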

Configuration Schema

Below is the configuration schema for all supported AI providers:

Field             Type     Required  Description
ai-provider       string   No        AI provider to use. Options: openai, anthropic, bedrock, gemini, anthropic-vertexai. Defaults to openai.
ai-endpoint       string   No        Custom OpenAI-compatible API endpoint URL.
ai-api-key        string   No        AI API key for authentication.
bedrock-region    string   No        AWS region for Amazon Bedrock.
vertexai-region   string   No        Google Cloud region for Vertex AI.
vertexai-project  string   No        Google Cloud project ID for Vertex AI.
ai-model          string   No        AI model to use (provider-specific).
ai-max-tokens     integer  No        Maximum tokens to use in the response (0 = provider default).
ai-temperature    float    No        Temperature for AI responses (-1.0 = provider default).
ai-reasoning      boolean  No        Enable reasoning mode for supported models (default: true).

Example

First, set OPENAI_API_KEY in your environment. You can get an OpenAI API key from the OpenAI Platform.

export OPENAI_API_KEY='yourkey'
./gateway discover \
  --ai-provider openai \
  --config connection.yaml

Additional Configuration Options

You can further customize the AI behavior with these optional parameters:

./gateway discover \
  --ai-provider openai \
  --ai-api-key your-openai-api-key \
  --ai-model o3-mini \
  --ai-max-tokens 8192 \
  --ai-temperature 1.0 \
  --ai-reasoning=true \
  --config connection.yaml

Documentation

Constants

This section is empty.

Variables

var (
	ErrUnknownProvider = errors.New("unknown ai provider")
)
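
A sketch of checking for this sentinel when provider construction fails, assuming NewModelProvider returns it directly or wrapped with %w:

if _, err := providers.NewModelProvider(cfg); errors.Is(err, providers.ErrUnknownProvider) {
	log.Fatalf("unknown ai provider %q", cfg.Name)
}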

Functions

func ExtractJSON

func ExtractJSON(text string) string
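
A minimal sketch of its intended use: pulling a JSON payload out of a model reply that wraps it in prose or code fences (the input string is hypothetical):

raw := "Here is the schema:\n```json\n{\"table\": \"users\"}\n```"
fmt.Println(providers.ExtractJSON(raw)) // expected to print just the embedded JSON object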

func RegisterModelProvider

func RegisterModelProvider(name string, factory ModelProviderFactory)
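
A sketch of registering a custom backend, where echoProvider is a hypothetical type implementing ModelProvider; once registered, NewModelProvider can construct it by name:

func init() {
	providers.RegisterModelProvider("echo", func(cfg providers.ModelProviderConfig) (providers.ModelProvider, error) {
		// echoProvider is hypothetical; a real factory would validate cfg here.
		return &echoProvider{apiKey: cfg.APIKey}, nil
	})
}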

Types

type ChatStream

type ChatStream interface {
	Events() <-chan StreamChunk
}

type ChatStreamOutput

type ChatStreamOutput interface {
	GetStream() ChatStream
}
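
A sketch of draining a stream by type-switching on the chunk variants defined below (whether chunks are delivered as values or pointers is an assumption here; adjust the cases accordingly):

func printStream(ctx context.Context, provider providers.ModelProvider, req *providers.ConversationRequest) error {
	out, err := provider.ChatStream(ctx, req)
	if err != nil {
		return err
	}
	// The events channel presumably closes when the stream ends.
	for chunk := range out.GetStream().Events() {
		switch c := chunk.(type) {
		case providers.StreamChunkContent:
			if text, ok := c.Content.(providers.ContentBlockText); ok {
				fmt.Print(text.Value)
			}
		case providers.StreamChunkError:
			return errors.New(c.Error)
		case providers.StreamChunkStop:
			fmt.Println("\nstop reason:", c.StopReason)
		case providers.StreamChunkUsage:
			if c.Usage != nil {
				fmt.Println("total tokens:", c.Usage.TotalTokens)
			}
		}
	}
	return nil
}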

type ContentBlock

type ContentBlock interface {
	// contains filtered or unexported methods
}

type ContentBlockText

type ContentBlockText struct {
	Value string `json:"value"`
}

type ConversationRequest

type ConversationRequest struct {
	ModelId      string    `json:"modelId"`
	System       string    `json:"system,omitempty"`
	Messages     []Message `json:"messages"`
	MaxTokens    int       `json:"maxTokens,omitempty"`
	Temperature  float32   `json:"temperature,omitempty"`
	Reasoning    bool      `json:"reasoning,omitempty"`
	JsonResponse bool      `json:"requireJson,omitempty"`
}

type ConversationResponse

type ConversationResponse struct {
	ProviderName string         `json:"providerName,omitempty"`
	ModelId      string         `json:"modelId,omitempty"`
	Content      []ContentBlock `json:"content"`
	StopReason   StopReason     `json:"stopReason,omitempty"`
	Usage        *ModelUsage    `json:"usage,omitempty"`
}

type ConversationRole

type ConversationRole string
const (
	UserRole      ConversationRole = "user"
	AssistantRole ConversationRole = "assistant"
)

type Message

type Message struct {
	Role    ConversationRole `json:"role"`
	Content []ContentBlock   `json:"content"`
}

type ModelProvider

type ModelProvider interface {
	GetName() string
	CostEstimate(modelId string, usage ModelUsage) float64
	Chat(ctx context.Context, req *ConversationRequest) (*ConversationResponse, error)
	ChatStream(ctx context.Context, req *ConversationRequest) (ChatStreamOutput, error)
}

func NewModelProvider

func NewModelProvider(config ModelProviderConfig) (ModelProvider, error)
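
Putting it together, a minimal single-turn chat might look like the following sketch (the import path and the value form of ContentBlockText are assumptions):

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/centralmind/gateway/providers" // assumed import path
)

func main() {
	provider, err := providers.NewModelProvider(providers.ModelProviderConfig{
		Name:   "openai",
		APIKey: os.Getenv("OPENAI_API_KEY"),
	})
	if err != nil {
		log.Fatal(err) // e.g. ErrUnknownProvider for an unregistered name
	}

	resp, err := provider.Chat(context.Background(), &providers.ConversationRequest{
		ModelId: "o3-mini",
		Messages: []providers.Message{{
			Role:    providers.UserRole,
			Content: []providers.ContentBlock{providers.ContentBlockText{Value: "Say hello."}},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, block := range resp.Content {
		if text, ok := block.(providers.ContentBlockText); ok {
			fmt.Println(text.Value)
		}
	}
}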

type ModelProviderConfig

type ModelProviderConfig struct {
	Name            string
	Endpoint        string
	APIKey          string
	BedrockRegion   string
	VertexAIRegion  string
	VertexAIProject string
}

type ModelProviderFactory

type ModelProviderFactory func(ModelProviderConfig) (ModelProvider, error)

type ModelUsage

type ModelUsage struct {
	InputTokens  int `json:"inputTokens"`
	OutputTokens int `json:"outputTokens"`
	TotalTokens  int `json:"totalTokens"`
}
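
Combined with ModelProvider.CostEstimate, the usage block supports a rough per-response spend report (the unit of the returned value is provider-defined; USD is assumed here):

if resp.Usage != nil {
	cost := provider.CostEstimate(resp.ModelId, *resp.Usage)
	fmt.Printf("%d tokens, estimated cost $%.6f\n", resp.Usage.TotalTokens, cost)
}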

type StopReason

type StopReason string
const (
	StopReasonStop      StopReason = "stop"
	StopReasonToolCalls StopReason = "toolCalls"
	StopReasonLength    StopReason = "length"
)
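
A typical check after a chat call is detecting truncation, for example:

if resp.StopReason == providers.StopReasonLength {
	log.Println("response hit the token limit; consider raising MaxTokens")
}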

type StreamChunk

type StreamChunk interface {
	// contains filtered or unexported methods
}

type StreamChunkContent

type StreamChunkContent struct {
	Content ContentBlock `json:"content"`
}

type StreamChunkError

type StreamChunkError struct {
	Error string `json:"error,omitempty"`
}

type StreamChunkStop

type StreamChunkStop struct {
	StopReason StopReason `json:"stopReason,omitempty"`
}

type StreamChunkUsage

type StreamChunkUsage struct {
	ModelId string      `json:"modelId,omitempty"`
	Usage   *ModelUsage `json:"usage,omitempty"`
}
