ai

package
v0.7.13
Warning: This package is not in the latest version of its module.

Published: Mar 23, 2026 License: MIT Imports: 6 Imported by: 0

Documentation

Overview

Package ai provides a unified interface for working with multiple LLM providers. It offers a simple, opinionated abstraction layer designed for tool-building workflows.

Constants

This section is empty.

Variables

This section is empty.

Functions

func GetAPIKey

func GetAPIKey(provider string) (string, error)

GetAPIKey retrieves the API key for the specified provider. It automatically loads ~/.schemaf/.env if not already loaded. Supported providers: "anthropic", "openai".

func LoadEnv

func LoadEnv() error

LoadEnv loads environment variables from ~/.schemaf/.env if not already loaded. This is called automatically by GetAPIKey but can be called explicitly if needed.

Types

type CacheConfig

type CacheConfig struct {
	// CacheSystemPrompt enables caching of the system prompt.
	// This is beneficial when reusing the same system prompt across multiple requests.
	CacheSystemPrompt bool

	// TTL is the cache time-to-live ("5m" or "1h").
	// Defaults to "5m" (5 minutes).
	TTL string
}

CacheConfig controls prompt caching behavior.
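The documented default of "5m" can be made explicit with a small helper. This is a sketch of the defaulting rule implied by the TTL field doc, not an exported function of the package:

```go
package main

import "fmt"

// CacheConfig mirrors the documented struct.
type CacheConfig struct {
	CacheSystemPrompt bool
	TTL               string
}

// effectiveTTL returns the TTL to use, applying the documented
// default of "5m" when none is set.
func effectiveTTL(c *CacheConfig) string {
	if c == nil || c.TTL == "" {
		return "5m"
	}
	return c.TTL
}

func main() {
	cfg := &CacheConfig{CacheSystemPrompt: true}
	fmt.Println(effectiveTTL(cfg)) // prints "5m": default applies
	cfg.TTL = "1h"
	fmt.Println(effectiveTTL(cfg)) // prints "1h": explicit value wins
}
```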

type Client

type Client interface {
	// Complete sends a completion request and returns the response.
	Complete(ctx context.Context, req CompletionRequest) (*CompletionResponse, error)

	// Provider returns the provider name (e.g., "anthropic", "openai").
	Provider() string
}

Client represents an LLM provider client. Implementations include Anthropic Claude, OpenAI, and others.

type CompletionRequest

type CompletionRequest struct {
	// SystemPrompt defines the AI's role and behavior.
	// This is typically cached when the same prompt is reused.
	SystemPrompt string

	// Messages is the conversation history.
	Messages []Message

	// MaxTokens is the maximum number of tokens to generate.
	MaxTokens int

	// Temperature controls randomness (0.0 = deterministic, 1.0 = creative).
	Temperature float64

	// Model is an optional model override.
	// If empty, the client's default model is used.
	Model string

	// CacheConfig controls prompt caching behavior.
	CacheConfig *CacheConfig
}

CompletionRequest is a provider-agnostic completion request.
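The field docs imply a few invariants (non-empty messages, positive MaxTokens, Temperature in [0, 1]). Whether the package enforces them before sending is not documented; the check below is an illustrative sketch a caller could apply:

```go
package main

import (
	"errors"
	"fmt"
)

type Message struct{ Role, Content string }

// CompletionRequest mirrors the documented struct.
type CompletionRequest struct {
	SystemPrompt string
	Messages     []Message
	MaxTokens    int
	Temperature  float64
	Model        string
}

// validate checks the invariants implied by the field docs. The real
// package may or may not perform checks like these.
func validate(req CompletionRequest) error {
	if len(req.Messages) == 0 {
		return errors.New("at least one message is required")
	}
	if req.MaxTokens <= 0 {
		return errors.New("MaxTokens must be positive")
	}
	if req.Temperature < 0 || req.Temperature > 1 {
		return errors.New("Temperature must be in [0, 1]")
	}
	return nil
}

func main() {
	req := CompletionRequest{
		SystemPrompt: "You are a helpful assistant.",
		Messages:     []Message{{Role: "user", Content: "Summarize this file."}},
		MaxTokens:    1024,
		Temperature:  0.2,
		// Model left empty: the client's default model is used.
	}
	fmt.Println(validate(req))
}
```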

type CompletionResponse

type CompletionResponse struct {
	// Content is the generated text response.
	Content string

	// StopReason indicates why generation stopped (e.g., "end_turn", "max_tokens").
	StopReason string

	// InputTokens is the number of input tokens processed.
	InputTokens int

	// OutputTokens is the number of output tokens generated.
	OutputTokens int

	// CacheReadTokens is the number of tokens read from cache (if caching enabled).
	CacheReadTokens int

	// CacheCreationTokens is the number of tokens written to cache (if caching enabled).
	CacheCreationTokens int
}

CompletionResponse is a provider-agnostic completion response.
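The token counters are what callers typically aggregate for usage tracking. A small sketch, with one stated assumption: it treats CacheReadTokens as reported separately from InputTokens (how the package reconciles the two is provider-specific and not documented here).

```go
package main

import "fmt"

// CompletionResponse mirrors the documented struct.
type CompletionResponse struct {
	Content             string
	StopReason          string
	InputTokens         int
	OutputTokens        int
	CacheReadTokens     int
	CacheCreationTokens int
}

// totalTokens sums input and output tokens for basic usage tracking.
func totalTokens(r CompletionResponse) int {
	return r.InputTokens + r.OutputTokens
}

// cacheHitRatio reports what fraction of the prompt was served from
// cache, assuming CacheReadTokens is not already counted in
// InputTokens (an assumption, not documented behavior).
func cacheHitRatio(r CompletionResponse) float64 {
	denom := r.InputTokens + r.CacheReadTokens
	if denom == 0 {
		return 0
	}
	return float64(r.CacheReadTokens) / float64(denom)
}

func main() {
	r := CompletionResponse{InputTokens: 200, OutputTokens: 300, CacheReadTokens: 800}
	fmt.Println(totalTokens(r), cacheHitRatio(r)) // 500 0.8
}
```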

type Error

type Error struct {
	// Type categorizes the error.
	Type ErrorType

	// Message provides a human-readable error description.
	Message string

	// Status is the HTTP status code (if applicable).
	Status int

	// Err is the underlying error (if any).
	Err error
}

Error represents an AI-related error with type information.

func NewAuthError

func NewAuthError(msg string) *Error

NewAuthError creates an authentication error.

func NewInvalidRequestError

func NewInvalidRequestError(msg string) *Error

NewInvalidRequestError creates an invalid request error.

func NewNetworkError

func NewNetworkError(msg string, err error) *Error

NewNetworkError creates a network error.

func NewRateLimitError

func NewRateLimitError(msg string) *Error

NewRateLimitError creates a rate limit error.

func NewServerError

func NewServerError(msg string, status int) *Error

NewServerError creates a server error.

func (*Error) Error

func (e *Error) Error() string

Error implements the error interface.

func (*Error) Unwrap

func (e *Error) Unwrap() error

Unwrap returns the underlying error for error chain unwrapping.

type ErrorType

type ErrorType string

ErrorType categorizes AI-related errors.

const (
	// ErrTypeAuth indicates authentication failure (invalid API key, etc.).
	ErrTypeAuth ErrorType = "auth"

	// ErrTypeRateLimit indicates rate limiting by the provider.
	ErrTypeRateLimit ErrorType = "rate_limit"

	// ErrTypeInvalidReq indicates an invalid request (malformed, missing fields).
	ErrTypeInvalidReq ErrorType = "invalid_request"

	// ErrTypeServer indicates a server error from the provider.
	ErrTypeServer ErrorType = "server_error"

	// ErrTypeNetwork indicates a network or connection error.
	ErrTypeNetwork ErrorType = "network"
)

type Message

type Message struct {
	// Role is the message role (user or assistant).
	Role MessageRole

	// Content is the message text.
	Content string
}

Message represents a single message in a conversation.

type MessageRole

type MessageRole string

MessageRole identifies the speaker of a message.

const (
	// RoleUser represents a user message.
	RoleUser MessageRole = "user"

	// RoleAssistant represents an assistant message.
	RoleAssistant MessageRole = "assistant"
)
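Conversation history is built by appending Messages with alternating roles. Most chat APIs reject consecutive same-role messages; whether this package enforces alternation is not documented, so the check in this sketch is illustrative:

```go
package main

import "fmt"

type MessageRole string

const (
	RoleUser      MessageRole = "user"
	RoleAssistant MessageRole = "assistant"
)

type Message struct {
	Role    MessageRole
	Content string
}

// appendTurn adds a message, rejecting consecutive same-role turns.
// The alternation check is illustrative, not documented behavior.
func appendTurn(history []Message, role MessageRole, content string) ([]Message, error) {
	if n := len(history); n > 0 && history[n-1].Role == role {
		return history, fmt.Errorf("consecutive %s messages", role)
	}
	return append(history, Message{Role: role, Content: content}), nil
}

func main() {
	h, _ := appendTurn(nil, RoleUser, "What does this package do?")
	h, _ = appendTurn(h, RoleAssistant, "It abstracts multiple LLM providers.")
	_, err := appendTurn(h, RoleAssistant, "duplicate")
	fmt.Println(len(h), err)
}
```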

Directories

Path Synopsis
providers
Package tool provides high-level utilities for common AI workflows.
