core

package
v0.1.1
Published: Mar 12, 2026 License: MIT Imports: 12 Imported by: 0

Documentation

Index

Constants

View Source
const (
	// RoleSystem is for system instructions that guide the model's behavior.
	// System messages set the context, personality, or constraints.
	// Example: "You are a helpful coding assistant."
	RoleSystem = "system"

	// RoleUser is for messages from the human user.
	// These are the questions or prompts you send to the model.
	RoleUser = "user"

	// RoleAssistant is for messages from the AI model.
	// These are the model's responses.
	RoleAssistant = "assistant"

	// RoleTool is for tool execution results.
	// Use this when responding to a model's tool call request.
	RoleTool = "tool"
)

Message roles define who is speaking in a conversation.
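The four roles compose an ordered conversation: system first, then alternating user and assistant turns. A minimal self-contained sketch using the role strings directly (plain structs, not this package's Message type):

```go
package main

import "fmt"

// Msg pairs a role string with text; the role values mirror the constants above.
type Msg struct{ Role, Text string }

// transcript renders a conversation in role order.
func transcript(msgs []Msg) string {
	var out string
	for _, m := range msgs {
		out += m.Role + ": " + m.Text + "\n"
	}
	return out
}

func main() {
	conversation := []Msg{
		{"system", "You are a helpful coding assistant."},
		{"user", "What does Go's defer do?"},
		{"assistant", "defer schedules a call to run when the surrounding function returns."},
	}
	fmt.Print(transcript(conversation))
}
```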

Variables

This section is empty.

Functions

func ExponentialBackoff

func ExponentialBackoff(attempt int, baseDelay time.Duration) time.Duration

ExponentialBackoff calculates the backoff time with jitter

func GetAPIKey

func GetAPIKey(key string) string

GetAPIKey retrieves the API key from the named environment variable, if set

func GetEnv

func GetEnv(key, defaultValue string) string

GetEnv gets an environment variable with a default value

func IsRetryableError

func IsRetryableError(err error) bool

IsRetryableError checks if an error is retryable based on common patterns

Types

type AuthManager

type AuthManager struct {
	// contains filtered or unexported fields
}

AuthManager manages OAuth authentication for a provider.

func NewAuthManager

func NewAuthManager(provider interface {
	GetProviderName() string
	Authenticate() (*OAuthToken, error)
	RefreshToken(refreshToken string) (*OAuthToken, error)
}, filename string) *AuthManager

NewAuthManager creates an authentication manager. For backward compatibility it accepts a filename string and automatically wraps it in a FileTokenStore.

func NewAuthManagerWithStore

func NewAuthManagerWithStore(provider interface {
	GetProviderName() string
	Authenticate() (*OAuthToken, error)
	RefreshToken(refreshToken string) (*OAuthToken, error)
}, store TokenStore) *AuthManager

NewAuthManagerWithStore accepts a custom TokenStore (e.g. a Redis- or database-backed store).

func (*AuthManager) GetToken

func (m *AuthManager) GetToken() (*OAuthToken, error)

GetToken returns the token, automatically refreshing and persisting it if it has expired.

func (*AuthManager) LoadToken

func (m *AuthManager) LoadToken() error

LoadToken loads the token from the underlying store.

func (*AuthManager) Login

func (m *AuthManager) Login() error

Login performs the provider's authentication flow.

type Client

type Client interface {
	// Chat sends messages and returns a complete response
	//
	// Parameters:
	// - ctx: Context for cancellation and timeout
	// - messages: Conversation messages
	// - opts: Optional parameters (temperature, max tokens, tools, etc.)
	//
	// Returns:
	// - *Response: Complete response with content, usage, and tool calls
	// - error: Error if the request fails
	Chat(ctx context.Context, messages []Message, opts ...Option) (*Response, error)

	// ChatStream sends messages and returns a stream of events
	//
	// Parameters:
	// - ctx: Context for cancellation and timeout
	// - messages: Conversation messages
	// - opts: Optional parameters (temperature, max tokens, tools, etc.)
	//
	// Returns:
	// - *Stream: Stream of events (content, usage, errors)
	// - error: Error if the request fails to start
	ChatStream(ctx context.Context, messages []Message, opts ...Option) (*Stream, error)
}

Client is the primary interface for LLM interactions. All providers implement this interface.

Example usage:

messages := []core.Message{
    core.NewUserMessage("Hello, who are you?"),
}
response, err := client.Chat(ctx, messages)
if err != nil {
    log.Fatal(err)
}
fmt.Println(response.Content)

With options:

response, err := client.Chat(ctx, messages,
    core.WithTemperature(0.8),
    core.WithMaxTokens(1000),
)

type ContentBlock

type ContentBlock struct {
	// Type identifies what kind of content this block contains.
	Type ContentType `json:"type"`

	// Text is the text content (only for ContentTypeText).
	Text string `json:"text,omitempty"`

	// MediaType is the MIME type (only for ContentTypeImage/ContentTypeFile).
	// Examples: "image/png", "image/jpeg", "application/pdf"
	MediaType string `json:"media_type,omitempty"`

	// Data is the base64-encoded content (only for ContentTypeImage/ContentTypeFile).
	Data string `json:"data,omitempty"`

	// FileName is the original filename (only for ContentTypeFile).
	FileName string `json:"file_name,omitempty"`
}

ContentBlock is one piece of content within a message.

Messages can contain multiple content blocks of different types, enabling multimodal interactions (text + images + files).

For simple text messages, use the NewUserMessage helper which creates a message with a single text content block.

Example (text):

block := ContentBlock{
    Type: ContentTypeText,
    Text: "Hello, world!",
}

Example (image):

imageData := base64.StdEncoding.EncodeToString(imageBytes)
block := ContentBlock{
    Type:      ContentTypeImage,
    MediaType: "image/png",
    Data:      imageData,
}

type ContentType

type ContentType string

ContentType identifies the type of content in a ContentBlock.

const (
	// ContentTypeText is for plain text content.
	ContentTypeText ContentType = "text"

	// ContentTypeImage is for image content (base64-encoded).
	ContentTypeImage ContentType = "image"

	// ContentTypeFile is for file attachments.
	ContentTypeFile ContentType = "file"
	// ContentTypeThinking is for thinking/reasoning content from the model.
	ContentTypeThinking ContentType = "thinking"
)

type Error

type Error struct {
	Type    ErrorType
	Message string
	Cause   error
}

Error represents a structured error

func NewAPIError

func NewAPIError(message string, cause error) *Error

NewAPIError creates a new API error

func NewError

func NewError(errType ErrorType, message string, cause error) *Error

NewError creates a new structured error

func NewNetworkError

func NewNetworkError(message string, cause error) *Error

NewNetworkError creates a new network error

func NewTimeoutError

func NewTimeoutError(message string, cause error) *Error

NewTimeoutError creates a new timeout error

func NewUnknownError

func NewUnknownError(message string, cause error) *Error

NewUnknownError creates a new unknown error

func NewValidationError

func NewValidationError(message string, cause error) *Error

NewValidationError creates a new validation error

func (*Error) Error

func (e *Error) Error() string

Error implements the error interface

type ErrorType

type ErrorType string

ErrorType defines the type of error

const (
	ErrorTypeAPI        ErrorType = "api_error"
	ErrorTypeNetwork    ErrorType = "network_error"
	ErrorTypeTimeout    ErrorType = "timeout_error"
	ErrorTypeValidation ErrorType = "validation_error"
	ErrorTypeUnknown    ErrorType = "unknown_error"
)

type FileTokenStore

type FileTokenStore struct {
	Filename string
}

FileTokenStore is a simple file-based implementation of TokenStore.

func NewFileTokenStore

func NewFileTokenStore(filename string) *FileTokenStore

NewFileTokenStore creates a new FileTokenStore.

func (*FileTokenStore) Load

func (s *FileTokenStore) Load() (*OAuthToken, error)

Load reads the token from a file.

func (*FileTokenStore) Save

func (s *FileTokenStore) Save(token *OAuthToken) error

Save writes the token to a file as JSON.

type Message

type Message struct {
	Role       string         `json:"role"`
	Content    []ContentBlock `json:"content,omitempty"`
	ToolCalls  []ToolCall     `json:"tool_calls,omitempty"`   // assistant requesting tool use
	ToolCallID string         `json:"tool_call_id,omitempty"` // for role=tool responses
}

Message represents a single message in a conversation

func NewSystemMessage

func NewSystemMessage(text string) Message

NewSystemMessage creates a system message

func NewTextMessage

func NewTextMessage(role, text string) Message

NewTextMessage creates a message with text content

func NewUserMessage

func NewUserMessage(text string) Message

NewUserMessage creates a user message with text content

func (Message) TextContent

func (m Message) TextContent() string

TextContent returns the concatenated text of all text blocks. Convenience for the common single-text-block case.

type OAuthToken

type OAuthToken struct {
	Access              string  `json:"access"`
	Refresh             string  `json:"refresh"`
	Expires             int64   `json:"expires"`
	ResourceUrl         *string `json:"resourceUrl,omitempty"`
	NotificationMessage *string `json:"notification_message,omitempty"`
}

OAuthToken is a generic OAuth token.

type Option

type Option func(*Options)

Option is a functional option for Chat/ChatStream

func WithEnableSearch

func WithEnableSearch(enabled bool) Option

WithEnableSearch enables web search for models that support it (e.g. Qwen)

func WithMaxTokens

func WithMaxTokens(n int) Option

WithMaxTokens sets the maximum tokens to generate

func WithModel

func WithModel(model string) Option

WithModel sets the model to use for this request

func WithStop

func WithStop(stops ...string) Option

WithStop sets stop sequences

func WithSystemPrompt

func WithSystemPrompt(prompt string) Option

WithSystemPrompt sets a system prompt (prepended as system message)

func WithTemperature

func WithTemperature(t float64) Option

WithTemperature sets the temperature for generation

func WithThinking

func WithThinking(budget int) Option

WithThinking enables extended thinking/reasoning with optional token budget

func WithTools

func WithTools(tools ...Tool) Option

WithTools sets available tools for the model to call

func WithTopP

func WithTopP(p float64) Option

WithTopP sets the top-p sampling parameter

func WithUsageCallback

func WithUsageCallback(fn func(Usage)) Option

WithUsageCallback sets a callback to be called when usage info is available

type Options

type Options struct {
	Model          string
	Temperature    *float64 // pointer so zero-value is distinguishable from "not set"
	MaxTokens      *int
	TopP           *float64
	Stop           []string
	Tools          []Tool
	SystemPrompt   string // prepended as system message if set
	Thinking       bool   // enables extended thinking/reasoning (provider-dependent)
	ThinkingBudget int    // max tokens for thinking (0 = provider default)
	EnableSearch   bool   // Qwen/compatible-mode search
	UsageCallback  func(Usage)
}

Options holds per-request parameters

func ApplyOptions

func ApplyOptions(opts ...Option) Options

ApplyOptions builds Options from a list of Option funcs
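The functional-options pattern used by Option/Options/ApplyOptions can be sketched in a self-contained form (simplified Options struct; not this package's actual implementation):

```go
package main

import "fmt"

type Options struct {
	Model       string
	Temperature *float64 // pointer distinguishes "not set" from 0
}

type Option func(*Options)

func WithModel(m string) Option { return func(o *Options) { o.Model = m } }

func WithTemperature(t float64) Option {
	return func(o *Options) { o.Temperature = &t }
}

// ApplyOptions builds Options by applying each Option in order;
// later options override earlier ones.
func ApplyOptions(opts ...Option) Options {
	var o Options
	for _, fn := range opts {
		fn(&o)
	}
	return o
}

func main() {
	o := ApplyOptions(WithModel("example-model"), WithTemperature(0.8))
	fmt.Println(o.Model, *o.Temperature)
}
```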

type PKCEHelper

type PKCEHelper struct{}

PKCEHelper provides PKCE (Proof Key for Code Exchange) utilities.

func (*PKCEHelper) GeneratePKCE

func (h *PKCEHelper) GeneratePKCE() (verifier, challenge string, err error)

GeneratePKCE generates a PKCE verifier and its corresponding challenge.

func (*PKCEHelper) GenerateState

func (h *PKCEHelper) GenerateState() (string, error)

GenerateState generates a random state string.

func (*PKCEHelper) GenerateUUID

func (h *PKCEHelper) GenerateUUID() (string, error)

GenerateUUID generates a random UUID.

type Response

type Response struct {
	// ID is the unique identifier for this completion.
	// Provider-specific format (e.g., "chatcmpl-abc123" for OpenAI).
	ID string `json:"id,omitempty"`

	// Model is the name of the model that generated this response.
	// May differ from the requested model if the provider substituted it.
	Model string `json:"model,omitempty"`

	// Content is the generated text content.
	// This is a convenience field that concatenates all text blocks from Message.
	// For simple text responses, this is all you need.
	Content string `json:"content"`
	// ReasoningContent is the model's thinking/reasoning process output.
	ReasoningContent string `json:"reasoning_content,omitempty"`

	// Message is the full structured message from the model.
	// Use this when you need access to multimodal content or tool calls.
	Message Message `json:"message"`

	// FinishReason indicates why the model stopped generating.
	// Common values: "stop" (natural end), "length" (max tokens reached),
	// "tool_calls" (model wants to call tools), "content_filter" (filtered).
	FinishReason string `json:"finish_reason"`

	// Usage contains token consumption information.
	// Use this to track costs and monitor usage.
	Usage *Usage `json:"usage,omitempty"`

	// ToolCalls is a convenience field extracted from Message.ToolCalls.
	// Non-empty when the model wants to invoke tools.
	ToolCalls []ToolCall `json:"tool_calls,omitempty"`
}

Response is the complete result from a Chat call.

It contains the model's generated content, metadata about the request, token usage information, and any tool calls the model wants to make.

Example:

response, err := client.Chat(ctx, messages)
if err != nil {
    log.Fatal(err)
}

// Simple text response
fmt.Println(response.Content)

// Check token usage
fmt.Printf("Used %d tokens\n", response.Usage.TotalTokens)

// Handle tool calls
if len(response.ToolCalls) > 0 {
    for _, tc := range response.ToolCalls {
        fmt.Printf("Model wants to call: %s\n", tc.Name)
    }
}

type Stream

type Stream struct {
	// contains filtered or unexported fields
}

Stream represents an active streaming response. Callers iterate with Next() and must call Close().

func NewStream

func NewStream(ch <-chan StreamEvent, closer io.Closer) *Stream

NewStream creates a Stream from a channel and an optional closer

func (*Stream) Close

func (s *Stream) Close() error

Close releases resources associated with the stream. Safe to call multiple times.

func (*Stream) Err

func (s *Stream) Err() error

Err returns the error from the stream, if any

func (*Stream) Event

func (s *Stream) Event() StreamEvent

Event returns the current stream event. Only valid after Next() returns true.

func (*Stream) Next

func (s *Stream) Next() bool

Next advances to the next event. Returns false when the stream is exhausted.

func (*Stream) ReasoningText

func (s *Stream) ReasoningText() (string, error)

ReasoningText consumes the entire stream and returns the concatenated thinking/reasoning content

func (*Stream) Text

func (s *Stream) Text() (string, error)

Text consumes the entire stream and returns the concatenated content

func (*Stream) Usage

func (s *Stream) Usage() *Usage

Usage returns the usage information after the stream completes
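The Next/Event/Close iteration contract can be illustrated with a minimal channel-backed stream. This is a simplified self-contained mirror of Stream's documented behavior, not its actual implementation:

```go
package main

import "fmt"

type StreamEvent struct {
	Type    string // "content", "thinking", "done", "error"
	Content string
}

// stream mirrors the Next/Event contract: Next blocks until an event
// arrives and returns false once the channel is closed.
type stream struct {
	ch  <-chan StreamEvent
	cur StreamEvent
}

func (s *stream) Next() bool {
	ev, ok := <-s.ch
	if !ok {
		return false
	}
	s.cur = ev
	return true
}

func (s *stream) Event() StreamEvent { return s.cur }

// collect accumulates content events, as Stream.Text does.
func collect(s *stream) string {
	var text string
	for s.Next() {
		if s.Event().Type == "content" {
			text += s.Event().Content
		}
	}
	return text
}

func main() {
	ch := make(chan StreamEvent, 3)
	ch <- StreamEvent{Type: "content", Content: "Hel"}
	ch <- StreamEvent{Type: "content", Content: "lo"}
	ch <- StreamEvent{Type: "done"}
	close(ch)

	fmt.Println(collect(&stream{ch: ch})) // prints "Hello"
}
```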

type StreamEvent

type StreamEvent struct {
	Type    StreamEventType
	Content string
	Usage   *Usage
	Err     error
}

StreamEvent is one event from a streaming response

type StreamEventType

type StreamEventType string

StreamEventType identifies the type of stream event

const (
	EventContent  StreamEventType = "content"
	EventThinking StreamEventType = "thinking" // reasoning/thinking content from the model
	EventDone     StreamEventType = "done"
	EventError    StreamEventType = "error"
)

type TokenHelper

type TokenHelper struct{}

TokenHelper provides token utility functions.

func (*TokenHelper) IsTokenExpired

func (h *TokenHelper) IsTokenExpired(token *OAuthToken) bool

IsTokenExpired reports whether the token has expired.

func (*TokenHelper) LoadToken

func (h *TokenHelper) LoadToken(filename string) (*OAuthToken, error)

LoadToken loads a token from a file.

func (*TokenHelper) SaveToken

func (h *TokenHelper) SaveToken(token *OAuthToken, filename string) error

SaveToken saves a token to a file.

type TokenStore

type TokenStore interface {
	Save(token *OAuthToken) error
	Load() (*OAuthToken, error)
}

TokenStore defines the interface for persisting and retrieving OAuth tokens. This allows developers to use custom storage like databases, Redis, or memory.
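Any backing store works as long as it satisfies Save and Load. Below is a self-contained sketch of an in-memory store; the OAuthToken and TokenStore declarations are local mirrors of the types above so the example compiles on its own:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// OAuthToken mirrors the core.OAuthToken fields used here.
type OAuthToken struct {
	Access  string
	Refresh string
	Expires int64
}

// TokenStore mirrors the interface documented above.
type TokenStore interface {
	Save(token *OAuthToken) error
	Load() (*OAuthToken, error)
}

// MemoryStore keeps the token in process memory; useful for tests.
type MemoryStore struct {
	mu    sync.Mutex
	token *OAuthToken
}

func (s *MemoryStore) Save(t *OAuthToken) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.token = t
	return nil
}

func (s *MemoryStore) Load() (*OAuthToken, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.token == nil {
		return nil, errors.New("no token stored")
	}
	return s.token, nil
}

var _ TokenStore = (*MemoryStore)(nil) // compile-time interface check

func main() {
	var store TokenStore = &MemoryStore{}
	_ = store.Save(&OAuthToken{Access: "abc", Expires: 1})
	tok, err := store.Load()
	fmt.Println(tok.Access, err)
}
```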

type Tool

type Tool struct {
	// Name is the unique identifier for this tool.
	// Must be a valid function name (alphanumeric and underscores).
	Name string `json:"name"`

	// Description explains what the tool does.
	// The model uses this to decide when to call the tool.
	Description string `json:"description"`

	// Parameters is a JSON Schema describing the tool's input parameters.
	// Must be a valid JSON Schema object with "type": "object".
	Parameters json.RawMessage `json:"parameters"`
}

Tool defines a function/tool that the LLM can call.

Tools enable the model to interact with external systems, APIs, or perform computations. When you provide tools to a Chat call, the model can decide to call one or more tools instead of (or in addition to) generating text.

Example:

tool := core.Tool{
    Name:        "get_weather",
    Description: "Get the current weather for a location",
    Parameters: json.RawMessage(`{
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City name, e.g. San Francisco"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
            }
        },
        "required": ["location"]
    }`),
}

type ToolCall

type ToolCall struct {
	// ID is a unique identifier for this tool call.
	// Use this when responding with tool results.
	ID string `json:"id"`

	// Name is the name of the tool being called.
	Name string `json:"name"`

	// Arguments is a JSON string containing the tool's input parameters.
	// Parse this to extract the actual arguments.
	Arguments string `json:"arguments"`
}

ToolCall represents a request from the model to invoke a tool.

When the model decides to use a tool, it returns a ToolCall in the response. Your application should:

  1. Execute the tool with the provided arguments
  2. Add the result as a new message with Role=RoleTool and ToolCallID set
  3. Send the updated conversation back to the model

Example:

if len(response.ToolCalls) > 0 {
    for _, tc := range response.ToolCalls {
        result := executeMyTool(tc.Name, tc.Arguments)
        messages = append(messages, core.Message{
            Role:       core.RoleTool,
            ToolCallID: tc.ID,
            Content:    []core.ContentBlock{{Type: core.ContentTypeText, Text: result}},
        })
    }
    // Call Chat again with the tool results
    finalResponse, _ := client.Chat(ctx, messages)
}

type Usage

type Usage struct {
	// PromptTokens is the number of tokens in the input (your messages).
	PromptTokens int `json:"prompt_tokens"`

	// CompletionTokens is the number of tokens in the output (model's response).
	CompletionTokens int `json:"completion_tokens"`

	// TotalTokens is the sum of PromptTokens and CompletionTokens.
	// This is typically what you're billed for.
	TotalTokens int `json:"total_tokens"`
}

Usage tracks token consumption for a request.

Tokens are the basic units that LLM providers use for billing. Different providers may count tokens differently, but the general principle is: more tokens = higher cost.

Example:

response, _ := client.Chat(ctx, messages)
if response.Usage != nil {
    fmt.Printf("Prompt: %d tokens\n", response.Usage.PromptTokens)
    fmt.Printf("Completion: %d tokens\n", response.Usage.CompletionTokens)
    fmt.Printf("Total: %d tokens\n", response.Usage.TotalTokens)
}
