sdk

package module
v1.1.1
Published: Nov 23, 2025 License: Apache-2.0 Imports: 18 Imported by: 0

README

PromptKit SDK

High-level Go SDK for building LLM applications with PromptKit. The SDK provides two API levels:

  • High-Level API (ConversationManager): Simple, opinionated interface for common use cases
  • Low-Level API (PipelineBuilder): Full control over pipeline construction and middleware

Features

PromptPack-First: Load compiled .pack.json files with prompts, tools, and validators
Full Pipeline Integration: Uses PromptKit's pipeline architecture with middleware
State Persistence: Built-in support for Redis, Postgres, or in-memory state stores
Multi-Turn Conversations: Automatic conversation history management
Tool Execution: Register and execute tools with LLM guidance
Custom Middleware: Inject custom logic for context building, observability, etc.
Thread-Safe: Designed for concurrent multi-tenant web APIs
Provider Agnostic: Works with OpenAI, Claude, Gemini, and custom providers

Installation

go get github.com/AltairaLabs/PromptKit/sdk

Quick Start

High-Level API
// Error handling omitted for brevity; check errors in production code.
// 1. Create provider
provider := providers.NewOpenAIProvider("your-api-key", "gpt-4", false)

// 2. Create manager
manager, _ := sdk.NewConversationManager(
    sdk.WithProvider(provider),
)

// 3. Load pack
pack, _ := manager.LoadPack("./support.pack.json")

// 4. Create conversation
conv, _ := manager.NewConversation(ctx, pack, sdk.ConversationConfig{
    UserID:     "user123",
    PromptName: "support",
    Variables: map[string]interface{}{
        "role": "customer support",
    },
})

// 5. Send messages
resp, _ := conv.Send(ctx, "I need help")
fmt.Printf("Assistant: %s (Cost: $%.4f)\n", resp.Content, resp.Cost)

Low-Level API
// Build custom pipeline with middleware
pipe := sdk.NewPipelineBuilder().
    WithMiddleware(&MyCustomMiddleware{}).
    WithSimpleProvider(provider).  // Simple provider (no tools)
    Build()

// Or use full provider with tools
registry := sdk.NewToolRegistry()
registry.Register("search", searchTool)

pipe = sdk.NewPipelineBuilder().
    WithProvider(provider, registry, nil).  // Provider with tools
    WithTemplate().  // Add template substitution
    Build()

// Execute
result, _ := pipe.Execute(ctx, "user", "Hello!")
fmt.Println(result.Response.Content)

Convenience Methods:

  • WithSimpleProvider(provider) - Provider without tool support
  • WithProvider(provider, registry, policy) - Provider with tools and execution policy
  • WithTemplate() - Template variable substitution ({{variable}})
  • WithMiddleware(m) - Add custom middleware

These methods leverage battle-tested middleware from runtime/pipeline/middleware.

Examples

See the examples/ directory:

  • custom-middleware - injecting custom middleware into a pipeline
  • hitl-approval - human-in-the-loop tool approval
  • streaming - streaming responses
  • tools - registering and executing tools

Run an example:

cd examples/custom-middleware
go run main.go

Documentation

Key Concepts

PromptPacks

Compiled JSON files containing prompts, variables, tools, and validators:

packc compile -c prompts/ -o support.pack.json
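A compiled pack is plain JSON. A minimal illustrative shape (field names follow the Pack and Prompt types documented below; the values here are made up):

```json
{
  "id": "support-pack",
  "name": "Support Pack",
  "version": "1.0.0",
  "description": "Prompts for customer support flows",
  "template_engine": {
    "version": "1.0",
    "syntax": "mustache",
    "features": []
  },
  "prompts": {
    "support": {
      "id": "support",
      "name": "Support",
      "description": "General support assistant",
      "version": "1.0.0",
      "system_template": "You are a {{role}} assistant.",
      "variables": [
        {
          "name": "role",
          "type": "string",
          "required": true,
          "description": "Assistant role"
        }
      ]
    }
  }
}
```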
State Persistence
// In-memory (default)
manager, _ := sdk.NewConversationManager(
    sdk.WithProvider(provider),
)

// Redis
redisStore := statestore.NewRedisStore(...)
manager, _ := sdk.NewConversationManager(
    sdk.WithProvider(provider),
    sdk.WithStateStore(redisStore),
)
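With a persistent store, an existing conversation can be resumed later by ID. A sketch using GetConversation (documented below); the conversation ID is one previously returned by conv.GetID():

```go
// Resume a conversation that was persisted in the state store.
conv, err := manager.GetConversation(ctx, conversationID, pack)
if err != nil {
    log.Fatal(err)
}

// History is restored automatically; continue the dialogue.
resp, err := conv.Send(ctx, "Where were we?")
if err != nil {
    log.Fatal(err)
}
fmt.Println(resp.Content)
```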
Custom Middleware
type MetricsMiddleware struct{}

func (m *MetricsMiddleware) Process(execCtx *pipeline.ExecutionContext, next func() error) error {
    start := time.Now()
    err := next()
    // recordMetric is a stand-in for your metrics sink (Prometheus, StatsD, etc.)
    recordMetric("duration", time.Since(start))
    recordMetric("tokens", execCtx.CostInfo.InputTokens + execCtx.CostInfo.OutputTokens)
    return err
}

func (m *MetricsMiddleware) StreamChunk(execCtx *pipeline.ExecutionContext, chunk *providers.StreamChunk) error {
    return nil
}

// Use it
pipe := sdk.NewPipelineBuilder().
    WithMiddleware(&MetricsMiddleware{}).
    WithSimpleProvider(provider).
    Build()

Testing

cd sdk
go test -v ./...

Test Results:

  • ✅ PackManager: 12/12 tests passing
  • ✅ ConversationManager: 4/4 tests passing
  • ✅ PipelineBuilder: 4/4 tests passing
  • ✅ ToolRegistry: Included in conversation tests

Architecture

The SDK is built on PromptKit's runtime components:

┌─────────────────────────────────────────┐
│           SDK (High-Level)              │
│  ┌─────────────────────────────────┐   │
│  │   ConversationManager           │   │
│  │   - Load PromptPacks            │   │
│  │   - Create conversations        │   │
│  │   - Auto pipeline construction  │   │
│  └─────────────────────────────────┘   │
└─────────────────────────────────────────┘
                    │
┌─────────────────────────────────────────┐
│          SDK (Low-Level)                │
│  ┌─────────────────────────────────┐   │
│  │   PipelineBuilder               │   │
│  │   - Custom middleware           │   │
│  │   - Full pipeline control       │   │
│  └─────────────────────────────────┘   │
└─────────────────────────────────────────┘
                    │
┌─────────────────────────────────────────┐
│         Runtime Components              │
│  - Pipeline & Middleware                │
│  - Providers (OpenAI/Claude/Gemini)     │
│  - StateStore (Redis/Postgres/Memory)   │
│  - Tools, Validators, Types             │
└─────────────────────────────────────────┘

License

MIT License

Documentation

Overview

Package sdk provides a high-level SDK for building LLM applications with PromptKit.

The SDK is built around PromptPacks - compiled JSON files containing prompts, variables, tools, and validators. This PromptPack-first approach ensures you get the full benefits of PromptKit's pipeline architecture including:

  • Prompt assembly with variable interpolation
  • Template rendering with fragments
  • Tool orchestration and governance
  • Response validation and guardrails
  • State persistence across conversations

Two API Levels:

High-Level API (ConversationManager):

  • Simple interface for common use cases
  • Automatic pipeline construction
  • Load pack, create conversation, send messages

Low-Level API (PipelineBuilder):

  • Custom middleware injection
  • Full pipeline control
  • Advanced use cases (custom context builders, observability)

Index

Constants

View Source
const (
	RoleAssistant = "assistant"
	RoleUser      = "user"
	RoleTool      = "tool"
)

Role constants for message types

Variables

View Source
var (
	ErrPackNotFound     = errors.New("pack not found")
	ErrPromptNotFound   = errors.New("prompt not found")
	ErrInvalidConfig    = errors.New("invalid configuration")
	ErrProviderFailed   = errors.New("provider request failed")
	ErrValidationFailed = errors.New("validation failed")
)

Common error types for better error handling

Functions

func IsRetryableError

func IsRetryableError(err error) bool

IsRetryableError determines if an operation should be retried.
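The sentinel errors above can be combined with errors.Is and the retry helpers. A sketch, assuming LoadPack wraps these sentinels:

```go
pack, err := manager.LoadPack("./support.pack.json")
switch {
case err == nil:
    // proceed with the loaded pack
case errors.Is(err, sdk.ErrPackNotFound):
    log.Fatalf("pack not found: %v", err)
case sdk.IsRetryableError(err):
    time.Sleep(time.Second) // back off, then retry (sketch)
default:
    log.Fatalf("load failed: %v", err)
}
```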

func IsTemporaryError

func IsTemporaryError(err error) bool

IsTemporaryError checks if an error is temporary and should be retried.

func WrapPackError

func WrapPackError(err error, packPath string) error

WrapPackError wraps an error with pack context information.

func WrapProviderError

func WrapProviderError(err error, provider string) error

WrapProviderError wraps an error with provider context information.

func WrapValidationError

func WrapValidationError(err error, validator string) error

WrapValidationError wraps an error with validation context information.

Types

type Conversation

type Conversation struct {
	// contains filtered or unexported fields
}

Conversation represents an active conversation

func (*Conversation) AddToolResult

func (c *Conversation) AddToolResult(toolCallID, result string) error

AddToolResult adds a tool execution result to the conversation. This is used to provide the result of a tool call that was pending approval.

Parameters:

  • toolCallID: The ID of the tool call (from MessageToolCall.ID)
  • result: The JSON string result from the tool execution

func (*Conversation) Continue

func (c *Conversation) Continue(ctx context.Context) (*Response, error)

Continue resumes execution after tool results have been added. This should be called after one or more AddToolResult() calls to continue the conversation with the LLM using the tool results.

Returns the assistant's response after processing the tool results.
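Together with HasPendingTools, GetPendingTools, and AddToolResult, this supports a human-in-the-loop approval loop. A sketch; the PendingToolInfo field name (ToolCallID) and executeApproved are assumptions for illustration:

```go
resp, err := conv.Send(ctx, "Refund order #42")
if err != nil {
    log.Fatal(err)
}

// Drain pending tool calls until the LLM produces a final answer.
for conv.HasPendingTools() {
    for _, t := range conv.GetPendingTools() {
        // Hypothetical helper: run the tool after human approval.
        result := executeApproved(t)
        if err := conv.AddToolResult(t.ToolCallID, result); err != nil {
            log.Fatal(err)
        }
    }
    // Resume the conversation with the tool results attached.
    if resp, err = conv.Continue(ctx); err != nil {
        log.Fatal(err)
    }
}
fmt.Println(resp.Content)
```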

func (*Conversation) GetHistory

func (c *Conversation) GetHistory() []types.Message

GetHistory returns the conversation message history

func (*Conversation) GetID

func (c *Conversation) GetID() string

GetID returns the conversation ID

func (*Conversation) GetPendingTools

func (c *Conversation) GetPendingTools() []tools.PendingToolInfo

GetPendingTools returns information about pending tool calls that require approval. This extracts PendingToolInfo from the conversation state metadata.

func (*Conversation) GetUserID

func (c *Conversation) GetUserID() string

GetUserID returns the user ID

func (*Conversation) HasPendingTools

func (c *Conversation) HasPendingTools() bool

HasPendingTools checks if the conversation has any pending tool calls awaiting approval

func (*Conversation) Send

func (c *Conversation) Send(ctx context.Context, userMessage string, opts ...SendOptions) (*Response, error)

Send sends a user message and gets an assistant response

func (*Conversation) SendStream

func (c *Conversation) SendStream(ctx context.Context, userMessage string, opts ...SendOptions) (<-chan StreamEvent, error)

SendStream sends a user message and returns a streaming response
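Consuming the stream, using the StreamEvent type documented below:

```go
events, err := conv.SendStream(ctx, "Tell me a story")
if err != nil {
    log.Fatal(err)
}
for ev := range events {
    switch ev.Type {
    case "content":
        fmt.Print(ev.Content) // incremental content chunk
    case "error":
        log.Fatal(ev.Error)
    case "done":
        // Final carries the complete Response, including cost.
        fmt.Printf("\nTotal cost: $%.4f\n", ev.Final.Cost)
    }
}
```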

type ConversationConfig

type ConversationConfig struct {
	// Required fields
	UserID     string // User who owns this conversation
	PromptName string // Task type from the pack (e.g., "support", "sales")

	// Optional fields
	ConversationID string                 // If empty, auto-generated
	Variables      map[string]interface{} // Template variables
	SystemPrompt   string                 // Override system prompt
	Metadata       map[string]interface{} // Custom metadata

	// Context policy (token budget management)
	ContextPolicy *middleware.ContextBuilderPolicy
}

ConversationConfig configures a new conversation

type ConversationManager

type ConversationManager struct {
	// contains filtered or unexported fields
}

ConversationManager provides high-level API for managing LLM conversations. It automatically constructs the pipeline with appropriate middleware based on the PromptPack configuration.

Key Features:

  • Load PromptPacks and create conversations for specific prompts
  • Automatic pipeline construction with middleware stack
  • State persistence via StateStore
  • Support for streaming and tool execution
  • Multi-turn conversation management

func NewConversationManager

func NewConversationManager(opts ...ManagerOption) (*ConversationManager, error)

NewConversationManager creates a new ConversationManager

func (*ConversationManager) CreateConversation

func (cm *ConversationManager) CreateConversation(ctx context.Context, pack *Pack, config ConversationConfig) (*Conversation, error)

CreateConversation creates a new conversation for a specific prompt in the pack

func (*ConversationManager) GetConversation

func (cm *ConversationManager) GetConversation(
	ctx context.Context, conversationID string, pack *Pack,
) (*Conversation, error)

GetConversation loads an existing conversation from state store

func (*ConversationManager) LoadPack

func (cm *ConversationManager) LoadPack(packPath string) (*Pack, error)

LoadPack loads a PromptPack from a file

type CustomContextMiddleware

type CustomContextMiddleware interface {
	pipeline.Middleware
}

CustomContextMiddleware is an example of custom middleware for context building. Users can implement similar middleware for their specific needs.

Example:

type MyContextMiddleware struct {
    ragClient *RAGClient
}

func (m *MyContextMiddleware) Process(execCtx *pipeline.ExecutionContext, next func() error) error {
    // Extract query from last user message
    query := execCtx.Messages[len(execCtx.Messages)-1].Content

    // Fetch relevant documents
    docs, _ := m.ragClient.Search(query, 5)

    // Add to variables for template substitution
    execCtx.Variables["rag_context"] = formatDocs(docs)

    return next()
}

type ManagerConfig

type ManagerConfig struct {
	// MaxConcurrentExecutions limits parallel pipeline executions
	MaxConcurrentExecutions int

	// DefaultTimeout for LLM requests
	DefaultTimeout time.Duration

	// EnableMetrics enables built-in metrics collection
	EnableMetrics bool
}

ManagerConfig configures the ConversationManager

type ManagerOption

type ManagerOption func(*ConversationManager) error

ManagerOption configures ConversationManager

func WithConfig

func WithConfig(config ManagerConfig) ManagerOption

WithConfig sets the manager configuration

func WithProvider

func WithProvider(provider providers.Provider) ManagerOption

WithProvider sets the LLM provider

func WithStateStore

func WithStateStore(store statestore.Store) ManagerOption

WithStateStore sets the state persistence backend

func WithToolRegistry

func WithToolRegistry(registry *tools.Registry) ManagerOption

WithToolRegistry sets the tool registry for tool execution

type MiddlewareConfig

type MiddlewareConfig struct {
	Type   string                 `json:"type"`
	Config map[string]interface{} `json:"config,omitempty"`
}

MiddlewareConfig defines a single middleware configuration

type ModelOverride

type ModelOverride struct {
	SystemTemplateSuffix string `json:"system_template_suffix,omitempty"`
}

ModelOverride defines model-specific template overrides

type ObservabilityMiddleware

type ObservabilityMiddleware interface {
	pipeline.Middleware
}

ObservabilityMiddleware is an example of observability middleware. Users can implement similar middleware for LangFuse, DataDog, etc.

Example:

type LangFuseMiddleware struct {
    client *langfuse.Client
}

func (m *LangFuseMiddleware) Process(execCtx *pipeline.ExecutionContext, next func() error) error {
    traceID := m.client.StartTrace(...)
    spanID := m.client.StartSpan(traceID, ...)

    start := time.Now()
    err := next()
    duration := time.Since(start)

    m.client.EndSpan(spanID, langfuse.SpanResult{
        Duration: duration,
        TokensInput: execCtx.CostInfo.InputTokens,
        TokensOutput: execCtx.CostInfo.OutputTokens,
        Error: err,
    })

    return err
}

type Pack

type Pack struct {
	// Pack identity
	ID          string `json:"id"`
	Name        string `json:"name"`
	Version     string `json:"version"`
	Description string `json:"description"`

	// Shared configuration across all prompts
	TemplateEngine TemplateEngine `json:"template_engine"`

	// Map of task_type -> prompt configuration
	Prompts map[string]*Prompt `json:"prompts"`

	// Shared fragments used by all prompts
	Fragments map[string]string `json:"fragments,omitempty"`

	// Tool definitions (referenced by prompts)
	Tools map[string]*Tool `json:"tools,omitempty"`
	// contains filtered or unexported fields
}

Pack represents a loaded PromptPack containing multiple prompts for related task types. A pack is a portable, JSON-based bundle created by the packc compiler.

DESIGN DECISION: Why separate Pack types in sdk vs runtime?

This SDK Pack is optimized for LOADING & EXECUTION:

  • Loaded from .pack.json files for application use
  • Includes Tools map for runtime tool access
  • Includes filePath to track source file location
  • Thread-safe with sync.RWMutex for concurrent access
  • Returns validation errors for application error handling
  • Rich types (*Variable, *Validator, *Tool) with full functionality
  • Has CreateRegistry() to convert to runtime.Registry for pipeline execution
  • Has convertToRuntimeConfig() to bridge SDK ↔ runtime formats

The runtime.prompt.Pack is optimized for COMPILATION:

  • Created by PackCompiler during prompt compilation
  • Includes Compilation and Metadata fields for provenance tracking
  • Returns validation warnings ([]string) for compiler feedback
  • No thread-safety (single-threaded compilation process)
  • Simple types for clean JSON serialization
  • No conversion methods (produces, doesn't consume)

Both types serialize to/from the SAME JSON format (.pack.json files), ensuring full interoperability between compilation and execution phases. The duplication is intentional and provides:

  1. Clear separation of concerns (compile vs execute)
  2. No circular dependencies (sdk imports runtime, not vice versa)
  3. Independent evolution of each module
  4. Type-specific optimizations (thread-safety, validation behavior)

Design: A pack contains MULTIPLE prompts (task_types) that share common configuration like template engine and fragments, but each prompt has its own template, variables, tools, and validators.

See runtime/prompt/pack.go for the corresponding runtime-side documentation.

func (*Pack) CreateRegistry

func (p *Pack) CreateRegistry() (*prompt.Registry, error)

CreateRegistry creates a runtime prompt.Registry from this pack. The registry allows the runtime pipeline to access prompts using the standard prompt assembly middleware. Each prompt in the pack is registered by its task_type.

This bridges the SDK's .pack.json format with the runtime's prompt.Registry format.

func (*Pack) GetPrompt

func (p *Pack) GetPrompt(taskType string) (*Prompt, error)

GetPrompt retrieves a specific prompt from a pack

func (*Pack) GetTools

func (p *Pack) GetTools(taskType string) ([]*Tool, error)

GetTools retrieves tools used by a specific prompt

func (*Pack) ListPrompts

func (p *Pack) ListPrompts() []string

ListPrompts returns all available task types in the pack
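Enumerating a pack's prompts, for example:

```go
for _, taskType := range pack.ListPrompts() {
    p, err := pack.GetPrompt(taskType)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s: %s\n", taskType, p.Description)
}
```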

type PackManager

type PackManager struct {
	// contains filtered or unexported fields
}

PackManager manages loading and caching of PromptPacks

func NewPackManager

func NewPackManager() *PackManager

NewPackManager creates a new PackManager

func (*PackManager) GetPack

func (pm *PackManager) GetPack(packPath string) (*Pack, bool)

GetPack retrieves a cached pack by path

func (*PackManager) LoadPack

func (pm *PackManager) LoadPack(packPath string) (*Pack, error)

LoadPack loads a PromptPack from a .pack.json file

type Parameters

type Parameters struct {
	Temperature float32 `json:"temperature,omitempty"`
	MaxTokens   int     `json:"max_tokens,omitempty"`
	TopP        float32 `json:"top_p,omitempty"`
	TopK        *int    `json:"top_k,omitempty"`
}

Parameters defines LLM generation parameters

type PipelineBuilder

type PipelineBuilder struct {
	// contains filtered or unexported fields
}

PipelineBuilder provides low-level API for constructing custom pipelines with middleware.

Use this when you need:

  • Custom middleware injection
  • Custom context builders
  • Observability integration (LangFuse, DataDog, etc.)
  • Advanced pipeline control

For simple use cases, use ConversationManager instead.

Example:

builder := sdk.NewPipelineBuilder().
    WithMiddleware(customMiddleware).
    WithMiddleware(observabilityMiddleware).
    WithSimpleProvider(provider)

pipe := builder.Build()
result, err := pipe.Execute(ctx, "user", "Hello!")

func NewPipelineBuilder

func NewPipelineBuilder() *PipelineBuilder

NewPipelineBuilder creates a new pipeline builder

func (*PipelineBuilder) Build

func (pb *PipelineBuilder) Build() *pipeline.Pipeline

Build constructs the pipeline

func (*PipelineBuilder) WithConfig

WithConfig sets the pipeline runtime configuration

func (*PipelineBuilder) WithMiddleware

func (pb *PipelineBuilder) WithMiddleware(m pipeline.Middleware) *PipelineBuilder

WithMiddleware adds middleware to the pipeline. Middleware executes in the order added.

func (*PipelineBuilder) WithProvider

func (pb *PipelineBuilder) WithProvider(provider providers.Provider, toolRegistry *tools.Registry, toolPolicy *pipeline.ToolPolicy) *PipelineBuilder

WithProvider adds a provider middleware to the pipeline. This is a convenience method that wraps the runtime ProviderMiddleware.

func (*PipelineBuilder) WithSimpleProvider

func (pb *PipelineBuilder) WithSimpleProvider(provider providers.Provider) *PipelineBuilder

WithSimpleProvider adds a provider middleware without tools or custom config. This is the simplest way to add LLM execution to a pipeline.

func (*PipelineBuilder) WithTemplate

func (pb *PipelineBuilder) WithTemplate() *PipelineBuilder

WithTemplate adds template substitution middleware. This replaces {{variable}} placeholders in the system prompt.

type PipelineConfig

type PipelineConfig struct {
	Stages     []string            `json:"stages"`
	Middleware []*MiddlewareConfig `json:"middleware"`
}

PipelineConfig defines pipeline middleware configuration

type Prompt

type Prompt struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
	Version     string `json:"version"`

	// Template
	SystemTemplate string `json:"system_template"`

	// Variables for this prompt
	Variables []*Variable `json:"variables"`

	// Tool references (names that map to pack.Tools)
	ToolNames []string `json:"tools,omitempty"`

	// Tool policy
	ToolPolicy *ToolPolicy `json:"tool_policy,omitempty"`

	// Multimodal media configuration
	MediaConfig *prompt.MediaConfig `json:"media,omitempty"`

	// Pipeline configuration
	Pipeline *PipelineConfig `json:"pipeline,omitempty"`

	// LLM parameters
	Parameters *Parameters `json:"parameters,omitempty"`

	// Validators
	Validators []*Validator `json:"validators,omitempty"`

	// Model testing results
	TestedModels []*TestedModel `json:"tested_models,omitempty"`

	// Model-specific overrides
	ModelOverrides map[string]*ModelOverride `json:"model_overrides,omitempty"`
}

Prompt represents a single prompt configuration within a pack

type Response

type Response struct {
	Content      string
	ToolCalls    []types.MessageToolCall
	TokensUsed   int
	Cost         float64
	LatencyMs    int64
	Validations  []types.ValidationResult
	Truncated    bool                    // True if context was truncated
	PendingTools []tools.PendingToolInfo // Tools awaiting external approval/input
}

Response represents a conversation turn response

type SendOptions

type SendOptions struct {
	Stream       bool                   // Enable streaming
	MaxToolCalls int                    // Max tool calls per turn (0 = use prompt default)
	Metadata     map[string]interface{} // Turn-specific metadata
}

SendOptions configures message sending behavior
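For example, capping tool calls and attaching metadata for a single turn:

```go
resp, err := conv.Send(ctx, "Summarize this ticket", sdk.SendOptions{
    MaxToolCalls: 2, // override the prompt's default for this turn
    Metadata:     map[string]interface{}{"source": "web"},
})
if err != nil {
    log.Fatal(err)
}
fmt.Println(resp.Content)
```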

type StreamEvent

type StreamEvent struct {
	Type     string // "content", "tool_call", "done", "error"
	Content  string
	ToolCall *types.MessageToolCall
	Error    error
	Final    *Response // Set when Type="done"
}

StreamEvent represents a streaming response event

type TemplateEngine

type TemplateEngine struct {
	Version  string   `json:"version"`
	Syntax   string   `json:"syntax"`
	Features []string `json:"features"`
}

TemplateEngine describes the template engine configuration shared across prompts

type TestedModel

type TestedModel struct {
	Provider     string  `json:"provider"`
	Model        string  `json:"model"`
	Date         string  `json:"date"`
	SuccessRate  float64 `json:"success_rate"`
	AvgTokens    int     `json:"avg_tokens"`
	AvgCost      float64 `json:"avg_cost"`
	AvgLatencyMs int     `json:"avg_latency_ms"`
}

TestedModel contains testing results for a specific model

type Tool

type Tool struct {
	Name        string                 `json:"name"`
	Description string                 `json:"description"`
	Parameters  map[string]interface{} `json:"parameters"`
}

Tool defines a tool that can be called by the LLM

type ToolPolicy

type ToolPolicy struct {
	ToolChoice          string   `json:"tool_choice"` // "auto", "required", "none"
	MaxRounds           int      `json:"max_rounds,omitempty"`
	MaxToolCallsPerTurn int      `json:"max_tool_calls_per_turn,omitempty"`
	Blocklist           []string `json:"blocklist,omitempty"`
}

ToolPolicy defines tool usage constraints

type Validator

type Validator struct {
	Type            string                 `json:"type"`
	Enabled         bool                   `json:"enabled"`
	FailOnViolation bool                   `json:"fail_on_violation"`
	Params          map[string]interface{} `json:"params"`
}

Validator defines a validation rule

type Variable

type Variable struct {
	Name        string                 `json:"name"`
	Type        string                 `json:"type"` // "string", "number", "boolean", "object", "array"
	Required    bool                   `json:"required"`
	Default     interface{}            `json:"default,omitempty"`
	Description string                 `json:"description"`
	Example     interface{}            `json:"example,omitempty"`
	Validation  map[string]interface{} `json:"validation,omitempty"`
}

Variable defines a template variable with validation rules

Directories

Path Synopsis

  • examples
  • examples/hitl-approval (command)
  • examples/streaming (command)
  • examples/tools (command)
