Documentation
Overview ¶
Package sdk provides a simple API for LLM conversations using PromptPack files.
SDK v2 is a pack-first SDK where everything starts from a .pack.json file. The pack contains prompts, variables, tools, validators, and model configuration. The SDK loads the pack and provides a minimal API to interact with LLMs.
Quick Start ¶
The simplest possible usage takes only a few lines:
conv, err := sdk.Open("./assistant.pack.json", "chat")
if err != nil {
	log.Fatal(err)
}
defer conv.Close()

resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())
Core Concepts ¶
Opening a Conversation:
Use Open to load a pack file and create a conversation for a specific prompt:
// Minimal - provider auto-detected from environment
conv, _ := sdk.Open("./demo.pack.json", "troubleshooting")
// With options - override model, provider, etc.
conv, _ := sdk.Open("./demo.pack.json", "troubleshooting",
sdk.WithModel("gpt-4o"),
sdk.WithAPIKey(os.Getenv("MY_OPENAI_KEY")),
)
Variables:
Variables defined in the pack are populated at runtime:
conv.SetVar("customer_id", "acme-corp")
conv.SetVars(map[string]any{
	"customer_name": "ACME Corporation",
	"tier":          "premium",
})
Tools:
Tools defined in the pack just need implementation handlers:
conv.OnTool("list_devices", func(args map[string]any) (any, error) {
	return myAPI.ListDevices(args["customer_id"].(string))
})
Streaming:
Stream responses chunk by chunk:
for chunk := range conv.Stream(ctx, "Tell me a story") {
	fmt.Print(chunk.Text)
}
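Under the hood, Stream returns a receive-only channel that is ranged over until the response completes. The pattern can be sketched with a plain buffered channel standing in for the SDK's StreamChunk type (a stand-in for illustration, not the real type):

```go
package main

import (
	"fmt"
	"strings"
)

// collect drains a chunk channel into one string, the same way the
// Stream loop above concatenates chunk text until the channel closes.
func collect(chunks <-chan string) string {
	var b strings.Builder
	for c := range chunks {
		b.WriteString(c)
	}
	return b.String()
}

func main() {
	// A plain channel stands in for the SDK's <-chan StreamChunk.
	chunks := make(chan string, 3)
	chunks <- "Once "
	chunks <- "upon "
	chunks <- "a time"
	close(chunks)
	fmt.Println(collect(chunks)) // Once upon a time
}
```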
Design Principles ¶
- Pack is the Source of Truth - The .pack.json file defines prompts, tools, validators, and pipeline configuration. The SDK configures itself automatically.
- Convention Over Configuration - API keys from environment, provider auto-detection, model defaults from pack. Override only when needed.
- Progressive Disclosure - Simple things are simple; advanced features are available but not required.
- Same Runtime, Same Behavior - SDK v2 uses the same runtime pipeline as Arena. Pack-defined behaviors work identically.
- Thin Wrapper - No type duplication. Core types like Message, ContentPart, CostInfo come directly from runtime/types.
Package Structure ¶
The SDK is organized into sub-packages for specific functionality:
- sdk (this package): Entry point, Open, Conversation, Response
- sdk/tools: Typed tool handlers, HITL support
- sdk/stream: Streaming response handling
- sdk/message: Multimodal message building
- sdk/hooks: Event subscription and lifecycle hooks
- sdk/validation: Validator registration and error handling
Most users only need to import the root sdk package.
Runtime Types ¶
The SDK uses runtime types directly - no duplication:
import "github.com/AltairaLabs/PromptKit/runtime/types"
msg := &types.Message{Role: "user"}
msg.AddTextPart("Hello")
Key runtime types: types.Message, types.ContentPart, types.MediaContent, types.CostInfo, types.ValidationResult.
Schema Reference ¶
All pack examples conform to the PromptPack Specification v1.1.0: https://github.com/AltairaLabs/promptpack-spec/blob/main/schema/promptpack.schema.json
Index ¶
- Variables
- type A2ACapability
- type A2AConversationOpener
- type A2AServer
- type A2AServerOption
- func WithA2ACard(card *a2a.AgentCard) A2AServerOption
- func WithA2AConversationTTL(d time.Duration) A2AServerOption
- func WithA2AIdleTimeout(d time.Duration) A2AServerOption
- func WithA2AMaxBodySize(n int64) A2AServerOption
- func WithA2APort(port int) A2AServerOption
- func WithA2AReadTimeout(d time.Duration) A2AServerOption
- func WithA2ATaskStore(store A2ATaskStore) A2AServerOption
- func WithA2ATaskTTL(d time.Duration) A2AServerOption
- func WithA2AWriteTimeout(d time.Duration) A2AServerOption
- type A2ATaskStore
- type AgentToolResolver
- type Capability
- type CapabilityContext
- type ChunkType
- type ClientToolHandler
- type ClientToolRequest
- type ClientToolRequestEvent
- type Conversation
- func (c *Conversation) CheckPending(name string, args map[string]any) (*sdktools.PendingToolCall, bool)
- func (c *Conversation) Clear() error
- func (c *Conversation) Close() error
- func (c *Conversation) Continue(ctx context.Context) (*Response, error)
- func (c *Conversation) Done() (<-chan struct{}, error)
- func (c *Conversation) EventBus() *events.EventBus
- func (c *Conversation) Fork() *Conversation
- func (c *Conversation) GetVar(name string) (string, bool)
- func (c *Conversation) ID() string
- func (c *Conversation) Messages(ctx context.Context) []types.Message
- func (c *Conversation) OnClientTool(name string, handler ClientToolHandler)
- func (c *Conversation) OnClientTools(handlers map[string]ClientToolHandler)
- func (c *Conversation) OnStreamEvent(handler StreamEventHandler)
- func (c *Conversation) OnTool(name string, handler ToolHandler)
- func (c *Conversation) OnToolAsync(name string, checkFunc func(args map[string]any) sdktools.PendingResult, ...)
- func (c *Conversation) OnToolCtx(name string, handler ToolHandlerCtx)
- func (c *Conversation) OnToolExecutor(name string, executor tools.Executor)
- func (c *Conversation) OnToolHTTP(name string, config *sdktools.HTTPToolConfig)
- func (c *Conversation) OnTools(handlers map[string]ToolHandler)
- func (c *Conversation) PendingTools() []*sdktools.PendingToolCall
- func (c *Conversation) RejectClientTool(_ context.Context, callID, reason string)
- func (c *Conversation) RejectTool(id, reason string) (*sdktools.ToolResolution, error)
- func (c *Conversation) ResolveTool(id string) (*sdktools.ToolResolution, error)
- func (c *Conversation) Response() (<-chan providers.StreamChunk, error)
- func (c *Conversation) Resume(ctx context.Context) (*Response, error)
- func (c *Conversation) ResumeStream(ctx context.Context) <-chan StreamChunk
- func (c *Conversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
- func (c *Conversation) SendChunk(ctx context.Context, chunk *providers.StreamChunk) error
- func (c *Conversation) SendFrame(ctx context.Context, frame *session.ImageFrame) error
- func (c *Conversation) SendText(ctx context.Context, text string) error
- func (c *Conversation) SendToolResult(_ context.Context, callID string, result any) error
- func (c *Conversation) SendVideoChunk(ctx context.Context, chunk *session.VideoChunk) error
- func (c *Conversation) SessionError() error
- func (c *Conversation) SetVar(name, value string)
- func (c *Conversation) SetVars(vars map[string]any)
- func (c *Conversation) SetVarsFromEnv(prefix string)
- func (c *Conversation) Stream(ctx context.Context, message any, opts ...SendOption) <-chan StreamChunk
- func (c *Conversation) StreamRaw(ctx context.Context, message any) (<-chan streamPkg.Chunk, error)
- func (c *Conversation) StreamWithCallback(ctx context.Context, message any, opts ...SendOption) (*Response, error)
- func (c *Conversation) ToolRegistry() *tools.Registry
- func (c *Conversation) TriggerStart(ctx context.Context, message string) error
- type CredentialOption
- type EndpointResolver
- type InMemoryA2ATaskStore
- type LocalAgentExecutor
- type MCPServerBuilder
- type MapEndpointResolver
- type MultiAgentSession
- type Option
- func WithA2ATools(bridge *a2a.ToolBridge) Option
- func WithAPIKey(key string) Option
- func WithAgentEndpoints(resolver EndpointResolver) Option
- func WithAutoResize(maxWidth, maxHeight int) Option
- func WithAutoSummarize(provider providers.Provider, threshold, batchSize int) Option
- func WithAzure(endpoint, providerType, model string, opts ...PlatformOption) Option
- func WithBedrock(region, providerType, model string, opts ...PlatformOption) Option
- func WithCapability(capability Capability) Option
- func WithContextCarryForward() Option
- func WithContextRetrieval(embeddingProvider providers.EmbeddingProvider, topK int) Option
- func WithContextWindow(recentMessages int) Option
- func WithConversationID(id string) Option
- func WithEvalRegistry(r *evals.EvalTypeRegistry) Option
- func WithEvalRunner(r *evals.EvalRunner) Option
- func WithEvalsDisabled() Option
- func WithEventBus(bus *events.EventBus) Option
- func WithEventStore(store events.EventStore) Option
- func WithExecutionTimeout(d time.Duration) Option
- func WithImagePreprocessing(cfg *stage.ImagePreprocessConfig) Option
- func WithJSONMode() Option
- func WithJudgeProvider(jp handlers.JudgeProvider) Option
- func WithLogger(l *slog.Logger) Option
- func WithMCP(name, command string, args ...string) Option
- func WithMCPServer(builder *MCPServerBuilder) Option
- func WithMaxActiveSkillsOption(n int) Option
- func WithModel(model string) Option
- func WithProvider(p providers.Provider) Option
- func WithProviderHook(h hooks.ProviderHook) Option
- func WithRecording(cfg *RecordingConfig) Option
- func WithRelevanceTruncation(cfg *RelevanceConfig) Option
- func WithResponseFormat(format *providers.ResponseFormat) Option
- func WithSessionHook(h hooks.SessionHook) Option
- func WithSkillSelectorOption(s skills.SkillSelector) Option
- func WithSkillsDir(dir string) Option
- func WithSkipSchemaValidation() Option
- func WithStateStore(store statestore.Store) Option
- func WithStreamingConfig(streamingConfig *providers.StreamingInputConfig) Option
- func WithStreamingVideo(cfg *VideoStreamConfig) Option
- func WithTTS(service tts.Service) Option
- func WithTokenBudget(tokens int) Option
- func WithToolHook(h hooks.ToolHook) Option
- func WithToolRegistry(registry *tools.Registry) Option
- func WithTracerProvider(tp trace.TracerProvider) Option
- func WithTruncation(strategy string) Option
- func WithTurnDetector(detector audio.TurnDetector) Option
- func WithVADMode(sttService stt.Service, ttsService tts.Service, cfg *VADModeConfig) Option
- func WithVariableProvider(p variables.Provider) Option
- func WithVariables(vars map[string]string) Option
- func WithVertex(region, project, providerType, model string, opts ...PlatformOption) Option
- type PackError
- type PackTemplate
- type PendingClientTool
- type PendingTool
- type PlatformOption
- type ProviderError
- type RecordingConfig
- type RelevanceConfig
- type Response
- func (r *Response) ClientTools() []PendingClientTool
- func (r *Response) Cost() float64
- func (r *Response) Duration() time.Duration
- func (r *Response) HasMedia() bool
- func (r *Response) HasPendingClientTools() bool
- func (r *Response) HasToolCalls() bool
- func (r *Response) InputTokens() int
- func (r *Response) Message() *types.Message
- func (r *Response) OutputTokens() int
- func (r *Response) Parts() []types.ContentPart
- func (r *Response) PendingTools() []PendingTool
- func (r *Response) Text() string
- func (r *Response) TokensUsed() int
- func (r *Response) ToolCalls() []types.MessageToolCall
- func (r *Response) Validations() []types.ValidationResult
- type ResponseTestOption
- type SendOption
- func WithAudioData(data []byte, mimeType string) SendOption
- func WithAudioFile(path string) SendOption
- func WithDocumentData(data []byte, mimeType string) SendOption
- func WithDocumentFile(path string) SendOption
- func WithFile(name string, data []byte) SendOption (deprecated)
- func WithImageData(data []byte, mimeType string, detail ...*string) SendOption
- func WithImageFile(path string, detail ...*string) SendOption
- func WithImageURL(url string, detail ...*string) SendOption
- func WithVideoData(data []byte, mimeType string) SendOption
- func WithVideoFile(path string) SendOption
- type SessionMode
- type SkillsCapability
- type SkillsOption
- type StatefulCapability
- type StaticEndpointResolver
- type StreamChunk
- type StreamDoneEvent
- type StreamEvent
- type StreamEventHandler
- type TextDeltaEvent
- type ToolError
- type ToolHandler
- type ToolHandlerCtx
- type VADModeConfig
- type ValidationError
- type VideoStreamConfig
- type WorkflowCapability
- func (w *WorkflowCapability) Close() error
- func (w *WorkflowCapability) Init(ctx CapabilityContext) error
- func (w *WorkflowCapability) Name() string
- func (w *WorkflowCapability) RegisterTools(_ *tools.Registry)
- func (w *WorkflowCapability) RegisterToolsForState(registry *tools.Registry, state *workflow.State)
- type WorkflowConversation
- func (wc *WorkflowConversation) ActiveConversation() *Conversation
- func (wc *WorkflowConversation) AvailableEvents() []string
- func (wc *WorkflowConversation) Close() error
- func (wc *WorkflowConversation) Context() *workflow.Context
- func (wc *WorkflowConversation) CurrentPromptTask() string
- func (wc *WorkflowConversation) CurrentState() string
- func (wc *WorkflowConversation) IsComplete() bool
- func (wc *WorkflowConversation) OrchestrationMode() workflow.Orchestration
- func (wc *WorkflowConversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
- func (wc *WorkflowConversation) Transition(event string) (string, error)
Constants ¶
This section is empty.
Variables ¶
var (
	NewInMemoryA2ATaskStore = a2aserver.NewInMemoryTaskStore
	ErrTaskNotFound         = a2aserver.ErrTaskNotFound
	ErrTaskAlreadyExists    = a2aserver.ErrTaskAlreadyExists
	ErrInvalidTransition    = a2aserver.ErrInvalidTransition
	ErrTaskTerminal         = a2aserver.ErrTaskTerminal
)
Re-exported constructors and sentinel errors.
var (
	// ErrConversationClosed is returned when Send or Stream is called on a closed conversation.
	ErrConversationClosed = errors.New("conversation is closed")
	// ErrConversationNotFound is returned by Resume when the conversation ID doesn't exist.
	ErrConversationNotFound = errors.New("conversation not found")
	// ErrNoStateStore is returned by Resume when no state store is configured.
	ErrNoStateStore = errors.New("no state store configured")
	// ErrPromptNotFound is returned when the specified prompt doesn't exist in the pack.
	ErrPromptNotFound = errors.New("prompt not found in pack")
	// ErrPackNotFound is returned when the pack file doesn't exist.
	ErrPackNotFound = errors.New("pack file not found")
	// ErrProviderNotDetected is returned when no provider could be auto-detected.
	ErrProviderNotDetected = errors.New("could not detect provider: no API keys found in environment")
	// ErrToolNotRegistered is returned when the LLM calls a tool that has no handler.
	ErrToolNotRegistered = errors.New("tool handler not registered")
	// ErrToolNotInPack is returned when trying to register a handler for a tool not in the pack.
	ErrToolNotInPack = errors.New("tool not defined in pack")
	// ErrNoWorkflow is returned when OpenWorkflow is called on a pack without a workflow section.
	ErrNoWorkflow = errors.New("pack has no workflow section")
	// ErrWorkflowClosed is returned when Send or Transition is called on a closed WorkflowConversation.
	ErrWorkflowClosed = errors.New("workflow conversation is closed")
	// ErrWorkflowTerminal is returned when Transition is called on a terminal state.
	ErrWorkflowTerminal = errors.New("workflow is in terminal state")
)
Sentinel errors for common failure cases.
Functions ¶
This section is empty.
Types ¶
type A2ACapability ¶ added in v1.3.1
type A2ACapability struct {
	// contains filtered or unexported fields
}
A2ACapability provides A2A agent tools to conversations. It unifies both the bridge path (explicit WithA2ATools) and the pack path (agents section in pack) under a single capability.
func NewA2ACapability ¶ added in v1.3.1
func NewA2ACapability() *A2ACapability
NewA2ACapability creates a new A2ACapability.
func (*A2ACapability) Close ¶ added in v1.3.1
func (c *A2ACapability) Close() error
Close is a no-op for A2ACapability.
func (*A2ACapability) Init ¶ added in v1.3.1
func (c *A2ACapability) Init(ctx CapabilityContext) error
Init initializes the capability with pack context. If the pack has an agents section, it creates an AgentToolResolver.
func (*A2ACapability) Name ¶ added in v1.3.1
func (c *A2ACapability) Name() string
Name returns the capability identifier.
func (*A2ACapability) RegisterTools ¶ added in v1.3.1
func (c *A2ACapability) RegisterTools(registry *tools.Registry)
RegisterTools registers A2A tools into the registry. Bridge path: registers bridge tool descriptors + A2A executor. Pack path: resolves agent tools from prompt tools list + registers executor.
type A2AConversationOpener ¶ added in v1.1.11
type A2AConversationOpener = a2aserver.ConversationOpener
A2AConversationOpener creates or retrieves a conversation for a context ID.
func A2AOpener ¶ added in v1.1.11
func A2AOpener(packPath, promptName string, opts ...Option) A2AConversationOpener
A2AOpener returns an A2AConversationOpener backed by SDK conversations. Each call to the returned function opens a new conversation for the given context ID using sdk.Open with the provided pack path, prompt name, and options.
type A2AServer ¶ added in v1.1.11
A2AServer is the A2A-protocol HTTP server.
func NewA2AServer ¶ added in v1.1.11
func NewA2AServer(opener A2AConversationOpener, opts ...A2AServerOption) *A2AServer
NewA2AServer creates a new A2A server with the given opener and options.
type A2AServerOption ¶ added in v1.1.11
A2AServerOption configures an A2AServer.
func WithA2ACard ¶ added in v1.1.11
func WithA2ACard(card *a2a.AgentCard) A2AServerOption
WithA2ACard sets the agent card served at /.well-known/agent.json.
func WithA2AConversationTTL ¶ added in v1.3.2
func WithA2AConversationTTL(d time.Duration) A2AServerOption
WithA2AConversationTTL sets the conversation TTL for eviction.
func WithA2AIdleTimeout ¶ added in v1.3.2
func WithA2AIdleTimeout(d time.Duration) A2AServerOption
WithA2AIdleTimeout sets the idle timeout.
func WithA2AMaxBodySize ¶ added in v1.3.2
func WithA2AMaxBodySize(n int64) A2AServerOption
WithA2AMaxBodySize sets the max body size.
func WithA2APort ¶ added in v1.1.11
func WithA2APort(port int) A2AServerOption
WithA2APort sets the TCP port for ListenAndServe.
func WithA2AReadTimeout ¶ added in v1.3.2
func WithA2AReadTimeout(d time.Duration) A2AServerOption
WithA2AReadTimeout sets the read timeout.
func WithA2ATaskStore ¶ added in v1.1.11
func WithA2ATaskStore(store A2ATaskStore) A2AServerOption
WithA2ATaskStore sets a custom task store.
func WithA2ATaskTTL ¶ added in v1.3.2
func WithA2ATaskTTL(d time.Duration) A2AServerOption
WithA2ATaskTTL sets the task TTL for eviction.
func WithA2AWriteTimeout ¶ added in v1.3.2
func WithA2AWriteTimeout(d time.Duration) A2AServerOption
WithA2AWriteTimeout sets the write timeout.
type A2ATaskStore ¶ added in v1.1.11
A2ATaskStore is the task persistence interface.
type AgentToolResolver ¶ added in v1.3.1
type AgentToolResolver struct {
	// contains filtered or unexported fields
}
AgentToolResolver resolves agent member references in tool lists to A2A-compatible tool descriptors.
func NewAgentToolResolver ¶ added in v1.3.1
func NewAgentToolResolver(pack *prompt.Pack) *AgentToolResolver
NewAgentToolResolver creates a resolver from a compiled pack. Returns nil if the pack has no agents section.
func (*AgentToolResolver) IsAgentTool ¶ added in v1.3.1
func (r *AgentToolResolver) IsAgentTool(toolName string) bool
IsAgentTool checks if a tool name refers to an agent member. It accepts both bare member keys ("summarizer") and qualified names ("a2a__summarizer").
func (*AgentToolResolver) MemberNames ¶ added in v1.3.1
func (r *AgentToolResolver) MemberNames() []string
MemberNames returns the names of all agent members known to this resolver.
func (*AgentToolResolver) ResolveAgentTools ¶ added in v1.3.1
func (r *AgentToolResolver) ResolveAgentTools(toolNames []string) []*tools.ToolDescriptor
ResolveAgentTools returns tool descriptors for all agent members that appear in the given tool names list. Each descriptor has Mode "a2a", an input schema with a required "query" field, and (if an EndpointResolver is set) an AgentURL in A2AConfig.
func (*AgentToolResolver) SetEndpointResolver ¶ added in v1.3.1
func (r *AgentToolResolver) SetEndpointResolver(er EndpointResolver)
SetEndpointResolver configures how agent URLs are resolved. When nil, descriptors are created without an AgentURL (suitable for testing or when endpoints are resolved later).
type Capability ¶ added in v1.3.1
type Capability interface {
	// Name returns the capability identifier (e.g., "workflow", "a2a").
	Name() string

	// Init initializes the capability with pack context.
	Init(ctx CapabilityContext) error

	// RegisterTools registers the capability's tools into the registry.
	RegisterTools(registry *tools.Registry)

	// Close releases any resources held by the capability.
	Close() error
}
Capability represents a platform feature that provides namespaced tools. Capabilities are auto-inferred from pack structure or explicitly added via WithCapability.
type CapabilityContext ¶ added in v1.3.1
CapabilityContext provides read-only access to pack and config during Init.
type ChunkType ¶ added in v1.1.5
type ChunkType int
ChunkType identifies the type of a streaming chunk.
const (
	// ChunkText indicates the chunk contains text content.
	ChunkText ChunkType = iota
	// ChunkToolCall indicates the chunk contains a tool call.
	ChunkToolCall
	// ChunkMedia indicates the chunk contains media content.
	ChunkMedia
	// ChunkDone indicates streaming is complete.
	ChunkDone
	// ChunkClientTool indicates a client tool request that needs caller fulfillment.
	ChunkClientTool
)
type ClientToolHandler ¶ added in v1.3.8
type ClientToolHandler func(ctx context.Context, req ClientToolRequest) (any, error)
ClientToolHandler is a function that fulfills a client-side tool call. It receives a context (carrying the tool timeout from ClientConfig.TimeoutMs) and a ClientToolRequest with the invocation details.
The return value should be JSON-serializable and will be sent back to the LLM as the tool result.
type ClientToolRequest ¶ added in v1.3.8
type ClientToolRequest struct {
	// ToolName is the tool's name as defined in the pack.
	ToolName string

	// CallID is the provider-assigned ID for this particular invocation.
	CallID string

	// Args contains the parsed arguments from the LLM.
	Args map[string]any

	// ConsentMsg is the human-readable consent message from the pack's
	// client.consent.message field. Empty when no consent is configured.
	ConsentMsg string

	// Categories are the semantic consent categories (e.g., ["location"]).
	Categories []string

	// Descriptor provides the full tool descriptor for advanced use cases.
	Descriptor *tools.ToolDescriptor
}
ClientToolRequest contains information about a client-side tool invocation. It is passed to handlers registered via Conversation.OnClientTool.
type ClientToolRequestEvent ¶ added in v1.3.8
type ClientToolRequestEvent struct {
	// CallID is the provider-assigned ID for this tool invocation.
	CallID string

	// ToolName is the tool's name as defined in the pack.
	ToolName string

	// Args contains the parsed arguments from the LLM.
	Args map[string]any

	// ConsentMsg is the human-readable consent message.
	ConsentMsg string

	// Categories are the semantic consent categories.
	Categories []string
}
ClientToolRequestEvent is emitted when the pipeline encounters a client tool that needs caller fulfillment.
type Conversation ¶
type Conversation struct {
	// contains filtered or unexported fields
}
Conversation represents an active LLM conversation.
A conversation maintains:
- Connection to the LLM provider
- Message history (context)
- Variable state for template substitution
- Tool handlers for function calling
- Validation state
Conversations are created via Open or Resume and are safe for concurrent use. Each Open call creates an independent conversation with isolated state.
Basic usage:
conv, _ := sdk.Open("./assistant.pack.json", "chat")
conv.SetVar("user_name", "Alice")
resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())
resp, _ = conv.Send(ctx, "What's my name?") // Remembers context
fmt.Println(resp.Text()) // "Your name is Alice"
func Open ¶ added in v1.1.5
func Open(packPath, promptName string, opts ...Option) (*Conversation, error)
Open loads a pack file and creates a new conversation for the specified prompt.
This is the primary entry point for SDK v2. It:
- Loads and parses the pack file
- Auto-detects the provider from environment (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
- Configures the runtime pipeline based on pack settings
- Creates an isolated conversation with its own state
Basic usage:
conv, err := sdk.Open("./assistant.pack.json", "chat")
if err != nil {
	log.Fatal(err)
}
defer conv.Close()

resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())
With options:
conv, err := sdk.Open("./assistant.pack.json", "chat",
	sdk.WithModel("gpt-4o"),
	sdk.WithAPIKey(os.Getenv("MY_KEY")),
	sdk.WithStateStore(redisStore),
)
The packPath can be:
- Absolute path: "/path/to/assistant.pack.json"
- Relative path: "./packs/assistant.pack.json"
- URL: "https://example.com/packs/assistant.pack.json" (future)
The promptName must match a prompt ID defined in the pack's "prompts" section.
func OpenDuplex ¶ added in v1.1.6
func OpenDuplex(packPath, promptName string, opts ...Option) (*Conversation, error)
OpenDuplex loads a pack file and creates a new duplex streaming conversation for the specified prompt.
This creates a conversation in duplex mode for bidirectional streaming interactions. Use this when you need real-time streaming input/output with the LLM.
Basic usage:
conv, err := sdk.OpenDuplex("./assistant.pack.json", "chat")
if err != nil {
	log.Fatal(err)
}
defer conv.Close()

// Send streaming input
go func() {
	conv.SendText(ctx, "Hello, ")
	conv.SendText(ctx, "how are you?")
}()

// Receive streaming output
respCh, _ := conv.Response()
for chunk := range respCh {
	fmt.Print(chunk.Content)
}
The provider must support streaming input (implement providers.StreamInputSupport). Currently supported providers: Gemini with certain models.
func Resume ¶ added in v1.1.5
func Resume(conversationID, packPath, promptName string, opts ...Option) (*Conversation, error)
Resume loads an existing conversation from state storage.
Use this to continue a conversation that was previously persisted:
store := statestore.NewRedisStore("redis://localhost:6379")

conv, err := sdk.Resume("session-123", "./chat.pack.json", "assistant",
	sdk.WithStateStore(store),
)
if errors.Is(err, sdk.ErrConversationNotFound) {
	// Start new conversation
	conv, _ = sdk.Open("./chat.pack.json", "assistant",
		sdk.WithStateStore(store),
		sdk.WithConversationID("session-123"),
	)
}
Resume requires a state store to be configured. If no state store is provided, it returns ErrNoStateStore.
func (*Conversation) CheckPending ¶ added in v1.1.5
func (c *Conversation) CheckPending(name string, args map[string]any) (*sdktools.PendingToolCall, bool)
CheckPending reports whether a tool call should be held as pending and, if so, creates the pending call. It returns the pending call and a shouldWait flag; when shouldWait is true, the tool must not execute yet.
This method is used internally when processing tool calls from the LLM. It can also be useful for testing HITL workflows:
pending, shouldWait := conv.CheckPending("risky_tool", args)
if shouldWait {
	// Tool requires approval
}
func (*Conversation) Clear ¶ added in v1.1.5
func (c *Conversation) Clear() error
Clear removes all messages from the conversation history.
This keeps the system prompt and variables but removes all user/assistant messages. Useful for starting fresh within the same conversation session. In duplex mode, this will close the session first if actively streaming.
func (*Conversation) Close ¶ added in v1.1.5
func (c *Conversation) Close() error
Close releases resources associated with the conversation.
After Close is called, Send and Stream will return ErrConversationClosed. It's safe to call Close multiple times.
func (*Conversation) Continue ¶
func (c *Conversation) Continue(ctx context.Context) (*Response, error)
Continue resumes conversation after resolving pending tools.
Call this after approving/rejecting all pending tools to continue the conversation with the tool results:
resp, _ := conv.Send(ctx, "Process refund")
for _, pending := range resp.PendingTools() {
	conv.ResolveTool(pending.ID)
}
resp, _ = conv.Continue(ctx) // LLM receives tool results
func (*Conversation) Done ¶ added in v1.1.6
func (c *Conversation) Done() (<-chan struct{}, error)
Done returns a channel that's closed when the duplex session ends. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) EventBus ¶ added in v1.1.5
func (c *Conversation) EventBus() *events.EventBus
EventBus returns the conversation's event bus for observability.
Use this to subscribe to runtime events like tool calls, validations, and provider requests:
conv.EventBus().Subscribe(events.EventToolCallStarted, func(e *events.Event) {
	log.Printf("Tool call: %s", e.Data.(*events.ToolCallStartedData).ToolName)
})
For convenience methods, see the hooks package.
func (*Conversation) Fork ¶ added in v1.1.5
func (c *Conversation) Fork() *Conversation
Fork creates a copy of the current conversation state.
Use this to explore alternative conversation branches:
conv.Send(ctx, "I want to plan a trip")
conv.Send(ctx, "What cities should I visit?")

// Fork to explore different paths
branch := conv.Fork()

conv.Send(ctx, "Tell me about Tokyo")   // Original path
branch.Send(ctx, "Tell me about Kyoto") // Branch path
The forked conversation is completely independent - changes to one do not affect the other.
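The independence guarantee can be illustrated with a deep-copied slice standing in for the conversation history; this is a sketch of the semantics, not the SDK's internal representation:

```go
package main

import "fmt"

// forkHistory deep-copies a message history, mirroring how Fork gives
// each branch its own independent state.
func forkHistory(history []string) []string {
	return append([]string(nil), history...)
}

func main() {
	history := []string{"I want to plan a trip", "What cities should I visit?"}
	branch := forkHistory(history)

	// After the fork, each path grows independently.
	history = append(history, "Tell me about Tokyo") // original path
	branch = append(branch, "Tell me about Kyoto")   // branch path

	fmt.Println(history[2]) // Tell me about Tokyo
	fmt.Println(branch[2])  // Tell me about Kyoto
}
```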
func (*Conversation) GetVar ¶ added in v1.1.5
func (c *Conversation) GetVar(name string) (string, bool)
GetVar returns the current value of a template variable. Returns empty string and false if the variable is not set.
func (*Conversation) ID ¶ added in v1.1.5
func (c *Conversation) ID() string
ID returns the conversation's unique identifier.
func (*Conversation) Messages ¶ added in v1.1.5
func (c *Conversation) Messages(ctx context.Context) []types.Message
Messages returns the conversation history.
The returned slice is a copy - modifying it does not affect the conversation.
func (*Conversation) OnClientTool ¶ added in v1.3.8
func (c *Conversation) OnClientTool(name string, handler ClientToolHandler)
OnClientTool registers a handler for a client-side tool.
Client tools (mode: "client") are tools that must be fulfilled on the caller's device, for example GPS, camera, or biometric sensors. The handler is invoked synchronously when the LLM calls the tool.
Example:
conv.OnClientTool("get_location", func(ctx context.Context, req sdk.ClientToolRequest) (any, error) {
	coords, err := deviceGPS(ctx, req.Args["accuracy"].(string))
	if err != nil {
		return nil, err
	}
	return map[string]any{"lat": coords.Lat, "lng": coords.Lng}, nil
})
func (*Conversation) OnClientTools ¶ added in v1.3.8
func (c *Conversation) OnClientTools(handlers map[string]ClientToolHandler)
OnClientTools registers multiple client tool handlers at once.
conv.OnClientTools(map[string]sdk.ClientToolHandler{
	"get_location":  locationHandler,
	"read_contacts": contactsHandler,
})
func (*Conversation) OnStreamEvent ¶ added in v1.3.8
func (c *Conversation) OnStreamEvent(handler StreamEventHandler)
OnStreamEvent registers a handler that will be called for each stream event during Conversation.StreamWithCallback.
Example:
conv.OnStreamEvent(func(event sdk.StreamEvent) {
	switch e := event.(type) {
	case sdk.TextDeltaEvent:
		fmt.Print(e.Delta)
	case sdk.ClientToolRequestEvent:
		fmt.Printf("Tool %s needs fulfillment\n", e.ToolName)
	case sdk.StreamDoneEvent:
		fmt.Println("\nDone!")
	}
})
func (*Conversation) OnTool ¶ added in v1.1.5
func (c *Conversation) OnTool(name string, handler ToolHandler)
OnTool registers a handler for a tool defined in the pack.
The tool name must match a tool defined in the pack's tools section. When the LLM calls the tool, your handler receives the parsed arguments and returns a result.
conv.OnTool("get_weather", func(args map[string]any) (any, error) {
	city := args["city"].(string)
	return weatherAPI.GetCurrent(city)
})
The handler's return value is automatically serialized to JSON and sent back to the LLM as the tool result.
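The serialization step can be sketched with encoding/json. This is an illustration of a JSON round-trip consistent with the description above, not the SDK's actual marshaling code; the helper name and weather values are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toToolResult marshals a handler's return value to JSON, the form in
// which a tool result is sent back to the LLM.
func toToolResult(v any) (string, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// A hypothetical weather result returned by an OnTool handler.
	out, err := toToolResult(map[string]any{"city": "Oslo", "temp_c": 21.5})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"city":"Oslo","temp_c":21.5}
}
```

Note that encoding/json sorts map keys alphabetically, which keeps tool results deterministic.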
func (*Conversation) OnToolAsync ¶ added in v1.1.5
func (c *Conversation) OnToolAsync(name string, checkFunc func(args map[string]any) sdktools.PendingResult, execFunc ToolHandler)
OnToolAsync registers a handler that may require approval before execution.
Use this for Human-in-the-Loop (HITL) workflows where certain actions require human approval before proceeding:
conv.OnToolAsync("process_refund", func(args map[string]any) sdk.PendingResult {
amount := args["amount"].(float64)
if amount > 1000 {
return sdk.PendingResult{
Reason: "high_value_refund",
Message: fmt.Sprintf("Refund of $%.2f requires approval", amount),
}
}
return sdk.PendingResult{} // Proceed immediately
}, func(args map[string]any) (any, error) {
// Execute the actual refund
return refundAPI.Process(args)
})
The first function checks if approval is needed, the second executes the action.
func (*Conversation) OnToolCtx ¶ added in v1.1.5
func (c *Conversation) OnToolCtx(name string, handler ToolHandlerCtx)
OnToolCtx registers a context-aware handler for a tool.
Use this when your tool implementation needs the request context for cancellation, deadlines, or tracing:
conv.OnToolCtx("search_db", func(ctx context.Context, args map[string]any) (any, error) {
return db.SearchWithContext(ctx, args["query"].(string))
})
func (*Conversation) OnToolExecutor ¶ added in v1.1.5
func (c *Conversation) OnToolExecutor(name string, executor tools.Executor)
OnToolExecutor registers a custom executor for tools.
Use this when you need full control over tool execution or want to use a runtime executor directly:
executor := &MyCustomExecutor{}
conv.OnToolExecutor("custom_tool", executor)
The executor must implement the runtime/tools.Executor interface.
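As a sketch, an executor needs only the Name and Execute methods (the Execute signature below follows LocalAgentExecutor.Execute elsewhere on this page); inventoryExecutor and its lookup logic are hypothetical:

```go
// inventoryExecutor is a hypothetical custom executor for an
// inventory-lookup tool.
type inventoryExecutor struct{}

// Name identifies the executor in the registry.
func (e *inventoryExecutor) Name() string { return "inventory" }

// Execute parses the LLM-supplied arguments, performs the lookup,
// and returns a JSON result for the LLM.
func (e *inventoryExecutor) Execute(
	ctx context.Context,
	descriptor *tools.ToolDescriptor,
	args json.RawMessage,
) (json.RawMessage, error) {
	var req struct {
		SKU string `json:"sku"`
	}
	if err := json.Unmarshal(args, &req); err != nil {
		return nil, fmt.Errorf("bad args for %s: %w", descriptor.Name, err)
	}
	// ... look up req.SKU in your backing store ...
	return json.Marshal(map[string]any{"sku": req.SKU, "in_stock": true})
}
```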
func (*Conversation) OnToolHTTP ¶ added in v1.1.5
func (c *Conversation) OnToolHTTP(name string, config *sdktools.HTTPToolConfig)
OnToolHTTP registers a tool that makes HTTP requests.
This is a convenience method for tools that call external APIs:
conv.OnToolHTTP("create_ticket", sdktools.NewHTTPToolConfig(
"https://api.tickets.example.com/tickets",
sdktools.WithMethod("POST"),
sdktools.WithHeader("Authorization", "Bearer "+apiKey),
sdktools.WithTimeout(5000),
))
The tool arguments from the LLM are serialized to JSON and sent as the request body. The response is parsed and returned to the LLM.
func (*Conversation) OnTools ¶ added in v1.1.5
func (c *Conversation) OnTools(handlers map[string]ToolHandler)
OnTools registers multiple tool handlers at once.
conv.OnTools(map[string]sdk.ToolHandler{
"get_weather": getWeatherHandler,
"search_docs": searchDocsHandler,
"send_email": sendEmailHandler,
})
func (*Conversation) PendingTools ¶ added in v1.1.5
func (c *Conversation) PendingTools() []*sdktools.PendingToolCall
PendingTools returns all pending tool calls awaiting approval.
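A typical approval loop drains the pending calls and resolves or rejects each one; approve here stands in for your own review step, and the ID field follows the ResolveTool example below:

```go
for _, pending := range conv.PendingTools() {
	if approve(pending) { // hypothetical human-review step
		if _, err := conv.ResolveTool(pending.ID); err != nil {
			log.Printf("resolve %s: %v", pending.ID, err)
		}
	} else {
		conv.RejectTool(pending.ID, "rejected by reviewer")
	}
}
// Continue the conversation once everything is resolved.
resp, _ := conv.Continue(ctx)
```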
func (*Conversation) RejectClientTool ¶ added in v1.3.8
func (c *Conversation) RejectClientTool(_ context.Context, callID, reason string)
RejectClientTool rejects a deferred client tool with a human-readable reason.
callID must match one of the [PendingClientTool.CallID] values returned in the Response. The rejection reason is sent to the LLM as the tool result.
func (*Conversation) RejectTool ¶ added in v1.1.5
func (c *Conversation) RejectTool(id, reason string) (*sdktools.ToolResolution, error)
RejectTool rejects a pending tool call.
Use this when the human reviewer decides not to approve the tool:
resp, _ := conv.RejectTool(pending.ID, "Not authorized for this amount")
func (*Conversation) ResolveTool ¶ added in v1.1.5
func (c *Conversation) ResolveTool(id string) (*sdktools.ToolResolution, error)
ResolveTool approves and executes a pending tool call.
After calling Send() and receiving pending tools in the response, use this to approve and execute them:
resp, _ := conv.Send(ctx, "Process refund for order #12345")
if len(resp.PendingTools()) > 0 {
pending := resp.PendingTools()[0]
// ... get approval ...
result, _ := conv.ResolveTool(pending.ID)
// Continue the conversation with the result
resp, _ = conv.Continue(ctx)
}
func (*Conversation) Response ¶ added in v1.1.6
func (c *Conversation) Response() (<-chan providers.StreamChunk, error)
Response returns the response channel for duplex streaming. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) Resume ¶ added in v1.3.8
func (c *Conversation) Resume(ctx context.Context) (*Response, error)
Resume continues pipeline execution after all deferred client tools have been resolved via Conversation.SendToolResult or Conversation.RejectClientTool.
The resolved tool results are injected as tool-result messages and a new LLM round is triggered. The returned Response contains the assistant's reply.
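Putting the pieces together, a blocking version of the deferred-tool flow might look like this; the PendingClientTools accessor name and deviceGPS are assumptions, not confirmed API:

```go
resp, _ := conv.Send(ctx, "Where am I?")
for _, pt := range resp.PendingClientTools() { // assumed accessor
	coords, err := deviceGPS(ctx) // hypothetical device call
	if err != nil {
		conv.RejectClientTool(ctx, pt.CallID, "GPS unavailable: "+err.Error())
		continue
	}
	conv.SendToolResult(ctx, pt.CallID, map[string]any{"lat": coords.Lat, "lng": coords.Lng})
}
// All pending tools resolved; trigger the next LLM round.
resp, _ = conv.Resume(ctx)
fmt.Println(resp.Text())
```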
func (*Conversation) ResumeStream ¶ added in v1.3.8
func (c *Conversation) ResumeStream(ctx context.Context) <-chan StreamChunk
ResumeStream is the streaming equivalent of Conversation.Resume.
It continues pipeline execution after deferred client tools have been resolved, returning a channel of StreamChunk values. The final chunk (Type == ChunkDone) contains the complete Response.
Example:
conv.SendToolResult(ctx, "call-1", locationData)
for chunk := range conv.ResumeStream(ctx) {
if chunk.Error != nil { break }
fmt.Print(chunk.Text)
}
func (*Conversation) Send ¶
func (c *Conversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
Send sends a message to the LLM and returns the response.
The message can be a simple string or a *types.Message for multimodal content. Variables are substituted into the system prompt template before sending.
Basic usage:
resp, err := conv.Send(ctx, "Hello!")
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.Text())
With message options:
resp, err := conv.Send(ctx, "What's in this image?",
sdk.WithImageFile("/path/to/image.jpg"),
)
Send automatically:
- Substitutes variables into the system prompt
- Runs any registered validators
- Handles tool calls if tools are defined
- Persists state if a state store is configured
func (*Conversation) SendChunk ¶ added in v1.1.6
func (c *Conversation) SendChunk(ctx context.Context, chunk *providers.StreamChunk) error
SendChunk sends a streaming chunk in duplex mode. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) SendFrame ¶ added in v1.1.8
func (c *Conversation) SendFrame(ctx context.Context, frame *session.ImageFrame) error
SendFrame sends an image frame in duplex mode for realtime video scenarios. Only available when the conversation was opened with OpenDuplex().
Example:
frame := &session.ImageFrame{
Data: jpegBytes,
MIMEType: "image/jpeg",
Timestamp: time.Now(),
}
conv.SendFrame(ctx, frame)
func (*Conversation) SendText ¶ added in v1.1.6
func (c *Conversation) SendText(ctx context.Context, text string) error
SendText sends text in duplex mode. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) SendToolResult ¶ added in v1.3.8
SendToolResult provides the result for a deferred client tool.
callID must match one of the [PendingClientTool.CallID] values returned in the Response. result should be JSON-serializable.
After all pending tools have been resolved (via SendToolResult or RejectClientTool), call Conversation.Resume to continue the pipeline.
func (*Conversation) SendVideoChunk ¶ added in v1.1.8
func (c *Conversation) SendVideoChunk(ctx context.Context, chunk *session.VideoChunk) error
SendVideoChunk sends a video chunk in duplex mode for encoded video streaming. Only available when the conversation was opened with OpenDuplex().
Example:
chunk := &session.VideoChunk{
Data: h264Data,
MIMEType: "video/h264",
IsKeyFrame: true,
Timestamp: time.Now(),
}
conv.SendVideoChunk(ctx, chunk)
func (*Conversation) SessionError ¶ added in v1.1.6
func (c *Conversation) SessionError() error
SessionError returns any error from the duplex session. Only available when the conversation was opened with OpenDuplex(). Note: This is named SessionError to avoid conflict with the Error interface method.
func (*Conversation) SetVar ¶ added in v1.1.5
func (c *Conversation) SetVar(name, value string)
SetVar sets a single template variable.
Variables are substituted into the system prompt template:
conv.SetVar("customer_name", "Alice")
// Template: "You are helping {{customer_name}}"
// Becomes: "You are helping Alice"
func (*Conversation) SetVars ¶ added in v1.1.5
func (c *Conversation) SetVars(vars map[string]any)
SetVars sets multiple template variables at once.
conv.SetVars(map[string]any{
"customer_name": "Alice",
"customer_tier": "premium",
"max_discount": 20,
})
func (*Conversation) SetVarsFromEnv ¶ added in v1.1.5
func (c *Conversation) SetVarsFromEnv(prefix string)
SetVarsFromEnv sets variables from environment variables with a given prefix.
Environment variables matching the prefix are added as template variables with the prefix stripped and converted to lowercase:
// If PROMPTKIT_CUSTOMER_NAME=Alice is set:
conv.SetVarsFromEnv("PROMPTKIT_")
// Sets variable "customer_name" = "Alice"
func (*Conversation) Stream ¶ added in v1.1.5
func (c *Conversation) Stream(ctx context.Context, message any, opts ...SendOption) <-chan StreamChunk
Stream sends a message and returns a channel of response chunks.
Use this for real-time streaming of LLM responses:
for chunk := range conv.Stream(ctx, "Tell me a story") {
if chunk.Error != nil {
log.Printf("Error: %v", chunk.Error)
break
}
fmt.Print(chunk.Text)
}
The channel is closed when the response is complete or an error occurs. The final chunk (Type == ChunkDone) contains the complete Response.
func (*Conversation) StreamRaw ¶ added in v1.1.5
StreamRaw returns a channel of streaming chunks for use with the stream package. This is a lower-level API that returns stream.Chunk types.
Most users should use Conversation.Stream instead. StreamRaw is useful when working with [stream.Process] or [stream.CollectText].
err := stream.Process(ctx, conv, "Hello", func(chunk stream.Chunk) error {
fmt.Print(chunk.Text)
return nil
})
func (*Conversation) StreamWithCallback ¶ added in v1.3.8
func (c *Conversation) StreamWithCallback(ctx context.Context, message any, opts ...SendOption) (*Response, error)
StreamWithCallback sends a message and invokes the registered StreamEventHandler for each chunk. This is a convenience wrapper around Conversation.Stream that translates chunks into typed events.
If no handler has been registered via Conversation.OnStreamEvent, this behaves like Stream() but discards all chunks and returns the final response.
Returns the complete Response or an error.
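For example, combined with a handler registered via OnStreamEvent:

```go
// Print text deltas as they arrive; other event types are ignored.
conv.OnStreamEvent(func(event sdk.StreamEvent) {
	if e, ok := event.(sdk.TextDeltaEvent); ok {
		fmt.Print(e.Delta)
	}
})
resp, err := conv.StreamWithCallback(ctx, "Summarize today's tickets")
if err != nil {
	log.Fatal(err)
}
fmt.Println("\nfinal length:", len(resp.Text()))
```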
func (*Conversation) ToolRegistry ¶ added in v1.1.5
func (c *Conversation) ToolRegistry() *tools.Registry
ToolRegistry returns the underlying tool registry.
This is a power-user method for direct registry access. Tool descriptors are loaded from the pack; this allows inspecting them or registering custom executors.
registry := conv.ToolRegistry()
for _, desc := range registry.Descriptors() {
fmt.Printf("Tool: %s\n", desc.Name)
}
func (*Conversation) TriggerStart ¶ added in v1.1.6
func (c *Conversation) TriggerStart(ctx context.Context, message string) error
TriggerStart sends a text message to make the model initiate the conversation. Use this in ASM mode when you want the model to speak first (e.g., introducing itself). Only available when the conversation was opened with OpenDuplex().
Example:
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "interviewer", ...)
// Start processing responses first
go processResponses(conv.Response())
// Trigger the model to begin
conv.TriggerStart(ctx, "Please introduce yourself and begin the interview.")
type CredentialOption ¶ added in v1.1.9
type CredentialOption interface {
// contains filtered or unexported methods
}
CredentialOption configures credentials for a provider.
func WithCredentialAPIKey ¶ added in v1.1.9
func WithCredentialAPIKey(key string) CredentialOption
WithCredentialAPIKey sets an explicit API key.
func WithCredentialEnv ¶ added in v1.1.9
func WithCredentialEnv(envVar string) CredentialOption
WithCredentialEnv sets an environment variable name for the credential.
func WithCredentialFile ¶ added in v1.1.9
func WithCredentialFile(path string) CredentialOption
WithCredentialFile sets a credential file path.
type EndpointResolver ¶ added in v1.3.1
type EndpointResolver interface {
// Resolve returns the base URL (e.g. "http://localhost:9000") for the
// named agent member. An empty string means the agent has no reachable
// endpoint and should be skipped.
Resolve(agentName string) string
}
EndpointResolver determines the A2A endpoint URL for a given agent member. Implementations can provide static URLs, service-discovery lookups, or test-friendly mock endpoints.
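A resolver is easy to implement by hand. This sketch (staticFallbackResolver is not part of the SDK) maps known agents to their own URLs and falls back to a shared gateway for everything else:

```go
// staticFallbackResolver satisfies the EndpointResolver interface:
// known agents resolve to their own URL, everything else to a gateway.
type staticFallbackResolver struct {
	endpoints map[string]string
	gateway   string
}

// Resolve returns the mapped URL or the gateway; return "" instead of
// r.gateway to skip agents with no reachable endpoint.
func (r *staticFallbackResolver) Resolve(agentName string) string {
	if url, ok := r.endpoints[agentName]; ok {
		return url
	}
	return r.gateway
}
```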
type InMemoryA2ATaskStore ¶ added in v1.1.11
type InMemoryA2ATaskStore = a2aserver.InMemoryTaskStore
InMemoryA2ATaskStore is a concurrency-safe in-memory TaskStore.
type LocalAgentExecutor ¶ added in v1.3.1
type LocalAgentExecutor struct {
// contains filtered or unexported fields
}
LocalAgentExecutor routes A2A tool calls to in-process Conversations instead of making remote HTTP calls. It implements tools.Executor.
func NewLocalAgentExecutor ¶ added in v1.3.1
func NewLocalAgentExecutor(members map[string]*Conversation) *LocalAgentExecutor
NewLocalAgentExecutor creates an executor that routes tool calls to local conversations.
func (*LocalAgentExecutor) Execute ¶ added in v1.3.1
func (e *LocalAgentExecutor) Execute(ctx context.Context, descriptor *tools.ToolDescriptor, args json.RawMessage) (json.RawMessage, error)
Execute routes a tool call to the corresponding member conversation. It parses {"query":"..."} from args, calls member.Send(), and returns {"response":"..."}.
func (*LocalAgentExecutor) Name ¶ added in v1.3.1
func (e *LocalAgentExecutor) Name() string
Name returns the executor name. Must be "a2a" to intercept A2A tool dispatches.
type MCPServerBuilder ¶ added in v1.1.5
type MCPServerBuilder struct {
// contains filtered or unexported fields
}
MCPServerBuilder provides a fluent interface for configuring MCP servers.
func NewMCPServer ¶ added in v1.1.5
func NewMCPServer(name, command string, args ...string) *MCPServerBuilder
NewMCPServer creates a new MCP server configuration builder.
server := sdk.NewMCPServer("github", "npx", "@modelcontextprotocol/server-github").
WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN"))
conv, _ := sdk.Open("./assistant.pack.json", "assistant",
sdk.WithMCPServer(server),
)
func (*MCPServerBuilder) Build ¶ added in v1.1.5
func (b *MCPServerBuilder) Build() mcp.ServerConfig
Build returns the configured server config.
func (*MCPServerBuilder) WithArgs ¶ added in v1.1.5
func (b *MCPServerBuilder) WithArgs(args ...string) *MCPServerBuilder
WithArgs appends additional arguments to the MCP server command.
func (*MCPServerBuilder) WithEnv ¶ added in v1.1.5
func (b *MCPServerBuilder) WithEnv(key, value string) *MCPServerBuilder
WithEnv adds an environment variable to the MCP server.
type MapEndpointResolver ¶ added in v1.3.1
MapEndpointResolver maps each agent name to a specific endpoint URL. Unknown agents return an empty string and are silently skipped.
func (*MapEndpointResolver) Resolve ¶ added in v1.3.1
func (r *MapEndpointResolver) Resolve(agentName string) string
Resolve returns the endpoint URL for the given agent name, or empty string if not found.
type MultiAgentSession ¶ added in v1.3.1
type MultiAgentSession struct {
// contains filtered or unexported fields
}
MultiAgentSession manages a set of agent member conversations orchestrated through an entry conversation. Tool calls from the entry agent to member agents are routed in-process via LocalAgentExecutor.
func OpenMultiAgent ¶ added in v1.3.1
func OpenMultiAgent(packPath string, opts ...Option) (*MultiAgentSession, error)
OpenMultiAgent loads a multi-agent pack and creates conversations for all members and the entry agent. Agent-to-agent tool calls from the entry conversation are routed in-process via LocalAgentExecutor.
The pack must have an agents section with entry and members defined. Options are applied to all conversations (entry and members).
func (*MultiAgentSession) Close ¶ added in v1.3.1
func (s *MultiAgentSession) Close() error
Close closes all conversations (entry and members).
func (*MultiAgentSession) Entry ¶ added in v1.3.1
func (s *MultiAgentSession) Entry() *Conversation
Entry returns the entry conversation.
func (*MultiAgentSession) Members ¶ added in v1.3.1
func (s *MultiAgentSession) Members() map[string]*Conversation
Members returns the member conversations (excluding entry).
func (*MultiAgentSession) Send ¶ added in v1.3.1
func (s *MultiAgentSession) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
Send sends a message through the entry agent.
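End to end, a session is driven entirely through the entry agent; the pack path and message are illustrative:

```go
session, err := sdk.OpenMultiAgent("./team.pack.json")
if err != nil {
	log.Fatal(err)
}
defer session.Close()

// The entry agent delegates to members via in-process tool calls.
resp, err := session.Send(ctx, "Summarize this report, then translate it to German.")
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp.Text())
```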
type Option ¶ added in v1.1.5
type Option func(*config) error
Option configures a Conversation.
func WithA2ATools ¶ added in v1.1.11
func WithA2ATools(bridge *a2a.ToolBridge) Option
WithA2ATools registers tools from an A2A a2a.ToolBridge so the LLM can call remote A2A agents as tools.
The bridge must have already discovered agents via a2a.ToolBridge.RegisterAgent. Each agent skill becomes a tool with Mode "a2a" in the tool registry.
Example:
client := a2a.NewClient("https://agent.example.com")
bridge := a2a.NewToolBridge(client)
bridge.RegisterAgent(ctx)
conv, _ := sdk.Open("./assistant.pack.json", "assistant",
sdk.WithA2ATools(bridge),
)
func WithAPIKey ¶ added in v1.1.5
WithAPIKey provides an explicit API key instead of reading from environment.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithAPIKey(os.Getenv("MY_CUSTOM_KEY")),
)
func WithAgentEndpoints ¶ added in v1.3.1
func WithAgentEndpoints(resolver EndpointResolver) Option
WithAgentEndpoints configures endpoint resolution for multi-agent tool routing.
When a pack has an agents section, prompts can reference other agent members as tools. This option tells the SDK how to resolve agent names to A2A endpoint URLs so that tool calls are routed to the correct agent.
Example with a single gateway:
conv, _ := sdk.Open("./multiagent.pack.json", "orchestrator",
sdk.WithAgentEndpoints(&sdk.StaticEndpointResolver{
BaseURL: "http://localhost:9000",
}),
)
Example with per-agent endpoints:
conv, _ := sdk.Open("./multiagent.pack.json", "orchestrator",
sdk.WithAgentEndpoints(&sdk.MapEndpointResolver{
Endpoints: map[string]string{
"summarizer": "http://summarizer:9001",
"translator": "http://translator:9002",
},
}),
)
func WithAutoResize ¶ added in v1.1.8
WithAutoResize is a convenience option that enables image resizing with the specified dimensions. Use this for simple cases; use WithImagePreprocessing for full control.
Example:
conv, _ := sdk.Open("./chat.pack.json", "vision-assistant",
sdk.WithAutoResize(1024, 1024), // Max 1024x1024
)
func WithAutoSummarize ¶ added in v1.3.1
WithAutoSummarize enables automatic summarization of old conversation turns.
When the message count exceeds the threshold, the oldest unsummarized batch of messages is compressed into a summary using the provided LLM provider. Summaries are prepended to the context as system messages.
A separate, cheaper provider can be used for summarization (e.g., a smaller model).
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithStateStore(store),
sdk.WithContextWindow(20),
sdk.WithAutoSummarize(summaryProvider, 100, 50), // Summarize after 100 msgs, 50 at a time
)
func WithAzure ¶ added in v1.1.9
func WithAzure(endpoint, providerType, model string, opts ...PlatformOption) Option
WithAzure configures Azure AI services as the hosting platform. The providerType specifies the provider factory (e.g., "openai") and model is the model identifier. This uses the Azure SDK default credential chain (Managed Identity, Azure CLI, etc.).
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithAzure("https://my-resource.openai.azure.com", "openai", "gpt-4o"),
)
func WithBedrock ¶ added in v1.1.9
func WithBedrock(region, providerType, model string, opts ...PlatformOption) Option
WithBedrock configures AWS Bedrock as the hosting platform. The providerType specifies the provider factory (e.g., "claude", "openai") and model is the model identifier. This uses the AWS SDK default credential chain (IRSA, instance profile, env vars).
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithBedrock("us-west-2", "claude", "claude-sonnet-4-20250514"),
)
func WithCapability ¶ added in v1.3.1
func WithCapability(capability Capability) Option
WithCapability adds an explicit platform capability.
Capabilities provide namespaced tools that are automatically injected into conversations. Most capabilities are auto-inferred from pack structure (e.g., workflow capability from pack.Workflow). Use this for explicit configuration or custom capabilities.
conv, _ := sdk.Open("./assistant.pack.json", "chat",
sdk.WithCapability(sdk.NewWorkflowCapability()),
)
func WithContextCarryForward ¶ added in v1.3.1
func WithContextCarryForward() Option
WithContextCarryForward enables context carry-forward for workflow transitions.
When enabled, transitioning to a new state injects a summary of the previous state's conversation into the new conversation via the {{workflow_context}} template variable. This provides continuity across workflow states.
Default: disabled (each state gets a fresh conversation).
wc, _ := sdk.OpenWorkflow("./support.pack.json",
sdk.WithContextCarryForward(),
)
func WithContextRetrieval ¶ added in v1.3.1
func WithContextRetrieval(embeddingProvider providers.EmbeddingProvider, topK int) Option
WithContextRetrieval enables semantic search for relevant older messages.
When configured alongside WithContextWindow, the pipeline uses the embedding provider to find messages outside the hot window that are semantically similar to the current user message. These retrieved messages are inserted chronologically between summaries and the hot window.
Requires WithContextWindow to be set.
embProvider, _ := openai.NewEmbeddingProvider()
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithStateStore(store),
sdk.WithContextWindow(20),
sdk.WithContextRetrieval(embProvider, 5), // Retrieve top 5 relevant messages
)
func WithContextWindow ¶ added in v1.3.1
WithContextWindow sets the hot window size for RAG context assembly.
When set to a positive value, the pipeline uses ContextAssemblyStage and IncrementalSaveStage instead of loading all history on every turn. This dramatically reduces I/O for long conversations by only loading the most recent N messages.
Requires a state store (WithStateStore). The store's MessageReader and MessageAppender interfaces are used when available, with automatic fallback to full Load/Save when they're not.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithStateStore(store),
sdk.WithContextWindow(20), // Keep last 20 messages in hot window
)
func WithConversationID ¶ added in v1.1.5
WithConversationID sets the conversation identifier.
If not set, a unique ID is auto-generated. Set this when you want to use a specific ID for state persistence or tracking.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithStateStore(store),
sdk.WithConversationID("user-123-session-456"),
)
func WithEvalRegistry ¶ added in v1.1.11
func WithEvalRegistry(r *evals.EvalTypeRegistry) Option
WithEvalRegistry provides a custom eval type registry.
Use this to register custom eval type handlers beyond the built-in ones. If not set, the default registry with all built-in handlers is used.
func WithEvalRunner ¶ added in v1.3.9
func WithEvalRunner(r *evals.EvalRunner) Option
WithEvalRunner configures the eval runner for executing evals in-process.
Eval results are emitted as events on the EventBus (eval.completed / eval.failed). If no runner is provided and eval definitions exist in the pack, a default runner is created automatically using the configured eval registry.
Example:
registry := evals.NewEvalTypeRegistry()
runner := evals.NewEvalRunner(registry)
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithEvalRunner(runner),
)
func WithEvalsDisabled ¶ added in v1.3.9
func WithEvalsDisabled() Option
WithEvalsDisabled disables eval execution even when eval definitions exist in the pack. Use this to temporarily suppress evals without removing definitions.
func WithEventBus ¶ added in v1.1.4
WithEventBus provides a shared event bus for observability.
When set, the conversation emits events to this bus. Use this to share an event bus across multiple conversations for centralized logging, metrics, or debugging.
bus := events.NewEventBus()
bus.SubscribeAll(myMetricsCollector)
conv1, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithEventBus(bus))
conv2, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithEventBus(bus))
func WithEventStore ¶ added in v1.1.6
func WithEventStore(store events.EventStore) Option
WithEventStore configures event persistence for session recording.
When set, all events published through the conversation's event bus are automatically persisted to the store. This enables session replay and analysis.
The event store is automatically attached to the event bus. If no event bus is provided via WithEventBus, a new one is created internally.
Example with file-based storage:
store, _ := events.NewFileEventStore("/var/log/sessions")
defer store.Close()
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithEventStore(store),
)
Example with shared bus and store:
store, _ := events.NewFileEventStore("/var/log/sessions")
bus := events.NewEventBus().WithStore(store)
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithEventBus(bus),
)
func WithExecutionTimeout ¶ added in v1.3.6
WithExecutionTimeout overrides the default pipeline execution timeout (30s). Use this for pipelines that need more time, such as multi-round tool-calling with slower providers like Ollama. Pass 0 to disable the timeout entirely.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithExecutionTimeout(120 * time.Second),
)
func WithImagePreprocessing ¶ added in v1.1.8
func WithImagePreprocessing(cfg *stage.ImagePreprocessConfig) Option
WithImagePreprocessing enables automatic image preprocessing before sending to the LLM. This resizes large images to fit within provider limits, reducing token usage and preventing errors.
The default configuration resizes images to max 1024x1024 with 85% quality.
Example with defaults:
conv, _ := sdk.Open("./chat.pack.json", "vision-assistant",
sdk.WithImagePreprocessing(nil), // Use default settings
)
Example with custom config:
conv, _ := sdk.Open("./chat.pack.json", "vision-assistant",
sdk.WithImagePreprocessing(&stage.ImagePreprocessConfig{
Resize: stage.ImageResizeStageConfig{
MaxWidth: 2048,
MaxHeight: 2048,
Quality: 90,
},
EnableResize: true,
}),
)
func WithJSONMode ¶ added in v1.1.8
func WithJSONMode() Option
WithJSONMode is a convenience option that enables simple JSON output mode. The model will return valid JSON objects but without schema enforcement. Use WithResponseFormat for more control including schema validation.
Example:
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithJSONMode(),
)
resp, _ := conv.Send(ctx, "List 3 colors as JSON")
// Response: {"colors": ["red", "green", "blue"]}
func WithJudgeProvider ¶ added in v1.1.11
func WithJudgeProvider(jp handlers.JudgeProvider) Option
WithJudgeProvider configures the LLM judge provider for judge-based evals.
If not set, an SDKJudgeProvider is created automatically using the conversation's provider.
func WithLogger ¶ added in v1.3.3
WithLogger sets a custom *slog.Logger for the SDK. This replaces the process-wide default logger, so all PromptKit components (runtime, pipeline, providers, evals) will use it.
Since all major Go logging libraries ship slog adapters (e.g. zapslog, slogzerolog), this gives full control over the logging backend without requiring a custom interface.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithLogger(slog.New(slog.NewJSONHandler(os.Stdout, nil))),
)
func WithMCP ¶ added in v1.1.5
WithMCP adds an MCP (Model Context Protocol) server for tool execution.
MCP servers provide external tools that can be called by the LLM. The server is started automatically when the conversation opens and stopped when the conversation is closed.
Basic usage:
conv, _ := sdk.Open("./assistant.pack.json", "assistant",
sdk.WithMCP("filesystem", "npx", "@modelcontextprotocol/server-filesystem", "/path"),
)
With environment variables, use the server builder with WithMCPServer (WithMCP returns an Option, which cannot be method-chained):
conv, _ := sdk.Open("./assistant.pack.json", "assistant",
sdk.WithMCPServer(sdk.NewMCPServer("github", "npx", "@modelcontextprotocol/server-github").
WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN"))),
)
Multiple servers:
conv, _ := sdk.Open("./assistant.pack.json", "assistant",
sdk.WithMCP("filesystem", "npx", "@modelcontextprotocol/server-filesystem", "/path"),
sdk.WithMCP("memory", "npx", "@modelcontextprotocol/server-memory"),
)
func WithMCPServer ¶ added in v1.1.5
func WithMCPServer(builder *MCPServerBuilder) Option
WithMCPServer adds a pre-configured MCP server.
server := sdk.NewMCPServer("github", "npx", "@modelcontextprotocol/server-github").
WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN"))
conv, _ := sdk.Open("./assistant.pack.json", "assistant",
sdk.WithMCPServer(server),
)
func WithMaxActiveSkillsOption ¶ added in v1.3.1
WithMaxActiveSkillsOption sets the maximum number of concurrently active skills. Default is 5 if not set.
conv, _ := sdk.Open("./assistant.pack.json", "chat",
sdk.WithMaxActiveSkillsOption(10),
)
func WithModel ¶ added in v1.1.5
WithModel overrides the default model specified in the pack.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithModel("gpt-4o"),
)
func WithProvider ¶
WithProvider uses a custom provider instance.
This bypasses auto-detection and uses the provided provider directly. Use this for custom provider implementations or when you need full control over provider configuration.
provider := openai.NewProvider(openai.Config{...})
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithProvider(provider),
)
func WithProviderHook ¶ added in v1.3.2
func WithProviderHook(h hooks.ProviderHook) Option
WithProviderHook registers a provider hook for intercepting LLM calls.
Provider hooks run synchronously before and after each LLM call. Multiple hooks are executed in order; the first deny short-circuits.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithProviderHook(hooks.NewBannedWords([]string{"secret"})),
)
func WithRecording ¶ added in v1.3.8
func WithRecording(cfg *RecordingConfig) Option
WithRecording enables session recording by inserting RecordingStages into the pipeline. These stages capture full binary content and publish events directly to the EventBus, bypassing the emitter's binary stripping.
If cfg is nil, default settings are used (audio=true, video=false, images=true). An EventBus is automatically created if none was provided via WithEventBus.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithRecording(nil), // use defaults
)
// Or with custom config:
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithRecording(&sdk.RecordingConfig{
IncludeAudio: true,
IncludeVideo: true,
IncludeImages: true,
}),
)
func WithRelevanceTruncation ¶ added in v1.1.6
func WithRelevanceTruncation(cfg *RelevanceConfig) Option
WithRelevanceTruncation configures embedding-based relevance truncation.
This automatically sets the truncation strategy to "relevance" and configures the embedding provider for semantic similarity scoring.
Example with OpenAI embeddings:
embProvider, _ := openai.NewEmbeddingProvider()
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithTokenBudget(8000),
sdk.WithRelevanceTruncation(&sdk.RelevanceConfig{
EmbeddingProvider: embProvider,
MinRecentMessages: 3,
SimilarityThreshold: 0.3,
}),
)
Example with Gemini embeddings:
embProvider, _ := gemini.NewEmbeddingProvider()
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithTokenBudget(8000),
sdk.WithRelevanceTruncation(&sdk.RelevanceConfig{
EmbeddingProvider: embProvider,
}),
)
func WithResponseFormat ¶ added in v1.1.8
func WithResponseFormat(format *providers.ResponseFormat) Option
WithResponseFormat configures the LLM response format for JSON mode output. This instructs the model to return responses in the specified format.
For simple JSON object output:
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithResponseFormat(&providers.ResponseFormat{
Type: providers.ResponseFormatJSON,
}),
)
For structured JSON output with a schema:
schema := json.RawMessage(`{"type":"object","properties":{"name":{"type":"string"}}}`)
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithResponseFormat(&providers.ResponseFormat{
Type: providers.ResponseFormatJSONSchema,
JSONSchema: schema,
SchemaName: "person",
Strict: true,
}),
)
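Once JSON mode is enabled, the response text can be unmarshalled straight into a Go struct whose fields mirror the schema. The sketch below assumes the raw string comes from resp.Text(); the Person type and parsePerson helper are illustrative, not part of the SDK.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Person mirrors the schema passed to WithResponseFormat; the field names
// must match the schema's properties.
type Person struct {
	Name string `json:"name"`
}

// parsePerson unmarshals a JSON-mode response body into a Person.
func parsePerson(raw string) (Person, error) {
	var p Person
	err := json.Unmarshal([]byte(raw), &p)
	return p, err
}

func main() {
	// In real code the raw string would come from resp.Text().
	p, err := parsePerson(`{"name":"Ada"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Name)
}
```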
func WithSessionHook ¶ added in v1.3.2
func WithSessionHook(h hooks.SessionHook) Option
WithSessionHook registers a session hook for tracking conversation lifecycle.
Session hooks are called on session start, after each turn, and on session end.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithSessionHook(mySessionLogger),
)
func WithSkillSelectorOption ¶ added in v1.3.1
func WithSkillSelectorOption(s skills.SkillSelector) Option
WithSkillSelectorOption sets the skill selector for filtering available skills. The selector determines which skills from the available set are presented to the model in the Phase 1 index.
conv, _ := sdk.Open("./assistant.pack.json", "chat",
sdk.WithSkillSelectorOption(skills.NewTagSelector([]string{"coding"})),
)
func WithSkillsDir ¶ added in v1.3.1
WithSkillsDir adds a directory-based skill source. Skills are discovered by scanning for SKILL.md files in the directory. Multiple directories can be added by calling this option multiple times.
conv, _ := sdk.Open("./assistant.pack.json", "chat",
sdk.WithSkillsDir("./skills"),
)
func WithSkipSchemaValidation ¶ added in v1.1.5
func WithSkipSchemaValidation() Option
WithSkipSchemaValidation disables JSON schema validation during pack loading.
By default, packs are validated against the PromptPack JSON schema to ensure they are well-formed. Use this option to skip validation, for example when loading legacy packs or during development.
conv, _ := sdk.Open("./legacy.pack.json", "assistant",
sdk.WithSkipSchemaValidation(),
)
func WithStateStore ¶
func WithStateStore(store statestore.Store) Option
WithStateStore configures persistent state storage.
When configured, conversation state (messages, metadata) is automatically persisted after each turn and can be resumed later via Resume.
store := statestore.NewRedisStore("redis://localhost:6379")
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithStateStore(store),
)
func WithStreamingConfig ¶ added in v1.1.6
func WithStreamingConfig(streamingConfig *providers.StreamingInputConfig) Option
WithStreamingConfig configures streaming for duplex mode. When set, enables ASM (Audio Streaming Model) mode with continuous bidirectional streaming. When nil (default), uses VAD (Voice Activity Detection) mode with turn-based streaming.
ASM mode is for models with native bidirectional audio support (e.g., gemini-2.0-flash-exp). VAD mode is for standard text-based models with audio transcription.
Example for ASM mode:
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat",
sdk.WithStreamingConfig(&providers.StreamingInputConfig{
Type: types.ContentTypeAudio,
SampleRate: 16000,
Channels: 1,
}),
)
func WithStreamingVideo ¶ added in v1.1.8
func WithStreamingVideo(cfg *VideoStreamConfig) Option
WithStreamingVideo enables realtime video/image streaming for duplex sessions. This is used for webcam feeds, screen sharing, and continuous frame analysis.
The FrameRateLimitStage is added to the pipeline when TargetFPS > 0, dropping frames to maintain the target frame rate for LLM processing.
Example with defaults (1 FPS):
sess, _ := sdk.OpenDuplex("./assistant.pack.json", "vision-chat",
sdk.WithStreamingVideo(nil), // Use default settings
)
Example with custom config:
sess, _ := sdk.OpenDuplex("./assistant.pack.json", "vision-chat",
sdk.WithStreamingVideo(&sdk.VideoStreamConfig{
TargetFPS: 2.0,  // 2 frames per second
MaxWidth:  1280, // Resize large frames
MaxHeight: 720,
Quality:   80,
}),
)
Sending frames:
// The variable is named sess so it does not shadow the session package.
for frame := range webcam.Frames() {
sess.SendFrame(ctx, &session.ImageFrame{
Data:      frame.JPEG(),
MIMEType:  "image/jpeg",
Timestamp: time.Now(),
})
}
func WithTTS ¶ added in v1.1.6
WithTTS configures text-to-speech for the Pipeline.
TTS is applied via Pipeline middleware during streaming responses.
conv, _ := sdk.Open("./assistant.pack.json", "voice",
sdk.WithTTS(tts.NewOpenAI(os.Getenv("OPENAI_API_KEY"))),
)
func WithTokenBudget ¶ added in v1.1.5
WithTokenBudget sets the maximum tokens for context (prompt + history).
When the conversation history exceeds this budget, older messages are truncated according to the truncation strategy.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithTokenBudget(8000),
)
func WithToolHook ¶ added in v1.3.2
WithToolHook registers a tool hook for intercepting tool execution.
Tool hooks run synchronously before and after each LLM-initiated tool call. Multiple hooks are executed in order; the first deny short-circuits.
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithToolHook(myToolAuditHook),
)
func WithToolRegistry ¶
WithToolRegistry provides a pre-configured tool registry.
This is a power-user option for scenarios requiring direct registry access. Tool descriptors are still loaded from the pack; this allows providing custom executors or middleware.
registry := tools.NewRegistry()
registry.RegisterExecutor(&myCustomExecutor{})
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithToolRegistry(registry),
)
func WithTracerProvider ¶ added in v1.3.2
func WithTracerProvider(tp trace.TracerProvider) Option
WithTracerProvider sets the OpenTelemetry TracerProvider for distributed tracing.
When set, the conversation emits OTel spans for pipeline, provider, tool, middleware, and workflow events. These spans nest under the provider's trace tree, enabling end-to-end observability across services.
If not set, no spans are created (zero overhead).
tp := sdktrace.NewTracerProvider(...)
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithTracerProvider(tp),
)
func WithTruncation ¶ added in v1.1.5
WithTruncation sets the truncation strategy for context management.
Strategies:
"sliding": Remove oldest messages first (default)
"summarize": Summarize old messages before removing
"relevance": Remove least relevant messages based on embedding similarity
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithTokenBudget(8000),
sdk.WithTruncation("summarize"),
)
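The "sliding" strategy can be pictured as dropping the oldest droppable messages until the history fits the budget. The sketch below illustrates that idea under the assumption that system messages are always preserved; it is not the SDK's internal code.

```go
package main

import "fmt"

type msg struct {
	Role   string
	Tokens int
}

// slideToBudget drops the oldest non-system messages until the total token
// count fits within budget: a sketch of the "sliding" strategy.
func slideToBudget(history []msg, budget int) []msg {
	total := 0
	for _, m := range history {
		total += m.Tokens
	}
	out := append([]msg(nil), history...)
	for i := 0; total > budget && i < len(out); {
		if out[i].Role == "system" {
			i++ // never drop the system prompt
			continue
		}
		total -= out[i].Tokens
		out = append(out[:i], out[i+1:]...)
	}
	return out
}

func main() {
	h := []msg{
		{"system", 50}, {"user", 400}, {"assistant", 400}, {"user", 100},
	}
	// Budget 600: the oldest user message is dropped, system stays.
	for _, m := range slideToBudget(h, 600) {
		fmt.Println(m.Role, m.Tokens)
	}
}
```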
func WithTurnDetector ¶ added in v1.1.6
func WithTurnDetector(detector audio.TurnDetector) Option
WithTurnDetector configures turn detection for the Pipeline.
Turn detectors determine when a user has finished speaking in audio sessions.
conv, _ := sdk.Open("./assistant.pack.json", "voice",
sdk.WithTurnDetector(audio.NewSilenceDetector(500 * time.Millisecond)),
)
func WithVADMode ¶ added in v1.1.6
WithVADMode configures VAD mode for voice conversations with standard text-based LLMs. VAD mode processes audio through a pipeline: Audio → VAD → STT → LLM → TTS → Audio
This is an alternative to ASM mode (WithStreamingConfig) for providers without native audio streaming support.
Example:
sttService := stt.NewOpenAI(os.Getenv("OPENAI_API_KEY"))
ttsService := tts.NewOpenAI(os.Getenv("OPENAI_API_KEY"))
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat",
sdk.WithProvider(openai.NewProvider(openai.Config{...})),
sdk.WithVADMode(sttService, ttsService, nil), // nil uses defaults
)
With custom config:
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat",
sdk.WithProvider(openai.NewProvider(openai.Config{...})),
sdk.WithVADMode(sttService, ttsService, &sdk.VADModeConfig{
SilenceDuration: 500 * time.Millisecond,
Voice: "nova",
}),
)
func WithVariableProvider ¶ added in v1.1.6
WithVariableProvider adds a variable provider for dynamic variable resolution.
Variables are resolved before each Send() and merged with static variables. Later providers in the chain override earlier ones with the same key.
conv, _ := sdk.Open("./assistant.pack.json", "support",
sdk.WithVariableProvider(variables.Time()),
sdk.WithVariableProvider(variables.State()),
)
func WithVariables ¶ added in v1.1.6
WithVariables sets initial variables for template substitution.
These variables are available immediately when the conversation opens, before any messages are sent. Use this for variables that must be set before the first LLM call (e.g., in streaming/ASM mode).
Variables set here override prompt defaults but can be further modified via conv.SetVar() for subsequent messages.
conv, _ := sdk.Open("./assistant.pack.json", "assistant",
sdk.WithVariables(map[string]string{
"user_name": "Alice",
"language": "en",
}),
)
func WithVertex ¶ added in v1.1.9
func WithVertex(region, project, providerType, model string, opts ...PlatformOption) Option
WithVertex configures Google Cloud Vertex AI as the hosting platform. The providerType specifies the provider factory (e.g., "claude", "gemini") and model is the model identifier. This uses Application Default Credentials (Workload Identity, gcloud auth, etc.).
conv, _ := sdk.Open("./chat.pack.json", "assistant",
sdk.WithVertex("us-central1", "my-project", "gemini", "gemini-2.0-flash"),
)
type PackError ¶ added in v1.1.5
type PackError struct {
// Path is the pack file path.
Path string
// Cause is the underlying error.
Cause error
}
PackError represents an error loading or parsing a pack file.
type PackTemplate ¶ added in v1.3.2
type PackTemplate struct {
// contains filtered or unexported fields
}
PackTemplate is a pre-loaded, immutable representation of a pack file.
Use PackTemplate when creating many conversations from the same pack to avoid redundant file I/O, JSON parsing, schema validation, prompt registry construction, and tool repository construction on each Open() call.
PackTemplate is safe for concurrent use. All cached artifacts are immutable after construction.
Usage:
tmpl, err := sdk.LoadTemplate("./assistant.pack.json")
if err != nil {
log.Fatal(err)
}
// Create conversations efficiently — pack is loaded once
for req := range requests {
conv, err := tmpl.Open("chat", sdk.WithProvider(myProvider))
if err != nil {
log.Printf("open failed: %v", err)
continue
}
go handleConversation(conv, req)
}
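When many goroutines may race to load the same pack path, a small load-once cache in front of LoadTemplate keeps the expensive parse to a single call per path. The sketch below shows the pattern generically; the stand-in loader is hypothetical and would be sdk.LoadTemplate in real code.

```go
package main

import (
	"fmt"
	"sync"
)

// templateCache loads a value per key exactly once and shares it afterwards,
// the usage pattern PackTemplate is designed for.
type templateCache[T any] struct {
	mu    sync.Mutex
	items map[string]T
	load  func(string) T
	loads int // counts real loads, for demonstration only
}

func newTemplateCache[T any](load func(string) T) *templateCache[T] {
	return &templateCache[T]{items: map[string]T{}, load: load}
}

// Get returns the cached value for key, loading it on first use.
func (c *templateCache[T]) Get(key string) T {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.items[key]; ok {
		return v
	}
	c.loads++
	v := c.load(key)
	c.items[key] = v
	return v
}

func main() {
	cache := newTemplateCache(func(path string) string {
		return "parsed:" + path // stand-in for sdk.LoadTemplate(path)
	})
	cache.Get("./assistant.pack.json")
	cache.Get("./assistant.pack.json") // served from cache, no second load
	fmt.Println(cache.loads)
}
```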
func LoadTemplate ¶ added in v1.3.2
func LoadTemplate(packPath string, opts ...Option) (*PackTemplate, error)
LoadTemplate loads a pack file and pre-builds shared, immutable resources.
The returned PackTemplate caches:
- The parsed pack structure
- The prompt registry (prompt configs, fragments)
- The tool repository (tool descriptors)
These are shared across all conversations created from this template. Per-conversation resources (tool executors, state stores, sessions) are still created fresh for each conversation.
Options that affect pack loading can be passed:
- WithSkipSchemaValidation() to skip JSON schema validation
func (*PackTemplate) Open ¶ added in v1.3.2
func (t *PackTemplate) Open(promptName string, opts ...Option) (*Conversation, error)
Open creates a new conversation from this template for the given prompt.
This is equivalent to sdk.Open but reuses pre-loaded pack resources, avoiding per-conversation file I/O and parsing overhead.
Per-conversation resources are still created fresh:
- Tool registry (with shared repository but per-conversation executors)
- State store and session
- Capabilities
- Event bus and hooks
func (*PackTemplate) OpenDuplex ¶ added in v1.3.2
func (t *PackTemplate) OpenDuplex(promptName string, opts ...Option) (*Conversation, error)
OpenDuplex creates a new duplex streaming conversation from this template.
This is equivalent to sdk.OpenDuplex but reuses pre-loaded pack resources.
func (*PackTemplate) Pack ¶ added in v1.3.2
func (t *PackTemplate) Pack() *pack.Pack
Pack returns the loaded pack for inspection. The returned pack must not be modified.
type PendingClientTool ¶ added in v1.3.8
type PendingClientTool struct {
// CallID is the provider-assigned ID for this tool invocation.
CallID string
// ToolName is the tool's name as defined in the pack.
ToolName string
// Args contains the parsed arguments from the LLM.
Args map[string]any
// ConsentMsg is the human-readable consent message from the pack's
// client.consent.message field. Empty when no consent is configured.
ConsentMsg string
// Categories are the semantic consent categories (e.g., ["location"]).
Categories []string
}
PendingClientTool represents a client-mode tool call that was deferred because no OnClientTool handler was registered. The caller must supply a result via Conversation.SendToolResult or reject it via Conversation.RejectClientTool and then call Conversation.Resume.
type PendingTool ¶ added in v1.1.5
type PendingTool struct {
// Unique identifier for this pending call
ID string
// Tool name
Name string
// Arguments passed to the tool
Arguments map[string]any
// Reason the tool requires approval
Reason string
// Human-readable message about why approval is needed
Message string
}
PendingTool represents a tool call that requires external approval.
type PlatformOption ¶ added in v1.1.9
type PlatformOption interface {
// contains filtered or unexported methods
}
PlatformOption configures a platform for a provider.
func WithPlatformEndpoint ¶ added in v1.1.9
func WithPlatformEndpoint(endpoint string) PlatformOption
WithPlatformEndpoint sets a custom endpoint URL.
func WithPlatformProject ¶ added in v1.1.9
func WithPlatformProject(project string) PlatformOption
WithPlatformProject sets the cloud project (for Vertex).
func WithPlatformRegion ¶ added in v1.1.9
func WithPlatformRegion(region string) PlatformOption
WithPlatformRegion sets the cloud region.
type ProviderError ¶ added in v1.1.5
type ProviderError struct {
// Provider name (e.g., "openai", "anthropic").
Provider string
// StatusCode is the HTTP status code if available.
StatusCode int
// Message is the error message from the provider.
Message string
// Cause is the underlying error.
Cause error
}
ProviderError represents an error from the LLM provider.
func (*ProviderError) Error ¶ added in v1.1.5
func (e *ProviderError) Error() string
Error implements the error interface.
func (*ProviderError) Unwrap ¶ added in v1.1.5
func (e *ProviderError) Unwrap() error
Unwrap returns the underlying error.
type RecordingConfig ¶ added in v1.3.8
type RecordingConfig struct {
// IncludeAudio records audio data (may be large). Default: true.
IncludeAudio bool
// IncludeVideo records video data (may be large). Default: false.
IncludeVideo bool
// IncludeImages records image data. Default: true.
IncludeImages bool
}
RecordingConfig configures session recording via RecordingStage. RecordingStages capture full message content (including binary data) and publish directly to the EventBus for session replay.
func DefaultRecordingConfig ¶ added in v1.3.8
func DefaultRecordingConfig() RecordingConfig
DefaultRecordingConfig returns a RecordingConfig with sensible defaults.
type RelevanceConfig ¶ added in v1.1.6
type RelevanceConfig struct {
// EmbeddingProvider generates embeddings for similarity scoring.
// Required for relevance-based truncation.
EmbeddingProvider providers.EmbeddingProvider
// MinRecentMessages always keeps the N most recent messages regardless of relevance.
// Default: 3
MinRecentMessages int
// AlwaysKeepSystemRole keeps all system role messages regardless of score.
// Default: true
AlwaysKeepSystemRole bool
// SimilarityThreshold is the minimum score (0.0-1.0) to consider a message relevant.
// Messages below this threshold are dropped first. Default: 0.0 (no threshold)
SimilarityThreshold float64
// QuerySource determines what text to compare messages against.
// Values: "last_user" (default), "last_n", "custom"
QuerySource string
// LastNCount is the number of messages to use when QuerySource is "last_n".
// Default: 3
LastNCount int
// CustomQuery is the query text when QuerySource is "custom".
CustomQuery string
}
RelevanceConfig configures embedding-based relevance truncation. Used when truncation strategy is "relevance".
type Response ¶
type Response struct {
// contains filtered or unexported fields
}
Response represents the result of a conversation turn.
Response wraps the assistant's message with convenience methods and additional metadata like timing and validation results.
Basic usage:
resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())       // Text content
fmt.Println(resp.TokensUsed()) // Total tokens
fmt.Println(resp.Cost())       // Total cost in USD
For multimodal responses:
if resp.HasMedia() {
for _, part := range resp.Parts() {
if part.Media != nil {
fmt.Printf("Media: %s\n", part.Media.URL)
}
}
}
func NewResponseForTest ¶ added in v1.3.2
func NewResponseForTest(text string, toolCalls []types.MessageToolCall, opts ...ResponseTestOption) *Response
NewResponseForTest creates a Response for use in tests outside the sdk package. This is not intended for production use.
func (*Response) ClientTools ¶ added in v1.3.8
func (r *Response) ClientTools() []PendingClientTool
ClientTools returns client tools awaiting fulfillment by the caller.
When no Conversation.OnClientTool handler is registered for a tool, the pipeline suspends and the pending client tools are returned here. The caller should fulfill them via Conversation.SendToolResult or Conversation.RejectClientTool, then call Conversation.Resume.
func (*Response) HasMedia ¶ added in v1.1.5
HasMedia returns true if the response contains any media content.
func (*Response) HasPendingClientTools ¶ added in v1.3.8
HasPendingClientTools returns true if the response contains client tools that the caller must fulfill before the conversation can continue.
func (*Response) HasToolCalls ¶ added in v1.1.5
HasToolCalls returns true if the response contains tool calls.
func (*Response) InputTokens ¶ added in v1.1.5
InputTokens returns the number of input (prompt) tokens used.
func (*Response) Message ¶ added in v1.1.5
Message returns the underlying runtime Message.
Use this when you need direct access to the message structure, such as for serialization or passing to other runtime components.
func (*Response) OutputTokens ¶ added in v1.1.5
OutputTokens returns the number of output (completion) tokens used.
func (*Response) Parts ¶ added in v1.1.5
func (r *Response) Parts() []types.ContentPart
Parts returns all content parts in the response.
Use this for multimodal responses that may contain text, images, audio, or other content types.
func (*Response) PendingTools ¶
func (r *Response) PendingTools() []PendingTool
PendingTools returns tools that are awaiting external approval.
This is used for Human-in-the-Loop (HITL) workflows where certain tools require approval before execution.
func (*Response) Text ¶ added in v1.1.5
Text returns the text content of the response.
This is a convenience method that extracts all text parts and joins them. For responses with only text content, this returns the full response. For multimodal responses, use Response.Parts to access all content.
func (*Response) TokensUsed ¶
TokensUsed returns the total number of tokens used (input + output).
func (*Response) ToolCalls ¶
func (r *Response) ToolCalls() []types.MessageToolCall
ToolCalls returns the tool calls made during this turn.
Tool calls are requests from the LLM to execute functions. If you have registered handlers via Conversation.OnTool, they will be executed automatically and the results sent back to the LLM.
func (*Response) Validations ¶
func (r *Response) Validations() []types.ValidationResult
Validations returns the results of all validators that ran.
Validators are defined in the pack and run automatically on responses. Check this to see which validators passed or failed.
type ResponseTestOption ¶ added in v1.3.8
type ResponseTestOption func(*Response)
ResponseTestOption configures a Response created by NewResponseForTest.
func WithClientToolsForTest ¶ added in v1.3.8
func WithClientToolsForTest(tools []PendingClientTool) ResponseTestOption
WithClientToolsForTest attaches pending client tools to a test response.
type SendOption ¶ added in v1.1.5
type SendOption func(*sendConfig) error
SendOption configures a single Send call.
func WithAudioData ¶ added in v1.1.8
func WithAudioData(data []byte, mimeType string) SendOption
WithAudioData attaches audio from raw bytes.
resp, _ := conv.Send(ctx, "Transcribe this audio",
sdk.WithAudioData(audioBytes, "audio/mp3"),
)
func WithAudioFile ¶ added in v1.1.5
func WithAudioFile(path string) SendOption
WithAudioFile attaches audio from a file path.
resp, _ := conv.Send(ctx, "Transcribe this audio",
sdk.WithAudioFile("/path/to/audio.mp3"),
)
func WithDocumentData ¶ added in v1.1.9
func WithDocumentData(data []byte, mimeType string) SendOption
WithDocumentData attaches a document from raw data with the specified MIME type.
resp, _ := conv.Send(ctx, "Review this PDF",
sdk.WithDocumentData(pdfBytes, types.MIMETypePDF),
)
func WithDocumentFile ¶ added in v1.1.9
func WithDocumentFile(path string) SendOption
WithDocumentFile attaches a document from a file path (PDF, Word, markdown, etc.).
resp, _ := conv.Send(ctx, "Analyze this document",
sdk.WithDocumentFile("contract.pdf"),
)
func WithFile ¶ added in v1.1.5 (deprecated)
func WithFile(name string, data []byte) SendOption
WithFile attaches a file with the given name and content.
Deprecated: Use WithDocumentFile or WithDocumentData instead for proper document handling. This function is kept for backward compatibility but should not be used for new code as it cannot properly handle binary files.
resp, _ := conv.Send(ctx, "Analyze this data",
sdk.WithFile("data.csv", csvBytes),
)
func WithImageData ¶ added in v1.1.5
func WithImageData(data []byte, mimeType string, detail ...*string) SendOption
WithImageData attaches an image from raw bytes.
resp, _ := conv.Send(ctx, "What's in this image?",
sdk.WithImageData(imageBytes, "image/png"),
)
func WithImageFile ¶ added in v1.1.5
func WithImageFile(path string, detail ...*string) SendOption
WithImageFile attaches an image from a file path.
resp, _ := conv.Send(ctx, "What's in this image?",
sdk.WithImageFile("/path/to/image.jpg"),
)
func WithImageURL ¶ added in v1.1.5
func WithImageURL(url string, detail ...*string) SendOption
WithImageURL attaches an image from a URL.
resp, _ := conv.Send(ctx, "What's in this image?",
sdk.WithImageURL("https://example.com/photo.jpg"),
)
func WithVideoData ¶ added in v1.1.8
func WithVideoData(data []byte, mimeType string) SendOption
WithVideoData attaches a video from raw bytes.
resp, _ := conv.Send(ctx, "Describe this video",
sdk.WithVideoData(videoBytes, "video/mp4"),
)
func WithVideoFile ¶ added in v1.1.8
func WithVideoFile(path string) SendOption
WithVideoFile attaches a video from a file path.
resp, _ := conv.Send(ctx, "Describe this video",
sdk.WithVideoFile("/path/to/video.mp4"),
)
type SessionMode ¶ added in v1.1.6
type SessionMode int
SessionMode represents the conversation's session mode.
const (
// UnaryMode for request/response conversations.
UnaryMode SessionMode = iota
// DuplexMode for bidirectional streaming conversations.
DuplexMode
)
type SkillsCapability ¶ added in v1.3.1
type SkillsCapability struct {
// contains filtered or unexported fields
}
SkillsCapability provides skill activation/deactivation tools to conversations. Skills are loaded from directories or inline definitions and can be dynamically activated by the LLM via the skill__activate tool.
func NewSkillsCapability ¶ added in v1.3.1
func NewSkillsCapability(sources []skills.SkillSource, opts ...SkillsOption) *SkillsCapability
NewSkillsCapability creates a new SkillsCapability from the given sources.
func (*SkillsCapability) Close ¶ added in v1.3.1
func (c *SkillsCapability) Close() error
Close is a no-op for SkillsCapability.
func (*SkillsCapability) Executor ¶ added in v1.3.1
func (c *SkillsCapability) Executor() *skills.Executor
Executor returns the underlying skills executor for testing.
func (*SkillsCapability) Init ¶ added in v1.3.1
func (c *SkillsCapability) Init(ctx CapabilityContext) error
Init discovers skills from sources and creates an executor.
func (*SkillsCapability) Name ¶ added in v1.3.1
func (c *SkillsCapability) Name() string
Name returns the capability identifier.
func (*SkillsCapability) RegisterTools ¶ added in v1.3.1
func (c *SkillsCapability) RegisterTools(registry *tools.Registry)
RegisterTools registers the skill management tools into the registry.
type SkillsOption ¶ added in v1.3.1
type SkillsOption func(*SkillsCapability)
SkillsOption configures a SkillsCapability.
func WithMaxActiveSkills ¶ added in v1.3.1
func WithMaxActiveSkills(n int) SkillsOption
WithMaxActiveSkills sets the maximum number of concurrently active skills.
func WithSkillSelector ¶ added in v1.3.1
func WithSkillSelector(s skills.SkillSelector) SkillsOption
WithSkillSelector sets a custom skill selector for filtering available skills.
type StatefulCapability ¶ added in v1.3.1
type StatefulCapability interface {
Capability
RefreshTools(registry *tools.Registry)
}
StatefulCapability can update tools dynamically (e.g., after workflow state changes).
type StaticEndpointResolver ¶ added in v1.3.1
type StaticEndpointResolver struct {
BaseURL string
}
StaticEndpointResolver returns the same base URL for every agent. This is useful when all agents are behind a single gateway or when developing locally against a single A2A server.
func (*StaticEndpointResolver) Resolve ¶ added in v1.3.1
func (r *StaticEndpointResolver) Resolve(_ string) string
Resolve returns the static base URL for any agent name.
type StreamChunk ¶ added in v1.1.5
type StreamChunk struct {
// Type of this chunk
Type ChunkType
// Text content (for ChunkText type)
Text string
// Tool call (for ChunkToolCall type)
ToolCall *types.MessageToolCall
// Media content (for ChunkMedia type)
Media *types.MediaContent
// ClientTool contains a pending client tool request (for ChunkClientTool type).
// The caller should fulfill it via SendToolResult/RejectClientTool, then call ResumeStream.
ClientTool *PendingClientTool
// Complete response (for ChunkDone type)
Message *Response
// Error (if any occurred)
Error error
}
StreamChunk represents a single chunk in a streaming response.
type StreamDoneEvent ¶ added in v1.3.8
type StreamDoneEvent struct {
// Response contains the complete response with metadata.
Response *Response
}
StreamDoneEvent is emitted when the stream completes.
type StreamEvent ¶
type StreamEvent interface {
// contains filtered or unexported methods
}
StreamEvent is the interface for all stream event types. Use a type switch to handle specific events.
type StreamEventHandler ¶ added in v1.3.8
type StreamEventHandler func(event StreamEvent)
StreamEventHandler is called for each event during streaming.
type TextDeltaEvent ¶ added in v1.3.8
type TextDeltaEvent struct {
// Delta is the new text content in this chunk.
Delta string
}
TextDeltaEvent is emitted for each text delta during streaming.
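The type switch recommended for StreamEvent can be sketched with minimal local stand-ins for the event types; in real code you would switch on sdk.TextDeltaEvent and sdk.StreamDoneEvent values inside a StreamEventHandler.

```go
package main

import "fmt"

// Local stand-ins for the SDK's stream event types, for illustration only.
type streamEvent interface{ isStreamEvent() }

type textDeltaEvent struct{ Delta string }
type streamDoneEvent struct{ Total string }

func (textDeltaEvent) isStreamEvent()  {}
func (streamDoneEvent) isStreamEvent() {}

// handle accumulates text deltas and returns the final text, demonstrating
// the type switch pattern for stream events.
func handle(events []streamEvent) string {
	var buf string
	for _, ev := range events {
		switch e := ev.(type) {
		case textDeltaEvent:
			buf += e.Delta // append each delta as it arrives
		case streamDoneEvent:
			return e.Total // done event carries the complete response
		}
	}
	return buf
}

func main() {
	fmt.Println(handle([]streamEvent{
		textDeltaEvent{"Hel"},
		textDeltaEvent{"lo"},
		streamDoneEvent{"Hello"},
	}))
}
```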
type ToolError ¶ added in v1.1.5
type ToolError struct {
// ToolName is the name of the tool that failed.
ToolName string
// Cause is the underlying error from the tool handler.
Cause error
}
ToolError represents an error executing a tool.
type ToolHandler ¶ added in v1.1.5
ToolHandler is a function that executes a tool call. It receives the parsed arguments from the LLM and returns a result.
The args map contains the arguments as specified in the tool's schema. The return value should be JSON-serializable.
conv.OnTool("get_weather", func(args map[string]any) (any, error) {
city := args["city"].(string)
return weatherAPI.GetCurrent(city)
})
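The bare type assertion in the example above panics if the model sends a missing or mistyped argument. A defensive handler can extract arguments through a small helper instead; the sketch below is a suggested pattern, not an SDK API. Note that JSON numbers arrive in the args map as float64.

```go
package main

import "fmt"

// stringArg extracts a required string argument, returning an error instead
// of panicking when the LLM sends a missing or mistyped value.
func stringArg(args map[string]any, key string) (string, error) {
	v, ok := args[key]
	if !ok {
		return "", fmt.Errorf("missing required argument %q", key)
	}
	s, ok := v.(string)
	if !ok {
		return "", fmt.Errorf("argument %q: expected string, got %T", key, v)
	}
	return s, nil
}

func main() {
	args := map[string]any{"city": "Oslo", "days": 3.0} // JSON numbers decode as float64
	city, err := stringArg(args, "city")
	fmt.Println(city, err)
	_, err = stringArg(args, "country")
	fmt.Println(err) // reports the missing argument instead of panicking
}
```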
type ToolHandlerCtx ¶ added in v1.1.5
ToolHandlerCtx is like ToolHandler but receives a context. Use this when your tool implementation needs context for cancellation or deadlines.
conv.OnToolCtx("search_db", func(ctx context.Context, args map[string]any) (any, error) {
return db.SearchWithContext(ctx, args["query"].(string))
})
type VADModeConfig ¶ added in v1.1.6
type VADModeConfig struct {
// SilenceDuration is how long silence must persist to trigger turn complete.
// Default: 800ms
SilenceDuration time.Duration
// MinSpeechDuration is minimum speech before turn can complete.
// Default: 200ms
MinSpeechDuration time.Duration
// MaxTurnDuration is maximum turn length before forcing completion.
// Default: 30s
MaxTurnDuration time.Duration
// SampleRate is the audio sample rate.
// Default: 16000
SampleRate int
// Language is the language hint for STT (e.g., "en", "es").
// Default: "en"
Language string
// Voice is the TTS voice to use.
// Default: "alloy"
Voice string
// Speed is the TTS speech rate (0.5-2.0).
// Default: 1.0
Speed float64
}
VADModeConfig configures VAD (Voice Activity Detection) mode for voice conversations. In VAD mode, the pipeline processes audio through: AudioTurnStage → STTStage → ProviderStage → TTSStage
This enables voice conversations using standard text-based LLMs.
func DefaultVADModeConfig ¶ added in v1.1.6
func DefaultVADModeConfig() *VADModeConfig
DefaultVADModeConfig returns sensible defaults for VAD mode.
type ValidationError ¶ added in v1.1.5
type ValidationError struct {
// ValidatorType is the type of validator that failed (e.g., "banned_words").
ValidatorType string
// Message describes what validation rule was violated.
Message string
// Details contains validator-specific information about the failure.
Details map[string]any
}
ValidationError represents a validation failure.
func AsValidationError ¶ added in v1.1.5
func AsValidationError(err error) (*ValidationError, bool)
AsValidationError checks if an error is a ValidationError and returns it.
resp, err := conv.Send(ctx, message)
if err != nil {
if vErr, ok := sdk.AsValidationError(err); ok {
fmt.Printf("Validation failed: %s\n", vErr.ValidatorType)
}
}
func (*ValidationError) Error ¶ added in v1.1.5
func (e *ValidationError) Error() string
Error implements the error interface.
type VideoStreamConfig ¶ added in v1.1.8
type VideoStreamConfig struct {
// TargetFPS is the target frame rate for streaming.
// Frames exceeding this rate will be dropped.
// Default: 1.0 (one frame per second, suitable for most LLM vision scenarios)
TargetFPS float64
// MaxWidth is the maximum frame width in pixels.
// Frames larger than this are resized. 0 means no limit.
// Default: 0 (no resizing)
MaxWidth int
// MaxHeight is the maximum frame height in pixels.
// Frames larger than this are resized. 0 means no limit.
// Default: 0 (no resizing)
MaxHeight int
// Quality is the JPEG compression quality (1-100) for frame encoding.
// Higher values = better quality, larger size.
// Default: 85
Quality int
// EnableResize enables automatic frame resizing when dimensions exceed limits.
// Default: true (resizing enabled when MaxWidth/MaxHeight are set)
EnableResize bool
}
VideoStreamConfig configures realtime video/image streaming for duplex sessions. This enables webcam feeds, screen sharing, and continuous frame analysis.
func DefaultVideoStreamConfig ¶ added in v1.1.8
func DefaultVideoStreamConfig() *VideoStreamConfig
DefaultVideoStreamConfig returns sensible defaults for video streaming.
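What the MaxWidth/MaxHeight limits imply can be sketched as an aspect-preserving fit: scale the frame down by whichever limit is tighter, and leave it untouched if it already fits. The fitWithin helper below is illustrative, not part of the SDK.

```go
package main

import "fmt"

// fitWithin computes scaled dimensions so a frame fits inside maxW x maxH
// while preserving aspect ratio; a zero limit means unbounded.
func fitWithin(w, h, maxW, maxH int) (int, int) {
	scale := 1.0
	if maxW > 0 && w > maxW {
		scale = float64(maxW) / float64(w)
	}
	if maxH > 0 && float64(h)*scale > float64(maxH) {
		scale = float64(maxH) / float64(h)
	}
	return int(float64(w) * scale), int(float64(h) * scale)
}

func main() {
	fmt.Println(fitWithin(4000, 3000, 1280, 720)) // scaled down to 960 720
	fmt.Println(fitWithin(640, 480, 1280, 720))   // already fits: unchanged
}
```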
type WorkflowCapability ¶ added in v1.3.1
type WorkflowCapability struct {
// contains filtered or unexported fields
}
WorkflowCapability provides the workflow__transition tool for LLM-initiated state transitions.
func NewWorkflowCapability ¶ added in v1.3.1
func NewWorkflowCapability() *WorkflowCapability
NewWorkflowCapability creates a new WorkflowCapability.
func (*WorkflowCapability) Close ¶ added in v1.3.1
func (w *WorkflowCapability) Close() error
Close is a no-op for WorkflowCapability.
func (*WorkflowCapability) Init ¶ added in v1.3.1
func (w *WorkflowCapability) Init(ctx CapabilityContext) error
Init stores the workflow spec from the pack for later tool registration.
func (*WorkflowCapability) Name ¶ added in v1.3.1
func (w *WorkflowCapability) Name() string
Name returns the capability identifier.
func (*WorkflowCapability) RegisterTools ¶ added in v1.3.1
func (w *WorkflowCapability) RegisterTools(_ *tools.Registry)
RegisterTools is a no-op at the conversation level; WorkflowConversation calls RegisterToolsForState per state.
func (*WorkflowCapability) RegisterToolsForState ¶ added in v1.3.1
func (w *WorkflowCapability) RegisterToolsForState(registry *tools.Registry, state *workflow.State)
RegisterToolsForState registers workflow__transition for a specific state. Called by WorkflowConversation when opening a conversation for a state.
type WorkflowConversation ¶ added in v1.3.1
type WorkflowConversation struct {
// contains filtered or unexported fields
}
WorkflowConversation manages a stateful workflow that transitions between different prompts in a pack based on events.
Each state in the workflow maps to a prompt_task in the pack. When a transition occurs, the current conversation is closed and a new one is opened for the target state's prompt.
Basic usage:
wc, err := sdk.OpenWorkflow("./support.pack.json")
if err != nil {
log.Fatal(err)
}
defer wc.Close()
resp, _ := wc.Send(ctx, "I need help with billing")
fmt.Println(resp.Text())
newState, _ := wc.Transition("Escalate")
fmt.Println("Moved to:", newState)
func OpenWorkflow ¶ added in v1.3.1
func OpenWorkflow(packPath string, opts ...Option) (*WorkflowConversation, error)
OpenWorkflow loads a pack file and creates a WorkflowConversation.
The pack must contain a workflow section. The initial conversation is opened for the workflow's entry state prompt_task.
wc, err := sdk.OpenWorkflow("./support.pack.json",
sdk.WithModel("gpt-4o"),
)
func ResumeWorkflow ¶ added in v1.3.1
func ResumeWorkflow(workflowID, packPath string, opts ...Option) (*WorkflowConversation, error)
ResumeWorkflow restores a WorkflowConversation from a previously persisted state.
The workflow context is loaded from the state store's metadata["workflow"] key. A state store must be configured via WithStateStore.
wc, err := sdk.ResumeWorkflow("workflow-123", "./support.pack.json",
sdk.WithStateStore(store),
)
func (*WorkflowConversation) ActiveConversation ¶ added in v1.3.1
func (wc *WorkflowConversation) ActiveConversation() *Conversation
ActiveConversation returns the current state's Conversation. Use this to access conversation-specific methods like SetVar, OnTool, etc.
func (*WorkflowConversation) AvailableEvents ¶ added in v1.3.1
func (wc *WorkflowConversation) AvailableEvents() []string
AvailableEvents returns the events available in the current state, sorted alphabetically.
func (*WorkflowConversation) Close ¶ added in v1.3.1
func (wc *WorkflowConversation) Close() error
Close closes the active conversation and marks the workflow as closed.
func (*WorkflowConversation) Context ¶ added in v1.3.1
func (wc *WorkflowConversation) Context() *workflow.Context
Context returns a snapshot of the workflow execution context including transition history and metadata.
func (*WorkflowConversation) CurrentPromptTask ¶ added in v1.3.1
func (wc *WorkflowConversation) CurrentPromptTask() string
CurrentPromptTask returns the prompt_task for the current state.
func (*WorkflowConversation) CurrentState ¶ added in v1.3.1
func (wc *WorkflowConversation) CurrentState() string
CurrentState returns the current workflow state name.
func (*WorkflowConversation) IsComplete ¶ added in v1.3.1
func (wc *WorkflowConversation) IsComplete() bool
IsComplete returns true if the workflow is in a terminal state (no outgoing transitions).
func (*WorkflowConversation) OrchestrationMode ¶ added in v1.3.1
func (wc *WorkflowConversation) OrchestrationMode() workflow.Orchestration
OrchestrationMode returns the orchestration mode of the current state. External orchestration means transitions are driven by outside callers (e.g., HTTP handlers, message queues) rather than from within the conversation loop.
func (*WorkflowConversation) Send ¶ added in v1.3.1
func (wc *WorkflowConversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
Send sends a message to the active state's conversation and returns the response. If the LLM calls the workflow__transition tool, the transition is processed after the Send completes.
resp, err := wc.Send(ctx, "Hello!")
fmt.Println(resp.Text())
func (*WorkflowConversation) Transition ¶ added in v1.3.1
func (wc *WorkflowConversation) Transition(event string) (string, error)
Transition processes an event and moves to the next state.
The current conversation is closed and a new one is opened for the target state's prompt_task. Returns the new state name.
newState, err := wc.Transition("Escalate")
if errors.Is(err, workflow.ErrInvalidEvent) {
fmt.Println("Available events:", wc.AvailableEvents())
}
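The state semantics above (events map each state to a next state, AvailableEvents is sorted, and a state with no outgoing transitions is terminal) can be modeled with a toy state machine. This is an illustrative sketch only; the real WorkflowConversation also opens a new conversation for the target state's prompt_task on each transition, which is omitted here.

```go
package main

import (
	"fmt"
	"sort"
)

// machine is a minimal stand-in for the workflow semantics described
// above: transitions maps state -> event -> next state, and a state
// with no outgoing transitions is terminal.
type machine struct {
	state       string
	transitions map[string]map[string]string
}

// availableEvents mirrors AvailableEvents: events for the current
// state, sorted alphabetically.
func (m *machine) availableEvents() []string {
	var evs []string
	for e := range m.transitions[m.state] {
		evs = append(evs, e)
	}
	sort.Strings(evs)
	return evs
}

// transition mirrors Transition: move on a valid event or fail.
func (m *machine) transition(event string) (string, error) {
	next, ok := m.transitions[m.state][event]
	if !ok {
		return "", fmt.Errorf("invalid event %q in state %q", event, m.state)
	}
	m.state = next
	return next, nil
}

// isComplete mirrors IsComplete: terminal means no outgoing transitions.
func (m *machine) isComplete() bool {
	return len(m.transitions[m.state]) == 0
}

func main() {
	m := &machine{
		state: "Triage",
		transitions: map[string]map[string]string{
			"Triage":     {"Escalate": "Specialist", "Resolve": "Closed"},
			"Specialist": {"Resolve": "Closed"},
			// "Closed" has no outgoing transitions: terminal.
		},
	}
	fmt.Println(m.availableEvents()) // [Escalate Resolve]
	next, _ := m.transition("Escalate")
	fmt.Println(next, m.isComplete()) // Specialist false
	next, _ = m.transition("Resolve")
	fmt.Println(next, m.isComplete()) // Closed true
}
```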
Source Files
¶
- a2a.go
- a2a_exports.go
- a2a_server_adapter.go
- agent_resolver.go
- capability.go
- capability_a2a.go
- capability_skills.go
- capability_workflow.go
- client_tools.go
- conversation.go
- conversation_tools.go
- doc.go
- errors.go
- eval_middleware.go
- local_agent_executor.go
- local_agent_executor_integration.go
- multi_agent.go
- multi_agent_integration.go
- options.go
- response.go
- response_export.go
- sdk.go
- session_hooks.go
- stream_events.go
- streaming.go
- template.go
- tool_executors.go
- workflow.go
- workflow_integration.go
Directories
¶
| Path | Synopsis |
|---|---|
| agui | Package agui provides bidirectional converters between PromptKit internal types and the AG-UI Go SDK types, enabling interoperability between the two systems. |
| examples | |
| audio-analysis (command) | Package main demonstrates audio analysis capabilities with the PromptKit SDK. |
| client-tools (command) | Package main demonstrates client-side tool execution with the PromptKit SDK. |
| hello (command) | Package main demonstrates the simplest PromptKit SDK usage. |
| hitl (command) | Package main demonstrates Human-in-the-Loop (HITL) tool approval with the PromptKit SDK. |
| hooks (command) | Package main demonstrates hooks and guardrails with the PromptKit SDK. |
| image-preprocessing (command) | Package main demonstrates image preprocessing capabilities with the PromptKit SDK. |
| long-conversation (command) | Package main demonstrates long conversation context management. |
| multimodal (command) | Package main demonstrates multimodal capabilities with the PromptKit SDK. |
| openai-realtime (command) | Package main demonstrates OpenAI Realtime API streaming with text mode. |
| realtime-video (command) | Package main demonstrates realtime video/image streaming with the PromptKit SDK. |
| session-recording (command) | Package main demonstrates session recording and replay. |
| streaming (command) | Package main demonstrates streaming responses with the PromptKit SDK. |
| tools (command) | Package main demonstrates tool handling with the PromptKit SDK. |
| vad-demo (command) | Package main demonstrates Voice Activity Detection (VAD) in PromptKit. |
| variables (command) | Package main demonstrates the Variable Providers feature in the PromptKit SDK. |
| workflow-external (command) | Package main demonstrates external orchestration mode for workflows. |
| hooks | Package hooks provides convenience methods for subscribing to SDK events. |
| internal | |
| a2a | Package a2a provides the A2A task store and executor for the SDK. |
| pack | Package pack provides internal pack loading functionality. |
| pipeline | Package pipeline provides internal pipeline construction for the SDK. |
| provider | Package provider provides internal provider detection and initialization. |
| session | Package session provides session abstractions for managing conversations. |
| stream | Package stream provides streaming support for SDK v2. |
| tools | Package tools provides HITL (Human-in-the-Loop) tool support. |