Documentation ¶
Index ¶
- Constants
- Variables
- type AgentEvent
- type AgentEventType
- type AgentMode
- type AgentResponse
- type ApprovalEvent
- type ApprovalHandler
- type ApprovalRequest
- type ApprovalSender
- type ApprovalStatus
- type JSONSchema
- type LLMReasoning
- type LLMSampling
- type LLMUsage
- type MCPLoopback
- type MCPStdio
- type MCPStreamableHTTP
- type MCPToolFilter
- type MCPTransportConfig
- type MCPTransportType
- type RunAsyncResult
- type SubAgentRoute
- type ToolApprovalKind
- type ToolCallEvent
- type ToolCallStatus
- type ToolSpec
Constants ¶
const (
	DefaultMCPTimeout       = 30 * time.Second
	DefaultMCPRetryAttempts = 3
)
Default MCP settings applied when fields are zero.
const MaxApprovalTimeout = 31 * 24 * time.Hour
MaxApprovalTimeout caps how long a single approval wait may last in the runtime.
const SubAgentToolParamQuery = "query"
SubAgentToolParamQuery is the tool/JSON parameter name for the query sent to a sub-agent.
Variables ¶
var ErrTemporalDialTimeout = errors.New("temporal dial timeout")
ErrTemporalDialTimeout is returned when the Temporal runtime cannot establish a gRPC connection before the internal deadline (see internal/runtime/temporal newTemporalClient).
var ErrTemporalNamespaceCheckTimeout = errors.New("temporal namespace check timeout")
ErrTemporalNamespaceCheckTimeout is returned when the Temporal namespace cannot be verified in time.
Functions ¶
This section is empty.
Types ¶
type AgentEvent ¶
type AgentEvent struct {
Type AgentEventType `json:"type"`
AgentName string `json:"agent_name,omitempty"`
Content string `json:"content,omitempty"`
ToolCall *ToolCallEvent `json:"tool_call,omitempty"`
Approval *ApprovalEvent `json:"approval,omitempty"` // for AgentEventApproval
Error error `json:"error,omitempty"`
Metadata map[string]interface{} `json:"metadata,omitempty"`
// Usage is set on AgentEventComplete for the root agent: aggregated token usage for the run.
Usage *LLMUsage `json:"usage,omitempty"`
Timestamp time.Time `json:"timestamp"`
WorkflowID string `json:"workflow_id,omitempty"` // optional run identifier for correlation (implementation-defined)
}
AgentEvent is published to subscribers when the agent produces output or errors. AgentName identifies which agent in a delegation tree emitted the event (main or sub-agent). Stream uses it so AgentEventComplete from a sub-agent does not close the root stream. For AgentEventApproval, the requesting agent is also on AgentName (not duplicated on Approval).
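A stream consumer typically switches on Type and, per the note above, treats AgentEventComplete as terminal only when AgentName is the root agent. A minimal sketch using trimmed local mirrors of the types (the channel-based stream shape here is an assumption, not this package's Stream signature):

```go
package main

import "fmt"

// Local mirrors of the package types, trimmed for the sketch.
type AgentEventType string

const (
	AgentEventContentDelta AgentEventType = "content_delta"
	AgentEventComplete     AgentEventType = "complete"
)

type AgentEvent struct {
	Type      AgentEventType
	AgentName string
	Content   string
}

// drain accumulates content deltas and returns only when the root agent's
// complete event arrives; sub-agent completes do not end the stream.
func drain(events <-chan AgentEvent, rootAgent string) string {
	var out string
	for ev := range events {
		switch ev.Type {
		case AgentEventContentDelta:
			out += ev.Content
		case AgentEventComplete:
			if ev.AgentName == rootAgent {
				return out
			}
		}
	}
	return out
}

func main() {
	ch := make(chan AgentEvent, 4)
	ch <- AgentEvent{Type: AgentEventContentDelta, AgentName: "root", Content: "hello "}
	ch <- AgentEvent{Type: AgentEventComplete, AgentName: "researcher"} // sub-agent: ignored
	ch <- AgentEvent{Type: AgentEventContentDelta, AgentName: "root", Content: "world"}
	ch <- AgentEvent{Type: AgentEventComplete, AgentName: "root"}
	close(ch)
	fmt.Println(drain(ch, "root")) // hello world
}
```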
type AgentEventType ¶
type AgentEventType string
AgentEventType identifies a streamed agent event kind.
const (
	AgentEventContent       AgentEventType = "content"
	AgentEventContentDelta  AgentEventType = "content_delta" // partial token stream
	AgentEventThinking      AgentEventType = "thinking"
	AgentEventThinkingDelta AgentEventType = "thinking_delta" // Anthropic extended thinking stream
	AgentEventToolCall      AgentEventType = "tool_call"
	AgentEventToolResult    AgentEventType = "tool_result"
	AgentEventApproval      AgentEventType = "approval"
	AgentEventError         AgentEventType = "error"
	AgentEventComplete      AgentEventType = "complete"
)
const AgentEventAll AgentEventType = "*"
AgentEventAll is the EventTypes sentinel meaning "emit every event type" (JSON "*").
type AgentMode ¶ added in v0.1.3
type AgentMode string
AgentMode distinguishes how the agent is driven: human-in-the-loop versus self-directed runs. The string value is stable for configuration and fingerprints (see pkg/agent.WithAgentMode).
const (
	// AgentModeInteractive is the default: the agent expects user turns, approvals, or other
	// interactive signals between steps when the product requires them.
	AgentModeInteractive AgentMode = "interactive"

	// AgentModeAutonomous indicates a run where the agent proceeds without blocking on user input
	// for each step (subject to tool policy and limits).
	AgentModeAutonomous AgentMode = "autonomous"
)
type AgentResponse ¶
type AgentResponse struct {
Content string `json:"content"`
AgentName string `json:"agent_name"`
Model string `json:"model"`
Metadata map[string]any `json:"metadata"`
// Usage is the sum of token usage across all LLM calls in this run (when reported by the provider).
Usage *LLMUsage `json:"usage,omitempty"`
}
AgentResponse is the structured result of a completed run (content, model, metadata).
type ApprovalEvent ¶
type ApprovalEvent struct {
ToolCallID string `json:"tool_call_id,omitempty"`
ToolName string `json:"tool_name"`
Args map[string]any `json:"args,omitempty"`
ApprovalToken string `json:"approval_token,omitempty"`
// Kind is tool vs sub-agent delegation; use for UI copy.
Kind ToolApprovalKind `json:"kind,omitempty"`
// SubAgentName is set when Kind is delegation: display name of the target sub-agent.
SubAgentName string `json:"sub_agent_name,omitempty"`
}
ApprovalEvent is the payload for AgentEventApproval (Stream). The agent that requested approval is on AgentEvent.AgentName, not repeated here. Use with Agent.OnApproval when the user approves or rejects; see streaming examples.
type ApprovalHandler ¶
type ApprovalHandler func(ctx context.Context, req *ApprovalRequest)
ApprovalHandler is called when a tool needs approval (Run with WithApprovalHandler). req.Respond is always set: call req.Respond(ApprovalStatusApproved) or Rejected when ready. The handler may return immediately after starting async work. Multiple invocations may run concurrently when tools are invoked in parallel.
type ApprovalRequest ¶
type ApprovalRequest struct {
ToolName string `json:"tool_name"`
Args map[string]any `json:"args"`
Respond ApprovalSender `json:"-"`
// Kind matches ApprovalEvent: distinguish normal tools from sub-agent delegation.
Kind ToolApprovalKind `json:"kind,omitempty"`
// AgentName is the agent that requested approval for the current run.
AgentName string `json:"agent_name,omitempty"`
// SubAgentName is set for delegation: human-friendly target specialist name.
SubAgentName string `json:"sub_agent_name,omitempty"`
}
ApprovalRequest describes a pending tool approval for Run and RunAsync. Respond is always set; call it once with ApprovalStatusApproved or ApprovalStatusRejected. For Stream approvals, use OnApproval with the approval event payload instead.
type ApprovalSender ¶
type ApprovalSender func(status ApprovalStatus) error
ApprovalSender sends an approval result. Call once per request. Safe for concurrent use; multiple approvals may be pending when tools run in parallel.
type ApprovalStatus ¶
type ApprovalStatus string
const (
	ApprovalStatusNone        ApprovalStatus = "NONE"
	ApprovalStatusPending     ApprovalStatus = "PENDING"
	ApprovalStatusApproved    ApprovalStatus = "APPROVED"
	ApprovalStatusRejected    ApprovalStatus = "REJECTED"
	ApprovalStatusUnavailable ApprovalStatus = "UNAVAILABLE"
)
type JSONSchema ¶ added in v0.1.2
func (JSONSchema) MarshalJSON ¶ added in v0.1.2
func (s JSONSchema) MarshalJSON() ([]byte, error)
type LLMReasoning ¶ added in v0.1.2
type LLMReasoning struct {
// Enabled requests reasoning/thinking where the provider supports it.
// Anthropic: if true and BudgetTokens is 0, uses the minimum extended-thinking budget (1024 tokens).
// OpenAI: does not infer reasoning_effort from Enabled alone (standard models reject that param).
// Gemini: contributes to turning on thought output with IncludeThoughts.
Enabled bool
// Effort is a generic reasoning intensity: "none", "minimal", "low", "medium", "high", "xhigh".
// OpenAI: sent as reasoning_effort only when non-empty; use only with reasoning-capable models.
// Gemini: mapped to ThinkingLevel when recognized (low/medium/high/minimal), unless BudgetTokens > 0.
// Anthropic: not used (use Enabled and BudgetTokens for extended thinking).
Effort string
// BudgetTokens is the token budget for internal reasoning / extended thinking.
// Anthropic: extended thinking; must be >= 1024 when non-zero (values below are clamped).
// Gemini: ThinkingBudget. If non-zero, Effort is not mapped to ThinkingLevel (API allows only one).
// OpenAI: not used.
BudgetTokens int
}
LLMReasoning configures reasoning/thinking in a provider-agnostic way. Each LLM client maps these fields to its API; fields that do not apply are ignored.
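The Anthropic mapping above can be illustrated by the budget rule alone: zero with Enabled means the minimum extended-thinking budget, and non-zero values below 1024 are clamped up. A sketch of that documented behavior (the helper name is made up; this is not the SDK's internal code):

```go
package main

import "fmt"

// minAnthropicThinkingTokens is the documented minimum extended-thinking budget.
const minAnthropicThinkingTokens = 1024

// anthropicBudget mirrors the documented mapping of LLMReasoning for Anthropic:
// Enabled with BudgetTokens == 0 uses the minimum budget; non-zero values below
// the minimum are clamped up to it.
func anthropicBudget(enabled bool, budgetTokens int) int {
	if !enabled && budgetTokens == 0 {
		return 0 // extended thinking off
	}
	if budgetTokens < minAnthropicThinkingTokens {
		return minAnthropicThinkingTokens
	}
	return budgetTokens
}

func main() {
	fmt.Println(anthropicBudget(true, 0))    // 1024 (minimum)
	fmt.Println(anthropicBudget(true, 512))  // 1024 (clamped)
	fmt.Println(anthropicBudget(true, 4096)) // 4096 (kept)
}
```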
type LLMSampling ¶
type LLMSampling struct {
Temperature *float64 // 0-2 OpenAI, 0-1 Anthropic; also Gemini
MaxTokens int // 0 = provider default
TopP *float64 // 0-1; OpenAI and Gemini (not Anthropic)
TopK *int // Anthropic only
// Reasoning: optional generic reasoning/thinking; mapped per provider.
Reasoning *LLMReasoning
}
LLMSampling holds per-agent LLM sampling overrides. nil/0 = provider default. One LLM client can serve multiple agents with different sampling.
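Because the override fields are pointers (nil = provider default), a small generic helper keeps literals readable. A sketch with a trimmed local mirror of LLMSampling (the `ptr` helper is not part of this package):

```go
package main

import "fmt"

// Local mirror of LLMSampling, trimmed for the sketch.
type LLMSampling struct {
	Temperature *float64 // nil = provider default
	MaxTokens   int      // 0 = provider default
	TopP        *float64 // nil = provider default
}

// ptr gives pointer override fields a literal value in one expression.
func ptr[T any](v T) *T { return &v }

func main() {
	// Two agents sharing one LLM client but with different sampling.
	precise := LLMSampling{Temperature: ptr(0.0), MaxTokens: 1024}
	creative := LLMSampling{Temperature: ptr(0.9), TopP: ptr(0.95)}
	fmt.Println(*precise.Temperature, *creative.Temperature) // 0 0.9
}
```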
type LLMUsage ¶ added in v0.1.2
type LLMUsage struct {
PromptTokens int64 `json:"prompt_tokens,omitempty"`
CompletionTokens int64 `json:"completion_tokens,omitempty"`
TotalTokens int64 `json:"total_tokens,omitempty"`
CachedPromptTokens int64 `json:"cached_prompt_tokens,omitempty"`
ReasoningTokens int64 `json:"reasoning_tokens,omitempty"`
}
LLMUsage reports token counts from the provider for one completion. Values are best-effort: some fields may be zero when the API does not return them.
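The per-run Usage on AgentResponse and AgentEventComplete is described as the sum of per-call usage. A sketch of that aggregation over a trimmed local mirror of LLMUsage (the `sumUsage` helper is illustrative, not the SDK's internal function):

```go
package main

import "fmt"

// Local mirror of LLMUsage, trimmed for the sketch.
type LLMUsage struct {
	PromptTokens     int64
	CompletionTokens int64
	TotalTokens      int64
}

// sumUsage aggregates per-call usage into a run total. Fields the provider
// did not report stay zero and simply add nothing.
func sumUsage(calls []LLMUsage) LLMUsage {
	var total LLMUsage
	for _, u := range calls {
		total.PromptTokens += u.PromptTokens
		total.CompletionTokens += u.CompletionTokens
		total.TotalTokens += u.TotalTokens
	}
	return total
}

func main() {
	total := sumUsage([]LLMUsage{
		{PromptTokens: 120, CompletionTokens: 40, TotalTokens: 160},
		{PromptTokens: 300, CompletionTokens: 90, TotalTokens: 390},
	})
	fmt.Println(total.TotalTokens) // 550
}
```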
type MCPLoopback ¶ added in v0.1.2
type MCPLoopback struct {
Transport any
}
MCPLoopback is test-only wiring: it holds a pre-built protocol transport as a dynamic value. External users should use pkg/mcp transport types (MCPStdio, MCPStreamableHTTP). MCPLoopback is not re-exported from pkg/mcp.
func (MCPLoopback) Kind ¶ added in v0.1.2
func (MCPLoopback) Kind() MCPTransportType
func (MCPLoopback) Validate ¶ added in v0.1.2
func (lb MCPLoopback) Validate() error
Validate ensures Transport is a non-nil sdkmcp.Transport.
type MCPStdio ¶ added in v0.1.2
MCPStdio runs an MCP server as a subprocess (stdio).
func (MCPStdio) Kind ¶ added in v0.1.2
func (MCPStdio) Kind() MCPTransportType
type MCPStreamableHTTP ¶ added in v0.1.2
type MCPStreamableHTTP struct {
URL string
// Token is a static bearer token when OAuthClientCreds is not used for auth.
Token string
// OAuthClientCreds configures OAuth2 client-credentials; when any OAuth field is set, Token must be empty and id/secret/token_url are required together.
OAuthClientCreds *clientcredentials.Config
Headers map[string]string
SkipTLSVerify bool
}
MCPStreamableHTTP uses the streamable HTTP MCP transport.
Optional static bearer Token, or OAuthClientCreds for OAuth2 client credentials. Token and active OAuth client-credentials must not both be set; omit both for URL-only access (use Headers for custom auth headers such as API keys).
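The mutual-exclusion rules above can be expressed as a small check. A sketch mirroring the documented Validate behavior with trimmed local types (not the package's actual implementation; the real type uses *clientcredentials.Config):

```go
package main

import (
	"errors"
	"fmt"
)

// oauthCreds is a local stand-in for the OAuth2 client-credentials config.
type oauthCreds struct {
	ClientID     string
	ClientSecret string
	TokenURL     string
}

// validateHTTP mirrors the documented MCPStreamableHTTP.Validate rules:
// URL required; Token and OAuth mutually exclusive; OAuth fields required
// together once any of them is set.
func validateHTTP(url, token string, oauth *oauthCreds) error {
	if url == "" {
		return errors.New("url is required")
	}
	if oauth == nil {
		return nil // URL-only (or Token-only) access is fine
	}
	if token != "" {
		return errors.New("token and oauth client credentials are mutually exclusive")
	}
	if oauth.ClientID == "" || oauth.ClientSecret == "" || oauth.TokenURL == "" {
		return errors.New("oauth requires client id, secret, and token url together")
	}
	return nil
}

func main() {
	err := validateHTTP("https://mcp.example.com", "tok", &oauthCreds{ClientID: "id"})
	fmt.Println(err != nil) // true: Token mixed with OAuth
}
```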
func (MCPStreamableHTTP) Kind ¶ added in v0.1.2
func (MCPStreamableHTTP) Kind() MCPTransportType
func (MCPStreamableHTTP) Validate ¶ added in v0.1.2
func (h MCPStreamableHTTP) Validate() error
Validate checks URL, rejects mixing Token with a populated OAuth client-credentials config, and rejects incomplete OAuth when any OAuth field is set.
type MCPToolFilter ¶ added in v0.1.2
MCPToolFilter restricts which tools from Discover are registered (exact name match). Set either AllowTools (allow-list) or BlockTools (block-list), not both. MCPToolFilter.Validate checks constraints (call from config build, e.g. github.com/agenticenv/agent-sdk-go/pkg/mcp/client.BuildConfig). MCPToolFilter.Apply filters tool specs and assumes Validate already passed for non-empty filters.
func (MCPToolFilter) Apply ¶ added in v0.1.2
func (f MCPToolFilter) Apply(specs []ToolSpec) []ToolSpec
Apply returns filtered specs when AllowTools or BlockTools is non-empty; otherwise returns specs unchanged. For any non-empty list, the receiver must already satisfy MCPToolFilter.Validate (mutually exclusive lists).
func (MCPToolFilter) Validate ¶ added in v0.1.2
func (f MCPToolFilter) Validate() error
Validate returns an error if both AllowTools and BlockTools are set.
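The Apply semantics (exact name match, allow-list or block-list, pass-through when empty) can be sketched against trimmed local mirrors of the types. This is an illustration of the documented behavior, not the package's actual implementation:

```go
package main

import "fmt"

// Local mirrors of the package types, trimmed for the sketch.
type ToolSpec struct{ Name string }

type MCPToolFilter struct {
	AllowTools []string
	BlockTools []string
}

// Apply keeps specs by exact name match: allow-list when AllowTools is set,
// block-list when BlockTools is set, unchanged when both are empty.
// Validate (which rejects setting both lists) is assumed to have passed.
func (f MCPToolFilter) Apply(specs []ToolSpec) []ToolSpec {
	if len(f.AllowTools) == 0 && len(f.BlockTools) == 0 {
		return specs
	}
	allowed := make(map[string]bool, len(f.AllowTools))
	for _, n := range f.AllowTools {
		allowed[n] = true
	}
	blocked := make(map[string]bool, len(f.BlockTools))
	for _, n := range f.BlockTools {
		blocked[n] = true
	}
	var out []ToolSpec
	for _, s := range specs {
		if len(f.AllowTools) > 0 && !allowed[s.Name] {
			continue
		}
		if blocked[s.Name] {
			continue
		}
		out = append(out, s)
	}
	return out
}

func main() {
	specs := []ToolSpec{{Name: "read_file"}, {Name: "delete_file"}}
	f := MCPToolFilter{AllowTools: []string{"read_file"}}
	fmt.Println(len(f.Apply(specs))) // 1
}
```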
type MCPTransportConfig ¶ added in v0.1.2
type MCPTransportConfig interface {
// Kind returns a stable transport id ("stdio", "streamable_http") for logging and routing.
Kind() MCPTransportType
// Validate checks the transport is usable before connect (the default MCP client calls this from NewClient).
Validate() error
}
MCPTransportConfig describes how to reach one MCP server. Concrete types are MCPStdio, MCPStreamableHTTP, and MCPLoopback (tests).
type MCPTransportType ¶ added in v0.1.2
type MCPTransportType string
const (
	MCPTransportTypeStdio          MCPTransportType = "stdio"
	MCPTransportTypeStreamableHTTP MCPTransportType = "streamable_http"
)
const MCPTransportTypeLoopback MCPTransportType = "loopback"
MCPTransportTypeLoopback is only for in-repo tests (see MCPLoopback). Not exposed on the public agent API.
type RunAsyncResult ¶
type RunAsyncResult struct {
Response *AgentResponse
Err error
}
RunAsyncResult is the single value delivered on the RunAsync channel before it closes. Exactly one of the fields is set: Err is non-nil on failure, otherwise Response is non-nil.
type SubAgentRoute ¶
type SubAgentRoute struct {
Name string `json:"name"`
TaskQueue string `json:"task_queue"`
ChildRoutes map[string]SubAgentRoute `json:"child_routes,omitempty"`
AgentFingerprint string `json:"agent_fingerprint,omitempty"`
}
SubAgentRoute tells the runtime how to delegate to a sub-agent (child run on TaskQueue), with nested routes for that sub-agent's sub-agents (frozen at parent run start). AgentFingerprint is the agent config digest for that sub-agent (pkg/agent + temporal.ComputeAgentFingerprint) so the child worker can reject runs when its deployed config does not match the caller.
type ToolApprovalKind ¶
type ToolApprovalKind string
ToolApprovalKind classifies what the user is approving (same event type for Stream).
const (
	// ToolApprovalKindTool is a normal tool execution (default when Kind is empty for older payloads).
	ToolApprovalKindTool ToolApprovalKind = "tool"

	// ToolApprovalKindDelegation is approval to run a registered sub-agent (delegate).
	ToolApprovalKindDelegation ToolApprovalKind = "delegation"
)
type ToolCallEvent ¶
type ToolCallStatus ¶
type ToolCallStatus string
const (
	ToolCallStatusPending   ToolCallStatus = "pending"
	ToolCallStatusRunning   ToolCallStatus = "running"
	ToolCallStatusCompleted ToolCallStatus = "completed"
	ToolCallStatusDenied    ToolCallStatus = "denied"
	ToolCallStatusFailed    ToolCallStatus = "failed"
)
type ToolSpec ¶ added in v0.1.2
type ToolSpec struct {
Name string `json:"name"`
Description string `json:"description"`
Parameters JSONSchema `json:"parameters"`
}
ToolSpec is the schema sent to the LLM for tool selection. Convert from Tool via ToolToSpec.