Documentation ¶
Overview ¶
Package claudeexecutor provides a generic executor for Claude-based agents that reduces boilerplate while maintaining flexibility for agent-specific logic.
The executor handles the common conversation loop pattern including:
- Prompt rendering from templates
- Message streaming and accumulation
- Tool call execution and response handling
- JSON response parsing
- Trace management for evaluation
Basic Usage ¶
Create an executor with a client and prompt template:
client := anthropic.NewClient(
	vertex.WithGoogleAuth(ctx, region, projectID),
)

tmpl, _ := template.New("prompt").Parse("Analyze: {{.Input}}")

exec, err := claudeexecutor.New[*Request, *Response](
	client,
	tmpl,
	claudeexecutor.WithModel[*Request, *Response]("claude-3-opus@20240229"),
	claudeexecutor.WithMaxTokens[*Request, *Response](16000),
)
if err != nil {
	return nil, err
}
// Define tools if needed
tools := map[string]claudetool.Metadata[*Response]{
	"read_file": {
		Definition: anthropic.ToolParam{
			Name:        "read_file",
			Description: anthropic.String("Read a file"),
			InputSchema: anthropic.ToolInputSchemaParam{
				Properties: map[string]interface{}{
					"path": map[string]interface{}{
						"type":        "string",
						"description": "File path",
					},
				},
				Required: []string{"path"},
			},
		},
		Handler: func(ctx context.Context, toolUse anthropic.ToolUseBlock, trace *agenttrace.Trace[*Response]) map[string]interface{} {
			// Tool implementation
			return map[string]interface{}{"content": "file contents"}
		},
	},
}

// Execute the agent
response, err := exec.Execute(ctx, request, tools)
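The executor parses the model's final JSON output into the Response type (see "JSON response parsing" above), so Response is typically a plain struct with JSON tags. A minimal, illustrative sketch; the field names are hypothetical and each agent defines its own shape (Request must additionally satisfy promptbuilder.Bindable, which is not shown here):

	// Response is a hypothetical result type for this example; the executor
	// unmarshals the model's final JSON output into it.
	type Response struct {
		Summary string   `json:"summary"`
		Issues  []string `json:"issues"`
	}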
Options ¶
The executor supports several configuration options:
- WithModel: Override the default model (defaults to claude-sonnet-4@20250514)
- WithMaxTokens: Set maximum response tokens (defaults to 8192, max 32000)
- WithTemperature: Set response temperature (defaults to 0.1)
- WithSystemInstructions: Provide system-level instructions
- WithThinking: Enable extended thinking mode with a token budget
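Options can be combined in a single New call. A minimal sketch, reusing the client and template from Basic Usage; anything not set falls back to the defaults listed above:

	exec, err := claudeexecutor.New[*Request, *Response](
		client,
		tmpl,
		claudeexecutor.WithTemperature[*Request, *Response](0.2),
		claudeexecutor.WithMaxTokens[*Request, *Response](16000),
	)
	if err != nil {
		return nil, err
	}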
Extended Thinking ¶
Extended thinking allows Claude to show its internal reasoning process before responding. When enabled, reasoning blocks are captured in the trace:
exec, err := claudeexecutor.New[*Request, *Response](
	client,
	prompt,
	claudeexecutor.WithThinking[*Request, *Response](2048), // 2048 token budget for thinking
)
Reasoning blocks are stored in trace.Reasoning as []agenttrace.ReasoningContent, where each block contains:
- Thinking: the reasoning text
Note: When thinking is enabled, temperature is automatically set to 1.0 as required by the Claude API. See: https://docs.claude.com/en/docs/build-with-claude/extended-thinking
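Tool handlers receive the typed trace and can inspect any reasoning blocks captured so far. A minimal sketch of such a handler, assuming the Reasoning slice is populated as described above and using log from the standard library:

	handler := func(ctx context.Context, toolUse anthropic.ToolUseBlock, trace *agenttrace.Trace[*Response]) map[string]interface{} {
		// Log any reasoning captured so far (illustrative only).
		for _, block := range trace.Reasoning {
			log.Printf("claude reasoning: %s", block.Thinking)
		}
		return map[string]interface{}{"content": "ok"}
	}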
Type Safety ¶
The executor is generic over Request and Response types, ensuring type safety throughout the conversation. The trace parameter in tool handlers is properly typed with the Response type.
Index ¶
- type Interface
- type Option
- func WithAttributeEnricher[Request promptbuilder.Bindable, Response any](enricher metrics.AttributeEnricher) Option[Request, Response]
- func WithMaxTokens[Request promptbuilder.Bindable, Response any](tokens int64) Option[Request, Response]
- func WithModel[Request promptbuilder.Bindable, Response any](model string) Option[Request, Response]
- func WithResourceLabels[Request promptbuilder.Bindable, Response any](labels map[string]string) Option[Request, Response]
- func WithRetryConfig[Request promptbuilder.Bindable, Response any](cfg retry.RetryConfig) Option[Request, Response]
- func WithSubmitResultProvider[Request promptbuilder.Bindable, Response any](provider SubmitResultProvider[Response]) Option[Request, Response]
- func WithSystemInstructions[Request promptbuilder.Bindable, Response any](prompt *promptbuilder.Prompt) Option[Request, Response]
- func WithTemperature[Request promptbuilder.Bindable, Response any](temp float64) Option[Request, Response]
- func WithThinking[Request promptbuilder.Bindable, Response any](budgetTokens int64) Option[Request, Response]
- type SubmitResultProvider
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Interface ¶
type Interface[Request promptbuilder.Bindable, Response any] interface {
	// Execute runs the agent conversation with the given request and tools.
	// Optional seed tool calls can be provided - these will be executed and
	// their results prepended to the conversation.
	Execute(ctx context.Context, request Request, tools map[string]claudetool.Metadata[Response], seedToolCalls ...anthropic.ToolUseBlock) (Response, error)
}
Interface is the public interface for Claude agent execution
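For example, a previously captured tool call can be supplied as a seed. A sketch, where seedCall is a hypothetical anthropic.ToolUseBlock obtained elsewhere:

	// seedCall is assumed to be an anthropic.ToolUseBlock captured earlier;
	// the executor runs it and prepends its result to the conversation.
	response, err := exec.Execute(ctx, request, tools, seedCall)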
type Option ¶
type Option[Request promptbuilder.Bindable, Response any] func(*executor[Request, Response]) error
Option is a functional option for configuring the executor
func WithAttributeEnricher ¶
func WithAttributeEnricher[Request promptbuilder.Bindable, Response any](enricher metrics.AttributeEnricher) Option[Request, Response]
WithAttributeEnricher sets a custom attribute enricher for metrics. The enricher is called before recording each metric, allowing the application to add contextual attributes (e.g., repository, pull_request, package_version). If not provided, metrics will only include base attributes (model, tool).
func WithMaxTokens ¶
func WithMaxTokens[Request promptbuilder.Bindable, Response any](tokens int64) Option[Request, Response]
WithMaxTokens sets the maximum tokens for responses
func WithModel ¶
func WithModel[Request promptbuilder.Bindable, Response any](model string) Option[Request, Response]
WithModel allows overriding the model name
func WithResourceLabels ¶
func WithResourceLabels[Request promptbuilder.Bindable, Response any](labels map[string]string) Option[Request, Response]
WithResourceLabels sets labels for GCP billing attribution when using Claude via Vertex AI. Automatically includes default labels from environment variables:
- service_name: from K_SERVICE (defaults to "unknown")
- product: from CHAINGUARD_PRODUCT (defaults to "unknown")
- team: from CHAINGUARD_TEAM (defaults to "unknown")
Custom labels passed to this function will override defaults if they use the same keys.
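A minimal sketch; the label keys and values shown are illustrative and merge with the defaults above:

	exec, err := claudeexecutor.New[*Request, *Response](
		client,
		tmpl,
		claudeexecutor.WithResourceLabels[*Request, *Response](map[string]string{
			"team":      "supply-chain", // overrides the CHAINGUARD_TEAM default
			"component": "analysis",     // additional custom label
		}),
	)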
func WithRetryConfig ¶
func WithRetryConfig[Request promptbuilder.Bindable, Response any](cfg retry.RetryConfig) Option[Request, Response]
WithRetryConfig sets the retry configuration for handling transient Claude API errors. This is particularly useful for handling 429 rate limit and 529 overloaded errors. If not set, a default configuration is used.
func WithSubmitResultProvider ¶
func WithSubmitResultProvider[Request promptbuilder.Bindable, Response any](provider SubmitResultProvider[Response]) Option[Request, Response]
WithSubmitResultProvider registers the submit_result tool using the supplied provider. This is opt-in - agents must explicitly call this to enable submit_result.
func WithSystemInstructions ¶
func WithSystemInstructions[Request promptbuilder.Bindable, Response any](prompt *promptbuilder.Prompt) Option[Request, Response]
WithSystemInstructions sets custom system instructions
func WithTemperature ¶
func WithTemperature[Request promptbuilder.Bindable, Response any](temp float64) Option[Request, Response]
WithTemperature sets the temperature for responses. Claude models support temperature values from 0.0 to 1.0. Lower values (e.g., 0.1) produce more deterministic outputs; higher values (e.g., 0.9) produce more creative/random outputs.
func WithThinking ¶
func WithThinking[Request promptbuilder.Bindable, Response any](budgetTokens int64) Option[Request, Response]
WithThinking enables extended thinking mode with the specified token budget. The budget_tokens parameter sets the maximum number of tokens Claude can use for reasoning. The budget must be less than max_tokens, and a budget of at least 1024 tokens is recommended.
type SubmitResultProvider ¶
type SubmitResultProvider[Response any] func() (claudetool.Metadata[Response], error)
SubmitResultProvider constructs tool metadata for submit_result.
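A minimal sketch of a provider, reusing the Metadata shape from Basic Usage; the tool schema and handler behavior are illustrative, and the actual submit_result contract is defined by the agent:

	provider := func() (claudetool.Metadata[*Response], error) {
		return claudetool.Metadata[*Response]{
			Definition: anthropic.ToolParam{
				Name:        "submit_result",
				Description: anthropic.String("Submit the final result"),
				InputSchema: anthropic.ToolInputSchemaParam{
					Properties: map[string]interface{}{
						"result": map[string]interface{}{"type": "string"},
					},
					Required: []string{"result"},
				},
			},
			Handler: func(ctx context.Context, toolUse anthropic.ToolUseBlock, trace *agenttrace.Trace[*Response]) map[string]interface{} {
				// Acknowledge the submission (illustrative).
				return map[string]interface{}{"status": "accepted"}
			},
		}, nil
	}

	exec, err := claudeexecutor.New[*Request, *Response](
		client,
		tmpl,
		claudeexecutor.WithSubmitResultProvider[*Request, *Response](provider),
	)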