claudeexecutor

package
v0.2.0 Latest
Warning

This package is not in the latest version of its module.

Published: Apr 2, 2026 License: Apache-2.0 Imports: 20 Imported by: 0

Documentation

Overview

Package claudeexecutor provides a generic executor for Claude-based agents that reduces boilerplate while maintaining flexibility for agent-specific logic.

The executor handles the common conversation loop pattern including:

  • Prompt rendering from templates
  • Message streaming and accumulation
  • Tool call execution and response handling
  • JSON response parsing
  • Trace management for evaluation

Basic Usage

Create an executor with a client and prompt template:

client := anthropic.NewClient(
    vertex.WithGoogleAuth(ctx, region, projectID, "https://www.googleapis.com/auth/cloud-platform"),
)

tmpl, _ := template.New("prompt").Parse("Analyze: {{.Input}}")

exec, err := claudeexecutor.New[*Request, *Response](
    client,
    tmpl,
    claudeexecutor.WithModel[*Request, *Response]("claude-3-opus@20240229"),
    claudeexecutor.WithMaxTokens[*Request, *Response](16000),
)
if err != nil {
    return nil, err
}

// Define tools if needed
tools := map[string]claudetool.Metadata[*Response]{
    "read_file": {
        Definition: anthropic.ToolParam{
            Name:        "read_file",
            Description: anthropic.String("Read a file"),
            InputSchema: anthropic.ToolInputSchemaParam{
                Properties: map[string]interface{}{
                    "path": map[string]interface{}{
                        "type": "string",
                        "description": "File path",
                    },
                },
                Required: []string{"path"},
            },
        },
        Handler: func(ctx context.Context, toolUse anthropic.ToolUseBlock, trace *agenttrace.Trace[*Response]) map[string]interface{} {
            // Tool implementation
            return map[string]interface{}{"content": "file contents"}
        },
    },
}

// Execute the agent
response, err := exec.Execute(ctx, request, tools)

Options

The executor supports several configuration options:

  • WithModel: Override the default model (defaults to claude-sonnet-4@20250514)
  • WithMaxTokens: Set maximum response tokens (defaults to 8192, max 32000)
  • WithTemperature: Set response temperature (defaults to 0.1)
  • WithSystemInstructions: Provide system-level instructions
  • WithThinking: Enable extended thinking mode with a token budget

Extended Thinking

Extended thinking allows Claude to show its internal reasoning process before responding. When enabled, reasoning blocks are captured in the trace:

exec, err := claudeexecutor.New[*Request, *Response](
    client,
    prompt,
    claudeexecutor.WithThinking[*Request, *Response](2048), // 2048 token budget for thinking
)

Reasoning blocks are stored in trace.Reasoning as []agenttrace.ReasoningContent, where each block contains:

  • Thinking: the reasoning text

Note: When thinking is enabled, temperature is automatically set to 1.0 as required by the Claude API. See: https://docs.claude.com/en/docs/build-with-claude/extended-thinking

Type Safety

The executor is generic over Request and Response types, ensuring type safety throughout the conversation. The trace parameter in tool handlers is properly typed with the Response type.
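As a sketch of what this buys you in practice (the agenttrace and anthropic import paths are assumptions inferred from the signatures shown above):

```go
// Response is an application-defined result type. Because the executor and
// trace are generic over it, a handler written for one agent's Response
// cannot be registered with an executor that returns a different type.
type Response struct {
	Summary string `json:"summary"`
}

// readFile matches the Handler signature from the Basic Usage example; the
// trace parameter is typed *agenttrace.Trace[*Response], so any bookkeeping
// recorded here is checked against the same Response the executor returns.
func readFile(ctx context.Context, toolUse anthropic.ToolUseBlock, trace *agenttrace.Trace[*Response]) map[string]interface{} {
	return map[string]interface{}{"content": "file contents"}
}
```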

Index

Examples

Constants

const DefaultMaxTurns = 50

DefaultMaxTurns is the default maximum number of conversation turns (LLM round-trips) before the executor aborts. Each turn corresponds to one Claude API call. This prevents runaway loops when the model keeps calling tools without converging on a result.

Variables

This section is empty.

Functions

This section is empty.

Types

type Interface

type Interface[Request promptbuilder.Bindable, Response any] interface {
	// Execute runs the agent conversation with the given request and tools
	// Optional seed tool calls can be provided; these are executed and their results prepended to the conversation
	Execute(ctx context.Context, request Request, tools map[string]claudetool.Metadata[Response], seedToolCalls ...anthropic.ToolUseBlock) (Response, error)
}

Interface is the public interface for Claude agent execution
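A sketch of seeding a conversation with a pre-issued tool call. The ToolUseBlock field names (ID, Name, Input) are assumptions based on the anthropic-sdk-go response types and should be checked against the SDK version in use:

```go
// Seed the conversation: this tool call is executed up front and its
// result is prepended before the first model turn.
seed := anthropic.ToolUseBlock{
	ID:    "toolu_seed_1", // hypothetical tool-use ID
	Name:  "read_file",    // must match a key in the tools map
	Input: json.RawMessage(`{"path": "README.md"}`),
}
response, err := exec.Execute(ctx, request, tools, seed)
```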

func New

func New[Request promptbuilder.Bindable, Response any](
	client anthropic.Client,
	prompt *promptbuilder.Prompt,
	opts ...Option[Request, Response],
) (Interface[Request, Response], error)

New creates a new Executor with minimal required configuration

type Option

type Option[Request promptbuilder.Bindable, Response any] func(*executor[Request, Response]) error

Option is a functional option for configuring the executor

func WithMaxTokens

func WithMaxTokens[Request promptbuilder.Bindable, Response any](tokens int64) Option[Request, Response]

WithMaxTokens sets the maximum tokens for responses

Example

ExampleWithMaxTokens demonstrates configuring the maximum number of tokens the executor may generate per response.

package main

import (
	"fmt"

	"chainguard.dev/driftlessaf/agents/executor/claudeexecutor"
	"chainguard.dev/driftlessaf/agents/promptbuilder"
)

func main() {
	opt := claudeexecutor.WithMaxTokens[promptbuilder.Noop, *struct{}](16000)
	fmt.Printf("option is nil: %v\n", opt == nil)
}
Output:
option is nil: false

func WithMaxTurns added in v0.2.0

func WithMaxTurns[Request promptbuilder.Bindable, Response any](turns int) Option[Request, Response]

WithMaxTurns sets the maximum number of conversation turns (LLM round-trips) before the executor aborts. This prevents runaway loops where the model keeps calling tools without converging on a result. Default is DefaultMaxTurns (50).

func WithModel

func WithModel[Request promptbuilder.Bindable, Response any](model string) Option[Request, Response]

WithModel allows overriding the model name

Example

ExampleWithModel demonstrates configuring the Claude model used by the executor.

package main

import (
	"fmt"

	"chainguard.dev/driftlessaf/agents/executor/claudeexecutor"
	"chainguard.dev/driftlessaf/agents/promptbuilder"
)

func main() {
	opt := claudeexecutor.WithModel[promptbuilder.Noop, *struct{}]("claude-3-opus@20240229")
	fmt.Printf("option is nil: %v\n", opt == nil)
}
Output:
option is nil: false

func WithResourceLabels

func WithResourceLabels[Request promptbuilder.Bindable, Response any](labels map[string]string) Option[Request, Response]

WithResourceLabels sets labels for GCP billing attribution when using Claude via Vertex AI. Automatically includes default labels from environment variables:

  • service_name: from K_SERVICE (defaults to "unknown")
  • product: from CHAINGUARD_PRODUCT (defaults to "unknown")
  • team: from CHAINGUARD_TEAM (defaults to "unknown")

Custom labels passed to this function will override defaults if they use the same keys.

func WithRetryConfig

func WithRetryConfig[Request promptbuilder.Bindable, Response any](cfg retry.RetryConfig) Option[Request, Response]

WithRetryConfig sets the retry configuration for handling transient Claude API errors. This is particularly useful for handling 429 rate limit and 529 overloaded errors. If not set, a default configuration is used.

func WithSubmitResultProvider

func WithSubmitResultProvider[Request promptbuilder.Bindable, Response any](provider SubmitResultProvider[Response]) Option[Request, Response]

WithSubmitResultProvider registers the submit_result tool using the supplied provider. This is opt-in: agents must explicitly call this to enable submit_result.
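A sketch of wiring a provider; the ToolParam fields mirror the Basic Usage example above, and the minimal schema shown is hypothetical:

```go
// provider builds the submit_result tool metadata on demand; returning an
// error lets construction fail cleanly if the schema cannot be assembled.
provider := func() (claudetool.Metadata[*Response], error) {
	return claudetool.Metadata[*Response]{
		Definition: anthropic.ToolParam{
			Name:        "submit_result",
			Description: anthropic.String("Submit the final structured result"),
		},
	}, nil
}

exec, err := claudeexecutor.New[*Request, *Response](
	client,
	prompt,
	claudeexecutor.WithSubmitResultProvider[*Request, *Response](provider),
)
```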

func WithSystemInstructions

func WithSystemInstructions[Request promptbuilder.Bindable, Response any](prompt *promptbuilder.Prompt) Option[Request, Response]

WithSystemInstructions sets custom system instructions

func WithTemperature

func WithTemperature[Request promptbuilder.Bindable, Response any](temp float64) Option[Request, Response]

WithTemperature sets the temperature for responses. Claude models support temperature values from 0.0 to 1.0: lower values (e.g., 0.1) produce more deterministic outputs, while higher values (e.g., 0.9) produce more creative, random outputs.

func WithThinking

func WithThinking[Request promptbuilder.Bindable, Response any](budgetTokens int64) Option[Request, Response]

WithThinking enables extended thinking mode with the specified token budget. The budget_tokens parameter sets the maximum number of tokens Claude may use for reasoning; it must be less than max_tokens, and a budget of at least 1024 tokens is recommended.

func WithoutCacheControl added in v0.2.0

func WithoutCacheControl[Request promptbuilder.Bindable, Response any]() Option[Request, Response]

WithoutCacheControl disables Anthropic prompt caching.

Prompt caching is enabled by default because it significantly reduces input token costs for multi-turn agentic workflows. The API caches the request prefix (tool definitions + system prompt) and serves it at 10% of the normal input token price on subsequent turns. The only cost is a 1.25x write premium on the first turn, which is amortized across all subsequent cache reads within the 5-min TTL.

You would only disable this if you have a single-turn agent that runs less than once every 5 minutes, where the 1.25x write cost would never be recouped. See: https://platform.claude.com/docs/en/build-with-claude/prompt-caching

type SubmitResultProvider

type SubmitResultProvider[Response any] func() (claudetool.Metadata[Response], error)

SubmitResultProvider constructs tool metadata for submit_result.
