googleexecutor

package v0.0.0-...-640d511
Published: Feb 18, 2026 License: Apache-2.0 Imports: 16 Imported by: 0

Documentation

Overview

Package googleexecutor provides a generic Google AI (Gemini) executor for AI agents.

This package implements a reusable pattern for Google AI-based agents, handling:

  • Prompt template rendering
  • Chat session management
  • Tool/function calling
  • Response parsing and extraction
  • Trace management for evaluation

Architecture

The executor follows a generic design pattern where Request and Response types are parameterized, allowing different agents to reuse the same core logic:

type MyRequest struct {
    Input string
}

type MyResponse struct {
    Output string
}

executor, err := googleexecutor.New[*MyRequest, *MyResponse](
    client,
    promptTemplate,
    googleexecutor.WithModel[*MyRequest, *MyResponse]("gemini-2.5-flash"),
)

Tool Support

The executor supports Google AI function calling through the Metadata type:

tools := map[string]googletool.Metadata[*MyResponse]{
    "my_tool": {
        Definition: &genai.FunctionDeclaration{
            Name:        "my_tool",
            Description: "Tool description",
            Parameters: &genai.Schema{...},
        },
        Handler: func(ctx context.Context, call *genai.FunctionCall, trace *agenttrace.Trace[*MyResponse]) *genai.FunctionResponse {
            // Tool implementation: inspect call.Args, do the work, and
            // return a structured result for the model.
            return &genai.FunctionResponse{
                Name:     call.Name,
                Response: map[string]any{"result": "ok"},
            }
        },
    },
}

response, err := executor.Execute(ctx, request, tools)

Options

The executor supports a range of configuration options, which can be combined as shown in the sketch after this list:

  • WithModel: Set the Gemini model to use
  • WithTemperature: Control response randomness (0.0-2.0)
  • WithMaxOutputTokens: Set maximum response length
  • WithSystemInstructions: Provide system-level instructions
  • WithResponseMIMEType: Set response format (e.g., "application/json")
  • WithResponseSchema: Define structured output schema
  • WithThinking: Enable thinking mode with a token budget
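
These options compose. A minimal sketch combining several of them; systemPrompt and schema are assumed to be a *promptbuilder.Prompt and a *genai.Schema prepared elsewhere:

executor, err := googleexecutor.New[*Request, *Response](
    client,
    prompt,
    googleexecutor.WithModel[*Request, *Response]("gemini-2.5-flash"),
    googleexecutor.WithSystemInstructions[*Request, *Response](systemPrompt),
    googleexecutor.WithMaxOutputTokens[*Request, *Response](8192),
    googleexecutor.WithResponseSchema[*Request, *Response](schema),
)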

Thinking Mode

Thinking mode allows Gemini to show its internal reasoning process. When enabled, thought blocks are captured in the trace:

executor, err := googleexecutor.New[*Request, *Response](
    client,
    prompt,
    googleexecutor.WithThinking[*Request, *Response](2048), // 2048 token budget for thinking
)

Reasoning blocks are stored in trace.Reasoning as []agenttrace.ReasoningContent, where each block contains:

  • Thinking: the reasoning text
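
A tool handler receives the trace, so it can inspect the reasoning captured so far. A minimal sketch following the handler shape from the Tool Support section (the logging and response body are illustrative):

Handler: func(ctx context.Context, call *genai.FunctionCall, trace *agenttrace.Trace[*MyResponse]) *genai.FunctionResponse {
    for _, block := range trace.Reasoning {
        log.Printf("model reasoning: %s", block.Thinking)
    }
    return &genai.FunctionResponse{
        Name:     call.Name,
        Response: map[string]any{"ok": true},
    }
},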

Integration with Evaluation

The executor automatically integrates with the evals package for tracing:

  • Creates traces for each execution
  • Records tool calls and responses
  • Tracks bad tool calls for debugging
  • Provides complete execution history

Error Handling

The executor provides error handling at every stage of execution; a generic handling sketch follows the list:

  • Template rendering errors
  • Chat creation failures
  • Malformed function calls (with automatic retry)
  • Response parsing errors
  • Tool execution errors
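
The package does not document exported error types, so callers typically wrap and surface the error rather than matching on a concrete type; a minimal sketch:

response, err := executor.Execute(ctx, request, tools)
if err != nil {
    // Could stem from template rendering, chat creation, response
    // parsing, or a tool failure; wrap with context and propagate.
    return fmt.Errorf("googleexecutor execute: %w", err)
}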

Usage Example

// Create client
client, err := genai.NewClient(ctx, &genai.ClientConfig{
    Project:  projectID,
    Location: region,
    Backend:  genai.BackendVertexAI,
})

// Parse template
tmpl := template.Must(template.New("prompt").Parse("Analyze: {{.Input}}"))

// Wrap the template in a *promptbuilder.Prompt, which New expects
// (construction depends on the promptbuilder package)
prompt := ... // *promptbuilder.Prompt built from tmpl

// Create executor
executor, err := googleexecutor.New[*Request, *Response](
    client,
    prompt,
    googleexecutor.WithModel[*Request, *Response]("gemini-2.5-flash"),
    googleexecutor.WithTemperature[*Request, *Response](0.1),
    googleexecutor.WithResponseMIMEType[*Request, *Response]("application/json"),
)

// Execute
response, err := executor.Execute(ctx, request, nil)

Performance Considerations

  • Templates are executed for each request (consider pre-rendering if static)
  • Chat sessions are created per execution (not reused)
  • Tool responses are sent synchronously
  • Large response schemas may impact latency

Thread Safety

The executor is safe for concurrent use. Each Execute call creates its own chat session and maintains independent state.
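
A minimal concurrency sketch, assuming a slice of prepared requests (the requests and results variables are illustrative):

var wg sync.WaitGroup
results := make([]*Response, len(requests))
for i, req := range requests {
    wg.Add(1)
    go func(i int, req *Request) {
        defer wg.Done()
        resp, err := executor.Execute(ctx, req, nil)
        if err != nil {
            log.Printf("request %d failed: %v", i, err)
            return
        }
        results[i] = resp
    }(i, req)
}
wg.Wait()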

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Interface

type Interface[Request promptbuilder.Bindable, Response any] interface {
	// Execute runs the Google AI conversation with the given request and tools
	// Optional seed tool calls can be provided; these will be executed and their results prepended to the conversation
	Execute(ctx context.Context, request Request, tools map[string]googletool.Metadata[Response], seedToolCalls ...*genai.FunctionCall) (Response, error)
}

Interface defines the contract for Google AI executors
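
Seed tool calls let an agent prepend precomputed tool results to the conversation; a minimal sketch (the tool name and arguments are illustrative):

seed := &genai.FunctionCall{
    Name: "my_tool",
    Args: map[string]any{"input": "precomputed"},
}
response, err := executor.Execute(ctx, request, tools, seed)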

func New

func New[Request promptbuilder.Bindable, Response any](
	client *genai.Client,
	prompt *promptbuilder.Prompt,
	options ...Option[Request, Response],
) (Interface[Request, Response], error)

New creates a new Google AI executor with the given configuration

type Option

type Option[Request promptbuilder.Bindable, Response any] func(*executor[Request, Response]) error

Option is a functional option for configuring an executor

func WithAttributeEnricher

func WithAttributeEnricher[Request promptbuilder.Bindable, Response any](enricher metrics.AttributeEnricher) Option[Request, Response]

WithAttributeEnricher sets a custom attribute enricher for metrics. The enricher is called before recording each metric, allowing the application to add contextual attributes (e.g., repository, pull_request, package_version). If not provided, metrics include only the base attributes (model, tool).

func WithMaxOutputTokens

func WithMaxOutputTokens[Request promptbuilder.Bindable, Response any](tokens int32) Option[Request, Response]

WithMaxOutputTokens sets the maximum output tokens for generation

func WithModel

func WithModel[Request promptbuilder.Bindable, Response any](model string) Option[Request, Response]

WithModel sets the model to use for generation

func WithResourceLabels

func WithResourceLabels[Request promptbuilder.Bindable, Response any](labels map[string]string) Option[Request, Response]

WithResourceLabels sets labels that are sent with each Vertex AI API request. Automatically includes default labels from environment variables:

  • service_name: from K_SERVICE (defaults to "unknown")
  • product: from CHAINGUARD_PRODUCT (defaults to "unknown")
  • team: from CHAINGUARD_TEAM (defaults to "unknown")

Custom labels passed to this function will override defaults if they use the same keys.
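
A minimal sketch overriding one default label and adding a custom one (the label values are illustrative):

executor, err := googleexecutor.New[*Request, *Response](
    client,
    prompt,
    googleexecutor.WithResourceLabels[*Request, *Response](map[string]string{
        "service_name": "my-service", // overrides the K_SERVICE default
        "cost_center":  "ml-agents",  // custom label, illustrative
    }),
)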

func WithResponseMIMEType

func WithResponseMIMEType[Request promptbuilder.Bindable, Response any](mimeType string) Option[Request, Response]

WithResponseMIMEType sets the response MIME type (e.g., "application/json")

func WithResponseSchema

func WithResponseSchema[Request promptbuilder.Bindable, Response any](schema *genai.Schema) Option[Request, Response]

WithResponseSchema sets the response schema for structured output

func WithRetryConfig

func WithRetryConfig[Request promptbuilder.Bindable, Response any](cfg retry.RetryConfig) Option[Request, Response]

WithRetryConfig sets the retry configuration for handling transient Vertex AI errors. This is particularly useful for handling 429 RESOURCE_EXHAUSTED errors that occur when quota limits are hit. If not set, a default configuration is used.
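
A sketch of supplying a custom configuration; the retry.RetryConfig field name below is an assumption, since the retry package's definition is not shown here:

executor, err := googleexecutor.New[*Request, *Response](
    client,
    prompt,
    googleexecutor.WithRetryConfig[*Request, *Response](retry.RetryConfig{
        // MaxRetries is an assumed field name; consult the retry
        // package for the actual RetryConfig definition.
        MaxRetries: 5,
    }),
)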

func WithSubmitResultProvider

func WithSubmitResultProvider[Request promptbuilder.Bindable, Response any](provider SubmitResultProvider[Response]) Option[Request, Response]

WithSubmitResultProvider registers the submit_result tool using the supplied provider. This is opt-in: agents must explicitly use this option to enable submit_result.

func WithSystemInstructions

func WithSystemInstructions[Request promptbuilder.Bindable, Response any](prompt *promptbuilder.Prompt) Option[Request, Response]

WithSystemInstructions sets the system instructions for the model

func WithTemperature

func WithTemperature[Request promptbuilder.Bindable, Response any](temperature float32) Option[Request, Response]

WithTemperature sets the temperature for generation. Gemini models support temperature values from 0.0 to 2.0, a wider range than Claude (0.0-1.0) that allows for more creative outputs. Lower values (e.g., 0.1) produce more deterministic outputs; higher values (e.g., 1.5-2.0) produce very creative/random outputs.

func WithThinking

func WithThinking[Request promptbuilder.Bindable, Response any](budgetTokens int32) Option[Request, Response]

WithThinking enables thinking mode with the specified token budget. The budget parameter sets the maximum tokens the model can use for reasoning; the special value -1 enables dynamic thinking, where the model adjusts the budget based on task complexity. The budget must be less than max_output_tokens to leave room for the actual output. See https://ai.google.dev/gemini-api/docs/thinking.

type SubmitResultProvider

type SubmitResultProvider[Response any] func() (googletool.Metadata[Response], error)

SubmitResultProvider constructs tool metadata for submit_result.
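
A hedged sketch of a provider, reusing the Metadata shape from the Tool Support section (the declaration and handler body are illustrative):

provider := func() (googletool.Metadata[*MyResponse], error) {
    return googletool.Metadata[*MyResponse]{
        Definition: &genai.FunctionDeclaration{
            Name:        "submit_result",
            Description: "Submit the final structured result",
        },
        Handler: func(ctx context.Context, call *genai.FunctionCall, trace *agenttrace.Trace[*MyResponse]) *genai.FunctionResponse {
            // Decode call.Args into the agent's response type here.
            return &genai.FunctionResponse{
                Name:     call.Name,
                Response: map[string]any{"accepted": true},
            }
        },
    }, nil
}

executor, err := googleexecutor.New[*MyRequest, *MyResponse](
    client,
    prompt,
    googleexecutor.WithSubmitResultProvider[*MyRequest, *MyResponse](provider),
)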
