agentbridge

package module
v0.1.4
Published: Apr 29, 2026 License: MIT Imports: 10 Imported by: 0

README

agentbridge

A unified Go library for driving LLM agent CLIs as subprocess backends.

Instead of implementing LLM API clients, agentbridge drives each vendor's official CLI tool as a subprocess and exposes a single Backend interface. The caller assembles the prompt; agentbridge translates a RunRequest into the correct CLI invocation and parses the output back into a RunResult.

Supported Backends

Provider | CLI
codex    | codex
claude   | claude
gemini   | gemini
kimi     | kimi CLI
opencode | opencode

All backends support thread resumption via ThreadID; token usage is reported when the underlying CLI exposes it.

Installation

go get github.com/Alice-space/agentbridge

Zero external dependencies. Only the Go standard library is required.

Quick Start

package main

import (
    "context"
    "fmt"

    agentbridge "github.com/Alice-space/agentbridge"
)

func main() {
    backend, err := agentbridge.NewBackend(agentbridge.FactoryConfig{
        Provider: agentbridge.ProviderClaude,
        Claude: agentbridge.ClaudeConfig{
            Command: "claude",
        },
    })
    if err != nil {
        panic(err)
    }

    result, err := backend.Run(context.Background(), agentbridge.RunRequest{
        UserText: "What is 2 + 2?",
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(result.Reply)
}

Multi-backend Routing

Route requests to different CLI backends based on RunRequest.Provider:

multi, err := agentbridge.NewMultiBackend("codex", map[string]agentbridge.Backend{
    "codex":  codexBackend,
    "claude": claudeBackend,
    "gemini": geminiBackend,
})

result, err := multi.Run(ctx, agentbridge.RunRequest{
    Provider: "claude",
    UserText: "Hello!",
})

Thread Resumption

All backends support resuming a previous conversation via ThreadID:

// First turn
result, err := backend.Run(ctx, agentbridge.RunRequest{
    UserText: "Start a new task",
})

// Second turn — resume the same session
result2, err := backend.Run(ctx, agentbridge.RunRequest{
    ThreadID: result.NextThreadID,
    UserText: "Continue from where you left off",
})

Streaming Progress

Receive intermediate messages during a long-running codex session:

result, err := backend.Run(ctx, agentbridge.RunRequest{
    UserText: "Refactor this file",
    OnProgress: func(step string) {
        if strings.HasPrefix(step, "[file_change] ") {
            fmt.Println("File changed:", strings.TrimPrefix(step, "[file_change] "))
        } else {
            fmt.Println("Agent:", step)
        }
    },
})

Design

The library does not assemble prompts. RunRequest.UserText is passed directly to the CLI. The caller is responsible for constructing the final prompt (system instructions, reply tokens, etc.) before calling Run.

No logging. The library returns errors and lets callers decide how to log them.

Provider-specific flags (model, sandbox policy, reasoning effort, personality) are mapped to the appropriate CLI arguments by each backend.

Configuration Reference

CodexConfig
agentbridge.CodexConfig{
    Command:            "codex",          // CLI binary name or path
    Timeout:            10 * time.Minute, // overall execution timeout
    DefaultIdleTimeout: 15 * time.Minute, // idle timeout (default reasoning)
    HighIdleTimeout:    30 * time.Minute, // idle timeout for high reasoning
    XHighIdleTimeout:   60 * time.Minute, // idle timeout for xhigh reasoning
    Model:              "o4-mini",
    ReasoningEffort:    "medium",
    Env:                map[string]string{"MY_KEY": "value"},
    WorkspaceDir:       "/path/to/project",
    DefaultExecPolicy: agentbridge.ExecPolicyConfig{
        Sandbox:        "workspace-write",
        AskForApproval: "never",
    },
    ProfileOverrides: map[string]agentbridge.ProfileRunnerConfig{
        "executor": {ReasoningEffort: "xhigh"},
    },
}
ClaudeConfig / GeminiConfig / KimiConfig

GeminiConfig and KimiConfig share the same field set as ClaudeConfig; substitute the matching command name.
agentbridge.ClaudeConfig{
    Command:      "claude",
    Timeout:      10 * time.Minute,
    Env:          map[string]string{},
    WorkspaceDir: "/path/to/project",
    ProfileOverrides: map[string]agentbridge.ProfileRunnerConfig{
        "fast": {Command: "claude-fast"},
    },
}

License

MIT

Documentation

Overview

Package agentbridge provides a unified interface for running LLM agent CLIs (claude, codex, gemini, kimi, etc.) as subprocess backends.

Instead of implementing LLM API clients, agentbridge drives each vendor's official CLI tool as a subprocess and exposes a single Backend interface. Prompts are assembled by the caller and passed as UserText; the library is responsible only for translating a RunRequest into the correct CLI invocation and parsing the CLI's stdout/stderr back into a RunResult.

Index

Constants

const (
	ProviderCodex    = "codex"
	ProviderClaude   = "claude"
	ProviderGemini   = "gemini"
	ProviderKimi     = "kimi"
	ProviderOpenCode = "opencode"
)

Variables

This section is empty.

Functions

This section is empty.

Types

type Backend

type Backend interface {
	Run(ctx context.Context, req RunRequest) (RunResult, error)
}

Backend runs a single LLM agent CLI.

func NewBackend

func NewBackend(cfg FactoryConfig) (Backend, error)

NewBackend is a convenience wrapper around NewProvider that returns the Backend directly.

type ClaudeConfig

type ClaudeConfig struct {
	Command      string
	Timeout      time.Duration
	Env          map[string]string
	WorkspaceDir string
	// ProfileOverrides maps profile name → per-profile runner overrides.
	ProfileOverrides map[string]ProfileRunnerConfig
}

ClaudeConfig configures the claude CLI backend.

type CodexConfig

type CodexConfig struct {
	Command            string
	Timeout            time.Duration
	DefaultIdleTimeout time.Duration
	HighIdleTimeout    time.Duration
	XHighIdleTimeout   time.Duration
	Model              string
	ReasoningEffort    string
	Env                map[string]string
	WorkspaceDir       string
	DefaultExecPolicy  ExecPolicyConfig
	// ProfileOverrides maps profile name → per-profile runner overrides.
	ProfileOverrides map[string]ProfileRunnerConfig
}

CodexConfig configures the codex CLI backend.

type ExecPolicyConfig

type ExecPolicyConfig struct {
	Sandbox        string
	AskForApproval string
	AddDirs        []string
}

ExecPolicyConfig controls sandbox and approval settings for backends that support them (currently codex only).

type FactoryConfig

type FactoryConfig struct {
	Provider string
	Codex    CodexConfig
	Claude   ClaudeConfig
	Gemini   GeminiConfig
	Kimi     KimiConfig
	OpenCode OpenCodeConfig
}

FactoryConfig holds configuration for all supported backends. Only the fields relevant to the selected Provider need to be populated.

type GeminiConfig

type GeminiConfig struct {
	Command      string
	Timeout      time.Duration
	Env          map[string]string
	WorkspaceDir string
	// ProfileOverrides maps profile name → per-profile runner overrides.
	ProfileOverrides map[string]ProfileRunnerConfig
}

GeminiConfig configures the gemini CLI backend.

type KimiConfig

type KimiConfig struct {
	Command      string
	Timeout      time.Duration
	Env          map[string]string
	WorkspaceDir string
	// ProfileOverrides maps profile name → per-profile runner overrides.
	ProfileOverrides map[string]ProfileRunnerConfig
}

KimiConfig configures the kimi CLI backend.

type MultiBackend

type MultiBackend struct {
	// contains filtered or unexported fields
}

MultiBackend routes Run calls to one of several backends based on RunRequest.Provider, falling back to the default provider when unset.

func NewMultiBackend

func NewMultiBackend(defaultProvider string, backends map[string]Backend) (*MultiBackend, error)

NewMultiBackend constructs a MultiBackend. At least one backend must be provided. When defaultProvider is empty and exactly one backend is registered, that backend becomes the default.

func (*MultiBackend) Run

func (m *MultiBackend) Run(ctx context.Context, req RunRequest) (RunResult, error)

Run dispatches to the backend for req.Provider, or the default backend when req.Provider is empty.

type OpenCodeConfig

type OpenCodeConfig struct {
	Command      string
	Timeout      time.Duration
	Model        string
	Variant      string
	Env          map[string]string
	WorkspaceDir string
	// ProfileOverrides maps profile name → per-profile runner overrides.
	ProfileOverrides map[string]ProfileRunnerConfig
}

OpenCodeConfig configures the opencode CLI backend.

type ProfileRunnerConfig

type ProfileRunnerConfig struct {
	Command         string
	Timeout         time.Duration
	ProviderProfile string
	ExecPolicy      ExecPolicyConfig
}

ProfileRunnerConfig holds per-profile runner overrides. Backend configs map profile names (e.g. "executor", "reviewer") to these overrides via their ProfileOverrides fields.

type ProgressFunc

type ProgressFunc func(step string)

ProgressFunc is called with intermediate agent messages while a run is in progress. Code-editing backends may also emit file-change notifications prefixed with "[file_change] ".

type Provider

type Provider interface {
	Backend() Backend
}

Provider wraps a Backend and is returned by NewProvider.

func NewProvider

func NewProvider(cfg FactoryConfig) (Provider, error)

NewProvider constructs a Provider for the backend specified by cfg.Provider. When cfg.Provider is empty, codex is selected.

type RawEvent

type RawEvent struct {
	// Kind identifies the event category:
	//   "stdout_line" – every raw stdout JSON-line from the CLI
	//   "reasoning"   – a reasoning/thinking block (codex)
	//   "tool_call"   – a command_execution item (codex)
	//   "tool_use"    – a tool/tool_use block in an assistant message (claude, kimi, opencode)
	Kind string
	// Line is the original JSON-lines string from the CLI stdout (always set).
	Line string
	// Detail is a human-readable parsed summary (set for reasoning, tool_call,
	// and tool_use; empty for stdout_line).
	Detail string
}

RawEvent is a raw backend-internal event emitted before higher-level filtering. It is delivered via RunRequest.OnRawEvent when non-nil.

type RawEventFunc

type RawEventFunc func(event RawEvent)

RawEventFunc is called with each raw backend-internal event. It is optional; nil disables raw event delivery.

type RunRequest

type RunRequest struct {
	// ThreadID resumes an existing session when non-empty.
	ThreadID string
	// AgentName is informational metadata; not passed to the CLI.
	AgentName string
	// UserText is the fully assembled prompt sent to the CLI.
	UserText string
	// Scene is caller-defined metadata; not passed to the CLI.
	Scene string
	// Provider selects which backend to use when running through a MultiBackend.
	Provider string
	// Model overrides the default model for this request.
	Model string
	// Variant overrides the default variant for this request (e.g. "max", "high", "minimal").
	Variant string
	// Profile selects a named configuration profile defined in the backend config.
	Profile string
	// ReasoningEffort is forwarded to backends that support it (codex).
	ReasoningEffort string
	// Personality is forwarded to backends that accept it as a CLI flag (codex).
	Personality string
	// WorkspaceDir overrides the working directory for this request.
	WorkspaceDir string
	// ExecPolicy overrides sandbox/approval settings for this request (codex).
	ExecPolicy ExecPolicyConfig
	// Env is merged over the process environment before spawning the CLI.
	Env map[string]string
	// OnProgress receives streaming progress updates during execution.
	OnProgress ProgressFunc
	// OnRawEvent optionally receives low-level backend-internal events before
	// any higher-level filtering is applied. See RawEvent for the delivered
	// kinds. Nil disables raw event delivery.
	OnRawEvent RawEventFunc
}

RunRequest is the input to Backend.Run. The caller is responsible for assembling the final prompt in UserText; the library does not perform any prompt templating.

type RunResult

type RunResult struct {
	// Reply is the final assistant message produced by the CLI.
	Reply string
	// NextThreadID is the session/thread ID to pass as ThreadID on the next
	// call to continue the conversation.
	NextThreadID string
	// Usage contains token counts if the backend reported them.
	Usage Usage
}

RunResult is the output from Backend.Run.

type Usage

type Usage struct {
	InputTokens       int64
	CachedInputTokens int64
	OutputTokens      int64
}

Usage holds token consumption reported by the CLI backend. Fields are zero when the backend does not expose usage information.

func (Usage) HasUsage

func (u Usage) HasUsage() bool

HasUsage reports whether any token counts were captured.

func (Usage) TotalTokens

func (u Usage) TotalTokens() int64

TotalTokens returns InputTokens + OutputTokens.

Directories

Path      Synopsis
claude    Package claude drives the claude CLI as a subprocess and parses its stream-json output into a plain text reply.
codex     Package codex drives the codex CLI as a subprocess and parses its JSON-lines output into a plain text reply with optional file-change events.
gemini    Package gemini drives the gemini CLI as a subprocess and parses its JSON output into a plain text reply.
internal
kimi      Package kimi drives the kimi CLI as a subprocess and parses its stream-json output into a plain text reply.
opencode  Package opencode drives the opencode CLI as a subprocess and parses its JSON-lines output into a plain text reply, session ID, and token usage.
