expert

package
v0.1.7 Latest
Warning

This package is not in the latest version of its module.

Published: Mar 10, 2026 License: Apache-2.0 Imports: 28 Imported by: 0

Documentation

Overview

Package expert provides the LLM "expert" abstraction used by Genie's orchestrator and ReAcTree execution engine.

It wraps trpc-agent-go's runner and tool execution behind a single interface (Expert) paired with a named persona (ExpertBio), so that different agents (e.g. a front-desk classifier, a codeowner, a report writer) can be invoked through the same pattern. Each expert has its own tools, model config, and session; the orchestrator routes user requests to the appropriate expert and coordinates multi-step workflows (e.g. ReAcTree stages) via Expert.Do.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func IsTransientError

func IsTransientError(err error) bool

IsTransientError returns true if the error looks like a transient upstream LLM provider error (503 / rate-limit / overloaded) that is worth retrying.

func NewPIIModelCallbacks

func NewPIIModelCallbacks() *model.Callbacks

NewPIIModelCallbacks creates model.Callbacks that redact PII from user and tool-result messages before they reach the LLM, and rehydrate the original values in the response so the end-user sees unmasked output.
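The redact-then-rehydrate flow can be sketched independently of trpc-agent-go's model.Callbacks wiring. Everything below is an illustrative assumption: the email regex, the `<PII_n>` placeholder format, and the two helpers are not the package's actual rules, only a minimal demonstration of the mask-and-restore idea:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// emailRe is one illustrative PII pattern; the real callbacks presumably
// cover more categories (assumption for this sketch).
var emailRe = regexp.MustCompile(`[\w.+-]+@[\w-]+\.[\w.]+`)

// redact replaces PII matches with stable placeholders and records the
// originals so the model's response can be rehydrated afterwards.
func redact(text string) (string, map[string]string) {
	originals := map[string]string{}
	i := 0
	masked := emailRe.ReplaceAllStringFunc(text, func(m string) string {
		i++
		ph := fmt.Sprintf("<PII_%d>", i)
		originals[ph] = m
		return ph
	})
	return masked, originals
}

// rehydrate restores the original values in the response text, so the
// end-user sees unmasked output even though the LLM never saw the PII.
func rehydrate(text string, originals map[string]string) string {
	for ph, orig := range originals {
		text = strings.ReplaceAll(text, ph, orig)
	}
	return text
}

func main() {
	masked, originals := redact("contact alice@example.com please")
	fmt.Println(masked)
	fmt.Println(rehydrate("I emailed <PII_1>.", originals))
}
```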

Types

type Expert

type Expert interface {
	Do(ctx context.Context, req Request) (Response, error)
	GetBio() ExpertBio
}

type ExpertBio

type ExpertBio struct {
	Name        string
	Description string
	Personality string
	Tools       []tool.Tool
}

func (ExpertBio) ToExpert

func (e ExpertBio) ToExpert(
	ctx context.Context,
	modelProvider modelprovider.ModelProvider,
	auditor audit.Auditor,
	toolwrapSvc *toolwrap.Service,
	expertOpts ...ExpertOption,
) (_ Expert, err error)

type ExpertConfig

type ExpertConfig struct {
	// MaxLLMCalls is the maximum number of LLM calls per invocation (default: 15)
	MaxLLMCalls int
	// MaxToolIterations is the maximum number of tool iterations per invocation (default: 12)
	MaxToolIterations int
	// MaxHistoryRuns is the maximum number of history messages to include (default: 5)
	MaxHistoryRuns int
	// DisableParallelTools disables parallel tool execution (default: false)
	DisableParallelTools bool
	// ReasoningContentMode
	ReasoningContentMode string
	// PersonaTokenThreshold is the warning limit for the system prompt size (default: 2000)
	PersonaTokenThreshold int
	// Silent disables emitting events to the TUI, useful for background tasks (default: false)
	Silent bool
}

ExpertConfig contains configuration options for token optimization and agent behavior. This struct provides presets for different use cases to balance cost vs. capability.

func CostOptimizedConfig

func CostOptimizedConfig() ExpertConfig

CostOptimizedConfig returns config optimized for minimal token usage. Use this for simple tasks or when cost is the primary concern.

func DefaultExpertConfig

func DefaultExpertConfig() ExpertConfig

DefaultExpertConfig returns sensible defaults for token optimization. These settings balance cost efficiency with agent capability for typical IaC generation tasks.

func HighPerformanceConfig

func HighPerformanceConfig() ExpertConfig

HighPerformanceConfig returns config optimized for complex tasks requiring more iterations. Use this for architectures with many components or complex dependencies.
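The preset pattern can be sketched with a local stand-in for ExpertConfig. The only numbers taken from this page are the documented defaults (15 LLM calls, 12 tool iterations, 5 history runs); the cost-optimized and high-performance values below are illustrative assumptions, not the package's actual presets:

```go
package main

import "fmt"

// ExpertConfig stand-in carrying the iteration-limit fields documented above.
type ExpertConfig struct {
	MaxLLMCalls       int
	MaxToolIterations int
	MaxHistoryRuns    int
}

// defaultExpertConfig mirrors the documented defaults.
func defaultExpertConfig() ExpertConfig {
	return ExpertConfig{MaxLLMCalls: 15, MaxToolIterations: 12, MaxHistoryRuns: 5}
}

// costOptimizedConfig trades capability for fewer tokens (illustrative numbers).
func costOptimizedConfig() ExpertConfig {
	c := defaultExpertConfig()
	c.MaxLLMCalls = 5
	c.MaxToolIterations = 4
	c.MaxHistoryRuns = 2
	return c
}

// highPerformanceConfig allows more iterations for complex architectures
// (illustrative numbers).
func highPerformanceConfig() ExpertConfig {
	c := defaultExpertConfig()
	c.MaxLLMCalls = 30
	c.MaxToolIterations = 24
	return c
}

func main() {
	fmt.Printf("%+v\n", defaultExpertConfig())
	fmt.Printf("%+v\n", costOptimizedConfig())
	fmt.Printf("%+v\n", highPerformanceConfig())
}
```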

type ExpertOption

type ExpertOption func(*expert)

ExpertOption configures optional expert behavior.

func WithExpertSessionService

func WithExpertSessionService(svc session.Service) ExpertOption

WithExpertSessionService injects a session.Service for persistent conversation history. When set, the expert uses this service instead of creating a new inmemory.SessionService per runner.

type Request

type Request struct {
	Message         string
	AdditionalTools []tool.Tool
	// ChoiceProcessor processes each choice as it is generated
	ChoiceProcessor func(choices ...model.Choice) `json:"-"`
	// TaskType selects which model-provider task type to use
	TaskType modelprovider.TaskType

	Mode ExpertConfig

	// WorkingMemory is an optional shared memory used to cache file-read tool results.
	// When set, ToolWrapper will automatically cache results from read_file, list_file,
	// and read_multiple_files, preventing redundant reads within the same session.
	WorkingMemory *rtmemory.WorkingMemory

	// Attachments holds file/media attachments from the incoming message.
	// Image attachments with LocalPath are added as multimodal visual content.
	// Audio attachments are added as audio content parts (with OGG→WAV conversion).
	// Video attachments are embedded via the File API with explicit MIME type.
	// Other attachments (PDF, DOCX) are described textually or embedded as files.
	Attachments []messenger.Attachment
}
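ChoiceProcessor lets the caller observe choices as they are generated rather than waiting for the final Response. The sketch below uses local stand-ins for Request and model.Choice, and runStreaming is a hypothetical driver simulating an expert that emits choices incrementally:

```go
package main

import "fmt"

// Choice is a stand-in for model.Choice, reduced to its text.
type Choice struct{ Text string }

// Request stand-in carrying only the fields needed for the sketch.
type Request struct {
	Message         string
	ChoiceProcessor func(choices ...Choice)
}

// runStreaming simulates an expert emitting choices incrementally and
// invoking the processor for each one, as documented for ChoiceProcessor.
func runStreaming(req Request, chunks []string) {
	for _, c := range chunks {
		if req.ChoiceProcessor != nil {
			req.ChoiceProcessor(Choice{Text: c})
		}
	}
}

func main() {
	var seen []string
	req := Request{
		Message: "plan the deployment",
		ChoiceProcessor: func(choices ...Choice) {
			for _, c := range choices {
				seen = append(seen, c.Text)
			}
		},
	}
	runStreaming(req, []string{"step 1", "step 2"})
	fmt.Println(seen)
}
```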

type Response

type Response struct {
	Choices []model.Choice
	Usage   *model.Usage
}

func HandleExpertError

func HandleExpertError(ctx context.Context, err error) (Response, error)

HandleExpertError inspects errors returned from the expert runner. If the error is due to hitting the max tool iteration limit, it synthesizes a partial success response with an explanatory message. Otherwise, it returns the original error.

Directories

Path	Synopsis
modelproviderfakes	Code generated by counterfeiter.
