expert

package
v0.1.8-rc.22
Warning

This package is not in the latest version of its module.

Published: Mar 29, 2026 License: Apache-2.0 Imports: 29 Imported by: 0

Documentation

Overview

Package expert provides the LLM "expert" abstraction used by Genie's orchestrator and ReAcTree execution engine.

It wraps trpc-agent-go's runner and tool execution behind a single interface (Expert) with a named persona (ExpertBio), so that different agents (e.g. a front-desk classifier, a codeowner, or a report writer) can be invoked through the same pattern. Each expert has its own tools, model config, and session; the orchestrator routes user requests to the appropriate expert and coordinates multi-step workflows (e.g. ReAcTree stages) via Expert.Run.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func IsTransientError

func IsTransientError(err error) bool

IsTransientError returns true if the error looks like a transient upstream LLM provider error (503 / rate-limit / overloaded) that is worth retrying.

func NewPIIModelCallbacks

func NewPIIModelCallbacks() *model.Callbacks

NewPIIModelCallbacks creates model.Callbacks that redact PII from user and tool-result messages before they reach the LLM, and rehydrate the original values in the response so the end-user sees unmasked output.

Types

type EventErrorTranslator

type EventErrorTranslator interface {
	TranslateEventError(ctx context.Context, message string) (Response, bool)
}

EventErrorTranslator translates streamed event error messages into a user-facing assistant response.

This exists so host applications can inject domain-specific handling for tool/runtime errors (for example, policy denials) without hard-coding those semantics into the expert event loop.

func NonRetriableToolErrorTranslator

func NonRetriableToolErrorTranslator() EventErrorTranslator

NonRetriableToolErrorTranslator returns an EventErrorTranslator that converts toolwrap non-retriable error markers into a final assistant response.

This translator exists so non-retriable tool failures can stop execution immediately and surface a human-readable reason instead of letting the LLM continue from stale context.

type EventErrorTranslatorFunc

type EventErrorTranslatorFunc func(ctx context.Context, message string) (Response, bool)

EventErrorTranslatorFunc adapts a function to EventErrorTranslator.

func (EventErrorTranslatorFunc) TranslateEventError

func (f EventErrorTranslatorFunc) TranslateEventError(ctx context.Context, message string) (Response, bool)

TranslateEventError invokes f.

type Expert

type Expert interface {
	Do(ctx context.Context, req Request) (Response, error)
	GetBio() ExpertBio
}

type ExpertBio

type ExpertBio struct {
	Name        string
	Description string
	Personality string
	Tools       []tool.Tool
}

func (ExpertBio) ToExpert

func (e ExpertBio) ToExpert(
	ctx context.Context,
	modelProvider modelprovider.ModelProvider,
	auditor audit.Auditor,
	toolwrapSvc *toolwrap.Service,
	expertOpts ...ExpertOption,
) (_ Expert, err error)

type ExpertConfig

type ExpertConfig struct {
	// MaxLLMCalls is the maximum number of LLM calls per invocation (default: 15)
	MaxLLMCalls int
	// MaxHistoryRuns is the maximum number of history messages to include (default: 5)
	MaxHistoryRuns int
	// DisableParallelTools disables parallel tool execution (default: false)
	DisableParallelTools bool
	// ReasoningContentMode
	ReasoningContentMode string
	// PersonaTokenThreshold is the warning limit for the system prompt size (default: 2000)
	PersonaTokenThreshold int
	// Silent disables emitting events to the TUI, useful for background tasks (default: false)
	Silent bool
}

ExpertConfig contains configuration options for token optimization and agent behavior. This struct provides presets for different use cases to balance cost vs. capability.

func CostOptimizedConfig

func CostOptimizedConfig() ExpertConfig

CostOptimizedConfig returns config optimized for minimal token usage. Use this for simple tasks or when cost is the primary concern.

func DefaultExpertConfig

func DefaultExpertConfig() ExpertConfig

DefaultExpertConfig returns sensible defaults for token optimization. These settings balance cost efficiency with agent capability for typical IaC generation tasks.

func HighPerformanceConfig

func HighPerformanceConfig() ExpertConfig

HighPerformanceConfig returns config optimized for complex tasks requiring more iterations. Use this for architectures with many components or complex dependencies.
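The three presets trade cost against capability mainly through the iteration and history limits. A sketch of that relationship, using a local struct: the default values follow the documented field comments (15 calls, 5 history runs, 2000-token threshold), while the cost-optimized and high-performance numbers are purely illustrative, since this page does not show them.

```go
package main

import "fmt"

// config mirrors the documented ExpertConfig knobs that the presets vary.
type config struct {
	MaxLLMCalls           int
	MaxHistoryRuns        int
	PersonaTokenThreshold int
}

// defaultConfig uses the defaults documented on ExpertConfig's fields.
func defaultConfig() config {
	return config{MaxLLMCalls: 15, MaxHistoryRuns: 5, PersonaTokenThreshold: 2000}
}

// costOptimizedConfig tightens limits to spend fewer tokens (values illustrative).
func costOptimizedConfig() config {
	c := defaultConfig()
	c.MaxLLMCalls = 8
	c.MaxHistoryRuns = 2
	return c
}

// highPerformanceConfig allows more iterations for complex tasks (values illustrative).
func highPerformanceConfig() config {
	c := defaultConfig()
	c.MaxLLMCalls = 30
	return c
}

func main() {
	fmt.Println(defaultConfig().MaxLLMCalls, costOptimizedConfig().MaxLLMCalls, highPerformanceConfig().MaxLLMCalls)
}
```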

type ExpertOption

type ExpertOption func(*expert)

ExpertOption configures optional expert behavior.
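ExpertOption follows Go's functional-options pattern: each option is a closure that mutates the (unexported) expert under construction. A self-contained sketch of the idiom with illustrative fields, not the package's actual internals:

```go
package main

import "fmt"

// fakeExpert stands in for the package's unexported expert struct.
type fakeExpert struct {
	silent     bool
	translator string
}

type option func(*fakeExpert)

// withSilent mirrors a flag-style option like ExpertConfig.Silent.
func withSilent() option {
	return func(e *fakeExpert) { e.silent = true }
}

// withTranslator mirrors a dependency-injection option like WithEventErrorTranslator.
func withTranslator(name string) option {
	return func(e *fakeExpert) { e.translator = name }
}

// newFakeExpert applies options in order over a zero-value default.
func newFakeExpert(opts ...option) *fakeExpert {
	e := &fakeExpert{}
	for _, opt := range opts {
		opt(e)
	}
	return e
}

func main() {
	e := newFakeExpert(withSilent(), withTranslator("nonRetriable"))
	fmt.Println(e.silent, e.translator) // true nonRetriable
}
```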

func WithEventErrorTranslator

func WithEventErrorTranslator(translator EventErrorTranslator) ExpertOption

WithEventErrorTranslator registers a streamed event-error translator.

This exists so callers can plug domain-specific mapping from runtime/tool errors to final assistant responses without modifying the expert loop.

func WithExpertSessionService

func WithExpertSessionService(svc session.Service) ExpertOption

WithExpertSessionService injects a session.Service for persistent conversation history. When set, the expert uses this service instead of creating a new inmemory.SessionService per runner.

func WithTestRunner

func WithTestRunner(r runner.Runner) ExpertOption

WithTestRunner injects a runner for testing. When set, getRunner returns it instead of creating a new one, so tests can stub the event stream (e.g. emit an error event and assert Do() returns the translated response).

type Request

type Request struct {
	Message         string
	AdditionalTools []tool.Tool
	// ChoiceProcessor is called for each choice as it is generated
	ChoiceProcessor func(choices ...model.Choice) `json:"-"`
	// TaskType selects the model-provider task type to use for this request
	TaskType modelprovider.TaskType

	Mode ExpertConfig

	// Temperature overrides the default LLM sampling temperature (0.3).
	// Use nil to keep the default. Set explicitly for models that only accept
	// specific values — for example, OpenAI reasoning models (o1, o3) require
	// temperature=1.
	Temperature *float64

	// WorkingMemory is an optional shared memory used to cache file-read tool results.
	// When set, ToolWrapper will automatically cache results from read_file, list_file,
	// and read_multiple_files, preventing redundant reads within the same session.
	WorkingMemory *rtmemory.WorkingMemory

	// Attachments holds file/media attachments from the incoming message.
	// Image attachments with LocalPath are added as multimodal visual content.
	// Audio attachments are added as audio content parts (with OGG→WAV conversion).
	// Video attachments are embedded via the File API with explicit MIME type.
	// Other attachments (PDF, DOCX) are described textually or embedded as files.
	Attachments []messenger.Attachment
}
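The *float64 on Temperature uses the common nil-means-default convention. A minimal sketch of how a consumer might resolve it, assuming the documented 0.3 default (the helper name is hypothetical, not part of this package):

```go
package main

import "fmt"

// effectiveTemperature resolves Request.Temperature's nil-pointer convention:
// nil keeps the documented default of 0.3, a non-nil value overrides it.
func effectiveTemperature(override *float64) float64 {
	if override == nil {
		return 0.3
	}
	return *override
}

func main() {
	fmt.Println(effectiveTemperature(nil)) // 0.3

	// e.g. OpenAI reasoning models (o1, o3) require temperature=1
	one := 1.0
	fmt.Println(effectiveTemperature(&one)) // 1
}
```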

type Response

type Response struct {
	Choices []model.Choice
	Usage   *model.Usage
}

func HandleExpertError

func HandleExpertError(ctx context.Context, err error) (Response, error)

HandleExpertError inspects errors returned from the expert runner. It converts internal errors (context cancellation, LLM limits) into user-friendly messages instead of leaking raw Go error strings.

Directories

Path	Synopsis
modelproviderfakes	Code generated by counterfeiter.
