Documentation
Overview ¶
Package testutil provides shared test helpers, mocks, and utilities for Talon tests.
Index ¶
- Constants
- func NewOpenAICompatibleServer(content string, inputTokens, outputTokens int) *httptest.Server
- func WriteBlockOnPIIPolicyFile(t *testing.T, dir, name string, blockOnPII bool) string
- func WriteInputOutputRedactPolicyFile(t *testing.T, dir, name string, redactInput, redactOutput bool) string
- func WriteInputRedactWithAuditPolicyFile(t *testing.T, dir, name string, includeOriginalPrompts bool) string
- func WriteOutputScanPolicyFile(t *testing.T, dir, name string, redactPII, blockOnPII bool) string
- func WriteStrictPolicyFile(t *testing.T, dir, name string) string
- func WriteTestPolicyFile(t *testing.T, dir, name string) string
- type CapturingMockProvider
- func (c *CapturingMockProvider) Generate(ctx context.Context, req *llm.Request) (*llm.Response, error)
- func (c *CapturingMockProvider) GetLastPrompt() string
- type MockProvider
- func (m *MockProvider) EstimateCost(_ string, _, _ int) float64
- func (m *MockProvider) Generate(_ context.Context, req *llm.Request) (*llm.Response, error)
- func (m *MockProvider) HealthCheck(_ context.Context) error
- func (m *MockProvider) Metadata() llm.ProviderMetadata
- func (m *MockProvider) Name() string
- func (m *MockProvider) Stream(_ context.Context, _ *llm.Request, _ chan<- llm.StreamChunk) error
- func (m *MockProvider) ValidateConfig() error
- func (m *MockProvider) WithHTTPClient(_ *http.Client) llm.Provider
- type OpenAICompatibleResponse
- type ToolCallMockProvider
- func (p *ToolCallMockProvider) EstimateCost(_ string, _, _ int) float64
- func (p *ToolCallMockProvider) Generate(_ context.Context, req *llm.Request) (*llm.Response, error)
- func (p *ToolCallMockProvider) HealthCheck(_ context.Context) error
- func (p *ToolCallMockProvider) Metadata() llm.ProviderMetadata
- func (p *ToolCallMockProvider) Name() string
- func (p *ToolCallMockProvider) Stream(_ context.Context, _ *llm.Request, _ chan<- llm.StreamChunk) error
- func (p *ToolCallMockProvider) ValidateConfig() error
- func (p *ToolCallMockProvider) WithHTTPClient(_ *http.Client) llm.Provider
Constants ¶
const (
	TestEncryptionKey = "12345678901234567890123456789012"
	TestSigningKey    = "test-signing-key-1234567890123456"
)
Signing and encryption keys for use in tests only. Each is 32 bytes, as required for AES-256 / HMAC key material.
Variables ¶
This section is empty.
Functions ¶
func NewOpenAICompatibleServer ¶
func NewOpenAICompatibleServer(content string, inputTokens, outputTokens int) *httptest.Server
NewOpenAICompatibleServer starts an httptest.Server that responds to POST /v1/chat/completions with a minimal valid OpenAI-style JSON response. Content is the assistant message body; inputTokens/outputTokens set usage. Caller must call server.Close() or register t.Cleanup(server.Close).
func WriteBlockOnPIIPolicyFile ¶
func WriteBlockOnPIIPolicyFile(t *testing.T, dir, name string, blockOnPII bool) string
WriteBlockOnPIIPolicyFile creates a minimal valid .talon.yaml with data_classification (input_scan and block_on_pii). Cost limits are set high so that, when blockOnPII is true, a policy denial can only come from block_on_pii.
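The generated file presumably has a shape along these lines. Only the keys named in the doc comment (data_classification, input_scan, block_on_pii) are taken from this package; the surrounding structure and the limits key are assumptions for illustration:

```
# Hypothetical shape of the generated .talon.yaml; only the
# data_classification keys below are documented, the rest is assumed.
limits:
  max_cost_per_request: 1000.0   # deliberately high so cost never denies
data_classification:
  input_scan: true
  block_on_pii: true             # false when blockOnPII is false
```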
func WriteInputOutputRedactPolicyFile ¶
func WriteInputOutputRedactPolicyFile(t *testing.T, dir, name string, redactInput, redactOutput bool) string
WriteInputOutputRedactPolicyFile creates a .talon.yaml that enables input_scan + output_scan and uses the granular redact_input / redact_output fields for controlling PII redaction direction.
func WriteInputRedactWithAuditPolicyFile ¶
func WriteInputRedactWithAuditPolicyFile(t *testing.T, dir, name string, includeOriginalPrompts bool) string
WriteInputRedactWithAuditPolicyFile creates a .talon.yaml with input redaction enabled and audit prompt logging configured. Used to test that the prompt version store respects GDPR Art. 5(1)(c) data minimization by storing the redacted (not original) prompt.
func WriteOutputScanPolicyFile ¶
func WriteOutputScanPolicyFile(t *testing.T, dir, name string, redactPII, blockOnPII bool) string
WriteOutputScanPolicyFile creates a .talon.yaml with data_classification that enables output_scan and optionally redact_pii and block_on_pii for output PII enforcement tests.
func WriteStrictPolicyFile ¶
func WriteStrictPolicyFile(t *testing.T, dir, name string) string
WriteStrictPolicyFile creates a .talon.yaml that denies high-cost requests.
func WriteTestPolicyFile ¶
func WriteTestPolicyFile(t *testing.T, dir, name string) string
Types ¶
type CapturingMockProvider ¶
type CapturingMockProvider struct {
	MockProvider
	LastPrompt string
	// contains filtered or unexported fields
}
CapturingMockProvider is like MockProvider but records the last prompt it received.
func (*CapturingMockProvider) Generate ¶
func (c *CapturingMockProvider) Generate(ctx context.Context, req *llm.Request) (*llm.Response, error)
Generate records the last user-role message prompt and delegates to MockProvider.
func (*CapturingMockProvider) GetLastPrompt ¶
func (c *CapturingMockProvider) GetLastPrompt() string
GetLastPrompt returns the last captured user prompt (thread-safe).
type MockProvider ¶
type MockProvider struct {
	ProviderName string // provider identifier, e.g. "openai"
	Content      string // canned response; empty = "mock response from " + ProviderName
	Err          error  // if set, Generate returns this error
}
MockProvider implements llm.Provider for tests without live API calls. When Content is empty, Generate returns "mock response from " + ProviderName; otherwise it returns Content. Set Err to simulate LLM errors.
func (*MockProvider) EstimateCost ¶
func (m *MockProvider) EstimateCost(_ string, _, _ int) float64
EstimateCost returns a fixed cost for tests.
func (*MockProvider) Generate ¶
func (m *MockProvider) Generate(_ context.Context, req *llm.Request) (*llm.Response, error)
Generate returns Content (or "mock response from " + ProviderName when Content is empty), or Err if Err is set.
func (*MockProvider) HealthCheck ¶
func (m *MockProvider) HealthCheck(_ context.Context) error
HealthCheck always succeeds for tests.
func (*MockProvider) Metadata ¶
func (m *MockProvider) Metadata() llm.ProviderMetadata
Metadata returns minimal metadata for tests.
func (*MockProvider) Name ¶
func (m *MockProvider) Name() string
Name returns the provider identifier (implements llm.Provider).
func (*MockProvider) Stream ¶
func (m *MockProvider) Stream(_ context.Context, _ *llm.Request, _ chan<- llm.StreamChunk) error
Stream is not implemented; returns llm.ErrNotImplemented.
func (*MockProvider) ValidateConfig ¶
func (m *MockProvider) ValidateConfig() error
ValidateConfig always succeeds for tests.
func (*MockProvider) WithHTTPClient ¶
func (m *MockProvider) WithHTTPClient(_ *http.Client) llm.Provider
WithHTTPClient returns the receiver unchanged (tests do not need client injection).
type OpenAICompatibleResponse ¶
type OpenAICompatibleResponse struct {
	ID      string `json:"id"`
	Object  string `json:"object"`
	Model   string `json:"model"`
	Choices []struct {
		Message struct {
			Role    string `json:"role"`
			Content string `json:"content"`
		} `json:"message"`
		FinishReason string `json:"finish_reason"`
	} `json:"choices"`
	Usage struct {
		PromptTokens     int `json:"prompt_tokens"`
		CompletionTokens int `json:"completion_tokens"`
		TotalTokens      int `json:"total_tokens"`
	} `json:"usage"`
}
OpenAICompatibleResponse is the minimal chat completions response for tests.
type ToolCallMockProvider ¶
type ToolCallMockProvider struct {
	Responses           []*llm.Response // sequence of responses; call N gets Responses[N], or the last if N >= len
	CallCount           int             // incremented on each Generate call
	ReceivedMessages    [][]llm.Message // messages from each Generate call, for assertions
	EstimateCostPerCall float64         // cost returned by EstimateCost (default 0.001)
	ErrOnCall           int             // 1-based; when CallCount == ErrOnCall, Generate returns (nil, Err). 0 = never
	Err                 error           // error to return when ErrOnCall is hit
	// contains filtered or unexported fields
}
ToolCallMockProvider implements llm.Provider for testing the agentic loop. It returns a configurable sequence of responses (e.g. tool calls then final answer), tracks call count and received messages for assertions, and Name() returns "openai" so the runner's agentic loop is active. Set ErrOnCall (1-based) and Err to make Generate return an error on that call (e.g. mid-loop failure).
func (*ToolCallMockProvider) EstimateCost ¶
func (p *ToolCallMockProvider) EstimateCost(_ string, _, _ int) float64
EstimateCost returns the configured per-call cost for tests.
func (*ToolCallMockProvider) Generate ¶
func (p *ToolCallMockProvider) Generate(_ context.Context, req *llm.Request) (*llm.Response, error)
Generate returns the next response in the sequence and records the request.
func (*ToolCallMockProvider) HealthCheck ¶
func (p *ToolCallMockProvider) HealthCheck(_ context.Context) error
HealthCheck always succeeds for tests.
func (*ToolCallMockProvider) Metadata ¶
func (p *ToolCallMockProvider) Metadata() llm.ProviderMetadata
Metadata returns minimal metadata for tests.
func (*ToolCallMockProvider) Name ¶
func (p *ToolCallMockProvider) Name() string
Name returns "openai" so the agentic loop is used in tests.
func (*ToolCallMockProvider) Stream ¶
func (p *ToolCallMockProvider) Stream(_ context.Context, _ *llm.Request, _ chan<- llm.StreamChunk) error
Stream is not implemented; returns llm.ErrNotImplemented.
func (*ToolCallMockProvider) ValidateConfig ¶
func (p *ToolCallMockProvider) ValidateConfig() error
ValidateConfig always succeeds for tests.
func (*ToolCallMockProvider) WithHTTPClient ¶
func (p *ToolCallMockProvider) WithHTTPClient(_ *http.Client) llm.Provider
WithHTTPClient returns the receiver unchanged.