Documentation ¶
Overview ¶
Package mock provides test doubles for external services: mock servers used in E2E testing.
Index ¶
- type AGNTCYServer
- func (s *AGNTCYServer) GetRegistrations() map[string]*AgentRegistration
- func (s *AGNTCYServer) MetricsReceived() int64
- func (s *AGNTCYServer) Start(addr string) error
- func (s *AGNTCYServer) Stats() map[string]any
- func (s *AGNTCYServer) Stop() error
- func (s *AGNTCYServer) TracesReceived() int64
- func (s *AGNTCYServer) URL() string
- type AgentRegistration
- type ChatCompletionRequest
- type ChatCompletionResponse
- type ChatMessage
- type Choice
- type FunctionCall
- type FunctionDef
- type OpenAIServer
- func (s *OpenAIServer) Addr() string
- func (s *OpenAIServer) LastRequest() *ChatCompletionRequest
- func (s *OpenAIServer) RequestCount() int
- func (s *OpenAIServer) ResetSequence()
- func (s *OpenAIServer) SequenceIndex() int
- func (s *OpenAIServer) Start(addr string) error
- func (s *OpenAIServer) Stop() error
- func (s *OpenAIServer) URL() string
- func (s *OpenAIServer) WithCompletionContent(content string) *OpenAIServer
- func (s *OpenAIServer) WithRequestDelay(d time.Duration) *OpenAIServer
- func (s *OpenAIServer) WithResponseSequence(responses []string) *OpenAIServer
- func (s *OpenAIServer) WithRoleResponses(resps []RoleResponse) *OpenAIServer
- func (s *OpenAIServer) WithRoleToolCallSequence(calls []RoleToolCall) *OpenAIServer
- func (s *OpenAIServer) WithSubmitAfter(n int) *OpenAIServer
- func (s *OpenAIServer) WithToolArgs(toolName, argsJSON string) *OpenAIServer
- type RoleResponse
- type RoleToolCall
- type Tool
- type ToolCall
- type Usage
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AGNTCYServer ¶
type AGNTCYServer struct {
// contains filtered or unexported fields
}
AGNTCYServer provides mock endpoints for AGNTCY integration testing. It handles:
- Directory registration (/v1/agents/register, /v1/agents/heartbeat)
- OTEL HTTP traces (/v1/traces)
- Health checks (/health)
func NewAGNTCYServer ¶
func NewAGNTCYServer() *AGNTCYServer
NewAGNTCYServer creates a new mock AGNTCY server.
func (*AGNTCYServer) GetRegistrations ¶
func (s *AGNTCYServer) GetRegistrations() map[string]*AgentRegistration
GetRegistrations returns all agent registrations.
func (*AGNTCYServer) MetricsReceived ¶
func (s *AGNTCYServer) MetricsReceived() int64
MetricsReceived returns the number of metric exports received.
func (*AGNTCYServer) Start ¶
func (s *AGNTCYServer) Start(addr string) error
Start starts the server on the given address.
func (*AGNTCYServer) Stats ¶
func (s *AGNTCYServer) Stats() map[string]any
Stats returns server statistics.
func (*AGNTCYServer) Stop ¶
func (s *AGNTCYServer) Stop() error
Stop stops the mock server gracefully.
func (*AGNTCYServer) TracesReceived ¶
func (s *AGNTCYServer) TracesReceived() int64
TracesReceived returns the number of trace exports received.
func (*AGNTCYServer) URL ¶
func (s *AGNTCYServer) URL() string
URL returns the base URL for the server.
type AgentRegistration ¶
type AgentRegistration struct {
AgentID string `json:"agent_id"`
OASFRecord map[string]any `json:"oasf_record"`
RegisteredAt time.Time `json:"registered_at"`
LastHeartbeat time.Time `json:"last_heartbeat"`
TTL string `json:"ttl"`
}
AgentRegistration represents a registered agent.
type ChatCompletionRequest ¶
type ChatCompletionRequest struct {
Model string `json:"model"`
Messages []ChatMessage `json:"messages"`
Tools []Tool `json:"tools,omitempty"`
MaxTokens int `json:"max_tokens,omitempty"`
Temperature float32 `json:"temperature,omitempty"`
Stream bool `json:"stream,omitempty"`
}
ChatCompletionRequest matches OpenAI API request format.
type ChatCompletionResponse ¶
type ChatCompletionResponse struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
Model string `json:"model"`
Choices []Choice `json:"choices"`
Usage Usage `json:"usage"`
}
ChatCompletionResponse matches OpenAI API response format.
type ChatMessage ¶
type ChatMessage struct {
Role string `json:"role"`
Content string `json:"content,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
}
ChatMessage matches OpenAI API message format.
type Choice ¶
type Choice struct {
Index int `json:"index"`
Message ChatMessage `json:"message"`
FinishReason string `json:"finish_reason"`
}
Choice matches OpenAI API choice format.
type FunctionCall ¶
type FunctionCall struct {
	Name      string `json:"name"`
	Arguments string `json:"arguments"`
}
FunctionCall matches OpenAI API function call format.
type FunctionDef ¶
type FunctionDef struct {
Name string `json:"name"`
Description string `json:"description"`
Parameters any `json:"parameters"`
}
FunctionDef matches OpenAI API function definition.
type OpenAIServer ¶
type OpenAIServer struct {
// contains filtered or unexported fields
}
OpenAIServer is a mock OpenAI-compatible server for testing.
func NewOpenAIServer ¶
func NewOpenAIServer() *OpenAIServer
NewOpenAIServer creates a new mock OpenAI server.
func (*OpenAIServer) Addr ¶
func (s *OpenAIServer) Addr() string
Addr returns the address the server is listening on.
func (*OpenAIServer) LastRequest ¶
func (s *OpenAIServer) LastRequest() *ChatCompletionRequest
LastRequest returns the last request received (for assertions).
func (*OpenAIServer) RequestCount ¶
func (s *OpenAIServer) RequestCount() int
RequestCount returns the number of requests received.
func (*OpenAIServer) ResetSequence ¶
func (s *OpenAIServer) ResetSequence()
ResetSequence resets the response sequence to the beginning.
func (*OpenAIServer) SequenceIndex ¶
func (s *OpenAIServer) SequenceIndex() int
SequenceIndex returns the current position in the response sequence.
func (*OpenAIServer) Start ¶
func (s *OpenAIServer) Start(addr string) error
Start starts the mock server on the given address. If addr is empty or ":0", a random available port is used.
func (*OpenAIServer) Stop ¶
func (s *OpenAIServer) Stop() error
Stop stops the mock server gracefully.
func (*OpenAIServer) URL ¶
func (s *OpenAIServer) URL() string
URL returns the base URL for the server.
func (*OpenAIServer) WithCompletionContent ¶
func (s *OpenAIServer) WithCompletionContent(content string) *OpenAIServer
WithCompletionContent configures the content returned on completion.
func (*OpenAIServer) WithRequestDelay ¶
func (s *OpenAIServer) WithRequestDelay(d time.Duration) *OpenAIServer
WithRequestDelay configures an artificial delay per request.
func (*OpenAIServer) WithResponseSequence ¶
func (s *OpenAIServer) WithResponseSequence(responses []string) *OpenAIServer
WithResponseSequence configures a sequence of completion contents. Each call to the chat completion endpoint will return the next response in the sequence. After the sequence is exhausted, it returns the last response.
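The sticky-sequence rule above can be sketched as a pure function: return the i-th response for the i-th call, and keep returning the last element once the sequence is exhausted. This helper is hypothetical, for illustration only.

```go
package main

import "fmt"

// nextInSequence returns seq[call], clamped to the final element once
// the sequence is exhausted (the "sticky" behaviour the doc describes).
func nextInSequence(seq []string, call int) string {
	if len(seq) == 0 {
		return ""
	}
	if call >= len(seq) {
		call = len(seq) - 1
	}
	return seq[call]
}

func main() {
	seq := []string{"plan", "act", "done"}
	for i := 0; i < 5; i++ {
		fmt.Println(nextInSequence(seq, i)) // plan, act, done, done, done
	}
}
```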
func (*OpenAIServer) WithRoleResponses ¶
func (s *OpenAIServer) WithRoleResponses(resps []RoleResponse) *OpenAIServer
WithRoleResponses configures prompt-content-based routing for the completion body. The mock scans the system+user messages in the incoming request for each marker in order; the first match wins. When no marker matches (or when roleResponses is empty) the server falls back to the response sequence, then to the default completion content. The tool-call turn is unaffected.
func (*OpenAIServer) WithRoleToolCallSequence ¶
func (s *OpenAIServer) WithRoleToolCallSequence(calls []RoleToolCall) *OpenAIServer
WithRoleToolCallSequence scripts a sequence of tool calls to return when specific role markers match incoming requests. Each scenario sees matches in declaration order — the cursor advances on each fire, and after the sequence is exhausted the final entry sticks. When a request matches an entry but does not advertise the named tool (e.g. coordinator prompt matches but the request's tools list has no "decide"), the mock falls through to its normal completion/tool-call behaviour so the scenario author gets a deterministic failure rather than a surprise tool call.
func (*OpenAIServer) WithSubmitAfter ¶
func (s *OpenAIServer) WithSubmitAfter(n int) *OpenAIServer
WithSubmitAfter configures the mock to emit a submit_work tool call after n tool rounds have completed, provided submit_work is advertised in the request's tool list. n=0 disables the behaviour (default). n=1 submits after the first tool round; use a higher value to exercise multi-tool flows before completion. Requires a registered submit_work executor on the agentic-tools side to actually terminate the loop — without one the call surfaces as a "tool not found" error.
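The trigger rule can be expressed as a small predicate: fire only when n > 0, at least n tool rounds have completed, and submit_work is advertised. The helper below is a hypothetical sketch of that rule, not the mock's actual code.

```go
package main

import "fmt"

// shouldSubmit reports whether a submit_work tool call should be emitted:
// n == 0 disables the behaviour, fewer than n completed rounds is too
// early, and the request must advertise the submit_work tool.
func shouldSubmit(n, completedRounds int, advertised []string) bool {
	if n == 0 || completedRounds < n {
		return false
	}
	for _, tool := range advertised {
		if tool == "submit_work" {
			return true
		}
	}
	return false
}

func main() {
	tools := []string{"search", "submit_work"}
	fmt.Println(shouldSubmit(1, 0, tools)) // false: no round completed yet
	fmt.Println(shouldSubmit(1, 1, tools)) // true: first round done
}
```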
func (*OpenAIServer) WithToolArgs ¶
func (s *OpenAIServer) WithToolArgs(toolName, argsJSON string) *OpenAIServer
WithToolArgs configures the arguments returned for a specific tool.
type RoleResponse ¶
type RoleResponse struct {
// Marker is a substring searched for in system+user message content.
// Match is case-sensitive; keep markers specific enough to avoid overlap
// between roles.
Marker string
// Content is the completion body returned when Marker matches.
Content string
}
RoleResponse pairs a prompt-content marker with the completion body the mock should return when that marker is present. Matching runs against concatenated system and user message content in the incoming request, in declaration order — first marker to match wins. Use this to give different agent roles (researcher, synthesizer, etc.) distinct deterministic outputs without coupling the mock to the full prompt.
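The first-match-wins routing described here can be sketched with a case-sensitive substring scan over concatenated message content. roleResponse and matchRole below are hypothetical illustrations, not the package's implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// roleResponse mirrors the RoleResponse shape for illustration.
type roleResponse struct {
	Marker  string
	Content string
}

// matchRole joins the system+user message content and returns the
// Content of the first Marker found as a case-sensitive substring,
// falling back when nothing matches.
func matchRole(messages []string, resps []roleResponse, fallback string) string {
	haystack := strings.Join(messages, "\n")
	for _, rr := range resps {
		if strings.Contains(haystack, rr.Marker) {
			return rr.Content
		}
	}
	return fallback
}

func main() {
	resps := []roleResponse{
		{Marker: "You are the researcher", Content: "found three sources"},
		{Marker: "You are the synthesizer", Content: "combined summary"},
	}
	msgs := []string{"You are the synthesizer for this task.", "Summarise the findings."}
	fmt.Println(matchRole(msgs, resps, "default completion")) // combined summary
}
```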
type RoleToolCall ¶
type RoleToolCall struct {
// Marker is a substring searched for in system+user message content.
Marker string
// ToolName is the function name to call.
ToolName string
// Args is serialised to JSON and placed on ToolCall.Function.Arguments.
// Must be non-nil: a nil map marshals to JSON null, whereas an empty map
// marshals to "{}".
Args map[string]any
}
RoleToolCall pairs a prompt-content marker with a tool call the mock should emit instead of completion content. Matching runs against concatenated system and user message content in the incoming request. Use this to force a coordinator-style structured output (e.g. a specific decide() call) when the scenario needs determinism at the tool-call layer, not just completion-text layer.
Sequence semantics: WithRoleToolCallSequence advances a cursor each time a marker match fires, so callers can script "first coordinator call → fan_out; second coordinator call → synthesize" as a single slice. When the cursor exceeds the slice length, subsequent matches return the last entry (sticky behaviour, same as WithResponseSequence).
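The cursor semantics can be sketched as a small stateful type: each match consumes the next scripted entry, and the final entry sticks once the script runs out. toolCursor is a hypothetical illustration of that rule only.

```go
package main

import "fmt"

// toolCursor scripts a sequence of tool names: next() advances through
// the slice and repeats the last entry once exhausted (sticky behaviour,
// matching WithResponseSequence).
type toolCursor struct {
	calls []string
	pos   int
}

func (c *toolCursor) next() string {
	if len(c.calls) == 0 {
		return ""
	}
	i := c.pos
	if i >= len(c.calls) {
		i = len(c.calls) - 1 // exhausted: final entry sticks
	} else {
		c.pos++
	}
	return c.calls[i]
}

func main() {
	c := &toolCursor{calls: []string{"fan_out", "synthesize"}}
	fmt.Println(c.next(), c.next(), c.next()) // fan_out synthesize synthesize
}
```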
type Tool ¶
type Tool struct {
Type string `json:"type"`
Function FunctionDef `json:"function"`
}
Tool matches OpenAI API tool format.
type ToolCall ¶
type ToolCall struct {
ID string `json:"id"`
Type string `json:"type"`
Function FunctionCall `json:"function"`
}
ToolCall matches OpenAI API tool call format.