auditlog

package
v0.0.0-...-5fe9b4e
This package is not in the latest version of its module.
Published: Apr 28, 2026 License: MIT Imports: 39 Imported by: 0

Documentation

Overview

Package auditlog provides audit logging for the AI gateway. It captures request/response metadata and stores it in configurable backends.

Index

Constants

const (
	CacheTypeExact    = "exact"
	CacheTypeSemantic = "semantic"

	AuthMethodAPIKey    = "api_key"
	AuthMethodMasterKey = "master_key"
	AuthMethodNoKey     = "no_key"
)
const (
	// MaxBodyCapture is the maximum size of request/response bodies to capture (1MB).
	// Prevents memory exhaustion from large payloads.
	MaxBodyCapture = 1024 * 1024

	// MaxContentCapture is the maximum size of accumulated streaming content (1MB).
	// Used by the stream observer to limit reconstructed response body size.
	MaxContentCapture = 1024 * 1024

	// BatchFlushThreshold is the number of entries that triggers an immediate flush.
	// When the batch reaches this size, it's written to storage without waiting for the timer.
	BatchFlushThreshold = 100

	// APIKeyHashPrefixLength is the number of hex characters from SHA256 hash.
	// 16 hex chars = 64 bits of entropy for identification without exposure.
	APIKeyHashPrefixLength = 16
)

Buffer and capture limits for audit logging.
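The APIKeyHashPrefixLength constant can be illustrated with a stdlib-only sketch. The digest derivation below is an assumption for illustration, not the package's code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// APIKeyHashPrefixLength mirrors the package constant: 16 hex characters
// (64 bits) of the SHA-256 digest identify a key without exposing it.
const APIKeyHashPrefixLength = 16

// hashPrefix returns the first APIKeyHashPrefixLength hex characters of
// the key's SHA-256 digest.
func hashPrefix(apiKey string) string {
	sum := sha256.Sum256([]byte(apiKey))
	return hex.EncodeToString(sum[:])[:APIKeyHashPrefixLength]
}

func main() {
	fmt.Println(hashPrefix("sk-example"))      // 16 hex characters
	fmt.Println(len(hashPrefix("sk-example"))) // 16
}
```

Truncating the hex digest rather than the raw bytes keeps the stored prefix printable and easy to index.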

const (
	// LogEntryKey is the context key for storing the log entry.
	LogEntryKey contextKey = "auditlog_entry"

	// LogEntryStreamingKey is the context key for marking a request as streaming.
	// When true, the middleware skips logging because the stream observer path
	// handles streaming audit logging.
	LogEntryStreamingKey contextKey = "auditlog_entry_streaming"
)
const CleanupInterval = 1 * time.Hour

CleanupInterval is how often the cleanup goroutine runs to delete old log entries.

Variables

var ErrPartialWrite = errors.New("partial write failure")

ErrPartialWrite indicates that a batch write only partially succeeded. Use errors.As with a *PartialWriteError to extract details about the failure.

var RedactedHeaders = []string{
	"authorization",
	"x-api-key",
	"cookie",
	"set-cookie",
	"x-auth-token",
	"x-access-token",
	"proxy-authorization",
	"x-gomodel-key",
}

RedactedHeaders contains headers that should be automatically redacted. Values are replaced with "[REDACTED]" to prevent leaking secrets.

Functions

func CaptureInternalJSONExchange

func CaptureInternalJSONExchange(
	entry *LogEntry,
	ctx context.Context,
	method,
	path string,
	requestBody,
	responseBody any,
	responseErr error,
	cfg Config,
)

CaptureInternalJSONExchange applies normal audit capture policy to an internal JSON request/response pair without requiring the caller to synthesize HTTP transport details in the server layer.

func EnrichEntry

func EnrichEntry(c *echo.Context, model, provider string)

EnrichEntry retrieves the log entry from the Echo context and records the model and provider supplied by the handler.

func EnrichEntryWithAuthKeyID

func EnrichEntryWithAuthKeyID(c *echo.Context, authKeyID string)

EnrichEntryWithAuthKeyID attaches the authenticated managed auth key id to the live audit entry.

func EnrichEntryWithAuthMethod

func EnrichEntryWithAuthMethod(c *echo.Context, method string)

EnrichEntryWithAuthMethod records which authentication mechanism was used for the request.

func EnrichEntryWithCacheType

func EnrichEntryWithCacheType(c *echo.Context, cacheType string)

EnrichEntryWithCacheType attaches cache-hit metadata to the live audit entry. The value is intentionally sourced directly from the cache middleware, not inferred from response headers after the fact.

func EnrichEntryWithCachedStreamResponse

func EnrichEntryWithCachedStreamResponse(c *echo.Context, path string, body []byte)

EnrichEntryWithCachedStreamResponse reconstructs the OpenAI-compatible response body for a cached SSE replay when audit body capture is enabled.

func EnrichEntryWithError

func EnrichEntryWithError(c *echo.Context, errorType, errorMessage string)

EnrichEntryWithError adds error information to the log entry.

func EnrichEntryWithFailover

func EnrichEntryWithFailover(c *echo.Context, targetModel string)

EnrichEntryWithFailover records the configured failover selector used for the live request when translated execution redirected away from the primary selector.

func EnrichEntryWithResolvedRoute

func EnrichEntryWithResolvedRoute(c *echo.Context, resolvedModel, providerType, providerName string)

EnrichEntryWithResolvedRoute attaches the final executed route to the live audit entry after execution resolved to a concrete provider/model.

func EnrichEntryWithStream

func EnrichEntryWithStream(c *echo.Context, stream bool)

EnrichEntryWithStream marks the log entry as a streaming request.

func EnrichEntryWithUserPath

func EnrichEntryWithUserPath(c *echo.Context, userPath string)

EnrichEntryWithUserPath attaches the effective user path to the live audit entry.

func EnrichEntryWithWorkflow

func EnrichEntryWithWorkflow(c *echo.Context, workflow *core.Workflow)

EnrichEntryWithWorkflow attaches workflow metadata to the live audit entry. This is preferred over resolution-only enrichment once workflow resolution has completed for the request.

func EnrichLogEntryWithFailover

func EnrichLogEntryWithFailover(entry *LogEntry, targetModel string)

EnrichLogEntryWithFailover attaches failover redirect metadata directly to an existing audit log entry.

func EnrichLogEntryWithRequestContext

func EnrichLogEntryWithRequestContext(entry *LogEntry, ctx context.Context)

EnrichLogEntryWithRequestContext attaches auth and effective user-path metadata from context directly to an existing log entry.

func EnrichLogEntryWithResolvedRoute

func EnrichLogEntryWithResolvedRoute(entry *LogEntry, resolvedModel, providerType, providerName string)

EnrichLogEntryWithResolvedRoute attaches the final executed route directly to an existing audit log entry.

func EnrichLogEntryWithWorkflow

func EnrichLogEntryWithWorkflow(entry *LogEntry, workflow *core.Workflow)

EnrichLogEntryWithWorkflow attaches workflow metadata directly to an existing log entry. Internal translated executors can use this without depending on Echo middleware state.

func IsEntryMarkedAsStreaming

func IsEntryMarkedAsStreaming(c interface{ Get(string) any }) bool

IsEntryMarkedAsStreaming reports whether the entry is marked as streaming.

func MarkEntryAsStreaming

func MarkEntryAsStreaming(c interface{ Set(string, any) }, isStreaming bool)

MarkEntryAsStreaming marks the entry as a streaming request so the middleware knows not to log it (the stream observer path will handle logging).

func Middleware

func Middleware(logger LoggerInterface) echo.MiddlewareFunc

Middleware creates an Echo middleware for audit logging. It captures request metadata at the start and response metadata at the end, then writes the log entry asynchronously.
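A hypothetical wiring sketch; the router setup and cfg value are assumptions from the surrounding server code, not part of this package's documented API:

```go
e := echo.New()

// New returns a NoopLogger-backed Result when logging is disabled,
// so the middleware can be registered unconditionally.
result, err := auditlog.New(ctx, cfg)
if err != nil {
	return err
}
defer result.Close()

e.Use(auditlog.Middleware(result.Logger))
```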

func PopulateRequestData

func PopulateRequestData(entry *LogEntry, req *http.Request, cfg Config)

PopulateRequestData copies the configured request capture fields into the log entry. Streaming handlers call this before creating the detached stream entry so request metadata is preserved even though the middleware finishes later.

func PopulateResponseData

func PopulateResponseData(entry *LogEntry, headers http.Header, body []byte, bodyTruncated bool, cfg Config)

PopulateResponseData copies the configured response capture fields into the log entry from already-buffered response bytes.

func PopulateResponseHeaders

func PopulateResponseHeaders(entry *LogEntry, headers http.Header)

PopulateResponseHeaders copies response headers into the log entry when header logging is enabled.

func RedactHeaders

func RedactHeaders(headers map[string]string) map[string]string

RedactHeaders redacts sensitive headers from a header map. The original map is not modified; a new map is returned.
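A stdlib-only sketch of the documented redaction contract (not the package's implementation; the header list is abbreviated):

```go
package main

import (
	"fmt"
	"strings"
)

// redactedHeaders mirrors part of the package's RedactedHeaders list.
var redactedHeaders = []string{"authorization", "x-api-key", "cookie"}

// redactHeaders returns a new map with sensitive values replaced by
// "[REDACTED]". The input map is left unmodified, matching the
// documented contract.
func redactHeaders(headers map[string]string) map[string]string {
	out := make(map[string]string, len(headers))
	for k, v := range headers {
		sensitive := false
		for _, h := range redactedHeaders {
			if strings.EqualFold(k, h) {
				sensitive = true
				break
			}
		}
		if sensitive {
			out[k] = "[REDACTED]"
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	in := map[string]string{"Authorization": "Bearer abc", "Accept": "json"}
	fmt.Println(redactHeaders(in)["Authorization"]) // [REDACTED]
	fmt.Println(in["Authorization"])                // Bearer abc (input unchanged)
}
```

Matching case-insensitively here is a sketch-level assumption; HTTP header names are case-insensitive, but the package's exact normalization is not documented above.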

func RunCleanupLoop

func RunCleanupLoop(stop <-chan struct{}, cleanupFn func())

RunCleanupLoop runs a cleanup function periodically until the stop channel is closed. It runs cleanup immediately on start, then at CleanupInterval intervals.

Types

type Config

type Config struct {
	// Enabled controls whether audit logging is active
	Enabled bool

	// LogBodies enables logging of full request/response bodies
	LogBodies bool

	// LogHeaders enables logging of request/response headers
	LogHeaders bool

	// BufferSize is the number of log entries to buffer before flushing
	BufferSize int

	// FlushInterval is how often to flush buffered logs
	FlushInterval time.Duration

	// RetentionDays is how long to keep logs (0 = forever)
	RetentionDays int

	// OnlyModelInteractions limits logging to AI model endpoints only
	// When true, only /v1/chat/completions, /v1/responses, /v1/embeddings, /v1/files, and /v1/batches are logged
	OnlyModelInteractions bool
}

Config holds the audit logging configuration.

func DefaultConfig

func DefaultConfig() Config

DefaultConfig returns a Config with sensible defaults.
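A hypothetical configuration sketch; DefaultConfig's actual values are not documented above, so the overrides are illustrative:

```go
cfg := auditlog.DefaultConfig()
cfg.LogBodies = true          // capture request/response bodies (1MB cap applies)
cfg.LogHeaders = true         // sensitive headers are auto-redacted
cfg.RetentionDays = 30        // 0 keeps logs forever
cfg.OnlyModelInteractions = true
```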

type ConversationResult

type ConversationResult struct {
	AnchorID string     `json:"anchor_id"`
	Entries  []LogEntry `json:"entries"`
}

ConversationResult holds a linear conversation thread centered around an anchor log.

type FailoverSnapshot

type FailoverSnapshot struct {
	TargetModel string `json:"target_model,omitempty" bson:"target_model,omitempty"`
}

FailoverSnapshot stores the runtime failover selection used for one request. The target model is the configured failover selector, not the model echoed by the provider response body.

type LogData

type LogData struct {
	// Identity
	UserAgent  string `json:"user_agent,omitempty" bson:"user_agent,omitempty"`
	APIKeyHash string `json:"api_key_hash,omitempty" bson:"api_key_hash,omitempty"`

	// WorkflowFeatures captures the request-time effective workflow features
	// after runtime caps were applied. This keeps audit views historically accurate
	// even if the active process config changes later.
	WorkflowFeatures *WorkflowFeaturesSnapshot `json:"workflow_features,omitempty" bson:"workflow_features,omitempty"`

	// Failover captures runtime redirect details when translated execution
	// moved from the primary selector to a configured failover target.
	Failover *FailoverSnapshot `json:"failover,omitempty" bson:"failover,omitempty"`

	// Request parameters
	Temperature *float64 `json:"temperature,omitempty" bson:"temperature,omitempty"`
	MaxTokens   *int     `json:"max_tokens,omitempty" bson:"max_tokens,omitempty"`

	// Error details (message can be long, so kept in JSON)
	ErrorMessage string `json:"error_message,omitempty" bson:"error_message,omitempty"`

	// Optional headers (when LOGGING_LOG_HEADERS=true)
	// Sensitive headers are auto-redacted
	RequestHeaders  map[string]string `json:"request_headers,omitempty" bson:"request_headers,omitempty"`
	ResponseHeaders map[string]string `json:"response_headers,omitempty" bson:"response_headers,omitempty"`

	// Optional bodies (when LOGGING_LOG_BODIES=true)
	// Stored as any so MongoDB serializes them as native BSON documents (queryable/readable)
	// instead of BSON Binary (base64 in Compass)
	RequestBody  any `json:"request_body,omitempty" bson:"request_body,omitempty"`
	ResponseBody any `json:"response_body,omitempty" bson:"response_body,omitempty"`

	// Body capture status flags (set when body exceeds 1MB limit)
	RequestBodyTooBigToHandle  bool `json:"request_body_too_big_to_handle,omitempty" bson:"request_body_too_big_to_handle,omitempty"`
	ResponseBodyTooBigToHandle bool `json:"response_body_too_big_to_handle,omitempty" bson:"response_body_too_big_to_handle,omitempty"`
}

LogData contains flexible request/response information. Fields that are commonly filtered are stored as columns in LogEntry. This struct contains the remaining flexible data.

type LogEntry

type LogEntry struct {
	// ID is a unique identifier for this log entry (UUID)
	ID string `json:"id" bson:"_id"`

	// Timestamp is when the request started
	Timestamp time.Time `json:"timestamp" bson:"timestamp"`

	// DurationNs is the request duration in nanoseconds
	DurationNs int64 `json:"duration_ns" bson:"duration_ns"`

	// Core fields (indexed for queries)
	RequestedModel    string `json:"requested_model" bson:"requested_model,omitempty"`
	ResolvedModel     string `json:"resolved_model,omitempty" bson:"resolved_model,omitempty"`
	Provider          string `json:"provider" bson:"provider"` // canonical provider type used for routing and filters
	ProviderName      string `json:"provider_name,omitempty" bson:"provider_name,omitempty"`
	AliasUsed         bool   `json:"alias_used,omitempty" bson:"alias_used,omitempty"`
	WorkflowVersionID string `json:"workflow_version_id,omitempty" bson:"workflow_version_id,omitempty"`
	CacheType         string `json:"cache_type,omitempty" bson:"cache_type,omitempty"`
	StatusCode        int    `json:"status_code" bson:"status_code"`

	// Extracted fields for efficient filtering (indexed in relational DBs)
	RequestID  string `json:"request_id,omitempty" bson:"request_id,omitempty"`
	AuthKeyID  string `json:"auth_key_id,omitempty" bson:"auth_key_id,omitempty"`
	AuthMethod string `json:"auth_method,omitempty" bson:"auth_method,omitempty"`
	ClientIP   string `json:"client_ip,omitempty" bson:"client_ip,omitempty"`
	Method     string `json:"method,omitempty" bson:"method,omitempty"`
	Path       string `json:"path,omitempty" bson:"path,omitempty"`
	UserPath   string `json:"user_path,omitempty" bson:"user_path,omitempty"`
	Stream     bool   `json:"stream,omitempty" bson:"stream,omitempty"`
	ErrorType  string `json:"error_type,omitempty" bson:"error_type,omitempty"`

	// Data contains flexible request/response information as JSON
	Data *LogData `json:"data,omitempty" bson:"data,omitempty"`
}

LogEntry represents a single audit log entry. Core fields are indexed for efficient queries.

func CreateStreamEntry

func CreateStreamEntry(baseEntry *LogEntry) *LogEntry

CreateStreamEntry creates a new log entry for a streaming request. This should be called before starting the stream.

func GetStreamEntryFromContext

func GetStreamEntryFromContext(c interface{ Get(string) any }) *LogEntry

GetStreamEntryFromContext retrieves the log entry from Echo context for streaming. This allows handlers to get the entry for wrapping streams.

type LogListResult

type LogListResult struct {
	Entries []LogEntry `json:"entries"`
	Total   int        `json:"total"`
	Limit   int        `json:"limit"`
	Offset  int        `json:"offset"`
}

LogListResult holds a paginated list of audit log entries.

type LogQueryParams

type LogQueryParams struct {
	QueryParams
	RequestedModel string
	Provider       string // filter by provider name or provider type
	Method         string
	Path           string
	UserPath       string
	ErrorType      string
	Search         string
	StatusCode     *int
	Stream         *bool
	Limit          int
	Offset         int
}

LogQueryParams specifies query parameters for paginated audit log retrieval.

type LogStore

type LogStore interface {
	// WriteBatch writes multiple log entries to storage.
	// This is called by the Logger when flushing buffered entries.
	WriteBatch(ctx context.Context, entries []*LogEntry) error

	// Flush forces any pending writes to complete.
	// Called during graceful shutdown.
	Flush(ctx context.Context) error

	// Close releases resources and flushes pending writes.
	Close() error
}

LogStore defines the interface for audit log storage backends. Implementations must be safe for concurrent use.

type Logger

type Logger struct {
	// contains filtered or unexported fields
}

Logger provides async buffered logging with batch writes. It collects log entries in a channel and flushes them to storage either when the buffer is full or at regular intervals.

func NewLogger

func NewLogger(store LogStore, cfg Config) *Logger

NewLogger creates a new async buffered Logger. The logger starts a background goroutine for flushing entries.

func (*Logger) Close

func (l *Logger) Close() error

Close stops the logger and flushes remaining entries. This should be called during graceful shutdown. Close is idempotent - calling it multiple times is safe.

func (*Logger) Config

func (l *Logger) Config() Config

Config returns the logger configuration.

func (*Logger) Write

func (l *Logger) Write(entry *LogEntry)

Write queues a log entry for async writing. This method is non-blocking. If the buffer is full or the logger is closed, the entry is dropped and a warning is logged.
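The documented drop-on-full behavior can be sketched with a buffered channel and a select with a default case (a simplification of the real Logger):

```go
package main

import "fmt"

// write sketches the documented non-blocking behavior: queue the entry if
// the buffer has room, otherwise drop it (the real Logger also logs a
// warning on drop).
func write(buf chan string, entry string) bool {
	select {
	case buf <- entry:
		return true
	default:
		return false // buffer full: entry dropped
	}
}

func main() {
	buf := make(chan string, 2)
	fmt.Println(write(buf, "a")) // true
	fmt.Println(write(buf, "b")) // true
	fmt.Println(write(buf, "c")) // false
}
```

The select/default idiom is what makes the call non-blocking: it never waits for the flusher to drain the channel.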

type LoggerInterface

type LoggerInterface interface {
	Write(entry *LogEntry)
	Config() Config
	Close() error
}

LoggerInterface defines the interface shared by the real Logger and the NoopLogger.

type MongoDBReader

type MongoDBReader struct {
	// contains filtered or unexported fields
}

MongoDBReader implements Reader for MongoDB.

func NewMongoDBReader

func NewMongoDBReader(database *mongo.Database) (*MongoDBReader, error)

NewMongoDBReader creates a new MongoDB audit log reader.

func (*MongoDBReader) GetConversation

func (r *MongoDBReader) GetConversation(ctx context.Context, logID string, limit int) (*ConversationResult, error)

GetConversation returns a linear conversation thread around a seed log entry.

func (*MongoDBReader) GetLogByID

func (r *MongoDBReader) GetLogByID(ctx context.Context, id string) (*LogEntry, error)

GetLogByID returns a single audit log entry by ID.

func (*MongoDBReader) GetLogs

func (r *MongoDBReader) GetLogs(ctx context.Context, params LogQueryParams) (*LogListResult, error)

GetLogs returns a paginated list of audit log entries.

type MongoDBStore

type MongoDBStore struct {
	// contains filtered or unexported fields
}

MongoDBStore implements LogStore for MongoDB.

func NewMongoDBStore

func NewMongoDBStore(database *mongo.Database, retentionDays int) (*MongoDBStore, error)

NewMongoDBStore creates a new MongoDB audit log store. It creates the collection and indexes if they don't exist. MongoDB handles TTL-based cleanup automatically via TTL indexes.

func (*MongoDBStore) Close

func (s *MongoDBStore) Close() error

Close is a no-op for MongoDB as the client is managed by the storage layer.

func (*MongoDBStore) Flush

func (s *MongoDBStore) Flush(_ context.Context) error

Flush is a no-op for MongoDB as writes are synchronous.

func (*MongoDBStore) WriteBatch

func (s *MongoDBStore) WriteBatch(ctx context.Context, entries []*LogEntry) error

WriteBatch writes multiple log entries to MongoDB using InsertMany.

type NoopLogger

type NoopLogger struct{}

NoopLogger is a Logger implementation that does nothing; it is used when logging is disabled.

func (*NoopLogger) Close

func (l *NoopLogger) Close() error

Close does nothing.

func (*NoopLogger) Config

func (l *NoopLogger) Config() Config

Config returns an empty Config.

func (*NoopLogger) Write

func (l *NoopLogger) Write(_ *LogEntry)

Write does nothing.

type PartialWriteError

type PartialWriteError struct {
	TotalEntries int
	FailedCount  int
	Cause        mongo.BulkWriteException
}

PartialWriteError wraps a mongo.BulkWriteException with additional context about how many entries failed vs succeeded.

func (*PartialWriteError) Error

func (e *PartialWriteError) Error() string

func (*PartialWriteError) Unwrap

func (e *PartialWriteError) Unwrap() error

type PostgreSQLReader

type PostgreSQLReader struct {
	// contains filtered or unexported fields
}

PostgreSQLReader implements Reader for PostgreSQL databases.

func NewPostgreSQLReader

func NewPostgreSQLReader(pool *pgxpool.Pool) (*PostgreSQLReader, error)

NewPostgreSQLReader creates a new PostgreSQL audit log reader.

func (*PostgreSQLReader) GetConversation

func (r *PostgreSQLReader) GetConversation(ctx context.Context, logID string, limit int) (*ConversationResult, error)

GetConversation returns a linear conversation thread around a seed log entry.

func (*PostgreSQLReader) GetLogByID

func (r *PostgreSQLReader) GetLogByID(ctx context.Context, id string) (*LogEntry, error)

GetLogByID returns a single audit log entry by ID.

func (*PostgreSQLReader) GetLogs

func (r *PostgreSQLReader) GetLogs(ctx context.Context, params LogQueryParams) (*LogListResult, error)

GetLogs returns a paginated list of audit log entries.

type PostgreSQLStore

type PostgreSQLStore struct {
	// contains filtered or unexported fields
}

PostgreSQLStore implements LogStore for PostgreSQL databases.

func NewPostgreSQLStore

func NewPostgreSQLStore(pool *pgxpool.Pool, retentionDays int) (*PostgreSQLStore, error)

NewPostgreSQLStore creates a new PostgreSQL audit log store. It creates the audit_logs table if it doesn't exist and starts a background cleanup goroutine if retention is configured.

func (*PostgreSQLStore) Close

func (s *PostgreSQLStore) Close() error

Close stops the cleanup goroutine. The connection pool is not closed here because it is managed by the storage layer. Safe to call multiple times.

func (*PostgreSQLStore) Flush

func (s *PostgreSQLStore) Flush(_ context.Context) error

Flush is a no-op for PostgreSQL as writes are synchronous.

func (*PostgreSQLStore) WriteBatch

func (s *PostgreSQLStore) WriteBatch(ctx context.Context, entries []*LogEntry) error

WriteBatch writes multiple log entries to PostgreSQL using batch insert.

type QueryParams

type QueryParams struct {
	StartDate time.Time // Inclusive start (day precision)
	EndDate   time.Time // Inclusive end (day precision)
}

QueryParams specifies the date range for audit log retrieval.

type Reader

type Reader interface {
	// GetLogs returns a paginated list of audit log entries with optional filtering.
	GetLogs(ctx context.Context, params LogQueryParams) (*LogListResult, error)

	// GetLogByID returns a single audit log entry by ID.
	// Returns (nil, nil) when no entry exists for the given ID.
	GetLogByID(ctx context.Context, id string) (*LogEntry, error)

	// GetConversation returns a linear conversation thread around a seed log entry.
	// It follows Responses API linkage fields when available:
	// request_body.previous_response_id and response_body.id.
	GetConversation(ctx context.Context, logID string, limit int) (*ConversationResult, error)
}

Reader provides read access to audit log data for the admin API.
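A hypothetical query against a Reader; the field values are illustrative:

```go
res, err := reader.GetLogs(ctx, auditlog.LogQueryParams{
	Provider: "openai", // matches provider name or provider type
	Limit:    50,
	Offset:   0,
})
if err != nil {
	return err
}
for _, e := range res.Entries {
	fmt.Println(e.ID, e.RequestedModel, e.StatusCode)
}
```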

func NewReader

func NewReader(store storage.Storage) (Reader, error)

NewReader creates an audit log Reader from a storage backend. Returns nil when store is nil.

type Result

type Result struct {
	Logger  LoggerInterface
	Storage storage.Storage
}

Result holds the initialized audit logger and its dependencies. The caller is responsible for calling Close() to release resources.

func New

func New(ctx context.Context, cfg *config.Config) (*Result, error)

New creates an audit logger from configuration. Returns a Result containing the logger and storage for lifecycle management. The caller must call Result.Close() during shutdown.

If logging is disabled in the config, returns a NoopLogger with nil storage.
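A minimal lifecycle sketch, assuming cfg comes from the surrounding config package:

```go
result, err := auditlog.New(ctx, cfg)
if err != nil {
	return fmt.Errorf("init audit log: %w", err)
}
defer result.Close() // idempotent; flushes buffered entries

// result.Logger satisfies LoggerInterface either way (a NoopLogger
// when logging is disabled), so callers need not branch on the config.
```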

func (*Result) Close

func (r *Result) Close() error

Close releases all resources held by the audit logger. Safe to call multiple times.

type SQLiteReader

type SQLiteReader struct {
	// contains filtered or unexported fields
}

SQLiteReader implements Reader for SQLite databases.

func NewSQLiteReader

func NewSQLiteReader(db *sql.DB) (*SQLiteReader, error)

NewSQLiteReader creates a new SQLite audit log reader.

func (*SQLiteReader) GetConversation

func (r *SQLiteReader) GetConversation(ctx context.Context, logID string, limit int) (*ConversationResult, error)

GetConversation returns a linear conversation thread around a seed log entry.

func (*SQLiteReader) GetLogByID

func (r *SQLiteReader) GetLogByID(ctx context.Context, id string) (*LogEntry, error)

GetLogByID returns a single audit log entry by ID.

func (*SQLiteReader) GetLogs

func (r *SQLiteReader) GetLogs(ctx context.Context, params LogQueryParams) (*LogListResult, error)

GetLogs returns a paginated list of audit log entries.

type SQLiteStore

type SQLiteStore struct {
	// contains filtered or unexported fields
}

SQLiteStore implements LogStore for SQLite databases.

func NewSQLiteStore

func NewSQLiteStore(db *sql.DB, retentionDays int) (*SQLiteStore, error)

NewSQLiteStore creates a new SQLite audit log store. It creates the audit_logs table if it doesn't exist and starts a background cleanup goroutine if retention is configured.

func (*SQLiteStore) Close

func (s *SQLiteStore) Close() error

Close stops the cleanup goroutine. The database handle is not closed here because it is managed by the storage layer. Safe to call multiple times.

func (*SQLiteStore) Flush

func (s *SQLiteStore) Flush(_ context.Context) error

Flush is a no-op for SQLite as writes are synchronous.

func (*SQLiteStore) WriteBatch

func (s *SQLiteStore) WriteBatch(ctx context.Context, entries []*LogEntry) error

WriteBatch writes multiple log entries to SQLite using batch insert. Entries are chunked to stay within SQLite's parameter limit.

type StreamLogObserver

type StreamLogObserver struct {
	// contains filtered or unexported fields
}

StreamLogObserver reconstructs stream metadata and optional response bodies from parsed SSE JSON payloads.

func NewStreamLogObserver

func NewStreamLogObserver(logger LoggerInterface, entry *LogEntry, path string) *StreamLogObserver

func (*StreamLogObserver) OnJSONEvent

func (o *StreamLogObserver) OnJSONEvent(event map[string]any)

func (*StreamLogObserver) OnStreamClose

func (o *StreamLogObserver) OnStreamClose()

type WorkflowFeaturesSnapshot

type WorkflowFeaturesSnapshot struct {
	Cache      bool `json:"cache" bson:"cache"`
	Audit      bool `json:"audit" bson:"audit"`
	Usage      bool `json:"usage" bson:"usage"`
	Guardrails bool `json:"guardrails" bson:"guardrails"`
	Fallback   bool `json:"fallback" bson:"fallback"`
}

WorkflowFeaturesSnapshot stores the effective workflow feature state that applied to one request. Fields intentionally do not use omitempty so "false" remains explicit once the snapshot exists.
