Documentation ¶
Overview ¶
Package logsense is a Go library for AI-powered log analysis.
logsense embeds into a Go service. It ingests logs from files or via inline reporting, groups them into templates using the Drain algorithm, scores each cluster for anomaly signals, and uses a configured LLM provider to produce summaries, root-cause hypotheses, and suggested actions.
Quickstart:
	ll, err := logsense.New(logsense.Config{
		Sources: []logsense.SourceConfig{
			{Kind: "file", Path: "/var/log/myapp.log"},
		},
		AI: logsense.AIConfig{Provider: "logsense-ai"},
	})
	if err != nil {
		return err
	}
	defer ll.Close()
	if err := ll.Start(ctx); err != nil {
		return err
	}
Storage defaults to a local SQLite file (./logsense.db). For Postgres, set Storage.Kind = "postgres" and provide a DSN in Storage.PostgresDSN.
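For example, a sketch of the Postgres storage section in YAML form (the environment variable name and DSN below are illustrative; supply your own connection string):

```
# logsense.yaml: storage section for Postgres mode.
storage:
  kind: postgres
  postgres_dsn: ${LOGSENSE_PG_DSN}  # e.g. postgres://user:pass@localhost:5432/logsense
```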
View clusters and analyses with the logsense binary:
	go install github.com/Tragidra/logsense/cmd/logsense@latest
	logsense ui --db ./logsense.db
See https://github.com/Tragidra/logsense for full documentation.
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var ErrAIDisabled = errors.New("logsense: AI provider not configured")
ErrAIDisabled is returned when AnalyzeNow is called but the configured AI provider is unusable (e.g. missing credentials).
var ErrNotStarted = errors.New("logsense: not started or already closed")
ErrNotStarted is returned by Report and AnalyzeNow if the logsense instance has not been started or has already been closed.
Functions ¶
func New ¶
New constructs a logsense instance from a Config. It does not start any goroutines; call Start for that.
Example ¶
ExampleNew shows the minimal file-mode setup: logsense tails one log file and groups events into clusters. Run the dashboard to inspect results:
logsense ui --db ./logsense.db
package main

import (
	"context"
	"log"

	"github.com/Tragidra/logsense"
)

func main() {
	ll, err := logsense.New(logsense.Config{
		Sources: []logsense.SourceConfig{
			{
				Kind:    "file",
				Path:    "/var/log/myapp.log",
				Service: "myapp",
			},
		},
		AI: logsense.AIConfig{
			Provider: "logsense-ai", // local LM Studio / Ollama on :1234
		},
		Storage: logsense.StorageConfig{
			Kind:       "sqlite",
			SQLitePath: "./logsense.db",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer ll.Close()

	if err := ll.Start(context.Background()); err != nil {
		log.Fatal(err)
	}

	// Block until the process is interrupted.
	select {}
}
Output:
Example (YamlConfig) ¶
ExampleNew_yamlConfig shows the same setup loaded from a YAML file. The file supports ${VAR} and ${VAR:-default} env expansion.
package main

import (
	"context"
	"log"

	"github.com/Tragidra/logsense"
)

func main() {
	// logsense.yaml:
	//
	//	sources:
	//	  - kind: file
	//	    path: /var/log/myapp.log
	//	ai:
	//	  provider: openrouter
	//	  api_key: ${OPENROUTER_API_KEY}
	//	  model: anthropic/claude-3.5-haiku
	//	storage:
	//	  kind: sqlite
	//	  sqlite_path: ./logsense.db
	ll, err := logsense.NewFromYAML("logsense.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer ll.Close()

	if err := ll.Start(context.Background()); err != nil {
		log.Fatal(err)
	}

	select {}
}
Output:
func NewFromYAML ¶
NewFromYAML loads configuration from a YAML file (with ${VAR} env expansion) and constructs a logsense instance.
Types ¶
type AIConfig ¶
type AIConfig struct {
	// "logsense-ai" (default — local OpenAI-compatible endpoint, LM Studio) or "openrouter".
	Provider string `yaml:"provider"`

	// Required for "openrouter".
	APIKey string `yaml:"api_key"`

	// Required for "openrouter" (for example "anthropic/claude-3.5-sonnet").
	// Optional for "logsense-ai": whatever model the local server has loaded.
	Model string `yaml:"model"`

	// For "logsense-ai": defaults to http://localhost:1234/v1.
	// For "openrouter": defaults to https://openrouter.ai/api/v1.
	BaseURL string `yaml:"base_url"`

	Timeout    time.Duration `yaml:"timeout"`     // default 45s
	MaxRetries int           `yaml:"max_retries"` // default 2

	// MaxTokens caps the LLM response. Default 2000.
	MaxTokens int `yaml:"max_tokens"`

	// Temperature for the LLM. Default 0.2 for openrouter, forced to 0 for
	// logsense-ai (small local models give poor structured output otherwise).
	Temperature float64 `yaml:"temperature"`
}
AIConfig configures the LLM provider used for cluster analysis.
type ClusterConfig ¶
type ClusterConfig struct {
	SimilarityThreshold float64       `yaml:"similarity_threshold"` // default 0.4
	MaxDepth            int           `yaml:"max_depth"`            // default 3
	PruneAfter          time.Duration `yaml:"prune_after"`          // default 72h
}
ClusterConfig tunes the Drain clustering algorithm (https://jiemingzhu.github.io/pub/pjhe_icws2017.pdf).
type Config ¶
type Config struct {
	// Sources to import; may be empty if Inline mode is the only input.
	Sources []SourceConfig `yaml:"sources"`

	// AI provider configuration.
	AI AIConfig `yaml:"ai"`

	// Where to persist clusters and analyses; defaults to in-memory.
	Storage StorageConfig `yaml:"storage"`

	// Inline mode — when enabled, ll.Report() pushes events into the
	// pipeline so user code errors are clustered with file-sourced logs.
	Inline InlineConfig `yaml:"inline"`

	// Clustering tuning; sensible defaults if zero.
	Cluster ClusterConfig `yaml:"cluster"`

	// Logger. If nil, a default slog handler at WARN level is used.
	Logger *slog.Logger `yaml:"-"`
}
Config is the public configuration for the logsense library. Pass it to New, or load it from disk with NewFromYAML (see internal/config).

All fields have sensible defaults; only Sources or Inline.Enabled is strictly required (you need at least one input).
type Fields ¶
Fields is a flat key-value map of structured data attached to a Report() call. Nested values are JSON-marshalled when stored.
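The "nested values are JSON-marshalled" rule can be sketched as follows. This is illustrative, not the library's storage code; the flatten helper is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// flatten stores scalar values as strings and JSON-encodes anything
// nested, mirroring the behaviour described for Fields above.
func flatten(fields map[string]any) map[string]string {
	out := make(map[string]string, len(fields))
	for k, v := range fields {
		switch v.(type) {
		case string, bool, int, int64, float64:
			out[k] = fmt.Sprint(v)
		default:
			b, err := json.Marshal(v)
			if err != nil {
				out[k] = fmt.Sprint(v) // fall back to Go's default formatting
				continue
			}
			out[k] = string(b)
		}
	}
	return out
}

func main() {
	f := map[string]any{
		"user_id": 42,
		"ctx":     map[string]any{"region": "eu", "retry": true},
	}
	fmt.Println(flatten(f)["ctx"]) // prints {"region":"eu","retry":true}
}
```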
type InlineConfig ¶
type InlineConfig struct {
	// Enabled allows ll.Report() to feed synthesised events into the pipeline.
	Enabled bool `yaml:"enabled"`

	// MinPriority is the cluster priority above which inline AI analysis is
	// triggered automatically. Default 50.
	MinPriority int `yaml:"min_priority"`

	// MaxConcurrent caps the number of background AI requests in flight at once.
	// Default 2.
	MaxConcurrent int `yaml:"max_concurrent"`
}
InlineConfig controls the behaviour of ll.Report() and the inline AI path.
type SourceConfig ¶
type SourceConfig struct {
	Kind           string `yaml:"kind"`            // "file"
	Path           string `yaml:"path"`            // file path
	Service        string `yaml:"service"`         // optional label; defaults to base filename
	Format         string `yaml:"format"`          // "auto" | "json" | "text"
	StartFrom      string `yaml:"start_from"`      // "beginning" | "end" (default "end")
	FollowRotation bool   `yaml:"follow_rotation"` // default true
}
SourceConfig describes a single input source. Only "file" is supported in the library form.
type Stats ¶
type Stats struct {
	// Dropped counts events that Report() could not enqueue because the
	// pipeline channel was full.
	Dropped int64
}
Stats reports cumulative counters useful for monitoring the library from the host service.
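The Dropped counter implies a non-blocking enqueue: when the pipeline channel is full, the event is discarded and the counter incremented rather than stalling the caller. A self-contained sketch of that pattern (not logsense's internals; the pipeline type here is hypothetical):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type pipeline struct {
	ch      chan string
	dropped atomic.Int64 // cumulative, like Stats.Dropped
}

// enqueue never blocks: a full channel costs the caller nothing
// beyond an incremented drop counter.
func (p *pipeline) enqueue(ev string) bool {
	select {
	case p.ch <- ev:
		return true
	default:
		p.dropped.Add(1)
		return false
	}
}

func main() {
	p := &pipeline{ch: make(chan string, 2)}
	for i := 0; i < 5; i++ {
		p.enqueue(fmt.Sprintf("event-%d", i))
	}
	// Buffer of 2 with no consumer running, so 3 events are dropped.
	fmt.Println("dropped:", p.dropped.Load()) // prints dropped: 3
}
```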
type StorageConfig ¶
type StorageConfig struct {
	// "sqlite" (default), "postgres", or "memory".
	Kind string `yaml:"kind"`

	// SQLitePath is the file path for the SQLite database. Default "./logsense.db".
	SQLitePath string `yaml:"sqlite_path"`

	// PostgresDSN is the connection string for postgres mode.
	PostgresDSN string `yaml:"postgres_dsn"`
}
StorageConfig selects and configures the persistence backend.
Directories ¶
| Path | Synopsis |
|---|---|
| cmd | |
| cmd/loglens (command) | Package main is the logsense CLI entry point. |
| cmd/loglens/cmd | Package cmd defines the logsense CLI command tree. |
| examples | |
| examples/basic (command) | Basic demonstrates the minimal file-mode integration: logsense tails one or more log files and clusters them in the background. |
| examples/demo (command) | Demo: |
| examples/headless (command) | Headless demonstrates reading stored recommendations programmatically, no UI needed. |
| examples/inline (command) | Inline demonstrates embedding logsense in an HTTP service. |
| internal | |
| internal/analyze | Package analyze sends high-priority clusters to an LLM provider and stores the resulting analyses. |
| internal/api | Package api implements the HTTP server that the logsense UI and external consumers read from. |
| internal/config | Package config handles loading and validating logsense configuration, mirroring config.go in the repository root. |
| internal/ingest | Package ingest defines the Source interface for log ingestion. |
| internal/ingest/file | Package file implements a tail-based log ingest source. |
| internal/llm | Package llm defines the Provider interface and shared types used by all LLM provider implementations. |
| internal/llm/logsenseai | Package logsenseai implements llm.Provider for a local OpenAI-compatible server (LM Studio, for example). |
| internal/normalize | Package normalize converts RawLog records into structured LogEvents. |
| internal/score | Package score assigns a priority (0–100) to each cluster and records anomaly flags such as "burst", "novel", "rare", and "cross-service". |
| internal/storage | Package storage defines the repository interface used by the rest of logsense; implementations live in subpackages. |
| internal/storage/memory | Package memory provides an in-memory storage.Repository implementation. |
| internal/storage/migrator | Package migrator runs ordered SQL migrations from an embedded filesystem. |
| internal/storage/postgres | Package postgres is the PostgreSQL-backed implementation of storage.Repository. |
| internal/storage/sqlite | Package sqlite is the SQLite-backed implementation of storage.Repository. |
| model | Package model defines the domain types used by logsense and returned by the library's public API. |
| pkg | |
| pkg/version | Package version holds build-time version information injected via ldflags. |
| web | Package web embeds the built Vue dashboard into the binary. |