common

package
v0.0.0-...-ffc4fba

Published: Mar 13, 2026 License: Apache-2.0 Imports: 20 Imported by: 0

Documentation

Overview

Package common provides shared building blocks for analyzers: aggregation, checkpointing, classification, context tracking, data extraction, formatting, reporting, and UAST traversal helpers.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func ExtractEntityName

func ExtractEntityName(n *node.Node) (string, bool)

ExtractEntityName extracts a name from a node (function, variable, class, etc.). It tries properties["name"], then token, then the first child's token/properties.
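The documented fallback chain can be sketched with a toy node type; the fields below are assumptions for illustration, not the real node.Node layout:

```go
package main

import "fmt"

// toyNode is a hypothetical stand-in for node.Node, reduced to the
// three places ExtractEntityName is documented to look for a name.
type toyNode struct {
	Props    map[string]string
	Token    string
	Children []*toyNode
}

// extractEntityName mirrors the documented fallback chain:
// properties["name"], then token, then the first child's token/properties.
func extractEntityName(n *toyNode) (string, bool) {
	if n == nil {
		return "", false
	}
	if name, ok := n.Props["name"]; ok && name != "" {
		return name, true
	}
	if n.Token != "" {
		return n.Token, true
	}
	if len(n.Children) > 0 {
		return extractEntityName(n.Children[0])
	}
	return "", false
}

func main() {
	// A function node with no name property or token of its own,
	// but whose first child carries the identifier token.
	fn := &toyNode{Children: []*toyNode{{Token: "parseConfig"}}}
	name, ok := extractEntityName(fn)
	fmt.Println(name, ok) // parseConfig true
}
```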

func ExtractNameFromChildren

func ExtractNameFromChildren(n *node.Node, childIndex int) (string, bool)

ExtractNameFromChildren extracts a name from node children.

func ExtractNameFromProps

func ExtractNameFromProps(n *node.Node, propKey string) (string, bool)

ExtractNameFromProps extracts a name from node properties.

func ExtractNameFromToken

func ExtractNameFromToken(n *node.Node) (string, bool)

ExtractNameFromToken extracts a name from node token.

func FilterByInterface

func FilterByInterface[T any, U any](items []T, cast func(T) (U, bool)) []U

FilterByInterface returns a new slice containing only those items from the input where cast returns (value, true). Preserves input order.
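The behavior is small enough to sketch in full; this is a reimplementation for illustration, not the package's actual source:

```go
package main

import "fmt"

// filterByInterface sketches the documented contract: keep only items
// where cast returns (value, true), preserving input order.
func filterByInterface[T any, U any](items []T, cast func(T) (U, bool)) []U {
	out := make([]U, 0, len(items))
	for _, it := range items {
		if v, ok := cast(it); ok {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	// Narrow a mixed []any down to just the ints, in order.
	vals := []any{1, "a", 2, "b", 3}
	ints := filterByInterface(vals, func(v any) (int, bool) {
		n, ok := v.(int)
		return n, ok
	})
	fmt.Println(ints) // [1 2 3]
}
```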

Types

type Aggregator

type Aggregator struct {
	// contains filtered or unexported fields
}

Aggregator provides generic aggregation capabilities for analyzers.

func NewAggregator

func NewAggregator(
	analyzerName string,
	numericKeys, countKeys []string,
	collectionKey string,
	identifierKeys []string,
	messageBuilder func(float64) string,
	emptyResultBuilder func() analyze.Report,
) *Aggregator

NewAggregator creates a new Aggregator with configurable components. identifierKeys specifies the key(s) used for deduplication. When multiple keys are provided, they form a composite dedup key (e.g., ["_source_file", "name"]) to prevent cross-file overwrites of items with the same primary name.

func (*Aggregator) Aggregate

func (a *Aggregator) Aggregate(results map[string]analyze.Report)

Aggregate combines multiple analysis results.

func (*Aggregator) EstimatedStateSize

func (a *Aggregator) EstimatedStateSize() int64

EstimatedStateSize returns the estimated in-memory state size in bytes. Sums MetricsProcessor and SpillableDataCollector estimates.

func (*Aggregator) GetDataCollector

func (a *Aggregator) GetDataCollector() *SpillableDataCollector

GetDataCollector returns the data collector.

func (*Aggregator) GetMetricsProcessor

func (a *Aggregator) GetMetricsProcessor() *MetricsProcessor

GetMetricsProcessor returns the metrics processor.

func (*Aggregator) GetResult

func (a *Aggregator) GetResult() analyze.Report

GetResult returns the aggregated analysis result.

func (*Aggregator) GetResultBuilder

func (a *Aggregator) GetResultBuilder() *ResultBuilder

GetResultBuilder returns the result builder.

func (*Aggregator) SetAggregationMode

func (a *Aggregator) SetAggregationMode(mode analyze.AggregationMode)

SetAggregationMode sets the aggregation mode on the data collector. In analyze.AggregationModeSummaryOnly, per-item data collection is disabled.

func (*Aggregator) SetSpillThreshold

func (a *Aggregator) SetSpillThreshold(threshold int)

SetSpillThreshold configures the spill threshold on the data collector. A threshold of 0 disables spilling.

type CheckpointHelper

type CheckpointHelper[T any] struct {
	// contains filtered or unexported fields
}

CheckpointHelper provides SaveCheckpoint and LoadCheckpoint methods backed by a persist.Persister with pre-bound build and restore callbacks. Embed *CheckpointHelper[T] in an analyzer struct to promote these methods and partially satisfy the checkpoint.Checkpointable interface.

func NewCheckpointHelper

func NewCheckpointHelper[T any](
	basename string,
	codec persist.Codec,
	build func() *T,
	restore func(*T),
) *CheckpointHelper[T]

NewCheckpointHelper creates a helper that saves/loads state of type T using the given basename, codec, and callbacks.

func (*CheckpointHelper[T]) LoadCheckpoint

func (h *CheckpointHelper[T]) LoadCheckpoint(dir string) error

LoadCheckpoint restores the state from the given directory.

func (*CheckpointHelper[T]) SaveCheckpoint

func (h *CheckpointHelper[T]) SaveCheckpoint(dir string) error

SaveCheckpoint writes the state to the given directory.

type Classifier

type Classifier[T cmp.Ordered] struct {
	// contains filtered or unexported fields
}

Classifier maps ordered values to string labels using descending thresholds. It is safe for concurrent use after construction.

func NewClassifier

func NewClassifier[T cmp.Ordered](thresholds []Threshold[T], defaultLabel string) Classifier[T]

NewClassifier creates a classifier from the given thresholds and default label. Thresholds are copied and sorted in descending order by Limit. The input slice is not modified.

func (Classifier[T]) Classify

func (c Classifier[T]) Classify(value T) string

Classify returns the label of the first threshold where value >= Limit. If no threshold matches, the default label is returned.
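The classification rule can be sketched in isolation; this is a reimplementation for illustration (the real type keeps its fields unexported):

```go
package main

import (
	"cmp"
	"fmt"
	"slices"
)

// threshold mirrors the package's Threshold[T]: values >= Limit get Label.
type threshold[T cmp.Ordered] struct {
	Limit T
	Label string
}

// classify sketches Classifier.Classify: copy and sort thresholds
// descending by Limit, first match wins, default label otherwise.
func classify[T cmp.Ordered](ths []threshold[T], defaultLabel string, value T) string {
	sorted := slices.Clone(ths) // input slice is not modified
	slices.SortFunc(sorted, func(a, b threshold[T]) int {
		return cmp.Compare(b.Limit, a.Limit) // descending
	})
	for _, t := range sorted {
		if value >= t.Limit {
			return t.Label
		}
	}
	return defaultLabel
}

func main() {
	ths := []threshold[int]{{Limit: 10, Label: "high"}, {Limit: 5, Label: "medium"}}
	fmt.Println(classify(ths, "low", 7))  // medium
	fmt.Println(classify(ths, "low", 12)) // high
	fmt.Println(classify(ths, "low", 3))  // low
}
```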

type ContextStack

type ContextStack[T any] struct {
	// contains filtered or unexported fields
}

ContextStack is a generic LIFO stack for tracking nested analysis contexts during UAST tree traversal. It replaces the repeated push/pop/peek pattern found in visitor implementations.

func NewContextStack

func NewContextStack[T any]() *ContextStack[T]

NewContextStack creates a new empty ContextStack.

func (*ContextStack[T]) Current

func (s *ContextStack[T]) Current() (T, bool)

Current returns the top element without removing it. Returns the zero value and false if the stack is empty.

func (*ContextStack[T]) Depth

func (s *ContextStack[T]) Depth() int

Depth returns the number of elements on the stack.

func (*ContextStack[T]) Pop

func (s *ContextStack[T]) Pop() (T, bool)

Pop removes and returns the top element. Returns the zero value and false if the stack is empty.

func (*ContextStack[T]) Push

func (s *ContextStack[T]) Push(ctx T)

Push adds an element to the top of the stack.
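The whole stack API is small enough to sketch in full; this is a reimplementation for illustration (the real type keeps its storage unexported):

```go
package main

import "fmt"

// contextStack sketches the package's generic LIFO stack used to track
// nested scopes during UAST traversal.
type contextStack[T any] struct {
	items []T
}

// Push adds an element to the top of the stack.
func (s *contextStack[T]) Push(ctx T) { s.items = append(s.items, ctx) }

// Pop removes and returns the top element; zero value and false if empty.
func (s *contextStack[T]) Pop() (T, bool) {
	var zero T
	if len(s.items) == 0 {
		return zero, false
	}
	top := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return top, true
}

// Current returns the top element without removing it.
func (s *contextStack[T]) Current() (T, bool) {
	var zero T
	if len(s.items) == 0 {
		return zero, false
	}
	return s.items[len(s.items)-1], true
}

// Depth returns the number of elements on the stack.
func (s *contextStack[T]) Depth() int { return len(s.items) }

func main() {
	// Typical traversal use: push on entering a scope, pop on leaving.
	var s contextStack[string]
	s.Push("file")
	s.Push("class")
	s.Push("method")
	cur, _ := s.Current()
	fmt.Println(cur, s.Depth()) // method 3
	s.Pop()
	cur, _ = s.Current()
	fmt.Println(cur, s.Depth()) // class 2
}
```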

type DataExtractor

type DataExtractor struct {
	// contains filtered or unexported fields
}

DataExtractor provides generic data extraction capabilities.

func NewDataExtractor

func NewDataExtractor(config ExtractionConfig) *DataExtractor

NewDataExtractor creates a new DataExtractor with configurable extraction settings.

func (*DataExtractor) ExtractChildCount

func (de *DataExtractor) ExtractChildCount(n *node.Node) (int, bool)

ExtractChildCount extracts the number of children.

func (*DataExtractor) ExtractName

func (de *DataExtractor) ExtractName(n *node.Node, extractorKey string) (string, bool)

ExtractName extracts a name from a node using the specified extractor.

func (*DataExtractor) ExtractNameFromChildren

func (de *DataExtractor) ExtractNameFromChildren(n *node.Node, childIndex int) (string, bool)

ExtractNameFromChildren extracts a name from node children.

func (*DataExtractor) ExtractNameFromProps

func (de *DataExtractor) ExtractNameFromProps(n *node.Node, propKey string) (string, bool)

ExtractNameFromProps extracts a name from node properties.

func (*DataExtractor) ExtractNameFromToken

func (de *DataExtractor) ExtractNameFromToken(n *node.Node) (string, bool)

ExtractNameFromToken extracts a name from node token.

func (*DataExtractor) ExtractNodePosition

func (de *DataExtractor) ExtractNodePosition(target *node.Node) (map[string]any, bool)

ExtractNodePosition extracts the node position.

func (*DataExtractor) ExtractNodeProperties

func (de *DataExtractor) ExtractNodeProperties(n *node.Node) (map[string]string, bool)

ExtractNodeProperties extracts all node properties.

func (*DataExtractor) ExtractNodeRoles

func (de *DataExtractor) ExtractNodeRoles(n *node.Node) ([]string, bool)

ExtractNodeRoles extracts the node roles.

func (*DataExtractor) ExtractNodeType

func (de *DataExtractor) ExtractNodeType(n *node.Node) (string, bool)

ExtractNodeType extracts the node type.

func (*DataExtractor) ExtractValue

func (de *DataExtractor) ExtractValue(n *node.Node, extractorKey string) (any, bool)

ExtractValue extracts a value from a node using the specified extractor.

type DetailedDataCollector

type DetailedDataCollector struct {
	// contains filtered or unexported fields
}

DetailedDataCollector collects detailed per-item data from individual file reports and merges them into the aggregated result. Unlike SpillableDataCollector, it appends all items without deduplication.

Supports both legacy []map[string]any collections and analyze.TypedCollection wrappers. TypedCollection items are stored as-is and converted to maps only in DetailedDataCollector.AddToResult.

func NewDetailedDataCollector

func NewDetailedDataCollector(keys ...string) *DetailedDataCollector

NewDetailedDataCollector creates a collector for the given report keys.

func (*DetailedDataCollector) AddToResult

func (d *DetailedDataCollector) AddToResult(result analyze.Report)

AddToResult adds all non-empty collections to the result report. TypedCollection items are converted to []map[string]any at this point.

func (*DetailedDataCollector) CollectFromReports

func (d *DetailedDataCollector) CollectFromReports(results map[string]analyze.Report)

CollectFromReports extracts data for all keys from all non-nil reports. In analyze.AggregationModeSummaryOnly mode, this is a no-op.

func (*DetailedDataCollector) SetAggregationMode

func (d *DetailedDataCollector) SetAggregationMode(mode analyze.AggregationMode)

SetAggregationMode sets the aggregation mode. In analyze.AggregationModeSummaryOnly, CollectFromReports becomes a no-op.

type ExtractionConfig

type ExtractionConfig struct {
	NameExtractors    map[string]NameExtractor
	ValueExtractors   map[string]ValueExtractor
	DefaultExtractors bool
}

ExtractionConfig defines configuration for data extraction.

type FormatConfig

type FormatConfig struct {
	SortBy           string
	SortOrder        string
	MaxItems         int
	ShowProgressBars bool
	ShowTables       bool
	ShowDetails      bool
	SkipHeader       bool
}

FormatConfig defines configuration for formatting.

type Formatter

type Formatter struct {
	// contains filtered or unexported fields
}

Formatter provides generic formatting capabilities for analysis results.

func NewFormatter

func NewFormatter(config FormatConfig) *Formatter

NewFormatter creates a new Formatter with configurable formatting settings.

func (*Formatter) FormatReport

func (f *Formatter) FormatReport(report analyze.Report) string

FormatReport formats an analysis report for display.

type IdentityMixin

type IdentityMixin struct {
	Identity           *plumbing.IdentityDetector
	ReversedPeopleDict []string
}

IdentityMixin deduplicates the identity-resolution pattern shared by burndown, couples, imports, and devs history analyzers.

Each of those analyzers needs two fields — an IdentityDetector reference (set by the pipeline) and a fallback reversed-people-dict (set from Configure facts). The GetReversedPeopleDict method encapsulates the two-tier resolution: prefer IdentityDetector's dict when available, fall back to the manually-set ReversedPeopleDict.

func (*IdentityMixin) GetReversedPeopleDict

func (m *IdentityMixin) GetReversedPeopleDict() []string

GetReversedPeopleDict returns the identity-resolved people dictionary. It prefers IdentityDetector's dict when available and non-empty, falling back to the manually-set ReversedPeopleDict.
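The two-tier resolution can be sketched with a stand-in detector type; identityDetector below is an assumption for illustration, not the real plumbing.IdentityDetector:

```go
package main

import "fmt"

// identityDetector is a hypothetical stand-in for plumbing.IdentityDetector;
// only its reversed-people-dict matters for this sketch.
type identityDetector struct {
	ReversedPeopleDict []string
}

// identityMixin mirrors the documented mixin shape.
type identityMixin struct {
	Identity           *identityDetector
	ReversedPeopleDict []string
}

// getReversedPeopleDict sketches the two-tier resolution: prefer the
// detector's dict when available and non-empty, else fall back to the
// manually-set one from Configure facts.
func (m *identityMixin) getReversedPeopleDict() []string {
	if m.Identity != nil && len(m.Identity.ReversedPeopleDict) > 0 {
		return m.Identity.ReversedPeopleDict
	}
	return m.ReversedPeopleDict
}

func main() {
	// No detector wired in yet: the fallback dict wins.
	m := identityMixin{ReversedPeopleDict: []string{"fallback@example.com"}}
	fmt.Println(m.getReversedPeopleDict()) // [fallback@example.com]

	// Once the pipeline sets a detector, its dict takes precedence.
	m.Identity = &identityDetector{ReversedPeopleDict: []string{"alice", "bob"}}
	fmt.Println(m.getReversedPeopleDict()) // [alice bob]
}
```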

type MetricResult

type MetricResult struct {
	Name        string // Machine-readable identifier (e.g., "typo_list").
	Display     string // Human-readable label (e.g., "Typo List").
	Description string // Short description of what this metric measures.
	Type        string // Data type hint (e.g., "list", "aggregate", "scalar").
	Value       any    // The computed metric value.
}

MetricResult represents a single computed metric with its metadata.

type MetricSet

type MetricSet struct {
	// contains filtered or unexported fields
}

MetricSet holds computed metrics for an analyzer and provides the AnalyzerName, ToJSON, and ToYAML methods required by the serialization chain in analyze.BaseHistoryAnalyzer.

func ComputeAllMetrics

func ComputeAllMetrics(
	analyzerName string,
	computers []func(analyze.Report) MetricResult,
	report analyze.Report,
) *MetricSet

ComputeAllMetrics evaluates each computer function against the report and collects the results into a MetricSet.
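The collection loop can be sketched by treating analyze.Report as a plain map; that representation and the trimmed-down metricResult are assumptions for illustration:

```go
package main

import "fmt"

// report stands in for analyze.Report, assumed here to be a map.
type report map[string]any

// metricResult mirrors the Name/Value core of the package's MetricResult.
type metricResult struct {
	Name  string
	Value any
}

// computeAllMetrics sketches the documented loop: run every computer
// function over the report and collect the results in order.
func computeAllMetrics(computers []func(report) metricResult, r report) []metricResult {
	out := make([]metricResult, 0, len(computers))
	for _, c := range computers {
		out = append(out, c(r))
	}
	return out
}

func main() {
	computers := []func(report) metricResult{
		func(r report) metricResult {
			return metricResult{Name: "total", Value: r["total"]}
		},
		func(r report) metricResult {
			n, _ := r["total"].(int)
			return metricResult{Name: "doubled", Value: n * 2}
		},
	}
	for _, m := range computeAllMetrics(computers, report{"total": 21}) {
		fmt.Println(m.Name, m.Value)
	}
	// total 21
	// doubled 42
}
```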

func (*MetricSet) AnalyzerName

func (ms *MetricSet) AnalyzerName() string

AnalyzerName returns the name of the analyzer that produced these metrics.

func (*MetricSet) Metrics

func (ms *MetricSet) Metrics() []MetricResult

Metrics returns the underlying metric results.

func (*MetricSet) ToJSON

func (ms *MetricSet) ToJSON() any

ToJSON returns a map keyed by metric name for JSON serialization. The structure mirrors the per-analyzer ComputedMetrics JSON tags.

func (*MetricSet) ToYAML

func (ms *MetricSet) ToYAML() any

ToYAML returns the same map as MetricSet.ToJSON for YAML serialization.

type MetricsProcessor

type MetricsProcessor struct {
	// contains filtered or unexported fields
}

MetricsProcessor handles extraction and calculation of metrics from reports.

func NewMetricsProcessor

func NewMetricsProcessor(numericKeys, countKeys []string) *MetricsProcessor

NewMetricsProcessor creates a new MetricsProcessor with configurable key types.

func (*MetricsProcessor) CalculateAverages

func (mp *MetricsProcessor) CalculateAverages() map[string]float64

CalculateAverages returns the calculated average metrics.

func (*MetricsProcessor) EstimatedStateBytes

func (mp *MetricsProcessor) EstimatedStateBytes() int64

EstimatedStateBytes returns the estimated memory usage of the processor state.

func (*MetricsProcessor) GetCount

func (mp *MetricsProcessor) GetCount(key string) int

GetCount returns a specific count total.

func (*MetricsProcessor) GetCounts

func (mp *MetricsProcessor) GetCounts() map[string]int

GetCounts returns the total counts.

func (*MetricsProcessor) GetMetric

func (mp *MetricsProcessor) GetMetric(key string) float64

GetMetric returns a specific metric total.

func (*MetricsProcessor) GetReportCount

func (mp *MetricsProcessor) GetReportCount() int

GetReportCount returns the total report count.

func (*MetricsProcessor) ProcessReport

func (mp *MetricsProcessor) ProcessReport(report analyze.Report)

ProcessReport extracts metrics from a single report.

type NameExtractor

type NameExtractor func(*node.Node) (string, bool)

NameExtractor extracts a name from a node.

type NoStateHibernation

type NoStateHibernation struct{}

NoStateHibernation is an embeddable zero-size mixin that provides no-op implementations of streaming.Hibernatable for analyzers that accumulate no working state between streaming chunks.

func (NoStateHibernation) Boot

func (NoStateHibernation) Boot() error

Boot is a no-op. Returns nil.

func (NoStateHibernation) Hibernate

func (NoStateHibernation) Hibernate() error

Hibernate is a no-op. Returns nil.

type NodeFilter

type NodeFilter struct {
	Roles    []string
	Types    []string
	MinLines int
	MaxLines int
}

NodeFilter defines criteria for filtering UAST nodes.

type ReportConfig

type ReportConfig struct {
	Format         string
	SortBy         string
	SortOrder      string
	MetricKeys     []string
	CountKeys      []string
	MaxItems       int
	IncludeDetails bool
}

ReportConfig defines configuration for report generation.

type Reporter

type Reporter struct {
	// contains filtered or unexported fields
}

Reporter provides generic reporting capabilities for analysis results.

func NewReporter

func NewReporter(config ReportConfig) *Reporter

NewReporter creates a new Reporter with configurable reporting settings.

func (*Reporter) GenerateComparisonReport

func (r *Reporter) GenerateComparisonReport(reports map[string]analyze.Report) (string, error)

GenerateComparisonReport generates a comparison report between multiple reports.

func (*Reporter) GenerateReport

func (r *Reporter) GenerateReport(report analyze.Report) (string, error)

GenerateReport generates a report in the specified format.

type ResultBuilder

type ResultBuilder struct{}

ResultBuilder provides generic result building capabilities for analyzers.

func NewResultBuilder

func NewResultBuilder() *ResultBuilder

NewResultBuilder creates a new ResultBuilder.

func (*ResultBuilder) BuildBasicResult

func (rb *ResultBuilder) BuildBasicResult(analyzerName string, totalItems int, message string) analyze.Report

BuildBasicResult creates a basic result with common fields.

func (*ResultBuilder) BuildCollectionResult

func (rb *ResultBuilder) BuildCollectionResult(
	analyzerName, collectionKey string, items []map[string]any, metrics map[string]any, message string,
) analyze.Report

BuildCollectionResult creates a result with a collection of items.

func (*ResultBuilder) BuildCustomEmptyResult

func (rb *ResultBuilder) BuildCustomEmptyResult(fields map[string]any) analyze.Report

BuildCustomEmptyResult creates an empty result with custom fields.

func (*ResultBuilder) BuildDetailedResult

func (rb *ResultBuilder) BuildDetailedResult(analyzerName string, fields map[string]any) analyze.Report

BuildDetailedResult creates a detailed result with custom fields.

func (*ResultBuilder) BuildEmptyResult

func (rb *ResultBuilder) BuildEmptyResult(analyzerName string) analyze.Report

BuildEmptyResult creates a standard empty result for when no data is found.

func (*ResultBuilder) BuildMetricResult

func (rb *ResultBuilder) BuildMetricResult(analyzerName string, metrics map[string]any, message string) analyze.Report

BuildMetricResult creates a result focused on metrics.

type SpillableDataCollector

type SpillableDataCollector struct {
	// contains filtered or unexported fields
}

SpillableDataCollector manages per-item data collection with transparent spill-to-disk when the in-memory buffer exceeds a configurable threshold. It collects per-item data keyed by identifier, with last-write-wins deduplication.

func NewSpillableDataCollector

func NewSpillableDataCollector(collectionKey, identifierKey string, threshold int) *SpillableDataCollector

NewSpillableDataCollector creates a collector that spills to disk when the in-memory item count reaches threshold. A threshold of 0 disables spilling.

func NewSpillableDataCollectorComposite

func NewSpillableDataCollectorComposite(collectionKey string, identifierKeys []string, threshold int) *SpillableDataCollector

NewSpillableDataCollectorComposite creates a collector that uses multiple keys to build a composite dedup identifier. This prevents cross-file overwrites when items from different files share the same primary name. The last key in identifierKeys is used as the sort key for GetSortedData.
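One way a composite dedup identifier might be built from several keys is sketched below; the helper name and the separator choice are assumptions, not the package's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// compositeKey joins the values of several identifier keys into one
// dedup identifier, so same-named items from different files do not
// collide. The "\x00" separator is an assumption chosen because it is
// unlikely to appear in names or paths.
func compositeKey(item map[string]any, identifierKeys []string) string {
	parts := make([]string, 0, len(identifierKeys))
	for _, k := range identifierKeys {
		parts = append(parts, fmt.Sprint(item[k]))
	}
	return strings.Join(parts, "\x00")
}

func main() {
	// Two functions named "Parse" in different files keep distinct keys.
	keys := []string{"_source_file", "name"}
	a := map[string]any{"_source_file": "a.go", "name": "Parse"}
	b := map[string]any{"_source_file": "b.go", "name": "Parse"}
	fmt.Println(compositeKey(a, keys) == compositeKey(b, keys)) // false
}
```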

func (*SpillableDataCollector) Cleanup

func (sdc *SpillableDataCollector) Cleanup()

Cleanup removes the temp spill directory. Safe to call multiple times.

func (*SpillableDataCollector) CollectFromReport

func (sdc *SpillableDataCollector) CollectFromReport(report analyze.Report)

CollectFromReport extracts per-item data from a report. In analyze.AggregationModeSummaryOnly mode, this is a no-op. Handles both legacy []map[string]any and analyze.TypedCollection values.

func (*SpillableDataCollector) EstimatedBufferBytes

func (sdc *SpillableDataCollector) EstimatedBufferBytes() int64

EstimatedBufferBytes returns the estimated memory usage of the in-memory buffer. Does not include spilled data on disk.

func (*SpillableDataCollector) GetCollectionKey

func (sdc *SpillableDataCollector) GetCollectionKey() string

GetCollectionKey returns the collection key.

func (*SpillableDataCollector) GetDataCount

func (sdc *SpillableDataCollector) GetDataCount() int

GetDataCount returns the number of items in the current in-memory buffer. This does not include spilled items.

func (*SpillableDataCollector) GetIdentifierKey

func (sdc *SpillableDataCollector) GetIdentifierKey() string

GetIdentifierKey returns the identifier key.

func (*SpillableDataCollector) GetSortedData

func (sdc *SpillableDataCollector) GetSortedData() []map[string]any

GetSortedData returns all collected items (in-memory + spilled) sorted by identifier key. Spill files are cleaned up after this call.

func (*SpillableDataCollector) SetAggregationMode

func (sdc *SpillableDataCollector) SetAggregationMode(mode analyze.AggregationMode)

SetAggregationMode sets the aggregation mode. In analyze.AggregationModeSummaryOnly, CollectFromReport becomes a no-op.

func (*SpillableDataCollector) SpillCount

func (sdc *SpillableDataCollector) SpillCount() int

SpillCount returns the number of spill files written.

func (*SpillableDataCollector) SpillDir

func (sdc *SpillableDataCollector) SpillDir() string

SpillDir returns the temp directory path, or empty if no spills occurred.

type Threshold

type Threshold[T cmp.Ordered] struct {
	Limit T
	Label string
}

Threshold defines a single classification boundary. Values >= Limit are assigned the given Label.

type ThresholdLabeler

type ThresholdLabeler []Threshold[float64]

ThresholdLabeler maps a float64 score to a string label using an ordered list of Threshold[float64] values. Thresholds must be sorted descending by Limit (highest first) — the first threshold where score >= Limit wins. A catch-all fallback can be added as the last entry with Limit set to 0 or the minimum possible score.

Example:

labeler := common.ThresholdLabeler{
    {Limit: 0.8, Label: "Excellent"},
    {Limit: 0.6, Label: "Good"},
    {Limit: 0.4, Label: "Fair"},
    {Limit: 0.0, Label: "Poor"},
}
labeler.Label(0.75) // → "Good"

func (ThresholdLabeler) Label

func (l ThresholdLabeler) Label(score float64) string

Label returns the label of the first threshold where score >= Limit. Returns "" if the labeler is empty or no threshold matches.

type TraversalConfig

type TraversalConfig struct {
	Filters     []NodeFilter
	MaxDepth    int
	IncludeRoot bool
}

TraversalConfig defines configuration for UAST traversal.

type UASTTraverser

type UASTTraverser struct {
	// contains filtered or unexported fields
}

UASTTraverser provides generic UAST traversal capabilities.

func NewUASTTraverser

func NewUASTTraverser(config TraversalConfig) *UASTTraverser

NewUASTTraverser creates a new UASTTraverser with configurable traversal settings.

func (*UASTTraverser) CountLines

func (ut *UASTTraverser) CountLines(root *node.Node) int

CountLines counts the total number of lines in a node and its children.

func (*UASTTraverser) FindNodes

func (ut *UASTTraverser) FindNodes(root *node.Node, predicate func(*node.Node) bool) []*node.Node

FindNodes returns all nodes for which predicate returns true.

func (*UASTTraverser) FindNodesByFilter

func (ut *UASTTraverser) FindNodesByFilter(root *node.Node, filter NodeFilter) []*node.Node

FindNodesByFilter finds all nodes matching the specified filter criteria.

func (*UASTTraverser) FindNodesByFilters

func (ut *UASTTraverser) FindNodesByFilters(root *node.Node, filters []NodeFilter) []*node.Node

FindNodesByFilters finds all nodes matching any of the specified filter criteria.

func (*UASTTraverser) FindNodesByRoles

func (ut *UASTTraverser) FindNodesByRoles(root *node.Node, roles []string) []*node.Node

FindNodesByRoles finds all nodes with specified roles in the UAST.

func (*UASTTraverser) FindNodesByType

func (ut *UASTTraverser) FindNodesByType(root *node.Node, nodeTypes []string) []*node.Node

FindNodesByType finds all nodes of specified types in the UAST.

func (*UASTTraverser) GetNodePosition

func (ut *UASTTraverser) GetNodePosition(n *node.Node) (startLine, endLine int)

GetNodePosition returns the position information for a node.

type ValueExtractor

type ValueExtractor func(*node.Node) (any, bool)

ValueExtractor extracts a value from a node.

Directories

Path Synopsis
Package plotpage provides HTML visualization components for analyzer output.
Package renderer provides section rendering for analyzer reports.
Package reportutil provides type-safe accessors for map[string]any fields.
Package spillstore provides generic disk-backed stores that spill accumulated data to temporary files during streaming hibernation, freeing memory between chunks while preserving the full dataset for Finalize.
Package terminal provides terminal rendering utilities for beautiful CLI output.
