Documentation ¶
Overview ¶
Package quality tracks code quality metrics (complexity, Halstead, comments, cohesion) across commit history by running static analyzers on the UAST-parsed changed files of each commit.
Index ¶
- Constants
- func AggregateCommitsToTicks(commitQuality map[string]*TickQuality, commitsByTick map[int][]gitlib.Hash) map[int]*TickQuality
- func GenerateStoreSections(reader analyze.ReportReader) ([]plotpage.Section, error)
- func RegisterPlotSections()
- func RegisterStoreTimeSeriesExtractor()
- type AggregateData
- type Analyzer
- func (a *Analyzer) ApplySnapshot(snap analyze.PlumbingSnapshot)
- func (a *Analyzer) CPUHeavy() bool
- func (a *Analyzer) Configure(facts map[string]any) error
- func (a *Analyzer) Consume(ctx context.Context, ac *analyze.Context) (analyze.TC, error)
- func (a *Analyzer) ExtractCommitTimeSeries(report analyze.Report) map[string]any
- func (a *Analyzer) Fork(n int) []analyze.HistoryAnalyzer
- func (a *Analyzer) Initialize(_ *gitlib.Repository) error
- func (a *Analyzer) Merge(_ []analyze.HistoryAnalyzer)
- func (a *Analyzer) NeedsUAST() bool
- func (a *Analyzer) ReleaseSnapshot(snap analyze.PlumbingSnapshot)
- func (a *Analyzer) ReportFromTICKs(ctx context.Context, ticks []analyze.TICK) (analyze.Report, error)
- func (a *Analyzer) SnapshotPlumbing() analyze.PlumbingSnapshot
- func (a *Analyzer) WriteToStore(ctx context.Context, ticks []analyze.TICK, w analyze.ReportWriter) error
- type ComputedMetrics
- func ComputeAllMetrics(report analyze.Report) (*ComputedMetrics, error)
- type ReportData
- func ParseReportData(report analyze.Report) (*ReportData, error)
- type TickData
- type TickQuality
- type TickStats
- type TimeSeriesEntry
Constants ¶
const (
	DimComplexityMedian  = "complexity_median"
	DimComplexityP95     = "complexity_p95"
	DimHalsteadVolMedian = "halstead_vol_median"
	DimDeliveredBugsSum  = "delivered_bugs_sum"
	DimCommentScoreMin   = "comment_score_min"
	DimCohesionMin       = "cohesion_min"
)
Dimension names used by the quality time series extractor.
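As a sketch of how these dimension names might be consumed, the snippet below builds per-dimension series keyed by tick from trimmed-down per-tick stats. The tickStats struct and extract function are illustrative stand-ins, not the package's real extractor; only the two constant names are copied from the package.

```go
package main

import "fmt"

// Dimension names copied from the package constants above.
const (
	DimComplexityMedian = "complexity_median"
	DimCohesionMin      = "cohesion_min"
)

// tickStats is trimmed to two fields for illustration; the real
// TickStats struct carries many more.
type tickStats struct {
	ComplexityMedian float64
	CohesionMin      float64
}

// extract maps per-tick stats into named dimension series, the shape
// an anomaly detector could consume.
func extract(series map[int]tickStats) map[string]map[int]float64 {
	out := map[string]map[int]float64{
		DimComplexityMedian: {},
		DimCohesionMin:      {},
	}
	for tick, s := range series {
		out[DimComplexityMedian][tick] = s.ComplexityMedian
		out[DimCohesionMin][tick] = s.CohesionMin
	}
	return out
}

func main() {
	dims := extract(map[int]tickStats{0: {ComplexityMedian: 4, CohesionMin: 0.6}})
	fmt.Println(dims[DimComplexityMedian][0]) // → 4
}
```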
const (
	KindTimeSeries = "time_series"
	KindAggregate  = "aggregate"
)
Store record kind constants.
Variables ¶
This section is empty.
Functions ¶
func AggregateCommitsToTicks ¶
func AggregateCommitsToTicks(
	commitQuality map[string]*TickQuality,
	commitsByTick map[int][]gitlib.Hash,
) map[int]*TickQuality
AggregateCommitsToTicks groups per-commit TickQuality data into per-tick TickQuality by merging all commits that belong to each tick.
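The merging step can be sketched as follows. TickQuality is trimmed to a single field and Hash is a plain string stand-in for gitlib.Hash, so this mirrors the shape of the API rather than its real implementation: all per-file slices from every commit assigned to a tick are appended into one per-tick TickQuality.

```go
package main

import "fmt"

// Simplified stand-ins for the package's TickQuality and gitlib.Hash;
// fields and types here are illustrative.
type TickQuality struct {
	Complexities []float64
}

type Hash string

// aggregateCommitsToTicks merges the per-file values of every commit
// belonging to a tick into that tick's TickQuality.
func aggregateCommitsToTicks(
	commitQuality map[string]*TickQuality,
	commitsByTick map[int][]Hash,
) map[int]*TickQuality {
	out := make(map[int]*TickQuality, len(commitsByTick))
	for tick, hashes := range commitsByTick {
		merged := &TickQuality{}
		for _, h := range hashes {
			if cq, ok := commitQuality[string(h)]; ok {
				merged.Complexities = append(merged.Complexities, cq.Complexities...)
			}
		}
		out[tick] = merged
	}
	return out
}

func main() {
	cq := map[string]*TickQuality{
		"a1": {Complexities: []float64{3, 7}},
		"b2": {Complexities: []float64{5}},
	}
	ticks := aggregateCommitsToTicks(cq, map[int][]Hash{0: {"a1", "b2"}})
	fmt.Println(len(ticks[0].Complexities)) // → 3
}
```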
func GenerateStoreSections ¶
func GenerateStoreSections(reader analyze.ReportReader) ([]plotpage.Section, error)
GenerateStoreSections reads pre-computed quality data from a ReportReader and builds the same plot sections as GenerateSections, without materializing a full Report or recomputing metrics.
func RegisterPlotSections ¶
func RegisterPlotSections()
RegisterPlotSections registers the quality plot section renderer with the analyze package.
func RegisterStoreTimeSeriesExtractor ¶
func RegisterStoreTimeSeriesExtractor()
RegisterStoreTimeSeriesExtractor registers the quality analyzer's store-based time series extractor with the anomaly package for cross-analyzer anomaly detection.
Types ¶
type AggregateData ¶
type AggregateData struct {
TotalTicks int `json:"total_ticks" yaml:"total_ticks"`
TotalFilesAnalyzed int `json:"total_files_analyzed" yaml:"total_files_analyzed"`
ComplexityMedianMean float64 `json:"complexity_median_mean" yaml:"complexity_median_mean"`
ComplexityP95Mean float64 `json:"complexity_p95_mean" yaml:"complexity_p95_mean"`
HalsteadVolMedianMean float64 `json:"halstead_vol_median_mean" yaml:"halstead_vol_median_mean"`
TotalDeliveredBugs float64 `json:"total_delivered_bugs" yaml:"total_delivered_bugs"`
CommentScoreMeanMean float64 `json:"comment_score_mean_mean" yaml:"comment_score_mean_mean"`
MinCommentScore float64 `json:"min_comment_score" yaml:"min_comment_score"`
CohesionMeanMean float64 `json:"cohesion_mean_mean" yaml:"cohesion_mean_mean"`
MinCohesion float64 `json:"min_cohesion" yaml:"min_cohesion"`
}
AggregateData contains overall summary statistics.
type Analyzer ¶
type Analyzer struct {
*analyze.BaseHistoryAnalyzer[*ComputedMetrics]
common.NoStateHibernation
UAST *plumbing.UASTChangesAnalyzer
Ticks *plumbing.TicksSinceStart
// contains filtered or unexported fields
}
Analyzer tracks code quality metrics across commit history by running static analyzers on UAST-parsed changed files per commit.
func (*Analyzer) ApplySnapshot ¶
func (a *Analyzer) ApplySnapshot(snap analyze.PlumbingSnapshot)
ApplySnapshot restores plumbing state from a previously captured snapshot.
func (*Analyzer) CPUHeavy ¶
func (a *Analyzer) CPUHeavy() bool
CPUHeavy returns true because quality analysis performs UAST processing on every commit.
func (*Analyzer) Consume ¶
func (a *Analyzer) Consume(ctx context.Context, ac *analyze.Context) (analyze.TC, error)
Consume processes a single commit, running static analyzers on each changed file's UAST. It returns a TC with the per-commit *TickQuality as payload.
func (*Analyzer) ExtractCommitTimeSeries ¶
func (a *Analyzer) ExtractCommitTimeSeries(report analyze.Report) map[string]any
ExtractCommitTimeSeries implements analyze.CommitTimeSeriesProvider. It converts per-commit TickQuality data into summary statistics for the unified time series output, covering complexity, Halstead, comments, and cohesion.
func (*Analyzer) Fork ¶
func (a *Analyzer) Fork(n int) []analyze.HistoryAnalyzer
Fork creates independent copies of the analyzer for parallel processing.
func (*Analyzer) Initialize ¶
func (a *Analyzer) Initialize(_ *gitlib.Repository) error
Initialize prepares the analyzer for processing commits.
func (*Analyzer) Merge ¶
func (a *Analyzer) Merge(_ []analyze.HistoryAnalyzer)
Merge is a no-op. Per-commit results are emitted as TCs and collected by the framework, not accumulated inside the analyzer.
func (*Analyzer) ReleaseSnapshot ¶
func (a *Analyzer) ReleaseSnapshot(snap analyze.PlumbingSnapshot)
ReleaseSnapshot releases UAST trees owned by the snapshot.
func (*Analyzer) ReportFromTICKs ¶
func (a *Analyzer) ReportFromTICKs(ctx context.Context, ticks []analyze.TICK) (analyze.Report, error)
ReportFromTICKs converts aggregated TICKs into a Report.
func (*Analyzer) SnapshotPlumbing ¶
func (a *Analyzer) SnapshotPlumbing() analyze.PlumbingSnapshot
SnapshotPlumbing captures the current plumbing output state.
func (*Analyzer) WriteToStore ¶
func (a *Analyzer) WriteToStore(ctx context.Context, ticks []analyze.TICK, w analyze.ReportWriter) error
WriteToStore implements analyze.StoreWriter. It extracts per-commit quality data from TICKs, computes metrics, and streams pre-computed results as individual records:
- "time_series": per-tick TimeSeriesEntry records (sorted by tick).
- "aggregate": single AggregateData record.
type ComputedMetrics ¶
type ComputedMetrics struct {
TimeSeries []TimeSeriesEntry `json:"time_series" yaml:"time_series"`
Aggregate AggregateData `json:"aggregate" yaml:"aggregate"`
}
ComputedMetrics holds all computed metric results for the quality analyzer.
func ComputeAllMetrics ¶
func ComputeAllMetrics(report analyze.Report) (*ComputedMetrics, error)
ComputeAllMetrics runs all quality metrics and returns the results.
type ReportData ¶
type ReportData struct {
TickQuality map[int]*TickQuality
TickBounds map[int]analyze.TickBounds
}
ReportData is the parsed input data for quality metrics computation.
func ParseReportData ¶
func ParseReportData(report analyze.Report) (*ReportData, error)
ParseReportData extracts ReportData from an analyzer report. It expects the canonical report format, with commit_quality and commits_by_tick entries.
type TickData ¶
type TickData struct {
// CommitQuality maps commit hash (hex) to per-commit TickQuality.
CommitQuality map[string]*TickQuality
}
TickData is the per-tick aggregated payload for the quality analyzer. It holds per-commit quality for the canonical report format.
type TickQuality ¶
type TickQuality struct {
// Per-file complexity values.
Complexities []float64 // Cyclomatic complexity per file.
Cognitives []float64 // Cognitive complexity per file.
MaxComplexities []int // Max single-function complexity per file.
Functions []int // Function count per file.
// Per-file Halstead values.
HalsteadVolumes []float64
HalsteadEfforts []float64
DeliveredBugs []float64
// Per-file comment/doc values.
CommentScores []float64
DocCoverages []float64
// Per-file cohesion values.
CohesionScores []float64
}
TickQuality holds per-file quality metric values for a single tick. Values are appended per-file during Consume; statistics are computed at output time.
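The append-then-summarize pattern can be sketched as below: values accumulate per file, and order statistics such as the median and p95 are derived only when output is produced. The nearest-rank percentile rule here is an assumption; the package may interpolate differently.

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the value at fraction p (0..1) of a sorted copy
// of vs, using nearest-rank selection. The input slice is not mutated.
func percentile(vs []float64, p float64) float64 {
	if len(vs) == 0 {
		return 0
	}
	s := append([]float64(nil), vs...)
	sort.Float64s(s)
	idx := int(p * float64(len(s)-1))
	return s[idx]
}

func main() {
	// Values appended per file during Consume.
	complexities := []float64{1, 2, 3, 10, 40}
	fmt.Println(percentile(complexities, 0.5))  // median → 3
	fmt.Println(percentile(complexities, 0.95)) // p95 → 10
}
```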
type TickStats ¶
type TickStats struct {
// Complexity.
ComplexityMean float64 `json:"complexity_mean" yaml:"complexity_mean"`
ComplexityMedian float64 `json:"complexity_median" yaml:"complexity_median"`
ComplexityP95 float64 `json:"complexity_p95" yaml:"complexity_p95"`
ComplexityMax float64 `json:"complexity_max" yaml:"complexity_max"`
// Halstead.
HalsteadVolMean float64 `json:"halstead_vol_mean" yaml:"halstead_vol_mean"`
HalsteadVolMedian float64 `json:"halstead_vol_median" yaml:"halstead_vol_median"`
HalsteadVolP95 float64 `json:"halstead_vol_p95" yaml:"halstead_vol_p95"`
HalsteadVolSum float64 `json:"halstead_vol_sum" yaml:"halstead_vol_sum"`
// Delivered Bugs.
DeliveredBugsSum float64 `json:"delivered_bugs_sum" yaml:"delivered_bugs_sum"`
// Comments.
CommentScoreMean float64 `json:"comment_score_mean" yaml:"comment_score_mean"`
CommentScoreMin float64 `json:"comment_score_min" yaml:"comment_score_min"`
DocCoverageMean float64 `json:"doc_coverage_mean" yaml:"doc_coverage_mean"`
// Cohesion.
CohesionMean float64 `json:"cohesion_mean" yaml:"cohesion_mean"`
CohesionMin float64 `json:"cohesion_min" yaml:"cohesion_min"`
// Bookkeeping.
FilesAnalyzed int `json:"files_analyzed" yaml:"files_analyzed"`
TotalFunctions int `json:"total_functions" yaml:"total_functions"`
MaxComplexity int `json:"max_complexity" yaml:"max_complexity"`
}
TickStats holds computed statistics for a single tick.
type TimeSeriesEntry ¶
type TimeSeriesEntry struct {
Tick int `json:"tick" yaml:"tick"`
StartTime string `json:"start_time,omitempty" yaml:"start_time,omitempty"`
EndTime string `json:"end_time,omitempty" yaml:"end_time,omitempty"`
Stats TickStats `json:"stats" yaml:"stats"`
}
TimeSeriesEntry holds per-tick quality data for the time series output.