agent-readyness

command module v0.0.7
Published: Feb 12, 2026 · License: MIT · Imports: 1 · Imported by: 0

README
Agent Readiness Score (ARS)

Measure how ready your codebase is for AI agents


Quick Start • Features • Installation • Usage • Documentation • Contributing


AI agents are already writing, refactoring, and debugging code at scale. But they don't fail gracefully like human developers; they fail catastrophically. The properties that make code "AI-friendly" are similar to those that make it "human-friendly," but agents have zero tolerance for deviation (Borg et al., 2026).

Humans compensate for bad code with intuition, tribal knowledge, and pattern recognition. Agents cannot. Where a senior developer slows down, an agent breaks.

The Bottom Line:

Investing in code quality is the highest-leverage action you can take to enable AI agent productivity.

You could spend $10M/year on the best LLM API credits. Or you could refactor your God Classes, add architecture docs, and improve test coverage, and get better results with cheaper models.

The research is clear:

  • ✅ Clean code reduces agent break rates by 7-15 percentage points
  • ✅ Modular architecture enables 4.5x better context retrieval
  • ✅ Documentation boosts success rates by 32.8%
  • ✅ Test-driven workflows achieve 82.8% task completion

Agent Readiness isn't a nice-to-have: it's the difference between an agent that ships code and one that creates busywork.

📖 Read the detailed research evidence →


Quick Start

Install
go install github.com/ingo-eichhorst/agent-readyness@latest

Make sure $GOPATH/bin (usually ~/go/bin) is in your PATH.

Run Your First Scan
# Scan current directory (C4 & C7 auto-enabled if Claude CLI detected)
ars scan .

# Generate beautiful HTML report
ars scan . --output-html report.html

# Disable LLM features for faster scans (CI/CD)
ars scan . --no-llm

That's it! You'll get a comprehensive analysis of your codebase's agent-readiness across 7 research-backed categories.


Features

Research-Backed Analysis

7 categories, 38+ metrics, all grounded in peer-reviewed research with inline citations

Beautiful Reports

Interactive HTML reports with charts, expandable sections, and mobile-responsive design

AI-Powered Insights

Optional LLM analysis for documentation quality and live agent evaluation

Multi-Language Support

Auto-detects and analyzes Go, Python, and TypeScript codebases

Actionable Recommendations

Ranked improvement suggestions with impact scores and effort estimates

Baseline Comparison

Track progress over time by comparing against previous scans


πŸ“ Architecture

Agent Readiness Score follows a pipeline architecture with pluggable analyzers. For detailed architecture documentation and implementation guidance aimed at AI agents, see CLAUDE.md and agent.md.
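The pipeline-with-pluggable-analyzers idea can be sketched in a few lines of Go. This is illustrative only: names like Analyzer, Result, and runPipeline are assumptions for this sketch, not the tool's actual API.

```go
package main

import "fmt"

// Result is a hypothetical per-category score; the real tool's types differ.
type Result struct {
	Category string
	Score    float64
}

// Analyzer is an illustrative plug-in interface: each category (C1..C7)
// would implement Analyze over some representation of the codebase.
type Analyzer interface {
	Analyze(path string) Result
}

type codeQuality struct{}

func (codeQuality) Analyze(path string) Result {
	return Result{Category: "C1 Code Quality", Score: 7.2} // placeholder score
}

// runPipeline applies every registered analyzer in order and collects results.
func runPipeline(path string, analyzers []Analyzer) []Result {
	var results []Result
	for _, a := range analyzers {
		results = append(results, a.Analyze(path))
	}
	return results
}

func main() {
	for _, r := range runPipeline(".", []Analyzer{codeQuality{}}) {
		fmt.Printf("%s: %.1f\n", r.Category, r.Score)
	}
}
```

The payoff of this shape is that adding a new category means adding one type that satisfies the interface, without touching the pipeline itself.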


Prerequisites

Required
  • Go 1.25+ - The programming language runtime

📦 Install Go

    macOS:

    brew install go
    # OR download from: https://go.dev/dl/
    

    Linux:

wget https://go.dev/dl/go1.25.0.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.25.0.linux-amd64.tar.gz
    export PATH=$PATH:/usr/local/go/bin
    

    Windows:

    # Download and run installer from: https://go.dev/dl/
    # Or via Chocolatey:
    choco install golang
    

    Verify installation: go version

Optional (for LLM Features)
  • Claude Code CLI - For advanced documentation analysis (C4) and live agent evaluation (C7)

🤖 Install Claude Code CLI

    macOS / Linux:

    curl -fsSL https://claude.ai/install.sh | bash
    

    Windows (PowerShell):

    irm https://claude.ai/install.ps1 | iex
    

    Setup:

    # Complete one-time OAuth authentication
    claude auth login
    
    # Verify installation
    claude --version
    

    Note: No API key configuration needed - the CLI handles authentication automatically.


Installation

go install github.com/ingo-eichhorst/agent-readyness@latest

The binary will be installed to $GOPATH/bin (usually ~/go/bin). Make sure this is in your PATH.

Build from Source
git clone https://github.com/ingo-eichhorst/agent-readyness.git
cd agent-readyness
go build -o ars .
Pre-built Binaries

Pre-built binaries will be published on the releases page in a future release.


Usage

Basic Commands
# Scan current directory
ars scan .

# Scan specific directory
ars scan /path/to/project

# Generate interactive HTML report
ars scan . --output-html report.html

# JSON output for CI/CD integration
ars scan . --json > results.json
LLM Features

ARS includes optional AI-powered analysis that automatically enables when Claude CLI is detected:

# LLM features auto-enabled when Claude CLI is available
ars scan .
# Output: "Claude CLI detected (claude 2.x.x) - LLM features enabled"
# Both C4 (documentation analysis) and C7 (agent evaluation) run automatically

# Explicitly disable LLM features for faster scans (CI/CD)
ars scan . --no-llm
# Output: "LLM features disabled (--no-llm flag)"
Debug Mode

When investigating C7 Agent Evaluation scores, use debug mode:

# Show debug output (prompts, responses, scoring traces)
ars scan . --debug

# Save responses for offline analysis (no Claude CLI calls on replay)
ars scan . --debug --debug-dir ./c7-debug

# Pipe output to files
ars scan . --debug --json > results.json 2>debug.log

πŸ” What Gets Analyzed

Agent Readiness Score evaluates your codebase across 7 research-backed categories:

C1 · Code Quality
    Structural complexity and maintainability patterns that affect agent comprehension.
    Key metrics: cyclomatic complexity, function length, code duplication, coupling metrics

C2 · Semantics
    Explicitness of types, names, and intentions that help agents understand purpose.
    Key metrics: type annotation coverage, naming consistency, magic number detection, interface clarity

C3 · Architecture
    Structural organization and dependency patterns that enable navigation.
    Key metrics: directory depth, module coupling, circular dependencies, dead code detection

C4 · Documentation
    Quality and completeness of human- and machine-readable documentation.
    Key metrics: README presence & clarity, comment density, API documentation, example quality (AI-evaluated)

C5 · Temporal Dynamics
    Change patterns and stability indicators from git history.
    Key metrics: code churn rate, temporal coupling, hotspot identification, change frequency

C6 · Testing
    Test infrastructure that enables safe agent modifications.
    Key metrics: test-to-source ratio, code coverage, test isolation, assertion quality

C7 · Agent Evaluation
    Live AI agent performance on real tasks (requires Claude CLI).
    Key metrics: task execution consistency, code comprehension, cross-file navigation, documentation accuracy detection
Score Tiers

  Score     Tier               Meaning
  8.0-10.0  🟢 Agent-Ready     Agents work efficiently with minimal supervision
  6.0-7.9   🟡 Agent-Assisted  Agents are productive with human oversight
  4.0-5.9   🟠 Agent-Limited   Agents struggle and require significant guidance
  0.0-3.9   🔴 Agent-Hostile   Agents fail frequently or produce incorrect results
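The tier boundaries map directly to a small threshold function. A minimal sketch (the function name and signature are illustrative, not the tool's code):

```go
package main

import "fmt"

// tier maps an overall 0-10 score to the tier names in the table above.
// Boundaries are inclusive at the low end of each band.
func tier(score float64) string {
	switch {
	case score >= 8.0:
		return "Agent-Ready"
	case score >= 6.0:
		return "Agent-Assisted"
	case score >= 4.0:
		return "Agent-Limited"
	default:
		return "Agent-Hostile"
	}
}

func main() {
	fmt.Println(tier(6.6)) // prints "Agent-Assisted"
}
```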


🤝 Contributing

We welcome contributions from both humans and AI agents! 🤖🤝👥

For Human Contributors
  1. Check out issues labeled good first issue
  2. Read CONTRIBUTING.md for setup and workflow
  3. Submit PRs following Conventional Commits
For AI Coding Agents
  1. Read AGENTS.md for precise technical boundaries
  2. Follow exact commands and code patterns specified
  3. Complement with CLAUDE.md for architecture details
Development Setup
# Clone and build
git clone https://github.com/ingo-eichhorst/agent-readyness.git
cd agent-readyness
go build -o ars .

# Run tests
go test ./...

# Run tests with coverage
go test ./... -coverprofile=cover.out
go tool cover -html=cover.out

# Update coverage badge (run this script to see current coverage)
./scripts/update-coverage-badge.sh

# Format code
gofmt -w .

# Run scan on the project itself
./ars scan .

📊 Example Output

Agent Readiness Score: 6.6 / 10
Tier: Agent-Assisted 🟑
────────────────────────────────────────
C1: Code Quality             7.2 / 10
C2: Semantic Explicitness    8.1 / 10
C3: Architectural Design     5.4 / 10
C4: Documentation Quality    4.8 / 10
C5: Temporal Dynamics        7.3 / 10
C6: Testing Infrastructure   9.1 / 10
C7: Agent Evaluation         8.9 / 10
────────────────────────────────────────

Top Recommendations
══════════════════════════════════════
  1. Improve Documentation Coverage
     Impact: +1.2 points
     Effort: Medium
     Action: Add missing README sections and API docs

  2. Reduce Architectural Complexity
     Impact: +0.8 points
     Effort: High
     Action: Break down large modules and reduce coupling

Star History

If you find this project useful, please consider giving it a star! ⭐


License

This project is licensed under the MIT License - see the LICENSE file for details.


Acknowledgments

Built with research from leading institutions and grounded in peer-reviewed publications. See RESEARCH.md for 58+ academic citations spanning:

  • Software Engineering (McCabe, Fowler, Martin, Gamma)
  • Programming Language Theory (Pierce, Cardelli, Wright)
  • Empirical Software Studies (Nagappan, Bird, Hassan, Mockus)
  • AI & LLM Research (Jimenez, Kapoor, Ouyang, Haroon, Borg)

Made with ❤️ for the future of AI-assisted development

Report Bug • Request Feature

Directories

  internal/agent
      Package agent provides C7 agent evaluation infrastructure for headless Claude Code execution.
  internal/agent/metrics
      Package metrics provides 5 MECE agent evaluation metrics for C7.
  internal/analyzer
      Package analyzer provides code analysis implementations for the ARS pipeline.
  internal/analyzer/c3_architecture
      Package c3 analyzes C3 (Architecture) metrics for agent-readiness.
  internal/analyzer/c4_documentation
      Package analyzer provides code analysis implementations for the ARS pipeline.
  internal/analyzer/c5_temporal
      Package analyzer provides code analysis implementations for the ARS pipeline.
  internal/analyzer/c6_testing
      Package analyzer provides code analysis implementations for the ARS pipeline.
  internal/analyzer/shared
      Package shared provides common utilities for analyzer implementations.
  internal/config
      Package config handles .arsrc.yml project-level configuration.
  internal/output
      Package output renders analysis results to various output formats (terminal, JSON, HTML, badges).
  internal/parser
      Package parser provides Go package loading using go/packages for type-aware AST parsing, type information, and import graph resolution.
  internal/scoring
      Package scoring converts raw analysis metrics to normalized scores (1-10 scale) using piecewise linear interpolation over configurable breakpoints.
  pkg/version
      Package version provides the ARS tool version.
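The scoring package's synopsis mentions piecewise linear interpolation over configurable breakpoints; the general technique looks like this. The Breakpoint type, the normalize function, and the sample breakpoints are all assumptions for this sketch, not the package's actual API.

```go
package main

import "fmt"

// Breakpoint maps a raw metric value to a normalized score.
type Breakpoint struct {
	Raw   float64
	Score float64
}

// normalize interpolates linearly between the two breakpoints that
// bracket raw; values outside the range clamp to the end scores.
// bps must be sorted by Raw ascending.
func normalize(raw float64, bps []Breakpoint) float64 {
	if raw <= bps[0].Raw {
		return bps[0].Score
	}
	last := bps[len(bps)-1]
	if raw >= last.Raw {
		return last.Score
	}
	for i := 1; i < len(bps); i++ {
		if raw <= bps[i].Raw {
			lo, hi := bps[i-1], bps[i]
			t := (raw - lo.Raw) / (hi.Raw - lo.Raw)
			return lo.Score + t*(hi.Score-lo.Score)
		}
	}
	return last.Score
}

func main() {
	// Hypothetical breakpoints: low cyclomatic complexity scores high.
	bps := []Breakpoint{{1, 10}, {10, 6}, {25, 1}}
	fmt.Println(normalize(5.5, bps)) // halfway between raw 1 and raw 10
}
```

The appeal of breakpoints over a fixed formula is that each metric's curve can be tuned independently in configuration without code changes.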
