Promptext-Notes


A Go-based CLI tool that generates intelligent, context-aware release notes by combining git history analysis with code context extraction using the promptext library.

Features

  • πŸ“Š Git History Analysis: Automatically analyzes commits since the last tag
  • πŸ” Code Context Extraction: Uses promptext to extract relevant code changes with token-aware analysis
  • πŸ“ Conventional Commits: Categorizes changes by type (feat, fix, docs, breaking, etc.)
  • πŸ€– Integrated AI Generation: Generate AI-enhanced changelogs directly with --generate flag
  • ✨ 2-Stage Polish Workflow: Combine accurate discovery with customer-friendly polish for premium quality
  • 🚫 Auto-Exclude-Meta (v0.8.0): Automatically excludes CI configs, CHANGELOG, README from AI context
  • 🌐 Multi-Provider Support: Works with OpenRouter (200+ models), Anthropic, OpenAI, Cerebras, Groq, and local Ollama
  • βš™οΈ YAML Configuration: Customize behavior with .promptext-notes.yml config file
  • πŸ“‹ Keep a Changelog Format: Produces standardized markdown output
  • ⚑ Fast & Lightweight: Single binary with no runtime dependencies (except Git)
  • πŸ”Œ Easy Integration: Add to any repository with GitHub Actions (See Guide)
  • πŸ†“ Free Options: Use Cerebras, Groq, or local Ollama (no API cost)

Installation

Using go install
go install github.com/1broseidon/promptext-notes/cmd/promptext-notes@latest
From Source
git clone https://github.com/1broseidon/promptext-notes.git
cd promptext-notes
go build -o promptext-notes ./cmd/promptext-notes
sudo mv promptext-notes /usr/local/bin/
Download Pre-built Binary

Download the latest release from the releases page.

Usage

Basic Release Notes

Generate release notes for a specific version:

promptext-notes --version v1.0.0

Output:

## [v1.0.0] - 2025-11-10

### Added
- New feature for code analysis
- Support for additional file types

### Fixed
- Bug in token counting
- Edge case in file filtering

### Statistics
- **Files changed**: 12
- **Commits**: 8
- **Context analyzed**: ~7,850 tokens
AI-Enhanced Release Notes (Integrated)

NEW! Generate an AI-enhanced changelog directly with a single command:

# Using Cerebras (default with config file - free tier, best for large prompts)
export CEREBRAS_API_KEY="your-key-here"
promptext-notes --generate --version v1.0.0

# Or specify provider inline
promptext-notes --generate --provider groq --model llama-3.3-70b-versatile --version v1.0.0

The --generate flag will:

  1. Analyze git history and extract code context
  2. Send the comprehensive prompt to your AI provider
  3. Return polished, production-ready release notes

Legacy Method: Generate a prompt to paste into an LLM manually:

promptext-notes --version v1.0.0 --ai-prompt > prompt.txt

Then paste the contents of prompt.txt into Claude, ChatGPT, or your preferred LLM.

2-Stage Polish Workflow ✨

NEW! Combine the accuracy of technical discovery models with the polish of customer-facing language models for premium quality changelogs.

Quick Start:

# Enable polish with CLI flag (uses config from .promptext-notes.yml)
promptext-notes --generate --polish --version v1.0.0

# Or configure in .promptext-notes.yml

How it works:

  1. Stage 1 (Discovery): Uses a model optimized for code understanding to analyze all changes
  2. Stage 2 (Polish): Refines the output into polished, customer-friendly language

Recommended Setup (from benchmarks):

ai:
  provider: cerebras
  model: zai-glm-4.6  # Stage 1 (Discovery) - FREE, 10/10 accuracy

  polish:
    enabled: true  # Or use --polish CLI flag
    polish_model: "anthropic/claude-sonnet-4.5"  # Stage 2 - Premium polish
    polish_provider: "openrouter"  # Different provider for polish stage
    polish_api_key_env: "OPENROUTER_API_KEY"
    polish_max_tokens: 4000
    polish_temperature: 0.3

Benefits:

  • βœ… GLM-4.6: Best free model (10/10 accuracy, catches all technical details)
  • βœ… Claude Sonnet 4.5: Premium polish (8/10 quality, minimal hallucination)
  • βœ… Total cost: ~$0.004/run (discovery is FREE, polish is cheap)
  • βœ… Can mix FREE models (Cerebras) with paid polish (OpenRouter)
  • βœ… Auto-exclude-meta (v0.8.0): Keeps changelogs focused on user-facing changes

Cost Analysis:

  • Single-stage (GLM-4.6): $0 (FREE)
  • 2-stage (GLM + Claude Sonnet): ~$0.004/run
  • 2-stage (GLM + Haiku): ~$0.001/run (cheaper but less accurate)
Custom Date Range

Specify a starting tag/commit:

promptext-notes --version v1.0.0 --since v0.5.0
Output to File

Write release notes to a file:

promptext-notes --version v1.0.0 --output RELEASE_NOTES.md
Append to CHANGELOG
promptext-notes --version v1.0.0 --output release-notes.md
cat release-notes.md >> CHANGELOG.md
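
Keep a Changelog lists the newest release first, so you may prefer to prepend the new entry rather than append it. A minimal shell variation (using the same file names as above) is:

promptext-notes --version v1.0.0 --output release-notes.md
# Put the new entry at the top of the changelog instead of the bottom
cat release-notes.md CHANGELOG.md > CHANGELOG.tmp && mv CHANGELOG.tmp CHANGELOG.md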

Configuration

You can configure promptext-notes using a YAML configuration file. Copy .promptext-notes.example.yml to .promptext-notes.yml and customize:

version: "1"

ai:
  provider: cerebras      # cerebras, anthropic, openai, groq, openrouter, ollama
  model: zai-glm-4.6      # Best free model (10/10 accuracy)
  api_key_env: CEREBRAS_API_KEY
  max_tokens: 8000
  temperature: 0.3
  timeout: 30s

  polish:
    enabled: false  # Enable with --polish flag
    polish_model: "anthropic/claude-sonnet-4.5"
    polish_provider: "openrouter"
    polish_api_key_env: "OPENROUTER_API_KEY"

output:
  format: keepachangelog
  sections: [breaking, added, changed, fixed, docs]

filters:
  files:
    auto_exclude_meta: true  # NEW in v0.8.0 - excludes CI, CHANGELOG, README
    include: ["*.go", "*.md", "*.yml"]
    exclude: ["*_test.go", "vendor/*"]

See CONFIGURATION.md for full configuration options and USAGE.md for usage examples.
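
For Go readers, here is a minimal sketch of a struct shape that the example file above could unmarshal into with gopkg.in/yaml.v3. The package layout and field names are assumptions for illustration; the actual internal/config types may differ.

// Sketch only: mirrors the example .promptext-notes.yml above.
package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

type Config struct {
	Version string `yaml:"version"`
	AI      struct {
		Provider    string  `yaml:"provider"`
		Model       string  `yaml:"model"`
		APIKeyEnv   string  `yaml:"api_key_env"`
		MaxTokens   int     `yaml:"max_tokens"`
		Temperature float64 `yaml:"temperature"`
		Timeout     string  `yaml:"timeout"` // e.g. "30s", parsed with time.ParseDuration
		Polish      struct {
			Enabled         bool   `yaml:"enabled"`
			PolishModel     string `yaml:"polish_model"`
			PolishProvider  string `yaml:"polish_provider"`
			PolishAPIKeyEnv string `yaml:"polish_api_key_env"`
		} `yaml:"polish"`
	} `yaml:"ai"`
	Output struct {
		Format   string   `yaml:"format"`
		Sections []string `yaml:"sections"`
	} `yaml:"output"`
	Filters struct {
		Files struct {
			AutoExcludeMeta bool     `yaml:"auto_exclude_meta"`
			Include         []string `yaml:"include"`
			Exclude         []string `yaml:"exclude"`
		} `yaml:"files"`
	} `yaml:"filters"`
}

// Load reads and parses a config file (sketch).
func Load(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}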

Flags

Flag | Type | Default | Description
--version | string | "" | Version to generate notes for (e.g., v0.7.4)
--since | string | "" | Generate notes since this tag (auto-detects if empty)
--output | string | "" | Output file path (stdout if empty)
--generate | bool | false | NEW! Generate AI-enhanced changelog directly
--polish | bool | false | NEW! Enable 2-stage polish workflow (discovery + refinement)
--provider | string | "" | AI provider (anthropic, openai, cerebras, groq, openrouter, ollama)
--model | string | "" | AI model to use (overrides config)
--exclude-files | string | "" | Comma-separated files to exclude from AI context (e.g., CHANGELOG.md,README.md)
--config | string | ".promptext-notes.yml" | Configuration file path
--quiet | bool | false | Suppress progress messages
--ai-prompt | bool | false | Generate AI prompt only (legacy mode)
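
The flags compose freely. As an illustrative example that uses only the flags documented above (version numbers are placeholders, and --polish still reads its stage-2 settings from .promptext-notes.yml):

# Polished, AI-enhanced notes since v0.9.0, written to a file, with meta files excluded
promptext-notes --generate --polish \
  --provider openrouter --model anthropic/claude-sonnet-4 \
  --version v1.0.0 --since v0.9.0 \
  --exclude-files CHANGELOG.md,README.md \
  --output RELEASE_NOTES.md --quiet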

How It Works

  1. Git Analysis: Retrieves changed files and commit messages since the last tag (or specified tag)
  2. Context Extraction: Uses promptext to extract code context from changed files (.go, .md, .yml, .yaml)
  3. Categorization: Parses commit messages using conventional commit format (see the sketch after this list)
  4. Generation: Produces either:
    • Basic Mode: Keep a Changelog formatted release notes
    • AI Mode: Comprehensive prompt with full code context for LLM enhancement

Automated Release Notes (AI-Enhanced)

This project includes an automated workflow that generates AI-enhanced release notes using multiple AI providers: OpenAI, Anthropic, Cerebras, or Groq.

πŸ“š Want to use this in your own repository? See the Complete Integration Guide for step-by-step instructions on adding automated AI-enhanced release notes to any project.

How It Works

When you push a version tag (e.g., v1.0.0), the workflow automatically:

  1. βœ… Builds the promptext-notes binary
  2. πŸ” Analyzes git history and extracts code context
  3. πŸ€– Sends the prompt to your chosen AI provider for enhancement
  4. πŸ“ Creates a GitHub release with polished notes
  5. πŸ“‹ Updates CHANGELOG.md in the repository
Supported AI Providers
Provider | Default Model | Context Limit | Cost | Setup URL
OpenRouter πŸ†• | openai/gpt-4o-mini | Varies | πŸ’° Pay-as-you-go | openrouter.ai/keys
Cerebras | zai-glm-4.6 | 65K tokens | βœ… Free | cerebras.ai
Groq | llama-3.3-70b-versatile | 32K tokens | βœ… Free | console.groq.com
Ollama | llama3.2 | Varies | βœ… Free (Local) | ollama.com
OpenAI | gpt-4o-mini | 128K tokens | πŸ’° $0.15/$0.60 per 1M | platform.openai.com
Anthropic | claude-haiku-4-5 | 200K tokens | πŸ’° $0.80/$4.00 per 1M | console.anthropic.com
Setup
  1. Get an API key from your chosen provider (see Setup URL column above)

  2. Add API key(s) to GitHub Secrets:

    • Go to your repository β†’ Settings β†’ Secrets and variables β†’ Actions
    • Click "New repository secret"
    • Add one or more of these secrets:
      • OPENROUTER_API_KEY - For OpenRouter (access 200+ models through one API)
      • CEREBRAS_API_KEY - For Cerebras (free, best for large prompts)
      • GROQ_API_KEY - For Groq (free, good for smaller prompts)
      • OPENAI_API_KEY - For OpenAI
      • ANTHROPIC_API_KEY - For Anthropic
  3. (Optional) Configure models via GitHub Variables:

    • Go to your repository β†’ Settings β†’ Secrets and variables β†’ Actions β†’ Variables tab
    • Add variables to customize models (otherwise defaults are used):
      • OPENAI_MODEL (default: gpt-5-nano)
      • ANTHROPIC_MODEL (default: claude-haiku-4-5)
      • CEREBRAS_MODEL (default: gpt-oss-120b)
      • GROQ_MODEL (default: llama-3.3-70b-versatile)
  4. Push a version tag:

    git tag v1.0.0
    git push origin v1.0.0
    

The workflow will automatically generate and publish AI-enhanced release notes using Cerebras (default, free) or your configured provider!

NEW! Use the integrated --generate flag for one-command AI-enhanced changelogs:

# Using Anthropic (create .promptext-notes.yml config first)
export ANTHROPIC_API_KEY="your-key-here"
promptext-notes --generate --version v1.0.0

# Or specify provider inline
export OPENAI_API_KEY="your-key-here"
promptext-notes --generate --provider openai --model gpt-4o-mini --version v1.0.0

# Using OpenRouter (access 200+ models through one API)
export OPENROUTER_API_KEY="your-key-here"
promptext-notes --generate --provider openrouter --model anthropic/claude-sonnet-4 --version v1.0.0

# Using Cerebras (default, free tier, best for large prompts)
export CEREBRAS_API_KEY="your-key-here"
promptext-notes --generate --version v1.0.0

# Using Groq (free tier, good for smaller prompts)
export GROQ_API_KEY="your-key-here"
promptext-notes --generate --provider groq --version v1.0.0

# Using Ollama (local, free, no API key needed!)
# First: ollama pull llama3.2
promptext-notes --generate --provider ollama --model llama3.2 --version v1.0.0
Legacy Script Method

You can also use the shell script (it will be deprecated in a future version):

# Using Cerebras (default)
export CEREBRAS_API_KEY="your-key-here"
./scripts/generate-release-notes.sh v1.0.0

# Using OpenAI
export OPENAI_API_KEY="your-key-here"
./scripts/generate-release-notes.sh v1.0.0 v0.9.0 openai
Available Models by Provider

OpenRouter (pay-as-you-go, access 200+ models):

  • openai/gpt-4o-mini (default) - Cost-effective OpenAI model
  • anthropic/claude-sonnet-4 - Latest Claude model
  • google/gemini-pro-1.5 - Google's Gemini
  • meta-llama/llama-3.3-70b-instruct - Open source Llama
  • openai/gpt-4o - Premium OpenAI model
  • And 200+ more models! See openrouter.ai/models

Cerebras (free, ultra-fast, best for large prompts):

  • zai-glm-4.6 (default) - Multilingual support, best for large context
  • gpt-oss-120b - 120B params, best free quality
  • llama-3.3-70b - 70B params, good balance

Groq (free, fast, best for smaller prompts):

  • llama-3.3-70b-versatile (default) - Best for general use, 32K context
  • mixtral-8x7b-32768 - Good for technical content
  • llama-3.1-70b-versatile - Alternative option
  • moonshotai/kimi-k2-instruct-0905 - Kimi K2 model with 128K context (requires paid tier for large prompts)

OpenAI (paid, 2025 models):

  • gpt-5-nano (default) - Most economical ($0.05/$0.40 per 1M tokens)
  • gpt-5-mini - Good balance ($0.25/$2.00 per 1M tokens)
  • gpt-5 - Best quality ($1.25/$10 per 1M tokens)

Anthropic (paid, 2025 models):

  • claude-haiku-4-5 (default) - Best value ($1/$5 per 1M, 73.3% SWE-bench)
  • claude-sonnet-4-5 - Best coding model (frontier performance)
  • claude-opus-4-1 - Highest reasoning capability

CI/CD Integration

GitHub Actions (Basic)
- name: Generate Release Notes
  run: |
    go install github.com/1broseidon/promptext-notes/cmd/promptext-notes@latest
    promptext-notes --version ${{ github.ref_name }} --output RELEASE_NOTES.md

- name: Create Release
  uses: softprops/action-gh-release@v1
  with:
    body_path: RELEASE_NOTES.md
GitHub Actions (With AI Enhancement)

The repository includes a complete automated workflow. See .github/workflows/auto-docs.yml.
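
As a rough sketch of what the AI-enhanced step can look like (secret name and flags are taken from the sections above; the actual auto-docs.yml may differ):

- name: Generate AI-Enhanced Release Notes
  env:
    CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
  run: |
    go install github.com/1broseidon/promptext-notes/cmd/promptext-notes@latest
    promptext-notes --generate --version ${{ github.ref_name }} --output RELEASE_NOTES.md

- name: Create Release
  uses: softprops/action-gh-release@v1
  with:
    body_path: RELEASE_NOTES.md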

GitLab CI
release:
  image: golang:1.22          # any image with Go 1.22+ on PATH works
  rules:
    - if: $CI_COMMIT_TAG      # only run for tag pipelines
  script:
    - go install github.com/1broseidon/promptext-notes/cmd/promptext-notes@latest
    - promptext-notes --version $CI_COMMIT_TAG --output RELEASE_NOTES.md

Development

Prerequisites
  • Go 1.22 or higher
  • Git
  • staticcheck (optional but recommended): go install honnef.co/go/tools/cmd/staticcheck@latest
  • gocyclo (optional but recommended): go install github.com/fzipp/gocyclo/cmd/gocyclo@latest
Setup Pre-commit Hooks

Install Git hooks to automatically run quality checks before each commit:

./scripts/install-hooks.sh

This will run go fmt, go vet, staticcheck, gocyclo, and tests before allowing commits. To skip hooks for a specific commit:

git commit --no-verify
Build
go build -o promptext-notes ./cmd/promptext-notes
Test
go test ./... -v
Test with Coverage
go test ./... -cover

Current coverage: 88.66%

Quality Checks
# Format code
go fmt ./...

# Run staticcheck
staticcheck ./...

# Check cyclomatic complexity
gocyclo -over 20 .

# Run go vet
go vet ./...
Project Structure
promptext-notes/
β”œβ”€β”€ cmd/
β”‚   └── promptext-notes/           # CLI entry point
β”‚       └── main.go
β”œβ”€β”€ internal/
β”‚   β”œβ”€β”€ ai/                        # AI provider integrations (NEW!)
β”‚   β”‚   β”œβ”€β”€ provider.go            # Provider interface
β”‚   β”‚   β”œβ”€β”€ anthropic.go           # Anthropic (Claude)
β”‚   β”‚   β”œβ”€β”€ openai.go              # OpenAI (GPT)
β”‚   β”‚   β”œβ”€β”€ cerebras.go            # Cerebras (free)
β”‚   β”‚   β”œβ”€β”€ groq.go                # Groq (free)
β”‚   β”‚   β”œβ”€β”€ ollama.go              # Local Ollama
β”‚   β”‚   └── retry.go               # Retry logic
β”‚   β”œβ”€β”€ config/                    # Configuration (NEW!)
β”‚   β”‚   β”œβ”€β”€ config.go              # YAML config support
β”‚   β”‚   └── config_test.go
β”‚   β”œβ”€β”€ workflow/                  # Orchestration (NEW!)
β”‚   β”‚   └── workflow.go            # End-to-end workflow
β”‚   β”œβ”€β”€ analyzer/                  # Commit categorization
β”‚   β”‚   β”œβ”€β”€ analyzer.go
β”‚   β”‚   └── analyzer_test.go
β”‚   β”œβ”€β”€ context/                   # Code context extraction
β”‚   β”‚   β”œβ”€β”€ extractor.go
β”‚   β”‚   └── extractor_test.go
β”‚   β”œβ”€β”€ generator/                 # Release notes generation
β”‚   β”‚   β”œβ”€β”€ generator.go
β”‚   β”‚   └── generator_test.go
β”‚   β”œβ”€β”€ git/                       # Git operations
β”‚   β”‚   β”œβ”€β”€ git.go
β”‚   β”‚   └── git_test.go
β”‚   └── prompt/                    # AI prompt generation
β”‚       β”œβ”€β”€ prompt.go
β”‚       └── prompt_test.go
β”œβ”€β”€ .github/
β”‚   └── workflows/
β”‚       β”œβ”€β”€ ci.yml                 # CI/CD pipeline
β”‚       └── auto-docs.yml          # Automated release notes
β”œβ”€β”€ scripts/
β”‚   └── generate-release-notes.sh  # Shell script (legacy)
β”œβ”€β”€ .promptext-notes.example.yml   # Example config (NEW!)
β”œβ”€β”€ go.mod
β”œβ”€β”€ go.sum
β”œβ”€β”€ README.md
β”œβ”€β”€ LICENSE
└── .gitignore
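
As a purely illustrative guess at the provider abstraction in internal/ai (the real interface in provider.go is not shown here and may differ):

// Hypothetical shape of the provider abstraction in internal/ai.
package ai

import "context"

// Provider generates release-notes text from a prompt using one backend
// (Anthropic, OpenAI, Cerebras, Groq, OpenRouter, or Ollama).
type Provider interface {
	// Name reports the provider identifier, e.g. "cerebras".
	Name() string
	// Generate sends the prompt and returns the model's completion.
	Generate(ctx context.Context, prompt string) (string, error)
}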

Examples

Example 1: Quick Release Notes
$ promptext-notes --version v0.7.4

## [v0.7.4] - 2025-11-10

### Added
- Token budget support for code extraction
- File filtering by extension

### Fixed
- Panic when no git tags exist

### Statistics
- **Files changed**: 5
- **Commits**: 3
- **Context analyzed**: ~2,150 tokens

---
Example 2: AI Prompt Generation
$ promptext-notes --version v0.7.4 --ai-prompt

# Release Notes Enhancement Request

Please generate comprehensive release notes for version v0.7.4

## Context

- **Version**: v0.7.4
- **Changes since**: v0.7.3
- **Commits analyzed**: 3
- **Files changed**: 5
- **Context extracted**: ~2,150 tokens

## Commit History

feat: add token budget support
fix: handle missing git tags
docs: update README examples


... (full prompt with code context)

Troubleshooting

API Key Issues

Problem: ❌ OPENAI_API_KEY environment variable not set

Solution:

  • Make sure you've added the API key to GitHub Secrets (for CI) or set it as an environment variable (for local use)
  • Check the secret name matches exactly: OPENAI_API_KEY, ANTHROPIC_API_KEY, CEREBRAS_API_KEY, or GROQ_API_KEY
Invalid API Key

Problem: ❌ OpenAI API Error (invalid_api_key): Invalid API key

Solution:

  • Verify your API key is correct and hasn't expired
  • For OpenAI/Groq: Key format is sk-... or similar
  • For Anthropic: Key format is sk-ant-...
  • For Cerebras: Check cerebras.ai for correct key format
Model Not Found

Problem: ❌ Cerebras API Error: Model llama-3.1-70b does not exist

Solution:

  • Check the "Available Models by Provider" section above for valid model names
  • Update the CEREBRAS_MODEL GitHub Variable or use a different model in the command
  • Common mistake: llama3.1-70b (no dash) vs llama-3.1-70b (with dashes)
Context Length Exceeded

Problem: ❌ API Error: Current length is 8950 while limit is 8192

Solution:

  • Your code changes are too large for the model's context window
  • Switch to a provider with a larger context limit (see "Supported AI Providers" table)
  • Recommended: Anthropic (200K), OpenAI (128K), or Cerebras (65K)
Rate Limiting

Problem: ❌ API Error (429): Rate limit exceeded

Solution:

  • Wait a few minutes and try again
  • Consider upgrading to a paid tier for higher rate limits
  • Switch to a different provider (free tiers: Cerebras, Groq)

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes using conventional commits (git commit -m 'feat: add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Support

If you encounter any issues or have questions, please open an issue.
