Promptext-Notes
A Go-based CLI tool that generates intelligent, context-aware release notes by combining git history analysis with code context extraction using the promptext library.
Features
- **Git History Analysis**: Automatically analyzes commits since the last tag
- **Code Context Extraction**: Uses promptext to extract relevant code changes with token-aware analysis
- **Conventional Commits**: Categorizes changes by type (feat, fix, docs, breaking, etc.)
- **Integrated AI Generation**: Generate AI-enhanced changelogs directly with the `--generate` flag
- **2-Stage Polish Workflow**: Combine accurate discovery with customer-friendly polish for premium quality
- **Auto-Exclude-Meta (v0.8.0)**: Automatically excludes CI configs, CHANGELOG, and README from AI context
- **Multi-Provider Support**: Works with OpenRouter (200+ models), Anthropic, OpenAI, Cerebras, Groq, and local Ollama
- **YAML Configuration**: Customize behavior with a `.promptext-notes.yml` config file
- **Keep a Changelog Format**: Produces standardized markdown output
- **Fast & Lightweight**: Single binary with no runtime dependencies (except Git)
- **Easy Integration**: Add to any repository with GitHub Actions (see Guide)
- **Free Options**: Use Cerebras, Groq, or local Ollama (no API cost)
Installation
Using go install
go install github.com/1broseidon/promptext-notes/cmd/promptext-notes@latest
From Source
git clone https://github.com/1broseidon/promptext-notes.git
cd promptext-notes
go build -o promptext-notes ./cmd/promptext-notes
sudo mv promptext-notes /usr/local/bin/
Download Pre-built Binary
Download the latest release from the releases page.
Usage
Basic Release Notes
Generate release notes for a specific version:
promptext-notes --version v1.0.0
Output:
## [v1.0.0] - 2025-11-10
### Added
- New feature for code analysis
- Support for additional file types
### Fixed
- Bug in token counting
- Edge case in file filtering
### Statistics
- **Files changed**: 12
- **Commits**: 8
- **Context analyzed**: ~7,850 tokens
AI-Enhanced Release Notes (Integrated)
NEW! Generate AI-enhanced changelog directly with a single command:
# Using Cerebras (default with config file - free tier, best for large prompts)
export CEREBRAS_API_KEY="your-key-here"
promptext-notes --generate --version v1.0.0
# Or specify provider inline
promptext-notes --generate --provider groq --model llama-3.3-70b-versatile --version v1.0.0
The --generate flag will:
- Analyze git history and extract code context
- Send the comprehensive prompt to your AI provider
- Return polished, production-ready release notes
Legacy Method: Generate a prompt to paste into an LLM manually:
promptext-notes --version v1.0.0 --ai-prompt > prompt.txt
Then paste the contents of prompt.txt into Claude, ChatGPT, or your preferred LLM.
2-Stage Polish Workflow
NEW! Combine the accuracy of technical discovery models with the polish of customer-facing language models for premium quality changelogs.
Quick Start:
# Enable polish with CLI flag (uses config from .promptext-notes.yml)
promptext-notes --generate --polish --version v1.0.0
# Or configure in .promptext-notes.yml
How it works:
- Stage 1 (Discovery): Uses a model optimized for code understanding to analyze all changes
- Stage 2 (Polish): Refines the output into polished, customer-friendly language
Recommended Setup (from benchmarks):
ai:
  provider: cerebras
  model: zai-glm-4.6                              # Stage 1 (Discovery) - FREE, 10/10 accuracy
  polish:
    enabled: true                                 # Or use --polish CLI flag
    polish_model: "anthropic/claude-sonnet-4.5"   # Stage 2 - Premium polish
    polish_provider: "openrouter"                 # Different provider for polish stage
    polish_api_key_env: "OPENROUTER_API_KEY"
    polish_max_tokens: 4000
    polish_temperature: 0.3
Benefits:
- GLM-4.6: Best free model (10/10 accuracy, catches all technical details)
- Claude Sonnet 4.5: Premium polish (8/10 quality, minimal hallucination)
- Total cost: ~$0.004/run (discovery is FREE, polish is cheap)
- Can mix FREE models (Cerebras) with paid polish (OpenRouter)
- Auto-exclude-meta (v0.8.0): Keeps changelogs focused on user-facing changes
Cost Analysis:
- Single-stage (GLM-4.6): $0 (FREE)
- 2-stage (GLM + Claude Sonnet): ~$0.004/run
- 2-stage (GLM + Haiku): ~$0.001/run (cheaper but less accurate)
Custom Date Range
Specify a starting tag/commit:
promptext-notes --version v1.0.0 --since v0.5.0
Output to File
Write release notes to a file:
promptext-notes --version v1.0.0 --output RELEASE_NOTES.md
Append to CHANGELOG
promptext-notes --version v1.0.0 --output release-notes.md
cat release-notes.md >> CHANGELOG.md
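Putting the pieces together, a hand-run release might look roughly like the sketch below. The version number, branch name, and commit message are only examples; generating the notes before creating the tag keeps the automatic since-last-tag detection pointed at the previous release.
# generate and append the notes first, then tag and push
promptext-notes --generate --version v1.1.0 --output release-notes.md
cat release-notes.md >> CHANGELOG.md
git add CHANGELOG.md
git commit -m "docs: update changelog for v1.1.0"
git tag v1.1.0
git push origin main v1.1.0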
Configuration
You can configure promptext-notes using a YAML configuration file. Copy .promptext-notes.example.yml to .promptext-notes.yml and customize:
version: "1"

ai:
  provider: cerebras                 # cerebras, anthropic, openai, groq, openrouter, ollama
  model: zai-glm-4.6                 # Best free model (10/10 accuracy)
  api_key_env: CEREBRAS_API_KEY
  max_tokens: 8000
  temperature: 0.3
  timeout: 30s
  polish:
    enabled: false                   # Enable with --polish flag
    polish_model: "anthropic/claude-sonnet-4.5"
    polish_provider: "openrouter"
    polish_api_key_env: "OPENROUTER_API_KEY"

output:
  format: keepachangelog
  sections: [breaking, added, changed, fixed, docs]

filters:
  files:
    auto_exclude_meta: true          # NEW in v0.8.0 - excludes CI, CHANGELOG, README
    include: ["*.go", "*.md", "*.yml"]
    exclude: ["*_test.go", "vendor/*"]
See CONFIGURATION.md for full configuration options and USAGE.md for usage examples.
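Settings from the file can be overridden per run: `--model` is documented as overriding the configured model, `--provider` can be given inline (see the examples above), and `--config` points the tool at a different file. For example (the alternate config path below is purely illustrative):
# one-off override of the configured model
promptext-notes --generate --version v1.0.0 --model llama-3.3-70b
# or point at a different config file entirely
promptext-notes --generate --version v1.0.0 --config configs/release.yml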
Flags
| Flag | Type | Default | Description |
|---|---|---|---|
| `--version` | string | `""` | Version to generate notes for (e.g., `v0.7.4`) |
| `--since` | string | `""` | Generate notes since this tag (auto-detects if empty) |
| `--output` | string | `""` | Output file path (stdout if empty) |
| `--generate` | bool | `false` | NEW! Generate AI-enhanced changelog directly |
| `--polish` | bool | `false` | NEW! Enable 2-stage polish workflow (discovery + refinement) |
| `--provider` | string | `""` | AI provider (anthropic, openai, cerebras, groq, openrouter, ollama) |
| `--model` | string | `""` | AI model to use (overrides config) |
| `--exclude-files` | string | `""` | Comma-separated files to exclude from AI context (e.g., `CHANGELOG.md,README.md`) |
| `--config` | string | `".promptext-notes.yml"` | Configuration file path |
| `--quiet` | bool | `false` | Suppress progress messages |
| `--ai-prompt` | bool | `false` | Generate AI prompt only (legacy mode) |
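The filtering and output flags compose with `--generate`; for example (file names here are only illustrative):
# exclude meta files from the AI context, suppress progress output, write to a file
promptext-notes --generate --version v1.0.0 \
  --exclude-files "CHANGELOG.md,README.md" \
  --quiet \
  --output RELEASE_NOTES.md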
How It Works
- Git Analysis: Retrieves changed files and commit messages since the last tag (or specified tag)
- Context Extraction: Uses promptext to extract code context from changed files (.go, .md, .yml, .yaml)
- Categorization: Parses commit messages using the conventional commit format (a mapping sketch follows this list)
- Generation: Produces either:
- Basic Mode: Keep a Changelog formatted release notes
- AI Mode: Comprehensive prompt with full code context for LLM enhancement
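As a rough sketch of the categorization step, this is how typical conventional-commit subjects land in the configured sections (section names come from the `sections` list in your config, and the exact output headings may differ slightly):
# feat: add token budget support     ->  Added
# fix: handle missing git tags       ->  Fixed
# docs: update README examples       ->  docs section
# feat!: remove deprecated flag      ->  breaking section ("!" or BREAKING CHANGE marks a breaking change)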
Automated Release Notes (AI-Enhanced)
This project includes an automated workflow that generates AI-enhanced release notes using multiple AI providers: OpenAI, Anthropic, Cerebras, or Groq.
Want to use this in your own repository? See the Complete Integration Guide for step-by-step instructions on adding automated AI-enhanced release notes to any project.
How It Works
When you push a version tag (e.g., v1.0.0), the workflow automatically:
- Builds the promptext-notes binary
- Analyzes git history and extracts code context
- Sends the prompt to your chosen AI provider for enhancement
- Creates a GitHub release with polished notes
- Updates CHANGELOG.md in the repository
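The complete pipeline lives in `.github/workflows/auto-docs.yml`; conceptually it is driven by a tag-push trigger along these lines (a sketch of the trigger only, not the actual workflow file):
on:
  push:
    tags:
      - "v*"   # e.g. pushing v1.0.0 kicks off the release-notes job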
Supported AI Providers
| Provider | Default Model | Context Limit | Cost | Setup URL |
|---|---|---|---|---|
| OpenRouter | openai/gpt-4o-mini | Varies | Pay-as-you-go | openrouter.ai/keys |
| Cerebras | zai-glm-4.6 | 65K tokens | Free | cerebras.ai |
| Groq | llama-3.3-70b-versatile | 32K tokens | Free | console.groq.com |
| Ollama | llama3.2 | Varies | Free (Local) | ollama.com |
| OpenAI | gpt-4o-mini | 128K tokens | $0.15/$0.60 per 1M | platform.openai.com |
| Anthropic | claude-haiku-4-5 | 200K tokens | $0.80/$4.00 per 1M | console.anthropic.com |
Setup
- Get an API key from your chosen provider (see the Setup URL column above)
- Add the API key(s) to GitHub Secrets:
  - Go to your repository → Settings → Secrets and variables → Actions
  - Click "New repository secret"
  - Add one or more of these secrets (a gh CLI one-liner is shown after these steps):
    - `OPENROUTER_API_KEY` - For OpenRouter (access 200+ models through one API)
    - `CEREBRAS_API_KEY` - For Cerebras (free, best for large prompts)
    - `GROQ_API_KEY` - For Groq (free, good for smaller prompts)
    - `OPENAI_API_KEY` - For OpenAI
    - `ANTHROPIC_API_KEY` - For Anthropic
- (Optional) Configure models via GitHub Variables:
  - Go to your repository → Settings → Secrets and variables → Actions → Variables tab
  - Add variables to customize models (otherwise defaults are used):
    - `OPENAI_MODEL` (default: `gpt-5-nano`)
    - `ANTHROPIC_MODEL` (default: `claude-haiku-4-5`)
    - `CEREBRAS_MODEL` (default: `gpt-oss-120b`)
    - `GROQ_MODEL` (default: `llama-3.3-70b-versatile`)
- Push a version tag:
  git tag v1.0.0
  git push origin v1.0.0
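If you prefer the command line to the GitHub web UI for adding secrets, the GitHub CLI can set them directly (this assumes gh is installed and authenticated for your repository):
# store the key as a repository secret from your shell
gh secret set CEREBRAS_API_KEY --body "your-key-here"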
The workflow will automatically generate and publish AI-enhanced release notes using Cerebras (default, free) or your configured provider!
Local CLI Usage (Recommended)
NEW! Use the integrated --generate flag for one-command AI-enhanced changelogs:
# Using Anthropic (create .promptext-notes.yml config first)
export ANTHROPIC_API_KEY="your-key-here"
promptext-notes --generate --version v1.0.0
# Or specify provider inline
export OPENAI_API_KEY="your-key-here"
promptext-notes --generate --provider openai --model gpt-4o-mini --version v1.0.0
# Using OpenRouter (access 200+ models through one API)
export OPENROUTER_API_KEY="your-key-here"
promptext-notes --generate --provider openrouter --model anthropic/claude-sonnet-4 --version v1.0.0
# Using Cerebras (default, free tier, best for large prompts)
export CEREBRAS_API_KEY="your-key-here"
promptext-notes --generate --version v1.0.0
# Using Groq (free tier, good for smaller prompts)
export GROQ_API_KEY="your-key-here"
promptext-notes --generate --provider groq --version v1.0.0
# Using Ollama (local, free, no API key needed!)
# First: ollama pull llama3.2
promptext-notes --generate --provider ollama --model llama3.2 --version v1.0.0
Legacy Script Method
You can also use the shell script (will be deprecated in future versions):
# Using Cerebras (default)
export CEREBRAS_API_KEY="your-key-here"
./scripts/generate-release-notes.sh v1.0.0
# Using OpenAI
export OPENAI_API_KEY="your-key-here"
./scripts/generate-release-notes.sh v1.0.0 v0.9.0 openai
Available Models by Provider
OpenRouter (pay-as-you-go, access 200+ models):
- `openai/gpt-4o-mini` (default) - Cost-effective OpenAI model
- `anthropic/claude-sonnet-4` - Latest Claude model
- `google/gemini-pro-1.5` - Google's Gemini
- `meta-llama/llama-3.3-70b-instruct` - Open source Llama
- `openai/gpt-4o` - Premium OpenAI model
- And 200+ more models! See openrouter.ai/models
Cerebras (free, ultra-fast, best for large prompts):
- `zai-glm-4.6` (default) - Multilingual support, best for large context
- `gpt-oss-120b` - 120B params, best free quality
- `llama-3.3-70b` - 70B params, good balance
Groq (free, fast, best for smaller prompts):
- `llama-3.3-70b-versatile` (default) - Best for general use, 32K context
- `mixtral-8x7b-32768` - Good for technical content
- `llama-3.1-70b-versatile` - Alternative option
- `moonshotai/kimi-k2-instruct-0905` - Kimi K2 model with 128K context (requires paid tier for large prompts)
OpenAI (paid, 2025 models):
- `gpt-5-nano` (default) - Most economical ($0.05/$0.40 per 1M tokens)
- `gpt-5-mini` - Good balance ($0.25/$2.00 per 1M tokens)
- `gpt-5` - Best quality ($1.25/$10 per 1M tokens)
Anthropic (paid, 2025 models):
- `claude-haiku-4-5` (default) - Best value ($1/$5 per 1M, 73.3% SWE-bench)
- `claude-sonnet-4-5` - Best coding model (frontier performance)
- `claude-opus-4-1` - Highest reasoning capability
CI/CD Integration
GitHub Actions (Basic)
- name: Checkout
  uses: actions/checkout@v4
  with:
    fetch-depth: 0   # full history so the previous tag can be found

- name: Generate Release Notes
  run: |
    go install github.com/1broseidon/promptext-notes/cmd/promptext-notes@latest
    promptext-notes --version ${{ github.ref_name }} --output RELEASE_NOTES.md

- name: Create Release
  uses: softprops/action-gh-release@v1
  with:
    body_path: RELEASE_NOTES.md
GitHub Actions (With AI Enhancement)
The repository includes a complete automated workflow. See .github/workflows/auto-docs.yml.
GitLab CI
release:
  variables:
    GIT_DEPTH: 0   # fetch full history so the previous tag can be found
  script:
    - go install github.com/1broseidon/promptext-notes/cmd/promptext-notes@latest
    - promptext-notes --version $CI_COMMIT_TAG --output RELEASE_NOTES.md
Development
Prerequisites
- Go 1.22 or higher
- Git
- staticcheck (optional but recommended): `go install honnef.co/go/tools/cmd/staticcheck@latest`
- gocyclo (optional but recommended): `go install github.com/fzipp/gocyclo/cmd/gocyclo@latest`
Setup Pre-commit Hooks
Install Git hooks to automatically run quality checks before each commit:
./scripts/install-hooks.sh
This will run go fmt, go vet, staticcheck, gocyclo, and tests before allowing commits. To skip hooks for a specific commit:
git commit --no-verify
Build
go build -o promptext-notes ./cmd/promptext-notes
Test
go test ./... -v
Test with Coverage
go test ./... -cover
Current coverage: 88.66%
Quality Checks
# Format code
go fmt ./...
# Run staticcheck
staticcheck ./...
# Check cyclomatic complexity
gocyclo -over 20 .
# Run go vet
go vet ./...
Project Structure
promptext-notes/
├── cmd/
│   └── promptext-notes/            # CLI entry point
│       └── main.go
├── internal/
│   ├── ai/                         # AI provider integrations (NEW!)
│   │   ├── provider.go             # Provider interface
│   │   ├── anthropic.go            # Anthropic (Claude)
│   │   ├── openai.go               # OpenAI (GPT)
│   │   ├── cerebras.go             # Cerebras (free)
│   │   ├── groq.go                 # Groq (free)
│   │   ├── ollama.go               # Local Ollama
│   │   └── retry.go                # Retry logic
│   ├── config/                     # Configuration (NEW!)
│   │   ├── config.go               # YAML config support
│   │   └── config_test.go
│   ├── workflow/                   # Orchestration (NEW!)
│   │   └── workflow.go             # End-to-end workflow
│   ├── analyzer/                   # Commit categorization
│   │   ├── analyzer.go
│   │   └── analyzer_test.go
│   ├── context/                    # Code context extraction
│   │   ├── extractor.go
│   │   └── extractor_test.go
│   ├── generator/                  # Release notes generation
│   │   ├── generator.go
│   │   └── generator_test.go
│   ├── git/                        # Git operations
│   │   ├── git.go
│   │   └── git_test.go
│   └── prompt/                     # AI prompt generation
│       ├── prompt.go
│       └── prompt_test.go
├── .github/
│   └── workflows/
│       ├── ci.yml                  # CI/CD pipeline
│       └── auto-docs.yml           # Automated release notes
├── scripts/
│   └── generate-release-notes.sh   # Shell script (legacy)
├── .promptext-notes.example.yml    # Example config (NEW!)
├── go.mod
├── go.sum
├── README.md
├── LICENSE
└── .gitignore
Examples
Example 1: Quick Release Notes
$ promptext-notes --version v0.7.4
## [v0.7.4] - 2025-11-10
### Added
- Token budget support for code extraction
- File filtering by extension
### Fixed
- Panic when no git tags exist
### Statistics
- **Files changed**: 5
- **Commits**: 3
- **Context analyzed**: ~2,150 tokens
---
Example 2: AI Prompt Generation
$ promptext-notes --version v0.7.4 --ai-prompt
# Release Notes Enhancement Request
Please generate comprehensive release notes for version v0.7.4
## Context
- **Version**: v0.7.4
- **Changes since**: v0.7.3
- **Commits analyzed**: 3
- **Files changed**: 5
- **Context extracted**: ~2,150 tokens
## Commit History
feat: add token budget support
fix: handle missing git tags
docs: update README examples
... (full prompt with code context)
Troubleshooting
API Key Issues
Problem: `OPENAI_API_KEY environment variable not set`
Solution:
- Make sure you've added the API key to GitHub Secrets (for CI) or set it as an environment variable (for local use)
- Check that the secret name matches exactly: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `CEREBRAS_API_KEY`, or `GROQ_API_KEY`
Invalid API Key
Problem: `OpenAI API Error (invalid_api_key): Invalid API key`
Solution:
- Verify your API key is correct and hasn't expired
- For OpenAI/Groq: the key format is `sk-...` or similar
- For Anthropic: the key format is `sk-ant-...`
- For Cerebras: check cerebras.ai for the correct key format
Model Not Found
Problem: `Cerebras API Error: Model llama-3.1-70b does not exist`
Solution:
- Check the "Available Models by Provider" section above for valid model names
- Update the `CEREBRAS_MODEL` GitHub Variable or use a different model in the command
- Common mistake: `llama3.1-70b` (no dash) vs `llama-3.1-70b` (with dashes)
Context Length Exceeded
Problem: `API Error: Current length is 8950 while limit is 8192`
Solution:
- Your code changes are too large for the model's context window
- Switch to a provider with a larger context limit (see the "Supported AI Providers" table)
- Recommended: Anthropic (200K), OpenAI (128K), or Cerebras (65K), or trim the prompt as shown below
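You can also shrink the prompt itself by excluding large files from the AI context with `--exclude-files` (the file names below are only examples):
# drop bulky files from the context before sending to the provider
promptext-notes --generate --version v1.0.0 --exclude-files "CHANGELOG.md,docs/architecture.md"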
Rate Limiting
Problem: `API Error (429): Rate limit exceeded`
Solution:
- Wait a few minutes and try again
- Consider upgrading to a paid tier for higher rate limits
- Switch to a different provider (free tiers: Cerebras, Groq)
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes using conventional commits (`git commit -m 'feat: add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- promptext - Token-aware code context extraction
- Keep a Changelog - Changelog format
- Conventional Commits - Commit message convention
Related Projects
- promptext - Extract code context with token awareness
- conventional-changelog - Generate changelogs from git metadata
Support
If you encounter any issues or have questions, please open an issue.