# term-llm

Translate natural language into shell commands using LLMs.

```
$ term-llm exec "find all go files modified today"
> find . -name "*.go" -mtime 0          Uses find with name pattern
  fd -e go --changed-within 1d          Uses fd (faster alternative)
  find . -name "*.go" -newermt "today"  Alternative find syntax
  something else...
```
## Installation

### One-liner (recommended)

```sh
curl -fsSL https://raw.githubusercontent.com/samsaffron/term-llm/main/install.sh | sh
```

Or with options:

```sh
curl -fsSL https://raw.githubusercontent.com/samsaffron/term-llm/main/install.sh | sh -s -- --version v0.1.0 --install-dir ~/bin
```

### Go install

```sh
go install github.com/samsaffron/term-llm@latest
```

### Build from source

```sh
git clone https://github.com/samsaffron/term-llm
cd term-llm
go build
```
## Setup

On first run, term-llm will prompt you to choose a provider (Anthropic, OpenAI, Gemini, or Zen).

### Option 1: Use existing CLI credentials (recommended)

If you have Claude Code, Codex, or gemini-cli installed and logged in, term-llm can use those credentials:

```yaml
# In ~/.config/term-llm/config.yaml
anthropic:
  credentials: claude      # uses Claude Code credentials
openai:
  credentials: codex       # uses Codex credentials
gemini:
  credentials: gemini-cli  # uses gemini-cli OAuth credentials
```
### Option 2: Use API key

Set your API key as an environment variable:

```sh
# For Anthropic
export ANTHROPIC_API_KEY=your-key

# For OpenAI
export OPENAI_API_KEY=your-key

# For Gemini
export GEMINI_API_KEY=your-key
```
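The provider-to-variable mapping above can be sketched as follows. This is an illustrative helper, not part of term-llm; the `get_api_key` name and the error message are hypothetical:

```python
import os

# Environment variables for each provider, as documented above.
ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
}


def get_api_key(provider: str) -> str:
    """Look up the provider's API key, failing fast with a helpful message."""
    var = ENV_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"set {var} to use the {provider} provider")
    return key
```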
### Option 3: Use OpenCode Zen (free tier available)

OpenCode Zen provides free access to GLM 4.7 and other models. No API key is required for the free tier; set ZEN_API_KEY for paid models:

```yaml
# In ~/.config/term-llm/config.yaml
provider: zen
zen:
  model: glm-4.7-free  # default model (free)
  # api_key: optional - leave empty for free tier, or set for paid models
```

Or use the --provider flag:

```sh
term-llm exec --provider zen "list files"
term-llm ask --provider zen "explain git rebase"
```
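The --provider flag overrides the provider set in the config file. A minimal sketch of that resolution order (illustrative only; term-llm's internals may differ, and the "anthropic" default is a stand-in):

```python
from typing import Optional


def resolve_provider(flag: Optional[str], config: dict) -> str:
    """CLI --provider flag wins; otherwise fall back to the config file's
    `provider` key, then to a stand-in default."""
    if flag:
        return flag
    return config.get("provider") or "anthropic"


print(resolve_provider("zen", {"provider": "openai"}))  # -> zen
print(resolve_provider(None, {"provider": "gemini"}))   # -> gemini
```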
## Usage

```sh
term-llm exec "your request here"
```

Use the arrow keys to select a command, Enter to execute, or press h for detailed help on the highlighted command. Select "something else..." to refine your request.
### Flags

| Flag | Short | Description |
|------|-------|-------------|
| --provider | | Override provider (anthropic, openai, gemini, zen) |
| --auto-pick | -a | Auto-execute the best suggestion without prompting |
| --max N | -n N | Limit to N options in the selection UI |
| --search | -s | Enable web search for current information |
| --print-only | -p | Print the command instead of executing |
| --debug | -d | Show full LLM request and response |
### Examples

```sh
term-llm exec "list files by size"            # interactive selection
term-llm exec "compress folder" --auto-pick   # auto-execute best
term-llm exec "find large files" -n 3         # show max 3 options
term-llm exec "install latest node" -s        # with web search
term-llm exec "disk usage" -p                 # print only
term-llm exec --provider zen "git status"     # use specific provider

# Ask a question
term-llm ask "What is the difference between TCP and UDP?"
term-llm ask "latest node.js version" -s      # with web search
term-llm ask --provider zen "explain docker"  # use specific provider
```
## Shell Integration (Recommended)

Commands run by term-llm don't appear in your shell history. To fix this, add a shell function that uses --print-only mode.

### Zsh

Add to ~/.zshrc:

```sh
tl() {
  local cmd=$(term-llm exec --print-only "$@")
  if [[ -n "$cmd" ]]; then
    print -s "$cmd"  # add to history
    eval "$cmd"
  fi
}
```
### Bash

Add to ~/.bashrc:

```sh
tl() {
  local cmd=$(term-llm exec --print-only "$@")
  if [[ -n "$cmd" ]]; then
    history -s "$cmd"  # add to history
    eval "$cmd"
  fi
}
```
Then use tl instead of term-llm:

```sh
tl "find large files"
tl "install latest docker" -s  # with web search
tl "compress this folder" -a   # auto-pick best
```
## Configuration

```sh
term-llm config       # Show current config
term-llm config edit  # Edit config file
term-llm config path  # Print config file path
```
## Version & Updates

term-llm automatically checks for updates once per day and notifies you when a new version is available.

```sh
term-llm version                   # Show version info
term-llm upgrade                   # Upgrade to latest version
term-llm upgrade --version v0.2.0  # Install specific version
```

To disable update checks, set TERM_LLM_SKIP_UPDATE_CHECK=1.
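The once-per-day throttle plus the opt-out variable can be sketched like this. The stamp-file approach is a hypothetical illustration, not term-llm's actual mechanism:

```python
import os
import time
from pathlib import Path


def should_check_for_updates(stamp_file: str, now=None) -> bool:
    """Return True at most once per 24 hours, and never when
    TERM_LLM_SKIP_UPDATE_CHECK=1 is set (illustrative sketch)."""
    if os.environ.get("TERM_LLM_SKIP_UPDATE_CHECK") == "1":
        return False
    now = time.time() if now is None else now
    stamp = Path(stamp_file)
    # Skip if the last recorded check is less than a day old.
    if stamp.exists() and now - stamp.stat().st_mtime < 86_400:
        return False
    stamp.touch()  # record this check
    return True
```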
Config is stored at ~/.config/term-llm/config.yaml:

```yaml
provider: anthropic  # anthropic, openai, gemini, or zen

exec:
  suggestions: 3  # number of command suggestions
  instructions: |
    I use Arch Linux with zsh.
    I prefer ripgrep over grep, fd over find.

ask:
  instructions: |
    Be concise. I'm an experienced developer.

anthropic:
  model: claude-sonnet-4-5
  credentials: claude  # or "api_key" (default)

openai:
  model: gpt-5.2
  credentials: codex  # or "api_key" (default)

gemini:
  model: gemini-3-flash-preview
  credentials: gemini-cli  # or "api_key" (default)

zen:
  model: glm-4.7-free
  # api_key is optional - leave empty for free tier
```
## Credentials

Each provider supports a credentials field:

| Provider | Value | Description |
|----------|-------|-------------|
| All | api_key | Use environment variable (default) |
| Anthropic | claude | Use Claude Code credentials |
| OpenAI | codex | Use Codex CLI credentials |
| Gemini | gemini-cli | Use gemini-cli OAuth credentials |
| Zen | api_key | Optional: empty for free tier, or set ZEN_API_KEY for paid models |
Claude Code (credentials: claude):

- macOS: System keychain (via the security command)
- Linux: ~/.claude/.credentials.json

Codex (credentials: codex):

- Reads from ~/.codex/auth.json

gemini-cli (credentials: gemini-cli):

- Reads OAuth credentials from ~/.gemini/oauth_creds.json
- Uses Google Code Assist API (same backend as gemini-cli)
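The Linux lookup locations above can be summarized in a small sketch. This is illustrative only, not term-llm's actual loader; the function name and return convention are hypothetical:

```python
import json
from pathlib import Path

# Credential file locations as documented above (Linux paths).
CREDENTIAL_PATHS = {
    "claude": Path("~/.claude/.credentials.json"),
    "codex": Path("~/.codex/auth.json"),
    "gemini-cli": Path("~/.gemini/oauth_creds.json"),
}


def load_credentials(source: str):
    """Return parsed JSON credentials for `source`, or None if absent."""
    path = CREDENTIAL_PATHS[source].expanduser()
    if not path.exists():
        return None
    return json.loads(path.read_text())
```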
## Shell Completions

Generate and install shell completions:

```sh
term-llm config completion zsh --install   # Install for zsh
term-llm config completion bash --install  # Install for bash
term-llm config completion fish --install  # Install for fish
```
## License

MIT