# CLI Reference

Complete reference for all llmswap CLI commands.
## Table of Contents

- Core Commands
- Analytics Commands
- Configuration Commands
- Advanced Commands
- Global Options
- Environment Variables
- Exit Codes
## Core Commands

### llmswap ask

Ask the AI quick one-off questions.

```bash
llmswap ask "Your question here"
```
Options:

- `--provider PROVIDER` - Specify provider (openai, anthropic, gemini, etc.)
- `--no-cache` - Disable response caching
- `--quiet` - Minimal output
- `--age N` - Adapt language for age N
- `--audience TYPE` - Target audience (developer, student, business, etc.)
- `--teach` - Enable teaching mode
- `--explain` - Add detailed explanations
Examples:

```bash
# Simple question
llmswap ask "What is Docker?"

# Explain to a specific audience
llmswap ask "Explain Kubernetes" --audience "business owner"

# Age-appropriate explanation
llmswap ask "What is AI?" --age 10

# Use a specific provider
llmswap ask "Generate a UUID" --provider openai
```
### llmswap chat

Start an interactive chat session with conversation memory.

```bash
llmswap chat
```
Options:

- `--provider PROVIDER` - Initial provider
- `--no-cache` - Disable caching
- `--age N` - Set age context for the session
- `--audience TYPE` - Set target audience
Chat Commands:

- `/help` - Show available commands
- `/provider` - Show current provider
- `/switch <provider>` - Switch to a different provider
- `/clear` - Clear conversation history
- `/stats` - Show session statistics
- `/age N` - Update age context
- `/audience TYPE` - Update audience
- `/quit` or `/exit` - End the chat session
Example Session:

```
$ llmswap chat
Starting chat with claude-3-5-sonnet

You: Hello, my name is Sarah
Assistant: Hello Sarah! Nice to meet you. How can I help you today?

You: /switch gemini
Switched to gemini

You: What's my name?
Assistant: Your name is Sarah.

You: /stats
Session Statistics:
  Messages: 4
  Total tokens: 156
  Session duration: 2m 15s

You: /quit
Goodbye!
```
### llmswap generate

Generate code or commands from natural language descriptions.

```bash
llmswap generate "description of what you want"
```
Options:

- `--language LANG` - Target programming language
- `--execute` - Execute generated bash commands (with confirmation)
- `--save FILENAME` - Save output to file
- `--explain` - Add explanations to generated code
- `--provider PROVIDER` - Use a specific provider
Examples:

```bash
# Generate a bash command
llmswap generate "find files larger than 100MB"
# Output: find . -type f -size +100M

# Generate Python code
llmswap generate "async function to fetch JSON from API" --language python

# Generate and save
llmswap generate "nginx reverse proxy config" --save nginx.conf

# Generate and execute (asks for confirmation)
llmswap generate "create Python project structure" --execute
```
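`--save` composes naturally with `llmswap review`. A hedged sketch (it assumes llmswap is installed with a configured provider, and skips itself otherwise; the prompt and filename are arbitrary):

```shell
# Generate a script, then review it before ever running it.
# rotate.sh is an arbitrary filename chosen for this sketch.
if command -v llmswap >/dev/null 2>&1; then
  llmswap generate "script to rotate logs older than 7 days" --language bash --save rotate.sh
  llmswap review rotate.sh --focus security
  ran="yes"
else
  echo "llmswap not installed; skipping"
  ran="skipped"
fi
```

Reviewing generated code before execution is a safer habit than reaching straight for `--execute`.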
### llmswap review

AI-powered code review for your files.

```bash
llmswap review <file>
```
Options:

- `--focus AREA` - Focus area: bugs, security, style, performance, general
- `--language LANG` - Override detected language
- `--provider PROVIDER` - Use a specific provider
Examples:

```bash
# General review
llmswap review app.py

# Security-focused review
llmswap review api.js --focus security

# Performance review
llmswap review database.py --focus performance

# Style review
llmswap review components/Header.jsx --focus style
```
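Because `review` takes one file at a time, batching is easy to script. A hedged sketch of a hypothetical pre-commit workflow (it assumes llmswap and git are available and quietly skips itself otherwise):

```shell
# Review every staged file before committing (hypothetical workflow sketch).
if command -v llmswap >/dev/null 2>&1 && git rev-parse --git-dir >/dev/null 2>&1; then
  for f in $(git diff --cached --name-only); do
    llmswap review "$f" --focus bugs
  done
  reviewed="yes"
else
  reviewed="skipped"
fi
```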
### llmswap debug

Analyze errors and get debugging help.

```bash
llmswap debug --error "error message"
llmswap debug --file error.log
```
Options:

- `--error MESSAGE` - Error message to analyze
- `--file FILE` - File containing errors/logs
- `--context` - Include system context
- `--language LANG` - Programming language context
Examples:

```bash
# Debug an error message
llmswap debug --error "TypeError: Cannot read property 'map' of undefined"

# Analyze an error log
llmswap debug --file npm-debug.log

# Debug with context
llmswap debug --error "Connection refused" --language python --context
```
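`--file` also pairs well with shell redirection: capture a failing command's stderr, then hand the captured log to `llmswap debug`. A sketch (`app.py` is a hypothetical script; substitute your own failing command):

```shell
# Run a command, capture its stderr, and hand any failure to llmswap debug.
# app.py is hypothetical and will usually fail here, which is the point.
python3 app.py 2> error.log || {
  if command -v llmswap >/dev/null 2>&1; then
    llmswap debug --file error.log --language python --context
  else
    echo "llmswap not installed; error saved to error.log"
  fi
}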
### llmswap providers

Show the status of all configured providers.

```bash
llmswap providers
```
Output Example:

```
╭─────────────┬─────────┬────────────────────┬────────────────────╮
│ Provider    │ Status  │ Model              │ Issue              │
├─────────────┼─────────┼────────────────────┼────────────────────┤
│ ANTHROPIC   │ ✓ Ready │ claude-3-5-sonnet  │                    │
│ OPENAI      │ ✓ Ready │ gpt-4o             │                    │
│ GEMINI      │ ✗       │                    │ Missing API key    │
│ COHERE      │ ✓ Ready │ command-r-plus     │                    │
│ PERPLEXITY  │ ✗       │                    │ Missing API key    │
│ WATSONX     │ ✗       │                    │ Missing project ID │
│ GROQ        │ ✓ Ready │ llama-3.3-70b      │                    │
│ OLLAMA      │ ✓ Ready │ llama3.1           │                    │
╰─────────────┴─────────┴────────────────────┴────────────────────╯
```
## Analytics Commands

### llmswap compare

Compare costs across providers for a given token count.

```bash
llmswap compare --input-tokens <N> --output-tokens <N>
```
Options:

- `--input-tokens N` - Number of input tokens
- `--output-tokens N` - Number of output tokens
- `--monthly` - Show monthly cost projection
Example:

```
$ llmswap compare --input-tokens 1000 --output-tokens 500

Provider Cost Comparison (1000 input, 500 output tokens):
──────────────────────────────────────────────────
Provider │ Cost    │ Savings vs Most Expensive
──────────────────────────────────────────────────
Ollama   │ $0.0000 │ 100.0%
Groq     │ $0.0001 │ 99.5%
Gemini   │ $0.0019 │ 90.5%
Claude   │ $0.0150 │ 25.0%
GPT-4    │ $0.0200 │ 0.0%
──────────────────────────────────────────────────
```
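The savings column is plain arithmetic against the most expensive provider: `(1 - cost / max_cost) * 100`. Using the costs from the example above (GPT-4 at $0.0200 is the baseline):

```shell
# Savings vs. the most expensive provider = (1 - cost / max_cost) * 100.
# Costs are the ones from the example table above.
claude_savings=$(awk 'BEGIN { printf "%.1f", (1 - 0.0150 / 0.0200) * 100 }')
gemini_savings=$(awk 'BEGIN { printf "%.1f", (1 - 0.0019 / 0.0200) * 100 }')
echo "Claude: ${claude_savings}%   Gemini: ${gemini_savings}%"
# → Claude: 25.0%   Gemini: 90.5%
```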
### llmswap usage

View usage statistics and analytics.

```bash
llmswap usage [--days N]
```
Options:

- `--days N` - Show the last N days (default: 7)
- `--provider PROVIDER` - Filter by provider
- `--export FILE` - Export to CSV/JSON
Example:

```
$ llmswap usage --days 30

Usage Statistics (Last 30 days):
──────────────────────────────────────────
Provider  │ Queries │ Tokens  │ Cost
──────────────────────────────────────────
OpenAI    │ 142     │ 45,231  │ $2.15
Anthropic │ 89      │ 31,452  │ $1.87
Gemini    │ 203     │ 67,891  │ $0.45
Ollama    │ 567     │ 234,567 │ $0.00
──────────────────────────────────────────
Total     │ 1,001   │ 379,141 │ $4.47
──────────────────────────────────────────
```
### llmswap costs

Get cost analysis and optimization recommendations.

```bash
llmswap costs
```

Output includes:

- Current month's spending
- Cost breakdown by provider
- Optimization recommendations
- Potential savings
## Configuration Commands

### llmswap config

Manage llmswap configuration.

```bash
llmswap config <action> [options]
```
Actions:

- `set KEY VALUE` - Set a configuration value
- `get KEY` - Get a configuration value
- `unset KEY` - Remove a configuration key
- `show [SECTION]` - Display configuration
- `reset [SECTION]` - Reset to defaults
- `export --file FILE` - Export configuration
- `import --file FILE` - Import configuration
- `validate` - Validate configuration
- `doctor` - Run diagnostics
Examples:

```bash
# Set default provider
llmswap config set provider.default anthropic

# Set model for a specific provider
llmswap config set provider.models.openai gpt-4-turbo

# View all configuration
llmswap config show

# Reset provider settings
llmswap config reset provider

# Export configuration
llmswap config export --file my-config.yaml
```
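The `export`, `validate`, and `import` actions combine into a safe way to experiment with settings. A hedged sketch (assumes llmswap is installed and skips itself otherwise; `config-backup.yaml` is an arbitrary filename):

```shell
# Back up the config before experimenting; restore it if validation fails.
if command -v llmswap >/dev/null 2>&1; then
  llmswap config export --file config-backup.yaml
  llmswap config set provider.default groq
  llmswap config validate || llmswap config import --file config-backup.yaml
  result="done"
else
  result="skipped (llmswap not installed)"
fi
echo "$result"
```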
## Advanced Commands

### llmswap logs

Analyze log files with AI assistance.

```bash
llmswap logs --analyze <logfile>
```
Options:

- `--analyze FILE` - Log file to analyze
- `--since TIME` - Filter logs since a given time
- `--level LEVEL` - Filter by log level
- `--pattern PATTERN` - Search pattern
- `--correlate` - Find correlated events
Examples:

```bash
# Analyze application logs
llmswap logs --analyze app.log

# Find errors from the last 2 hours
llmswap logs --analyze system.log --since "2h ago" --level error

# Correlate events
llmswap logs --analyze nginx.log --correlate
```
## Global Options

These options work with all commands:

- `--provider PROVIDER` - Override provider selection
- `--no-cache` - Disable response caching
- `--quiet` - Minimal output mode
- `--debug` - Enable debug output
- `--version` - Show version information
- `--help` - Show help for any command
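`--quiet` and `--no-cache` are particularly useful together in scripts, where you want clean, fresh output you can capture. A hedged sketch (assumes llmswap is installed and a provider answers; the prompt is arbitrary):

```shell
# Capture a clean, uncached answer into a shell variable.
if command -v llmswap >/dev/null 2>&1; then
  answer=$(llmswap ask "Reply with exactly: pong" --quiet --no-cache 2>/dev/null) \
    || answer="(llmswap call failed)"
else
  answer="(llmswap not installed)"
fi
echo "$answer"
```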
## Environment Variables

Configure providers via environment variables:

```bash
# OpenAI
export OPENAI_API_KEY="sk-..."

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# Google Gemini
export GEMINI_API_KEY="..."

# IBM watsonx
export WATSONX_API_KEY="..."
export WATSONX_PROJECT_ID="..."

# Groq
export GROQ_API_KEY="gsk_..."

# Cohere
export COHERE_API_KEY="..."

# Perplexity
export PERPLEXITY_API_KEY="pplx-..."
```
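When `llmswap providers` reports missing keys, it can help to check what the current shell actually exports. A small sketch over the variables listed above:

```shell
# Report which provider keys are set in the current environment.
checked=0
for var in OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY WATSONX_API_KEY \
           GROQ_API_KEY COHERE_API_KEY PERPLEXITY_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var is set"
  else
    echo "$var is missing"
  fi
  checked=$((checked + 1))
done
echo "checked $checked variables"
```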
Exit Codes
0
- Success1
- General error2
- Configuration error3
- Provider error4
- Network error5
- Authentication error
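In automation, these codes let a wrapper react differently to configuration problems versus bad credentials. A hedged sketch (assumes llmswap is installed; it treats "not installed" as a no-op so the script stays runnable anywhere):

```shell
# Branch on llmswap's exit code (codes from the table above).
if command -v llmswap >/dev/null 2>&1; then
  llmswap ask "ping" --quiet
  status=$?
else
  status=0   # treat "not installed" as a no-op in this sketch
fi
case $status in
  0) msg="ok" ;;
  2) msg="configuration error: try 'llmswap config doctor'" ;;
  5) msg="authentication error: check your API keys" ;;
  *) msg="failed with exit code $status" ;;
esac
echo "$msg"
```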
Need more help? Check out our tutorials for real-world examples.