# Universal AI CLI

**8 Providers. One Interface. Zero Lock-in.**

## 🚀 Transform Your AI Workflow

llmswap is a universal AI CLI and Python SDK that lets you seamlessly use OpenAI, Claude, Gemini, Watson, Groq, Cohere, Perplexity, and local Ollama models through a single interface.
```
$ llmswap chat
🤖 Starting chat (Provider: claude)

You: What's the weather like?
Assistant: I don't have access to real-time weather data...

You: /switch gemini
🔄 Switched to gemini

You: Tell me about Python decorators
Assistant: Python decorators are a powerful feature...
```
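The `/switch` command above changes the active provider mid-conversation. A minimal sketch of how such an in-chat command could be dispatched — purely illustrative, not llmswap's actual implementation; the `PROVIDERS` set and `handle_line` helper are invented for this example:

```python
PROVIDERS = {"openai", "claude", "gemini", "watson", "groq", "cohere", "perplexity", "ollama"}

def handle_line(line: str, state: dict) -> str:
    """Route one chat line: slash commands mutate session state, everything else goes to the model."""
    if line.startswith("/switch "):
        target = line.split(maxsplit=1)[1].strip().lower()
        if target not in PROVIDERS:
            return f"Unknown provider: {target}"
        state["provider"] = target
        return f"🔄 Switched to {target}"
    # A real client would send the line to state["provider"]'s API here.
    return f"[{state['provider']}] (would query model with: {line!r})"

state = {"provider": "claude"}
print(handle_line("/switch gemini", state))  # 🔄 Switched to gemini
print(handle_line("Tell me about Python decorators", state))
```

The point of the sketch is only that conversation state (including the active provider) lives client-side, which is what makes mid-chat switching possible.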
## ⚡ Why llmswap?

### 🔄 No Vendor Lock-in
Switch between 8 AI providers instantly. Use OpenAI today, Claude tomorrow - your choice.
### 💰 Up to 90% Cost Savings
Pay-per-use instead of subscriptions. Use cheaper providers for simple tasks, premium for complex ones.
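That routing idea — cheap providers for simple tasks, premium models for hard ones — can be sketched in a few lines. Everything below (the rate table and the `pick_provider` helper) is illustrative only, not llmswap's API or real pricing:

```python
# Hypothetical per-1K-token costs (USD); real prices vary by model and date.
COST_PER_1K = {"ollama": 0.0, "groq": 0.0001, "gemini": 0.0005, "claude": 0.003, "gpt-4": 0.03}

def pick_provider(task_complexity: str) -> str:
    """Route simple tasks to the cheapest provider, complex ones to a premium model."""
    if task_complexity == "simple":
        # Cheapest option in this sketch is Ollama (local, effectively free).
        return min(COST_PER_1K, key=COST_PER_1K.get)
    return "gpt-4"

print(pick_provider("simple"))   # ollama
print(pick_provider("complex"))  # gpt-4
```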
## 🤖 Supported Providers

- OpenAI GPT-4
- Claude 3.5
- Google Gemini
- IBM Watson
- Groq
- Cohere
- Perplexity
- Ollama (Local)
## 📊 Compare with Others
| Feature | llmswap | GitHub Copilot CLI | Claude CLI | Gemini CLI |
|---------|---------|-------------------|------------|------------|
| **Providers** | 8 providers | 3 (locked) | 1 only | 1 only |
| **Cost** | Pay-per-use | $10/month | $20/month | Limited free |
| **Conversation Context** | ✓ | ✓ | ✓ | ✓ |
| **Provider Switching** | ✓ | ✗ | ✗ | ✗ |
| **Local Models** | ✓ | ✗ | ✗ | ✗ |
| **Code Generation** | ✓ | ✓ | ✗ | ✗ |
| **Cost Analytics** | ✓ | ✗ | ✗ | ✗ |
| **Open Source** | ✓ | ✗ | ✗ | ✗ |
## 🎯 Quick Examples

### Natural Language to Code
```
$ llmswap generate "create nginx config for load balancing"
upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    server backend3.example.com:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
```
### Interactive Chat with Context
```
$ llmswap chat
You: My name is Alice
Assistant: Nice to meet you, Alice! How can I help you today?

You: What's my name?
Assistant: Your name is Alice.
```
### Cost Comparison
```
$ llmswap compare --input-tokens 1000 --output-tokens 500
Provider Cost Comparison:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Provider  |  Cost    |  Savings
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Ollama    |  $0.000  |  100%
Groq      |  $0.001  |   95%
Gemini    |  $0.002  |   90%
Claude    |  $0.015  |   25%
GPT-4     |  $0.020  |    0%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
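The savings column is straightforward arithmetic over per-request costs, with the most expensive provider as the 0%-savings baseline. A quick sketch using the figures from the table above (the dollar amounts are the example's, not live pricing):

```python
# Example per-request costs (USD) for 1000 input + 500 output tokens, as in the table above.
costs = {"Ollama": 0.000, "Groq": 0.001, "Gemini": 0.002, "Claude": 0.015, "GPT-4": 0.020}

baseline = max(costs.values())  # most expensive provider = 0% savings
for provider, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    savings = (1 - cost / baseline) * 100
    print(f"{provider:<8} ${cost:.3f}  {savings:.0f}%")
```

Running this reproduces the savings percentages shown in the comparison output.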
## 🚀 Quick Install
```
# Install with pip
pip install llmswap

# Set up your preferred provider (only need one)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."

# Start using
llmswap chat
```

```
# Install with Homebrew
brew install llmswap/tap/llmswap

# Configure provider
export ANTHROPIC_API_KEY="sk-ant-..."

# Start using
llmswap chat
```

```
# Install from source: clone the repository
git clone https://github.com/sreenathmmenon/llmswap
cd llmswap

# Install dependencies
pip install -e .

# Configure and run
export OPENAI_API_KEY="sk-..."
llmswap chat
```
## 🎓 Learn More
- Getting Started Guide - 5-minute quickstart
- CLI Reference - Complete command documentation
- Python SDK - Use llmswap in your Python code
- Provider Setup - Configure each AI provider
- Examples - Real-world use cases