Universal AI CLI

8 Providers. One Interface. Zero Lock-in.

🚀 Transform Your AI Workflow

llmswap is a universal AI CLI and Python SDK that lets you seamlessly use OpenAI, Claude, Gemini, Watson, Groq, Cohere, Perplexity, and local Ollama models through a single interface.
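The same provider-agnostic idea carries over to the Python SDK. A minimal sketch, assuming the package exposes an `LLMClient` with a `query()` method and a `provider` argument (the class and method names here are illustrative; check the project docs for the exact API):

```python
from llmswap import LLMClient  # hypothetical import path for this sketch

# Same code path regardless of which backend answers.
client = LLMClient(provider="anthropic")  # or "openai", "gemini", "ollama", ...
response = client.query("Explain Python decorators in one paragraph")
print(response.content)

# Swapping providers is a one-line change, not a rewrite:
client = LLMClient(provider="gemini")
```

Because every provider sits behind the same interface, routing cheap tasks to one backend and hard tasks to another is an application-level decision, not a code migration.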

$ llmswap chat
🤖 Starting chat (Provider: claude)
You: What's the weather like?
Assistant: I don't have access to real-time weather data...

You: /switch gemini
🔄 Switched to gemini

You: Tell me about Python decorators
Assistant: Python decorators are a powerful feature...

⚡ Why llmswap?

🔄 No Vendor Lock-in

Switch between 8 AI providers instantly. Use OpenAI today, Claude tomorrow - your choice.

💰 90% Cost Savings

Pay-per-use instead of subscriptions. Use cheaper providers for simple tasks, premium for complex ones.

🧠 Conversation Memory

Maintains context across messages. The AI remembers your conversation, just like ChatGPT.

🛠️ Code Generation

Natural language to code. Like GitHub Copilot CLI but works with any provider.

🏢 Enterprise Ready

Configuration management, async support, and production-grade architecture.

🔒 Privacy First

No conversation storage by default. Run locally with Ollama for complete privacy.

🤖 Supported Providers

OpenAI GPT-4 · Claude 3.5 · Google Gemini · IBM Watson · Groq · Cohere · Perplexity · Ollama (Local)

📊 Compare with Others

| Feature | llmswap | GitHub Copilot CLI | Claude CLI | Gemini CLI |
|---------|---------|--------------------|------------|------------|
| **Providers** | 8 providers | 3 (locked) | 1 only | 1 only |
| **Cost** | Pay-per-use | $10/month | $20/month | Limited free |
| **Conversation Context** | | | | |
| **Provider Switching** | | | | |
| **Local Models** | | | | |
| **Code Generation** | | | | |
| **Cost Analytics** | | | | |
| **Open Source** | | | | |

🎯 Quick Examples

Natural Language to Code

$ llmswap generate "create nginx config for load balancing"

upstream backend {
    server backend1.example.com:8080;
    server backend2.example.com:8080;
    server backend3.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}

Interactive Chat with Context

$ llmswap chat
You: My name is Alice
Assistant: Nice to meet you, Alice! How can I help you today?

You: What's my name?
Assistant: Your name is Alice.

Cost Comparison

$ llmswap compare --input-tokens 1000 --output-tokens 500

Provider Cost Comparison:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Provider    | Cost    | Savings
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Ollama      | $0.000  | 100%
Groq        | $0.001  | 95%
Gemini      | $0.002  | 90%
Claude      | $0.015  | 25%
GPT-4       | $0.020  | 0%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
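The savings column is measured against the most expensive provider in the table (GPT-4 at $0.020 for this token count). A quick sketch of the arithmetic, using the costs shown above:

```python
# Reproduce the savings column: percent saved relative to the
# most expensive provider for 1000 input / 500 output tokens.
costs = {
    "Ollama": 0.000,
    "Groq": 0.001,
    "Gemini": 0.002,
    "Claude": 0.015,
    "GPT-4": 0.020,
}

baseline = max(costs.values())  # GPT-4 at $0.020
savings = {name: round(100 * (1 - cost / baseline)) for name, cost in costs.items()}
print(savings)  # {'Ollama': 100, 'Groq': 95, 'Gemini': 90, 'Claude': 25, 'GPT-4': 0}
```

Per-provider prices change over time, so treat the dollar figures as a snapshot; the relative ordering is the point.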

🚀 Quick Install

# Install with pip
pip install llmswap

# Set up your preferred provider (only need one)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."

# Start using
llmswap chat

# Install with Homebrew
brew install llmswap/tap/llmswap

# Configure provider
export ANTHROPIC_API_KEY="sk-ant-..."

# Start using
llmswap chat

# Clone repository
git clone https://github.com/sreenathmmenon/llmswap
cd llmswap

# Install dependencies
pip install -e .

# Configure and run
export OPENAI_API_KEY="sk-..."
llmswap chat

📈 Trusted by Developers

- 11K+ downloads in 50 days
- 8 AI providers
- 40+ models supported
- 90% cost savings


Copyright © 2025 llmswap. Distributed under the MIT license.