Getting Started with llmswap

Get up and running with llmswap in 5 minutes.

Table of contents

  1. Installation
    1. Option 1: Install with pip (Recommended)
    2. Option 2: Install with Homebrew
    3. Option 3: Install from source
  2. Provider Setup
    1. Quick Setup (Choose One)
      1. OpenAI
      2. Anthropic Claude
      3. Google Gemini
      4. Local Models (Ollama)
  3. First Commands
    1. 1. Check Provider Status
    2. 2. Ask a Question
    3. 3. Start a Chat
    4. 4. Generate Code
    5. 5. Review Code
  4. Configuration
    1. Set Default Provider
    2. Set Default Models
    3. View Configuration
  5. Python SDK Quick Start
    1. Basic Usage
    2. Conversation with Context
    3. Switch Providers
  6. Next Steps
  7. Troubleshooting
    1. Provider Not Available
    2. API Key Issues
    3. Ollama Connection Issues
  8. Getting Help

Installation

Option 1: Install with pip (Recommended)

pip install llmswap

Option 2: Install with Homebrew

brew install llmswap/tap/llmswap

Option 3: Install from source

git clone https://github.com/sreenathmmenon/llmswap
cd llmswap
pip install -e .
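
Whichever installation route you take, you can confirm the package is present with pip itself (no llmswap-specific flags are assumed here):

# Show the installed llmswap version and location (uses pip only)
pip show llmswap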

Provider Setup

llmswap supports 8 AI providers. You only need to configure one to get started.

Quick Setup (Choose One)

OpenAI

export OPENAI_API_KEY="sk-..."
llmswap chat  # Ready to use!

Anthropic Claude

export ANTHROPIC_API_KEY="sk-ant-..."
llmswap chat  # Ready to use!

Google Gemini

export GEMINI_API_KEY="..."
llmswap chat  # Ready to use!

Local Models (Ollama)

# Install Ollama first
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.1

# Use with llmswap
llmswap chat --provider ollama
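
If the chat fails to start, first confirm the model actually downloaded. This uses Ollama's own list command, nothing llmswap-specific:

# List models available to the local Ollama server
ollama list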

First Commands

1. Check Provider Status

See which providers are configured and available:

llmswap providers

Output:

╭─────────────┬──────────┬──────────────────────┬─────────────────────╮
│ Provider    │ Status   │ Model                │ Issue               │
├─────────────┼──────────┼──────────────────────┼─────────────────────┤
│ ANTHROPIC   │ ✓ Ready  │ claude-3-5-sonnet    │                     │
│ OPENAI      │ ✓ Ready  │ gpt-4o               │                     │
│ GEMINI      │ ✗        │                      │ Missing API key     │
│ OLLAMA      │ ✓ Ready  │ llama3.1             │                     │
╰─────────────┴──────────┴──────────────────────┴─────────────────────╯

2. Ask a Question

Simple one-off question:

llmswap ask "What is the capital of France?"
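
Because ask is a one-shot command, it composes naturally with ordinary shell scripting. A minimal sketch, assuming only that the answer is printed to stdout:

# Capture the answer in a shell variable via command substitution
CAPITAL=$(llmswap ask "What is the capital of France? Answer with one word.")
echo "The model said: $CAPITAL"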

3. Start a Chat

Interactive conversation with context:

llmswap chat

Example session:

🤖 Starting chat with claude-3-5-sonnet
Type '/help' for commands, '/quit' to exit

You: Hi, my name is Alice
Assistant: Hello Alice! It's nice to meet you. How can I help you today?

You: What's my name?
Assistant: Your name is Alice.

You: /switch openai
🔄 Switched to openai (gpt-4o)

You: Can you help me write Python code?
Assistant: Of course! I'd be happy to help you write Python code...

4. Generate Code

Transform natural language to code:

# Generate a bash command
llmswap generate "find all Python files modified in last 24 hours"

# Generate Python code
llmswap generate "function to validate email addresses" --language python

# Generate and execute (with confirmation)
llmswap generate "create backup of current directory" --execute
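
Since the examples above print the generated code, plain shell redirection is enough to save it to a file (stdout output is an assumption of this sketch, not a documented guarantee):

# Save generated Python code straight to a file
llmswap generate "function to validate email addresses" --language python > validators.py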

5. Review Code

Get AI-powered code review:

# Review a Python file
llmswap review app.py

# Focus on security issues
llmswap review app.py --focus security

# Review a JavaScript file with a performance focus
llmswap review src/api.js --focus performance
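
The review command takes a single file in these examples, so reviewing a whole directory is just a shell loop (only the documented review invocation is used here):

# Review every Python file under src/ with a security focus
for f in src/*.py; do
  llmswap review "$f" --focus security
done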

Configuration

Set Default Provider

llmswap config set provider.default anthropic

Set Default Models

llmswap config set provider.models.openai gpt-4-turbo
llmswap config set provider.models.anthropic claude-3-opus

View Configuration

llmswap config show

Python SDK Quick Start

Basic Usage

from llmswap import LLMClient

# Initialize client (auto-detects provider)
client = LLMClient()

# Simple query
response = client.query("Explain quantum computing in simple terms")
print(response.content)

# Check which provider was used
print(f"Provider: {response.provider}")
print(f"Model: {response.model}")

Conversation with Context

from llmswap import LLMClient

client = LLMClient()

# Start a conversation
messages = [
    {"role": "user", "content": "My name is Bob"}
]
response = client.chat(messages)
print(response.content)

# Continue conversation
messages.append({"role": "assistant", "content": response.content})
messages.append({"role": "user", "content": "What's my name?"})
response = client.chat(messages)
print(response.content)  # Will remember your name is Bob
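
The append-then-chat pattern is easy to wrap in a small helper. This sketch is built only from the client.chat() call and message format shown above; the send() helper itself is hypothetical, not part of the SDK:

from llmswap import LLMClient

def send(client, messages, text):
    """Append a user turn, get a reply, and record it in the history."""
    messages.append({"role": "user", "content": text})
    response = client.chat(messages)
    messages.append({"role": "assistant", "content": response.content})
    return response.content

client = LLMClient()
history = []
print(send(client, history, "My name is Bob"))
print(send(client, history, "What's my name?"))  # context carried in history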

Switch Providers

from llmswap import LLMClient

client = LLMClient()

# Use OpenAI
client.set_provider("openai", model="gpt-4")
response = client.query("Hello from GPT-4")

# Switch to Claude
client.set_provider("anthropic", model="claude-3-5-sonnet")
response = client.query("Hello from Claude")

# Use local Ollama
client.set_provider("ollama", model="llama3.1")
response = client.query("Hello from local Llama")
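
Because set_provider() swaps backends on a live client, comparing answers across providers is a short loop. A sketch using only the calls shown above, with the provider and model names from this guide:

from llmswap import LLMClient

client = LLMClient()

# Ask each configured provider the same question and compare answers
for provider, model in [("openai", "gpt-4"), ("anthropic", "claude-3-5-sonnet")]:
    client.set_provider(provider, model=model)
    response = client.query("In one sentence, what is a context manager in Python?")
    print(f"{provider}: {response.content}")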

Next Steps

Ready to explore more? Check out the CLI Reference for all available commands.

Troubleshooting

Provider Not Available

If you see "No providers available":

  1. Check you've set at least one API key (the snippet below lists which keys your shell exports)
  2. Verify the key is correct
  3. Run llmswap providers to see status
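
A quick way to list which keys your shell actually exports, without printing the secret values (plain grep over the environment; the variable names are the ones used earlier in this guide):

# Print only the names of any exported API keys, not their values
env | grep -oE '(OPENAI|ANTHROPIC|GEMINI)_API_KEY'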

API Key Issues

# Check if key is set
echo $OPENAI_API_KEY

# Set key in shell config (.bashrc, .zshrc, etc.)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.zshrc
source ~/.zshrc

Ollama Connection Issues

# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama service
ollama serve

Getting Help

Questions, bug reports, and feature requests are welcome on the project's GitHub repository: https://github.com/sreenathmmenon/llmswap


Copyright © 2025 llmswap. Distributed under the MIT license.