Why Run AI Locally?
Self-Hosted
Run on your own hardware. You manage the setup and configuration.
Works Offline
With local models, ClawdBot can work without internet.
Lower Latency
Skipping network round-trips can mean faster responses for some tasks.
No API Costs
With local models, there are no per-request charges. Hardware is your main cost.
How Local AI Works
ClawdBot Runs on Your Computer
Unlike Siri, Alexa, or ChatGPT, ClawdBot runs entirely on your Mac, Windows, or Linux machine. No cloud connection required for core functionality.
Choose Your AI Backend
You decide where AI processing happens:
- Local models (Ollama) - Runs offline on your hardware
- Cloud APIs (Anthropic/OpenAI) - Typically higher quality, requires internet
- Hybrid - Use local models for simple tasks, cloud for complex ones (see the sketch below)
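As a rough illustration of the hybrid idea, a wrapper can probe the local Ollama server (it listens on http://localhost:11434 by default) and fall back to a cloud backend when it isn't running. The routing here is just an echo - ClawdBot's actual backend selection is handled during configuration.

# Hedged sketch: choose a backend by probing the local Ollama server
# Ollama answers GET /api/tags on port 11434 when it is running
if curl -sf http://localhost:11434/api/tags > /dev/null; then
  echo "Local Ollama detected - route requests to the local model"
else
  echo "No local server - fall back to a cloud API"
fi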
Configure Access
Configure what ClawdBot can access on your system - files, emails, calendar, etc.
Local AI Options
Ollama + Local Models
Run Llama, Mistral, and other open models directly on your hardware.
- Runs offline
- No API costs
- Requires decent hardware (16GB+ RAM)
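Once Ollama is installed, pulling and querying a model takes a couple of commands. These are standard Ollama commands and work independently of ClawdBot; swap in whichever model fits your hardware.

# Download an open model and chat with it from the terminal
ollama pull mistral
ollama run mistral "Summarize the benefits of local inference."

# Or call Ollama's local HTTP API (port 11434 by default)
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Hello!", "stream": false}'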
Anthropic Claude API
Claude models accessed through Anthropic's cloud API.
- High quality responses
- Cloud-based API
- Pay-per-use pricing
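For reference, this is roughly what a call to Anthropic's Messages API looks like - ClawdBot makes requests like this for you when you pick the Claude backend. The model name is only an example; substitute whatever model is current.

# Minimal Messages API request (requires ANTHROPIC_API_KEY)
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello, Claude"}]
  }'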
OpenAI GPT API
Alternative cloud option with familiar GPT models.
- GPT-4 quality
- Wide compatibility
- Pay-per-use pricing
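The OpenAI path is similar - a bearer token and a JSON body sent to the Chat Completions endpoint. Again, the model name is just an example.

# Minimal Chat Completions request (requires OPENAI_API_KEY)
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'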
Hardware for Local AI
Want to run AI completely locally? Here's what you need:
Minimum Requirements
- RAM: 16GB minimum (for 7B parameter models)
- Storage: 10-50GB for models
- CPU: Modern multi-core (M1+ Mac or recent Intel/AMD)
Recommended for Best Performance
- RAM: 32GB+ (for larger models)
- GPU: NVIDIA with 8GB+ VRAM (for fast inference)
- Storage: SSD with 100GB+ free
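Not sure what your machine has? A few stock commands will tell you - run whichever apply to your OS and hardware.

# Total RAM
sysctl -n hw.memsize   # macOS (prints bytes)
free -h                # Linux

# NVIDIA GPU and VRAM, if present
nvidia-smi --query-gpu=name,memory.total --format=csv

# Free disk space for model files
df -h ~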
Great Local AI Setups
- Apple Silicon Mac: M1/M2/M3 with unified memory - excellent for local AI
- Gaming PC: RTX 3060 or better GPU significantly speeds up inference
- Mini PC: Intel NUC with 32GB RAM for an always-on assistant
Set Up Your Local AI
# 1. Install ClawdBot
curl -fsSL https://clawd.bot/install.sh | bash
# 2. Install Ollama for local models (optional)
curl -fsSL https://ollama.ai/install.sh | bash
ollama pull llama2 # or mistral, codellama, etc.
# 3. Configure ClawdBot
clawdbot onboard --install-daemon
# 4. Start using your local AI assistant!
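If you installed Ollama in step 2, a quick sanity check confirms the local model server is up before you point ClawdBot at it. These are standard Ollama commands; see the setup guide for ClawdBot-specific checks.

# Confirm the local model server is running and the model is downloaded
ollama list                               # should list llama2 (or the model you pulled)
curl -sf http://localhost:11434/api/tags  # Ollama's API answers on port 11434 by default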
Full Setup Guide