Hardware Guide

The right machine for your AI agent

AI Nightstand runs on hardware you already own or can buy for a few hundred dollars. Here's how to match your setup to your needs.

Three tiers, one right answer for you

You don't need a GPU cluster to run AI Nightstand. Overnight batch tasks are forgiving — the agent has hours to work, not milliseconds. Choose based on what you already have or your budget.

Tier 1 · Entry
Raspberry Pi 5 / Old Laptop
~$80–$150
Runs small quantized models (1–3B parameters). Suitable for basic overnight tasks — calendar prep, simple email drafts, short summaries. Processing is slow, so schedule the agent early (9–10 PM) to ensure the briefing is ready by morning.
  • Best for: Personal use, light workloads
  • Recommended model: Phi-3 Mini, TinyLlama
  • Storage needed: 8–16GB for OS + model
  • RAM: 8GB · Model size: 1–3B · Speed: Slow
Tier 2 · Recommended
Mini PC (16GB RAM)
~$200–$400
The sweet spot. Handles 7–13B models comfortably — capable of nuanced email drafting, multi-source news synthesis, and thorough research summaries. Runs quietly, consumes minimal power overnight. Beelink SER5, MinisForum UM690, or similar AMD-based mini PCs are ideal.
  • Best for: Individuals, home offices, small teams
  • Recommended model: Mistral 7B, Llama 3.1 8B
  • Storage needed: 256GB SSD minimum
  • RAM: 16–32GB · Model size: 7–13B · Speed: Good
Tier 3 · Power
Home Lab / GPU Server
$800+
Runs 30B+ models with GPU acceleration. Handles complex reasoning, long documents, multiple simultaneous agent tasks. If you already have an Ollama stack running, AI Nightstand simply becomes another scheduled job. NVIDIA RTX 3060+ or equivalent recommended.
  • Best for: Power users, small businesses, developers
  • Recommended model: Mixtral 8x7B, Llama 3.1 70B
  • Storage needed: 1TB+ NVMe SSD
  • RAM: 32–128GB · Model size: 30B–70B · Speed: Fast
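A quick sanity check before buying: a 4-bit-quantized model occupies roughly 0.6 GB of RAM per billion parameters (consistent with the download sizes in the model table below), plus headroom for the OS and the agent itself. The helper below is a hypothetical sketch of that rule of thumb -- the 0.6 GB/B figure and 2 GB headroom are rough assumptions, not guarantees:

```python
def fits_in_ram(params_b: float, ram_gb: float, headroom_gb: float = 2.0) -> bool:
    """Rough fit check for a 4-bit-quantized model.

    Assumption: a Q4-quantized model needs about 0.6 GB of RAM per
    billion parameters, plus headroom for the OS, the context window,
    and the agent workflow itself.
    """
    return params_b * 0.6 + headroom_gb <= ram_gb

print(fits_in_ram(3, 8))    # True  -- a 3B model is comfortable on Tier 1
print(fits_in_ram(13, 8))   # False -- a 13B model wants Tier 2 hardware
```

If the check is borderline, drop to the next smaller model rather than the next bigger machine -- overnight tasks tolerate a slower model far better than swapping to disk.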

Which model should you run?

All models below are open-source, run via Ollama, and keep your data entirely local. Choose based on your hardware tier and primary use case.

Model           Size    Best For                        Hardware Tier
phi3:mini       2.3GB   Basic summaries, notes          Tier 1
mistral:7b      4.1GB   Email drafts, triage            Tier 2 ★
llama3.1:8b     4.7GB   Writing, research summaries     Tier 2 ★
llama3.1:13b    7.4GB   Complex reasoning, long docs    Tier 2–3
mixtral:8x7b    26GB    Multi-task, high quality        Tier 3
llama3.1:70b    40GB    Near-frontier quality           Tier 3

★ = recommended starting point
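The names in the Model column are Ollama tags: `ollama pull mistral:7b` downloads one, and the agent then talks to Ollama's local REST endpoint. A minimal sketch of that call using only the standard library, assuming Ollama is running on its default port 11434 (the `ask` helper is illustrative, not part of AI Nightstand itself):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send one prompt to a locally running Ollama instance and return the reply."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs `ollama pull mistral:7b` first):
#   print(ask("mistral:7b", "Draft a two-line summary of today's calendar."))
```

Nothing here leaves your machine -- the request goes to localhost, which is the whole point of the stack.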

Everything that makes it run

AI Nightstand is built on a lean, open-source stack. Every component is free, well-maintained, and keeps you in full control.

🦙 Ollama · Required
Local LLM inference engine. Runs open-source models on your hardware with a simple API. The brain of the operation.
Cron / Task Scheduler · Required
Triggers the AI Nightstand agent on your schedule. Linux/Mac use cron; Windows uses Task Scheduler. Built into every OS.
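On Linux or macOS the trigger is a single crontab entry. A sketch, assuming a hypothetical run_agent.py script and install path -- adjust both to your setup (the 9 PM start follows the Tier 1 advice above):

```
# Run the nightstand agent every night at 9:00 PM and log its output
0 21 * * * /usr/bin/python3 /home/you/nightstand/run_agent.py >> /home/you/nightstand/agent.log 2>&1
```

Add it with `crontab -e`; on Windows, create an equivalent nightly task in Task Scheduler.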
🐍 Python 3.10+ · Required
Agent orchestration, API calls, file handling, and output generation. Runs the nightstand workflow scripts.
🌐 Caddy · Recommended
Lightweight web server and reverse proxy. Serves your morning briefing as a clean local web page. Optionally enables secure remote access via HTTPS.
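Serving the briefing takes only a few lines of Caddy configuration. A minimal Caddyfile sketch, assuming the agent writes its output to a hypothetical /home/you/nightstand/output directory:

```
:8080 {
    root * /home/you/nightstand/output
    file_server
}
```

Browse to http://<server-ip>:8080 from any device on your LAN to read the briefing.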
☁️ Cloudflare Tunnel · Optional
Read your briefing from your phone without opening ports. Encrypted tunnel from Cloudflare to your home server — no port forwarding, no exposed IP.
📡 RSS / Email APIs · Optional
Connects the agent to your news sources and inbox. Supports standard RSS, IMAP, and select calendar APIs for data ingestion.
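RSS ingestion needs nothing beyond the standard library, since RSS 2.0 is plain XML. A sketch of pulling headlines out of a feed document (feed_titles is an illustrative helper; fetch the XML with urllib or any HTTP client):

```python
import xml.etree.ElementTree as ET

def feed_titles(rss_xml: str) -> list[str]:
    """Extract item titles from a standard RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title", default="") for item in root.iter("item")]

SAMPLE = """<rss version="2.0"><channel><title>Demo</title>
<item><title>First headline</title></item>
<item><title>Second headline</title></item>
</channel></rss>"""

print(feed_titles(SAMPLE))  # → ['First headline', 'Second headline']
```

The agent can feed these titles (and item descriptions) straight into the model as context for the morning news summary.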

Ready to build your setup?

Setup Guide · Get Help