Prerequisites
What you need
AI Nightstand runs on any machine capable of hosting a local LLM. You don't need a powerful GPU — a modest mini PC or even a spare laptop will handle most overnight tasks. The key ingredients are Ollama for model inference and a scheduler to trigger the agent at bedtime.
Installation
Five steps to your first morning briefing
Install Ollama
Ollama handles local model inference. Install it on your host machine: on Linux it's a one-line script; macOS and Windows installers are available from ollama.com. Takes about two minutes.
curl -fsSL https://ollama.com/install.sh | sh
Pull a model
Choose a model appropriate for your hardware. Mistral 7B is a solid all-rounder for most overnight tasks. Llama 3.1 8B is excellent for writing and summarization.
ollama pull mistral
# or: ollama pull llama3.1
Configure your agent
Edit the AI Nightstand config file to define your data sources, tasks, and output format. Tell it which email accounts to check, which news feeds to monitor, and how you want your briefing structured.
nano ~/.ainightstand/config.yaml
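A config along these lines might look like the sketch below. The keys (`sources`, `tasks`, `output`) and values are illustrative, not the actual AI Nightstand schema — check the project's documentation for the real field names.

```yaml
# Hypothetical layout; actual AI Nightstand keys may differ.
sources:
  email:
    - account: work@example.com   # placeholder address
      protocol: imap
  feeds:
    - https://example.com/rss     # placeholder feed URL
tasks:
  - summarize_inbox
  - draft_replies
  - news_digest
output:
  format: html
  port: 8080
```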
Schedule the agent
Use cron (Linux/macOS) or Task Scheduler (Windows) to trigger the agent at your preferred time. Most users set it to run around 11 PM so the briefing is ready by 6–7 AM.
crontab -e
# add: 0 23 * * * ainightstand run
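If you want a record of each overnight run, redirect the agent's output to a log file in the crontab entry. This is a sketch assuming `ainightstand` is on cron's `PATH`; use the full path to the binary if it isn't:

```
# Run nightly at 23:00; append stdout and stderr to a log for debugging.
0 23 * * * ainightstand run >> "$HOME/.ainightstand/run.log" 2>&1
```

Cron's environment is minimal, so absolute paths are the safer default here.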
Read your briefing
Open your morning briefing via browser, terminal, or email — your choice. Review drafted replies, scan the news digest, check your agenda. Your day starts with clarity.
open http://localhost:8080/briefing
# macOS; on Linux use: xdg-open http://localhost:8080/briefing
Optional — Remote Access
Access your briefing anywhere
Want to read your morning briefing on your phone before you even get out of bed? Pair AI Nightstand with Caddy as a reverse proxy and Cloudflare Tunnel for secure remote access — no port forwarding required, no data leaving your network unencrypted. Your briefing stays on your server; you just read it remotely.
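A minimal sketch of that pairing: Caddy terminates requests for a hostname and proxies them to the local briefing server. The `briefing.example.com` hostname and port 8080 are placeholders — substitute your own domain and whatever port your instance serves on.

```
# Caddyfile — proxy the public hostname to the local briefing server
briefing.example.com {
    reverse_proxy localhost:8080
}
```

On the Cloudflare side, cloudflared makes an outbound connection to Cloudflare's edge (so no inbound port is opened): `cloudflared tunnel create` and `cloudflared tunnel route dns` set up the tunnel and map the hostname to it.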