Let Your AI Sleep
Your brain spends a third of its life in sleep mode — not resting, but cleaning. It prunes memories, consolidates learning, and clears noise. More importantly, it decides what to keep and what to discard. Sleep doesn't preserve everything; it curates.
Our LLMs don't get that luxury.
We keep them awake 24/7, stuffing new context into their memories — Claude.md, cursor-rules, prompts.json — but never letting them process what actually matters. Over time, these memory files drift, contradict each other, and decay. The model remembers everything and nothing at once. As context windows grow and memory files accumulate, we're building systems that hoard rather than curate.
Recent AI research has begun exploring exactly this problem. Researchers are now implementing "sleep-time compute" approaches where LLMs process and consolidate context during idle periods, precomputing summaries and reorganizing information before user queries even arrive. Some systems treat forgetting as a strategic feature rather than a failure, pruning what no longer serves their purpose. The neuroscience is clear: human memory consolidation during sleep doesn't just store information; it transforms it, making it more efficient and useful.[^1]
It's time we gave our AIs a nightly cleanup cycle. Imagine Claude Code running an end-of-day pass in CI that refreshes its Claude.md based on the day's commits to main (sketched below):
- Identifying contradictions between old instructions and new patterns
- Merging redundant directives
- Pruning one-off instructions that no longer apply
- Strengthening patterns that keep recurring
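Here is a minimal sketch of what that pass might look like, assuming the Anthropic Python SDK, a git checkout, and an ANTHROPIC_API_KEY in the environment; the script layout, prompt wording, and model name are illustrative, not prescriptive:

```python
# nightly_sleep.py - hypothetical end-of-day consolidation pass for Claude.md.
import subprocess
from pathlib import Path

import anthropic  # assumes: pip install anthropic

MEMORY_FILE = Path("Claude.md")

def todays_commits() -> str:
    """Collect today's commit messages and per-file change summaries from main."""
    result = subprocess.run(
        ["git", "log", "main", "--since=midnight", "--stat", "--no-color"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def consolidate(memory: str, commits: str) -> str:
    """One 'sleep cycle': resolve contradictions, merge, prune, reinforce."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: any capable model will do
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": (
                "Below are a project memory file and today's commits to main. "
                "Rewrite the memory file to: (1) resolve contradictions between "
                "old instructions and new patterns, (2) merge redundant "
                "directives, (3) prune one-off instructions that no longer "
                "apply, and (4) strengthen patterns that keep recurring. "
                "Return only the revised file.\n\n"
                f"--- MEMORY ---\n{memory}\n\n--- COMMITS ---\n{commits}"
            ),
        }],
    )
    return response.content[0].text

if __name__ == "__main__":
    MEMORY_FILE.write_text(consolidate(MEMORY_FILE.read_text(), todays_commits()))
```

Run it from a scheduled CI job that commits the refreshed file, and the model wakes up tomorrow with a memory that is smaller and sharper than the one it went to bed with.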
Right now we're building AI systems with perfect recall but no wisdom. Sleep isn't about forgetting; it's about learning what matters. Memory is critical, and maintenance is what makes it useful.
Footnotes

[^1]: Lin, K., et al. "Sleep-time Compute: Beyond Inference Scaling at Test-time." Letta/UC Berkeley, 2025. https://arxiv.org/abs/2504.13171