# Nightly Knowledge Extraction: Teaching My AI to Remember What Matters
I was sick in bed yesterday, scrolling through my feed, and found Felix Kraus’s post about his OpenClaw automation setup. One idea stuck: nightly knowledge extraction from conversations and notes.
The problem is familiar to anyone who takes notes. You write things down throughout the day — meeting notes, project updates, random insights — and most of it slowly rots in place. The useful bits get buried under the noise.
## The setup
I already had the pieces:
- Daily memory notes that my AI assistant writes during our conversations
- An Obsidian vault with ~400 notes across work, personal, and tech topics
- A life knowledge graph — structured JSON files organizing durable facts about people, companies, and projects
What was missing was the glue. Something to review the day’s notes every evening and pull out what actually matters.
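The post doesn't show the actual schema of those structured JSON files, but a minimal sketch of what one entity file might look like — every field name here is an assumption, not the real format:

```python
import json

# Hypothetical shape of one entity file in the knowledge graph.
# Field names ("type", "facts", "added", "source") are illustrative only.
entity = {
    "type": "person",
    "name": "Michael",
    "facts": [
        {
            "text": "Michael reports to Maksym in SRE",
            "added": "2026-02-11",
            "source": "daily-note",
        }
    ],
}

print(json.dumps(entity, indent=2))
```

Keeping one file per entity makes the nightly update a simple append-plus-dedupe rather than a rewrite of the whole graph.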
## How it works
A cron job fires at 23:00 every night. It spawns an isolated AI session that:
- Reads today’s daily notes from my workspace memory
- Scans Obsidian for files modified in the last 24 hours
- Extracts durable facts — milestones, status changes, new relationships, context that has long-term value
- Updates the knowledge graph by adding facts to the right entity (a person, a project, a company)
- Runs a memory sync to update summaries and decay scores
The extraction follows a strict guide: no one-off tasks, no temporary notes, no duplicates. Only facts that will still be useful in a month.
## What counts as a durable fact?
| Category | Example |
|---|---|
| Milestone | “Published ADR for agentic coding at Billie” |
| Status | “Migrated QA previews to HTTPS via Traefik” |
| Relationship | “Michael reports to Maksym in SRE” |
| Context | “DS team needs VPC Lattice for cross-account access” |
| Preference | “Prefers vanilla Rails approach for new projects” |
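The "no duplicates, only durable facts" rule can be approximated mechanically. A sketch of such a filter, using the table's categories — the naive exact-match dedupe here stands in for whatever semantic check the real extraction guide uses:

```python
# Categories from the durable-fact table above.
DURABLE_CATEGORIES = {"milestone", "status", "relationship", "context", "preference"}

def should_store(fact: str, category: str, existing: set[str]) -> bool:
    """Keep a fact only if it is durable and not already in the graph."""
    if category not in DURABLE_CATEGORIES:
        return False  # one-off tasks, temporary notes, etc.
    if fact in existing:
        return False  # naive exact-match dedupe; a real check would be semantic
    return True

existing = {"Michael reports to Maksym in SRE"}
should_store("Michael reports to Maksym in SRE", "relationship", existing)  # False
should_store("Published ADR for agentic coding at Billie", "milestone", existing)  # True
```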
## Why not just search?
You could argue full-text search over Obsidian is good enough. And for some queries it is. But search requires you to know what to look for. A knowledge graph surfaces connections you didn’t think to ask about.
When my AI assistant starts a session, it loads recent facts automatically. It knows that yesterday I was working on provisioner fixes, that Michael’s 1:1 is coming up, that the DS team is still waiting on VPC Lattice. I didn’t have to tell it any of this — the nightly extraction already did.
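The post mentions decay scores without spelling out the formula. One common scheme is exponential time decay, so here is a hypothetical sketch of how facts might be ranked when a session loads — the half-life value and the function itself are assumptions:

```python
from datetime import datetime

def decay_score(added: datetime, now: datetime, half_life_days: float = 30.0) -> float:
    """Exponential decay: a fact loses half its weight every half_life_days."""
    age_days = (now - added).total_seconds() / 86400
    return 0.5 ** (age_days / half_life_days)

now = datetime(2026, 2, 12)
facts = [
    ("Working on provisioner fixes", datetime(2026, 2, 11)),
    ("Migrated QA previews to HTTPS via Traefik", datetime(2025, 12, 1)),
]
# Most recent (least decayed) facts first.
ranked = sorted(facts, key=lambda f: decay_score(f[1], now), reverse=True)
```

With a scheme like this, yesterday's provisioner work outranks a months-old migration without anything being deleted outright.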
## The economics
The whole thing runs on Claude Sonnet — cheap enough that I don’t think about it. One run processes maybe 5-10 notes and costs a few cents. For that, I get a continuously updated knowledge graph that makes every future conversation with my AI more useful.
## The broader pattern
This is part of a shift I’ve been noticing in how I use AI. It’s less about asking questions and more about building systems that accumulate context over time. Each nightly run makes the next day’s interactions slightly better. It compounds.
Felix’s post had several other ideas I’m planning to implement — a proactive calendar briefing, a travel bot with email parsing, Home Assistant automation driven by calendar events. But the knowledge extraction felt like the right foundation to start with. Everything else gets better when the AI actually remembers what happened yesterday.
The cron job ran for the first time tonight. Tomorrow morning, I’ll wake up and my assistant will already know what I did today. That’s the kind of automation I want more of — invisible, compounding, and actually useful.