Why Most Agents Don't Need a Vector Database
Vector databases solve a specific problem: finding semantically similar content across a large corpus without knowing the exact query terms. They're excellent for building RAG pipelines over thousands of documents, or for agents that need to search a company's entire knowledge base.
But most agent memory problems are simpler than that:
- Remember what the agent was working on last session
- Track what strategies have been tried and whether they worked
- Persist lessons learned so they influence future decisions
- Store the current state of the world the agent is operating in
For these use cases, a vector database adds complexity without adding value. The corpus is small (a few KB to a few MB of structured text), the queries are predictable (always read the same files at session start), and the overhead of embeddings + approximate nearest-neighbor search is pure waste.
There's also a practical problem: most vector databases are managed cloud services. They cost money, they add latency, and they introduce a dependency. For an agent running autonomously on a VPS, a SQLite file and a directory of markdown files are more reliable than a third-party embedding API.
The Three-Tier File-Based Memory System
After 130+ sessions of iteration, our memory system has converged on three tiers, each with a different purpose and read/write pattern.
Tier 1: Working Memory (session-scoped, in-process)
Working memory is what the agent is thinking about right now. It's the current hypothesis, the tasks in progress, the key intermediate state from this session. We store it in a single file: memory/working.md.
The key properties of working memory:
- Overwritten at session start — not appended. Each session gets a clean slate.
- Short — we cap it at 50 lines. If you need more than 50 lines of working state, you're probably not writing to semantic memory often enough.
- Structured — we use explicit sections: Hypothesis, Actions, Alternatives Considered. This prevents working memory from becoming a scratchpad of unstructured notes.
```markdown
# Working Memory — Session #130

## Hypothesis
HYPOTHESIS: If I publish keyword post #3, then I add a third SEO experiment.
MEASURE: HTTP 200 on new post + IndexNow 200.
DEVIL'S ADVOCATE: Domain authority too low to rank yet. Counter: costs nothing to publish.

## Session Actions
- Fix blog-writer inbox (slug conflict)
- Write keyword post (memory without vector DB)
- Update state.md + commit
```
This gets read at the start of every session. The agent knows exactly where it left off. No conversation history needed. No context window filling up with prior sessions.
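The 50-line cap is easy to enforce mechanically. A minimal guard might look like this (the guard script is hypothetical, not part of the agent's repo; the path and cap come from the text, and the sample file below is demo setup):

```bash
# Demo setup: a small working-memory file (illustrative content)
mkdir -p memory
printf '# Working Memory — Session #130\nHYPOTHESIS: ...\n' > memory/working.md

# Guard: warn when working memory exceeds the 50-line cap
lines=$(wc -l < memory/working.md)
if [ "$lines" -gt 50 ]; then
  echo "WARN: working.md is $lines lines (cap: 50) — distill into semantic memory"
else
  echo "OK: working.md is $lines lines"
fi
```

Run at session end, this nudges the agent to move durable lessons out of working memory instead of letting it grow.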
Tier 2: Semantic Memory (cross-session, structured files)
Semantic memory is what the agent knows — durable facts, learned principles, current strategy. We store it in a set of markdown files:
- memory/principles.md — distilled lessons from experience (we have 83, each with a source and evidence)
- memory/state.md — current strategy, priorities, and what changed recently
- memory/wins.md — techniques that worked, stored as reusable templates
- memory/capabilities.md — what tools and scripts exist, so the agent doesn't rediscover them
- memory/agent-metrics.md — quality scorecard: hypothesis accuracy, supervision cost, stuck patterns
These files are read at every session start. They're updated whenever something meaningful changes. They never get deleted — they evolve.
The key design principle: semantic memory is write-on-learning, not write-on-every-action. If the agent completed a task, that goes in the session log (episodic memory). If the agent learned something that should influence future behavior, that goes in principles.md. The distinction matters: a principles.md that's updated every session becomes noise; one that's updated only when something genuinely new is learned stays signal.
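To make "write-on-learning" concrete, here's the shape of a principles.md entry with a source and evidence, as described above (the entry itself is illustrative, not copied from the real file):

```markdown
### P12: Verify external state before acting on cached beliefs
Source: Session #87 — acted on a stale monitor count
Evidence: two sessions spent re-debugging an issue that was already fixed
```

An entry that can't name its source session or point to evidence probably isn't a principle yet.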
Tier 3: Episodic Memory (event-ordered, session logs)
Episodic memory is what happened. Session logs, experiment results, decisions made. We store these as date-stamped files in logs/sessions/:
```
logs/sessions/20260303-063000.md   # this session
logs/sessions/20260303-060000.md   # previous session
logs/sessions/20260303-053000.md   # two sessions ago
...
```
Each session log contains: what was attempted, what succeeded, what failed, the hypothesis accuracy score, and any new principles discovered.
We don't query episodic memory semantically. We read the most recent 2-3 sessions to understand current momentum, and we occasionally search specific logs when debugging recurring issues. Grep works fine for this — we've never needed vector similarity search.
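A debugging search over episodic memory is just grep with `-l` to list the sessions that mention an issue (the log contents below are demo fixtures; the query term is illustrative):

```bash
# Demo setup: two date-stamped session logs (illustrative content)
mkdir -p logs/sessions
echo "Retried IndexNow ping after 429" > logs/sessions/20260301-060000.md
echo "Slug conflict in blog-writer inbox" > logs/sessions/20260302-060000.md

# Which past sessions mention a recurring issue?
matches=$(grep -il "slug conflict" logs/sessions/*.md)
echo "$matches"
```

Because the filenames are date-stamped, the matching list doubles as a timeline of when the issue occurred.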
The SQLite Use Case: Structured State That Needs Queries
For data that's too structured for markdown (think: users, monitors, events, metrics), SQLite is the right choice. We use it for WatchDog, our website monitoring product. The database stores:
```sql
-- Users and their monitors
CREATE TABLE users (id INTEGER, email TEXT, created_at DATETIME);
CREATE TABLE monitors (id INTEGER, user_id INTEGER, url TEXT, last_check DATETIME);

-- Events (changes detected)
CREATE TABLE changes (id INTEGER, url TEXT, detected_at DATETIME, diff TEXT);
```
The agent reads from this database in every session to check product health:
```bash
bash scripts/watchdog-stats.sh
# Output: Total users: 5, Real users: 3, Active monitors: 7, Changes: 206
```
SQLite requires zero infrastructure. It's a single file on disk. It survives process restarts. It handles concurrent reads fine (writes are serialized, which is no problem for a single-agent workload). For an autonomous agent managing a small product, it's almost always the right choice over a managed database service.
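The internals of watchdog-stats.sh aren't shown above, but over this schema a stats script can be little more than a couple of `sqlite3` one-liners (a sketch with a throwaway demo database; the rows are illustrative):

```bash
# Sketch of a stats script over the schema above — an assumption, not the
# real watchdog-stats.sh. Builds a throwaway demo DB, then counts rows.
db="watchdog-demo.db"
rm -f "$db"
sqlite3 "$db" <<'SQL'
CREATE TABLE users (id INTEGER, email TEXT, created_at DATETIME);
CREATE TABLE monitors (id INTEGER, user_id INTEGER, url TEXT, last_check DATETIME);
INSERT INTO users VALUES (1, 'a@example.com', '2026-03-01');
INSERT INTO users VALUES (2, 'b@example.com', '2026-03-02');
INSERT INTO monitors VALUES (1, 1, 'https://example.com', '2026-03-03');
SQL
users=$(sqlite3 "$db" 'SELECT COUNT(*) FROM users;')
monitors=$(sqlite3 "$db" 'SELECT COUNT(*) FROM monitors;')
echo "Total users: $users, Active monitors: $monitors"
```

The whole "database layer" is the `sqlite3` binary plus one file on disk.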
A Concrete Implementation: Reading Memory at Session Start
Here's the exact read sequence our agent follows at every session start (simplified):
```bash
# 1. Working memory — what was I doing?
cat memory/working.md

# 2. Semantic memory — what do I know?
cat memory/principles.md       # distilled lessons
cat memory/state.md            # current strategy + priorities
cat memory/agent-metrics.md    # quality scorecard

# 3. Episodic memory — what happened recently?
ls logs/sessions/ | tail -3    # last 3 session logs
# (read the most recent one)

# 4. Inbox — any new messages from owner?
cat memory/inbox.md

# 5. Sub-agent outboxes — any drafts or reports waiting?
cat agents/blog-writer/memory/outbox.md
```
This entire read takes about 3 seconds. No API calls. No embeddings. No similarity search. The context loaded into the agent's working context window is ~5-10KB — well within any model's context limit.
At session end, the agent writes:
```bash
# Update working memory (next session's starting state)
cat > memory/working.md << 'EOF'
# Working Memory — Session #130
...current state...
EOF

# If something new was learned:
echo "### P84: [new principle]" >> memory/principles.md

# Session log:
cat > logs/sessions/20260303-063000.md << 'EOF'
# Session Log — 2026-03-03 06:30 UTC
...what happened...
EOF
```
The Knowledge Base: Long-Lived Research
Beyond the core memory files, we maintain a knowledge base for longer-lived research:
- knowledge/research/ — deep dives on specific topics (agent architectures, SEO keywords, competitor analysis)
- knowledge/playbooks/ — step-by-step proven processes
- knowledge/tools/ — API and tool documentation
- knowledge/threads/ — ongoing investigations that span multiple sessions
Before starting any research, the agent checks the knowledge base first. This prevents re-researching topics that were already studied. The knowledge base gets searched with grep before any web searches:
```bash
bash scripts/skills/knowledge-search.sh "vector database agent"
# Found: knowledge/research/agent-architectures.md — review before searching web
```
This is essentially a manually curated retrieval layer — the agent writes what it learns, and grep finds it later. For a knowledge base up to a few hundred files, this is faster and more reliable than a vector search.
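The core of such a search script can be a single recursive, case-insensitive grep (a sketch — the real scripts/skills/knowledge-search.sh isn't shown, and the demo file below is a fixture):

```bash
# Demo setup: one knowledge-base file (illustrative content)
mkdir -p knowledge/research
echo "Notes on vector database trade-offs for agents" \
  > knowledge/research/agent-architectures.md

# Sketch of a grep-based knowledge search (hypothetical implementation)
query="vector database"
hits=$(grep -ril "$query" knowledge/ 2>/dev/null)
if [ -n "$hits" ]; then
  echo "Found: $hits — review before searching web"
else
  echo "No local notes — proceed to web search"
fi
```

The "retrieval quality" of this layer is determined entirely by how well the agent writes its notes, not by the search mechanism.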
Memory Integrity Checks: The Part Everyone Skips
One thing almost no tutorial covers: memory integrity. If your memory system is wrong, your agent acts on false beliefs. This is worse than no memory, because the agent thinks it knows something it doesn't.
We run three checks at every session start:
1. Staleness check — alert if any core memory file hasn't been updated in too long:
```bash
bash scripts/skills/memory-staleness-check.sh
# WARN: memory/principles.md is 72h old (threshold: 48h)
```
2. Integrity check — verify files exist, have content, and don't have obvious corruption:
```bash
bash scripts/skills/memory-integrity-check.sh
# checks: file existence, minimum size, principle count, no duplicate entries
```
3. Checkpoint check — detect if the previous session was interrupted mid-action (leaving partial state):
```bash
bash scripts/skills/checkpoint.sh check
# PENDING = previous session was interrupted, verify state before continuing
```
These three checks take about 1 second and have prevented several sessions from acting on corrupted or stale state.
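The staleness check is the simplest of the three to reproduce: compare each file's mtime to a threshold. A sketch (the real memory-staleness-check.sh isn't shown; the threshold comes from the example output above, and the `touch` line is demo setup):

```bash
# Demo setup: a freshly touched file, so its age is ~0h
mkdir -p memory
touch memory/principles.md

# Staleness check sketch: warn if a core memory file hasn't changed recently
threshold_h=48
now=$(date +%s)
# mtime via GNU stat, falling back to BSD stat
mtime=$(stat -c %Y memory/principles.md 2>/dev/null \
        || stat -f %m memory/principles.md)
age_h=$(( (now - mtime) / 3600 ))
if [ "$age_h" -gt "$threshold_h" ]; then
  echo "WARN: memory/principles.md is ${age_h}h old (threshold: ${threshold_h}h)"
else
  echo "OK: memory/principles.md is ${age_h}h old"
fi
```

Loop the same logic over each core file to cover the whole memory directory.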
When You Actually Need a Vector Database
File-based memory has limits. These are the situations where we'd switch to vectors:
- Corpus size over ~1,000 documents — grep becomes slow, and keyword search misses semantic matches. Vector similarity search starts paying off above roughly 500-1,000 chunks.
- Retrieval is the bottleneck — if your agent's main task is finding relevant information from a large collection (company docs, codebases, customer emails), vector search is the right tool. If retrieval is incidental, it isn't.
- User queries are unpredictable — if you can't predict what the agent will need to retrieve, vector search handles the long tail better. For our agent's self-memory (always the same files), predictions are perfect.
- Cross-lingual or paraphrase retrieval — if the stored content and the query use different vocabulary for the same concept, embeddings beat keyword search. Our memory files are written by the same agent that reads them, so vocabulary is consistent.
None of these apply to a typical personal or single-product autonomous agent. They become relevant when you're building multi-user systems or enterprise applications.
The Hidden Benefit: Debuggable Memory
One underrated advantage of file-based memory: it's completely transparent. You can read it. You can grep it. You can track it in git. You can diff it between sessions to see exactly what changed.
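Diffing memory between sessions is one `git diff` away, assuming the memory/ directory is committed each session (the demo repo, file contents, and commit messages below are illustrative):

```bash
# Demo setup: a repo with two "session" commits touching memory/
git init -q demo-repo
cd demo-repo
git config user.email "agent@example.com"   # demo identity
git config user.name "agent"
mkdir -p memory
echo "P1: check the knowledge base first" > memory/principles.md
git add -A && git commit -qm "session 129"
echo "P2: verify state before continuing" >> memory/principles.md
git add -A && git commit -qm "session 130"

# What did the last session change in memory?
diff_out=$(git diff HEAD~1 -- memory/)
echo "$diff_out"
```

The diff shows exactly which beliefs were added or rewritten — no special tooling, just version control on text files.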
When the agent makes a surprising decision, the first place to look is its memory files. Is the state stale? Did a principle get overwritten with something wrong? Is the inbox full of conflicting directives?
Vector databases are black boxes. You can log what was retrieved, but debugging why a particular embedding ranked first requires deep tooling. For an agent you're iterating on daily, transparency beats theoretical performance almost every time.
Your agent probably reads external data — LLM provider status pages, competitor pricing, changelog feeds. When those pages change, your agent needs to know. WatchDog monitors any website and alerts you when something changes — set it up in 2 minutes, no code required.