Every architectural decision, invariant, gotcha, and abandoned approach — captured automatically, structured, versioned in git, and injected right into your AI coding sessions.
Zero configuration required. Start capturing context in seconds.
Lore categorizes engineering knowledge into four distinct, semantic types, ensuring context is always highly relevant to the problem at hand.
The clear "Why" behind your tech stack choices. Stop debating the same architectural tradeoffs every six months (e.g., "Use Postgres over MongoDB").
Critical, unbreakable rules (e.g., "All auth tokens must be validated on every request"). If these are broken, the system fails.
Non-obvious behaviors and footguns. Documented once, so your team never wastes hours debugging the same `Date.now()` flaky test.
Failed experiments and abandoned approaches. Never waste time re-trying GraphQL six months later, only to rediscover that N+1 queries killed the DB the first time.
Lore fits seamlessly into your daily Unix workflow. It watches, it mines, and it serves context exactly when you need it.
You don't have to stop coding to document. Lore passively scans your source code for signal phrases (`WARNING:`, `hack`, `we chose`) and automatically drafts entries.
If you delete a file over 100 lines, Lore drafts a Graveyard record. If you edit the same file 5 times in a week, it drafts a Gotcha.
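Conceptually, the mining step resembles scanning source files for those signal phrases. A minimal sketch of the idea (the file, its contents, and the phrase list are illustrative, not Lore's actual scanner):

```shell
# Create a sample source file containing signal phrases (illustrative only).
mkdir -p /tmp/lore-demo
cat > /tmp/lore-demo/payments.ts <<'EOF'
// WARNING: Stripe webhooks can arrive out of order; we chose to
// dedupe on event id before processing.
export function handleWebhook() {}
EOF

# Scan for signal phrases, the way a miner might flag draft candidates.
grep -rnE 'WARNING:|hack|we chose' /tmp/lore-demo
```

The grep prints each matching line with its file and line number, which is exactly the information a draft entry needs to link back to the code.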
Forget reading JSON files. Spin up a stunning, real-time local web server on port 3000 to search your project memory.
View your Lore Score health, manage drafts, and read beautifully formatted Markdown entries of your project's history in a dedicated graphical interface.
Run `lore prompt "Refactoring Auth"` and Lore instantly uses semantic vector search to compile a perfectly formatted, zero-shot system prompt.
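At a high level, prompt compilation means selecting the entries relevant to a topic and assembling them into one prompt. A rough sketch of that idea using plain keyword matching over hypothetical entry files (Lore itself uses semantic vector search, not grep):

```shell
# Hypothetical entry files standing in for Lore's structured records.
mkdir -p /tmp/lore-entries
printf '%s\n' '# Invariant: all auth tokens must be validated on every request' \
  > /tmp/lore-entries/auth-tokens.md
printf '%s\n' '# Gotcha: Date.now() flaky test' \
  > /tmp/lore-entries/flaky-clock.md

# Select topic-relevant entries and concatenate them into a system prompt.
grep -li 'auth' /tmp/lore-entries/*.md | xargs cat > /tmp/system-prompt.md
cat /tmp/system-prompt.md
```

Only the auth-related entry lands in the prompt; the unrelated gotcha stays out, which is the token-saving behavior described below.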
Saves Context Tokens: Rather than dumping your entire project documentation into the AI, Lore uses graph-weighted context to inject only the precise invariants, decisions, and gotchas the LLM needs to know before it touches your code.
Solves the "Blank Slate" Problem: Add the MCP server to your Claude Code or Cursor config and Lore operates as an active, persistent memory bank.
Prevents Hallucinated Refactors: Claude automatically calls `lore_why` when you ask it to work on a file. It won't break edge cases, because it is told the constraints upfront. It won't suggest adding an inline fraud check if Lore's Invariant demands a 200ms SLA.
```json
{
  "mcpServers": {
    "lore": {
      "command": "lore",
      "args": ["serve"]
    }
  }
}
```
Traditional wikis die because they go out of date. Lore links entries to specific files. If a linked file is later modified in git history, Lore flags the entry as `[Stale]`.
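A staleness check like this can be approximated with git plumbing: compare the commit an entry was linked at against the commit that last touched the file. A rough sketch under illustrative assumptions (a throwaway repo and a pretend pinned commit, not Lore's internals):

```shell
# Set up a throwaway repo with one tracked file.
rm -rf /tmp/lore-stale
git init -q /tmp/lore-stale
git -C /tmp/lore-stale config user.email demo@example.com
git -C /tmp/lore-stale config user.name demo
echo 'v1' > /tmp/lore-stale/auth.ts
git -C /tmp/lore-stale add auth.ts
git -C /tmp/lore-stale commit -qm 'add auth'

# Pretend a Lore entry was linked to auth.ts at this commit.
pinned=$(git -C /tmp/lore-stale rev-parse HEAD)

# The file changes later.
echo 'v2' > /tmp/lore-stale/auth.ts
git -C /tmp/lore-stale commit -qam 'refactor auth'

# Staleness check: has the file been touched since the pinned commit?
latest=$(git -C /tmp/lore-stale log -1 --format=%H -- auth.ts)
if [ "$latest" != "$pinned" ]; then
  echo '[Stale] entry for auth.ts'
fi
```

Because `auth.ts` was committed again after the pinned commit, the check fires and the entry would be flagged.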
Run `lore score` to gamify your documentation and measure your project's memory health based on Coverage, Freshness, and Depth.
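The score itself could be any blend of those three metrics; a toy sketch with made-up weights and component values (not Lore's actual formula):

```shell
# Made-up component scores out of 100 (illustrative only).
coverage=80
freshness=60
depth=70

# Toy weighting: coverage counts double, freshness and depth once each.
score=$(( (2 * coverage + freshness + depth) / 4 ))
echo "Lore Score: $score/100"
# prints: Lore Score: 72/100
```

Weighting coverage more heavily reflects the intuition that undocumented code hurts more than slightly stale docs, but the real weights are Lore's to define.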
Understand how different pieces of your codebase interact. Lore maps the relationships between your components, invariants, and decisions to give you a bird's-eye view of your architecture.
The Dependency Graph ensures you never accidentally break a downstream service by refactoring an upstream module.