Lore is an open-source, CLI-first Project Memory system designed specifically for engineers and AI Coding
Agents (like Claude, Copilot, and Cursor).
It acts as the ultimate "glue" between human context and AI
output by capturing architectural decisions, constraints, and known traps, and structurally injecting them
straight into your AI's context window.
Getting a project up and running with Lore takes less than 60 seconds.
Lore is published on the public npm registry. Install it globally so you can use it in any repository:
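A sketch of the install, assuming the package is published under the name `lore` (confirm the exact package name on npm before running this):

```shell
# Hypothetical package name -- check npm for the real one
npm install -g lore

# Confirm the CLI is on your PATH (assumes a standard --version flag)
lore --version
```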
Navigate to the root of your existing codebase and run `lore init`.
This instantly creates a hidden `.lore/` database in your repository (which you will commit to
Git so your entire team shares the brain), and it installs an optional "Architect-in-the-Loop" post-commit
hook.
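In practice, bootstrapping a repo looks like this (the commit message is just a suggestion):

```shell
cd my-existing-project
lore init

# Commit the new .lore/ database so your whole team shares the brain
git add .lore/
git commit -m "chore: add Lore project memory"
```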
To make your AI respect your project's rules without being prompted, drop a file named `CLAUDE.md` (or
`.cursorrules`) into the root of your repository with this exact rule:
> "Before you modify, create, or explain any file in this codebase, you must ALWAYS execute the
> `lore_why` tool on that file path first to check if there are any architectural invariants,
> gotchas, or decisions you need to respect."
Finally, connect the MCP server to your AI agent's config to enable the tools natively.
```json
{
  "mcpServers": {
    "lore": { "command": "lore", "args": ["serve"] }
  }
}
```
Here is exactly how you use Lore as your daily driver:
If you ever forget a command, simply type `lore` into your terminal and hit Enter. You will be
greeted with a stunning interactive menu that guides you through every feature, no manual reading required!
When you make a technical decision, or discover a nasty bug ("Gotcha"), log it instantly:
```shell
lore log
```

The CLI will beautifully prompt you to categorize the entry into one of four types:

- Rules that must never be broken (e.g. "Never console.log auth tokens").
- Technical choices (e.g. "We chose Next.js over Vite").
- Traps that trigger bugs (e.g. "Stripe webhooks can fire twice").
- Abandoned code (e.g. "We removed Redis in v2").
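A logging session might look like the following; the interactive prompts shown in the comments are illustrative, not verbatim output:

```shell
lore log
# ? Entry type ........ Gotcha
# ? Summary ........... Stripe webhooks can fire twice
# ? Linked file(s) .... src/billing/webhooks.js   (hypothetical path)
```

Linking the entry to a file is what later powers staleness detection, so it is worth filling in when prompted.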
Software engineers hate writing wikis. With Lore, you don't have to.
Run `lore watch --daemon` in the background. As you code in your IDE, whenever you type
`// WARNING:` or `// HACK:`, the Lore daemon picks up the file save, reads your
comment, and instantly drafts a beautiful Markdown rule in the background. Review the drafts at the end of the
sprint with `lore drafts`.
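The passive-mining loop, end to end (the comment markers come from the text above; everything else is a sketch):

```shell
# Start the watcher once
lore watch --daemon

# Then just code. A saved file containing a comment like:
#   // HACK: Stripe can deliver the same webhook twice; dedupe by event ID
# is picked up by the daemon and drafted as a Markdown rule.

# At the end of the sprint, review and accept or discard the drafts
lore drafts
```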
Want to visually explore your project's brain?
Run `lore ui` to spin up a blazingly fast local web server on port 3333. You can explore the
health of your project (the "Lore Score"), search your database, and view all your team's architectural
decisions linked to the exact files they govern.
Traditional wikis suffer from "Context Rot": they go out of date and people stop trusting them.
Because you link Lore rules directly to source files (`src/auth.js`), if a developer
refactors that file six months later, Lore compares the OS timestamps and automatically flags the rule as
`[Stale]`. Run `lore stale` to instantly catch outdated documentation before it
confuses your AI!
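A minimal staleness check, assuming a rule was previously linked to `src/auth.js` (the comments describe the behavior rather than showing real output):

```shell
# After src/auth.js has been refactored by a teammate:
lore stale
# Any rule linked to src/auth.js that predates the edit is flagged [Stale]
```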
Don't use Claude Code? If you prefer ChatGPT in the browser, don't dump your entire repository into the chat (it wastes context and money).
Instead, run `lore prompt "Refactoring Auth" | pbcopy`.
Lore will use completely offline semantic vector search (via Ollama) and compute the PageRank "Blast Radius" of your authentication code. It will instantly format a perfectly concise set of strict rules and drop them right onto your clipboard, ready to paste into ChatGPT!
Run `lore init` and you're done: zero YAML, zero env vars. See the difference context makes in your AI coding sessions.
Lore isn't a wiki. It's not a note-taking app. It's a structured, git-native memory layer.
| Feature | Lore | Notion | Confluence | README.md |
|---|---|---|---|---|
| Git-versioned | ✅ | ❌ | ❌ | ✅ |
| Auto-captures from code | ✅ | ❌ | ❌ | ❌ |
| AI-native (MCP) | ✅ | ❌ | ❌ | ❌ |
| Staleness detection | ✅ | ❌ | ❌ | ❌ |
| Structured types | ✅ | ~ | ~ | ❌ |
| 100% offline | ✅ | ❌ | ❌ | ✅ |
| Free & open source | ✅ | ❌ | ❌ | ✅ |
Lore integrates seamlessly with the tools you already use.
Your codebase's intelligence stays entirely on your machine.
No server connection required. Your codebase never leaves your machine. The Context Compactor runs entirely offline on local Ollama embeddings.
Your Lore database (.lore/) is just Markdown and JSON. It lives
entirely inside your repository and syncs securely via your existing Git provider alongside your code.
For 90% of the daily workflow (logging, passive mining, searching), Lore requires absolutely zero LLM execution. It is fundamentally a deterministic, high-speed CLI tool.
Start capturing your team's institutional knowledge in under 5 minutes. Free, open source, and runs entirely on your machine.