Local-First Project Memory

Your codebase has a story.
Lore remembers it.

Every architectural decision, invariant, gotcha, and abandoned approach — captured automatically, structured, versioned in git, and injected right into your AI coding sessions.

```bash
$ lore ui              # Local browser dashboard
$ lore prompt "auth"   # Context Compactor
```

Install & Quick Start

Zero configuration required. Start capturing context in seconds.

```bash
$ npm install -g lore-memory   # Install globally
$ cd your-project
$ lore init                    # Sets up .lore/ and scans for comments
$ lore watch --daemon          # Starts passive capture in the background
$ lore ui                      # View memory bank in a local web dashboard
$ lore prompt "auth"           # Generate an AI system prompt
```

The 4 Pillars of Lore

Lore categorizes engineering knowledge into four distinct semantic types, so the context it surfaces is always relevant to the problem at hand.

Decisions

The clear "Why" behind your tech stack choices. Stop debating the same architectural tradeoffs every six months (e.g., "Use Postgres over MongoDB").

Invariants 🔴

Critical, unbreakable rules. e.g., "All auth tokens must be validated on every request." If these are broken, the system fails.

Gotchas ⚠️

Non-obvious behaviors and footguns. Documented once, so your team never again wastes hours debugging the same flaky `Date.now()` test.

The Graveyard 🪦

Failed experiments and abandoned approaches. Never waste time re-trying GraphQL six months after its n+1 queries killed the DB.
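
On disk, each pillar could be captured as a small structured entry. Here is a hypothetical Graveyard record sketched with invented field names and values — Lore's real schema may differ:

```yaml
# Hypothetical .lore/ entry — every field name and value here is illustrative
type: graveyard
title: GraphQL gateway experiment
created: 2024-03-02
summary: >
  Fronted the REST services with a GraphQL gateway; unbatched resolvers
  caused n+1 queries that saturated the database. Rolled back.
linked_files:
  - src/gateway/schema.ts
```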

The Developer Workflow

Lore fits seamlessly into your daily Unix workflow. It watches, it mines, and it serves context exactly when you need it.

1. Passive Capture via Watcher

You don't have to stop coding to document. Lore passively scans your source code for signal phrases (`WARNING:`, `hack`, `we chose`) and automatically drafts entries.

If you delete a file longer than 100 lines, Lore drafts a Graveyard record. If you edit the same file five times in a week, it drafts a Gotcha.
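
The scanning heuristic described above can be sketched as a pass over each source line, matching the signal phrases from the docs. The types and function names below are invented for illustration and are not Lore's internals:

```typescript
// Sketch of a signal-phrase scanner. The phrases come from the docs;
// the draft model is a stand-in for whatever Lore actually stores.
type DraftType = "decision" | "invariant" | "gotcha" | "graveyard";

interface Draft {
  type: DraftType;
  line: number;
  excerpt: string;
}

const SIGNALS: Array<[RegExp, DraftType]> = [
  [/\bWARNING:/i, "gotcha"],
  [/\bhack\b/i, "gotcha"],
  [/\bwe chose\b/i, "decision"],
];

function scanSource(source: string): Draft[] {
  const drafts: Draft[] = [];
  source.split("\n").forEach((text, i) => {
    for (const [pattern, type] of SIGNALS) {
      if (pattern.test(text)) {
        drafts.push({ type, line: i + 1, excerpt: text.trim() });
        break; // one draft per line is enough
      }
    }
  });
  return drafts;
}
```

Running this over a file containing `// HACK: retry twice` would draft a Gotcha pointing at that line.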

Lore Pending Drafts UI

2. The Local UI Dashboard

Forget reading JSON files. Spin up a stunning, real-time local web server on port 3000 to search your project memory.

View your Lore Score health, manage drafts, and read beautifully formatted Markdown entries of your project's history in a dedicated graphical interface.

Lore Knowledge Base UI

3. Zero-Shot "Context Compactor"

Run `lore prompt "Refactoring Auth"` and Lore instantly uses semantic vector search to compile a perfectly formatted, zero-shot system prompt.

Saves Context Tokens: Rather than dumping your entire project documentation into the AI, Lore uses graph-weighted context to inject only the precise invariants, decisions, and gotchas the LLM needs to know before it touches your code.
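
The retrieval step can be pictured as a plain top-k search over entry embeddings. This sketch uses toy vectors and omits Lore's actual embedding model and graph weighting:

```typescript
// Rank stored entries by cosine similarity to the query embedding and
// keep only the k closest — a stand-in for Lore's retrieval step.
interface Entry {
  title: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(query: number[], entries: Entry[], k: number): Entry[] {
  return [...entries]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

Only those k entries are injected into the prompt, which is where the token savings come from.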

```bash
$ lore prompt "auth" | pbcopy
Semantic search complete
3 invariants injected
1 gotcha injected

System prompt copied to clipboard!
```

4. Connect Claude & Cursor via MCP

Solves the "Blank Slate" Problem: Add the MCP server to your Claude Code or Cursor config and Lore operates as an active, persistent memory bank.

Prevents Hallucinated Refactors: Claude automatically calls `lore_why` when you ask it to work on a file. It won't break edge cases, because it is told the constraints upfront. It won't suggest adding an inline fraud check if Lore's Invariant demands a 200ms SLA.

MCP server config:

```json
{
  "mcpServers": {
    "lore": {
      "command": "lore",
      "args": ["serve"]
    }
  }
}
```

5. Maintain Health & Staleness

Traditional wikis die because they go out of date. Lore links entries to specific files; if a linked file is later modified in git history, Lore flags the entry as `[Stale]`.
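
The staleness rule reduces to a timestamp comparison. A minimal sketch, assuming epoch-millisecond timestamps and leaving out how Lore actually reads git history:

```typescript
// An entry is stale when any file it links to has been committed
// more recently than the entry was last updated. The shape of this
// record is an assumption, not Lore's real data model.
interface LinkedEntry {
  updatedAt: number;                   // when the entry was last edited
  linkedFiles: Record<string, number>; // file path -> last commit time
}

function isStale(entry: LinkedEntry): boolean {
  return Object.values(entry.linkedFiles).some(
    (lastCommit) => lastCommit > entry.updatedAt
  );
}
```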

Run `lore score` to gamify your documentation and measure your project's memory health based on Coverage, Freshness, and Depth.
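
As a rough illustration, a health score over those three signals might be a weighted average scaled to 0–100. The equal weights below are an assumption, not Lore's published formula:

```typescript
// Hypothetical scoring sketch: the three inputs come from the docs,
// the formula and weights are invented for illustration.
interface HealthInputs {
  coverage: number;  // fraction of modules with at least one entry, 0..1
  freshness: number; // fraction of entries not flagged [Stale], 0..1
  depth: number;     // avg entries per covered module, normalized to 0..1
}

function loreScore({ coverage, freshness, depth }: HealthInputs): number {
  // Equal weights, scaled to 0..100 and rounded.
  return Math.round(((coverage + freshness + depth) / 3) * 100);
}
```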

Lore Health Score UI
Lore Dependency Graph UI

6. Visualize Project Context

Understand how different pieces of your codebase interact. Lore maps the relationships between your components, invariants, and decisions to give you a bird's-eye view of your architecture.

The Dependency Graph helps ensure you never accidentally break a downstream service by refactoring an upstream module.
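
The question the graph answers — "what breaks downstream if I change this?" — is a reachability query. A minimal sketch over an assumed adjacency-list shape, not Lore's real data model:

```typescript
// Breadth-first walk over "module -> modules that depend on it" edges,
// collecting everything reachable from the module being changed.
type Graph = Record<string, string[]>;

function downstream(graph: Graph, start: string): string[] {
  const seen = new Set<string>();
  const queue = [...(graph[start] ?? [])];
  while (queue.length > 0) {
    const next = queue.shift()!;
    if (seen.has(next)) continue;
    seen.add(next);
    queue.push(...(graph[next] ?? []));
  }
  return [...seen];
}
```

For a graph where `api` depends on `auth`, and `web` and `billing` depend on `api`, `downstream(graph, "auth")` reports all three as impacted.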