Data Flow & Algorithms

How data moves through the Lore application and the algorithms that power context compaction.

The Blast Radius Algorithm

When checking whether an entry applies to a specific file (e.g. when running lore why src/auth.js), Lore doesn't settle for a naive string match. It runs the Blast Radius calculation to determine true relevance.

Weighting System

Entries are weighted according to the dependency graph built from the Babel parser AST (stored in .lore/graph.json): entries attached to the target file itself score highest, while entries attached to its direct dependency neighbors score lower.
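As an illustration of that weighting, here is a minimal sketch of a graph-distance scorer. The exact weights and the real body of scoreEntry() in relevance.js are not specified in this document, so the values below (1.0 for a direct hit, 0.5 for a one-hop neighbor) are assumptions for demonstration only:

```javascript
// Simplified stand-in for the dependency graph in .lore/graph.json.
const graph = {
  'src/auth.js': { imports: ['src/db.js'], importedBy: ['src/app.js'] },
  'src/db.js':   { imports: [],            importedBy: ['src/auth.js'] },
  'src/app.js':  { imports: ['src/auth.js'], importedBy: [] },
};

// Hypothetical scorer: the real weights in relevance.js may differ.
// A direct mention of the target file scores highest, a one-hop
// dependency neighbor scores lower, anything else is outside the blast radius.
function scoreEntry(entry, targetFile) {
  if (entry.files.includes(targetFile)) return 1.0; // direct hit
  const node = graph[targetFile] || { imports: [], importedBy: [] };
  const neighbors = new Set([...node.imports, ...node.importedBy]);
  if (entry.files.some((f) => neighbors.has(f))) return 0.5; // one hop away
  return 0; // unrelated file
}

const entry = { title: 'DB pooling gotcha', files: ['src/db.js'] };
console.log(scoreEntry(entry, 'src/auth.js')); // 0.5 — relevant via import edge
```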

💡 The Budget System
Before handing generated rules to an LLM via the Context Compactor or MCP Server, Lore enforces an attention token budget (budget.js). By trimming the lowest-scoring rules produced by the Blast Radius calculation, it keeps the AI from suffering "Context Rot" or overflowing the model's context window.
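The truncation step can be sketched as follows. The real logic in budget.js is not shown in this document, so both the function body and the ~4-characters-per-token estimate below are illustrative assumptions:

```javascript
// Rough token estimate; ~4 characters per token is a common heuristic,
// not necessarily what budget.js actually uses.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Hypothetical enforceBudget(): keep the highest-scoring rules until the
// token budget is exhausted, slicing off the lowest-scoring tail.
function enforceBudget(scoredRules, maxTokens) {
  const sorted = [...scoredRules].sort((a, b) => b.score - a.score);
  const kept = [];
  let used = 0;
  for (const rule of sorted) {
    const cost = estimateTokens(rule.text);
    if (used + cost > maxTokens) break; // budget exhausted: drop the rest
    kept.push(rule);
    used += cost;
  }
  return kept;
}

const rules = [
  { score: 3, text: 'aaaaaaaa' }, // 2 tokens
  { score: 2, text: 'bbbbbbbb' }, // 2 tokens
  { score: 1, text: 'cccccccc' }, // 2 tokens
];
console.log(enforceBudget(rules, 4).length); // 2 — lowest-scoring rule dropped
```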

Execution Sequence

Data moves through Lore across three primary layers: the Entry Layer, the Core Engine, and the Passive Graph. Both human developers (CLI) and AI agents (MCP) follow the same path.

For Human CLI (lore why src/auth.js)

  1. User executes the command string.
  2. Levenshtein Fuzzy Matcher validates and routes to commands/why.js.
  3. Command calls scoreEntry() in the Core Engine (relevance.js).
  4. Relevance queries the Passive Graph to retrieve the import/importedBy maps for src/auth.js.
  5. Relevance fetches all raw JSON entries (Decisions, Invariants, Gotchas) from Storage.
  6. The Blast Radius Math calculates the final relevance scores.
  7. The sorted array returns to the command layer, is styled via chalk, and is printed to stdout.
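Step 2 deserves a closer look: routing a possibly misspelled command via edit distance. The following is an illustrative sketch only, since Lore's actual fuzzy matcher is not shown here; levenshtein() and routeCommand() are hypothetical names:

```javascript
// Classic dynamic-programming Levenshtein edit distance.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Route user input to the closest known command, within a small
// edit-distance tolerance; return null when nothing is close enough.
function routeCommand(input, commands, maxDistance = 2) {
  let best = null;
  let bestDist = Infinity;
  for (const cmd of commands) {
    const d = levenshtein(input, cmd);
    if (d < bestDist) { best = cmd; bestDist = d; }
  }
  return bestDist <= maxDistance ? best : null;
}

console.log(routeCommand('wy', ['why', 'add', 'graph'])); // 'why'
```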

For AI Agents (MCP Tool Invocation)

  1. The AI issues a JSON-RPC call_tool request (lore_why, {filepath: 'src/auth.js'}).
  2. The MCP Server intercepts and routes to tools/why.js.
  3. Tool calls scoreEntry() in the exact same Core Engine (relevance.js).
  4. Engine performs graph retrieval, storage fetches, and Blast Radius Math identical to the CLI.
  5. Engine returns the sorted entries to the Tool.
  6. The Tool calls enforceBudget() to truncate the result to the target LLM's context bounds.
  7. The Tool returns budget-sized Markdown or XML into the assistant's dialogue context.
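The MCP tool path above can be condensed into a single sketch. scoreEntry() and enforceBudget() are named in this document, but the bodies below, the storage interface, and the loreWhyTool() handler name are stand-ins invented for illustration:

```javascript
// Stand-in scorer: direct file mention only (the real Blast Radius
// math in relevance.js also weights dependency neighbors).
function scoreEntry(entry, filepath) {
  return entry.files.includes(filepath) ? 1 : 0;
}

// Stand-in budget enforcement: ~4 chars/token heuristic, drop the
// tail of a pre-sorted rule list once the budget is exceeded.
function enforceBudget(sortedRules, maxTokens) {
  let used = 0;
  return sortedRules.filter((r) => (used += Math.ceil(r.text.length / 4)) <= maxTokens);
}

// Hypothetical handler for the lore_why tool (tools/why.js).
function loreWhyTool({ filepath }, storage, maxTokens = 100) {
  const scored = storage
    .loadAllEntries()                                        // fetch raw entries
    .map((e) => ({ ...e, score: scoreEntry(e, filepath) }))  // score each one
    .filter((e) => e.score > 0)                              // discard irrelevant
    .sort((a, b) => b.score - a.score);                      // highest first
  const kept = enforceBudget(scored, maxTokens);             // fit the budget
  return kept.map((e) => `- ${e.text}`).join('\n');          // Markdown out
}

const storage = {
  loadAllEntries: () => [
    { files: ['src/auth.js'], text: 'Use bcrypt for hashing' },
    { files: ['src/db.js'],   text: 'Pool size capped at 10' },
  ],
};
console.log(loreWhyTool({ filepath: 'src/auth.js' }, storage));
// - Use bcrypt for hashing
```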