# Context Compactor

Generate perfectly formatted LLM system prompts containing only the project context that matters.

## How It Works

The `lore prompt` command takes a natural language query describing what you're working on, searches your knowledge base for relevant entries using semantic vector search, and outputs a formatted markdown system prompt to stdout.

```bash
$ lore prompt "refactoring the auth middleware"
```

The output is pure markdown — designed to be piped directly into your clipboard, a file, or an AI tool:

```bash
# Pipe to clipboard (macOS)
$ lore prompt "auth" | pbcopy

# Save to a file
$ lore prompt "database migration" > context.md
```
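
On Linux, any clipboard utility slots in the same way; for example, assuming `xclip` is installed:

```bash
# Pipe to clipboard (Linux, requires xclip)
$ lore prompt "auth" | xclip -selection clipboard
```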

## Semantic Search

When Ollama is running with the `nomic-embed-text` model, Lore uses semantic vector embeddings for search. This means you can query by concept, not just keyword:
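
```bash
# Query by concept; matching entries need not share these exact words
$ lore prompt "why are the tests flaky around time"
```

This is an illustrative query; actual matches depend on what's in your knowledge base. A conceptual phrasing like this could surface the `Date.now()` gotcha from the workflow example below even though the entry never uses the word "flaky".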

Build/rebuild the embedding index at any time:

```bash
$ lore embed
```

> 💡 **Fallback:** If Ollama is not running, `lore prompt` automatically falls back to basic text matching on the title and context fields.
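
In fallback mode there are no embeddings, so matching is literal; concrete keywords work better than conceptual phrasing. An illustrative contrast:

```bash
# With Ollama stopped, a literal keyword still matches titles and context...
$ lore prompt "JWT"
# ...but a purely conceptual query may match nothing
$ lore prompt "how we keep sessions stateless"
```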

## Options

| Flag | Description | Default |
|------|-------------|---------|
| `-t, --threshold <n>` | Minimum relevance score (0.0–1.0). Lower = more results, less precise. | `0.4` |
| `-l, --limit <n>` | Maximum number of entries to include in the prompt. | `10` |

```bash
# Include more entries with lower relevance threshold
$ lore prompt "payments" --threshold 0.3 --limit 20
```
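
The short flags compose the same way; raising the threshold trades recall for precision:

```bash
# Fewer, higher-confidence entries
$ lore prompt "payments" -t 0.6 -l 5
```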

## Output Format

The generated prompt groups entries by type (invariants first, then decisions, gotchas, and graveyard) with markdown formatting. Invariants are given the highest priority since they represent unbreakable constraints the AI must respect.
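
The exact headings depend on the tool's template, but given the grouping above, the output is shaped roughly like this (an illustrative sketch built from the workflow example's entries, not verbatim output):

```markdown
## Invariants
- JWT must be validated on every request

## Decisions
- Chose JWT over server-side sessions

## Gotchas
- Date.now() causes flaky tests
```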

## Workflow Example

```bash
# 1. You want Claude to help refactor auth
$ lore prompt "refactoring auth" | pbcopy

# 2. Paste the system prompt into Claude, ChatGPT, or any LLM
#    The AI now knows:
#    - JWT must be validated on every request (invariant)
#    - You chose JWT over sessions (decision)
#    - Date.now() causes flaky tests (gotcha)
```