Open-source ground truth
for AI.
A portable, vendor-neutral way to ground any large language model in your team's private truth — with a built-in signal for when that grounding runs out.
Five primitives
Each does one job. Together they form the protocol.
- Factlet
One atomic truth about your private information — a decision, a constraint, an anti-pattern.
- FactMap
The structured collection of factlets covering one body of work — your codebase, your product, your customer base.
- Factbook
A packaged FactMap, versioned in git, portable across implementations. The artifact your team owns.
- FactSignal
How strong the grounding is for any given query, measured in bars (0-5). The same vocabulary you've used since your first phone.
- Low-FactSignal warning
A runtime callback that fires when a model is about to answer in a zone with no relevant factlets. Before the answer ships.
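The five primitives compose into a small runtime loop: factlets live in a FactMap, the FactMap scores each query in bars, and a low-bar score fires a warning before any answer ships. The sketch below is illustrative only — the spec is a v0.1 draft, so every class name, the bar thresholds, and the keyword-overlap scoring here are assumptions, not the SDK's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Factlet:
    """One atomic truth: a decision, a constraint, an anti-pattern.
    (Hypothetical shape; the real spec may differ.)"""
    id: str
    statement: str
    tags: set[str] = field(default_factory=set)


@dataclass
class FactMap:
    """A structured collection of factlets covering one body of work."""
    factlets: list[Factlet] = field(default_factory=list)

    def signal(self, query: str) -> int:
        """FactSignal: grounding strength in bars (0-5).
        Toy scoring: count factlets whose tags overlap the query words."""
        words = set(query.lower().split())
        hits = sum(1 for f in self.factlets if f.tags & words)
        return min(hits, 5)


def answer(fact_map: FactMap, query: str, threshold: int = 2) -> int:
    """Low-FactSignal warning: fires before the model's answer ships."""
    bars = fact_map.signal(query)
    if bars < threshold:
        print(f"low-FactSignal warning: {bars}/5 bars for {query!r}")
    # ...hand the grounded context to whichever model you use here...
    return bars
```

A real implementation would score with embeddings rather than tag overlap, but the contract is the same: every query gets a bar count, and the warning callback runs before generation, not after.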
Why a protocol, not a product
Anthropic's Memory belongs to Claude. OpenAI's Memory belongs to GPT. Google's Project Astra belongs to Gemini. Each frontier lab is shipping its own per-vendor private-knowledge layer. Teams that want their factlets in multiple tools — today's reality — end up with that knowledge fragmented across incompatible memory stores.
The Factlet Protocol is the open layer underneath. Same factlets. Same FactMap. Same FactSignal. Whichever model you ask. Whichever IDE you use today and switch from tomorrow.
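Vendor neutrality falls out of keeping the factbook as a plain, git-versioned file that any client renders into model-agnostic context. The sketch below assumes a JSON file shape and helper names that are not in the spec — it only shows the portability idea: one factbook, one preamble, any model.

```python
import json


def load_factbook(path: str) -> list[dict]:
    """Load a git-versioned factbook.
    The JSON shape here is an assumption, not the spec's wire format."""
    with open(path) as fh:
        return json.load(fh)["factlets"]


def grounding_preamble(factlets: list[dict]) -> str:
    """Render factlets into plain text any model can consume.
    The same string prefixes a prompt to Claude, GPT, or Gemini alike."""
    lines = [f"- {f['statement']}" for f in factlets]
    return "Ground your answer in these team facts:\n" + "\n".join(lines)
```

Because the preamble is just text, switching IDEs or models means pointing the new tool at the same file, not migrating a proprietary memory store.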
Status
v0.1 draft. The full specification, reference SDK (Python + TypeScript), and example registry are being prepared. The blog post launch and complete artifact set ship together. Star the spec repo to follow along, or open a Discussion to propose an RFC.
Implementation
Kernora's Nora is the maintained reference implementation of the protocol. The protocol exists with or without any one implementation — Cursor, Claude Code, Continue.dev, Aider, Goose, OpenCode could all read the same factbooks.