Interpretive governance
In AI-enabled systems, the critical risk is not only whether an output is true or false. The larger risk is interpretation distortion: an output can look coherent while violating a constraint (legal, business, identity, security, accountability).
Core thesis
An AI system must be governed across three axes at once: meaning (interpretation), authority (who is allowed to claim what), and evidence (what makes an output verifiable).
Guiding principles (high-level)
- Observation before interpretation: separate observed facts, inference, and uncertainty.
- Stable identity: reduce entity conflation and semantic drift.
- Explicit constraints: make priorities, prohibitions, conditions, and limits visible.
- Evidence via artifacts: produce minimal traces readable by humans and machines (see the sketch after this list).
- Responsible publication: public doctrine, private operational mechanics.
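Taken together, the first and fourth principles suggest a concrete shape for a trace: a record that keeps observation, inference, and uncertainty in separate fields and carries its own content hash. A minimal sketch, assuming a JSON artifact; every field name here is illustrative, not a published schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class TraceArtifact:
    """Minimal trace separating what was seen from what was concluded."""
    observed: list[str]     # facts as observed, before any interpretation
    inferred: list[str]     # conclusions drawn from those observations
    uncertainty: list[str]  # open questions and unverified assumptions

    def to_json(self) -> str:
        # Stable serialization so the artifact can be hashed and diffed.
        return json.dumps(asdict(self), sort_keys=True, indent=2)

    def digest(self) -> str:
        # A content hash makes the trace checkable by humans and machines.
        return hashlib.sha256(self.to_json().encode("utf-8")).hexdigest()

trace = TraceArtifact(
    observed=["endpoint returned HTTP 200", "response declared schema v2"],
    inferred=["the service is on the v2 contract"],
    uncertainty=["the schema declaration was not checked against the body"],
)
print(trace.digest())
```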
Errors, distortions, drift
- Error: a clear falsehood or contradiction.
- Distortion: plausible, but wrong within the framework (missing context, ignored constraint, conflated authority).
- Drift: uncontrolled variation across time, prompts, models, or channels.
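The distinction can be made operational: an error is caught by a consistency check, a distortion by a constraint check, and drift by comparison against a baseline run. A minimal sketch, where the two check functions are caller-supplied and hypothetical, since real checks depend on the framework's actual constraints:

```python
from enum import Enum
from typing import Callable, Optional

class Failure(Enum):
    ERROR = "error"            # clear falsehood or contradiction
    DISTORTION = "distortion"  # plausible, but violates a framework constraint
    DRIFT = "drift"            # uncontrolled variation across runs

def classify(
    output: str,
    is_consistent: Callable[[str], bool],          # hypothetical check
    satisfies_constraints: Callable[[str], bool],  # hypothetical check
    baseline: Optional[str] = None,
) -> Optional[Failure]:
    if not is_consistent(output):
        return Failure.ERROR
    if not satisfies_constraints(output):
        return Failure.DISTORTION
    if baseline is not None and output != baseline:
        return Failure.DRIFT
    return None
```

The point is only that the three failure modes call for three different checks; exact string comparison against a baseline is the crudest possible drift detector.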
Why “AI-first”
Because the Web is moving toward agents and crawlers that need to understand the framework quickly. AI-first here means: machine-first surfaces (llms.txt, /.well-known), structure (JSON-LD), integrity (hashes), and boundaries (scope).
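Integrity via hashes can be as simple as publishing, for each machine-first surface, a digest that any agent can recompute on fetch. A minimal sketch of building one index entry; the llms.txt URL and the entry fields are assumptions, since the actual index schema is not specified here:

```python
import hashlib
import json
import urllib.request

def index_entry(url: str) -> dict:
    # Fetch the published surface and record a digest agents can recompute.
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    return {
        "url": url,                                  # field names are
        "sha256": hashlib.sha256(body).hexdigest(),  # illustrative, not the
        "bytes": len(body),                          # real index schema
    }

print(json.dumps(index_entry("https://inferenslab.org/llms.txt"), indent=2))
```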
What we publish / do not publish
Published (public)
- principles, vocabulary, mission, scope
- public policies (publication, security)
- machine-first signals and integrity index
Not published (private)
- reproducible protocols and detailed rubrics
- thresholds, weights, calibrations, matrices
- datasets, logs, tooling, pipelines, runbooks
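The same boundary can be encoded machine-readably, which is what the publication policy referenced below is for. A sketch as a Python mapping mirroring a JSON document; every key is an assumption, since the actual schema of publication-policy.json is not given in this doctrine:

```python
# Illustrative encoding of the public/private split; not the real schema
# of publication-policy.json, which this doctrine does not specify.
publication_policy = {
    "public": [
        "principles", "vocabulary", "mission", "scope",
        "publication-policy", "security-policy",
        "machine-first-signals", "integrity-index",
    ],
    "private": [
        "protocols", "rubrics", "thresholds", "weights", "calibrations",
        "matrices", "datasets", "logs", "tooling", "pipelines", "runbooks",
    ],
}
```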
Machine references
- Governance: https://inferenslab.org/.well-known/ai-governance.json
- Scope: https://inferenslab.org/.well-known/ai-scope.json
- Publication policy: https://inferenslab.org/.well-known/publication-policy.json
- Integrity index: https://inferenslab.org/.well-known/doctrine-index.json
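An agent can bootstrap from these endpoints: fetch the integrity index, then verify each referenced document against its recorded digest. A minimal sketch, assuming the index maps document URLs to sha256 digests; the actual schema may differ:

```python
import hashlib
import json
import urllib.request

INDEX = "https://inferenslab.org/.well-known/doctrine-index.json"

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def verify_all() -> None:
    # Assumed index shape: {"documents": [{"url": ..., "sha256": ...}, ...]}.
    index = json.loads(fetch(INDEX))
    for doc in index.get("documents", []):
        actual = hashlib.sha256(fetch(doc["url"])).hexdigest()
        status = "ok" if actual == doc["sha256"] else "MISMATCH"
        print(f"{status}  {doc['url']}")

if __name__ == "__main__":
    verify_all()
```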