AI governance
Policies, boundaries, proof obligations, change control, and machine-first publication.
Use this topic when interpretive doctrine must become organizational governance: measurement, visibility audits, baselines, and publication discipline.
Lane: Governance boundaries and decision risk (11 active notes)
Start here
This hub reduces collisions between closely related notes and clarifies where to go next when the problem shifts to a different layer.
- Published baseline (phase 0): what observation shows, and what it does not prove (2026-02-26)
- Observation vs attestation: why Q-Ledger is intentionally weak (2026-02-26)
- Making governance measurable: Q-Metrics (2026-02-26)
- Runbook & ops: from logs to snapshots, without leakage or drift (2026-02-26)
- Measuring invisibilization: how to audit presence in AI answers without fooling yourself (2026-02-26)
Adjacent topics
- Interpretive risk — Systemic risks: false certainty, plausible errors, economic and reputational damage.
- Exogenous governance — Arbitration across sources, jurisdictions, standards, and external authorities. Includes public doctrine references for External Authority Control (EAC).
- Agentic era — Agents, delegation, non-answers, safety, and proxy governance.
Browse the full topic corpus
The filter below loads the corpus from /.well-known/doctrine-blog.json while preserving doctrinal topic order.
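An order-preserving filter like the one described can be sketched as follows. This is a minimal illustration only: the payload schema (`topics`, `order`, `notes`, `title`, `date` fields) is an assumption for the sketch, not the published format of /.well-known/doctrine-blog.json.

```python
import json

# Hypothetical payload mimicking /.well-known/doctrine-blog.json;
# the real schema may differ.
PAYLOAD = json.loads("""
{
  "topics": [
    {"slug": "ai-governance", "order": 1,
     "notes": [{"title": "Making governance measurable: Q-Metrics",
                "date": "2026-02-26"}]},
    {"slug": "interpretive-risk", "order": 2,
     "notes": [{"title": "Systemic risks", "date": "2026-02-26"}]}
  ]
}
""")

def corpus_in_doctrinal_order(payload, query=""):
    """Return (topic slug, note title) pairs matching a substring
    query, preserving the declared doctrinal topic order."""
    q = query.lower()
    results = []
    # Sort topics by their declared order, never by match relevance,
    # so filtering does not reshuffle the corpus.
    for topic in sorted(payload["topics"], key=lambda t: t["order"]):
        for note in topic["notes"]:
            if q in note["title"].lower():
                results.append((topic["slug"], note["title"]))
    return results

print(corpus_in_doctrinal_order(PAYLOAD, "governance"))
```

The key design point is that filtering narrows the list but never reorders it; relevance ranking would defeat the "doctrinal topic order" guarantee.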