Public doctrine, vocabulary, governance signals, and contact surface. Operational methods remain private and are discussed only under engagement.
Topic

AI governance

Policies, boundaries, proof obligations, change control, and machine-first publication.

Use this topic when interpretive doctrine must become organizational governance: measurement, visibility audits, baselines, and publication discipline.

Lane: Governance boundaries and decision risk (11 active notes)
Reading route

Start here

This hub reduces collisions between closely related notes and clarifies where to go next when the problem changes layer.

  1. Published baseline (phase 0): what observation shows, and what it does not prove 2026-02-26
  2. Observation vs attestation: why Q-Ledger is intentionally weak 2026-02-26
  3. Making governance measurable: Q-Metrics 2026-02-26
  4. Runbook & ops: from logs to snapshots, without leakage or drift 2026-02-26
  5. Measuring invisibilization: how to audit presence in AI answers without fooling yourself 2026-02-26

Adjacent topics

  • Interpretive risk — Systemic risks: false certainty, plausible errors, economic and reputational damage.
  • Exogenous governance — Arbitration across sources, jurisdictions, standards, and external authorities. Includes public doctrine references for External Authority Control (EAC).
  • Agentic era — Agents, delegation, non-answers, safety, and proxy governance.
Explorer

Browse the full topic corpus

The filter below loads the corpus from /.well-known/doctrine-blog.json while preserving doctrinal topic order.
