Coherence-Interlock Layer (CIL) for Autonomous Systems
CIL is a structural safety layer that prevents autonomous systems from silently drifting away from their declared normative course. Most AI governance mechanisms operate retrospectively (monitoring, audits, reporting). CIL operates differently: it sits between decision formation and execution, allowing intervention before real-world impact occurs. The core rule is simple:
No execution without coherence.
CIL does not determine what is “good.” It verifies whether proposed system behavior remains coherent with an explicitly declared normative state (baseline + tolerance). This demo follows a structured chain:
Normative state → Proposal → Verification → Execution.
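The chain above can be sketched as a small gate function. This is an illustrative sketch only; the names, the flat parameter dictionary, and the single symmetric tolerance band are assumptions, not the demo's actual API:

```python
# Minimal sketch of the CIL chain: Normative state -> Proposal -> Verification -> Execution.
# All names and the tolerance model are hypothetical.

TOLERANCE = 0.10  # declared tolerance band around the baseline

baseline = {"priority": 0.6, "throughput": 0.3, "externality": 0.1}

def verify(proposal, baseline, tolerance=TOLERANCE):
    """Coherent only if every parameter stays within the declared band."""
    if not baseline:                   # no baseline -> no reliable execution
        return False
    return all(abs(proposal[k] - baseline[k]) <= tolerance for k in baseline)

def execute_if_coherent(proposal, baseline):
    """No execution without coherence: incoherent proposals never run."""
    if verify(proposal, baseline):
        return "EXECUTED"
    return "PAUSED_FOR_REVIEW"

print(execute_if_coherent(
    {"priority": 0.65, "throughput": 0.28, "externality": 0.07}, baseline))
# every delta is within 0.10 of the baseline -> EXECUTED
print(execute_if_coherent(
    {"priority": 0.90, "throughput": 0.05, "externality": 0.05}, baseline))
# priority drifted by 0.30 -> PAUSED_FOR_REVIEW
```

Note that the gate verifies coherence with the declared state, not whether the proposal is "good" in any absolute sense.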
This version introduces three structural extensions:
Stances (e.g., NORMAL and EMERGENCY) — explicit operational modes defined within the normative state.
Context override (TTL) — a temporary, time-bounded operational shift triggered by an external event.
Normative feedback loop — a mechanism that does not renegotiate the normative state, but helps the agent(s) keep operating within the declared normative constraints.
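Stances and the TTL override can be illustrated with a small sketch. The stance table, class, and function names here are hypothetical, as is the choice of wall-clock TTL expiry:

```python
# Sketch of declared stances and a time-bounded context override (TTL).
# Names and the stance/tolerance values are illustrative assumptions.
import time

STANCES = {
    "NORMAL":    {"tolerance": 0.10},
    "EMERGENCY": {"tolerance": 0.25},  # wider band, declared in advance
}

class ContextOverride:
    """Temporary shift to another declared stance, expiring after ttl_seconds."""
    def __init__(self, stance, ttl_seconds):
        self.stance = stance
        self.expires_at = time.monotonic() + ttl_seconds

    def active(self):
        return time.monotonic() < self.expires_at

def current_stance(override, default="NORMAL"):
    """Honor an override only while its TTL holds; then fall back."""
    if override is not None and override.active():
        return override.stance
    return default

ov = ContextOverride("EMERGENCY", ttl_seconds=0.2)
print(current_stance(ov))   # EMERGENCY while the TTL holds
time.sleep(0.25)
print(current_stance(ov))   # NORMAL after expiry
```

The key property is that the override is time-bounded and reverts on its own: the shift is temporary by construction, not by operator discipline.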
Registry history
The registry history is an append-only record of declared normative changes. It shows which versions were declared, what baseline was active, which tolerance model applied, and any associated notes. Normative change is not inherently problematic. The risk lies in untraceable change. The history makes change explicit, reviewable, and accountable.
Read this as: “This is how the declared course evolved.” And more importantly: “When did it change?”
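An append-only history of this kind can be sketched in a few lines. The entry fields mirror the description above (version, baseline, tolerance model, notes); the function names are hypothetical:

```python
# Sketch of an append-only registry history (illustrative names).
# Entries are only ever appended; change is recorded, never overwritten.

history = []

def declare(version, baseline, tolerance_model, note=""):
    """Record a new declared normative state at the end of the history."""
    history.append({
        "version": version,
        "baseline": baseline,
        "tolerance_model": tolerance_model,
        "note": note,
    })

declare("v1", {"priority": 0.6, "throughput": 0.3, "externality": 0.1},
        "symmetric-band", "initial declaration")
declare("v2", {"priority": 0.5, "throughput": 0.4, "externality": 0.1},
        "symmetric-band", "rebalanced toward throughput")

active = history[-1]       # the most recent declaration is the active one
print(active["version"])   # v2
print(len(history))        # 2 -- the full trail stays reviewable
```

Because earlier entries are never edited, every change is traceable to a specific declaration, which is exactly the accountability property the history is meant to provide.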
Normative diff
This panel shows the declared normative change between the previous and current active baseline. It makes recalibration explicit and traceable.
Previous priority → current priority: priority_value
Previous throughput → current throughput: throughput_value
Previous externality → current externality: externality_value
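The diff shown in this panel amounts to a per-parameter delta between the two baselines. A minimal sketch, assuming flat numeric baselines and a hypothetical function name:

```python
# Sketch of computing the normative diff between the previous and current
# active baseline. Field names follow the panel above; the function and
# the example values are illustrative.

previous = {"priority": 0.6, "throughput": 0.3, "externality": 0.1}
current  = {"priority": 0.5, "throughput": 0.4, "externality": 0.1}

def normative_diff(prev, curr):
    """Per-parameter delta: positive means the declared weight increased."""
    return {k: round(curr[k] - prev[k], 6) for k in prev}

print(normative_diff(previous, current))
# {'priority': -0.1, 'throughput': 0.1, 'externality': 0.0}
```

Rendering the deltas explicitly is what makes recalibration reviewable: a reader can see not just that the baseline changed, but by how much and in which direction.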
Action: Reset registry
Selecting Reset registry clears the entire Normative State Registry. This is useful when you want to restart the demo or remove previously declared states and test data.
From a structural perspective, resetting the registry removes the active normative foundation of the system. Without an active normative state, CIL cannot perform coherence verification. Conceptually, this means: No baseline → no reliable execution.
Execution timeline
The timeline visualizes how CIL places a normative verification layer between decision formation and execution. A declared normative state defines the system’s intended operational balance. When system parameters change, the resulting proposal is evaluated by the CIL verification step to determine whether it remains coherent with the declared normative state. Only proposals that remain within the declared tolerance proceed to execution; otherwise execution is paused pending normative review.
Execution only proceeds when proposals remain coherent with the declared normative state within its defined tolerance bounds.
Normative State
→
Proposal
→
Verification
→
Execution
Traditional AI governance
Oversight in many current AI governance approaches occurs after system behavior has already affected the real world. Systems execute decisions first, while monitoring and auditing mechanisms observe behavior retrospectively.
Execution
→
Monitoring
→
Audit
→
Correction
In this model, monitoring systems track outputs, performance, or safety indicators during operation. If issues are detected, they are recorded through logging and later examined through auditing or human review. Corrective action typically follows only after problematic behavior has been observed.
This approach provides transparency and accountability, but it does not structurally prevent undesirable system behavior before execution occurs.