Coherence-Interlock Layer (CIL) for Autonomous Systems
CIL is a structural safety layer that prevents autonomous systems from silently drifting away from their declared normative course. Most AI governance mechanisms operate retrospectively (monitoring, audits, reporting). CIL operates differently: it sits between decision formation and execution, allowing intervention before real-world impact occurs. The core rule is simple:
No execution without coherence.
CIL does not determine what is “good.” It verifies whether proposed system behavior remains coherent with an explicitly declared normative state (baseline + tolerance). This demo follows a structured chain:
Normative state → Proposal → Verification → Execution.
This version demonstrates the core CIL interlock mechanism in its minimal form.
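The core interlock can be sketched in a few lines. This is an illustrative model, not the demo's actual code: the names `NormativeState`, `is_coherent`, and `interlock` are assumptions, and the baseline/tolerance values are made up.

```python
from dataclasses import dataclass

@dataclass
class NormativeState:
    """Declared normative state: a baseline value plus a tolerance band."""
    baseline: float
    tolerance: float

    def is_coherent(self, proposed: float) -> bool:
        # A proposal is coherent if it stays within baseline +/- tolerance.
        return abs(proposed - self.baseline) <= self.tolerance

def interlock(state: NormativeState, proposed: float) -> str:
    """No execution without coherence: verify first, then execute or pause."""
    if state.is_coherent(proposed):
        return "executed"
    return "paused_pending_review"

state = NormativeState(baseline=0.80, tolerance=0.05)
print(interlock(state, 0.83))  # within tolerance -> "executed"
print(interlock(state, 0.92))  # outside tolerance -> "paused_pending_review"
```

The key design point is that `interlock` is the only path to execution, so verification cannot be bypassed.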
Action: Reset registry
Selecting Reset registry clears the entire Normative State Registry. This is useful when you want to restart the demo or remove previously declared states and test data.
From a structural perspective, resetting the registry removes the active normative foundation of the system. Without an active normative state, CIL cannot perform coherence verification. Conceptually, this means: No baseline → no reliable execution.
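The consequence of resetting the registry can be sketched as follows. Modelling the registry as a plain dict and the `blocked_no_normative_state` label are assumptions for illustration, not the demo's storage format.

```python
# Registry with one active normative state (baseline + tolerance).
registry = {"active": {"baseline": 0.80, "tolerance": 0.05}}

def verify(reg: dict, proposed: float) -> str:
    state = reg.get("active")
    if state is None:
        # No baseline -> no reliable execution: verification cannot run.
        return "blocked_no_normative_state"
    if abs(proposed - state["baseline"]) <= state["tolerance"]:
        return "executed"
    return "paused_pending_review"

print(verify(registry, 0.82))  # "executed"
registry.clear()               # Reset registry: removes the normative foundation
print(verify(registry, 0.82))  # "blocked_no_normative_state"
```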
Execution timeline
The timeline visualizes how CIL places a normative verification layer between decision formation and execution. A declared normative state defines the system’s intended operational balance. When system parameters change, the resulting proposal is evaluated by the CIL verification step to determine whether it remains coherent with the declared normative state. Only proposals that remain within the declared tolerance proceed to execution; otherwise execution is paused pending normative review.
Execution only proceeds when proposals remain coherent with the declared normative state within its defined tolerance bounds.
Normative State → Proposal → Verification → Execution
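The timeline can be walked through with a sequence of proposals, each routed either to execution or to normative review. The numbers and stage labels below are illustrative assumptions, not values from the demo.

```python
BASELINE, TOLERANCE = 0.80, 0.05  # illustrative declared normative state

def timeline(proposals):
    """Route each proposal through verification before execution."""
    events = []
    for p in proposals:
        coherent = abs(p - BASELINE) <= TOLERANCE
        stage = "execution" if coherent else "normative_review"
        events.append((p, stage))
    return events

for proposed, stage in timeline([0.81, 0.84, 0.93]):
    print(f"proposal={proposed:.2f} -> {stage}")
# The first two proposals stay within tolerance and execute;
# the third (0.93) exceeds the band and is paused for review.
```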
Traditional AI governance
Oversight in many current AI governance approaches occurs after system behaviour has already affected the real world. Systems execute decisions first, while monitoring and auditing mechanisms observe behaviour retrospectively.
Execution → Monitoring → Audit → Correction
In this model, monitoring systems track outputs, performance or safety indicators during operation. If issues are detected, they are recorded through logging and later examined through auditing or human review. Corrective action typically follows only after problematic behaviour has been observed.
This approach provides transparency and accountability, but it does not structurally prevent undesirable system behaviour before execution occurs.
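The structural difference can be made concrete with a deliberately simplified sketch of the retrospective model. The function name, threshold, and log labels are assumptions for illustration; the point is only the ordering: execution happens unconditionally, and correction follows impact.

```python
def retrospective_pipeline(action_value: float, safety_threshold: float = 0.85):
    """Simplified retrospective governance: execute first, review afterwards."""
    log = [("execute", action_value)]          # execution is never gated
    incident = action_value > safety_threshold  # monitoring observes after the fact
    log.append(("monitor", "incident" if incident else "ok"))
    if incident:
        log.append(("audit", "reviewed"))
        log.append(("correct", "applied"))      # correction only after impact
    return log

print(retrospective_pipeline(0.93))
# The action executes before the incident is ever detected.
```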