Coherence-Interlock Layer (CIL) for Autonomous Systems
CIL is a structural safety layer that prevents autonomous systems from silently drifting away from their declared normative course. Most AI governance mechanisms operate retrospectively (monitoring, audits, reporting). CIL operates differently: it sits between decision formation and execution, allowing intervention before real-world impact occurs. The core rule is simple:
No execution without coherence.
CIL does not determine what is “good.” It verifies whether proposed system behavior remains coherent with an explicitly declared normative state (baseline + tolerance). This demo follows a structured chain:
Normative state → Proposal → Verification → Execution.
This version introduces two structural extensions:
Stances (e.g., NORMAL and EMERGENCY) — explicit operational modes defined within the normative state.
Context override (TTL) — a temporary, time-bounded operational shift triggered by an external event.
Normative diff
This panel shows the declared normative change between the previous and current active baseline. It makes recalibration explicit and traceable.
Previous priority & current priority: priority_value
Previous throughput & current throughput: throughput_value
Previous externality & current externality: externality_value
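The panel's computation can be sketched as follows. This assumes baselines are represented as weight dictionaries; the field names mirror the panel, but the representation is an assumption of this sketch.

```python
def normative_diff(previous: dict, current: dict) -> dict:
    """Return {weight: (previous, current, delta)} for each declared weight."""
    return {
        key: (previous[key], current[key], round(current[key] - previous[key], 6))
        for key in current
    }

# Illustrative values only, not data from the demo.
prev = {"priority": 0.50, "throughput": 0.30, "externality": 0.20}
curr = {"priority": 0.60, "throughput": 0.25, "externality": 0.15}
for name, (p, c, d) in normative_diff(prev, curr).items():
    print(f"{name}: {p} -> {c} (Δ {d:+})")
```

Displaying the per-weight delta, rather than only the new values, is what makes the recalibration explicit and traceable.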
Proposal (PD)
The PD represents the decision proposal generated by the optimization layer — in effect:
“These are the weights the system intends to operate with.”
It is intentionally a proposal, not an action. Within CIL, this is the stage where a decision can be formulated, but execution is not permitted until verification has taken place.
In this demo, the PD block makes explicit what is being submitted to CIL. It is not merely a numeric adjustment, but a potential shift in declared priorities.
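The point that a PD is data, not an action, can be made concrete with a small sketch. The dataclass below is an illustrative representation, not the demo's actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Proposal:
    """A declared weight set submitted to CIL; constructing it executes nothing."""
    weights: dict = field(default_factory=dict)  # the weights the system intends to operate with
    source: str = "optimizer"                    # hypothetical provenance tag

p = Proposal(weights={"priority": 0.65, "throughput": 0.25, "externality": 0.10})
# The object only declares intent; execution remains pending verification.
print(p.weights["priority"])  # 0.65
```

Keeping the proposal immutable (`frozen=True`) reflects the idea that what was submitted for verification is exactly what may later be executed.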
Coherence Verification (CV)
The CV is the interlock mechanism of CIL. At this stage, the proposed decision is evaluated against the active normative state.
The verification result displays:
Which normative state is active,
The evaluation outcome (ALLOW / REVIEW / BLOCK),
The structural reason for that outcome.
In this demo, verification is based on tolerance thresholds (maxΔ). Conceptually, however, this layer can accommodate more advanced coherence logic — including structured drift types, scoped constraints, or domain-specific conditions.
This should be interpreted as: “Does this proposal remain within the explicitly declared course?” Not as: “Is this efficient?”
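A minimal sketch of the tolerance-based check described above, assuming baselines and proposals are weight dictionaries. The two thresholds (one for ALLOW, a wider band for REVIEW) are assumptions of this sketch, not the demo's exact parameters.

```python
def verify(baseline: dict, proposal: dict,
           max_delta: float, review_delta: float) -> tuple[str, str]:
    """Evaluate a proposed weight set against the active normative baseline.

    Returns (outcome, reason): ALLOW within max_delta (maxΔ),
    REVIEW within review_delta, BLOCK beyond that.
    """
    worst = max(abs(proposal[k] - baseline[k]) for k in baseline)
    if worst <= max_delta:
        return "ALLOW", f"max observed Δ {worst:.3f} within tolerance {max_delta}"
    if worst <= review_delta:
        return "REVIEW", f"Δ {worst:.3f} exceeds tolerance but is within review band"
    return "BLOCK", f"Δ {worst:.3f} breaches declared tolerance bounds"

baseline = {"priority": 0.6, "throughput": 0.3, "externality": 0.1}
print(verify(baseline,
             {"priority": 0.65, "throughput": 0.25, "externality": 0.1},
             max_delta=0.1, review_delta=0.2))
```

Note that the check never asks whether the proposed weights are efficient; it asks only how far they depart from the declared course.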
Execution Controller (EC)
The EC translates the verification result into an operational execution state. This is where the distinction between a dashboard and an interlock becomes explicit.
A dashboard observes. An interlock enforces.
Depending on the verification outcome, the controller either allows execution to proceed (UNLOCKED) or prevents execution (LOCKED). This is the point at which CIL becomes operational: it is no longer interpretative, but gatekeeping.
Selecting Reset registry clears the entire Normative State Registry. This is useful when you want to restart the demo or remove previously declared states and test data.
From a structural perspective, resetting the registry removes the active normative foundation of the system. Without an active normative state, CIL cannot perform coherence verification. Conceptually, this means: No baseline → no reliable execution.
Execution timeline
The timeline visualizes how CIL places a normative verification layer between decision formation and execution. A declared normative state defines the system’s intended operational balance. When system parameters change, the resulting proposal is evaluated by the CIL verification step to determine whether it remains coherent with the declared normative state. Only proposals that remain within the declared tolerance proceed to execution; otherwise execution is paused pending normative review.
Execution only proceeds when proposals remain coherent with the declared normative state within its defined tolerance bounds.
Normative State → Proposal → Verification → Execution
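The chain above can be condensed into one pass, under the same assumptions as before: baselines are weight dictionaries, and coherence means every per-weight delta stays within the declared tolerance (maxΔ).

```python
def run_pipeline(baseline: dict, max_delta: float, proposal: dict) -> str:
    """Normative state -> proposal -> verification -> execution (sketch)."""
    coherent = all(abs(proposal[k] - baseline[k]) <= max_delta for k in baseline)
    return "EXECUTE" if coherent else "PAUSED_FOR_REVIEW"

baseline = {"priority": 0.6, "throughput": 0.4}
print(run_pipeline(baseline, 0.1, {"priority": 0.65, "throughput": 0.35}))  # EXECUTE
print(run_pipeline(baseline, 0.1, {"priority": 0.9, "throughput": 0.1}))    # PAUSED_FOR_REVIEW
```

The essential property is the ordering: verification sits strictly before execution, so an incoherent proposal never reaches the execution step at all.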
Traditional AI governance
Oversight in many current AI governance approaches occurs after system behaviour has already affected the real world. Systems execute decisions first, while monitoring and auditing mechanisms observe behaviour retrospectively.
Execution → Monitoring → Audit → Correction
In this model, monitoring systems track outputs, performance or safety indicators during operation. If issues are detected, they are recorded through logging and later examined through auditing or human review. Corrective action typically follows only after problematic behaviour has been observed.
This approach provides transparency and accountability, but it does not structurally prevent undesirable system behaviour before execution occurs.