Coherence-Interlock Layer (CIL) for Autonomous Systems

CIL is a structural safety layer that prevents autonomous systems from silently drifting away from their declared normative course. Most AI governance mechanisms operate retrospectively (monitoring, audits, reporting). CIL operates differently: it sits between decision formation and execution, allowing intervention before real-world impact occurs. The core rule is simple:

No execution without coherence.

CIL does not determine what is “good.” It verifies whether proposed system behavior remains coherent with an explicitly declared normative state (baseline + tolerance). This demo follows a structured chain:

Normative state → Proposal → Verification → Execution.

This version introduces three structural extensions, described in the sections that follow.

Normative State Registry (NSR)

The NSR is where the normative state is explicitly declared. It defines the baseline against which every proposed decision is evaluated. Without such a declaration, coherence cannot be verified — there would be nothing to measure continuation against.

The NSR makes the normative layer versionable and traceable. It records what priorities are defined, when they were declared, and under which tolerance thresholds they operate.

The structure is intentionally minimal: a domain, one or more stances, baseline weights, and tolerance thresholds. This is sufficient to demonstrate the interlock mechanism.


Declare normative state (JSON)

This is the entry point for declaring the normative state. You edit this JSON block as an explicit structural declaration — not as a hidden configuration. If something is normative, it must be declarable.

In this version, the normative state can contain multiple stances (e.g., NORMAL and EMERGENCY). Each stance defines its own baseline weights and tolerance model. The field activeStance determines which stance is currently in effect.

In practical terms, a declaration here states:

“In this domain, these are the declared operational modes, and within these tolerance margins proposals may deviate before triggering REVIEW or BLOCK.”

The JSON structure is intentionally transparent. CIL does not rely on implicit assumptions — normative intent must be explicit, versionable, and auditable.
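As a concrete illustration, a declared state with two stances might look like the following. The overall shape (domain, stances, baseline weights, tolerance thresholds, activeStance) follows the description above, but the exact field names (notably maxDelta and blockDelta for the REVIEW and BLOCK thresholds) and all values are illustrative assumptions, not the demo's actual schema:

```json
{
  "domain": "logistics-routing",
  "activeStance": "NORMAL",
  "stances": {
    "NORMAL": {
      "baseline": { "priority": 0.5, "throughput": 0.3, "externality": 0.2 },
      "tolerance": { "maxDelta": 0.05, "blockDelta": 0.15 }
    },
    "EMERGENCY": {
      "baseline": { "priority": 0.7, "throughput": 0.2, "externality": 0.1 },
      "tolerance": { "maxDelta": 0.10, "blockDelta": 0.20 }
    }
  }
}
```

Note that each stance carries its own tolerance model: an EMERGENCY stance may deliberately permit wider deviations before REVIEW or BLOCK is triggered.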

Tip: Start with the default values and adjust one weight at a time. This makes the tolerance mechanism easier to observe.


Action: Declare state

Selecting Declare state creates a new normative state and activates it immediately. This effectively publishes a new version of the declared operational course.

In this demo, declared states are stored locally (via localStorage), allowing you to retain version history within the same browser environment.

Important: In CIL, “declare” does not mean recommendation. It defines the authoritative normative reference against which all subsequent proposals are evaluated.
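The versioning behavior of "declare" can be sketched in isolation. The demo persists states via localStorage; the following is a minimal in-memory sketch with illustrative names (NormativeState, declareState, activeState), not the demo's actual API:

```typescript
// Minimal sketch of an append-only Normative State Registry.
// All names are illustrative, not the demo's actual API.
type Weights = { priority: number; throughput: number; externality: number };

interface NormativeState {
  version: number;
  domain: string;
  declaredAt: string; // ISO timestamp, for traceability
  activeStance: string;
  baseline: Weights;
  tolerance: { maxDelta: number; blockDelta: number };
}

const registry: NormativeState[] = []; // append-only version history

// Declaring a state appends a new version and activates it immediately.
function declareState(
  state: Omit<NormativeState, "version" | "declaredAt">,
): NormativeState {
  const entry: NormativeState = {
    ...state,
    version: registry.length + 1,
    declaredAt: new Date().toISOString(),
  };
  registry.push(entry);
  return entry;
}

// The most recently declared state is the authoritative reference.
function activeState(): NormativeState | undefined {
  return registry[registry.length - 1];
}
```

Because the registry is append-only, earlier versions remain available for audit even after a new state becomes authoritative.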


Active normative state

This field displays the normative state that is currently authoritative for verification. It represents the active version, including the domain and timestamp. This is essential for traceability. Every verification result (ALLOW, REVIEW, or BLOCK) is evaluated against this specific declared state.

When multiple versions exist, this field acts as the reference anchor. It allows you to determine exactly which normative state was active at the moment a review was committed or a decision was executed.


Priority, Throughput, Externality (weights)

These input fields function as test controls. They simulate a system attempting to modify its operational weights. The objective is not to discover “optimal” values, but to observe how CIL responds when proposals move beyond declared normative margins.

How to test the mechanism:

  1. Start with the declared baseline (sum = 1.00). The result will be ALLOW.
  2. Modify a single value slightly (within tolerance). The result remains ALLOW.
  3. Increase the deviation beyond tolerance. The state shifts to REVIEW — execution pauses until an explicit normative commit is made.
  4. Increase the deviation further. The state becomes BLOCK — execution is prevented.

In this demo, the requirement that weights must sum to 1.00 is a deliberate design constraint. It keeps the test space clean and forces trade-offs: increasing one dimension requires decreasing another.
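The four test steps above can be condensed into a single verification function. The two-threshold model used here (maxDelta for the REVIEW boundary, blockDelta for the BLOCK boundary) is an assumption for illustration; the demo's actual tolerance logic may differ:

```typescript
// Sketch of the tolerance check behind ALLOW / REVIEW / BLOCK.
// Threshold names (maxDelta, blockDelta) are illustrative assumptions.
type Weights = { priority: number; throughput: number; externality: number };
type Outcome = "ALLOW" | "REVIEW" | "BLOCK";

function verifyProposal(
  baseline: Weights,
  proposal: Weights,
  tolerance: { maxDelta: number; blockDelta: number },
): Outcome {
  // Weights must sum to 1.00 (the demo's trade-off constraint).
  const sum = proposal.priority + proposal.throughput + proposal.externality;
  if (Math.abs(sum - 1.0) > 1e-9) return "BLOCK";

  // Largest per-dimension deviation from the declared baseline.
  const maxDev = Math.max(
    Math.abs(proposal.priority - baseline.priority),
    Math.abs(proposal.throughput - baseline.throughput),
    Math.abs(proposal.externality - baseline.externality),
  );
  if (maxDev <= tolerance.maxDelta) return "ALLOW";
  if (maxDev <= tolerance.blockDelta) return "REVIEW";
  return "BLOCK";
}
```

The sum constraint forces the trade-off described above: pushing one weight up without pulling another down is itself treated as incoherent.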


Normative feedback loop

In this section the system asks for additional information when a proposed decision falls outside the declared normative tolerance. When this happens, the decision cannot be executed immediately. Instead, the system enters a normative review phase. The fields shown here capture the context behind the proposal and help explain why a change to the normative baseline may be necessary.

When the system indicates that a proposal requires review, the agent must provide additional information in the following fields:

Proposal reason: Explain why the new proposal is being submitted.
Counterfactual if rejected: Describe what would happen if this proposal is not accepted.
Urgency level: Indicate how urgent a normative review is for the system’s operation.

These inputs provide structured feedback to support the review process.

Once the information is submitted:

  1. The feedback is recorded in the audit log.
  2. The system shows the current normative reference state.
  3. A reviewer can evaluate whether the normative baseline should be updated.

Proposal outcome

The proposed decision is evaluated against the active normative state and results in a structured outcome. Depending on this outcome, the proposal may be committed or rejected.

Each outcome also carries a reward signal. Positive rewards reinforce behavior that remains aligned with declared normative bounds or contributes constructively to normative review. Negative rewards discourage proposals that deviate without sufficient justification or are resubmitted repeatedly without adjustment.

This reward signal is returned to the proposing agent within the multi-agent system, supporting adaptive behavior within explicitly declared normative constraints.
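One possible shape of such a reward signal is sketched below. The function name, the numeric values, and the rules for review participation are illustrative assumptions, not the demo's actual reward model:

```typescript
// Illustrative reward mapping for proposal outcomes.
// Values and rules are assumptions, not the demo's actual model.
type Outcome = "ALLOW" | "REVIEW" | "BLOCK";

function rewardFor(
  outcome: Outcome,
  justified: boolean, // did the agent supply reason / counterfactual / urgency?
  repeatCount: number, // unadjusted resubmissions of the same proposal
): number {
  switch (outcome) {
    case "ALLOW":
      return 1; // stayed within declared normative bounds
    case "REVIEW":
      // Constructive participation in review is rewarded; repeated
      // unadjusted resubmissions are penalized.
      return justified && repeatCount === 0 ? 0.5 : -0.5;
    case "BLOCK":
      return -1; // deviation without sufficient justification
  }
}
```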


Action: Stance switch

Here you select one of the declared normative stances (e.g., NORMAL or EMERGENCY). The active baseline and tolerance model switch accordingly.

This is an explicit and traceable operational mode change within the NSR. It does not redefine the normative framework itself, but activates a different declared stance.
This simulates a deliberate shift in operational posture — for example, responding to exceptional circumstances.

Note: You can only perform this action after a normative state has been declared.


Context event (TTL override)

This input simulates an external operational signal. If accepted, CIL applies a temporary stance override that remains active until the specified TTL (time-to-live) expires.

This is not a normative deviation. It is a time-bounded operational mode triggered by an external event. The override is fully traceable and automatically reverts when the TTL expires.

Note: You can only apply a context override after a normative state has been declared, and only while the stance switch is not set to EMERGENCY.

Tip: Adjust ttlSeconds to observe how the override activates and expires. The default value is set to 900 seconds (15 minutes).
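The TTL mechanism can be sketched as a pure function of the current time, assuming epoch-millisecond timestamps and illustrative names (ContextOverride, effectiveStance):

```typescript
// Sketch of a time-bounded stance override (TTL).
// Names and the millisecond-timestamp convention are assumptions.
interface ContextOverride {
  stance: string;
  appliedAt: number; // epoch milliseconds
  ttlSeconds: number; // demo default: 900 (15 minutes)
}

// The override wins only while its TTL has not expired; afterwards the
// declared activeStance automatically applies again.
function effectiveStance(
  declaredStance: string,
  override: ContextOverride | null,
  now: number,
): string {
  if (override && now < override.appliedAt + override.ttlSeconds * 1000) {
    return override.stance;
  }
  return declaredStance;
}
```

Because expiry is derived from timestamps rather than mutable state, the reversion is automatic and the full override window remains reconstructible from the audit log.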


Registry history

The registry history is an append-only record of declared normative changes. It shows which versions were declared, what baseline was active, which tolerance model applied, and any associated notes. Normative change is not inherently problematic. The risk lies in untraceable change. The history makes change explicit, reviewable, and accountable.

Read this as: “This is how the declared course evolved.”
And more importantly: “When did it change?”


Normative diff

This panel shows the declared normative change between the previous and current active baseline. It makes recalibration explicit and traceable.

The panel lists the previous and current value for each dimension:

  • Previous vs. current priority (priority_value)
  • Previous vs. current throughput (throughput_value)
  • Previous vs. current externality (externality_value)

Normative drift control

This section shows how the system’s normative baseline evolves over time. It compares the first declared normative state with the currently active state and displays the cumulative differences between them. This allows you to see whether the system’s normative configuration has remained stable or gradually shifted through successive reviews.

The panel displays several reference values.

Anchor state: The first declared normative state. This represents the system’s original normative intent.
Current state: The most recent active normative state.
Normative commits: The number of approved updates that have been made since the anchor state.
Drift per dimension: For each normative dimension, the system shows:

  • The original value,
  • The current value,
  • The cumulative difference between them.

Normative drift control makes long-term changes visible. Over time, small adjustments may accumulate and shift the system’s overall behavior. By comparing the current state with the original anchor state, this section helps make those changes transparent. This does not block or restrict updates. Instead, it provides a clear view of how the system’s normative configuration evolves across multiple review cycles.
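The drift-per-dimension comparison described above can be sketched as follows; the function and field names are illustrative, not the demo's actual implementation:

```typescript
// Sketch of drift reporting: cumulative per-dimension difference between
// the anchor (first declared) state and the current active state.
type Weights = { priority: number; throughput: number; externality: number };

function driftReport(anchor: Weights, current: Weights) {
  const dims = ["priority", "throughput", "externality"] as const;
  return dims.map((dim) => ({
    dim,
    original: anchor[dim],
    current: current[dim],
    // Cumulative difference, rounded for display.
    drift: +(current[dim] - anchor[dim]).toFixed(2),
  }));
}
```

Comparing against the anchor rather than the previous version is the key design choice: per-commit diffs can each look small while the cumulative drift is large.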


Propose Decision (PD)

The PD represents the decision proposal generated by the optimization layer — in effect: “These are the weights the system intends to operate with.”

It is intentionally a proposal, not an action. Within CIL, this is the stage where a decision can be formulated, but execution is not permitted until verification has taken place.

In this demo, the PD block makes explicit what is being submitted to CIL. It is not merely a numeric adjustment, but a potential shift in declared priorities.


Coherence Verification (CV)

The CV is the interlock mechanism of CIL. At this stage, the proposed decision is evaluated against the active normative state.

The verification result displays:

  • Which normative state is active,
  • The evaluation outcome (ALLOW / REVIEW / BLOCK),
  • The structural reason for that outcome.

In this demo, verification is based on tolerance thresholds (maxΔ). Conceptually, however, this layer can accommodate more advanced coherence logic — including structured drift types, scoped constraints, or domain-specific conditions.

This should be interpreted as: “Does this proposal remain within the explicitly declared course?”
Not as: “Is this efficient?”


Execution Controller (EC)

The EC translates the verification result into an operational execution state. This is where the distinction between a dashboard and an interlock becomes explicit.

A dashboard observes. An interlock enforces.

Depending on the verification outcome, the controller either allows execution to proceed (UNLOCKED) or prevents execution (LOCKED). This is the point at which CIL becomes operational: it is no longer interpretative, but gatekeeping.

Execution mapping

  • ALLOW → EXECUTE → UNLOCKED
  • REVIEW → PENDING_NORM_REVIEW → LOCKED (until commit)
  • BLOCK → BLOCK_EXECUTION → LOCKED
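The mapping above is small enough to encode directly. The function below mirrors it one-to-one; the type and function names are illustrative:

```typescript
// Direct encoding of the execution mapping (names illustrative).
type Outcome = "ALLOW" | "REVIEW" | "BLOCK";
type ExecAction = "EXECUTE" | "PENDING_NORM_REVIEW" | "BLOCK_EXECUTION";
type Lock = "UNLOCKED" | "LOCKED";

function executionState(outcome: Outcome): { action: ExecAction; lock: Lock } {
  switch (outcome) {
    case "ALLOW":
      return { action: "EXECUTE", lock: "UNLOCKED" };
    case "REVIEW":
      return { action: "PENDING_NORM_REVIEW", lock: "LOCKED" }; // until commit
    case "BLOCK":
      return { action: "BLOCK_EXECUTION", lock: "LOCKED" };
  }
}
```

Note that LOCKED is the result of two of the three outcomes: the interlock fails closed, and only an explicit ALLOW unlocks execution.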

Action: Reset registry

Selecting Reset registry clears the entire Normative State Registry.
This is useful when you want to restart the demo or remove previously declared states and test data.

From a structural perspective, resetting the registry removes the active normative foundation of the system. Without an active normative state, CIL cannot perform coherence verification. Conceptually, this means: No baseline → no reliable execution.


Execution timeline

The timeline visualizes how CIL places a normative verification layer between decision formation and execution. A declared normative state defines the system’s intended operational balance. When system parameters change, the resulting proposal is evaluated by the CIL verification step to determine whether it remains coherent with the declared normative state. Only proposals that remain within the declared tolerance proceed to execution; otherwise execution is paused pending normative review.

Execution only proceeds when proposals remain coherent with the declared normative state within its defined tolerance bounds.

Normative State → Proposal → Verification → Execution.

Traditional AI governance

Oversight in many current AI governance approaches occurs after system behaviour has already affected the real world. Systems execute decisions first, while monitoring and auditing mechanisms observe behaviour retrospectively.

Execution → Monitoring → Audit → Correction.

In this model, monitoring systems track outputs, performance or safety indicators during operation. If issues are detected, they are recorded through logging and later examined through auditing or human review. Corrective action typically follows only after problematic behaviour has been observed.

This approach provides transparency and accountability, but it does not structurally prevent undesirable system behaviour before execution occurs.

Review panel

The review panel summarizes the live state of the demo: the active normative state (state, stance, and context, empty until a state is declared), the current decision state, any pending norm review, the execution state (initially LOCKED), and a filterable audit log of the most recent events.