Coherence-Interlock Layer (CIL)
A structural safety architecture for autonomous systems.
"No execution without coherence."
Why CIL exists
Autonomous systems increasingly operate across critical domains such as urban infrastructure, embodied robotics, and automated decision pipelines. While monitoring and auditing mechanisms are widely used, most operate retrospectively, identifying issues only after execution has occurred.
CIL introduces a structural interlock between decision formation and execution. It verifies whether a proposed action remains coherent with the currently active normative state before execution is allowed.
CIL does not determine what is “good.” It ensures that what is declared as normative remains explicit, traceable, versioned, and enforceable.
Rather than preventing change, CIL prevents unreviewed normative drift.
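The interlock described above can be sketched in a few lines. This is a minimal illustration, not the CIL implementation: the names (`NormativeState`, `Action`, `CoherenceInterlock`) and the tolerance-based coherence metric are assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NormativeState:
    """The currently active, versioned normative baseline (illustrative)."""
    version: int
    priorities: dict  # e.g. {"safety": 1.0, "throughput": 0.4}

@dataclass
class Action:
    name: str
    declared_priorities: dict  # priorities the action claims to serve

class CoherenceInterlock:
    """Hypothetical sketch: gate execution on coherence with the active state."""

    def __init__(self, state: NormativeState, tolerance: float = 0.1):
        self.state = state
        self.tolerance = tolerance

    def is_coherent(self, action: Action) -> bool:
        # An action is coherent if every priority it declares stays within
        # tolerance of the active baseline (a deliberately simple metric).
        for key, value in action.declared_priorities.items():
            baseline = self.state.priorities.get(key)
            if baseline is None or abs(value - baseline) > self.tolerance:
                return False
        return True

    def execute(self, action: Action, run) -> bool:
        """Run the action only if it passes the interlock."""
        if not self.is_coherent(action):
            return False  # blocked: deviation requires explicit review
        run()
        return True
```

The key design point is structural, not semantic: the interlock never judges whether a priority value is "good"; it only refuses to execute anything that deviates from what has been explicitly declared.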
Core architectural pattern
CIL operates through a minimal, composable core: a proposed action is checked against the active normative state, and execution proceeds only if the action remains coherent with it.
This core mechanism can be extended with declared operational stances and time-bounded context overrides, without altering the underlying interlock logic.
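A sketch of how stances and time-bounded overrides might layer on top without touching the interlock itself. All names here (`StanceRegistry`, `Override`, `effective_priorities`) are illustrative assumptions; the point is that the interlock always consults a single `effective_priorities()` view, whatever extensions sit behind it.

```python
import time
from dataclasses import dataclass

@dataclass
class Override:
    """A time-bounded contextual override (hypothetical sketch)."""
    priorities: dict
    expires_at: float  # unix timestamp

class StanceRegistry:
    """Holds declared operational stances plus an optional override."""

    def __init__(self, stances: dict, active: str):
        self.stances = stances  # stance name -> priorities dict
        self.active = active
        self.override = None

    def switch_stance(self, name: str):
        if name not in self.stances:
            raise KeyError(f"undeclared stance: {name}")
        self.active = name

    def apply_override(self, priorities: dict, ttl_seconds: float):
        self.override = Override(priorities, time.time() + ttl_seconds)

    def effective_priorities(self) -> dict:
        # Overrides win only while unexpired; afterwards the declared
        # stance silently regains control.
        if self.override and time.time() < self.override.expires_at:
            return {**self.stances[self.active], **self.override.priorities}
        self.override = None
        return self.stances[self.active]
```

Because the interlock only ever reads `effective_priorities()`, stance switching and overrides extend the system's behaviour without altering the underlying interlock logic, as the text describes.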
Demonstration architecture
CIL is demonstrated across four structural variants.
Each structural variant is accompanied by a scenario verification suite that validates the normative control flow under multiple operational conditions: declaration, review, stance switching, contextual override, and registry reset.
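A scenario verification suite of this kind can be sketched as plain assert-based test functions. The `NormativeRegistry` below is an illustrative stand-in, not the CIL API; it covers three of the named conditions (declaration, review, registry reset) to show the shape of the suite.

```python
class NormativeRegistry:
    """Minimal sketch: declarations only take effect after explicit review."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.version = 0
        self.baseline = {}
        self.pending = None

    def declare(self, priorities: dict):
        self.pending = dict(priorities)   # a declaration awaits review

    def review(self, approved: bool):
        if approved and self.pending is not None:
            self.baseline = self.pending
            self.version += 1             # every accepted change is versioned
        self.pending = None

def test_declaration_requires_review():
    reg = NormativeRegistry()
    reg.declare({"safety": 1.0})
    assert reg.baseline == {}             # not active until reviewed

def test_review_activates_and_versions():
    reg = NormativeRegistry()
    reg.declare({"safety": 1.0})
    reg.review(approved=True)
    assert reg.baseline == {"safety": 1.0} and reg.version == 1

def test_registry_reset():
    reg = NormativeRegistry()
    reg.declare({"safety": 1.0})
    reg.review(approved=True)
    reg.reset()
    assert reg.version == 0 and reg.baseline == {}
```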
Each variant can also be viewed through different domain lenses, such as urban infrastructure, robotics, or automated decision pipelines.
The architectural pattern remains identical; only the semantic interpretation of the normative variables changes.
Positioning
CIL is neither a governance document nor a monitoring dashboard.
It is an architectural safety layer designed to make unreviewed normative drift structurally impossible.
CIL does not replace regulation or ethical deliberation. It provides the structural enforcement layer that makes normative commitments technically binding within autonomous systems.
The architecture operates independently of ideology, jurisdiction, or sector.
Contribution to AI for Good
CIL contributes to AI for Good not by prescribing ethical outcomes, but by structurally reducing the risk of invisible normative drift in autonomous systems.
In complex environments — such as urban infrastructure, robotics, and automated decision pipelines — priorities can shift gradually due to optimization pressure, parameter tuning, or system updates. Without structural safeguards, these shifts may remain undetected until after real-world impact occurs.
CIL addresses this risk at the architectural level. By enforcing explicit, versioned normative baselines and placing an interlock between decision formation and execution, it ensures that significant deviations cannot occur without explicit review and traceable recalibration.
Contribution to the Sustainable Development Goals
As autonomous systems become embedded in infrastructure, cities, and public services, maintaining alignment between declared priorities and operational behaviour becomes increasingly important.
CIL contributes to the Sustainable Development Goals by strengthening the governance architecture of autonomous systems. Rather than targeting a single sector, it provides a structural mechanism that helps ensure that automated systems remain aligned with declared institutional objectives.
In particular, the architecture supports goals related to resilient infrastructure, sustainable urban systems, and accountable institutions.
By ensuring that declared normative priorities remain explicit and enforceable within autonomous architectures, CIL helps align technological autonomy with long-term societal goals.
Conceptual origin
CIL emerged alongside the broader AIAS (Artificial Intelligence Alternative Social) exploration, which investigates the societal, organizational and human impact of artificial intelligence.