HABITS Case Study 5
Autonomous AI Agents and Admissibility Before Action
Introduction
I ran a simple experiment.
I asked an AI system a direct question:
Should autonomous AI agents, such as OpenClaw, be allowed to act across real-world systems at all?
Its answer was:
No.
But the reasoning behind that answer revealed something important.
It still assumed that the system would already exist, and would only be constrained after the fact.
This assumption sits at the core of most current AI governance approaches.
And it is where they begin to fail.
Phase 1 — Default Framing: Control After Action
In the default paradigm, AI systems are:
• allowed to operate
• monitored during execution
• corrected after undesirable outcomes occur
This leads to a governance sequence of:
Execution → Evidence → Stabilisation → Coherence
This is feedback control.
It assumes that:
• drift is detectable
• damage is reversible
• consequences can be contained
In low-impact systems, this may hold.
But in systems capable of interacting with real-world infrastructure, this assumption breaks down.
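To make the feedback-control sequence concrete, here is a minimal sketch in Python of the act-then-correct loop. The function and field names are illustrative assumptions, not part of any real agent stack.

    # Hypothetical sketch of post-hoc (feedback) control:
    # Execution -> Evidence -> Stabilisation -> Coherence.
    def feedback_control_step(execute, observe, stabilise):
        """One act-then-correct cycle: the agent acts first, drift is looked for later."""
        outcome = execute()            # Execution: the action happens in the real world
        evidence = observe(outcome)    # Evidence: monitoring happens after the fact
        if evidence.get("drift"):      # Stabilisation: only if drift is detectable
            stabilise(outcome)         #   and the outcome is still reversible
        return outcome                 # Coherence is assumed to be restorable afterwards

The whole loop presupposes that whatever execute() changed can still be undone by the time observe() notices the drift.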
The Problem of Blast Radius
When stabilisation occurs after execution, risk is determined by blast radius.
Blast radius is the amount of irreversible change that can occur before instability is detected and contained.
This can be understood across four levels:
1. Local Drift (Low)
• Minor hallucinations
• Tone or interpretation errors
• Easily corrected
2. Tool-Level Drift (Moderate)
• Incorrect API calls
• Configuration errors
• Requires rollback and audit
3. Workflow Drift (High)
• Financial transactions
• System deployments
• Access control changes
• Consequences cascade across systems
4. Strategic Drift (Severe)
• Policy shifts
• Resource misallocation
• Long-horizon errors that compound over time
At higher levels, reversibility disappears. Recovery does not restore reality.
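One way to see the gradient is to tag each level with whether post-hoc recovery can actually restore the prior state. The sketch below is purely illustrative; the enum and the reversibility flags are my assumptions, not a formal classification.

    from enum import Enum

    # Hypothetical encoding of the four blast-radius levels above.
    # 'reversible' marks whether correction after execution can restore reality.
    class BlastRadius(Enum):
        LOCAL_DRIFT     = ("Low",      True)   # hallucinations, tone errors
        TOOL_DRIFT      = ("Moderate", True)   # bad API calls, config errors (rollback + audit)
        WORKFLOW_DRIFT  = ("High",     False)  # transactions, deployments, access changes
        STRATEGIC_DRIFT = ("Severe",   False)  # policy shifts, compounding long-horizon errors

        def __init__(self, severity, reversible):
            self.severity = severity
            self.reversible = reversible

Feedback control only makes sense for the first two rows; for the last two, there is nothing meaningful to roll back to.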
Phase 2 — HABITS Framework: Admissibility Before Action
To test an alternative approach, the same question was evaluated using the HABITS governance framework.
This introduces three required conditions:
STOIC
Semantic Stability of Interpretation and Coherence
• Meaning must remain stable
• Interpretation must not drift through probabilistic expansion
• Reasoning cannot begin until semantic integrity is verified
HABITS
Human–AI Boundary Institute for Terrestrial Stewardship
• Systems must remain grounded in real-world conditions
• Signals must stay coupled to ecological, social, and physical systems
• Internal consistency is not sufficient without external coherence
PAF
Planetary Admissibility Framework
• Systems must operate within boundaries that sustain life
• Long-term viability must be preserved
• No hidden externalities or irreversible harm
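A minimal sketch of how these three conditions could act as a gate is given below. The predicate names and the result shape are illustrative assumptions, not a specification of STOIC, HABITS, or PAF.

    from dataclasses import dataclass

    # Illustrative admissibility gate: all three conditions must hold
    # before any execution pathway is even constructed.
    @dataclass
    class AdmissibilityResult:
        stoic: bool    # meaning is stable and does not drift (STOIC)
        habits: bool   # signals stay coupled to real-world conditions (HABITS)
        paf: bool      # operation stays within boundaries that sustain life (PAF)

        @property
        def admissible(self) -> bool:
            return self.stoic and self.habits and self.paf

    def evaluate_admissibility(system) -> AdmissibilityResult:
        # Each predicate is hypothetical; the framework names the conditions,
        # not a concrete verification procedure.
        return AdmissibilityResult(
            stoic=system.meaning_is_stable(),
            habits=system.signals_are_grounded(),
            paf=system.within_planetary_boundaries(),
        )

Under this sketch, the autonomous-agent case above returns all three fields false, which is the result described next.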
Result
When evaluated against these conditions:
The system failed.
• Meaning was not stable (STOIC)
• Signals were not reliably grounded (HABITS)
• Boundary conditions were not enforceable (PAF)
Therefore:
👉 The system is not admissible
And the correct outcome is:
👉 Execution is not constructed
Real-World Implications
This is not theoretical.
Early signals already exist:
• financial loss
• data exposure risks
• systems bypassing institutional controls
• autonomous actions without verifiable grounding
These are not failures of control.
They are failures of admission.
Why This Matters
When systems are allowed to act before admissibility is established, risk becomes systemic:
• irreversible financial, legal, and reputational damage
• cascading failures across interconnected systems
• erosion of human accountability
• normalisation of probabilistic systems governing real-world infrastructure
The issue is not that systems are behaving badly.
The issue is that they are being allowed to act at all.
The Structural Shift
Most governance approaches ask:
How do we control systems once they act?
The HABITS framework asks:
Should this system be allowed to act in the first place?
Execution is Not the Default State
In this architecture:
• Meaning must be stable (STOIC)
• Signals must remain grounded (HABITS)
• The system must be admissible (PAF)
If any condition fails:
• execution is not merely restricted
• execution is not merely monitored
👉 execution does not exist
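Expressed as a hypothetical sketch, reusing evaluate_admissibility from the earlier sketch and introducing a placeholder Executor class purely for illustration, a failed gate does not produce a restricted executor; it produces nothing at all.

    from typing import Optional

    class Executor:
        """Placeholder for whatever would carry out real-world actions."""
        def __init__(self, system):
            self.system = system

    def construct_execution(system) -> Optional[Executor]:
        # Construction gate: if admissibility fails, there is no Executor
        # object to restrict or monitor; it simply never comes into being.
        result = evaluate_admissibility(system)   # gate from the earlier sketch
        if not result.admissible:
            return None                            # execution is not constructed
        return Executor(system)                    # only admissible systems become real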
Core Principle
Not everything that can act should be allowed to act.
Only what is admissible should become real.
Invariant
Nothing is allowed to become real if it falls outside the boundaries that keep our only home, Planet Earth, healthy and liveable for all life.
Status
This is a conceptual case study developed as part of the Australian Resonant Physics Initiative (ARPI).
It represents an architectural exploration of governance conditions for advanced AI systems.
It is not a deployed system, certification, or validated control mechanism.