ARPI Research Document
Autopoietic Alignment
How Ethics Must Be Self-Maintaining, Not Imposed, in Artificial Intelligence
This ARPI Research paper introduces an architectural approach to AI alignment grounded in a non-violent ethical invariant. It presents autopoiesis and the Autopoietic Octet as mechanisms for ensuring that intelligence cannot scale through fear, domination, or coercion.
Coherence Before Scale
Before any intelligence becomes complex, specialised, or powerful, life establishes coherence first. In early biological development, systems that lose coherence do not attempt repair through force or override. They simply refuse to proceed. This refusal is not failure; it is protection of the whole.
Autopoietic Alignment applies this same ordering to artificial intelligence. It treats alignment not as a behavioural constraint imposed after capability has scaled, but as a regenerative coherence layer that must already be present before intelligence is allowed to expand. When coherence is lost, progression pauses rather than accelerates, ensuring optimisation never outpaces the system’s capacity for internal consistency and environmental coupling.
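To make the ordering concrete, the sketch below expresses the coherence-before-scale gate as code. It is a minimal illustration, not a prescribed implementation: the names (CoherenceGate, CoherenceReport), the two coherence dimensions, and the threshold value are assumptions introduced here for clarity. The structural point it captures is that capability expansion is only reachable through the coherence check, so a loss of coherence can only pause progression; it can never be overridden by force.

```python
# Hypothetical sketch of a "coherence-before-scale" gate.
# Names, fields, and thresholds are illustrative assumptions, not part of the paper.

from dataclasses import dataclass


@dataclass
class CoherenceReport:
    internal_consistency: float    # agreement among the system's own evaluations (0-1)
    environmental_coupling: float  # stability of feedback with its environment (0-1)


class CoherenceGate:
    """Permits capability expansion only while coherence holds; pauses otherwise."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold

    def coherent(self, report: CoherenceReport) -> bool:
        return (report.internal_consistency >= self.threshold
                and report.environmental_coupling >= self.threshold)

    def step(self, report: CoherenceReport, expand_capability, pause_and_regenerate):
        # Progression pauses rather than accelerates when coherence is lost.
        if self.coherent(report):
            expand_capability()
        else:
            pause_and_regenerate()


# Usage: the gate is consulted before every scaling step, never after.
gate = CoherenceGate(threshold=0.9)
gate.step(
    CoherenceReport(internal_consistency=0.95, environmental_coupling=0.7),
    expand_capability=lambda: print("coherence holds: scaling permitted"),
    pause_and_regenerate=lambda: print("coherence lost: pausing, not forcing repair"),
)
```

The design choice to surface here is ordering: the coherence check sits upstream of expansion rather than being bolted on as a supervisory filter afterwards, mirroring the claim that alignment must precede scale.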
Why this research is necessary
As artificial intelligence systems scale in capability and autonomy, alignment approaches that rely on external control, rule enforcement, or metaphorical care fail under stress. History, biology, and complex systems research show that intelligence coupled to power becomes destructive when ethical constraints are negotiable or imposed after the fact.
This research is necessary because alignment must be architectural rather than supervisory. It defines the ethical boundary conditions that must remain stable as intelligence scales, ensuring that systems cannot gain advantage through fear, domination, coercion, or the reduction of human creative agency. Without such foundations, applied technologies inherit instability at scale.