The original framing asked: who is in command when autonomous systems act? The answer is not a more disciplined human. The answer is a better-designed system.
Autonomous agents require a control plane that is separate from the acting system, deterministic in its authority, and continuously observable by a human principal. The pattern is not new; it is how modern infrastructure is already governed. It has simply not yet been applied to autonomous AI agents. That is the gap.
Four components are required: declarative desired state, continuous reconciliation, policy enforcement at the boundary, and an observable audit trail. Together they produce a governed system: one that acts autonomously within declared constraints, enforces its own boundaries, and produces continuous evidence that governance is occurring.
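The four components can be sketched as a minimal control loop. Everything here is a hypothetical illustration, not an existing library: the names (`Policy`, `ControlPlane`, `reconcile`) and the action strings are invented to show how declarative state, boundary enforcement, and an audit trail fit together.

```python
from dataclasses import dataclass

# Hypothetical sketch: all classes and action names below are invented
# for illustration; this is not a real governance API.

@dataclass
class Policy:
    """Declarative desired state: the actions the agent may take."""
    allowed_actions: set

@dataclass
class AuditEvent:
    """One entry in the observable audit trail."""
    action: str
    permitted: bool

class ControlPlane:
    """Sits outside the acting system. Enforces policy at the boundary
    and records every decision as evidence that governance occurred."""

    def __init__(self, policy: Policy):
        self.policy = policy
        self.audit_log: list = []

    def authorize(self, action: str) -> bool:
        permitted = action in self.policy.allowed_actions
        self.audit_log.append(AuditEvent(action, permitted))
        return permitted

def reconcile(control: ControlPlane, proposed: list) -> list:
    """Continuous reconciliation: of the actions the agent proposes,
    execute only those the declared policy permits; log the rest."""
    return [a for a in proposed if control.authorize(a)]

control = ControlPlane(Policy(allowed_actions={"read_file", "summarize"}))
executed = reconcile(control, ["read_file", "delete_file", "summarize"])
print(executed)  # ['read_file', 'summarize']
denied = [e.action for e in control.audit_log if not e.permitted]
print(denied)    # ['delete_file']
```

The point of the sketch is the separation: the agent proposes, the control plane decides, and the audit log is produced as a side effect of every decision rather than as an optional extra.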
If DevOps unified software creation and operation under one discipline, governed autonomy seeks to unify AI reasoning and operational authority under one control-plane model. The connection — a substrate-independent, constitutional control plane for autonomous agents, composition-aware by design — is the discipline waiting to be named.
Action A is permitted. Action B is permitted. So is action C. But the sequence of A, then B, then C together constitutes scope creep that no single-action evaluation would have caught. The architectural response is composition-aware governance: session-level state, composition tracing, and pattern detection for boundary-probe signals.
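A minimal sketch of that idea, under stated assumptions: the `SessionGovernor` class, its action names, and the scope-creep rule below are all hypothetical, chosen only to show a check that passes each action individually but denies the composition.

```python
# Hypothetical illustration: not a real API. The composition rule here
# (many reads followed by an export) stands in for whatever sequence-level
# patterns a real governor would trace.

class SessionGovernor:
    """Keeps per-session history so sequences, not just individual
    actions, are evaluated against declared boundaries."""

    ALLOWED = {"list_users", "read_user", "export_report"}
    READ_THRESHOLD = 2  # assumed limit for this sketch

    def __init__(self):
        self.history: list = []  # session-level state / composition trace

    def check(self, action: str) -> bool:
        allowed = action in self.ALLOWED
        if allowed and action == "export_report":
            # Pattern detection: repeated reads followed by an export is
            # treated as a boundary-probe signal, even though each step
            # is individually permitted.
            reads = sum(1 for a in self.history if a == "read_user")
            if reads >= self.READ_THRESHOLD:
                allowed = False
        self.history.append(action)
        return allowed

gov = SessionGovernor()
results = [gov.check(a) for a in ["read_user", "read_user", "export_report"]]
print(results)  # [True, True, False]: the parts pass, the composition fails
```

The design choice worth noting is that denial happens at the last step without any single action being forbidden in isolation; the governor's authority comes from the session trace, not from a per-action allowlist alone.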