GRCcareers.ai

The Three Destabilizing Features of AI Governance: Opacity, Emergence, and Velocity

By Stephan Pochet · April 29, 2026

Every governance framework presupposes a model of the thing it governs. Sarbanes-Oxley presupposes humans signing financial statements. COSO presupposes processes with definable control points. ISO 31000 presupposes risks that can be enumerated, scored, and treated. Each of these frameworks is a marvel of accumulated institutional learning — and each was built for a world in which the entity under control was either a human actor or a deterministic process. Artificial intelligence is neither.

The previous essay in this series argued that AI governance is undergoing an ontological shift, not merely a lexical one. This essay names the three features of AI systems that force the shift: opacity, emergence, and velocity. Each one breaks a load-bearing assumption inside the existing GRC stack. Together they require a new control vocabulary — and a new generation of professionals capable of speaking it.

1. Opacity and the Limits of Audit

The audit function is the spine of corporate governance. Auditability presupposes traceability: a decision can be reconstructed, a control can be tested, a process can be walked. Probabilistic models invert this assumption. A modern foundation model produces an output through the interaction of billions of parameters, none of which are individually meaningful and the joint behavior of which is, in the strict sense, irreducible. There is no walkable trail. There is a forward pass.

Traditional auditors respond to this by demanding documentation: model cards, data sheets, evaluation suites. These are useful artifacts, but they are not audit trails — they are surrogates for behavior the auditor cannot directly inspect. The governance question is not whether to produce more documentation. It is whether the audit function itself must be reconceived for a class of systems whose internal logic is constitutively opaque.

2. Emergence and the End of Predictability

Conventional risk management assumes that the universe of possible failures can be enumerated in advance. Failure mode analyses, control matrices, and residual risk scores all rest on this assumption. Emergent behavior in large models violates it. Capabilities and failure modes appear at scale that no engineer designed and no risk register anticipated — jailbreaks, sycophancy, deceptive alignment, tool-use chains that cross system boundaries.

The governance implication is severe. If failures cannot be enumerated in advance, then controls designed against an enumerated failure set will be perpetually behind the system they govern. The discipline must move from ex ante enumeration toward continuous behavioral monitoring, red-teaming as a standing function rather than a project, and organizational structures that treat surprise as the expected case rather than the exception.

3. Velocity and the Collapse of the Human Review Window

Every legacy control assumes that a human reviewer sits somewhere in the loop on a timescale where review is meaningful. Quarterly attestations. Monthly reconciliations. Daily exception reports. Agentic AI systems collapse this timescale. An autonomous agent can execute thousands of micro-decisions per minute across procurement, customer communication, code commits, or trading. The human review window does not shrink — it disappears.

This is not a problem that more reviewers solve. It is a problem that requires a different topology of control: programmatic guardrails operating at the same velocity as the agent, kill switches with measurable activation latency, and segregation-of-duties structures encoded in the agent's environment rather than in human approval queues.

Toward a New Control Vocabulary

Opacity, emergence, and velocity are not edge cases. They are the defining features of the systems GRC functions are now expected to govern. The control vocabulary built for human actors and deterministic processes — segregation of duties, four-eyes review, periodic attestation — does not translate cleanly. New primitives are needed: behavioral evaluations as continuous controls, capability elicitation as a standing risk activity, agent permissioning as a first-class governance object, and incident taxonomies that recognize emergent failure as a category rather than an anomaly.

What This Means for GRC Careers

The professionals who will define the next decade of corporate governance are not the ones who memorize the existing control catalog. They are the ones who can extend it. They read research papers and audit reports with equal fluency. They translate between the language of model evaluations and the language of board risk committees. They understand that an AI system is not software in the traditional sense — it is a probabilistic capital asset that requires a new fiduciary frame.

GRCcareers.ai exists for this transition. The next essays in this series will move from diagnosis to prescription: the new control primitives, the new role definitions, and the search criteria boards will use to recruit the leaders of this discipline.