GPRC for Sustainability & ESG: A Tale of Two Futures: Star Trek or Blade Runner? 

In nearly every organization I speak with, sustainability and ESG are now part of the conversation. Not just in annual reports or investor decks, but in strategy sessions, risk workshops, board discussions, and even operational resilience planning. The reasons vary — regulations, investor expectations, customer demands, talent attraction, reputational pressure — but the direction is unmistakable. ESG has moved from “nice to have” into the territory of “must govern.”

And yet, despite the attention, there is still a persistent disconnect. Many organizations are doing ESG, reporting ESG, and talking ESG, but they struggle to manage ESG as a unified capability that truly influences decisions, shapes performance, and stands up to scrutiny. Too often, ESG remains a collection of initiatives rather than an operating model. It becomes a patchwork of programs and metrics, rather than a command framework that can guide the enterprise through uncertainty.

This is why I often use an analogy in presentations that seems to resonate with executives and practitioners alike: a tale of two futures.

  • One future looks like . . .

[The rest of this blog can be read on the Corporater blog, where GRC 20/20’s Michael Rasmussen is a Guest Blogger]

Homeostatic Digital Risk and Resilience in GRC 7.0 – GRC Orchestrate

I have reached a point in my research, advisory work, and ongoing dialogue with boards, executives, regulators, and technology providers where incremental language no longer feels responsible. The signals are too strong, the failures too visible, and the velocity of change too unforgiving. Digital risk and resilience are no longer peripheral concerns managed through documentation and periodic review. They have become existential capabilities that determine whether an organization can be trusted to operate, scale, and endure.

The future of digital risk and resilience is not incremental. It is architectural. And most governance, risk management, and compliance platforms on the market today, including many that are widely viewed as leaders, are not built for what comes next. This is not a criticism of intent or effort. It is an observation rooted in how these platforms were conceived, funded, and engineered over the past two decades.

This article is a call to action from my analyst seat. Not a marketing manifesto. Not speculative futurism. It is a direct appeal for architectural honesty. If IT risk platforms do not fundamentally re-architect toward homeostatic digital risk and resilience, they will not remain relevant by 2030. And relevance in this decade is inseparable from the ability to deliver, evidence, and sustain digital trust.


From Reactive Control to Homeostatic Stability

Homeostasis is a concept drawn from biology, but its relevance to digital enterprises is profound. A living system survives not because it eliminates variability, but because it continuously senses change, evaluates deviation, and adapts its behavior to remain stable while conditions fluctuate. Temperature, oxygen levels, hydration, and energy are all regulated through constant feedback loops. There is no quarterly assessment of survival. There is continuous regulation.

Now contrast this with how most organizations still manage digital risk and resilience . . .

  • Risk assessments are performed periodically
  • Controls are defined statically
  • Issues are logged after something breaks
  • Dashboards report on conditions that have already passed

This model may satisfy audit requirements, but it does not create resilience. It creates records of hindsight.

In an environment defined by cloud-native architectures, continuous software delivery, AI-driven operations, cyber-physical convergence, volatile geopolitics, and accelerating regulation, non-homeostatic GRC is not merely inefficient. It is dangerous. It creates a false sense of control while the operating environment changes faster than the governance mechanisms meant to oversee it.

A homeostatic approach to digital risk and resilience recognizes that stability is dynamic. It accepts that disruption is normal and that the role of GRC is not to document failure after the fact, but to continuously regulate exposure in pursuit of objectives. This is the foundational shift introduced by GRC 7.0 – GRC Orchestrate.

In practical terms, a homeostatic GRC capability must enable organizations to (a minimal sketch follows the list):

  • Detect deviation from normal operating conditions as it emerges, not weeks or months later
  • Understand how that deviation propagates across processes, technologies, and third parties
  • Adjust controls, resources, and decision thresholds in near real time
  • Learn from disruption so the system becomes more resilient with each stress event
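
To make this concrete, here is a minimal sketch of that sense-evaluate-adjust-learn loop in Python. The `Signal` structure, the signal name, and the thresholds are hypothetical illustrations of the pattern, not a reference implementation:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str          # e.g., a hypothetical "auth_service_error_rate"
    value: float       # current observed value
    baseline: float    # expected value under normal operating conditions
    tolerance: float   # acceptable deviation before regulation engages

def regulate(signals, adjust, learn):
    """One pass of a homeostatic loop: sense, evaluate, adjust, learn."""
    for s in signals:
        deviation = abs(s.value - s.baseline)
        if deviation > s.tolerance:          # sense and evaluate deviation
            action = adjust(s, deviation)    # adapt controls in near real time
            learn(s, action)                 # feed the outcome back into the system

# Hypothetical usage: a drifting error rate triggers adjustment and learning.
signals = [Signal("auth_service_error_rate", value=0.09, baseline=0.01, tolerance=0.02)]
regulate(
    signals,
    adjust=lambda s, d: f"tighten control on {s.name} (deviation {d:.2f})",
    learn=lambda s, a: print(f"recorded outcome: {a}"),
)
```

The point of the sketch is the shape of the loop, not the specifics: sensing, evaluation, adjustment, and learning run continuously rather than on an assessment calendar.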

GRC 7.0 Is System-Centric, Not Workflow-Centric

For more than twenty years, the gravity of the GRC market has been firmly anchored in workflow. Platforms were designed, sold, and evaluated based on how efficiently they could route tasks, collect attestations, enforce approvals, and store evidence. In an era where risk and compliance were episodic, largely manual, and organizationally siloed, this made sense for compliance, though far less for risk management. Workflow provided structure and traceability in otherwise fragmented environments.

Over time, however, workflow quietly became mistaken for intelligence. The ability to move a task from one role to another was conflated with the ability to understand risk. The completion of an assessment was treated as equivalent to managing exposure. Many platforms optimized for efficiency of process rather than fidelity of insight. They became very good at documenting that something happened, but far less capable of understanding what it meant in the context of a constantly changing operating environment.

GRC 7.0 – GRC Orchestrate – forces a reckoning with this assumption. In a system-centric model, the enterprise itself becomes the object of governance, not the checklist. The platform must understand how objectives are pursued, how value is created, and how disruption propagates across interconnected processes, technologies, and third-party dependencies. Workflow does not disappear, but it is demoted from architectural foundation to orchestration layer.

In practice, this means workflows are triggered by system conditions rather than calendars. Tasks are generated because thresholds are breached, dependencies shift, or risk signals change, not because an annual cycle demands activity. Forms become structured interfaces into a living system model, not static repositories of point-in-time opinion.

In a system-centric GRC architecture, workflows are typically invoked because (a minimal sketch follows the list):

  • A critical service dependency degrades, not because a review cycle begins
  • A third-party risk signal materially changes, not because a questionnaire comes due
  • A regulatory obligation enters scope through business change, not because of an annual refresh
  • A control’s effectiveness drifts, not because an audit is scheduled
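
A minimal sketch of what condition-driven invocation could look like, with invented trigger predicates and workflow names standing in for real system signals:

```python
# Workflows fire on system conditions, not calendars. Each trigger pairs a
# predicate over the current state with the workflow it should launch.
# All names and thresholds here are hypothetical.

def dependency_degraded(state):
    return state.get("service_availability", 1.0) < 0.99

def vendor_risk_changed(state):
    return state.get("vendor_risk_delta", 0.0) > 0.2

TRIGGERS = [
    (dependency_degraded, "launch_service_impact_review"),
    (vendor_risk_changed, "launch_third_party_reassessment"),
]

def evaluate_triggers(state):
    """Return the workflows warranted by the current state snapshot."""
    return [workflow for predicate, workflow in TRIGGERS if predicate(state)]

# A degraded dependency generates work; a quiet system generates none.
print(evaluate_triggers({"service_availability": 0.97}))
# -> ['launch_service_impact_review']
```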

This is a profound shift in how GRC platforms are designed, implemented, and used, and it cannot be achieved without rethinking architecture from the ground up.


Digital Twins as the Nervous System of Homeostatic GRC

At the center of any homeostatic system is a representation of self. In biological terms, this is the nervous system’s ability to sense, interpret, and coordinate response across the organism. In GRC 7.0, that role is fulfilled by the digital twin. Not as a static diagram or a visualization layer, but as a continuously synchronized, semantically rich model of how the enterprise actually operates.

A true GRC digital twin models far more than risks and controls. It captures objectives, processes, assets, data flows, applications, infrastructure, and third-party relationships as interconnected elements of a single system. It understands not only that a control exists, but what it protects, how it operates, and what fails when it degrades. It connects regulatory obligations to the processes and technologies that fulfill them, and to the vendors and services upon which they depend.

This is where the limitations of most current platforms become visible. They store information, but they do not model relationships with sufficient depth or fidelity. Risks are isolated records. Controls are abstract requirements. Third parties are managed as inventories rather than as operational dependencies. As a result, platforms struggle to answer the questions that actually matter when disruption occurs.

A mature GRC digital twin allows organizations to answer questions such as:

  • Which business objectives are immediately at risk if a system or cloud provider fails
  • Which regulatory obligations become non-compliant under a given cyber/digital disruption scenario
  • Where concentration risk exists across cloud providers, regions, or critical vendors
  • Which controls act as true stabilizers versus ceremonial safeguards

Consider a major cloud provider outage. In a traditional GRC system, this may trigger an incident record, perhaps a business continuity workflow, and eventually a report. In a homeostatic GRC platform with a digital twin, the system already understands which business services rely on that provider, which processes are degraded, which regulatory obligations are at risk, and which customers may be impacted. It can simulate cascading effects, test compensating scenarios, and support real-time decision making.

This capability cannot be bolted onto workflow-driven architectures. It requires graph-based data models, semantic ontologies, and continuous synchronization with operational systems and risk intelligence feeds. Without a digital twin, GRC remains descriptive. With one, it becomes predictive and adaptive.
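
As a toy illustration of why this takes a graph rather than flat records, the sketch below walks a tiny dependency graph from a failed provider to everything downstream. The nodes, edges, and obligation labels are invented for illustration:

```python
# Toy digital-twin fragment: the enterprise as a dependency graph, so an
# outage can be traced from a provider to the objectives and obligations
# it threatens. This is a sketch of the idea, not a product schema.
from collections import deque

# An edge A -> B means "B depends on A".
DEPENDENTS = {
    "cloud_provider_X": ["payments_service", "reporting_service"],
    "payments_service": ["objective:process_transactions", "obligation:availability_rule"],
    "reporting_service": ["obligation:regulatory_reporting"],
}

def impacted_by(failed_node):
    """Breadth-first walk of everything downstream of a failure."""
    impacted, queue = set(), deque([failed_node])
    while queue:
        node = queue.popleft()
        for dependent in DEPENDENTS.get(node, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

print(sorted(impacted_by("cloud_provider_X")))
# -> the services, objectives, and obligations at risk, not just an incident record
```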


Risk Intelligence as Continuous Sensory Input

Homeostasis depends on sensing. A system that cannot perceive change cannot regulate itself. In GRC 7.0 – GRC Orchestrate, risk intelligence provides the sensory input that allows the platform to understand its current state and detect deviation from acceptable operating conditions.

Internal risk intelligence is generated continuously by the enterprise itself. This includes operational telemetry from IT and OT environments, incident and near-miss data, control performance metrics, system availability indicators, and business performance signals tied directly to objectives.

Concrete examples of internal risk intelligence include:

  • Control performance telemetry showing drift, failure, or degradation
  • Incident and near-miss trends revealing weak signals before major loss events
  • System availability and integrity metrics tied directly to critical business services
  • Business performance indicators that reveal when risk is beginning to erode objectives

These signals reveal not just whether controls exist, but whether they are effective in practice.

External risk intelligence extends the platform’s awareness beyond organizational boundaries. Cyber threat intelligence, regulatory change and enforcement activity, geopolitical developments, third-party risk signals, supply chain disruption indicators, and ESG and reputational data all provide context for how the operating environment is evolving.

In a homeostatic GRC platform, this external intelligence often includes:

  • Cyber threat intelligence correlated directly to technologies and vendors in use
  • Regulatory change mapped to affected obligations, controls, and processes
  • Geopolitical risk indicators tied to supplier concentration and geographic exposure
  • Third-party risk signals derived from financial, cyber, operational, and reputational data

In most GRC platforms today, this information is consumed passively. It is attached to risk records, referenced during assessments, or reviewed manually by specialists. In a homeostatic architecture, intelligence is active. It continuously recalibrates exposure, likelihood, velocity, and impact. Risk ceases to be a static score and becomes a dynamic condition of the system.
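
A minimal sketch of the difference, with invented signal types and weights: the likelihood component of a risk is recomputed as intelligence arrives, and decays back toward baseline when the environment quiets:

```python
# Sketch: risk as a dynamic condition recalibrated by each signal, rather
# than a static score set at assessment time. Signal types, weights, and
# the decay rate are all hypothetical.

class DynamicRisk:
    def __init__(self, base_likelihood):
        self.likelihood = base_likelihood   # last computed, not last assessed

    def ingest(self, signal_type, severity):
        """Each intelligence signal nudges likelihood in proportion to its weight."""
        weights = {"threat_intel": 0.30, "control_drift": 0.20, "vendor_signal": 0.15}
        self.likelihood = min(1.0, self.likelihood + weights.get(signal_type, 0.05) * severity)

    def decay(self, rate=0.05):
        """Pull likelihood back toward baseline as conditions normalize."""
        self.likelihood = max(0.0, self.likelihood - rate)

risk = DynamicRisk(base_likelihood=0.10)
risk.ingest("threat_intel", severity=0.8)    # correlated threat activity observed
risk.ingest("control_drift", severity=0.5)   # a stabilizing control weakens
print(f"current likelihood: {risk.likelihood:.2f}")   # -> 0.44
```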

This distinction is critical. A platform that updates risk only when someone completes a form cannot keep pace with real-world change. A platform that adjusts its understanding continuously can support timely, proportionate response. This is the difference between managing risk as documentation and managing risk as a living phenomenon in the homeostatic digital enterprise.


Agentic AI as the Regulatory Mechanism

Artificial intelligence is often discussed in GRC in superficial terms. Dashboards are labeled as intelligent. Chat interfaces are presented as transformation. But intelligence that is not grounded in architecture is cosmetic. In a homeostatic GRC platform, AI performs a regulatory function analogous to biological control systems.

Agentic AI continuously monitors the digital twin and associated intelligence feeds for deviation from expected operating states. When thresholds are breached or patterns emerge, it evaluates potential impact using system context rather than isolated data points.

In practice, agentic AI capabilities in a homeostatic GRC platform include:

  • Identification of abnormal patterns that exceed defined risk appetite thresholds
  • Simulation of cascading impact across business services and regulatory obligations
  • Prioritization of response actions based on business criticality and risk velocity
  • Coordination of actions across risk, compliance, IT, security, and operations

Over time, these agents learn. They observe which responses were effective, which were not, and how quickly the system returned to stability. This learning loop is essential. Without it, platforms repeat the same playbooks regardless of outcome.
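
A compressed sketch of that learning loop, with hypothetical playbook names and time-to-stability as the effectiveness measure:

```python
# Sketch: agents score playbooks by how quickly past responses restored
# stability, then prefer what actually worked. Names and the recovery
# metric are invented for illustration.
from collections import defaultdict

class ResponseAgent:
    def __init__(self):
        # playbook -> observed recovery times in minutes (lower is better)
        self.history = defaultdict(list)

    def record_outcome(self, playbook, minutes_to_stability):
        self.history[playbook].append(minutes_to_stability)

    def choose(self, candidates):
        """Prefer the candidate with the best observed track record."""
        def avg_recovery(playbook):
            runs = self.history[playbook]
            return sum(runs) / len(runs) if runs else float("inf")
        return min(candidates, key=avg_recovery)

agent = ResponseAgent()
agent.record_outcome("failover_to_region_B", 12)
agent.record_outcome("throttle_and_queue", 45)
print(agent.choose(["failover_to_region_B", "throttle_and_queue"]))
# -> failover_to_region_B, because it restored stability faster last time
```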

Importantly, this does not remove humans from the loop. It changes their role. Risk and compliance professionals become stewards of thresholds, assumptions, and trade-offs. They focus on governance of the system rather than administration of tasks. This is a necessary evolution if GRC is to remain relevant in an AI-accelerated world.


Digital Trust as the Emergent Outcome

Digital trust is frequently invoked but rarely defined with precision. It is not achieved through an accumulation of controls, certifications, or policy statements. Digital trust emerges when stakeholders have confidence in how an organization behaves under stress.

Boards want assurance that objectives can be sustained in volatile cyber conditions. Regulators want evidence that digital resilience compliance is continuous rather than episodic. Customers and partners want confidence that services will remain reliable, secure, and ethical even when disruptions occur. These expectations cannot be met through static governance mechanisms.

Homeostatic GRC provides a credible foundation for digital trust because it demonstrates stability through adaptation. It shows not just that controls exist, but that the system can sense degradation, respond proportionately, and recover effectively.

Increasingly, digital trust is judged by evidence such as:

  • Continuous compliance signals rather than point-in-time attestations
  • Resilience testing grounded in realistic stress scenarios and dependencies
  • Third-party oversight based on operational criticality rather than vendor tier labels
  • Transparent reporting that reflects real system behavior under stress

In such a model, transparency is grounded in system truth rather than curated narratives.

This aligns closely with regulatory direction, even where language differs. Operational resilience requirements, continuous controls monitoring, and enhanced third-party accountability all point toward a future where static GRC approaches are no longer sufficient. Digital trust will increasingly be judged by observed behavior, not documented intent.


What Breaks Without Homeostatic GRC

Before addressing the market implications, it is worth being explicit about what fails when organizations continue to rely on non-homeostatic GRC architectures.

When GRC remains workflow-driven and episodic, several failure modes become inevitable:

  • Risk is detected too late because assessments lag reality
  • Dependencies are misunderstood, leading to cascading failures during disruption
  • Third-party risk is underestimated because vendors are treated as records, not operational lifelines
  • Regulatory non-compliance emerges unexpectedly due to misaligned obligations and processes
  • Executive decisions are made on outdated or incomplete information

These failures are not hypothetical. They are already visible in cloud outages, cyber incidents, regulatory enforcement actions, and supply chain disruptions across industries. The common root cause is not lack of effort, but lack of architectural alignment with how modern enterprises actually operate.


A Hard Truth for the Market

I will say this plainly. Many of today’s leading cyber/digital/IT risk platforms will not be leaders by 2030. Not because they lack customers, capital, or talent, but because their core architectures are optimized for a world that no longer exists.

You cannot retrofit homeostasis onto a system designed for periodic reporting. You cannot achieve real-time digital resilience with quarterly risk assessments. You cannot deliver digital trust with disconnected modules and brittle data models.

This requires re-architecting at the core. Data before workflow. Digital twins before dashboards. Intelligence before tasks.


The Call to Action

For technology providers, the message is clear. Stop adding features and start rebuilding foundations. For organizations, stop buying tools that document risk and start demanding platforms that manage it as a living system.

For risk, compliance, and technology leaders, the question is no longer whether GRC will change. The question is whether your architecture will survive the change.

GRC 7.0 – GRC Orchestrate is not a distant vision. It is an inevitability driven by complexity, velocity, and trust. Homeostatic digital risk and resilience is the price of admission to the next decade of digital enterprise. The only remaining decision is whether you will lead the transition or be disrupted by it.

Rise of Homeostatic Enterprise & Operational Risk and Resilience in GRC 7.0 – GRC Orchestrate

A Call to Action at an Architectural Inflection Point

This article builds directly on last week’s analysis, GRC at the Architectural Crossroads: Why Legacy Platforms Must Rebuild to Survive, where I argued that many governance, risk management, and compliance platforms have reached the limits of architectures designed for a slower, simpler era. That piece examined why incremental modernization and surface-level AI enhancements are insufficient — and why a core rearchitecture is unavoidable.

That article was a call to action; this article continues it . . .

Not because today’s platforms are “bad,” but because many of the most established and widely deployed GRC systems were architected for a different era: an era of periodic assessment, static records, and retrospective assurance. That era is ending.

GRC 7.0 — what I call GRC Orchestrate — (and yes, I know what GRC 8.0 is and have framed it for 2030 and beyond, but the world is not ready for it yet) represents a structural rethinking of how risk and resilience must function inside the enterprise. At its core is a concept that is rarely discussed explicitly in GRC, but is essential to the future of enterprise and operational risk management:

Homeostasis.

In biology, homeostasis is the property that allows a living system to survive in a hostile, changing environment. Body temperature, blood chemistry, oxygen levels, immune response . . . none of these are managed through periodic review. They are sensed continuously, regulated dynamically, and corrected automatically when they drift outside acceptable bounds.

The system does not wait for failure. It anticipates imbalance and responds before collapse.

That is the analogy most governance, risk management, and compliance discussions stop short of making explicit, but it is the only analogy that truly fits the modern enterprise.

An organization is not a machine that can be tuned once a quarter. It is a living system operating in a volatile ecosystem of markets, regulations, technologies, partners, and threats. Risk management, in this context, is not about documenting exposure. It is about maintaining internal equilibrium while the external environment is in constant motion, so the organization can make decisions and achieve objectives amid uncertainty and instability.

Enterprise and operational risk management are no longer about periodic control. They are about continuous physiological balance: organizational homeostasis.


From Static Control to Living Systems

Legacy governance, risk management, and compliance platforms were designed as skeletal systems: rigid structures meant to hold the organization upright through predefined processes, controls, and attestations. They assume stability, predictability, and time for reflection.

But skeletons do not sense. They do not adapt. And they do not heal.

Modern enterprises require something closer to a nervous system, one that continuously senses internal conditions and external stimuli, interprets meaning, and triggers corrective action before damage spreads.

Most legacy GRC platforms were built around a fundamentally application-centric worldview. Risk is captured in registers. Controls are documented as artifacts. Assurance is delivered through workflows, attestations, and reports. AI, where present, is often bolted on — assisting with text analysis, summarization, or productivity — but rarely altering the core operating logic of the system.

This architecture assumes that risk can be observed after the fact and managed through periodic intervention.

That assumption is equivalent to checking a patient’s vital signs once a year and declaring them healthy.


Intelligence-Centric GRC: The Foundation of Homeostasis

One of the most important distinctions emerging in the market is between application-centric GRC and intelligence-centric GRC.

Application-centric GRC optimizes workflows, repositories, and compliance processes. Intelligence-centric GRC engineers the conditions for continuous understanding.

Homeostatic risk and resilience cannot be achieved through documents and workflows alone. They require:

  • Data engineering as a first-class capability
  • Semantic models that define meaning consistently across the enterprise
  • Deterministic computation of risk and control states
  • Explainable and replayable reasoning
  • Continuous sensing and feedback loops

This is why AI cannot be bolted on.

Large language models are extraordinarily powerful at interpretation, synthesis, and interaction. But in regulated environments, they cannot serve as systems of record or assurance engines on their own. Non-determinism, semantic drift, and irreproducibility are not tuning problems — they are architectural characteristics.

GRC Orchestrate addresses this by separating concerns:

  • Deterministic data and reasoning layers establish truth, causality, and replayable assurance
  • Agentic AI operates on top of this foundation: sensing signals, interpreting context, recommending action, and orchestrating response

This layered architecture is what enables homeostasis: stability through constant adjustment, not rigidity.
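
A minimal sketch of the separation, with an invented control rule and context: the deterministic layer computes state as a pure function of recorded facts (same inputs, same answer, replayable), and the agentic layer interprets and recommends on top of that truth:

```python
def control_state(evidence):
    """Deterministic layer: a pure function of recorded facts."""
    if evidence["last_test_passed"] and evidence["coverage"] >= 0.95:
        return "effective"
    if evidence["coverage"] >= 0.80:
        return "degraded"
    return "failed"

def agent_recommendation(state, context):
    """Agentic layer: contextual interpretation over deterministic truth."""
    if state == "degraded" and context.get("threat_activity") == "elevated":
        return "prioritize remediation: degradation coincides with active threat"
    return "monitor"

# Hypothetical facts and context; the thresholds above are illustrative.
facts = {"last_test_passed": True, "coverage": 0.86}
state = control_state(facts)                  # established, replayable truth
print(state, "->", agent_recommendation(state, {"threat_activity": "elevated"}))
```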


Homeostatic Enterprise Risk: Stability in Strategic Motion

At the enterprise level, risk is not an obstacle to ambition. It is the interpretive intelligence that allows ambition to be pursued responsibly.

Homeostatic enterprise risk management reframes ERM from a catalog of exposures into a living system that continuously aligns uncertainty with objectives and enables the organization to make good decisions.

In GRC Orchestrate, this is achieved by anchoring risk directly to strategy, decisions, and performance through decision-centric and objective-centric models. Risks are not evaluated in isolation. They are evaluated in relation to what the enterprise is considering (decisions) and trying to achieve (objectives).

Consider a global manufacturer expanding into a new region amid geopolitical instability and regulatory uncertainty. In a traditional ERM model, risks are identified, scored, and reported. In a homeostatic model, the organization continuously monitors:

  • Signals that indicate changing geopolitical conditions
  • Regulatory developments affecting market access
  • Supply chain dependencies and fragility
  • Performance indicators tied to strategic objectives

Digital twins simulate how shifts in these variables affect strategic outcomes. Agentic AI interprets emerging patterns and recommends adjustments — not after the fact, but as conditions evolve.

The system does not freeze strategy. It stabilizes execution while allowing strategic motion.

That is enterprise risk as homeostasis.


Objective-Centric ERM: Keeping Performance in Balance

The second layer of homeostatic risk operates at the level of objectives and performance.

Traditional ERM often collapses under its own abstraction. Risk taxonomies grow. Registers expand. Relevance diminishes. Objective-centric ERM resists this gravitational pull by keeping risk grounded in outcomes.

In a GRC Orchestrate architecture:

  • Objectives are explicitly modeled
  • Uncertainties are mapped to those objectives
  • Leading indicators are continuously monitored
  • Risk responses are dynamically adjusted

This creates a feedback loop between performance and uncertainty.

For example, a financial services firm pursuing growth in digital channels monitors not only financial performance but also operational capacity, third-party dependency, regulatory scrutiny, and customer trust indicators. As signals shift, the system adjusts risk thresholds, control intensity, and escalation paths, maintaining balance without halting progress.
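
A minimal sketch of that loop, with an invented objective, indicators, and thresholds:

```python
# Sketch of objective-centric monitoring: leading indicators are attached
# to an explicit objective, and a breach tightens response around that
# objective rather than halting it. All values are illustrative.

objective = {
    "name": "grow_digital_channel_revenue",
    "indicators": {
        # indicator: (current value, threshold, direction of concern)
        "third_party_dependency_score": (0.72, 0.70, "above"),
        "customer_trust_index":         (81,   75,   "below"),
        "ops_capacity_headroom":        (0.12, 0.15, "below"),
    },
}

def breaches(obj):
    """Return the indicators that have crossed their concern threshold."""
    out = []
    for name, (current, threshold, concern) in obj["indicators"].items():
        if (concern == "above" and current > threshold) or \
           (concern == "below" and current < threshold):
            out.append(name)
    return out

# Two indicators breach: escalation and control intensity tighten around
# this objective while the objective itself keeps moving.
print(breaches(objective))
# -> ['third_party_dependency_score', 'ops_capacity_headroom']
```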

This is not risk avoidance. It is performance stabilization under uncertainty.


Operational Risk & Resilience: The Mechanics of Homeostasis

Operational risk and resilience are where the homeostatic analogy becomes unavoidable.

In a living organism, resilience is not an emergency plan. It is the ability to maintain function under stress: to reroute blood flow, mobilize immune response, and preserve core systems even when damaged.

Operational risk management plays the same role inside the enterprise.

Processes are the organs. Systems are the circulatory system. Third parties are external organs temporarily grafted into the body. When one element weakens or fails, the risk is not localized . . . it propagates.

In too many organizations, operational risk remains trapped in a SOX-shaped mindset, narrowly focused on financial controls and retrospective testing. This is equivalent to measuring bone density while ignoring respiratory failure.

Homeostatic operational risk and resilience require something fundamentally different:

  • Continuous sensing of operational vital signs
  • Modeling of interdependencies across processes, systems, and third parties
  • Simulation of stress, shock, and cascading failure
  • Dynamic adjustment of controls, tolerances, and response mechanisms

Digital twins function as the organization’s internal physiology model: a living map of how processes, assets, systems, and partners interact. When stress is applied, leaders can see not just where pain occurs, but how failure propagates and where compensating mechanisms must engage.

Agentic AI operates like an autonomic nervous system: detecting anomalies, interpreting weak signals, and initiating corrective action without waiting for human intervention, while remaining governed by deterministic rules and risk appetite.

The organization does not pause to recover. It continuously self-corrects.

That is operational resilience as homeostasis.


From Business Continuity to Resilience by Design

One of the most profound shifts in GRC Orchestrate is the evolution from business continuity as a reactive function to resilience as a design principle.

Resilience is not a plan on a shelf. It is not an annual exercise. It is an architectural property of the enterprise.

In a homeostatic model:

  • Redundancy is intentional
  • Flexibility is engineered
  • Recovery is rehearsed continuously through simulation
  • Response is orchestrated, not improvised

A pharmaceutical firm, for example, designs supply chain resilience directly into product lifecycle planning: modeling alternative suppliers, inventory buffers, and regulatory constraints before disruption occurs. When conditions change, the system adjusts, maintaining equilibrium between availability, compliance, and cost.

This is resilience that lives inside operations, not alongside them.


The Role of Agentic AI in Maintaining Equilibrium

Agentic AI is often described as the “brain” of next-generation governance, risk management, and compliance. That framing is misleading.

In a homeostatic system, intelligence is distributed. The brain does not consciously regulate heart rate, blood pressure, or glucose levels. It delegates regulation to autonomic systems designed to act faster than conscious thought.

Agentic AI plays this autonomic role inside GRC Orchestrate.

It is not the system of record. It is not the arbiter of truth. It is the mechanism that keeps the organization within safe operating bounds as conditions fluctuate.

In a GRC Orchestrate architecture, agents continuously:

  • Monitor internal telemetry and external signals
  • Interpret context through semantic and objective-centric models
  • Detect drift from risk appetite, tolerance, or performance equilibrium
  • Recommend or initiate corrective action

Crucially, these agents operate on top of deterministic data and reasoning layers. This ensures that every adjustment is explainable, replayable, and defensible: a regulatory-grade nervous system, not a black box reflex.
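
A minimal sketch of what replayable reasoning can rest on, with a hypothetical event shape and appetite rule: every adjustment is logged with its inputs so the decision can be recomputed, and defended, later:

```python
import json

LOG = []

def decide_and_log(rule_id, inputs, rule):
    """Apply a deterministic rule and record the decision with its inputs."""
    decision = rule(inputs)
    LOG.append({"rule_id": rule_id, "inputs": inputs, "decision": decision})
    return decision

def replay(entry, rule):
    """Re-run the same rule on the logged inputs; the result must match."""
    return rule(entry["inputs"]) == entry["decision"]

# Hypothetical appetite rule: escalate when exposure exceeds appetite.
appetite_rule = lambda x: "escalate" if x["exposure"] > x["appetite"] else "hold"
decide_and_log("appetite_check_v1", {"exposure": 0.8, "appetite": 0.6}, appetite_rule)

print(json.dumps(LOG[0]))
print("replayable:", replay(LOG[0], appetite_rule))   # -> True
```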

This is how the enterprise remains stable without becoming rigid, and adaptive without becoming uncontrolled.


Why Non-Homeostatic GRC Will Fail

Organizations do not fail because they lack policies. They fail because they lose equilibrium.

Non-homeostatic governance, risk management, and compliance architectures assume that stability comes from control, documentation, and periodic review. In reality, stability comes from continuous regulation. When conditions change faster than governance cycles, static systems become amplifiers of risk rather than mitigators of it.

Non-homeostatic GRC fails in predictable ways:

  • It detects risk after damage has already occurred
  • It reports symptoms without understanding underlying causes
  • It escalates issues without the ability to rebalance the system
  • It optimizes for audit defensibility rather than operational survivability

This is why organizations with mature compliance programs still experience cascading operational failures. Their GRC platforms can explain what went wrong, but only after the fact. They lack the sensory, interpretive, and corrective mechanisms required to keep the enterprise within safe operating bounds as conditions shift.

In a non-homeostatic model, risk management becomes brittle. Controls multiply, but adaptability declines. Decision-makers receive more reports, yet have less confidence. The system grows heavier precisely when it needs to be lighter.

By contrast, a homeostatic GRC architecture assumes instability as the baseline. It is designed to absorb shock, compensate for weakness, and preserve critical functions under stress. It does not seek perfect control. It seeks continuous balance.

This is why the future of governance, risk management, and compliance will not be defined by who has the most workflows, dashboards, or AI features. It will be defined by who can keep the enterprise functioning — credibly, explainably, and defensibly — while everything around it changes.


Why This Matters Now

The pace of change is not slowing. Regulatory volume is increasing. Operational complexity is accelerating. Expectations of resilience are rising from boards, regulators, customers, and society.

Platforms that remain anchored to workflow-driven, record-centric architectures will struggle, not because they lack features, but because they lack the structural capacity for continuous equilibrium.

GRC Orchestrate is not a nice-to-have evolution. It is a necessary response to a world where risk is continuous and resilience must be designed.

Enterprise and operational risk management are no longer about control. They are about keeping the enterprise in balance continuously, explainably, and defensibly.

That is homeostatic risk and resilience.

And it is the future of GRC.


Next Week: Rise of Homeostatic Digital Risk & Resilience in GRC Orchestrate — Trust at the Speed of Digital.

GRC at the Architectural Crossroads: Why Legacy Platforms Must Rebuild to Survive

A View Earned Over Time

I do not come to this perspective lightly, nor is it driven by the latest technology trend or marketing cycle. I have been immersed in GRC technology for more than twenty-six years. I defined the GRC acronym in 2002. I authored the first Forrester GRC Waves when the market was still trying to understand itself. Since then, I have continuously tracked, analyzed, briefed, and advised on this space across industries, geographies, and regulatory regimes.

That longitudinal view matters. When you watch a market evolve over decades rather than quarters, patterns become unmistakable. You see which design decisions age well and which quietly become liabilities. You see which vendors innovate from conviction and which survive by layering complexity on top of aging assumptions, or by using marketing fiction to weave perceptions that are not reality (or not yet reality). You also learn that technology markets do not fail loudly at first: they fail slowly, then suddenly.

The GRC market is now approaching that kind of inflection point.

What concerns me is not that legacy platforms are imperfect. No platform ever is. What concerns me is that many of the architectural foundations underpinning today’s dominant GRC solutions were designed for a world that no longer exists; and, more importantly, cannot stretch far enough to support the world we are rapidly entering.

How We Got Here: The Long Shadow of Early GRC Architecture

Many GRC platforms in use today trace their lineage back ten, fifteen, even twenty years or more. They were conceived in an era when the primary problem organizations were trying to solve was visibility and documentation. Regulators wanted proof. Boards wanted reports. Audit committees wanted assurance artifacts. The dominant paradigm was periodic, retrospective, and largely siloed.

Those early architectures reflected that reality. Data models were designed around assessments, issues, controls, and documents. Workflows were linear and role-based. Risk, compliance, audit, and policy management were treated as adjacent — but fundamentally separate — domains.

Over time, pressures mounted. Regulations multiplied. Risk categories expanded. Third-party ecosystems exploded. Cyber risk grew into an existential threat. Vendors responded the only way most enterprise software vendors know how: they added.

They added modules. They added configuration layers. They added analytics engines. They added integrations. When organic innovation slowed, they acquired competitors and complementary tools. Each step made sense in isolation. Collectively, they created platforms that look powerful on the surface but are increasingly brittle and cumbersome under the hood. They refreshed the user experience, but underneath, the architecture became increasingly dated.

Architecture, unlike marketing, has memory. Every bolt-on capability inherits the constraints of the foundation beneath it. Eventually, those constraints begin to define what is no longer possible.

Why This Moment Is Fundamentally Different

The GRC market has lived through many cycles of change: Sarbanes-Oxley, the financial crisis, GDPR, operational resilience, ESG. Each wave increased complexity, but none fundamentally changed the nature of the system itself.

  • AI does.
  • Agentic AI does.
  • Digital twins do.

These are not new features to be slotted into an existing roadmap. They represent a shift from systems that record and report to systems that sense, reason, and act. That shift exposes architectural weaknesses that were previously manageable, even invisible.

In my conversations with vendors, I increasingly hear phrases like “AI-powered,” “embedded intelligence,” and “next-generation analytics.” In many cases, what that actually means is that an AI capability sits adjacent to the core platform, drawing from exported data, operating under tight constraints, and returning insights that must still be interpreted and acted upon by humans.

That is assistive technology. It is not transformational technology.

The future of GRC is not about helping humans work faster inside broken models. It is about enabling organizations to operate as adaptive systems under continuous uncertainty.

GRC 7.0 and the Emergence of Homeostatic GRC

When I describe GRC 7.0 – GRC Orchestrate, I am describing a fundamental reframing of what GRC is meant to do. At its core, GRC has always been about three things: achieving objectives, addressing uncertainty, and acting with integrity. What has been missing is the ability to do this continuously, dynamically, and at scale.

This is where the concept of homeostasis becomes essential.

In biology, homeostasis refers to the ability of a living organism to maintain internal stability while external conditions change. This is not achieved through constant conscious oversight. It is achieved through deeply integrated systems of sensors, controls, and effectors that operate automatically, continuously, and proportionally.

Most organizations today operate their GRC programs like a patient in intensive care: monitored constantly, intervened upon manually, and perpetually one incident away from escalation. This is inefficient, exhausting, and ultimately unsustainable.

A homeostatic GRC system (built on GRC 7.0 – GRC Orchestrate) is different. It is self-aware. It detects weak signals before they become failures. It adjusts behavior within defined tolerances. It escalates only when necessary. Most importantly, it frees leadership to focus on strategic objectives rather than perpetual fire-fighting.

This is not a cultural aspiration alone. It is an architectural requirement.

Why Digital Twins Change Everything — and Why Star Trek: Strange New Worlds Gets It Right

I want to be explicit about the illustration I am using here, because it matters. This is not a vague science‑fiction reference. It is a precise example of systems thinking applied under extreme uncertainty.

In Star Trek: Strange New Worlds, Season 3, Captain Batel is infected by a Gorn parasite. This is not a routine medical problem. Standard diagnostic models fail. Traditional treatment protocols are ineffective. Time is a hard constraint. The situation is complex, non‑linear, and existential.

At this point, the medical team — anchored by Nurse Chapel and Spock — does something fundamentally different. They turn to an advanced AI system that constructs a digital twin of Captain Batel. This twin is not a static replica. It is a living, adaptive simulation of her physiological state, capable of modeling millions of potential interventions across biological, chemical, and environmental dimensions.

The digital twin becomes the locus of decision‑making.

Spock and Chapel do not simply ask the system questions. They work with it. They iterate through scenarios. They test interventions that would be impossible — or unethical — to test directly on a human. The system evaluates outcomes, refines probabilities, and narrows the solution space. Most importantly, it does this at machine speed, far beyond human cognitive limits.

The digital twin is not just a model. It is:

  • A sensing mechanism, continuously incorporating new data
  • A reasoning engine, evaluating trade‑offs and constraints
  • An orchestration layer, coordinating potential actions
  • An ethical compass, helping determine what should be done, not just what could be done

This distinction is critical.

From Simulation to Action: The Role of Agentic AI

What makes this Strange New Worlds episode such a powerful metaphor for the future of GRC is that the digital twin does not exist in isolation. It is paired with intelligence that can act, not just analyze.

This is where agentic AI enters the picture.

Agentic AI is not simply predictive analytics or generative text. It is goal‑driven intelligence that can:

  • Monitor conditions continuously
  • Reason about objectives, constraints, and risk appetite
  • Propose and sequence actions
  • Execute within defined authorities
  • Learn from outcomes and adjust behavior

In the episode, the AI system does not merely present Spock with a report. It actively participates in the diagnostic and treatment process. It compresses decision cycles. It orchestrates complexity. It enables human experts to operate at a higher level of judgment rather than drowning in data.

This is exactly what GRC must become, and is becoming, by 2030.

Digital Twins as the Foundation of Homeostatic GRC

Homeostasis depends on three things: sensing, control, and effectors. In biological systems, these functions are deeply integrated. They do not operate in silos. They do not wait for quarterly reviews. They do not require constant executive oversight.

In GRC 7.0 – GRC Orchestrate, the digital twin of the enterprise becomes the core mechanism that enables this integration.

A true GRC digital twin models:

  • Strategic options and decisions
  • Enterprise objectives and performance
  • Business processes and assets
  • Risks, uncertainties, and dependencies
  • Controls, obligations, and tolerances
  • Third‑party ecosystems and external signals
  • Cultural and behavioral drivers

But modeling alone is insufficient.

Without agentic AI, a digital twin is a sophisticated dashboard. With agentic AI, it becomes a homeostatic system.

Agentic AI continuously senses deviations from tolerance, evaluates impact across interconnected domains, and initiates corrective action: automatically where appropriate, escalated where necessary. This is not about removing humans from the loop; it is about removing humans from tasks that do not require conscious oversight and action.

Just as the human body regulates temperature or glucose without executive intervention, a homeostatic GRC system regulates risk exposure, compliance posture, and resilience dynamically.

Why Legacy GRC Architectures Cannot Support This Vision

This is where the architectural fault lines become impossible to ignore.

Digital twins and agentic AI require a unified, semantically rich understanding of the enterprise. They require continuous data flows, consistent taxonomies, and explicit representation of objectives, constraints, and cause‑and‑effect relationships.

Most legacy GRC platforms simply do not have this.

What I see instead are fragmented representations of reality: risk in one model, compliance in another, third‑party data somewhere else entirely. Performance and objectives — if they exist at all in the system, and in most they do not — are loosely connected at best. These architectures were never designed to support living simulations or autonomous orchestration.

As a result, AI initiatives in these platforms are constrained to the edges. They summarize documents. They answer questions. They accelerate existing workflows. They do not run the system.

By 2030, this distinction will define survival or obsolescence of GRC platforms.

Here is the uncomfortable truth: most GRC platforms today cannot support true digital twins or agentic AI: not because vendors lack talent or intent, but because the core architecture was never designed for this purpose.

When I evaluate platforms, I consistently see fragmented representations of reality. Risk lives in one model. Compliance obligations live in another. Third parties live in yet another. Performance, objectives, and outcomes are often afterthoughts, if they exist at all.

Agentic AI requires context. Digital twins require coherence. You cannot simulate what you do not truly understand.

Bolting AI onto fragmented architectures results in narrow, brittle use cases. It produces insights that describe the past rather than shape the future. It creates the illusion of intelligence without delivering autonomy or orchestration.

By 2030, that gap will be fatal.

The Hidden Cost of Growth by Acquisition

Market consolidation has accelerated these problems. Acquisitions create breadth quickly, but they also import architectural debt, and even competing architectures within the same solution provider/vendor. Each acquired product brings its own code base, assumptions, data structures, and logic. Integration layers mask inconsistency, but they do not eliminate it.

Over time, innovation slows. Changes become risky. AI initiatives stall because data cannot be reliably correlated or reasoned over.

From the outside, these platforms look comprehensive. From the inside, they struggle to evolve.

This is not a criticism of individual vendors . . . it is a structural reality.

What Re-Architecting Really Means

Re-architecting is not modernization theater. It is not cloud migration. It is not refactoring.

It means rebuilding from first principles around:

  • A unified enterprise data model that connects objectives, performance, risk, compliance, assets, processes, and third parties
  • Event-driven architectures that support continuous sensing and response
  • Intelligence as a native service, not an add-on
  • Orchestration of humans, systems, and agents rather than rigid workflows

This is what makes homeostatic GRC with GRC 7.0 – GRC Orchestrate possible.

A Direct Message to Buyers

If you are writing an RFP today, understand this: you are not just selecting a tool. You are selecting an architectural future.

Many platforms will meet today’s requirements. Far fewer will meet tomorrow’s.

Ask questions that go beyond features. Ask about data models. Ask about architectural coherence. Ask how AI actually operates inside the system. Ask how digital twins are supported, not theoretically, but practically.

The vendors that feel safest today may be the most constrained tomorrow.

A Call to Action: From Workflow-Centric GRC to Data-, Knowledge-, and Reasoning-Centric GRC

After nearly three decades of living inside this market, what concerns me most is not that many GRC platforms are aging. It is that much of the market is still asking the wrong foundational question.

The question is not whether workflows are configurable enough, dashboards are modern enough, or AI features are impressive enough. The question is whether the data, knowledge, and reasoning architecture beneath the platform is capable of supporting homeostatic control at enterprise scale.

Most GRC platforms were built as systems of record and systems of workflow. They excel at documenting decisions after the fact, coordinating tasks, and producing reports that satisfy auditors and regulators. That architecture made sense when GRC was driven by compliance proof and oversight. Note that this is not how GRC has been defined for over 20 years, but it is how it has been implemented by many.

The world organizations now operate in is no longer episodic. Risk is continuous. Compliance is dynamic. Resilience is tested in real time. And leadership is increasingly accountable not for whether controls existed, but whether they worked when it mattered.

This is where database architecture — and the philosophy behind it — becomes decisive.

Relational databases, the foundation of the majority of GRC platforms on the market, are optimized for transactions, forms, and records, and they struggle to represent context, causality, and complex interdependencies. They are excellent at answering questions like “What was assessed?” and “Who approved this?” They are far less capable of answering “Why did this risk emerge?”, “How does it propagate across the enterprise?”, or “What action will most effectively change the outcome?”

By contrast, architectures built on ontologies, knowledge graphs, and reasoning layers are explicitly designed to model relationships, inheritance, dependency, and cause-and-effect. They allow controls to be represented not as documents or checklist items, but as measurable states that can be continuously validated. They allow obligations to propagate downward into controls and evidence, and assurance to propagate upward into confidence and trust.
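
A minimal sketch of that propagation, using invented triples and a simple weakest-link rollup:

```python
# Sketch: obligations propagate down to controls, and assurance rolls
# back up. The triples, states, and rollup rule are illustrative only.

TRIPLES = [
    ("obligation:ict_resilience", "fulfilled_by", "control:failover_testing"),
    ("obligation:ict_resilience", "fulfilled_by", "control:vendor_exit_plan"),
    ("control:failover_testing", "evidenced_by", "evidence:test_run_q4"),
]

CONTROL_STATE = {
    "control:failover_testing": "effective",
    "control:vendor_exit_plan": "failed",
}

def controls_for(obligation):
    return [o for s, p, o in TRIPLES if s == obligation and p == "fulfilled_by"]

def assurance(obligation):
    """Roll up: an obligation is only as assured as its weakest control."""
    states = [CONTROL_STATE.get(c, "unknown") for c in controls_for(obligation)]
    return "at_risk" if "failed" in states or "unknown" in states else "assured"

print(assurance("obligation:ict_resilience"))   # -> at_risk
```

The same graph that answers this rollup can be queried in the other direction: which obligations are exposed when a single control's measured state drifts.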

This distinction is not academic. It determines whether AI can simply summarize what has already happened — or whether it can reason about what should happen next and act accordingly.

Agentic AI fundamentally changes the role of the platform. When intelligence is embedded at the core — working directly against a coherent data and knowledge model — it can continuously sense deviations, reason about objectives and tolerances, simulate outcomes through digital twins, and execute corrective actions within defined authority. That is homeostasis in practice.

When AI is bolted on — operating outside the core data model, constrained by integrations, and limited to correlation — it can assist humans, but it cannot run the system. It cannot orchestrate complexity. And it cannot scale to the velocity of modern risk.

Looking out to 2030, this is where long-term viability will be decided.

Legacy providers will argue — correctly — that customers value stability, that re-architecting is disruptive, and that AI should remain advisory. Those arguments hold in the short term. But they collapse when risk signals become continuous, regulatory expectations shift toward anticipation, and boards demand decision-grade explanations rather than retrospective dashboards.

Stability that cannot adapt becomes fragility. Oversight that cannot operate at machine speed becomes theater. And GRC platforms that cannot internalize intelligence become dependent on external systems they do not control.

For buyers, this is the moment to rethink how RFPs are written. The most important questions are no longer about modules and workflows, but about:

  • How data is modeled and normalized
  • Whether the platform can explain causality, not just correlation
  • Whether controls are continuously measurable and auditable by design
  • Whether digital twins and agentic reasoning are native capabilities or future aspirations

For solution providers, this is a moment of strategic honesty. Incremental modernization will not close the gap. Rebranding AI will not change architectural reality. The platforms that endure will be those willing to rebuild around GRC data engineering, knowledge representation, and reasoning . . . placing orchestration, not workflow, at the center.

This is not about replacing existing GRC systems overnight. It is about recognizing that by 2030, GRC will be an intelligent, adaptive, and self-correcting system — a true command center for decisions, objectives, uncertainty, and integrity.

Those who embrace this shift now will shape the next generation of the GRC market. Those who delay will find themselves managing yesterday’s risks with yesterday’s tools.

A Philosophical Close: GRC, Entropy, and the Fight for Organizational Integrity

At its deepest level, this conversation is not really about technology. It is about entropy.

In physics and biology, entropy is the natural tendency of systems to drift toward disorder. Living systems survive not by resisting change, but by continuously counteracting entropy through structure, feedback, and adaptation. Left unattended, even the most sophisticated organism degrades.

Organizations are no different.

Risk accumulates quietly. Controls decay. Incentives drift. Complexity compounds. What once worked begins to fail; not catastrophically at first, but subtly. A missed signal here. A delayed response there. Over time, integrity erodes, resilience weakens, and leaders find themselves managing crises they no longer understand.

Traditional GRC approaches attempt to fight entropy through oversight, documentation, and periodic intervention. This is like asking the brain to consciously regulate every heartbeat. It does not scale, and it was never meant to.

Homeostatic GRC represents a different philosophy. It acknowledges uncertainty as a permanent condition. It assumes complexity as a given. And it designs systems that can sense deviation, evaluate impact, and correct course continuously, without exhausting human attention or organizational capacity.

This is why digital twins and agentic AI are not optional enhancements. They are the mechanisms by which modern organizations can maintain coherence in the face of relentless change. They allow enterprises to model reality as it is, not as last quarter’s report described it. They enable decisions to be tested before they are executed. And they ensure that action aligns with objectives, risk appetite, and ethical boundaries.

GRC 7.0 – GRC Orchestrate is ultimately about managing uncertainty and preserving integrity at scale. Not as a slogan, but as an operating condition: where governance guides purpose, risk management manages uncertainty, and compliance reinforces trust, all in dynamic balance.

The organizations that thrive in the coming decade will not be those with the most controls, the most reports, or the most dashboards. They will be those that have built living systems, capable of learning, adapting, and acting with precision under pressure.

That is the future of GRC.

And like all living systems, it must be designed from the inside out, or it will not survive at all.

GPRC for Operational Risk in Financial Services

Orchestrating Stability, Trust, and Execution Integrity on the Most Pressurized Deck of the Enterprise

There are few industries where the consequences of failure arrive as quickly — and as publicly — as they do in financial services.

A manufacturing firm can experience a production disruption and recover over days. A retailer can absorb a supply chain breakdown and shift to alternative routes. A technology company can endure downtime and lose revenue while rebuilding confidence. In each case, disruption is serious. But in financial services, disruption is different. It is immediate, highly visible, and often systemic. Because in banking, capital markets, and insurance, operations are not simply something that supports the business — operations are the business. The product is execution. The brand is reliability. The currency is trust. 

That is why operational risk has always been present in this industry, even before the industry formally named it, built governance around it, staffed it, quantified it, and surrounded it with frameworks. And yet, for all of the maturity in operational risk programs across the sector, many institutions are still navigating with instruments that were built for a world that no longer exists. They have risk registers, RCSAs, KRIs, operational loss databases, controls testing, issues management, audits, third-party assessments, and incident playbooks — and these are all important. But too often, they remain separate artifacts, living in parallel systems, managed by different teams, interpreted through different lenses, and reported upward in different formats. 

Which creates a dangerous illusion: that because the organization has operational risk components, it therefore has operational risk capability. But capability requires something more. Capability requires orchestration. 

Why Operational Risk in Financial Services Is Different 

The simplest way to frame . . .

[The rest of this blog can be read on the Corporater blog, where GRC 20/20’s Michael Rasmussen is a Guest Blogger]

The WEF Global Risks Report 2026: How We Make Decisions, Set Objectives, and Perform with Integrity When Instability Is the Baseline

Each year, when the World Economic Forum releases its Global Risks Report, I see leaders react in a familiar way. They circulate the visuals, discuss the rankings, highlight what feels immediate, and then quietly move on. It becomes a useful talking point — something we nod to as evidence that the world is “more complex” and “more uncertain.” But what strikes me — year after year — is how rarely the report actually changes the way most organizations govern themselves. It doesn’t meaningfully reshape how they make decisions, how they set objectives, how they measure performance, or how they prepare to operate with integrity when conditions become unstable. It is one thing to acknowledge uncertainty; it is another thing to build an organization that can perform inside it.

For news coverage of this report, check out the GRC Report article Global Risks Report 2026 Warns of a More Uncertain, Competitive, & Fragmented World by Samuel Rasmussen, editor-in-chief of GRC Report.

And this year, the WEF Global Risks Report 2026 is far more than a broad scan of the horizon. It is a diagnosis of the decade we are already living through. It opens with a stark conclusion that frames everything else: “Uncertainty is the defining theme of the global risks outlook in 2026.” That line is not a rhetorical flourish — it is the premise for the entire report, and I believe it should be the premise for how organizations think about governance, performance, risk management, resilience, and compliance going forward. Not because uncertainty is new, but because uncertainty has shifted from something that occasionally disrupts plans to something that increasingly defines the environment in which plans must succeed.

The report is grounded in the Global Risks Perception Survey (GRPS) 2025–2026, with responses collected between 12 August and 22 September 2025, and it draws insights from over 1,300 experts worldwide. It then looks across three horizons — immediate risks in 2026, the short-to-medium term to 2028, and the longer-term horizon to 2036 — explicitly to help decision-makers balance what is urgent with what is enduring. That time-horizon framing is important because it reinforces a critical point that I often see missed inside organizations: risk is not simply “what could happen.” Risk is what changes the probability of success over time — and different risks distort the future in different ways.

What struck me most is the report’s explicit acknowledgment that negative perceptions of the future are mounting, with 50% of leaders and experts anticipating a turbulent or stormy outlook over the next two years, increasing to 57% over the next ten years, while only 1% anticipate a calm outlook. This is not just pessimism. This is a recognition that buffers are thinner, systems are more tightly coupled, and shocks are faster, more interconnected, and harder to contain.

So when I look at the WEF report, I don’t see it as a list of “risks to worry about.” I see it as a warning about something deeper: many organizations are still operating as if uncertainty is a temporary condition, when it is becoming the baseline operating reality.


The WEF Report Doesn’t Tell You What to Do — and That’s Exactly Why It Matters

One of the best perspectives I’ve seen on this year’s report came from my friend and risk collaborator Alex Sidorenko (who has been on my Risk Is Our Business Podcast), who said something that I think every Chief Risk Officer and risk leader needs to take to heart: the WEF report tells you what’s popular to worry about — not what to do. I agree. Completely. And I would go even further: the WEF report was never meant to tell you what to do. It is not an enterprise playbook. It is a global “risk weather report.”

But that’s the point: it describes the weather. It does not build your house.

Too many organizations read the WEF report the same way they read the news: they absorb the risk narrative, feel a sense of urgency, and then return to business as usual. Yet the organizations that are truly mature in governance and risk management do something different. They take what the report is saying, and they translate it into decision design. They ask: “If these are the operating conditions, how do we need to change our assumptions? How do we need to change our objectives? How do we need to change how we execute and govern performance?”

And this is where Alex’s RM2 translation lands exactly where I believe risk management, and with it GRC, must go (what I refer to as Strategic Risk & Resilience Management: decisions that set objectives). Alex said that before making any significant cross-border decision in 2026, organizations need to explicitly model how trade restrictions, sanctions, capital controls, or supply chain weaponization could alter expected outcomes. That is not merely a clever interpretation of the report — it is precisely aligned with what the WEF report is actually emphasizing. In the 2026 outlook, geopolitical and geoeconomic risks dominate, and the report notes that close to one-third of respondents selected either Geoeconomic confrontation (18%) or State-based armed conflict (14%) as the single top risk for 2026.

Even more importantly, the report explicitly states that concern about geoeconomic confrontation has deepened and broadened beyond “trade policy uncertainty” into a recognition of escalating use of instruments such as sanctions, regulations, capital restrictions, and weaponization of supply chains as tools of strategy. That is an exceptionally important line. It confirms the world many organizations are now operating in: the environment is not simply competitive, it is becoming deliberately constrained, deliberately adversarial, and increasingly shaped by strategic economic tools that directly impact operating models and performance outcomes.

This is where most risk management programs still fall short. They are very good at identifying “geopolitical risk” or “supply chain risk,” but they are not yet structured to translate those forces into quantified ranges of decision outcomes. They still talk about risk at the level of a category. But the WEF report is describing risk at the level of systemic interference with objectives.

That’s the difference.


My Core Lens: The WEF Report Is a Decision Context Document, Not a Risk Catalog

I have long believed that risk management fails when it becomes a separate universe of artifacts done for compliance and auditors: the risk register, the heatmap, the quarterly risk committee pack, the policy library, the annual assessment cycle. Those things can be helpful, and they can also be very harmful, but they are not the point. The point is whether risk management improves governance and performance — which is measured by whether the organization can reliably achieve its objectives amid uncertainty.

This is why I always return to the OCEG framing: GRC is the capability to reliably achieve objectives, address uncertainty, and act with integrity. The WEF report is essentially describing a world where achieving objectives will be harder, uncertainty will be greater, and integrity will be under more pressure. In that context, the value of the WEF report is not its ranking of risks — it is the context it provides for why the decisions organizations make today will meet more friction, more disruption, and more volatility than most business planning models assume.

The WEF report is structured across three time horizons for a reason. It is telling us that risk management is not simply “here is a list of what might happen.” Risk is the evolving set of conditions that distort reality across time. In the immediate horizon, geoeconomic confrontation and state-based conflict dominate the crisis outlook. In the longer-term horizon, environmental risks remain dominant, with extreme weather events identified as the top risk and half of the top ten risks being environmental in nature. And in parallel, the report warns that technology is simultaneously transformative and destabilizing, with misinformation and disinformation ranking as a top short-term concern, while adverse outcomes of AI climb sharply over the longer horizon.

What does this mean for organizations? It means you cannot build objectives as if the next three years will be a stable runway with predictable inputs and predictable outputs. It means you cannot build performance management as if your supply chain, infrastructure, and markets will behave in historically “normal” ways. It means you cannot treat compliance and integrity as separate from risk, because in sustained uncertainty, integrity gets tested most severely under performance pressure.


The WEF “Current Global Risk Landscape” and What It Really Signals to Organizations

If there is one part of the report that executives will most quickly gravitate toward, it is the “Current Global Risk Landscape” for 2026. It is the graphic that people share. It is the headline list. But what I want to emphasize is not simply which risks are in the top ten; it is what their relationships imply.

For 2026, the top risk selections include:

  • Geoeconomic confrontation (18%)
  • State-based armed conflict (14%)
  • Extreme weather events (8%)
  • Societal polarization (7%)
  • Misinformation and disinformation (7%)
  • Economic downturn (5%)
  • Erosion of human rights and/or of civic freedoms (4%)
  • Adverse outcomes of AI technologies (4%)
  • Cyber insecurity (3%)
  • Inequality (3%)

The surface-level interpretation is: “We have geopolitical, economic, environmental, societal, and technological risks.” But the deeper interpretation is far more consequential: these risks create an environment where decision-making will be distorted simultaneously across supply chains, capital flows, technology systems, public trust, regulation, and even basic infrastructure reliability.

It is not simply that risk is rising. It is that risk is becoming more interconnected and faster-moving. The report describes a future where relative resilience breaks down under unprecedented turbulence defined by accelerating scale, interconnectedness, and speed. That is exactly what modern organizations struggle with: not the existence of risk, but the speed and interaction of risk across domains.


Translating Alex Sidorenko’s RM2 Thinking into Enterprise Decision-Making

When Alex talks about translating the WEF report into RM2 actions, what I hear is the evolution risk leaders must embrace: risk must become decision-oriented, quantified, and trigger-based.

Stress-test major procurement decisions by quantifying outcome ranges

One of the most practical and urgent places to apply the WEF report is procurement. In 2026, procurement is not just a commercial function. Procurement is a strategic exposure function. It is a geoeconomic function. It is a resilience function. And the WEF report explicitly warns that trade, finance, and technology are being wielded as weapons of influence.

Alex’s approach to stress-testing procurement decisions should become the default for any organization with international exposure. Before signing a contract, the question is not simply “Can they deliver?” but “What happens to our objectives if the environment changes mid-contract?” That is where scenario modeling turns vague geopolitical risk into concrete financial and operational planning.

For example, when I apply Alex’s logic, I want leaders to see procurement scenarios as quantified outcome ranges:

  • Base case: current terms, pricing, delivery timelines hold
  • Scenario A: tariff or restriction increases total cost midstream
  • Scenario B: export restrictions force supplier change and delay delivery
  • Scenario C: payment restrictions freeze money in transit or constrain financial rails

This approach aligns directly with the WEF’s emphasis that geoeconomic confrontation is no longer just tariffs, but also sanctions, investment screening, capital restrictions, and supply chain weaponization.
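
To make this tangible, here is a minimal sketch in Python of what treating a procurement decision as a quantified outcome range might look like. The scenario names, probabilities, cost figures, and delay estimates are my illustrative assumptions, not numbers from the WEF report or from Alex’s models.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    probability: float   # subjective likelihood; should sum to ~1.0 across scenarios
    total_cost: float    # landed cost of the contract under this scenario
    delay_weeks: int     # delivery slippage under this scenario

# Illustrative numbers only. In practice these come from contract terms,
# tariff schedules, sanctions analysis, and supplier substitution studies.
scenarios = [
    Scenario("Base case: terms, pricing, timelines hold",        0.55, 10_000_000, 0),
    Scenario("A: tariff/restriction raises cost midstream",      0.25, 11_800_000, 2),
    Scenario("B: export controls force supplier change",         0.15, 13_500_000, 10),
    Scenario("C: payment restrictions freeze funds in transit",  0.05, 12_400_000, 16),
]

expected_cost = sum(s.probability * s.total_cost for s in scenarios)
worst = max(scenarios, key=lambda s: s.total_cost)

print(f"Expected cost: {expected_cost:,.0f}")
print(f"Cost range: {min(s.total_cost for s in scenarios):,.0f} to "
      f"{worst.total_cost:,.0f} (driven by: {worst.name})")
print(f"Delay range: 0 to {max(s.delay_weeks for s in scenarios)} weeks")
```

Even a toy model like this changes the decision conversation: leadership debates a range and the conditions that produce it, rather than a single number that silently assumes the environment holds still mid-contract.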

Integrate scenario ranges into the budget, not just the risk report

I’ve said for years that risk management fails when it produces “risk insights” that are divorced from planning and decisions (Alex and I differ here: I see some value in what he calls RM1, traditional risk management in operations, and I see a middle layer between RM1 and RM2 that focuses on the uncertainty around objectives that are set by decisions). If a risk does not alter how we allocate capital, shape operating buffers, design contingency plans, or define strategic sequencing, then it is not meaningfully governing performance.

The WEF report reinforces why budget planning must evolve. It explicitly notes that economic risks are intensifying, with economic downturn and inflation rising sharply in ranking over the next two years. It also emphasizes that geoeconomic confrontation threatens the core of the interconnected global economy.

The practical implication is that single-point forecasting becomes increasingly fragile. Scenario-based budgeting is no longer a “nice maturity feature.” It is a governance necessity. Leadership needs to see planning not as one number, but as a range of plausible outcomes tied to defined contingencies.
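
As a minimal sketch of what that could look like, assuming a hypothetical base budget and a handful of named contingencies (all figures illustrative, none drawn from the report), range-based budgeting can start as simply as this:

```python
# Range-based budgeting sketch: instead of one number, leadership sees a
# plausible envelope tied to named contingencies. Figures are illustrative.
base_budget = 50_000_000
contingencies = {                       # incremental cost if the scenario lands
    "tariff escalation": 2_500_000,
    "supplier substitution": 4_000_000,
    "FX/capital restrictions": 1_500_000,
}

optimistic = base_budget
# One simple stress view: assume the two costliest contingencies co-occur.
stressed = base_budget + sum(sorted(contingencies.values())[-2:])

print(f"Plan as a range: {optimistic:,} to {stressed:,}")
for name, delta in contingencies.items():
    print(f"  contingency '{name}': +{delta:,} if triggered")
```

The sophistication can grow from there (probability weighting, simulation), but the governance point is the shift itself: budgets presented as envelopes with named contingencies, not single points.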

Create decision triggers instead of “monitoring the situation”

I cannot count how many times I have heard: “We are monitoring geopolitical developments.” Monitoring is not strategy. Monitoring without triggers is simply waiting.

This is where Alex’s third point is so critical: replace generic monitoring with clear decision triggers. This isn’t just good risk management. This is resilience engineering, because it creates operational muscle memory in advance of disruption.

A mature organization should have pre-defined triggers such as:

  • If sanctions restrict access to a sector, market, or payment pathway, then activate supplier/market plan B within a defined time window
  • If currency controls or capital restrictions exceed payment delay tolerances, then shift to alternate partners or structures
  • If tariffs increase COGS beyond a threshold, then trigger contractual renegotiation and customer pricing response

That is how risk management becomes decision capability rather than risk commentary.
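
A hedged sketch of what trigger-based response might look like as a simple rules structure follows; the indicator names, thresholds, and actions are hypothetical placeholders for what would, in practice, be board-approved tolerances and pre-authorized playbooks.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    name: str
    condition: Callable[[dict], bool]   # evaluated against monitored indicators
    action: str                         # pre-approved playbook, not ad hoc debate

# Illustrative thresholds only.
triggers = [
    Trigger("sanctions close a payment pathway",
            lambda ind: ind["payment_pathway_blocked"],
            "Activate supplier/market plan B within the defined time window"),
    Trigger("capital controls exceed delay tolerance",
            lambda ind: ind["payment_delay_days"] > 30,
            "Shift to alternate partners or payment structures"),
    Trigger("tariffs push COGS past threshold",
            lambda ind: ind["cogs_increase_pct"] > 7.5,
            "Open contractual renegotiation and customer pricing response"),
]

# Indicators would be fed by sanctions feeds, treasury, and procurement data.
indicators = {"payment_pathway_blocked": False,
              "payment_delay_days": 42,
              "cogs_increase_pct": 3.1}

for t in triggers:
    if t.condition(indicators):
        print(f"TRIGGERED: {t.name} -> {t.action}")
```

The value is not the code; it is that the if/then pairs are debated, quantified, and approved before the disruption, so response becomes execution rather than deliberation.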


Emma Price’s Focus on “Infrastructure Endangered” and Why It Matters More Than Many Realize

One of the other perspectives on the 2026 WEF report I appreciated came from Emma Price (who also has been on the Risk Is Our Business Podcast), who highlighted the risk of disruptions to critical infrastructure, and that focus aligns powerfully with what the WEF report actually explores in depth through Section 2.5, “Infrastructure endangered.”

This is where I think many organizations fall into a trap. They scan the horizon for global shocks — geopolitical conflicts, macroeconomic instability, regulatory shifts — but they forget that some of the most consequential risks are closer to home. They are embedded in the systems we rely on every day to perform. Critical infrastructure is not an abstract public-sector issue. It is the backbone of enterprise performance. It includes the provision of power, water, transport, communications, and the digital services and networks that underpin modern commerce.

The WEF report describes how mass digitization and electrification are reshaping economies and increasing pressures on infrastructure, with demand rising not only from growth but from new sources of load, including AI data centers. The report also highlights the concern over interdependencies among ageing infrastructure, which can turn localized disruption into systemic impact. And it explicitly notes that geoeconomic confrontation is likely to amplify infrastructure challenges and create new ones in physical, cyber, and cyber-physical realms.

This matters to risk and resilience leaders because infrastructure failure is one of the clearest examples of a risk that is both operational and systemic. It is operational because it affects daily performance. It is systemic because it can cascade quickly across business services, technology dependencies, customer commitments, regulatory obligations, and reputation. This is not merely a continuity issue. It is an objective assurance issue. It is governance.


The Integrity Dimension: Uncertainty Doesn’t Only Break Systems — It Breaks Judgment

One of the most under-discussed implications of the WEF report is not simply that uncertainty is growing, but that uncertainty creates pressure, and pressure changes behavior. Organizations under pressure do not only face operational failure. They face ethical strain. They face governance erosion. They face incentives that invite corners to be cut. They face decision environments where shortcuts become tempting and rationalizations become easy.

The WEF report highlights misinformation and disinformation as a top short-term concern. That is not merely a technology or media issue. It is an integrity issue, because misinformation distorts decision-making, undermines trust, and can accelerate reputational crises even when the underlying operational event is manageable. In that world, integrity is not a static compliance posture. Integrity becomes an operational requirement for maintaining trust when the environment becomes volatile.

I often say this plainly: integrity is not truly tested when things are calm. Integrity is tested when objectives are threatened. That is when organizations face the temptation to misrepresent performance, delay disclosure, bypass controls, or soften accountability. In 2026, with instability as the baseline, those moments will occur more often.


What This Means for Chief Risk Officers: From Risk Stewardship to Risk Orchestration

The Chief Risk Officer role is changing — not in theory, but in practice. It is no longer enough for CROs to produce risk frameworks and risk reporting. Those remain important, but the environment described by the WEF report requires CROs to become architects of decision confidence.

In 2026, I believe the CRO’s true mandate is this: ensure the organization can make sound decisions under uncertainty without sacrificing integrity. That requires a shift from static assessments to dynamic orchestration. The WEF report explicitly frames a world where trade, finance, and technology become weapons, institutions are increasingly deadlocked, and turbulence accelerates through interconnected risks. In that world, slow governance becomes fragile governance. Fragmented governance becomes blind governance.

CROs must therefore drive:

  • scenario-based decision-making that quantifies ranges of outcomes rather than identifying vague threats
  • trigger-based response capability that turns monitoring into action
  • objective-centric risk alignment that ties uncertainty directly to performance commitments
  • resilience programs that map dependencies, define tolerances, and test the organization’s ability to sustain critical services
  • integrity-by-design governance that holds under pressure

That is what will differentiate “risk programs that report” from “risk programs that protect objectives.”


The Conclusion: Why GRC 7.0 — GRC Orchestrate Is Built for the World the WEF Report Describes

The WEF Global Risks Report 2026 does not merely tell us the world is risky. It tells us something more profound: uncertainty is now structural, fragmentation is increasing, and the pace and interconnectedness of risk are accelerating. In this environment, the organizations that succeed will not be those with the best risk register or the most polished risk heatmap (do not get me started on heatmaps . . .). They will be the ones that can make decisions with quantified uncertainty, set objectives realistically, perform reliably amid disruption, and maintain integrity when pressure makes compromise feel convenient.

That is exactly why I’ve framed the future as GRC 7.0 — GRC Orchestrate.

GRC Orchestrate is not simply the next iteration of GRC technology. It is the next iteration of enterprise capability. It is the shift from GRC as record-keeping to GRC as a command center for governing performance under uncertainty. It brings together decisions, objectives, risks, controls, obligations, third-party dependencies, infrastructure dependencies, and resilience requirements into one coherent architecture that leaders can actually use to steer the organization.

In the world the WEF report is describing, GRC (or whatever you desire to call it) must stop being an after-the-fact layer that checks what happened and become a forward-looking capability that shapes what happens next. GRC Orchestrate is what enables scenario intelligence to be embedded inside procurement and expansion decisions, not trapped in workshops. It is what enables budget planning to reflect outcome ranges and contingencies, not single-point forecasts. It is what enables decision triggers to activate coordinated action across the enterprise and its third parties, rather than waiting for disruption to escalate into crisis. And it is what operationalizes integrity — not as compliance theatre, but as traceable accountability and transparent governance that holds under stress.

Ultimately, the WEF report is leaving organizations with one unavoidable question: are we prepared to operate through a decade where instability is the baseline? If we are not, then the answer is not to worry more. The answer is to build capability — the capability to reliably achieve objectives, address uncertainty, and act with integrity.

That is the purpose of modern GRC.

And that is what GRC 7.0 — GRC Orchestrate delivers, for those that truly focus on it.

From Readiness to Reality: What Operational Resilience Demands as We Enter 2026

As we move toward 2026, I find myself increasingly uneasy with how many organizations talk about operational resilience. Not because they are ignoring it; quite the opposite. Most financial institutions, and a growing number of organizations beyond financial services, have invested heavily in resilience over the past several years. Frameworks are in place. Programs exist. Governance structures have been approved. The language of resilience has entered boardrooms and regulatory conversations.

And yet, when I look beneath the surface, I see a growing gap between readiness on paper and resilience in practice.

That gap is not caused by a lack of effort or intent. It is caused by a misunderstanding of what resilience actually is.

Resilience is not the absence of disruption

One of the most persistent and dangerous myths I encounter is the belief that resilience is something you earn once you have done enough “good work.” Enough controls. Enough documentation. Enough maturity.

What I have learned — both professionally and personally — is that resilience is not a reward for doing things right. It is the capacity to continue when things go wrong anyway.

Healthy organizations experience failure. Well-governed institutions suffer outages. Strong control environments still face cascading disruptions when dependencies behave in unexpected ways. Resilience does not mean disruption will not happen. It means the organization can absorb shock, adapt under pressure, and continue delivering what matters most without causing unacceptable harm.

This distinction is fundamental, yet many operational resilience programs are still designed as if the goal were to prevent failure rather than operate through it.

Why resilience has become a strategic capability

What I am seeing across industries is a fundamental shift in how disruption manifests. Disruption today is rarely localized. It is rarely singular. And it is rarely contained within organizational boundaries.

Modern organizations operate through complex ecosystems, particularly digital ecosystems. Critical services depend on cloud platforms, SaaS providers, managed service providers, data aggregators, identity services, and increasingly specialized third parties. Each of these dependencies enables scale and innovation. Collectively, they create shared vulnerability.

When something breaks in that ecosystem, the impact propagates. Failures cascade across services, firms, and jurisdictions. Recovery becomes nonlinear. Decisions must be made with incomplete information and under real-time pressure.

This is why operational resilience has moved out of the operational basement and into strategic discussions. It now shapes how services are designed, how technology architectures are approved, how outsourcing decisions are made, and how executives think about acceptable trade-offs under stress.

Resilience is no longer something you “activate.” It is something you design for.

The global regulatory signal, and what matters more than the rules

I spend a great deal of time with regulations, guidance, and supervisory expectations. But what stands out to me most right now is not the differences between regulatory regimes; it is their convergence.

Across Europe, North America, and Asia-Pacific, regulators are independently arriving at the same conclusions. Whether through digital resilience regimes, broader critical-entity frameworks, prudential standards, or supervisory guidance, the message is consistent:

Organizations must understand what matters most, how it is delivered, how it fails, and how it recovers; and they must be able to demonstrate that understanding continuously.

The specific regulation matters less than the principle behind it. Organizations that treat each new requirement as a standalone compliance exercise miss the point entirely. Those that internalize the underlying resilience principles find themselves better prepared: not just for one regulator, but for an increasingly volatile operating environment.

What worries me most: fragmentation disguised as maturity

The biggest weakness I see in operational resilience programs today is not a lack of controls. It is fragmentation masquerading as maturity.

Risk management teams have their models. ICT teams have theirs. Third-party risk teams focus on contracts and due diligence. Business continuity teams focus on plans and exercises. Each discipline may be competent — even sophisticated — in isolation.

But resilience does not fail within silos. It fails between them.

When business services are defined differently across functions, when dependencies are modeled inconsistently, when testing results are not connected to tolerances, and when incidents do not inform investment decisions, the organization appears resilient on paper while remaining brittle in reality.

Fragmentation allows organizations to produce documentation without producing capability. That is increasingly visible as expectations mature.

The shift I believe defines the next phase: from readiness to demonstration

Over the past few years, most organizations focused on readiness. That made sense. You cannot demonstrate what you have not built.

But readiness is now table stakes.

What I believe defines the next phase of operational resilience is demonstration. Demonstration means showing — credibly and repeatedly — that the organization can deliver critical services within tolerable limits under adverse conditions.

This is where many programs struggle, because demonstration requires things that documentation alone cannot provide:

  • Impact tolerances grounded in operational reality rather than aspiration,
  • Recovery strategies tested under compound stress,
  • Third-party exit plans that work when markets are constrained, and
  • Evidence that is consistent over time, not manually assembled when asked.

Demonstration exposes assumptions. And assumptions are often where resilience quietly breaks.

Testing as a discipline of truth, not reassurance

Few aspects of resilience are more misunderstood than testing.

In many organizations, testing is still treated as a validation exercise, a way to confirm that plans work and reassure stakeholders. Scenarios are narrow. Outcomes are framed positively. Lessons learned are modest.

But meaningful resilience testing does something very different. It reveals uncomfortable truths.

It shows where recovery objectives are unrealistic, where dependencies were overlooked, where decision-making breaks down under pressure, and where third-party assurances collapse when stressed.

I believe organizations must fundamentally change how they think about testing. Testing is not about passing. It is about learning faster than the next disruption. Organizations that avoid uncomfortable results delay maturity. Those that embrace them build resilience that actually holds.

Third parties: where theory meets reality

If there is one area where I see the largest disconnect between theory and reality, it is third-party risk.

Most organizations now acknowledge that they are deeply dependent on external providers. Yet many still manage those dependencies using tools and assumptions designed for a different era.

When dozens — or hundreds, or thousands — of institutions depend on the same providers, resilience is no longer an individual firm problem. It is systemic. Exit strategies that look plausible on paper may collapse when multiple firms attempt to act simultaneously. Substitution plans may fail when alternatives are scarce.

What I recommend is a fundamental reframing of third-party risk management: from a procurement and compliance function into an enterprise resilience capability. That means understanding shared dependencies, testing failure scenarios honestly, and governing third-party relationships as part of critical service delivery, not as vendor checklists.

Evidence, data, and the illusion of manual control

Another area where I see growing strain is evidence. Many organizations believe they understand their resilience posture because experienced individuals can explain it. But supervisory confidence does not rest on explanation. It rests on traceable, consistent evidence.

When resilience data is fragmented across systems, manual aggregation becomes the norm. That approach might survive an audit. It does not survive ongoing supervision or real disruption.

My call to action here is simple but not easy: organizations must treat resilience evidence as a byproduct of operations, not a reporting exercise. That requires integrated data models that connect services, dependencies, risks, tests, incidents, and outcomes.
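
As a minimal sketch, and only a sketch, the fragment below shows what such an integrated data model might look like in Python; the entity names, fields, and example service are my illustrative assumptions, not a reference schema or any vendor’s model.

```python
from dataclasses import dataclass, field

@dataclass
class Dependency:
    name: str            # e.g., a cloud region or payments provider
    kind: str            # "third_party", "ict", "facility", ...

@dataclass
class TestResult:
    scenario: str
    within_tolerance: bool

@dataclass
class BusinessService:
    name: str
    impact_tolerance_hours: float
    dependencies: list[Dependency] = field(default_factory=list)
    tests: list[TestResult] = field(default_factory=list)

payments = BusinessService(
    "Retail payments", impact_tolerance_hours=4.0,
    dependencies=[Dependency("CloudCo region EU-1", "third_party")],
    tests=[TestResult("regional cloud outage", within_tolerance=False)],
)

# Because tests and dependencies hang off the service they protect,
# supervisory evidence becomes a query, not a manual assembly exercise.
for svc in [payments]:
    failing = [t.scenario for t in svc.tests if not t.within_tolerance]
    print(f"{svc.name}: tolerance {svc.impact_tolerance_hours}h; "
          f"dependencies {[d.name for d in svc.dependencies]}; "
          f"failing scenarios: {failing or 'none'}")
```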

This is not a technology problem. It is an operating model problem.

Resilience as an operating model, not a program

The organizations I see making real progress share a common trait: they stop treating resilience as a program and start treating it as an operating model.

Resilience informs how services are designed, how architectures are approved, how outsourcing decisions are made, and how trade-offs are evaluated. It becomes embedded in decision-making rather than activated during crisis.

This is the point where resilience stops being a regulatory obligation and starts becoming a source of confidence.

My call to action as we approach 2026

As we move into 2026, my call to action is clear.

Stop asking whether your organization is compliant.
Start asking whether it can perform under stress.

Stop treating resilience as documentation.
Start treating it as design.

Stop optimizing within silos.
Start integrating across services, dependencies, and decisions.

Operational resilience is not about avoiding disruption. It is about reliably achieving objectives amid uncertainty and continuing with integrity when disruption arrives anyway. Organizations that internalize this principle will be better prepared not just for regulatory scrutiny, but for the world as it is now.

Closing 2025 and Reframing Resilience for 2026


Resilience, When You Don’t Get to Choose the Disruption

As 2025 draws to a close, many people around the world are entering a season of reflection. Whether marked by Christmas, Hanukkah, the New Year, or other traditions, this is a time when we pause, look back on what the year demanded of us, and consider what lies ahead.

For my wife (Mandi) and me, this moment carries particular gravity.

She has completed chemotherapy. She has completed immunotherapy. What remains are reconstruction surgeries in 2026. We are, without question, on the other side of a journey that reshaped our lives. What stays with me most is not simply what she endured, but what the experience revealed about resilience itself.

She entered cancer treatment healthy. She had been attentive to her health for years. There was no family history of cancer. Genetic testing showed no inherited predisposition to cancer. Every comforting narrative we tell ourselves about risk, probability, and control said this should not have happened . . . And yet, it did.

That reality dismantles a deeply human assumption: that if we prepare well enough, follow the rules closely enough, and do the right things consistently enough, disruption will pass us by . . . It does not.

What followed was not dramatic defiance or inspirational slogans. It was something far more instructive: resilience practiced quietly and relentlessly. Adjusting expectations. Preserving energy. Accepting uncertainty without surrendering purpose. Continuing forward without guarantees.

That experience has profoundly shaped how I now think about strategic and operational resilience in organizations — particularly as regulatory conversations intensify heading into 2026.

Operational resilience is not earned through compliance

In many organizations, resilience is still treated as a destination: something you arrive at once controls are mature, audits are clean, and frameworks are documented.

But resilience does not work that way . . .

  • Resilience is not a certification.
  • It is not a policy.
  • And it is certainly not a reward for good behavior.

Resilience is the capacity to continue delivering what matters most when disruption occurs anyway, often in ways that defy models, probabilities, and prior experience.

This is where many operational resilience programs quietly fail. They are built on the assumption that disruption is an exception, rather than a recurring condition. They emphasize prevention and preparedness, but struggle when systems, suppliers, people, and decisions collide under stress.

Just as health does not guarantee immunity from illness, strong controls do not guarantee immunity from disruption. What matters is how the organization responds, adapts, and sustains its purpose when assumptions break down.

Why operational resilience has become a universal concern

Operational resilience is no longer confined to financial services or “critical infrastructure” labels. The modern enterprise — regardless of industry — delivers value through dense networks of dependency:

  • Digital platforms and data flows
  • Cloud services and managed providers
  • Third-party and fourth-party ecosystems
  • Physical facilities and human decision-making
  • External infrastructure such as energy, telecoms, and transport

When disruption occurs, it rarely respects organizational boundaries. Failures cascade. Delays compound. Recovery becomes nonlinear. This is why operational resilience has shifted from a technical discipline to a strategic capability. Organizations are increasingly judged not by whether incidents occur, but by whether they can:

  • Continue delivering critical services
  • Stay within tolerable levels of exposure
  • Recover with speed and coordination
  • Learn and adapt without repeating failure

This service-centric view of resilience — focused on outcomes rather than components — is the common thread running through today’s regulatory landscape.

DORA: a clear expression of resilience as performance

Among global regulatory initiatives, DORA stands out not because it introduces entirely new ideas, but because it forces organizations to operationalize them.

DORA reframes digital operational resilience as an enterprise obligation tied directly to business services, third-party dependencies, testing discipline, and evidence-based assurance. Its real impact will not be measured by how many policies were written, but by how organizations perform under sustained supervisory scrutiny.

As we move into 2026, DORA enters its most consequential phase—not implementation, but practice (see end of this article for upcoming webinar). This is the moment when organizations discover whether resilience is embedded in how they operate, or merely described in how they document.

The shift is subtle but profound. Under mature DORA supervision, questions change:

  • Not “Do you have a framework?” but “Does it work under stress?”
  • Not “Is this risk assessed?” but “Can you stay within tolerance?”
  • Not “Is this vendor critical?” but “What happens when they fail—and how do you know?”

Testing becomes more than validation; it becomes revelation. Third-party oversight becomes systemic, not contractual. Evidence becomes continuous, not episodic. And resilience begins to look less like compliance—and more like organizational fitness.

Supporting signals from the wider regulatory landscape

DORA does not exist in isolation. It is part of a broader convergence of resilience expectations globally.

In Europe, the UK’s Operational Resilience regime has been in place for several years. NIS2 reinforces that cyber incidents are operational disruptions with governance, reporting, and accountability consequences. CER broadens the lens further, emphasizing resilience across physical, environmental, and systemic threats to essential services.

Outside Europe, Australia’s CPS 230 offers one of the clearest tolerance-based expressions of operational resilience, explicitly linking critical operations, disruption limits, continuity capability, and service provider governance.

Beyond these, there is a growing body of resilience regulation and guidance emerging from jurisdictions and bodies such as Hong Kong, Singapore, the BIS, and U.S. regulators including the OCC. Each reinforces, in different language, the same fundamental principle: organizations must understand what matters most, design for disruption, and prove their ability to endure. The important point is not the differences between these regimes. It is the shared direction of travel.

The danger of compliance-first resilience

One of the greatest risks organizations face heading into 2026 is mistaking regulatory alignment for resilience, a mindset that erroneously assumes:

  • Compliance can be achieved without capability.
  • Documentation can exist without coordination.
  • Testing can occur without learning.

A truly resilient organization does not ask, “Which regulation do we need to satisfy?” It asks, “Can we continue to serve our purpose when our environment becomes hostile?”

That question naturally drives a holistic resilience program—one that integrates:

  • Business services and value delivery
  • ICT and digital dependencies
  • Third-party and concentration risk
  • Scenario-based and adversarial testing
  • Incident response and recovery governance
  • Continuous assurance and learning loops

Regulations like DORA are not the destination. They are the forcing function that exposes whether this integration actually exists.

Closing 2025, looking toward 2026

As this year comes to an end, resilience feels less like an abstract concept and more like a lived reality — for me and Mandi, personally.

Resilience is not about eliminating uncertainty. It is about continuing with intention when certainty disappears. It is about designing systems that bend without breaking. The organizations that embrace resilience as a principle, not a checklist, will find themselves better prepared not just for supervisors, but for the world as it is.

I’ll be expressing these thoughts in more detail — particularly what DORA in practice means as we move into 2026, and how organizations can evolve from regulatory readiness to sustained operational resilience — in my upcoming session:

DORA in Practice: What’s Next for Operational Resilience in 2026
January 6, 2026 | 12:00–1:00 pm | London, UK

Risk Management Is Not a SOX Coloring Book: A Call for Risk Management as a Strategic Discipline

For more than twenty years, risk management has been shaped by the gravitational pull of Sarbanes-Oxley. SOX arose from a genuine crisis of trust, and its intentions were honorable: to reinstate accountability, protect investors, and restore faith in financial reporting. But its unintended legacy has been far larger and far more limiting. Instead of elevating the role of risk management across the enterprise, SOX compressed it, concentrating the discipline into a narrow, compliance-oriented model rooted in documentation and evidence trails. Entire organizations came to believe that if they could prove controls were executed, they were “managing risk.” In reality, they were managing paperwork.

This is the heart of what I call the SOX Coloring Book: a worldview in which risk is represented not by thoughtful exploration of uncertainty but by grids shaded in red, yellow, and green. It is a worldview in which risk becomes a performative exercise for auditors rather than a strategic dialogue for executives. It is a worldview that keeps the discipline of risk firmly planted in the past, while the organization demands a capability oriented toward the future, toward decisions and the achievement of objectives.

SOX is not what risk management was meant to be. And it is not what OCEG envisioned when it articulated the modern definition of GRC as the capability to reliably achieve objectives [GOVERNANCE], address uncertainty [RISK MANAGEMENT], and act with integrity [COMPLIANCE]. That definition is perhaps the most concise articulation of what organizations require today. It places decisions, performance, and objectives at the center, framed by governance, infused with integrity, and sharpened by a clear-eyed understanding of uncertainty.

The tragedy is that many U.S. organizations (and worldwide), under the long cultural influence of SOX, have inverted this model. They begin with compliance, shape risk around compliance, and never arrive at the horizon where objectives and strategy reside. Risk becomes mechanical rather than meaningful, captured rather than understood, constrained rather than harnessed.


How SOX Diminished the Discipline of Risk

The unintended damage of SOX is not the regulation itself but the mindset it produced. The act conditioned organizations to believe they were practicing risk management when they were merely validating control execution. SOX created a generation of leadership and practitioners who thought the essence of risk was:

  • proving a control existed,
  • documenting that it operated as designed,
  • remediating deficiencies, and
  • producing evidence to satisfy an auditor.

But this is not risk management; it is compliance maintenance. Compliance is necessary, even vital, but it is fundamentally backward-looking. Its responsibility is to confirm whether guardrails were followed, whether required steps were taken, whether behaviors aligned with expectations.

Risk management, by contrast, is about the terrain ahead, not the path behind. It is about anticipating what might influence objectives, understanding the potential pathways of uncertainty, preparing for resilience, and enabling the organization to move with confidence toward its goals.

When SOX became the archetype for risk, it pulled the entire function backward. Risk became episodic instead of continuous; administrative instead of analytical; procedural instead of strategic. It became a discipline obsessed with demonstration rather than insight. And because SOX rewarded proof over perspective, organizations increasingly staffed and structured risk functions to serve compliance needs rather than strategic ones.


The Problem With Heatmaps: When Color Replaced Analysis

As this mindset took hold, the visual language of risk followed suit. Heatmaps and RAG (red, amber, green) stoplight diagrams became ubiquitous—not because they provided meaningful insight, but because they were easy to generate, easy to present, and easy to misunderstand as “analysis.” They offered leaders the illusion that risk had been neatly captured on a single slide, when in truth, the most important dimensions of uncertainty were nowhere represented. Consider that . . .

  • A heatmap cannot reveal the velocity of a risk, the speed at which it can materialize.
  • It cannot reveal the interconnectedness of risks, the way supply chain fragility influences cyber exposure, or how geopolitical volatility alters operational resilience.
  • It cannot reveal the depth of uncertainty surrounding a risk estimate.
  • And it certainly cannot reveal how a risk influences—or is influenced by—the organization’s objectives.

Heatmaps turned risk into something decorative. They reduced uncertainty to a handful of colors. They made risk appear deceptively simple, concealing the fact that modern uncertainty is anything but. Heatmaps became the coloring book pages of risk management: tidy, symmetric, and ultimately disconnected from the strategic and operational realities leaders must navigate.

The greatest danger of heatmaps is that they create false confidence. Executives begin to believe that risks fit neatly into categories, that qualitative estimates are truth, that the world conforms to a grid of nine or sixteen cells. But the world is not a grid, and risk certainly is not. Risk is fluid, relational, systemic. It is shaped by context. It evolves by the hour. To reduce it to red, amber/yellow, and green is to flatten a three-dimensional world into a two-dimensional cartoon.
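
To illustrate the contrast, here is a minimal sketch of what a single risk record can carry once it is released from a heatmap cell; the field names and values are my illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    objective: str                      # the objective this uncertainty affects
    impact_range: tuple                 # (low, high) loss estimate, not one cell
    velocity_days: int                  # how fast the risk can materialize
    connected_to: list = field(default_factory=list)  # interacting risks

supply = Risk(
    name="Supply chain weaponization",
    objective="Launch product line in APAC by Q3",
    impact_range=(2_000_000, 18_000_000),
    velocity_days=14,
    connected_to=["Cyber exposure", "Geopolitical volatility"],
)

low, high = supply.impact_range
print(f"{supply.name} -> '{supply.objective}': impact {low:,.0f} to {high:,.0f}, "
      f"can land in {supply.velocity_days} days, "
      f"interacts with {', '.join(supply.connected_to)}")
```

None of this fits in a red, amber, or green square, which is exactly the point: velocity, uncertainty range, interconnection, and objective linkage are the dimensions a grid discards.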


Reclaiming Risk as the Discipline That Guides Performance

To move beyond the SOX Coloring Book, organizations must return to the true purpose of risk management: supporting making decisions and the reliable achievement of objectives. This is where the OCEG model offers such clarity. Governance is the structure by which objectives are set and aligned. Risk is the engagement with uncertainty in pursuit of those objectives. Compliance is the assurance that integrity underpins the journey. Together, they form a coherent, integrated capability: GRC as an enterprise system of purpose, insight, and resilience.

When organizations treat risk as a compliance artifact, they prevent it from fulfilling this purpose. But when risk is understood as a proactive, decision-oriented discipline, it becomes the lens through which leaders evaluate choices, interpret signals, and understand the conditions under which performance will succeed or fail. Risk becomes an enabler of agility, not a barrier to it. It becomes the partner to innovation, not its adversary. It becomes the instrument through which strategy is informed, shaped, refined, and strengthened.

The shift required is not from bad tools to better ones; it is from a backward-looking posture to a forward-looking one. It is a shift from asking, “How do we demonstrate compliance?” to asking, “How do we navigate uncertainty to achieve our objectives?” It is a shift from cataloging risks to understanding relationships among risks. It is a shift from controlling yesterday’s failures to preparing for tomorrow’s realities.

This is why introspection is needed at all levels of risk practice.


The Three Levels of Risk & Resilience: A Narrative Architecture

If risk is to serve the enterprise, it must operate across three levels—each distinct, each essential, each reinforcing the others. These levels are not technical constructs; they are narrative layers of how organizations understand themselves, their environments, and their ambitions.

Strategic Risk & Resilience: Risk as the Author of Decisions

At the highest level, risk becomes a strategic companion to leadership. It is not there to prevent bold choices but to inform them. Strategic risk management is not a defensive shield but an interpretive intelligence. It allows leaders to envision multiple possible futures, evaluate emerging signals, and understand how forces — geopolitical, technological, economic, environmental — shape their pathways.

At this level, risk does not guard strategy; it guides strategy. It enables leaders to ask not only “What could go wrong?” but also “What must go right?” and “How do we steer through uncertainty to arrive where we intend?” This is risk as a cognitive asset, embedded in the executive conversation. It is the antithesis of the SOX view.

Objective-Centric ERM: Risk as the Interpreter of Performance

The second level — objective-centric ERM — is where risk meets the engine of the organization’s performance. Here, uncertainty is evaluated not in abstraction but in the context of what the enterprise is trying to accomplish. Traditional ERM often loses itself in risk registers and taxonomies. Objective-centric ERM resists this gravitational pull and instead keeps its focus on outcomes.

This level is where risk becomes integrated, proactive, and relevant. It fosters the conversations that matter:

  • What uncertainties matter most to this objective?
  • How does our understanding of risk shape our operational and strategic planning?
  • What leading indicators must we monitor to anticipate shifts that affect performance?

By tying risk to objectives, the organization for the first time gains a risk management discipline that is not just descriptive, but decisive.

Operational Risk & Resilience: Risk as the Guardian of Today and Tomorrow

The third level — operational risk and resilience — provides the stability upon which all strategy depends. It encompasses the everyday realities of process reliability, system performance, third-party dependencies, and the organization’s ability to adapt to disruptions. Operational resilience is not merely the avoidance of failure; it is the active cultivation of durability and adaptability. It is the ability to perform today without compromising the capacity to perform tomorrow.

This is where risk takes physical form: where uncertainty meets operations, where resilience meets real-world conditions. Yet in many SOX-shaped organizations, operational risk was overshadowed by a far narrower focus on financial control testing. The cost has been a generation of firms less prepared for the complexity of modern disruption.


A Call to Action: Release Risk From the Coloring Book

To practice risk across these three levels, organizations require a modern architecture — one that moves far beyond the tools of the SOX era. This is what I refer to as GRC 7.0 – GRC Orchestrate: digital twins simulate the movement of risk. Agentic AI expands the capacity for monitoring, detection, and interpretation of signals. Shared ontologies create coherence across silos. Risk management is architected in an orchestrated framework. These capabilities do not eliminate the human element; they enhance it. They allow risk professionals to spend less time compiling evidence and more time shaping insight. They allow leaders to make decisions with clarity rather than conjecture. They align the discipline of risk with the speed and complexity of the modern world.

The time has come for organizations to acknowledge the limitations of their inherited SOX risk paradigm. The SOX Coloring Book was never risk management. To navigate the world ahead, organizations must release risk from the confines of compliance and restore it to its rightful place: as a strategic, objective-driven, forward-looking discipline intimately connected to the heart of performance.

Risk is how organizations make decisions.
Risk is how organizations pursue opportunity.
Risk is how organizations reliably achieve objectives.
Risk is our business.

And it is time to practice it as such.

GPRC for Enterprise Risk Management

Orchestrating Strategic, Objective-Centric, and Operational Risk & Resilience through GRC 7.0

Risk! Risk is our business. That’s what this starship is all about. That’s why we’re aboard her — Captain James T. Kirk, Star Trek: The Original Series, Season 2, Episode 20

The Enterprise was not built to sit safely in space dock. Its mission — “to boldly go where no one has gone before” — embodies both ambition and uncertainty. It is a vessel of purpose, guided by command decisions made under imperfect information, relying on systems, crew, and foresight to navigate the unknown.

In the same way, the modern enterprise is a starship of risk. It exists not to avoid uncertainty but to chart opportunity through it. The organization’s ability to govern, perform, and act with integrity depends on how well it understands and orchestrates risk across all levels of its mission . . .

[The rest of this blog can be read on the Corporater blog, where GRC 20/20’s Michael Rasmussen is a Guest Blogger]