When A.I. (Artificial Intelligence) Fails . . .

This blog is an excerpt from GRC 20/20’s latest research paper, READ MORE in: A.I. GRC: The Governance, Risk Management & Compliance of A.I.

A.I. technology and models are used across industries to analyze, predict, generate, and represent information, decisions, and outcomes that impact operations and business strategy. A range of departments, functions, and roles are beginning to rely on A.I. as a critical foundation of business processes that support operations, long-term strategic planning, and day-to-day tactical decisions.

A.I. technology spans predictive analytics, machine learning, deep learning, natural language processing, and robotic process automation through to the new era of generative A.I. Within these various approaches, there are three core components:

  • Input Component. Delivers assumptions and data to a model.
  • Processing Component. Transforms inputs into predictions, decisions, and content. Within A.I. systems, there is often a continuous learning component/engine, which sets them apart from conventional data processing systems.
  • Reporting/Output Component. Translates the processing into useful business information.

While the common understanding of models is that they have three components – input, processing, and reporting/output – the reality is that each of these component areas has multiple parts. Multiple components within input, processing, and reporting connect to each other and carry an array of assumptions, data, and analytics. Adding to this complexity are the human and process elements intertwined throughout the business use of A.I., which weave together the manual processing and technology integration needed to use and interpret A.I. As the environment changes, A.I. models themselves have to change to accurately represent the world in which they exist.
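
To make this component structure concrete, here is a minimal, hypothetical sketch of the three components as functions in a pipeline. The names, feature math, and weights are illustrative assumptions only, not any specific A.I. system:

```python
# Minimal sketch of the three model components; all names and logic are
# illustrative assumptions, not a specific A.I. product or method.

def input_component(raw_data: dict, assumptions: dict) -> dict:
    """Deliver data and assumptions to the model as a simplified abstraction."""
    return {"features": raw_data.get("features", []), "assumptions": assumptions}


def processing_component(inputs: dict) -> dict:
    """Analyze inputs into a prediction; real A.I. often adds a learning loop here."""
    score = sum(inputs["features"]) * inputs["assumptions"].get("weight", 1.0)
    return {"prediction": score}


def reporting_component(result: dict) -> str:
    """Translate the processing output into useful business information."""
    return f"Predicted outcome: {result['prediction']:.2f}"


report = reporting_component(
    processing_component(input_component({"features": [0.2, 0.5]}, {"weight": 1.5}))
)
print(report)  # Predicted outcome: 1.05
```

Even in this toy pipeline, an error in any one component – a missing feature, a wrong assumed weight, a mislabeled report – silently distorts the final business information, which is why governance has to cover all three components, not just the processing.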

Models are used to represent scenarios and produce outcomes through inputs of values, relationships, events, situations, expressions, and characteristics. This is defined as the ‘input component’ of a model. The real world is a complex web of interrelationships and variables of significant intricacy that models cannot fully represent. Inputs are a simplified abstraction of the real world used to process and report on quantitative estimates of outcomes. The challenge is that wrong assumptions, bias, and bad (or incomplete) information are compounded by the complexity of variables in the real world, and models can fail in their validity and reliability and in their inability to process variables that sit outside their input scope. Validity speaks to accuracy, whereas reliability speaks to repeatability; something can be very reliable but not at all accurate. There is a risk that complex models lose both validity and reliability as the focus shifts from analyzing the impact of key critical variables to the fragile interaction and relationship of a variety of variables. They will reliably provide an outcome, but it will increasingly not be accurate or valid.
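
The validity/reliability distinction can be shown with a deliberately trivial, hypothetical example: a model that always returns the same answer is perfectly reliable (zero spread across runs) yet completely invalid (far from reality). The “true value” here is an assumption for illustration only:

```python
# Hypothetical illustration: reliability (repeatability) without validity (accuracy).
true_value = 100.0  # assumed ground truth, for illustration only

def biased_model(_inputs):
    # Always returns the same answer: fully repeatable, hence "reliable."
    return 42.0

outputs = [biased_model(run) for run in range(5)]
spread = max(outputs) - min(outputs)   # 0.0  -> perfectly reliable
error = abs(outputs[0] - true_value)   # 58.0 -> badly invalid

print(f"Spread across runs (reliability): {spread}")
print(f"Error vs. reality (validity): {error}")
```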

When A.I. Fails

Organizations are in the early stages of becoming highly dependent upon A.I. to support critical business processes and decisions; A.I. is already critical to many businesses. The expanding use of A.I. in the organization reflects how A.I. can improve business decisions. Still, A.I. comes with risks when internal errors or misuse result in bad decisions.

Unfortunately, as much value as A.I. provides, it also exposes the organization to significant risk. Ironically, the A.I. tools often used to model and predict risk can themselves be a significant risk exposure if not governed appropriately. A.I. model risk is the potential for adverse consequences from decisions based on incorrect or misused A.I. It can lead to financial loss, poor business and strategic decision-making, legal and regulatory issues, and damage to an organization’s brand. Disclosing restricted information to “public A.I.” is also a risk when employees register for and use tools like ChatGPT for business purposes. The most dangerous position (a moral hazard) for an organization is to have developed complete trust in whatever A.I. produces and delivers.

A.I. should be informing decisions and raising points for consideration rather than being solely relied on to make decisions – especially those that are business critical. A.I. that is inappropriately used and controlled brings many risks to organizations. These include:

  • Dynamic & Changing Environment. A.I. models are not static. In reality, new A.I. models and use are being added, old ones are retired, and current A.I. technology and models are constantly changing. Compounding this is the constant change in risk, regulations, and business that puts the environment that A.I. is supposed to represent in a constant state of flux. Organizations face significant risk when the environment changes, yet A.I. and its associated data inputs fail to evolve to represent the current environment. A.I. models that were accurate last year may not be accurate this year.
  • Lack of Governance & Control. The pervasive use of A.I. has also introduced what could now be Shadow A.I., a component of Shadow IT where the line of business bypasses IT and uses technology that has not been approved. This increases risk through inappropriate and unauthorized use that exposes the organization.
  • More Than the A.I. Model-Processing Component. The use of A.I. is more than the A.I. model-processing component. It is an aggregation of components that span a variety of input, processing, and reporting functions that integrate and work together. This includes the overall A.I. modeling and use project. Organizations risk fixating on the A.I. model-processing component alone while the many supplementary components undergo rapid changes that are not governed, and bad input data means bad decisions from A.I. The quality of A.I. depends upon the quality of the input data and the assumptions: errors in inputs and assumptions lead to inaccurate processing and outputs (see the input-validation sketch after this list).
  • Errors in Input, Processing & Reporting. Without proper development and validation, A.I. may contain errors that produce inaccurate outputs. Errors can occur throughout the A.I. lifecycle from design through use and can be found in any or all of the input, processing, and reporting components. If the data a model depends on is not annotated correctly, the outcome will consistently be wrong. These errors may come from the development of the A.I. model in its inputs, processing, and reporting, or may be introduced through changes and modifications to the model components over time. Errors may also arise from the failure of A.I. to adapt to shifting business use and a changing business environment.
  • Undiscovered Model Flaws. It’s possible that an A.I. model will initially appear to generate highly predictive output, despite having serious flaws in its design/training. In this case, the A.I. solution may gain increased credibility and acceptance throughout the organization until eventually, some combination of data exposes the flaw. False positives are part of any predictive system but can be extremely convincing with A.I., leading to greater long-term reliance on a flawed model.
  • Misuse of Models. A significant risk comes from A.I. that is used incorrectly. An accurate A.I. model will produce accurate results but can lead the business into error if used for purposes the technology/model was never designed for. Organizations need to ensure that models are both accurate and appropriately used. Organizations face risk when applying existing A.I. to new areas without validating it in that context.
  • Misrepresentation of Reality. By their very nature, A.I. models are representations of reality and not reality itself. They are simplifications of that reality and, in the process of simplification, may introduce assumptions and errors due to bias, misunderstanding, ignorance, or lack of perception. This risk is a particularly hot topic in generative A.I., which may leverage inaccurate data, but it is a risk across all A.I.
  • Limitations in the Model. A.I. models approximate the real world with a finite set of inputs and variables (in contrast to the infinite set of circumstances and variables in the real world). Risk is introduced when the assumptions an A.I. model is built upon are inaccurate, misunderstood, missing, or misapplied.
  • Pervasiveness of Models. Organizations bear significant risk as A.I. can be used at any level without accountability and oversight. Anyone can acquire and/or access A.I. that may or may not serve the organization properly. Organizations struggle to identify A.I. being used not only within the traditional business but also across third-party relationships. The problem grows as existing A.I. models are modified and adapted to new purposes. The original A.I. model developer in the organization often does not know how others are adapting and using the A.I.
  • Big Data and Interconnectedness. The explosion of inputs and variables from massive amounts of data within organizations has made A.I. use complex across input, processing, and reporting components. The interconnectedness of disparate information sets makes A.I. models more complex and challenging to control. This leads to a lack of standardization, inconsistent use of data, and data integrity issues both across the systems that feed models and within the A.I. itself.
  • Inconsistent Development and Validation. A.I. models are being acquired/developed, revised, and modified without any defined development and validation process. The role of audit in providing independent assurance on A.I. integrity, use, and fitness for purpose is inconsistent and needs to be addressed in organizations.
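
As a hedged illustration of the input-side risks above, a simple data-quality gate in front of the processing component can reject bad or incomplete inputs before they become bad decisions. The field names and plausibility ranges here are hypothetical assumptions, not a standard:

```python
# Illustrative input-validation gate; fields and ranges are hypothetical.

def validate_inputs(record: dict) -> list:
    """Return a list of data-quality issues; empty means the record may proceed."""
    issues = []
    for required in ("customer_id", "exposure", "as_of_date"):
        if required not in record:
            issues.append(f"missing required field: {required}")
    exposure = record.get("exposure")
    if exposure is not None and not (0 <= exposure <= 1e9):
        issues.append(f"exposure outside plausible range: {exposure}")
    return issues

problems = validate_inputs({"customer_id": "C-123", "exposure": -50})
if problems:
    print("Reject before processing:", problems)
```

A gate like this does not make a model valid, but it stops one class of input errors from propagating into the processing and reporting components.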

The Bottom Line: A.I. is rapidly growing in variety, complexity, and use within organizations. It is quickly moving from a tactical focus to a strategic pillar that provides the infrastructure and backbone for strategy and decisions at all levels of the organization. Left ungoverned, A.I. over time brings loss and potential disaster. Unfortunately, many organizations lack governance and architecture for A.I. risk management. Organizations need a structured approach to A.I. governance, risk management, and compliance that addresses the A.I. governance, lifecycle, and architecture needed to manage A.I. and mitigate the risks it introduces while capitalizing on the significant value of A.I. when properly used.

The Challenges & Risk in Artificial Intelligence

This blog is an excerpt from GRC 20/20’s latest research paper, READ MORE in: A.I. GRC: The Governance, Risk Management & Compliance of A.I.

Artificial Intelligence (A.I.) has emerged as a disruptive force, propelling organizations into the future. Its transformative capabilities promise efficiency, accuracy, and scalability, providing a significant competitive edge. However, alongside the immense potential, A.I. usage poses unique risks and challenges that organizations must acknowledge and address.

While A.I. offers numerous opportunities for organizations, it is not without risks and challenges. Recognizing and addressing these challenges is necessary for the successful integration and responsible use of A.I. Some of the challenges of A.I. include:

  • Powerful. A.I. can effect significant change with minimal effort. While this is a major strength, it also means a little A.I. use by an unskilled worker could produce a profoundly negative outcome. Companies may be wise to adopt a “first do no harm” approach.
  • Complexity. One of the primary challenges in using A.I. is the complexity of its implementation. Effective A.I. integration requires substantial investment in technology, skilled labor, and time. Organizations need experts capable of governing, developing, maintaining, and managing A.I. systems, which often leads to costly upskilling or recruitment of new resources. Additionally, compatibility with existing systems and workflows must be considered, often requiring a complete overhaul of current practices.
  • Simplicity. Complexity is certainly a challenge, but the recent A.I. gold rush also has the opposite concern: simplicity. The need for data scientists, advanced technology infrastructure, and sizable ongoing support costs all served as a check that kept many companies from running amok with the technology. With generative A.I., however, that bar is removed. Anyone can leverage these technologies with limited resources and no training or consideration of the consequences. Essentially, the brakes have been removed while traveling at high speed.
  • Productivity. Many A.I. use cases are productivity enhancements, such as GitHub Copilot, which suggests to the developer what their next code block should be. The developer either accepts, modifies, or declines the suggestion. This is the same type of technology used by iPhones, GSuite, and others. How A.I. is being used is a major determining factor between IT tool governance and A.I. model governance. As organizations use productivity tools laced with A.I. (versus full process automation), we will see ITGCs, SOC 2 reports, and data agreements cover many of these issues.
  • Data Privacy. Data privacy is another critical concern with A.I. usage. A.I. systems are data-hungry, needing vast quantities of data for training and operation. This dependence raises significant issues regarding data security, user privacy, and compliance with regulations such as GDPR. Breaches can lead to severe reputational and financial damage.
  • Bias. Bias is an inherent problem in A.I. systems that poses significant ethical and practical concerns. If the data used for training is biased, the A.I. system can amplify and reproduce these biases, leading to unfair outcomes. Examples include discrimination in hiring through A.I.-powered recruitment tools or unequal treatment in A.I.-driven healthcare solutions.
  • Opaqueness. Another risk with A.I. is the “black box” problem, referring to the opaqueness of A.I. decision-making. Advanced A.I. models, particularly in deep learning, often make decisions in ways that humans cannot easily understand. This lack of transparency can be problematic, especially in sensitive areas like healthcare or finance where understanding the rationale behind decisions is crucial.
  • Legal Liability. A.I. systems also present a potential liability issue. Determining culpability when an A.I. system causes harm is not straightforward. Is the developer, user, or even the A.I. system itself at fault? Legal systems worldwide are currently grappling with these novel issues. This can be further broken down into: 
    • Supply chain challenges, because A.I. systems and models are developed using open-source or other code whose country of origin or reliability is not known.
    • Inappropriate or unintended use, which could lead to legal exposures.

Addressing these challenges requires a multifaceted approach. Organizations must develop a comprehensive A.I. strategy (see A.I. GRC below), addressing the technical aspects of A.I. implementation and ethical, legal, and social considerations. They must also invest in upskilling their workforce and adopt a culture of continuous learning to keep up with this rapidly evolving technology. This also requires ongoing collaboration between organizations, regulators, and policymakers. These collaborations can help create a conducive environment for A.I. usage, with adequate regulations to manage risks without stifling innovation. As A.I. technology evolves, organizations must remain vigilant, adaptive, and ethical in their A.I. journey.

If the enterprise has existing model risk management (MRM) processes, A.I. models used for process automation should be handled within that MRM framework, an approach this paper promotes. It is critical not to reinvent the wheel and instead adapt existing MRM best practices.

Increasing Regulatory & Legal Pressure on A.I.

A cavalier approach to A.I. has led to a monumental lack of structured A.I. governance within organizations at a time when there is a growing need for enterprise visibility into A.I. and its use. Organizations should keep an accurate inventory of A.I. technology and models, documentation, and defined roles and responsibilities in A.I. governance, risk management, and compliance throughout the A.I. use lifecycle. A.I. is evolving rapidly, fundamentally reshaping various industries, including healthcare, finance, and defense. However, its escalating integration into everyday life has raised many legal and regulatory challenges. 
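
A minimal sketch of what a single entry in such an A.I. inventory might capture follows; the fields are illustrative assumptions, not a prescribed schema:

```python
# Illustrative A.I. inventory entry; fields are assumptions, not a standard schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AIModelRecord:
    name: str
    business_use: str
    owner: str                            # accountable role, not just the developer
    validator: str                        # independent validation responsibility
    data_sources: List[str] = field(default_factory=list)
    approved: bool = False
    last_validated: Optional[str] = None  # ISO date of last validation

inventory = [
    AIModelRecord(
        name="credit-scoring-v3",
        business_use="consumer lending decisions",
        owner="Head of Retail Credit",
        validator="Model Risk Management",
        data_sources=["core banking", "bureau feed"],
        approved=True,
        last_validated="2023-04-30",
    )
]
print(inventory[0].name, "approved:", inventory[0].approved)
```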

There are several legal issues tied to A.I. usage. These include:

  • Privacy and Data Protection and Leakage. A.I. systems often require large amounts of potentially sensitive data, posing risks to privacy rights and data protection. For instance, A.I. applications like facial recognition technology have raised significant privacy concerns. Therefore, jurisdictions worldwide are deliberating on stricter data protection laws to control A.I. misuse. This also includes data leakage, where responses may inadvertently include sensitive information from training data sets. And given the often global nature of organizations, issues around cross-border transfer of data also arise.
  • Bias and Discrimination. A.I. systems can potentially reflect and amplify societal biases, which could lead to discriminatory practices. There have been cases where A.I. algorithms used in hiring or criminal justice systems have produced biased outcomes. Lack of visibility into methodology also makes bias and discrimination difficult to diagnose and resolve.
  • Liability & Accountability. There is a legal ambiguity surrounding who should be held accountable when A.I. systems cause harm, or in the event of errors or failures that lead to compliance violations.
  • Intellectual Property Rights. Questions regarding A.I.’s creative outputs – whether these should be considered intellectual property and, if so, who should hold the rights – remain largely unresolved. There are also questions about intellectual property rights associated with inputs, and whether these comply, for example, with licensing and copyright terms and conditions.
  • Security. A.I. could be exploited for malicious purposes, like deepfakes or autonomous weapons, necessitating legal provisions for managing these risks.

Regulators are focusing on these legal issues with increased regulatory requirements and scrutiny in the governance and use of A.I. It is understood that A.I. is becoming a necessary part of business, but regulators want to ensure that A.I. is governed, developed, validated, used, and maintained properly to ensure the stability of organizations and industries. In that context, there is greater emphasis on A.I. governance and risk management. Financial services regulators have been leading in this area: MRM is an established function in financial services, and recent statements highlight A.I. as “just another” model type to be included (e.g., the OCC Model Risk handbook and PRA SS1/23).

The regulatory landscape for A.I. varies globally, reflecting different cultural, societal, and political contexts. However, many jurisdictions are moving towards comprehensive legal frameworks to manage A.I. risks.

  • European Union. The EU has been at the forefront of A.I. regulation, with its proposed A.I. Act aiming to establish robust and flexible rules for A.I. use across its member states.
  • United States. The U.S. currently relies on sector-specific regulations, like the Fair Credit Reporting Act for A.I. in finance and OCC 2011-12/SR 11-7 for model risk management. However, there is ongoing discussion of comprehensive federal A.I. laws, and the White House has published an AI Bill of Rights intended to apply across the board.
  • United Kingdom. In the U.K., PRA CP6/22 proposed that firms adopt five principles the regulator considers key to establishing an effective MRM framework. The principles were intended to complement existing requirements and supervisory expectations in force on MRM.
  • China. China’s A.I. regulation encourages innovation while setting certain limits to protect national security and public interests.
  • Other International Efforts. Organizations such as the OECD and UNESCO are working towards global A.I. standards and principles to ensure responsible A.I. use.

A.I. presents both enormous potential and significant challenges. As A.I. use grows, corresponding legal and regulatory frameworks must evolve in tandem to mitigate risks while promoting innovation. Harmonizing regulations across jurisdictions and developing comprehensive, flexible laws are key steps to responsibly integrating A.I. into society. The complexity and rapid evolution of A.I. necessitate ongoing research and dialogue among all stakeholders – regulators, A.I. developers, users, and society at large.

The Bottom Line: A.I. is rapidly growing in variety, complexity, and use within organizations. It is quickly moving from a tactical focus to a strategic pillar that provides the infrastructure and backbone for strategy and decisions at all levels of the organization. Left ungoverned, A.I. over time brings loss and potential disaster. Unfortunately, many organizations lack governance and architecture for A.I. risk management. Organizations need a structured approach to A.I. governance, risk management, and compliance that addresses the A.I. governance, lifecycle, and architecture needed to manage A.I. and mitigate the risks it introduces while capitalizing on the significant value of A.I. when properly used.

2023 Buyers Guide: Third-Party Risk Management & Intelligence Solutions

Traditional brick-and-mortar business is outdated: physical buildings and conventional employees no longer define the organization. The modern organization is an interconnected web of relationships, interactions, and transactions that span traditional business boundaries. Layers of relationships go beyond traditional employees, including suppliers, vendors, outsourcers, service providers, contractors, subcontractors, consultants, temporary workers, agents, brokers, dealers, intermediaries, partners, and more. The modern business depends on and is defined by the governance, risk management, and compliance of third-party relationships to ensure the organization can reliably achieve objectives, manage uncertainty, and act with integrity in each of its third-party relationships. 

The range of regulations and resiliency risks is prompting many organizations to reevaluate and define their third-party risk management programs. These range from ESG and ESG-related regulations, such as Germany’s LkSG and the EU CSDDD, to more focused legal requirements – an acronym soup of regulation. A haphazard, department- and document-centric approach to third-party risk management compounds the problem rather than solving it.

I am interacting with many organizations as they evaluate third-party risk management solutions. Organizations must carefully choose the right third-party risk solution and related intelligence/content integrations. There is a lot of marketing hype, and claims need to be carefully ‘weeded’ to find the reality of the best fit for an organization. Too often, organizations fail in their own ‘due diligence’ of which third-party risk solution best fits their needs.

Sadly, I have seen people lose their jobs over selecting the wrong third-party risk software—more than once. 

Because of this, and being involved in many third-party risk RFPs worldwide, I am producing the 2023 Buyers Guide: Third-Party Risk Management Platforms & Intelligence Solutions.

Organizations need to address third-party risk with an integrated platform, along with the right third-party risk intelligence/content feeds, to keep current on developments throughout the world and the extended enterprise – managing the ecosystem of third-party relationships with real-time information about third-party performance, risk, and compliance and how they impact the organization.

It is time for organizations to step back and implement third-party risk solutions, integrated with third-party risk intelligence/content, that deliver value to the business. This value can be measured in the efficiency, effectiveness, resilience, and agility they bring to the business.

Organizations need to be intelligent about what third-party risk technologies and intelligence services they deploy. Join GRC 20/20 for this in-depth analysis of how to evaluate and purchase third-party risk management and intelligence solutions . . .

  • Discover drivers and trends in third-party risk management 
  • Identify what is needed to go into a business case and ROI for purchasing third-party risk management solutions
  • Understand the breadth of capabilities and approaches software solutions deliver in third-party risk management
  • Determine what RFP requirements best fit your organization for third-party risk management

The 2023 Buyers Guide: Third-Party Risk Management & Intelligence Solutions provides GRC 20/20’s market research and understanding of the segments of the third-party risk management market to help organizations build their business case, understand what capabilities they need, and determine the RFP requirements they should consider in evaluating solutions in the market. This Research Briefing provides a framework to understand capabilities and build requirements for the RFP and selection process.

ESG: Doing the right thing is never the wrong thing

I am back in London for a week of exciting engagements. The top of the list is ESG. In my ESG presentation tomorrow at the GRC: The Resilient & Responsible Enterprise event, I am leading with a quote from the sage of wisdom, Ted Lasso, who stated, “Doing the right thing is never the wrong thing” . . .

Managing ESG in a Dynamic World

ESG – Environmental, Social & Governance – is seeing increasing pressure from investors, regulators, lawmakers, employees, business partners, and citizen activists. Pressure is mounting from multiple fronts for organizations to implement ESG reporting in their organizations. In one respect, this is an evolution of the past’s sustainability and corporate social responsibility (CSR) efforts. However, ESG is broader with more momentum. Where CSR and sustainability were too often (but not always) pushed from a marketing perspective, ESG has the momentum and force to become a significant measurement of the organization’s integrity. Integrity is what the organization commits to in its values and is a reality throughout the organization and the extended enterprise. 

ESG is more than the E (environmental). Too often, organizations see that lead E, and they perceive that ESG is just about environmental values and climate change. It is so much more than this. The S (social) and the G (governance) are just as important as the E in ESG. There are many standards and various definitions for ESG; here is the high-level scope of it . . .

  • E = Environmental. Measures and reports on the organization’s values and commitment to stewardship of the natural world and environment. It includes reporting and monitoring the organization’s environmental initiatives for climate change, waste management, pollution, resource use and depletion, greenhouse gases, etc.
  • S = Social. Measures and reports on the values and commitments and how the company treats people. This includes employee and customer/partner relations, human rights (e.g., anti-slavery), diversity and inclusion, anti-harassment and discrimination, the privacy of individuals (both employees and others), working conditions and labor standards (e.g., child labor, forced labor, health and safety), and how the company participates in and gives back to society and the communities it operates within.
  • G = Governance. Measures and reports on the culture and behaviors of the organization in context and alignment with its values and commitment. This includes finance and tax strategies, whistleblower and reporting of issues, resiliency, anti-bribery and corruption, security, board/executive diversity and structure, and overall transparency and accountability.

ESG crosses business boundaries. Brick-and-mortar walls and traditional employees do not define the modern organization. The modern organization is a web of third-party relationships: vendors, suppliers, outsourcers, service providers, contractors, consultants, temporary workers, intermediaries, agents, partners, and more. Truly delivering on ESG requires monitoring and managing shared values and integrity throughout the organization’s extended enterprise. Legislation and regulation are focused on this, such as the European Union’s Directive on Corporate Due Diligence and Accountability and Germany’s corresponding Due Diligence Act (to name just two of many).

THE CHALLENGE: Delivering 360° Situational Awareness of ESG

Business is complex – gone are the years of simplicity in organizational operations and processes. Managing ESG amid complexity and change is a significant challenge for boards, executives, and all levels of management as they seek to execute their ESG directives and deliver results to stakeholders.

The modern organization is:

  • Distributed. Organizations have operations complicated by a web of ESG-related data scattered across many different systems. The organization is an interconnected mesh of ESG objectives, transactions, and interactions that span business units and even extend beyond the organization to third-party suppliers and vendors. Complexity grows as these interconnected relationships, processes, and systems nest themselves in ESG intricacy.
  • Dynamic. Organizations are in a constant state of flux. Leaders constantly adapt strategies and solutions to remain competitive, sustainable, and compliant. This results in ESG processes and information that are continually growing and changing. Exacerbating all this chaos is the growing abundance of ESG regulatory structures policing it. This complicates the ESG environment of any organization, as any new change must be carefully considered in ESG impact and reporting, placing tremendous stress on leaders attempting to keep pace with evolving business. 
  • Disrupted. Organizations are constantly managing high volumes of structured and unstructured ESG-related information across multiple systems, processes, and relationships to see the big picture of ESG. The velocity, variety, and volume of ESG scope and data can be overwhelming – disrupting the organization and slowing it down at a time when it needs to be agile and fast.

In 1996, Fritjof Capra made an insightful observation on living organisms and ecosystems that rings true when applied to governing business today: “The more we study the major problems of our time, the more we come to realize that they cannot be understood in isolation. They are systemic problems, which means that they are interconnected and interdependent.”

Capra’s point is that biological ecosystems are complex and interconnected and require a holistic understanding of the intricacy of relationships as an integrated whole rather than a dissociated collection of parts. Change in one segment of the ecosystem has cascading effects and impacts the entire ecosystem. This is true in managing ESG in today’s organizations. Dissociated data, systems, and processes leave the organization with fragments of truth that fail to connect and see the big picture of ESG across the enterprise. Simply managing ESG data across different systems in spreadsheets and documents is prone to errors, unreliable, impossible to audit, and very costly to maintain.

THE BOTTOM LINE: Lacking an integrated view of ESG results in business processes, partners, employees, and systems that behave like leaves blowing in the wind, constantly moving and churning but often only creating a further mess. Modern business requires a coordinated ESG strategy and process across the organization enabled by technology for efficient, effective, and agile ESG reporting and monitoring. 

USA vs. UK/Europe in Risk & Compliance Approaches

I am preparing for another trip next week to the United Kingdom/Europe and reflecting on the differences in GRC – governance, risk management, and compliance – between North America and the UK/Europe. BTW: if you want to meet next week in London and discuss GRC strategy, process, technology, and/or content/intelligence solutions . . . let me know . . .

OK, let me be clear: what I am about to state is a generalization. There are exceptions, but this is the overall picture of the differences between North America (USA and Canada) and the U.K./Europe in the context of GRC, particularly risk management and compliance.

Consider (generalizing as there are exceptions) . . . 

  • Risk management. The USA too often approaches risk management (and its acronyms of ERM, ORM, IRM) as a compliance exercise born of SOX. In North America, risk starts with risk and control register mapping. It is a bottom-up approach.
    • Risk management in Europe is most often aligned with ISO 31000; in this context, risk management takes a more business-oriented perspective that starts with objectives (e.g., entity, division, department, process, project, asset). Risks are understood and managed in the context of objectives, and the business is more involved in risk management as it provides more value to the business. It is often a top-down approach aligned with strategy and performance. I see a lot more board-level involvement in risk management in Europe, and risk management is seen as a business tool and enabler. Several RFPs this year are driven by the board. In Germany, you have requirements like IDW PS 340 requiring enterprise risk normalization, aggregation, and quantification up to the board level.
  • Compliance. Compliance is very different between the USA and Europe. On the product safety side, the USA generally has a ‘prove it is harmful’ perspective while Europe has a ‘prove it is safe’ perspective. The regulatory regimes also take very different approaches. The USA has a checklist/tick-box mentality: North American firms want to be told what they have to do, let them check the boxes, and then they want a get-out-of-jail-free card.
    • Europe has an outcome/principle-based approach to compliance. Regulators generally do not create checklists but define principles and objectives that must be achieved. For example, the UK Consumer Duty has a core principle as its foundation, which then rolls into three sub-principles (duties), focusing on four outcomes. There is no detailed checklist. The focus is on principles being embraced in the culture and conduct of the organization and measured by outcomes.
    • This outcome/principle-based approach started with what was the UK FSA, which later became the UK FCA, and rolled into the EU Better Regulation Policy twenty years ago.
    • This outcome/principle-based approach to compliance requires a risk-based approach, as the focus is on principles and outcomes – or, we can say, objectives (like the European approach to risk management). Everything is objective-based. How one organization complies to achieve principles/outcomes/objectives may differ from another, but the achievement of objectives/outcomes is measured. This requires a different way of approaching and thinking about compliance than what you see in North America.
    • Focusing on principles and outcomes is very different from detailed checklists of rules. It requires a deeper focus on the ethics and culture driving conduct.
  • ESG. This is another difference. The USA does not have broad, sweeping legislation tackling ESG like the EU does with CSRD, CSDDD, and CSRS, or individual country laws like Germany’s LkSG (on the third-party side). In the USA, some fragments tackle parts of ESG (e.g., FCPA, Conflict Minerals, the California Transparency in Supply Chains Act), but there is no broad legislation mandating expansive ESG reporting, assurance, attestations, and due diligence like Europe has. ESG in the USA is too often thought of as only the E for environment. Too many think ESG regulation is the proposed SEC climate change regulation (should it ever be finalized). But the reality is that it is only a part of the E in ESG, not even all of the E. And the political environment is in a stalemate, with back and forth on things like the SEC climate disclosure rules and a forthcoming Supreme Court ruling that might undermine all of that.
    • In Europe, the EU CSRD, CSDDD, and CSRS (the ESG trifecta) impact 50,000 firms that have to start doing ESG reporting (including many North American firms with operations in Europe). You have Germany’s LkSG, which raises global concerns about ongoing due diligence in supply chains.
    • ESG is a huge focus in Europe; in North America, it has fragmented attention but not the same momentum. The exception is firms that have significant operations in Europe.
    • I discuss this in detail in the 2023: How to Market & Sell ESG Solutions & Services Research Briefing.
  • Privacy is another example. The EU has GDPR while the USA has . . . nothing at the Federal level . . .

Navigating Complexity & Chaos: Approaching Regulatory Requirements with Control Automation

GRC 20/20’s Michael Rasmussen will be speaking on the topic of this blog in the webinar: Navigating Complexity & Chaos: Approaching Regulatory Requirements Across Jurisdictions with Control Automation!

Complex problems often have a solution that is understandable, simple, and uncomplicated – and usually wrong. 

How can we expect that to be different from business operations? The years of simplicity are gone!

Exponential growth and change in risks, regulations, globalization, distributed operations, competitive velocity, technology, and business data encumber organizations of all sizes. Keeping business strategy, performance, uncertainty, complexity, and change in sync is a significant challenge for boards, executives, and management professionals throughout all levels of the business.

The physicist Fritjof Capra once said, “The more we study the major problems of our time, the more we come to realize that they cannot be understood in isolation. They are systemic problems, which means that they are interconnected and interdependent.” Capra was making the point that ecosystems are complex and interconnected and require a holistic, contextual awareness of the intricacy of interconnectedness as an integrated whole – rather than a dissociated collection of systems and parts. Risk and control in one area have cascading effects that impact the entire ecosystem . . .

[The rest of this blog can be read on the VOQUZ Labs blog, where GRC 20/20’s Michael Rasmussen is a guest author]

2023 GRC Trends: Engagement

In the first post, 2023 Governance, Risk Management & Compliance, we reviewed the top five 2023 GRC trends. We then dove deep into the first trend, the need for GRC agility, explored GRC resilience in two posts, moved on to Integrity (ESG), and then GRC Accountability . . . and we now continue with the fifth trend of five, ENGAGEMENT . . .

ENGAGEMENT is the fifth global trend in the GRC market for solutions and services.

GRC (Governance, Risk Management & Compliance) is as relevant to the front office as it is to the back office. The front lines of the business use GRC systems and need engaging user experiences. 

It is not just the front lines. All levels of the organization interact with and use GRC technologies: taking assessments, reading policies, going through training, reporting incidents, evaluating reports, diving through dashboards, and more.

Employee engagement in GRC requires GRC technologies to extend across the organization: even to extended third-party relationships such as vendors, suppliers, agents, contractors, outsourcers, service providers, consultants, and temporary workers. Engaging stakeholders at all levels of the organization requires GRC technologies to be relevant, intuitive, easy to use, and attractive. Employees live their personal and professional lives in a social-technology-permeated world. GRC needs to engage employees and not frustrate or bore them. It has to be easy to use and interact with.

It has been stated that:

Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.

A primary directive of GRC is to provide GRC engagement that is simple yet gets the job done. Like Apple with its innovative technologies, organizations must approach GRC engagement in a way that re-architects how it works as well as how people interact with it. The GRC goal is simple; it is itself simplicity. Simplicity is too often equated with minimalism, yet true simplicity is more than just the absence of clutter or the removal of embellishment. It’s about offering the right GRC information in the right place when the individual needs it. It’s about bringing interaction and engagement to GRC process and data. GRC interactions should be intuitive.

I have been evaluating GRC technologies for 23 years and find that many have average to poor user experiences. Even some who are recognized as GRC leaders, who would have you believe that their platform could solve the world’s problems, have interfaces that are overly complex, non-intuitive, confusing, and sometimes downright confounding. 

What is needed at the core of GRC engagement is a human firewall.

Firewalls protect us. In buildings, it is a wall intended to shield and confine a fire to an area to protect the rest of the building. In a vehicle, it is a metal shield protecting passengers from heat and potential fire in the engine. In network security, it is the logical ingress and egress points securing a network. 

Within organizations, there is another firewall that is the most essential but the most overlooked: the ‘Human Firewall.’

Humans are the weakest area of any governance, risk management, and compliance (GRC) strategy. Humans make mistakes, they do dumb things, they can be negligent, and they can also be malicious. In the technical world, we can lock things down, and IT operates in binary. In the world of human interaction, it is not binary but shades of grey. Nurturing corporate culture and behavior is critical. The Human Firewall is the greatest protection of the organization. At the end of the day, people make decisions, initiate transactions, and have access to data and processes.

A decade ago, I was involved with The Institute of Risk Management in London in developing Risk Culture: Resources for Practitioners. In this guidance, there is the A-B-C model. The ‘A’ttitudes of individuals shape the ‘B’ehavior of these individuals and the organization, forming the ‘C’ulture of the organization. And that culture, in turn, has a symbiotic effect, further influencing attitudes and behavior. Culture is one of the organization’s greatest assets. It can spiral out of control and become corrupt quickly but can take years, or even decades, to nurture and build in the right direction. The ‘Human Firewall’ is the greatest bastion/guardian of the organization’s integrity and culture. In today’s focus on ESG – environmental, social, and governance – it is the Human Firewall that makes ESG integrity a reality in the behavior and culture of the organization.

Every organization needs a Human Firewall. So what is a Human Firewall? What is it composed of? The following are essential elements:

  • Policy Management. Policies govern the organization, address risk and uncertainty, and provide the boundaries of conduct for the organization to act with integrity. The organization needs well-written policies that are easy to understand and apply to the context they govern. Policies must be well-designed, written in a consistent style, maintained, and monitored, as they provide the foundation for the Human Firewall.
  • Policy Engagement. More than well-written and maintained policies are needed; they must also be communicated to and engaged with the workforce. It does the organization no good, and can actually be a legal liability, to have policies that establish conduct but are not communicated to the workforce. All policies should live in a common corporate policy portal so they are easily accessed, with a regular communication and engagement plan.
  • Training. The next part of the Human Firewall is training. Individuals need training on policies and procedures on proper and improper conduct in the organization’s processes, transactions, and interactions. Training applies policies to real-world contexts and aids understanding, strengthening the Human Firewall.
  • Assessments & Controls. Employees at all levels of the organization need a simplified and engaging user experience to answer GRC-related questions on objectives, risks, and controls.
  • Dashboards and Reporting. From executives to operational management, risk owners need easily understood and accessible insight into the status of objectives, risks, controls, and issues. 
  • Issue Reporting. Things will go wrong. Bad decisions will be made, inadvertent mistakes will happen, and the malicious insider will do something wrong. Part of the Human Firewall is providing mechanisms such as hotlines, whistleblower systems, management reports, and other issue-reporting channels so that employees in the front office and back office can report where things are breaking down or going wrong before they become significant issues for the organization.
  • Extended Enterprise. Brick-and-mortar walls and traditional employees do not define the modern organization. The modern organization is an extended web of relationships: suppliers, vendors, outsourcers, service providers, consultants, temporary workers, contractors, and more. Walk down the halls of an organization, and half the people you pass – the insiders – are no longer employees; they are third parties. The Human Firewall has to extend across these individuals, who are a core part of the organization’s processes. Policies, training, and issue reporting should encompass the web of third-party relationships that shape and form today’s organization.

Where are you in building, maintaining, and nurturing your organization’s Human Firewall to improve GRC Engagement?

2023 GRC Trends: Accountability

In the first post, 2023 Governance, Risk Management & Compliance, we reviewed the top five 2023 GRC trends. We then dove deep into the first trend, the need for GRC agility, explored GRC resilience in two posts, and moved on to Integrity (ESG) . . . and we now continue with the fourth trend of five, ACCOUNTABILITY . . .

The fourth global trend in the GRC market for solutions and services is ACCOUNTABILITY.

In the Fellowship of the Ring, Frodo asks Gandalf, “Why was I chosen?” Gandalf replies . . .

‘Such questions cannot be answered,’ said Gandalf. ‘You may be sure that it was not for any merit that others do not possess. But you have been chosen, and you must therefore use such strength and heart and wits as you have.’

Whenever I think of accountability, this statement by Gandalf comes to mind. Hopefully, there is merit in those given GRC-related accountability, but that is evidently not always the case. Those who are given GRC accountabilities need to pursue them with all the strength and heart and wits that they have.

Accountability is different than responsibility—responsibilities I can give to someone else. Accountability is something I own and cannot give to others. If there are issues of risk, compliance, or control . . . then I have to face up to them and own them (or be praised when things go right). That is accountability.

Too often, GRC-related accountabilities have been passed around the organization like a hot potato. No one wanted to be accountable for risk and compliance. Things are changing. We are entering an era of greater GRC accountability that executives and the board must pay close attention to. There are RFPs in the GRC solution space that are board-driven because of this greater accountability.

Consider the following . . .

  • Accountability Regimes. There is a growing array of accountability regimes around the world. This started with the United Kingdom’s Senior Managers & Certification Regime (UK SMCR), and other jurisdictions have followed suit, such as Ireland (SEAR), Australia (what was BEAR, now FAR), Hong Kong (MIC), and Singapore (IAC), with South Africa the latest. In the UK, 28 Senior Management Functions (SMFs) have to be defined in financial services organizations – executives accountable for different areas of risk, compliance, control, and conduct. If there is willful misconduct, such an executive can go to jail. If there is negligence or a lack of due diligence, that SMF can be personally fined. UK SMCR is going into review to see how it can be improved.
    • Consider the most recent enforcement where a CIO of a bank was personally fined £81,620 out of his personal bank account for a third-party risk failure.
  • US Department of Justice. The updates to the DoJ enforcement policies are clear on individual accountability. The DoJ expects companies to provide information on anyone culpable, and it requires organizations to incentivize executives who act ethically and to claw back compensation from those who do not. Individual accountability is the DoJ’s top priority in corporate criminal cases. The DoJ also states that CEOs and CCOs/CECOs must certify that their compliance programs are effectively designed and operational.
  • Case law. In In re McDonald’s Corporation Stockholder Derivative Litigation, the Delaware Court of Chancery stated that the fiduciary obligation in the earlier In re Caremark decision also applies to non-director officers. The trajectory is to hold corporate officers/executives, particularly CCOs/CECOs, personally liable for corporate misconduct.
  • Regulators. In Rule 3110, FINRA has focused on the liability of CCOs/CECOs in broker-dealers. And CCOs/CECOs are also finding exposure to personal liability under the Investment Advisers Act of 1940 and the Securities Exchange Act of 1934.
  • ESG. In the context of ESG, we are seeing increased pressure on Board Members and Executives for ESG. In some cases they are being voted out if they do not hit ESG-related metrics.
  • Personal Liability (criminal and civil). The former CISO of Uber, Joseph Sullivan, was convicted on federal criminal charges for his role in covering up a 2016 data breach in which the personal information of 57 million Uber users was stolen.

This is just a smattering of developments, touched on briefly, to which we could add many more. But the writing is on the wall: greater and greater accountability is being placed on the board and senior executives for aspects of GRC.

So what do we do?

Greater accountability means that these roles need greater insight into GRC-related data to have their “strength and heart and wits” about them. The old era of documents, spreadsheets, and emails will not work in this new era of GRC Accountability. Regulators, law enforcement, auditors, and opposing counsel in civil cases have become aware that evidence of risk management and compliance can be fictitiously manufactured in documents and spreadsheets to cover a trail. They increasingly want to see what was assessed or communicated on what day and time – and, if something changed, who changed it and what was changed. Organizations need complete audit trails and systems of record/truth for GRC-related activities to support accountability.

And those functions that are accountable need real-time dashboards and reports so they can uphold their fiduciary duties and accountabilities in the organization.

2023 GRC Trends: Integrity (ESG)

In the previous post, 2023 Governance, Risk Management & Compliance, we reviewed the top five 2023 GRC trends. We then dove deep into the first trend, the need for GRC agility, and explored GRC resilience in two posts . . . and we now continue with the third trend of five, INTEGRITY . . .

The third global trend in the GRC market for solutions and services is INTEGRITY.

INTEGRITY: 1. the quality of being honest and having strong moral principles; moral uprightness. 2. the state of being whole and undivided.

Organizations are re-evaluating their internal core values, ethics, and standards of conduct in 2023 and how these extend and are enforced across the organization. The integrity of the organization is a front-and-center concern. Organizations see the need to define and live their corporate values in the business, its transactions, with clients, and in third-party relationships. This includes focusing on human rights, privacy, environmental standards, health and safety, corruption, conflicts of interest, compliance, managing risk, conduct with others (e.g., customers, partners), security, and more. What the organization communicates in policies, statements, reports, and controls needs to be a reality in the organization and not just smoke and mirrors.

Integrity is played out in ESG – Environmental, Social, Governance . . .

  • Environment. Climate change, natural resource utilization, pollution and waste, biodiversity, certification, carbon footprint/emissions.
  • Social. Child labor, forced labor, socio-economic inequality, privacy, personal data use, diversity, inclusion, working conditions, health and safety, product liability.
  • Governance. Corporate governance, fraud, anti-bribery and corruption, anti-money laundering, internal controls over financial reporting, security, corporate conduct and behavior, anti-competitive practices, tax transparency, ownership, and structure.

Organizations need a cohesive strategy from the board down into operations to address the organization’s integrity in the context of ESG. This moves from strategy down into ESG processes and into how these are automated and managed in a GRC information and technology architecture (GRC enables ESG).

ESG does not start with RISK!

ESG does not start with risk but with objectives. I cringe when solution providers show me their solutions focused on ESG risk management. That is putting the cart before the horse. To be an organization of integrity requires a focus on principles and objectives. Only in that context can we identify and manage risks to those objectives. An organization may have an objective to be carbon neutral; then, we map the risks of not being carbon neutral. An organization has inclusivity and diversity objectives, no tolerance of child labor objectives, and more. Risks are then mapped to ESG objectives. YOU CAN BET THAT I WILL NEVER RECOMMEND A SOLUTION IN THE MARKET that starts with ESG risk over ESG objectives.
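
As a small sketch of this objectives-first ordering (all names here are hypothetical, for illustration only), risks hang off objectives, never the other way around:

```python
# Sketch of objective-first ESG risk mapping; every risk is identified only
# in the context of the objective it threatens. Names are illustrative.
objectives = {
    "carbon-neutral-2030": "Achieve carbon neutrality across operations by 2030",
    "no-child-labor": "Zero tolerance of child labor across the extended enterprise",
}

risks_by_objective = {
    "carbon-neutral-2030": ["supplier emissions not measured",
                            "offset projects fail verification"],
    "no-child-labor": ["tier-2 suppliers not screened"],
}

for objective_id, statement in objectives.items():
    print(statement)
    for risk in risks_by_objective.get(objective_id, []):
        print("  risk to objective:", risk)
```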

ESG requires focus on the EXTENDED ENTERPRISE!

Brick-and-mortar walls and traditional employees no longer define the modern organization. The modern organization is the extended enterprise: suppliers, vendors, outsourcers, service providers, contractors, consultants, temporary workers, agents, brokers, dealers, partners, and more. ESG, and in this context integrity, plays out and is measured across these relationships. Martin Luther King Jr. stated, “Whatever affects one directly, affects all indirectly. I can never be what I ought to be until you are what you ought to be. This is the interrelated structure of reality.” This statement is true in our individual relationships, and it is true in an organization’s relationships in the extended enterprise. The saying “Show me who your friends are, and I will tell you who you are” translates to business: show me who your third-party relationships are, and I will tell you who you are as an organization in the context of ESG. The organization’s ability to act with integrity in the context of ESG requires addressing it across the extended enterprise.

There is so much happening in the regulatory aspects of this. Consider the breadth of ESG impact from just the following (much more can be added) . . .

Broad ESG Regulation

  • EU CSRD – approximately 50,000 firms will have to report under this directive.
  • EU CSDDD – requires ongoing ESG due diligence.

Third-Party Due Diligence

  • EU CSDDD
  • Germany’s LkSG (Supply Chain Due Diligence Act)
  • Dutch Due Diligence Act

Modern Slavery (ties into third-party due diligence)

  • UK Modern Slavery Act
  • Norwegian Transparency Act
  • Swiss Human Rights Due Diligence Law
  • California Transparency in Supply Chains Act
  • Conflict Minerals
  • Uyghur Forced Labor Prevention Act – China supply chain risk

Anti-Bribery & Corruption (under the G)

  • US FCPA
  • UK Bribery Act
  • France’s Sapin II

Environmental

  • SEC Climate Disclosure
  • PFAS

IT Security (under the G)

  • SEC Cybersecurity

Internal Controls & Governance (under the G)

  • US SOX
  • UK SOX & Corporate Governance

And, of course, Tax Transparency, Beneficial Ownership, Inclusivity/Diversity, Consumer Duty, and much more . . . 

The writing is on the wall; organizations must fundamentally change how they approach ESG internally and across the extended enterprise. Organizations should start defining an integrated strategy for ESG to address these forthcoming requirements and stakeholder demands in a unified and consistent approach.

GRC 20/20 will be doing a deep-dive on ESG and these regulations for solution and service providers in the upcoming 2023: How to Market & Sell ESG Solutions & Services.

Why the Banking Crisis is Back

The following is an article published in the latest issue of Enterprise Risk Magazine (Summer 2023) starting on page 16. This article was authored by Michael Rasmussen, Analyst & Pundit at GRC 20/20 Research, and William Gonyer of Group697.

The latest banking crisis in North America has put potential failures of regulation, governance and risk management back in the spotlight.

Springtime often becomes a metaphor for change, new growth and transformation. While change and transformation tend to be the by-product of dissatisfaction with behaviours and patterns that are no longer tenable in the present situation, sometimes this change is also involuntary in nature – an uncomfortably forced evolution that imposes progress on us. Springtime this year has brought a mass sobering for the banking industry. After riding a wave of ultra-low interest rates and high market liquidity, a domino effect of events has brought on the failure of several major regional American banks, marking the greatest shake-up of the global financial system since the financial crisis of 2007-08.

As the age-old adage goes, “there is nothing new under the sun.” The driving factors that led to the collapse of Lehman Brothers, Bear Stearns, Wachovia and Washington Mutual are almost identical to the key drivers of this year’s failures of Silicon Valley Bank (SVB) and Signature Bank – a gross failure of governance and risk management – with First Republic being the exception.

Situational awareness

The interconnectedness of organisational objectives, risks, resilience and integrity requires 360° situational awareness of governance, risk and resiliency. Organisations must see the intricate relationships and impacts of objectives, risks, processes and controls. It requires holistic visibility and intelligence regarding risk and resiliency.

Organisations such as banks and other financial institutions take risks all the time. Still, the failure to monitor and manage these risks effectively in an environment that demands agility can lead to a tinder box of potential catastrophe. Too often, risk management is seen as a compliance exercise and not truly integrated with the organisation’s strategy, decision-making and objectives. It results in the inevitable failure of governance, risk and compliance (GRC) and risk management, providing case studies for future generations on how poor GRC management leads to the demise of organisations.

The collapse of SVB is one of the most blatant cases of this. SVB failed to institute some of the most basic risk management practices by industry standards. Starting from the end of 2019, SVB’s deposits grew from $61 billion to $189 billion by quarter 4 of 2021. Interest rates at the time were so low that these deposits were treated as free money, at an average cost of roughly 25 basis points. SVB then used these inflows to increase loans 100 per cent to $66 billion and to push far beyond average industry risk parameters with its held-to-maturity (HTM) securities portfolio, ramping what were mostly agency mortgage holdings from $13.5 billion at quarter 4 of 2019 to $99 billion at quarter 4 of 2021.

SVB’s big problems were with its HTM portfolio. The bank increased its securities portfolio by 700 per cent, buying in at a generational top in the bond market: $88 billion of mostly 10-plus-year mortgages with an average yield of just 1.63 per cent. In the absence of adequate interest rate risk management, this resulted in massive unrealised losses when the Federal Reserve began hiking its benchmark interest rates.
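To put that exposure in perspective, here is a back-of-the-envelope sketch using the standard duration approximation (price change ≈ −modified duration × yield change). The $88 billion figure comes from the article; the duration and rate-shock values are illustrative assumptions, not disclosed SVB numbers.

```python
# Back-of-the-envelope mark-to-market loss on a fixed-rate securities book,
# using the duration approximation: dP/P ≈ -D_mod * dy.

portfolio = 88e9          # $88bn of long-dated mortgage securities (from the article)
modified_duration = 6.0   # ASSUMPTION: plausible for 10-plus-year agency mortgages
rate_shock_bps = 300      # ASSUMPTION: roughly the 2022 rise in long-term yields

loss = portfolio * modified_duration * (rate_shock_bps / 10_000)
print(f"Approximate unrealised loss: ${loss / 1e9:.1f} billion")  # ~$15.8 billion
```

Even with these rough assumptions, the estimate lands in the same range as the $15.9 billion mark-to-market loss cited below.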

Deregulation

SVB’s HTM securities had mark-to-market losses of $15.9 billion as of quarter 3, 2022, compared to just $11.5 billion of tangible common equity. Due to lobbying for deregulation by SVB and other midsized banks such as Signature Bank (of which Barney Frank, of Dodd-Frank fame, was a board member), regulators did not require SVB to mark its HTM securities to market. Internally, however, the bank should have been doing this anyway, as well as running risk models against changing rates.

The deregulation that enabled their increased risk tolerance came as a result of Congress passing the Economic Growth, Regulatory Relief, and Consumer Protection Act (EGRRCPA), also known as the Dodd-Frank Reform Act. The act was signed into law in May 2018, and it raised the asset threshold for systemically important financial institutions (SIFIs) from $50 billion to $250 billion, effectively reducing the regulatory burden on many midsized banks such as SVB and First Republic.

On top of this, due to the Federal Reserve’s interest rate hikes, SVB saw accelerating deposit outflows (-6.5 per cent year-to-date in January), a mix shift away from non-interest-bearing accounts, skyrocketing interest costs (money market funds now yield 4 per cent) and increased burn rates among the bank’s venture clients, resulting in customer deposit drawdowns. As SVB’s funding costs continued to reset higher, the bank faced a massive negative carry on its HTM portfolio, which was largely a fixed-yield securities book.
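The carry problem is simple arithmetic. A minimal sketch follows, assuming deposits repricing toward the roughly 4 per cent money-market yields the article cites; the portfolio size and 1.63 per cent yield are from the article, while the funding cost is an illustrative assumption.

```python
# Illustrative negative-carry arithmetic on a fixed-yield HTM portfolio.

portfolio = 88e9        # HTM securities book (from the article)
asset_yield = 0.0163    # 1.63% average portfolio yield (from the article)
funding_cost = 0.04     # ASSUMPTION: deposits repricing toward ~4% money-market rates

annual_carry = portfolio * (asset_yield - funding_cost)
print(f"Annual carry: ${annual_carry / 1e9:.1f} billion")  # ~ -$2.1 billion per year
```

A portfolio that cannot be sold without realising losses, yet costs roughly $2 billion a year to hold, leaves a bank with no good options.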

But SVB’s greatest failures extend to the top – its leadership. The Federal Reserve’s review described SVB as a “textbook case of mismanagement” and further described a failure of oversight and accountability of senior leadership by the bank’s board of directors. Only one member of SVB’s board had previous banking experience. The practices and procedures of SVB’s risk management team raise serious questions about its competence, given the evident gaps in its risk management frameworks. The team “failed to establish a risk-management and control infrastructure suitable for the size and complexity of SVBFG when it was a $50 billion firm, let alone when it grew to be a $200 billion firm”, said the review. SVB had 31 identified, unaddressed “safety and soundness” supervisory warnings – more than triple the average for its peer banks. Furthermore, the bank was left without a chief risk officer for seven months in 2022, a departure that demands an explanation. The discoveries made by the Federal Reserve and Treasury Department regarding the bank’s risk management practices raise further questions beyond the obvious conclusions: SVB failed to institute an adequate asset-liability committee, erroneously focused on short-term profits, and neglected the associated long-term risks.

Bad timing

The relaxing of Dodd-Frank also came at exactly the worst time. It happened almost a year before the beginning of the Federal Reserve’s tightening cycle and at the natural end of an era of economic expansion that was later disrupted by emergency monetary intervention during the global COVID-19 pandemic. Midsized banks could now take on greater risks, and they did so during a period of irregular economic conditions and expanded emergency liquidity.

First Republic’s portfolio arguably could have withstood the fluctuations. However, First Republic lost more than half of its deposit base amid SVB’s collapse, pulling the bank into critical territory and ultimately leading to its collapse and takeover by JP Morgan and the Federal Deposit Insurance Corporation (FDIC). This marked the second-largest bank collapse in US history after Washington Mutual in 2008.

First Republic’s traditional savings and loan business model was arguably sound. It catered to wealthier clients in the tech sector, targeting the employees at companies like Apple, Alphabet and Meta. First Republic even had a branch inside Facebook’s headquarters. But First Republic’s failure was purely panic-induced. Even with paper losses on low-interest loans and its interest rate risk mismatch, the bank could have survived if it had not had to rapidly fund withdrawals by depositors seeking higher returns on deposits elsewhere, as well as outflows triggered by panic amid the failure of SVB. As a result, the bank was forced to rely on government lending facilities at rates that exceeded its income in an attempt to ride out the storm. First Republic’s problems are almost reminiscent of the Bailey Building and Loan in Frank Capra’s 1946 film It’s a Wonderful Life – only in this not-so-wonderful life the townspeople did not temper their panic and rally around their community bank.

Re-regulation

The recent failure of these regional banks will likely trigger a new wave of regulations and guidelines, as well as a reversal of the changes made to regulatory frameworks for midsized banks in 2018. Regulators need to recognise that, given the increased scale of the financial system, midsized banks that are only regionally important can still pose significant systemic risk; supervisory authorities do not have the resources to monitor their activities closely and should not underestimate the propensity for mismanagement. The asset threshold for enhanced prudential standards for SIFIs should be reverted from $250 billion to $50 billion. Regulators and organisations with large deposits also need to consider the concept of dual fiduciary duty.

In the case of SVB, a bank of choice for many venture capital firms and venture-backed companies, the burden of large deposit risk cannot fall solely on the bank. Venture capital firms, while exempt from many of the regulations and compliance burdens of hedge funds and other asset managers, were arguably negligent in managing their cash risk for their limited partners and thus somewhat complicit in the risk concentration of SVB. The leading practice of asset managers is to hedge cash risk through treasuries. A venture capital firm’s responsibility to its investors must extend to its cash risk within its portfolio companies.

Too often, regulators and bank managers alike make policy solely in the vacuum of a crisis. Policy developed in the vacuum of a crisis is inherently inadequate, as it usually only remedies the causes and symptoms of the present crisis. Supervisory authorities need to consider expanded guidelines for bank governance, and the leadership of financial institutions should be held to qualification standards. All bank board members should be certified by supervisory authorities such as the Office of the Comptroller of the Currency (OCC), FDIC and Financial Industry Regulatory Authority (FINRA) to a minimum qualification standard.

Cost of failure

While the US Department of the Treasury and the Federal Reserve have taken responsibility for inadequate supervision of these troubled midsized banks, financial institutions need to realise more than ever that an increased legal risk tolerance does not equate to an acceptable risk tolerance. Banks must institute more sophisticated internal risk frameworks that factor in significantly more severe stress tests for implied volatility.
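As a minimal sketch of what such an internal framework might test, the following runs a simple parallel rate-shock stress against tangible equity. The $11.5 billion equity figure is cited earlier in the article; the portfolio size, duration and shock scenarios are illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch of an internal interest-rate stress test: approximate the
# mark-to-market loss for parallel rate shocks and compare it with equity.

def duration_loss(portfolio: float, modified_duration: float, shock_bps: int) -> float:
    """Approximate loss for a parallel rate shock, via the duration approximation."""
    return portfolio * modified_duration * (shock_bps / 10_000)

PORTFOLIO = 88e9          # fixed-rate securities book (illustrative)
MOD_DURATION = 6.0        # ASSUMPTION: illustrative portfolio duration
TANGIBLE_EQUITY = 11.5e9  # tangible common equity (figure cited in the article)

for shock_bps in (100, 200, 300, 400):
    loss = duration_loss(PORTFOLIO, MOD_DURATION, shock_bps)
    status = "equity survives" if loss < TANGIBLE_EQUITY else "equity wiped out"
    print(f"+{shock_bps} bps: loss ${loss / 1e9:5.1f}bn -> {status}")
```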

Major money centre banks are forced to adhere to a wide range of scenarios for long-term resilience, but midsized and even small banks need to develop their own internal frameworks, beyond the demands of compliance, that mirror the top of the industry at scale – even if that comes at the cost of profits, because the cost of a bank failure is far greater than the cost of forgoing unsustainable profits. Banks currently under pressure should consider consolidating with peer banks before they are forced into consolidation, liquidation or shotgun acquisitions. Well-structured asset-liability committees and audit committees should become universal practice for banks of all sizes.

The Federal Reserve’s review of SVB identified governance and risk management as two of the bank’s three critical weaknesses. The review further concluded that while SVB was compliant, compliance alone was insufficient, because the regulatory and supervisory frameworks themselves were inadequate to prevent the bank’s failure. The second- and third-largest bank collapses in US history have set the stage for a new wave of regulation to reinforce neglected gaps in global financial services across the United States, European Union, United Kingdom, the Commonwealth and beyond.