Cognitive GRC: A.I. & Regulatory Change Intelligence

One of the top inquiry areas for GRC 20/20’s market research is the role of Corporate Compliance and Ethics Management: managing the range of conduct, ethics, regulations/obligations, policies, and boundaries of the organization, particularly now in the era of ESG. We regularly get inquiries from organizations looking for solutions for policy management, hotline/whistleblower, case management, forms/disclosures, third-party compliance/risk, compliance assessments, and more.

A growing area for solutions for corporate compliance is in regulatory change management and regulatory intelligence. This is an area where the traditional approach of armies of subject matter experts is now automated with artificial intelligence. 

Managing and keeping up with regulatory change is one of the most significant challenges for organizations in the context of governance, risk management, and compliance (GRC). Managing the dynamic and interconnected nature of change and how it impacts the organization is driving strategies to mature and improve regulatory change management as a defined process. The goal is to make regulatory change management more efficient, effective, and agile as part of an integrated GRC strategy within the organization.

Regulatory change is overwhelming organizations. Many industries, like financial services, are past the point of treading water as they actively drown in regulatory change from the turbulent waves of laws, regulations, enforcement actions, administrative decisions, and more worldwide. Regulatory compliance and reporting are moving targets as organizations are bombarded with thousands of new regulations, changes to existing regulations, enforcement actions, and more each year.

In the past five years, the number of regulatory changes has more than doubled, while the typical organization has not increased staff or updated processes to manage regulatory change. According to Thomson Reuters, financial services had an average of 257 regulatory change events every business day in 2020, just in this one industry. In the past five years, the number of regulatory change updates impacting organizations has grown extensively across industries.

GRC 20/20 Research is seeing a steady pace of regulatory change management inquiries and research interactions, focusing on artificial intelligence in this context. In our market research, we have reviewed/evaluated many solutions in this space. Some solutions deliver real value, while others claim A.I. but stretch the term (anyone with some logic in a workflow claims it as A.I.), or play the Wizard of Oz, with the man behind the curtain doing the work because the A.I. tech is not fully baked and delivering.

The best solutions deliver a lot of value in A.I. for regulatory change, with natural language processing, machine learning, deep learning, predictive analytics, generative A.I., and more. 

I am told that if you print off the entire UK FCA rulebook, it is a stack of paper six feet tall. Print off the U.S. Code of Federal Regulations and lay it end to end, and it stretches longer than a marathon. Internal documents, like policies, are also a mess. At one bank where I built a business case for policy management, a single policy took six months to get updated because of a regulatory change, passing through 75 reviewers in a linear document check-in and check-out fashion . . . that certainly is not agile. Another bank states that if every branch printed the policy manual, it would be a stack of paper as tall as the Elizabeth Tower (Big Ben) in London.

A machine with natural language processing can read the U.S. CFR or the UK FCA rulebook in minutes; it would take me a year or more. And in those same minutes, the machine can also map and categorize what it reads.
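As a toy illustration of the categorization idea (not any vendor’s implementation), a simple bag-of-words similarity check can route paragraphs of regulatory text to topic areas. The categories, keyword profiles, and sample clause below are entirely hypothetical:

```python
from collections import Counter
import math

# Hypothetical topic profiles -- in practice these would be learned from
# labeled regulatory text, not hand-written keyword lists.
CATEGORY_TERMS = {
    "privacy": Counter(["data", "personal", "consent", "processing", "subject"]),
    "capital": Counter(["capital", "ratio", "tier", "reserve", "liquidity"]),
    "conduct": Counter(["customer", "fair", "disclosure", "sales", "advice"]),
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def categorize(paragraph: str) -> str:
    """Map a paragraph of regulatory text to its closest topic area."""
    words = Counter(paragraph.lower().split())
    return max(CATEGORY_TERMS, key=lambda c: cosine(words, CATEGORY_TERMS[c]))

print(categorize("Firms must obtain consent before processing personal data"))
# -> privacy
```

Real solutions use far richer models, but the principle is the same: the machine never gets bored, and it tags every paragraph the same way every time.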

The Chief Ethics and Compliance Officer (CECO) I interacted with at a life sciences firm did some internal testing on A.I. for regulatory change management. They not only found that a machine was a ‘gazillion’ times faster at reading and mapping regulations, but they also found it was 30% more accurate/effective. Think about it: if we are going to read a lot of legal documents/regulations, and I mean a lot, looking for changes/updates . . . our minds are going to wander and think about plans for dinner or the weekend, or how our favorite sports club is doing. We miss things where a machine stays on point.

There are a variety of use cases for A.I. in regulatory change management. No one solution has all of this covered in detail, so it takes an architecture, and it often plugs into your favorite enterprise GRC platform for even broader value. These include:

  • Horizon Scanning. Using A.I. to monitor and evaluate pending legislation, proposed rules, changes in enforcement, speeches, and comments made by regulators to determine what we need to pay attention to that will be tomorrow’s concerns. 
  • Regulatory Obligation Library. Using A.I. to monitor the current state of regulations, changes in regulations, comparisons of change (side-by-side markups), and notifications, all to keep the organization current with regulatory changes impacting the here and now. 
  • Policy Management. Using A.I. to map regulations and changes to your current policy library, inform you which policies should be reviewed because of changes, and suggest language for the update to address the change (generative A.I.).
  • Control Management. I worked on a large risk management RFP for a global organization a few years ago. Once they were done with that RFP, they looked to use A.I. to keep controls updated and current in their environment. They specifically leveraged natural language processing to derive content-related information from local control descriptions. They then used machine learning to score quality and identify quality gaps in documentation. This enabled them to provide real-time feedback to control owners directly and indicate areas for improvement. Finally, they built scoring reports and dashboards to generate an overview of the documentation quality of ICS principles across business units.
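The ‘side-by-side markup’ capability in the obligation library use case above can be sketched with nothing more than a text diff. The clause text and version labels below are hypothetical:

```python
import difflib

# Hypothetical before/after text of a regulatory clause.
old_rule = [
    "Firms must report incidents within 72 hours.",
    "Reports are filed with the national regulator.",
]
new_rule = [
    "Firms must report incidents within 24 hours.",
    "Reports are filed with the national regulator.",
    "Material incidents also require board notification.",
]

# unified_diff yields the change markup an obligation library would
# surface so compliance teams see exactly what moved.
changes = list(difflib.unified_diff(old_rule, new_rule,
                                    fromfile="prior version",
                                    tofile="current version",
                                    lineterm=""))
print("\n".join(changes))
```

Commercial solutions layer on the mapping to obligations and policies, but the heart of the notification is exactly this: what changed, line by line.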

And this is just exploring the regulatory change management-related use cases of A.I. I also see a lot of interest in using A.I. for third-party risk management, from reading and comparing differences in policies/controls between an organization and a supplier/vendor to monitoring the range of third-party risk databases (e.g., ESG ratings, financial viability/corporate ratings, reputation and brand lists, watch lists, sanction lists, negative news, security ratings, politically exposed persons, geo-political risk, and more).

My job as an analyst is to research and understand the variety of GRC solutions (both very narrow and specific to broad platforms) and understand what differentiates one vendor from another and what is the best solution for an organization. 

In that context, GRC 20/20 covers the range of Cognitive GRC solutions available in the market, around the world, and across industries . . . and knows which are real and provide value, and which are ‘the Wizard of Oz.’

Navigating Risk and Resilience: Balancing Complexity and Cost in GRC Solutions

Complexity & Costs: Key Points of Consideration in Selecting a Solution

When it comes to operational resilience and continuity, as well as broader GRC, many solution options are available in the market. Selecting the right solution is critical as many choices lead organizations down the road of complexity and cost, not just in implementation but also in ongoing maintenance, management, and user experience. Organizations need operational resilience and continuity solutions that are highly efficient (in both human capital and financial capital), effective, and agile to the needs of dynamic and distributed businesses.

It used to be that the dividing line between agile solutions with lower implementation and maintenance costs was whether the solutions were cloud-based (e.g., SaaS) or on-premise. This is not the case anymore, as some . . .

[The rest of this blog can be read on the CLDigital blog, where GRC 20/20’s Michael Rasmussen is a guest author]

When A.I. (Artificial Intelligence) Fails . . .

This blog is an excerpt from GRC 20/20’s latest research paper, READ MORE in: A.I. GRC: The Governance, Risk Management & Compliance of A.I.

A.I. technology and models are used across industries to analyze, predict, generate, and represent information, decisions, and outcomes that impact operations and business strategy. A range of departments, functions, and roles are beginning to rely on A.I. as a critical foundation of business processes that support operations, long-term strategic planning, and day-to-day tactical decisions.

A range of A.I. technology spans predictive analytics, machine learning, deep learning, natural language processing, and robotic process automation to the new era of generative A.I. Within these various approaches, there are three core components:

  • Input Component. Delivers assumptions and data to a model.
  • Processing Component. Analyzes inputs into predictions, decisions, and content. Within A.I. systems there is often a continuous learning component/engine, which sets it apart from conventional data processing systems.
  • Reporting/Output Component. Translates the processing into useful business information.
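As a minimal sketch of the three components wired together as a pipeline (the transaction-review rule is purely illustrative, not a real model):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ModelPipeline:
    """Toy illustration of the three core A.I. model components."""
    input_component: Callable       # delivers assumptions and data to the model
    processing_component: Callable  # turns inputs into predictions/decisions
    output_component: Callable      # translates processing into business info

    def run(self, raw: Any) -> Any:
        data = self.input_component(raw)
        prediction = self.processing_component(data)
        return self.output_component(prediction)

# Hypothetical example: decide whether a transaction needs review.
pipeline = ModelPipeline(
    input_component=lambda txn: {"amount": txn["amount"], "country": txn["country"]},
    processing_component=lambda d: d["amount"] > 10_000 or d["country"] in {"XX"},
    output_component=lambda flagged: "escalate for review" if flagged else "clear",
)
print(pipeline.run({"amount": 25_000, "country": "US"}))  # -> escalate for review
```

The point of the sketch is that a flaw in any one of the three stages – a bad input assumption, a wrong processing threshold, a misleading output label – corrupts the business decision, even if the other two stages are sound.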

While the common understanding of models is that they have three components – input, processing, and reporting/output – the reality is that there are multiple parts to each of these component areas. Multiple components within input, processing, and reporting connect to each other and carry an array of assumptions, data, and analytics. Adding to this complexity are the human and process elements intertwined throughout the business use of A.I., weaving together the various manual processing and technology integration elements needed to use and interpret A.I. As the environment changes, A.I. models themselves have to change to accurately represent the world in which they exist.

Models are used to represent scenarios and produce outcomes through inputs of values, relationships, events, situations, expressions, and characteristics. This is defined as the ‘input component’ of a model. The real world is a complex web of interrelationships and variables of significant intricacy that models cannot fully represent. Inputs are a simplified abstraction of the real world used to process and report on quantitative estimates of outcomes. The challenge is that wrong assumptions, bias, and bad (or incomplete) information are compounded by the complexity of variables in the real world, and models can fail in their validity and reliability, and in their inability to process any variables that sit outside their input scope. Validity speaks to accuracy, whereas reliability speaks to repeatability; something can be very reliable but not at all accurate. There is a risk that complex models lose both validity and reliability as the focus shifts from analyzing the impact of key critical variables to the fragile interaction and relationship of a variety of variables. They will reliably provide an outcome, but it will increasingly not be accurate or valid.
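The validity/reliability distinction can be made concrete with a toy simulation; the two ‘models’ and all numbers below are hypothetical:

```python
import random

random.seed(0)
TRUE_VALUE = 100.0

def biased_model() -> float:
    """Reliable but not valid: tight outputs clustered around the wrong answer
    (e.g., a bad input assumption baked into the model)."""
    return 80.0 + random.gauss(0, 0.5)

def noisy_model() -> float:
    """Valid but less reliable: centered on the truth, but variable."""
    return TRUE_VALUE + random.gauss(0, 10.0)

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

biased = [biased_model() for _ in range(1000)]
noisy = [noisy_model() for _ in range(1000)]

# The biased model repeats itself almost perfectly -- and is wrong every time.
print(f"biased model: mean={mean(biased):.1f}, spread={spread(biased):.2f}")
print(f"noisy model:  mean={mean(noisy):.1f}, spread={spread(noisy):.2f}")
```

The danger for an organization is that the first model’s consistency looks like quality: its outputs agree with each other, so it earns trust, while being systematically wrong.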

When A.I. Fails

Organizations are in the early stages of becoming highly dependent upon A.I. to support critical business processes and decisions. A.I. is now critical to many businesses. The expanding use of A.I. in the organization reflects how A.I. can improve business decisions. Still, A.I. comes with risks when internal errors or misuse result in bad decisions. 

Unfortunately, as much value as A.I. provides, it also exposes the organization to significant risk. Ironically, the A.I. tools often used to model and predict risk can be a significant risk exposure if not governed appropriately. A.I. model risk is the potential for adverse consequences from decisions based on incorrect or misused A.I. It leads to financial loss, poor business and strategic decision-making, legal and regulatory issues, and damage to an organization’s brand. For example, disclosing restricted information to “public A.I.” is also a risk when employees register for and use tools like ChatGPT for business purposes. The most dangerous thing (a moral hazard) for an organization is to have developed complete trust in what is being produced and delivered by A.I.

A.I. should be informing decisions and raising points for consideration rather than being solely relied on to make decisions – especially those that are business critical. A.I., inappropriately used and controlled, brings many risks to organizations.  These include:

  • Dynamic & Changing Environment. A.I. models are not static. In reality, new A.I. models and uses are being added, old ones are retired, and current A.I. technology and models are constantly changing. Compounding this is the constant change in risk, regulations, and business that puts the environment that A.I. is supposed to represent in a constant state of flux. Organizations face significant risk when the environment changes, yet A.I. and its associated data inputs fail to evolve to represent the current environment. A.I. models that were accurate last year may not be accurate this year.
  • Lack of Governance & Control. The pervasive use of A.I. has also introduced what could now be called Shadow A.I., a component of Shadow IT where the line of business bypasses IT and uses technology that has not been approved. This increases risk through inappropriate and unauthorized use that exposes the organization.
  • More Than the A.I. Model-Processing Component. The use of A.I. is more than the A.I. model-processing component. It is an aggregation of components that span a variety of input, processing, and reporting functions that integrate and work together. This includes the overall A.I. modeling and use project. Organizations risk being fixated on the A.I. model-processing component alone while the many supplementary components undergo rapid changes that are not governed, and bad input data means bad decisions from A.I. The quality of A.I. depends upon the quality of the input data and the assumptions: errors in inputs and assumptions lead to inaccurate processing and outputs. 
  • Errors in Input, Processing & Reporting. A.I. may have errors that produce inaccurate outputs without proper development and validation.  Errors can occur throughout the A.I. lifecycle from design through A.I. use and can be found in any or all of the input, processing, and reporting components. With specific data, if that data is not annotated correctly, the outcome will always be wrong. These errors may be from the development of the A.I. model in its inputs, processing, and reporting or can be errors introduced through changes and modifications to the model components over time. Errors may also occur from the failure of A.I. to change to shifting business use and a changing business environment.  
  • Undiscovered Model Flaws. It’s possible that an A.I. model will initially appear to generate highly predictive output, despite having serious flaws in its design/training. In this case, the A.I. solution may gain increased credibility and acceptance throughout the organization until eventually, some combination of data exposes the flaw. False positives are part of any predictive system but can be extremely convincing with A.I., leading to greater long-term reliance on a flawed model.
  • Misuse of Models. A significant risk is from A.I. that is used incorrectly. An accurate A.I. model will produce accurate results but lead the business to error if used for purposes the A.I. tech/model was never designed for. Organizations need to ensure that models are accurate and appropriately used. Organizations face risk when using and applying existing A.I. to new areas without validating A.I. in that context.
  • Misrepresentation of Reality. The very nature of A.I. means it is a representation of reality and not reality itself. A.I. models are simplifications of that reality and, in the process of simplification, may introduce assumptions and errors due to bias, misunderstanding, ignorance, or lack of perception. This risk is a particularly hot topic in generative A.I., which may leverage inaccurate data, but it is a risk across A.I.
  • Limitations in the Model. A.I. models approximate the real world with a finite set of inputs and variables (in contrast to an infinite set of circumstances and variables in the real world). Risk is introduced when A.I. is used with inaccurate, misunderstood, missing, or misapplied assumptions that they are built upon. 
  • Pervasiveness of Models. Organizations bear significant risk as A.I. can be used at any level without accountability and oversight. Anyone can acquire and/or access A.I. that may or may not serve the organization properly. Organizations struggle to identify A.I. being used not only within the traditional business but also across third-party relationships. The problem grows as existing A.I. models are modified and adapted to new purposes. The original A.I. model developer in the organization often does not know how others are adapting and using A.I. 
  • Big Data and Interconnectedness. The explosion of inputs and variables from massive amounts of data within organizations has made A.I. use complex across input, processing, and reporting components. The interconnectedness of disparate information sets makes A.I. models more complex and challenging to control. This leads to a lack of standardization, inconsistent use of data, data integrity issues across systems that feed into models, and data integrity within A.I.
  • Inconsistent Development and Validation. A.I. models are being acquired/developed, revised, and modified without any defined development and validation process. The role of audit in providing independent assurance on A.I. integrity, use, and fit for purpose is inconsistent and needs to be addressed in organizations. 

The Bottom Line: A.I. is rapidly growing in variety, complexity, and use within organizations. It is quickly moving from a tactical focus to a strategic pillar that provides the infrastructure and backbone for strategy and decisions at all levels of the organization. Time and evolution of A.I. left ungoverned bring forth loss and potential disaster. Unfortunately, many organizations lack governance and architecture for A.I. risk management. Organizations need to provide a structured approach for A.I. governance, risk management, and compliance that addresses the A.I. governance, lifecycle, and architecture to manage A.I. and mitigate the risk they introduce while capitalizing on the significant value of A.I. when properly used.


The Challenges & Risk in Artificial Intelligence

This blog is an excerpt from GRC 20/20’s latest research paper, READ MORE in: A.I. GRC: The Governance, Risk Management & Compliance of A.I.

Artificial Intelligence (A.I.) has emerged as a disruptive force, propelling organizations into the future. Its transformative capabilities promise efficiency, accuracy, and scalability, providing a significant competitive edge. However, alongside the immense potential, A.I. usage poses unique risks and challenges that organizations must acknowledge and address.

While A.I. offers numerous opportunities for organizations, it is not without risks and challenges. Recognizing and addressing these challenges is necessary for the successful integration and responsible use of A.I. Some of the challenges of A.I. include:

  • Powerful. A.I. can effect significant change with minimal effort. While this is a major strength, it also means a little A.I. use by an unskilled worker could result in a profoundly negative outcome. Companies may be wise to adopt a “first do no harm” approach.
  • Complexity. One of the primary challenges in using A.I. is the complexity of its implementation. Effective A.I. integration requires substantial investment in technology, skilled labor, and time. Organizations need experts capable of governing, developing, maintaining, and managing A.I. systems, which often leads to costly upskilling or recruitment of new resources. Additionally, compatibility with existing systems and workflows must be considered, often requiring a complete overhaul of current practices.
  • Simplicity. Complexity is certainly a challenge, but the recent A.I. gold rush also has the opposite concern: simplicity. The necessity of data scientists, advanced technology infrastructure, and sizable ongoing support costs all served as a check that kept many companies from running amok with the technology. With generative A.I., however, that bar is removed. Anyone can leverage these technologies with limited resources and no training or consideration of the consequences. Essentially, the brakes have been removed while traveling at high speed.
  • Productivity. In many cases of A.I. use, there are productivity enhancements, such as GitHub Copilot, which suggests to the developer what their next code block should be. The developer either accepts, modifies, or declines the suggestion. This is the same type of technology iPhones use, GSuite uses, etc. How A.I. is being used is a major determining factor between IT tool governance and A.I. model governance. As organizations use productivity tools laced with A.I. (versus full process automation), we will see ITGCs, SOC 2 reports, and data agreements that will cover many of these issues.
  • Data Privacy. Data privacy is another critical concern with A.I. usage. A.I. systems are data-hungry, needing vast quantities of data for training and operation. This dependence raises significant issues regarding data security, user privacy, and compliance with regulations such as GDPR. Breaches can lead to severe reputational and financial damage.
  • Bias. Bias is an inherent problem in A.I. systems that poses significant ethical and practical concerns. If the data used for training is biased, the A.I. system can amplify and reproduce these biases, leading to unfair outcomes. Examples include discrimination in hiring through A.I.-powered recruitment tools or unequal treatment in A.I.-driven healthcare solutions.
  • Opaqueness. Another risk with A.I. is the “black box” problem, referring to the opaqueness of A.I. decision-making. Advanced A.I. models, particularly in deep learning, often make decisions in ways that humans cannot easily understand. This lack of transparency can be problematic, especially in sensitive areas like healthcare or finance where understanding the rationale behind decisions is crucial.
  • Legal Liability. A.I. systems also present a potential liability issue. Determining culpability when an A.I. system causes harm is not straightforward. Is the developer, user, or even the A.I. system itself at fault? Legal systems worldwide are currently grappling with these novel issues. This can be further broken down into: 
    • Supply Chain challenges because A.I. systems and models are being developed using open source or other code where the country of origin or its reliability is not known.
    • Inappropriate or unintended use, which could lead to legal exposures.

Addressing these challenges requires a multifaceted approach. Organizations must develop a comprehensive A.I. strategy (see A.I. GRC below), addressing the technical aspects of A.I. implementation and ethical, legal, and social considerations. They must also invest in upskilling their workforce and adopt a culture of continuous learning to keep up with this rapidly evolving technology. This also requires ongoing collaboration between organizations, regulators, and policymakers. These collaborations can help create a conducive environment for A.I. usage, with adequate regulations to manage risks without stifling innovation. As A.I. technology evolves, organizations must remain vigilant, adaptive, and ethical in their A.I. journey.

If the enterprise has existing model risk management (MRM) processes, A.I. models that are being used for process automation should be handled within the MRM framework that this paper promotes. It is critical not to reinvent the wheel and instead adapt existing MRM best practices.

Increasing Regulatory & Legal Pressure on A.I.

A cavalier approach to A.I. has led to a monumental lack of structured A.I. governance within organizations at a time when there is a growing need for enterprise visibility into A.I. and its use. Organizations should keep an accurate inventory of A.I. technology and models, documentation, and defined roles and responsibilities in A.I. governance, risk management, and compliance throughout the A.I. use lifecycle. A.I. is evolving rapidly, fundamentally reshaping various industries, including healthcare, finance, and defense. However, its escalating integration into everyday life has raised many legal and regulatory challenges. 

There are several legal issues tied to A.I. usage. These include:

  • Privacy and Data Protection and Leakage. A.I. systems often require large amounts of potentially sensitive data, posing risks to privacy rights and data protection. For instance, A.I. applications like facial recognition technology have raised significant privacy concerns. Therefore, jurisdictions worldwide are deliberating on stricter data protection laws to control A.I. misuse. This also includes data leakage, where responses may inadvertently include sensitive information from training data sets. And given the often global nature of organizations, issues around cross-border transfer of data also arise.
  • Bias and Discrimination. A.I. systems can potentially reflect and amplify societal biases, which could lead to discriminatory practices. There have been cases where A.I. algorithms used in hiring or criminal justice systems have produced biased outcomes. Lack of visibility into methodology also makes bias and discrimination difficult to diagnose and resolve.
  • Liability & Accountability. There is a legal ambiguity surrounding who should be held accountable when A.I. systems cause harm, or in the event of errors or failures that lead to compliance violations.
  • Intellectual Property Rights. Questions regarding A.I.’s creative outputs, whether these should be considered intellectual property, and if so, who should hold the rights, remain largely unresolved. There are also questions about intellectual property rights associated with Inputs, and whether or not these comply, for example with licensing and copyright terms and conditions.
  • Security. A.I. could be exploited for malicious purposes, like deepfakes or autonomous weapons, necessitating legal provisions for managing these risks.

Regulators are focusing on these legal issues with increased regulatory requirements and scrutiny in the governance and use of A.I. It is understood that A.I. is becoming a necessary part of business, but regulators want to ensure that A.I. is governed, developed, validated, used, and maintained properly to ensure the stability of organizations and industries. In that context, greater emphasis is being put on A.I. governance and risk management. Financial services regulators have been leading in this area: MRM is an established function in financial services, and recent statements highlight A.I. as “just another” model type to be included (e.g., the OCC Model Risk handbook and PRA SS1/23).

The regulatory landscape for A.I. varies globally, reflecting different cultural, societal, and political contexts. However, many jurisdictions are moving towards comprehensive legal frameworks to manage A.I. risks.

  • European Union. The EU has been at the forefront of A.I. regulation, with its proposed A.I. Act aiming to establish robust and flexible rules for A.I. use across its member states.
  • United States. The U.S. currently relies on sector-specific regulations, like the Fair Credit Reporting Act for A.I. in finance, as well as OCC 2011-12 / SR 11-7 for model risk management. However, there is an ongoing discussion on comprehensive federal A.I. laws. The White House has also published an A.I. Bill of Rights that will be applicable across the board.
  • United Kingdom. In the U.K., the PRA’s CP6/22 proposed that firms adopt five principles which it considers key to establishing an effective MRM framework. The principles were intended to complement existing requirements and supervisory expectations in force on MRM.
  • China. China’s A.I. regulation encourages innovation while setting certain limits to protect national security and public interests.
  • Other International Efforts. Organizations such as the OECD and UNESCO are working towards global A.I. standards and principles to ensure responsible A.I. use.

A.I. presents both enormous potential and significant challenges. As A.I. use grows, corresponding legal and regulatory frameworks must evolve in tandem to mitigate risks while promoting innovation. Harmonizing regulations across jurisdictions and developing comprehensive, flexible laws are key steps to responsibly integrating A.I. into society. The complexity and rapid evolution of A.I. necessitate ongoing research and dialogue among all stakeholders – regulators, A.I. developers, users, and society at large.

The Bottom Line: A.I. is rapidly growing in variety, complexity, and use within organizations. It is quickly moving from a tactical focus to a strategic pillar that provides the infrastructure and backbone for strategy and decisions at all levels of the organization. Time and the evolution of A.I. left ungoverned bring forth loss and potential disaster. Unfortunately, many organizations lack governance and architecture for A.I. risk management. Organizations need to provide a structured approach for A.I. governance, risk management, and compliance that addresses the A.I. governance, lifecycle, and architecture to manage A.I. and mitigate the risk it introduces while capitalizing on the significant value of A.I. when properly used.


2023 Buyers Guide: Third-Party Risk Management & Intelligence Solutions

Traditional brick-and-mortar business is outdated: physical buildings and conventional employees no longer define the organization. The modern organization is an interconnected web of relationships, interactions, and transactions that span traditional business boundaries. Layers of relationships go beyond traditional employees, including suppliers, vendors, outsourcers, service providers, contractors, subcontractors, consultants, temporary workers, agents, brokers, dealers, intermediaries, partners, and more. The modern business depends on and is defined by the governance, risk management, and compliance of third-party relationships to ensure the organization can reliably achieve objectives, manage uncertainty, and act with integrity in each of its third-party relationships. 

The range of regulations and resiliency risks is prompting many organizations to reevaluate and define their third-party risk management programs. This spans ESG and ESG-related regulations, such as Germany’s LkSG and the EU CSDDD, to more focused legal requirements, amounting to a regulatory soup of acronyms. A haphazard, department- and document-centric approach to third-party risk management compounds the problem and does not solve it. 

I am interacting with many organizations as they evaluate third-party risk management solutions. Organizations must carefully choose the right third-party risk solution and related intelligence/content integrations. There is a lot of marketing hype and claims that need to be carefully ‘weeded’ to find the reality of the best fit for an organization. Too often, organizations fail in their own ‘due diligence’ of what third-party risk solution best fits their needs. 

Sadly, I have seen people lose their jobs over selecting the wrong third-party risk software—more than once. 

Because of this, and being involved in many third-party risk RFPs worldwide, I am producing the 2023 Buyers Guide: Third-Party Risk Management Platforms & Intelligence Solutions.

Organizations need to address third-party risk with an integrated platform as well as the right third-party risk intelligence/content feeds. These feeds keep the organization current on developments throughout the world and the extended enterprise, providing real-time information about third-party performance, risk, and compliance and how it impacts the organization. 

It is time for organizations to step back and implement third-party risk solutions and integrate third-party risk intelligence/content that delivers value to the business. This value can be measured in the efficiency, effectiveness, resilience, and agility it delivers to the business.

Organizations need to be intelligent about what third-party risk technologies and intelligence services they deploy. Join GRC 20/20 for this in-depth analysis of how to evaluate and purchase third-party risk management and intelligence solutions . . .

  • Discover drivers and trends in third-party risk management 
  • Identify what is needed to go into a business case and ROI for purchasing third-party risk management solutions
  • Understand the breadth of capabilities and approaches software solutions deliver in third-party risk management
  • Determine what RFP requirements best fit your organization for third-party risk management

The 2023 Buyers Guide: Third-Party Risk Management & Intelligence Solutions provides GRC 20/20’s market research and understanding of the segments of the third-party risk management market to help organizations build their business case, understand what capabilities they need, and determine the RFP requirements they should consider in evaluating solutions in the market. This Research Briefing provides a framework to understand capabilities and build requirements for the RFP and selection process. 

ESG: Doing the right thing is never the wrong thing

I am back in London for a week of exciting engagements. The top of the list is ESG. In my ESG presentation tomorrow at the GRC: The Resilient & Responsible Enterprise event, I am leading with a quote from the sage of wisdom, Ted Lasso, who stated, “Doing the right thing is never the wrong thing” . . .

Managing ESG in a Dynamic World

ESG – Environmental, Social & Governance – is seeing increasing pressure from investors, regulators, lawmakers, employees, business partners, and citizen activists. Pressure is mounting from multiple fronts for organizations to implement ESG reporting in their organizations. In one respect, this is an evolution of the past’s sustainability and corporate social responsibility (CSR) efforts. However, ESG is broader with more momentum. Where CSR and sustainability were too often (but not always) pushed from a marketing perspective, ESG has the momentum and force to become a significant measurement of the organization’s integrity. Integrity is what the organization commits to in its values, made a reality throughout the organization and the extended enterprise. 

ESG is more than the E (environmental). Too often, organizations see that lead E, and they perceive that ESG is just about environmental values and climate change. It is so much more than this. The S (social) and the G (governance) are just as important as the E in ESG. There are many standards and various definitions for ESG; here is the high-level scope of it . . .

  • E = Environmental. Measures and reports on the organization’s values and commitment to stewardship of the natural world and environment. It includes reporting and monitoring the organization’s environmental initiatives for climate change, waste management, pollution, resource use and depletion, greenhouse gases, etc.
  • S = Social. Measures and reports on the values and commitments and how the company treats people. This includes employee and customer/partner relations, human rights (e.g., anti-slavery), diversity and inclusion, anti-harassment and discrimination, the privacy of individuals (both employees and others), working conditions and labor standards (e.g., child labor, forced labor, health and safety), and how the company participates and gives back to society and the communities it operates within.
  • G = Governance. Measures and reports on the culture and behaviors of the organization in context and alignment with its values and commitment. This includes finance and tax strategies, whistleblower and reporting of issues, resiliency, anti-bribery and corruption, security, board/executive diversity and structure, and overall transparency and accountability.

ESG crosses business boundaries. Brick-and-mortar walls and traditional employees do not define the modern organization. The modern organization is a web of third-party relationships: vendors, suppliers, outsourcers, service providers, contractors, consultants, temporary workers, intermediaries, agents, partners, and more. To truly deliver on ESG requires monitoring and managing shared values and integrity throughout the organization’s extended enterprise. Legislation and regulation are focused on this, such as the European Union’s Directive on Corporate Due Diligence and Accountability and Germany’s corresponding Due Diligence Act (to name two of many). 

THE CHALLENGE: Delivering 360° Situational Awareness of ESG

Business is complex – gone are the years of simplicity in organizational operations and processes. Managing ESG amid complexity and change is a significant challenge for boards, executives, and all levels of management as they seek to execute their ESG directives and deliver results to stakeholders.

The modern organization is:

  • Distributed. Organizations have operations complicated by a web of ESG-related data scattered across many different systems. The organization is an interconnected mesh of ESG objectives, transactions, and interactions that span business units and even extend beyond the organization to third-party suppliers and vendors. Complexity grows as these interconnected relationships, processes, and systems nest themselves in ESG intricacy.
  • Dynamic. Organizations are in a constant state of flux. Leaders constantly adapt strategies and solutions to remain competitive, sustainable, and compliant. This results in ESG processes and information that are continually growing and changing. Exacerbating all this chaos is the growing abundance of ESG regulatory structures policing it. This complicates the ESG environment of any organization, as any new change must be carefully considered in ESG impact and reporting, placing tremendous stress on leaders attempting to keep pace with evolving business. 
  • Disrupted. Organizations are constantly managing high volumes of structured and unstructured ESG-related information across multiple systems, processes, and relationships to see the big picture of ESG. The velocity, variety, and volume of ESG scope and data can be overwhelming – disrupting the organization and slowing it down at a time when it needs to be agile and fast.

In 1996, Fritjof Capra made an insightful observation on living organisms and ecosystems that rings true when applied to governing business today: “The more we study the major problems of our time, the more we come to realize that they cannot be understood in isolation. They are systemic problems, which means that they are interconnected and interdependent.”

Capra’s point is that biological ecosystems are complex and interconnected and require a holistic understanding of the intricacy of relationships as an integrated whole rather than a dissociated collection of parts. Change in one segment of the ecosystem has cascading effects and impacts the entire ecosystem. This is true in managing ESG in today’s organizations. Dissociated data, systems, and processes leave the organization with fragments of truth that fail to connect and see the big picture of ESG across the enterprise. Simply managing ESG data across different systems in spreadsheets and documents is prone to errors, unreliable, impossible to audit, and very costly to maintain.

THE BOTTOM LINE: Lacking an integrated view of ESG results in business processes, partners, employees, and systems that behave like leaves blowing in the wind, constantly moving and churning but often only creating a further mess. Modern business requires a coordinated ESG strategy and process across the organization enabled by technology for efficient, effective, and agile ESG reporting and monitoring. 

USA vs. UK/Europe in Risk & Compliance Approaches

I am preparing for another trip next week to the United Kingdom/Europe and reflecting on the differences in GRC – governance, risk management, and compliance – between North America and the UK/Europe. BTW: if you want to meet next week in London and discuss GRC strategy, process, technology, and/or content/intelligence solutions . . . let me know . . .

OK, let me be clear. What I am about to state is generalizing. There are exceptions, but this is the overall picture of the differences between North America (USA and Canada) and the U.K./Europe in the context of GRC, particularly risk management and compliance.

Consider (generalizing as there are exceptions) . . . 

  • Risk management. The USA too often approaches risk management (and its acronyms of ERM, ORM, and IRM) as a compliance exercise stemming from SOX. In North America, risk starts with risk and control register mapping. It is a bottom-up approach.
    • Risk management in Europe is most often aligned with ISO 31000 and takes more of a business perspective that starts with objectives (e.g., entity, division, department, process, project, asset). Risks are understood and managed in the context of objectives, and the business is more involved in risk management because it provides more value to the business. It is often a top-down approach aligned with strategy and performance. I see a lot more board-level involvement in risk management in Europe, where risk management is seen as a business tool and enabler; several RFPs this year have been driven by the board. In Germany, IDW PS 340 requires enterprise risk normalization, aggregation, and quantification up to the board level.
  • Compliance. Compliance is very different between the USA and Europe. On the product safety side, the USA generally has a ‘prove it is harmful’ perspective, while Europe has a ‘prove it is safe’ perspective. The regulatory regimes also take very different approaches. The USA has a checklist/tickbox mentality: North American firms want to be told what they have to do, check the checkboxes, and then get a get-out-of-jail-free card.
    • Europe has an outcome/principle-based approach to compliance. Regulators generally do not create checklists but define principles and objectives that must be achieved. For example, the UK Consumer Duty has a core principle as its foundation, which rolls into three sub-principles (duties), focusing on four outcomes. There is no detailed checklist. The focus is on principles being embraced in the culture and conduct of the organization and measured by outcomes.
    • This outcome/principle-based approach started with what was the UK FSA, which later became the UK FCA, and rolled into the EU’s Better Regulation policy twenty years ago. 
    • This outcome/principle-based approach to compliance requires a risk-based approach, as the focus is on principles and outcomes, or we can say objectives (like the European approach to risk management). Everything is objective-based. How one organization complies to achieve principles/outcomes/objectives may differ from another, but the achievement of objectives/outcomes is measured. This requires a different way of approaching and thinking about compliance than what you see in North America.
    • Focusing on principles and outcomes is very different from detailed checklists of rules. It requires a deeper focus on ethics and culture driving conduct.
  • ESG. This is another difference. The USA does not have broad, sweeping legislation tackling ESG like the EU does with CSRD, CSDDD, and CSRS, or individual country laws like Germany’s LkSG (on the third-party side). In the USA, some fragments tackle parts of ESG (e.g., FCPA, Conflict Minerals, the California Transparency in Supply Chains Act), but there is no broad legislation mandating expansive ESG reporting, assurance, attestations, and due diligence like Europe has. ESG in the USA is too often thought of as only the E for the environment. Too many think ESG regulation is the proposed SEC climate change regulation (should it ever be finalized). But the reality is that it is only part of the E in ESG, not even all of the E. The political environment is in a stalemate, with back and forth on things like the SEC climate disclosure rules and a forthcoming Supreme Court ruling that might undermine all of that.
    • In Europe, the EU CSRD, CSDDD, and CSRS (the ESG trifecta) impact 50,000 firms that have to start doing ESG reporting (including many North American firms with operations in Europe). And there is Germany’s LkSG, which imposes global requirements for ongoing due diligence in supply chains. 
    • ESG is a huge focus in Europe; in North America, it has fragmented attention but not the same momentum. The exception is firms that have significant operations in Europe.
    • I discuss this in detail in the 2023: How to Market & Sell ESG Solutions & Services Research Briefing.
  • Privacy is another example. The EU has GDPR while the USA has . . . nothing at the Federal level . . .

Navigating Complexity & Chaos: Approaching Regulatory Requirements with Control Automation

GRC 20/20’s Michael Rasmussen will be speaking on the topic of this blog in the webinar: Navigating Complexity & Chaos: Approaching Regulatory Requirements Across Jurisdictions with Control Automation!

Complex problems often have a solution that is understandable, simple, and uncomplicated – and usually wrong. 

How can we expect that to be different from business operations? The years of simplicity are gone!

Exponential growth and change in risks, regulations, globalization, distributed operations, competitive velocity, technology, and business data encumber organizations of all sizes. Keeping business strategy, performance, uncertainty, complexity, and change in sync is a significant challenge for boards, executives, and management professionals throughout all levels of the business.

The physicist Fritjof Capra once said, “The more we study the major problems of our time, the more we come to realize that they cannot be understood in isolation. They are systemic problems, which means that they are interconnected and interdependent.” Capra was making the point that ecosystems are complex and interconnected and require a holistic, contextual awareness of the intricacy of interconnectedness as an integrated whole – rather than a dissociated collection of systems and parts. Risk and control in one area have cascading effects that impact the entire ecosystem . . .

[The rest of this blog can be read on the VOQUZ Labs blog, where GRC 20/20’s Michael Rasmussen is a guest author]

2023 GRC Trends: Engagement

In the first post, 2023 Governance, Risk Management & Compliance, we reviewed the top five 2023 GRC trends. Then we dove deep into the first trend, the need for GRC agility, explored GRC resilience in two posts, moved on to Integrity (ESG), then GRC Accountability . . . and we now continue with the fifth trend of five, ENGAGEMENT . . .

ENGAGEMENT is the fifth global trend in the GRC market for solutions and services.

GRC (Governance, Risk Management & Compliance) is as relevant to the front office as it is to the back office. The front lines of the business use GRC systems and need engaging user experiences. 

It is not just the front lines. All levels of the organization interact with and use GRC technologies: taking assessments, reading policies, going through training, reporting incidents, evaluating reports, diving through dashboards, and more.

Employee engagement in GRC requires GRC technologies to extend across the organization: Even to extended third-party relationships such as vendors, suppliers, agents, contractors, outsourcers, services providers, consultants, and temporary workers. Engaging stakeholders at all levels of the organization requires GRC technologies to be relevant, intuitive, easy to use, and attractive. Employees live their personal and professional lives in a social-technology-permeated world. GRC needs to engage employees and not frustrate or bore them. It has to be easy to use and interact with.

It has been stated that:

Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.

A primary directive of GRC is to provide GRC engagement that is simple yet gets the job done. Like Apple, with its innovative technologies, organizations must approach GRC engagement in a way that re-architects the way it works as well as the way it interacts. The GRC goal is simple; it is itself simplicity. Simplicity is too often equated with minimalism. Yet true simplicity is more than just the absence of clutter or the removal of embellishment. It’s about offering the right GRC information in the right place when the individual needs it. It’s about bringing interaction and engagement to GRC process and data. GRC interactions should be intuitive.

I have been evaluating GRC technologies for 23 years and find that many have average to poor user experiences. Even some who are recognized as GRC leaders, who would have you believe that their platform could solve the world’s problems, have interfaces that are overly complex, non-intuitive, confusing, and sometimes downright confounding. 

What is needed at the core of GRC engagement is a human firewall.

Firewalls protect us. In buildings, it is a wall intended to shield and confine a fire to an area to protect the rest of the building. In a vehicle, it is a metal shield protecting passengers from heat and potential fire in the engine. In network security, it is the logical ingress and egress points securing a network. 

Within organizations, there is another firewall that is the most essential but the most overlooked: the ‘Human Firewall.’

Humans are the weakest area of any governance, risk management, and compliance (GRC) strategy. Humans make mistakes, do dumb things, can be negligent, and can also be malicious. In the technical world, we can lock things down, and IT operates in binary. In the world of human interaction, it is not binary but shades of grey. Nurturing corporate culture and behavior is critical. The Human Firewall is the greatest protection of the organization. At the end of the day, people make decisions, initiate transactions, and have access to data and processes. 

A decade ago, I was involved with The Institute of Risk Management in London in developing Risk Culture: Resources for Practitioners. In this guidance, there is the A-B-C model. The ‘A’ttitudes of individuals shape the ‘B’ehavior of these individuals and the organization, forming the ‘C’ulture of the organization. And that culture, in turn, has a symbiotic effect, further influencing attitudes and behavior. Culture is one of the organization’s greatest assets. It can spiral out of control and become corrupt quickly but can take years, or even decades, to nurture and build in the right direction. The ‘Human Firewall’ is the greatest bastion/guardian of the organization’s integrity and culture. In today’s focus on ESG – environmental, social, and governance – it is the Human Firewall that makes ESG integrity a reality in the behavior and culture of the organization.

Every organization needs a Human Firewall. So what is a Human Firewall? What is it composed of? The following are essential elements:

  • Policy Management. Policies govern the organization, address risk and uncertainty, and provide the boundaries of conduct for the organization to act with integrity. The organization needs well-designed, well-written policies that are easy to understand, apply to the context they govern, and follow a consistent writing style. Policies must be maintained and monitored, as they provide the foundation for the Human Firewall.
  • Policy Engagement. Well-written and maintained policies are not enough; they must also be communicated to and engaged with the workforce. It does the organization no good, and can actually be a legal liability, to have policies that establish conduct but are never communicated to the workforce. All policies should be in a common corporate policy portal so they can be easily accessed, with a regular communication and engagement plan. 
  • Training. The next part of the Human Firewall is training. Individuals need training on policies and procedures on proper and improper conduct in the organization’s processes, transactions, and interactions. Training applies policies to real-world contexts and aids understanding, strengthening the Human Firewall.
  • Assessments & Controls. Employees at all levels of the organization need a simplified and engaging user experience to answer GRC-related questions on objectives, risks, and controls.
  • Dashboards and Reporting. From executives to operational management, risk owners need easily understood and accessible insight into the status of objectives, risks, controls, and issues. 
  • Issue Reporting. Things will go wrong. Bad decisions will be made, inadvertent mistakes will happen, and the malicious insider will do something wrong. Part of the Human Firewall is providing hotlines, whistleblower systems, management reports, and other issue-reporting mechanisms so that employees in the front office and back office can report where things are breaking down or going wrong before they become significant issues for the organization. 
  • Extended Enterprise. Brick-and-mortar walls and traditional employees do not define the modern organization. The modern organization is an extended web of relationships: suppliers, vendors, outsourcers, service providers, consultants, temporary workers, contractors, and more. You walk down the halls of an organization, and half the people you walk by, the insiders, are no longer employees; they are third parties. The Human Firewall also has to extend across these individuals, a core part of the organization’s processes. Policies, training, and issue reporting should encompass the web of third-party relationships that shape and form today’s organization.

Where are you in building, maintaining, and nurturing your organization’s Human Firewall to improve GRC Engagement?

2023 GRC Trends: Accountability

In the first post, 2023 Governance, Risk Management & Compliance, we reviewed the top five 2023 GRC trends. Then we dove deep into the first trend, the need for GRC agility, explored GRC resilience in two posts, moved on to Integrity (ESG) . . . and we now continue with the fourth trend of five, ACCOUNTABILITY . . .

The fourth global trend in the GRC market for solutions and services is ACCOUNTABILITY.

In the Fellowship of the Ring, Frodo asks Gandalf, “Why was I chosen?” Gandalf replies . . .

‘Such questions cannot be answered,’ said Gandalf. ‘You may be sure that it was not for any merit that others do not possess. But you have been chosen, and you must therefore use such strength and heart and wits as you have.’ 

Whenever I think of accountability, this statement by Gandalf comes to mind. Hopefully, there is merit in those given GRC-related accountability, but that is evidently not always the case. Those who are given GRC accountabilities need to pursue them with all the strength and heart and wits that they have.

Accountability is different from responsibility: responsibilities I can give to someone else. Accountability is something I own and cannot give to others. If there are issues of risk, compliance, or control . . . then I have to face up to it and own it (or be praised when things go right). That is accountability.

Too often, GRC-related accountabilities have been passed around the organization like a hot potato. No one wanted to be accountable for risk and compliance. But things are changing. We are entering an era of greater GRC accountability that executives and the board must pay close attention to. There are RFPs in the GRC solution space that are board-driven because of this greater accountability.

Consider the following . . .

  • Accountability Regimes. There is a growing array of accountability regimes around the world. This started with the United Kingdom’s Senior Managers & Certification Regime (UK SMCR), and other jurisdictions have followed suit, such as Ireland (SEAR), Australia (what was BEAR, now FAR), Hong Kong (MIC), Singapore (IAC), and now South Africa as the latest. In the UK, financial services organizations have to define 28 Senior Management Functions (SMFs): executives accountable for different areas of risk, compliance, control, and conduct. If there is willful misconduct, such an executive can go to jail. If there is negligence or a lack of due diligence, that SMF can be personally fined. UK SMCR is now under review to see how it can be improved.
    • Consider the most recent enforcement, where the CIO of a bank was personally fined £81,620 for a third-party risk failure.
  • US Department of Justice. The updates to the DoJ enforcement policies are clear on individual accountability. The DoJ expects companies to provide information on anyone culpable, and it requires organizations to incentivize executives who act ethically and claw back compensation from those who do not. Individual accountability is the DoJ’s top priority in corporate criminal cases. The DoJ also states that CEOs and CCOs/CECOs must certify that their compliance programs are effectively designed and operational.
  • Case law. In In re McDonald’s Corporation Stockholder Derivative Litigation, the Delaware Court of Chancery stated that the fiduciary obligation in the earlier In re Caremark decision also applies to non-director officers. The trajectory is to hold corporate officers/executives, particularly CCOs/CECOs, personally liable for corporate misconduct.
  • Regulators. In Rule 3110, FINRA has focused on the liability of CCOs/CECOs in broker-dealers. And CCOs/CECOs are also finding exposure to personal liability under the Investment Advisers Act of 1940 and the Securities Exchange Act of 1934.
  • ESG. In the context of ESG, we are seeing increased pressure on board members and executives. In some cases, they are being voted out if they do not hit ESG-related metrics.
  • Personal Liability (criminal and civil). The former CISO of Uber, Joseph Sullivan, was convicted on federal criminal charges for his role in covering up a 2016 data breach in which the personal information of 57 million Uber users was stolen.

This is just a smattering of developments, touched on briefly; many more could be added. But the writing is on the wall: greater and greater accountability is being placed on the board and senior executives for aspects of GRC.

So what do we do?

Greater accountability means that these roles need greater insight into GRC-related data to have their “strength and heart and wits” about them. The old era of documents, spreadsheets, and emails will not work in this new era of GRC Accountability. Regulators, law enforcement, auditors, and opposing counsel in civil cases have become aware that evidence of risk management and compliance can be fictitiously manufactured in documents and spreadsheets to cover a trail. They increasingly want to see what was assessed or communicated on what day and at what time, and if something changed, who changed it and what was changed. Organizations need complete audit trails and systems of record/truth for GRC-related activities to support accountability.
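To make the “who changed it, and what was changed” point concrete, here is a minimal, hypothetical sketch of a tamper-evident audit trail. All class and field names here are my own illustration, not any particular GRC platform’s API. Each record is immutable and hash-chained to the record before it, so a record quietly edited after the fact breaks verification:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditEntry:
    """One immutable record: who changed what, when, and how."""
    actor: str      # who made the change
    object_id: str  # what was changed (e.g., a risk assessment ID)
    action: str     # e.g., "assessed", "updated", "approved"
    detail: str     # description of the change
    timestamp: str  # ISO 8601, UTC
    prev_hash: str  # hash of the previous entry, chaining the log

    def entry_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


class AuditTrail:
    """Append-only log; any after-the-fact edit breaks the hash chain."""

    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, actor: str, object_id: str,
               action: str, detail: str) -> AuditEntry:
        prev = self._entries[-1].entry_hash() if self._entries else "genesis"
        entry = AuditEntry(actor, object_id, action, detail,
                           datetime.now(timezone.utc).isoformat(), prev)
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Walk the chain; False means a record was altered or reordered."""
        prev = "genesis"
        for e in self._entries:
            if e.prev_hash != prev:
                return False
            prev = e.entry_hash()
        return True
```

A real system of record would persist these entries in write-once storage with access controls; the point of the sketch is simply that an audit trail designed this way can answer “who, what, when” and prove its evidence was not manufactured after the fact.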

And those functions that are accountable need real-time dashboards and reports so they can uphold their fiduciary duties and accountabilities in the organization.