This blog is an excerpt from GRC 20/20’s latest research paper, READ MORE in: A.I. GRC: The Governance, Risk Management & Compliance of A.I.

A.I. technology and models are used across industries to analyze, predict, generate, and represent information, decisions, and outcomes that impact operations and business strategy. A range of departments, functions, and roles are beginning to rely on A.I. as a critical foundation of business processes that support operations, long-term strategic planning, and day-to-day tactical decisions.

A.I. technology spans from predictive analytics, machine learning, deep learning, natural language processing, and robotic process automation to the new era of generative A.I. Within these various approaches, there are three core components:

  • Input Component. Delivers assumptions and data to a model.
  • Processing Component. Analyzes inputs to produce predictions, decisions, and content. A.I. systems often include a continuous-learning component/engine, which sets them apart from conventional data processing systems.
  • Reporting/Output Component. Translates the processing into useful business information.
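A minimal sketch of how these three components might fit together in code, assuming a hypothetical scikit-learn-style estimator with fit/predict methods (all class and field names here are illustrative, not taken from the research paper):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class InputComponent:
    """Delivers assumptions and data to the model."""
    assumptions: dict[str, Any]
    records: list[dict[str, float]]

class ProcessingComponent:
    """Analyzes inputs into predictions; a learning step lets it keep adapting."""
    def __init__(self, estimator):
        self.estimator = estimator              # any object with fit/predict

    def learn(self, features, labels):
        self.estimator.fit(features, labels)    # the continuous-learning engine

    def predict(self, features):
        return self.estimator.predict(features)

class ReportingComponent:
    """Translates raw model output into useful business information."""
    def summarize(self, predictions) -> str:
        flagged = sum(1 for p in predictions if p >= 0.5)
        return f"{flagged} of {len(predictions)} cases flagged for review"
```

Each piece is a separate surface for governance: the assumptions and data feeding the input component, the learning step inside the processing component, and the translation into business language in the reporting component.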

While the common understanding of models is that they have three components – input, processing, and reporting/output – the reality is that each of these component areas contains multiple parts. The parts within input, processing, and reporting connect to one another and carry an array of assumptions, data, and analytics. Adding to this complexity are the human and process elements intertwined throughout the business use of A.I., weaving together the manual processing and technology integration needed to use and interpret A.I. As the environment changes, A.I. models themselves have to change to accurately represent the world in which they exist.

Models are used to represent scenarios and produce outcomes through inputs of values, relationships, events, situations, expressions, and characteristics. This is the ‘input component’ of a model. The real world is a complex web of interrelationships and variables that no model can fully represent. Inputs are a simplified abstraction of the real world used to process and report quantitative estimates of outcomes. The challenge is that wrong assumptions, bias, and bad (or incomplete) information are compounded by the complexity of real-world variables: models can fail in their validity and reliability, and they cannot process variables that sit outside their input scope. Validity speaks to accuracy, whereas reliability speaks to repeatability; something can be very reliable but not at all accurate. Complex models risk losing both validity and reliability as the focus shifts from analyzing the impact of key critical variables to the fragile interaction of a wide variety of variables. They will reliably provide an outcome, but that outcome will increasingly be neither accurate nor valid.
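A toy numerical illustration of that distinction, using made-up values for a hypothetical biased model: the estimates are tightly clustered (high reliability) yet far from the true value (low validity).

```python
import statistics

true_value = 100.0

# A hypothetical model that is very repeatable but systematically biased.
estimates = [120.1, 119.8, 120.3, 120.0, 119.9]

spread = statistics.stdev(estimates)                   # reliability: repeatability
error = abs(statistics.mean(estimates) - true_value)   # validity: accuracy

print(f"spread (reliability): {spread:.2f}")   # small spread -> highly reliable
print(f"error  (validity):    {error:.2f}")    # large error  -> not valid
```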

When A.I. Fails

Organizations are in the early stages of becoming highly dependent upon A.I. to support critical business processes and decisions, and A.I. is already critical to many businesses. The expanding use of A.I. across the organization reflects how much it can improve business decisions. Still, A.I. comes with risk when internal errors or misuse result in bad decisions.

Unfortunately, as much value as A.I. provides, it also exposes the organization to significant risk. Ironically, the A.I. tools often used to model and predict risk can themselves be a significant risk exposure if not governed appropriately. A.I. model risk is the potential for adverse consequences from decisions based on incorrect or misused A.I.: financial loss, poor business and strategic decision-making, legal and regulatory issues, and damage to the organization’s brand. Another example is the disclosure of restricted information to “public A.I.” when employees register for and use tools like ChatGPT for business purposes. The most dangerous thing for an organization – a moral hazard – is to develop complete trust in whatever A.I. produces and delivers.

A.I. should inform decisions and raise points for consideration rather than being relied on solely to make decisions – especially those that are business critical. A.I. that is inappropriately used and controlled brings many risks to organizations. These include:

  • Dynamic & Changing Environment. A.I. models are not static: new A.I. models and uses are added, old ones are retired, and current A.I. technology and models are constantly changing. Compounding this is the constant change in risk, regulations, and business that keeps the environment A.I. is supposed to represent in a state of flux. Organizations face significant risk when the environment changes yet A.I. and its associated data inputs fail to evolve with it. A.I. models that were accurate last year may not be accurate this year.
  • Lack of Governance & Control. The pervasive use of A.I. has also introduced what can be called Shadow A.I. – a form of Shadow IT in which the line of business bypasses IT and uses technology that has not been approved. This increases risk through inappropriate and unauthorized use that exposes the organization.
  • More Than the A.I. Model-Processing Component. The use of A.I. involves more than the model-processing component: it is an aggregation of components spanning input, processing, and reporting functions that integrate and work together, including the overall A.I. modeling and use project. Organizations risk fixating on the model-processing component alone while the many supplementary components undergo rapid, ungoverned change. Bad input data means bad decisions from A.I.: the quality of A.I. depends upon the quality of the input data and assumptions, and errors in inputs and assumptions lead to inaccurate processing and outputs.
  • Errors in Input, Processing & Reporting. Without proper development and validation, A.I. may contain errors that produce inaccurate outputs. Errors can occur throughout the A.I. lifecycle, from design through use, and can be found in any or all of the input, processing, and reporting components; if data is not annotated correctly, the outcome will always be wrong (a minimal input-validation sketch follows this list). These errors may originate in the development of the A.I. model’s inputs, processing, and reporting, or be introduced through changes and modifications to model components over time. Errors may also occur when A.I. fails to keep pace with shifting business use and a changing business environment.
  • Undiscovered Model Flaws. It is possible for an A.I. model to initially appear to generate highly predictive output despite serious flaws in its design or training. In this case, the A.I. solution may gain credibility and acceptance throughout the organization until, eventually, some combination of data exposes the flaw. False positives are part of any predictive system but can be extremely convincing with A.I., leading to greater long-term reliance on a flawed model.
  • Misuse of Models. A significant risk comes from A.I. that is used incorrectly. An accurate A.I. model will produce accurate results yet lead the business into error if used for purposes it was never designed for. Organizations need to ensure that models are both accurate and appropriately used, and they face risk when applying existing A.I. to new areas without validating it in that context.
  • Misrepresentation of Reality. The very nature of A.I. models means they are a representation of reality, not reality itself. They are simplifications of that reality and, in the process of simplification, may introduce assumptions and errors due to bias, misunderstanding, ignorance, or lack of perception. This risk is a particularly hot topic in generative A.I., which may draw on inaccurate data, but it exists across all A.I.
  • Limitations in the Model. A.I. models approximate the real world with a finite set of inputs and variables (in contrast to the infinite set of circumstances and variables in the real world). Risk is introduced when the assumptions an A.I. model is built upon are inaccurate, misunderstood, missing, or misapplied.
  • Pervasiveness of Models. Organizations bear significant risk when A.I. can be used at any level without accountability and oversight. Anyone can acquire and/or access A.I. that may or may not serve the organization properly, and organizations struggle to identify the A.I. in use not only within the traditional business but also across third-party relationships. The problem grows as existing A.I. models are modified and adapted to new purposes; the original A.I. model developer in the organization often does not know how others are adapting and using the A.I.
  • Big Data and Interconnectedness. The explosion of inputs and variables from massive amounts of data within organizations has made A.I. use complex across input, processing, and reporting components. The interconnectedness of disparate information sets makes A.I. models more complex and challenging to control, leading to a lack of standardization, inconsistent use of data, and data integrity issues both across the systems that feed into models and within the A.I. itself.
  • Inconsistent Development and Validation. A.I. models are being acquired/developed, revised, and modified without any defined development and validation process. The role of audit in providing independent assurance on A.I. integrity, use, and fit for purpose is inconsistent and needs to be addressed in organizations. 
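As referenced in the ‘Errors in Input, Processing & Reporting’ item above, here is a minimal sketch of the kind of input checks a governed A.I. pipeline might run before data ever reaches the processing component. The field names and allowed labels are hypothetical:

```python
from typing import Iterable

REQUIRED_FIELDS = {"customer_id", "amount", "label"}   # hypothetical input schema
ALLOWED_LABELS = {"approve", "review", "decline"}      # hypothetical annotations

def validate_records(records: Iterable[dict]) -> list[str]:
    """Return a list of data-quality issues found before the data reaches the model."""
    issues = []
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            issues.append(f"record {i}: missing fields {sorted(missing)}")
        label = rec.get("label")
        if label is not None and label not in ALLOWED_LABELS:
            issues.append(f"record {i}: unexpected annotation {label!r}")
    return issues

# A mis-annotated record is caught before it can distort processing and outputs.
print(validate_records([{"customer_id": 1, "amount": 50.0, "label": "aprove"}]))
```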

The Bottom Line: A.I. is rapidly growing in variety, complexity, and use within organizations. It is quickly moving from a tactical focus to a strategic pillar that provides the infrastructure and backbone for strategy and decisions at all levels of the organization. Left ungoverned, A.I. and its evolution over time invite loss and potential disaster. Unfortunately, many organizations lack governance and architecture for A.I. risk management. Organizations need a structured approach to A.I. governance, risk management, and compliance – one that addresses A.I. governance, lifecycle, and architecture – to manage A.I. and mitigate the risks it introduces while capitalizing on the significant value of A.I. when properly used.
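One way such a structured approach can start is with a central A.I. inventory that records ownership, intended use, and validation status for every model in use. A minimal sketch, with illustrative field names rather than a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIModelRecord:
    """One entry in a central A.I. inventory used for governance and oversight."""
    name: str
    owner: str                      # accountable business owner
    intended_use: str               # the purpose the model was validated for
    data_sources: list[str] = field(default_factory=list)
    last_validated: date | None = None
    approved: bool = False

inventory = [
    AIModelRecord(
        name="credit-triage",
        owner="risk-ops",
        intended_use="prioritize manual review of credit applications",
        data_sources=["loan_applications"],
        last_validated=date(2024, 1, 15),
        approved=True,
    ),
]

# Surface any model that has never been validated or is not approved for use.
needs_attention = [m.name for m in inventory if m.last_validated is None or not m.approved]
print(needs_attention)
```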

