The Challenges & Risks in Artificial Intelligence
This blog is an excerpt from GRC 20/20’s latest research paper. Read more in: A.I. GRC: The Governance, Risk Management & Compliance of A.I.
Artificial Intelligence (A.I.) has emerged as a disruptive force, propelling organizations into the future. Its transformative capabilities promise efficiency, accuracy, and scalability, providing a significant competitive edge. However, alongside the immense potential, A.I. usage poses unique risks and challenges that organizations must acknowledge and address.
While A.I. offers numerous opportunities for organizations, it is not without risks and challenges. Recognizing and addressing these challenges is necessary for the successful integration and responsible use of A.I. Some of the challenges of A.I. include:
- Powerful. A.I. can effect significant change with minimal effort. While this is a major strength, it also means that even limited A.I. use by an unskilled worker could result in a profoundly negative outcome. Companies may be wise to adopt a “first do no harm” approach.
- Complexity. One of the primary challenges in using A.I. is the complexity of its implementation. Effective A.I. integration requires substantial investment in technology, skilled labor, and time. Organizations need experts capable of governing, developing, maintaining, and managing A.I. systems, which often leads to costly upskilling or recruitment of new resources. Additionally, compatibility with existing systems and workflows must be considered, often requiring a complete overhaul of current practices.
- Simplicity. Complexity is certainly a challenge, but the recent A.I. gold rush also raises the opposite concern: simplicity. The need for data scientists, advanced technology infrastructure, and sizable ongoing support costs once served as a check that kept many companies from running amok with the technology. With generative A.I., however, that bar is removed. Anyone can leverage these technologies with limited resources and no training or consideration of the consequences. Essentially, the brakes have been removed while traveling at high speed.
- Productivity. Many A.I. use cases are productivity enhancements, such as GitHub Copilot, which suggests to the developer what their next code block should be. The developer either accepts, modifies, or declines the suggestion. This is the same type of technology found in iPhones, G Suite, and similar tools. How A.I. is being used is a major determining factor between IT tool governance and A.I. model governance. As organizations use productivity tools laced with A.I. (versus full process automation), we will see ITGCs, SOC 2 reports, and data agreements cover many of these issues.
- Data Privacy. Data privacy is another critical concern with A.I. usage. A.I. systems are data-hungry, needing vast quantities of data for training and operation. This dependence raises significant issues regarding data security, user privacy, and compliance with regulations such as GDPR. Breaches can lead to severe reputational and financial damage.
- Bias. Bias is an inherent problem in A.I. systems that poses significant ethical and practical concerns. If the data used for training is biased, the A.I. system can amplify and reproduce these biases, leading to unfair outcomes. Examples include discrimination in hiring through A.I.-powered recruitment tools or unequal treatment in A.I.-driven healthcare solutions (a simple fairness check is sketched after this list).
- Opaqueness. Another risk with A.I. is the “black box” problem, referring to the opaqueness of A.I. decision-making. Advanced A.I. models, particularly in deep learning, often make decisions in ways that humans cannot easily understand. This lack of transparency can be problematic, especially in sensitive areas like healthcare or finance where understanding the rationale behind decisions is crucial.
- Legal Liability. A.I. systems also present a potential liability issue. Determining culpability when an A.I. system causes harm is not straightforward. Is the developer, user, or even the A.I. system itself at fault? Legal systems worldwide are currently grappling with these novel issues. This can be further broken down into:
- Supply chain challenges, because A.I. systems and models are often built on open-source or other third-party code whose country of origin or reliability is not known.
- Inappropriate or unintended use, which could lead to legal exposures.
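To make the bias concern above concrete, the following is a minimal sketch of a disparate impact check on a binary A.I. decision (for example, hire versus no-hire), using the commonly cited four-fifths rule as an illustrative threshold. The group labels, example data, and helper functions are hypothetical, and a real fairness review involves far more than a single ratio.

```python
# Minimal, illustrative disparate impact check for a binary A.I. decision.
# Group labels and the 0.8 "four-fifths rule" threshold are assumptions
# for illustration, not a compliance standard.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, with selected in {0, 1}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        selected[group] += outcome
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = selection_rates(decisions)
    reference_rate = rates[reference_group]
    return {g: rate / reference_rate for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes from an A.I.-assisted screening tool.
    outcomes = ([("group_a", 1)] * 40 + [("group_a", 0)] * 60
                + [("group_b", 1)] * 25 + [("group_b", 0)] * 75)
    ratios = disparate_impact(outcomes, reference_group="group_a")
    for group, ratio in ratios.items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical data, group_b is selected 25% of the time versus 40% for group_a, giving an impact ratio of about 0.63 and flagging the tool for review.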
Addressing these challenges requires a multifaceted approach. Organizations must develop a comprehensive A.I. strategy (see A.I. GRC below) that addresses both the technical aspects of A.I. implementation and the ethical, legal, and social considerations. They must also invest in upskilling their workforce and adopt a culture of continuous learning to keep up with this rapidly evolving technology. This also requires ongoing collaboration between organizations, regulators, and policymakers, which can help create a conducive environment for A.I. usage, with adequate regulations to manage risks without stifling innovation. As A.I. technology evolves, organizations must remain vigilant, adaptive, and ethical in their A.I. journey.
If the enterprise has existing model risk management (MRM) processes, A.I. models used for process automation should be handled within that MRM framework, an approach this paper promotes. It is critical not to reinvent the wheel but to adapt existing MRM best practices.
Increasing Regulatory & Legal Pressure on A.I.
A cavalier approach to A.I. has led to a monumental lack of structured A.I. governance within organizations at a time when there is a growing need for enterprise visibility into A.I. and its use. Organizations should keep an accurate inventory of A.I. technology and models, documentation, and defined roles and responsibilities in A.I. governance, risk management, and compliance throughout the A.I. use lifecycle. A.I. is evolving rapidly, fundamentally reshaping various industries, including healthcare, finance, and defense. However, its escalating integration into everyday life has raised many legal and regulatory challenges.
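As one way to picture the inventory described above, the sketch below shows a hypothetical record for a single A.I. model, capturing ownership, lifecycle stage, data sources, and documentation references. The field names, lifecycle stages, and example values are assumptions for illustration, not a prescribed GRC schema.

```python
# Hypothetical A.I. model inventory record; field names and lifecycle
# stages are illustrative assumptions, not a prescribed GRC schema.
from dataclasses import dataclass, field
from enum import Enum

class LifecycleStage(Enum):
    PROPOSED = "proposed"
    IN_DEVELOPMENT = "in_development"
    VALIDATED = "validated"
    IN_PRODUCTION = "in_production"
    RETIRED = "retired"

@dataclass
class AIModelRecord:
    model_id: str
    name: str
    business_use: str            # what decision or process the model supports
    owner: str                   # accountable business owner
    developer: str               # team or vendor that built the model
    validator: str               # independent reviewer / MRM function
    stage: LifecycleStage
    data_sources: list[str] = field(default_factory=list)
    documentation: list[str] = field(default_factory=list)   # links to model docs
    regulatory_scope: list[str] = field(default_factory=list)  # e.g., GDPR, SR 11-7

# Example entry in an enterprise A.I. inventory.
inventory = [
    AIModelRecord(
        model_id="mdl-0042",
        name="Resume screening assistant",
        business_use="First-pass ranking of job applications",
        owner="HR Operations",
        developer="People Analytics team",
        validator="Model Risk Management",
        stage=LifecycleStage.IN_PRODUCTION,
        data_sources=["applicant tracking system"],
        documentation=["model card v1.2", "validation report 2024-Q1"],
        regulatory_scope=["GDPR"],
    )
]
```

Keeping records like this in one place, with named owners and validators for each model, is one way to provide the enterprise visibility into A.I. use that regulators increasingly expect.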
There are several legal issues tied to A.I. usage. These include:
- Privacy, Data Protection, and Leakage. A.I. systems often require large amounts of potentially sensitive data, posing risks to privacy rights and data protection. For instance, A.I. applications like facial recognition technology have raised significant privacy concerns. Therefore, jurisdictions worldwide are deliberating on stricter data protection laws to control A.I. misuse. This also includes data leakage, where responses may inadvertently include sensitive information from training data sets. Given the global nature of many organizations, issues around cross-border data transfer also arise.
- Bias and Discrimination. A.I. systems can potentially reflect and amplify societal biases, which could lead to discriminatory practices. There have been cases where A.I. algorithms used in hiring or criminal justice systems have produced biased outcomes. Lack of visibility into methodology also makes bias and discrimination difficult to identify and resolve.
- Liability & Accountability. There is a legal ambiguity surrounding who should be held accountable when A.I. systems cause harm, or in the event of errors or failures that lead to compliance violations.
- Intellectual Property Rights. Questions regarding A.I.’s creative outputs, whether these should be considered intellectual property, and if so, who should hold the rights, remain largely unresolved. There are also questions about intellectual property rights associated with inputs, and whether these comply with, for example, licensing and copyright terms and conditions.
- Security. A.I. could be exploited for malicious purposes, like deepfakes or autonomous weapons, necessitating legal provisions for managing these risks.
Regulators are focusing on these legal issues with increased regulatory requirements and scrutiny in the governance and use of A.I. It is understood that A.I. is becoming a necessary part of business, but regulators want to ensure that A.I. is governed, developed, validated, used, and maintained properly to ensure the stability of organizations and industries. In that context, greater emphasis is being placed on A.I. governance and risk management. Financial services regulators have been leading in this area: MRM is an established function in financial services, and recent statements highlight A.I. as “just another” model type to be included (e.g., the OCC’s Model Risk Management handbook and PRA SS1/23).
The regulatory landscape for A.I. varies globally, reflecting different cultural, societal, and political contexts. However, many jurisdictions are moving towards comprehensive legal frameworks to manage A.I. risks.
- European Union. The EU has been at the forefront of A.I. regulation, with its proposed A.I. Act aiming to establish robust and flexible rules for A.I. use across its member states.
- United States. The U.S. currently relies on sector-specific regulations, like the Fair Credit Reporting Act for A.I. in finance, as well as OCC 2011-12 / SR 11-7 for model risk management. However, there is ongoing discussion of comprehensive federal A.I. laws. The White House has also published its Blueprint for an AI Bill of Rights, which is intended to apply across the board.
- United Kingdom. In the U.K., PRA consultation paper CP6/22 proposed that firms adopt five principles the regulator considers key to establishing an effective MRM framework. The principles were intended to complement existing requirements and supervisory expectations in force on MRM.
- China. China’s A.I. regulation encourages innovation while setting certain limits to protect national security and public interests.
- Other International Efforts. Organizations such as the OECD and UNESCO are working towards global A.I. standards and principles to ensure responsible A.I. use.
A.I. presents both enormous potential and significant challenges. As A.I. use grows, corresponding legal and regulatory frameworks must evolve in tandem to mitigate risks while promoting innovation. Harmonizing regulations across jurisdictions and developing comprehensive, flexible laws are key steps to responsibly integrating A.I. into society. The complexity and rapid evolution of A.I. necessitate ongoing research and dialogue among all stakeholders – regulators, A.I. developers, users, and society at large.
The Bottom Line: A.I. is rapidly growing in variety, complexity, and use within organizations. It is quickly moving from a tactical focus to a strategic pillar that provides the infrastructure and backbone for strategy and decisions at all levels of the organization. Left ungoverned, A.I. brings loss and potential disaster as it evolves over time. Unfortunately, many organizations lack governance and architecture for A.I. risk management. Organizations need a structured approach to A.I. governance, risk management, and compliance that addresses A.I. governance, lifecycle, and architecture, so they can manage A.I. and mitigate the risks it introduces while capitalizing on the significant value of A.I. when properly used.
This blog is an excerpt from GRC 20/20’s latest research paper. Read more in: A.I. GRC: The Governance, Risk Management & Compliance of A.I.