The A.I. Wild West Is Over: There Is a New Law in Town, the EU AI Act
In a world reminiscent of the Wild West, where Artificial Intelligence (AI) roamed free and unbridled, businesses and organizations have spent the past few years harnessing its power, at times haphazardly, to propel themselves into a future filled with promise and potential.
However, the flip side of this unchecked freedom was a landscape riddled with risks: data privacy breaches, bias, opaque decision-making, and more. As the dust settles, a new sheriff has arrived in the EU AI Act, heralding an era of strict AI governance that GRC 20/20 calls AI GRC (AI Governance, AI Risk Management, and AI Compliance) and that requires extensive testing of AI systems, especially those considered high-risk.
OCEG defines GRC as “a capability to reliably achieve objectives, address uncertainty, and act with integrity.” Adapting this definition to AI, AI GRC is the capability to reliably achieve the objectives of AI models and their use, address uncertainty and risk in the use of AI, and act with integrity in the ethical, legal, and regulatory use of AI in the organization’s context.
The EU AI Act, much like the mythical lawmen of the 1800s, seeks to bring order to a chaotic frontier of AI use within organizations. Its scope extends beyond the borders of Europe, influencing global businesses that must respond to it. The implications are monumental: the act imposes obligations on any entity operating within or dealing with the EU’s member states and their citizens. Most alarming is the potential fine for non-compliance, which can reach up to 35 million euros or 7% of global annual turnover, whichever is higher, underscoring the act’s seriousness in enforcing responsible AI usage.
The EU AI Act categorizes AI systems based on the level of risk they pose, with “high-risk” AI systems receiving particular attention due to their potential impact on safety and fundamental rights.
For these high-stakes scenarios, organizations must now ensure data quality, enhanced protection measures, and adherence to ethical standards. The act also bans specific uses of AI that are considered harmful, such as certain types of biometric identification and social scoring systems, bringing a more humane and ethical approach to AI development and deployment.
AI systems classified as high-risk encompass technologies used in various critical sectors:
- Critical Infrastructures. Such as transport systems, where AI can significantly impact citizens’ safety and health.
- Education and Vocational Training. For instance, AI that scores exams, potentially influencing educational paths and career trajectories.
- Product Safety Components. For example, AI applications in robot-assisted surgery and medical devices.
- Employment and Worker Management. Including CV-sorting software for recruitment, which can affect employment and self-employment opportunities.
- Essential Services. Examples include AI in credit scoring that could deny loans to individuals.
- Law Enforcement. AI systems that might infringe upon fundamental rights, such as tools evaluating evidence reliability.
- Migration, Asylum, and Border Control. This covers AI tools like automated visa application processing.
- Justice and Democratic Processes. For example, AI systems used to search for court rulings.
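To make the tiering concrete, here is a minimal Python sketch of how an organization might tag the systems in its AI inventory with the Act’s risk tiers. The enum values, the `HIGH_RISK_DOMAINS` set, and the `classify` helper are illustrative assumptions of mine, not terms defined by the Act; a real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative mapping of the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited"   # e.g., social scoring systems
    HIGH = "high-risk"            # e.g., credit scoring, CV sorting
    LIMITED = "limited-risk"      # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal-risk"      # e.g., spam filters, video games

# Hypothetical domain tags drawn from the high-risk categories listed above.
HIGH_RISK_DOMAINS = {
    "critical_infrastructure", "education", "product_safety",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice",
}

def classify(domain: str, prohibited: bool = False) -> RiskTier:
    """Assign a risk tier based on an AI system's application domain.

    This sketch only illustrates the tiering logic described in the Act;
    limited-risk handling is omitted for brevity.
    """
    if prohibited:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("employment"))  # RiskTier.HIGH for a CV-sorting tool
```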
These high-risk AI systems are subject to stringent conditions before they can be released to market or put into use. These include rigorous data management and documentation processes, high levels of transparency, and accountability to ensure that risks are managed effectively. The aim is to prevent or mitigate potential harms or violations of individual rights and freedoms arising from the use of AI in these critical areas. It reminds me of the testing and validation that has to be done in FDA-regulated systems in life sciences. Organizations operating high-risk AI systems need to address:
- Thorough AI risk assessment and mitigation strategies.
- Assurance of high-quality datasets to minimize risk and avoid biased outcomes.
- Comprehensive activity logging to ensure traceability of results and of AI usage.
- In-depth documentation for AI assessment and validation by authorities.
- Clear, detailed information for AI users.
- Measures for adequate human oversight of AI to reduce risk.
- Exceptional robustness, security, and accuracy controls built into AI.
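One of these obligations, activity logging for traceability, lends itself to a concrete illustration. The sketch below shows the kind of structured record a high-risk system might emit for each automated decision; the `log_decision` helper and its field names are my own assumptions, since the Act requires traceability but does not prescribe a log schema.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: str, operator: str) -> None:
    """Emit a structured, append-only record of one AI decision.

    Field names are illustrative only.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,            # or a hash, if inputs are sensitive
        "output": output,
        "human_operator": operator,  # supports the human-oversight duty
    }
    logger.info(json.dumps(record))

# Example: logging a single credit-scoring decision.
log_decision("credit-scorer", "2.3.1",
             {"applicant_id": "A-1042"}, "declined", "analyst_17")
```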
Then, there are limited-risk AI systems. The term “limited risk” mainly pertains to transparency. The AI Act mandates explicit transparency obligations to ensure users are informed when interacting with AI systems, like chatbots, so they can make knowledgeable decisions. Moreover, providers must label AI-generated content, including text, audio, and video (especially deep fakes), to indicate its artificial origin, particularly when the content is intended to inform the public on significant issues.
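As a simple illustration of that labeling obligation, the snippet below attaches a provenance tag to generated text. The metadata keys and the disclosure wording are assumptions for the sketch, since the Act mandates disclosure but not a particular format.

```python
def label_generated_text(text: str, model_name: str) -> dict:
    """Wrap AI-generated text with a machine-readable provenance label.

    The key names and disclosure wording are illustrative only.
    """
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "disclosure": "This content was generated by an AI system.",
        },
    }

article = label_generated_text("Markets rallied today...", "newsbot-v1")
print(article["provenance"]["disclosure"])
```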
Finally, there are minimal or no-risk AI systems. The AI Act permits the unrestricted use of AI systems posing minimal risk, such as AI-enhanced video games or spam filters. Most AI systems currently used in the EU fall within this minimal-risk bracket.
The EU AI Act isn’t merely a set of prohibitions; it’s a comprehensive framework demanding a paradigm shift in how organizations develop, deploy, and manage AI. Transparency becomes paramount, particularly for high-risk AI systems, where developers must provide detailed information about their functioning, data usage, and human oversight mechanisms. This level of transparency aims to mitigate the risks associated with the ‘black box’ nature of advanced AI algorithms.
Preparing for the New EU AI Act Frontier – What Organizations Should Do
Organizations must adapt to survive and thrive in this new environment as the AI Act reshapes the landscape. Here’s a roadmap to help navigate these changes:
- AI GRC Oversight. Establish a robust AI governance framework, combining the right policies, roles, and an inventory system that aligns with organizational objectives and values.
- AI GRC Lifecycle Management. Implement a comprehensive lifecycle approach encompassing AI acquisition, development, use, maintenance, and eventual retirement to ensure effective governance across all stages of AI usage.
- Developing and Maintaining an AI Inventory. Undertake a thorough AI discovery process to catalog all AI technologies used within the organization. This inventory should be regularly updated and include details like ownership, development history, and documentation for each AI model (a minimal sketch of such a record follows this list).
- Validation and Control. Emphasize the importance of validating AI models for quality and reliability and embed controls throughout the AI components to ensure proper use and prevent misuse.
- Continuous Monitoring and Assurance. Regularly audit and assess AI systems to confirm they function as intended, comply with set standards, and adapt to changes in the business environment.
- Technology and Information Architecture. Build a technology architecture that supports AI GRC management. This includes model management, robust data management capabilities, compliance tracking, and integration with other organizational systems.
- Ethical and Transparent AI Usage. Foster an organizational culture that values ethical AI usage and transparency. Ensure your AI systems are understandable (explainable) and operate within ethical guidelines and legal boundaries.
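Pulling a few of these roadmap items together, here is a minimal sketch of an AI inventory record that also tracks validation and monitoring status. All field names are hypothetical, chosen to mirror the details called out above (ownership, development history, documentation); an actual inventory schema would be tailored to the organization.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryRecord:
    """One entry in an organization's AI inventory (illustrative schema)."""
    system_name: str
    owner: str                      # accountable business owner
    risk_tier: str                  # e.g., "high-risk" per the Act's tiers
    purpose: str
    development_history: list[str] = field(default_factory=list)
    documentation_links: list[str] = field(default_factory=list)
    last_validated: date | None = None
    last_audit_passed: bool | None = None  # continuous-monitoring status

    def needs_review(self, max_age_days: int = 365) -> bool:
        """Flag records whose validation is stale or missing."""
        if self.last_validated is None:
            return True
        return (date.today() - self.last_validated).days > max_age_days

record = AIInventoryRecord(
    system_name="cv-screening-model",
    owner="HR Analytics",
    risk_tier="high-risk",
    purpose="Rank incoming job applications",
    last_validated=date(2024, 1, 15),
)
print(record.needs_review())  # True once validation is over a year old
```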
The arrival of the EU AI Act marks the end of the AI Wild West. It mandates a structured, responsible approach to AI, emphasizing governance, risk management, and compliance. Organizations worldwide must now saddle up and journey through this new landscape, ensuring their AI initiatives align with this more structured and ethically responsible future. The new law is in town, and it’s reshaping the AI frontier – one regulation at a time. See what GRC 20/20 has to say about this in our research report, A.I. GRC: The Governance, Risk Management & Compliance of A.I.