Jess Bujaroski
Solution Designer

EU AI Act risk categories: Which obligations apply to your systems? 

October 25, 2024

Artificial intelligence (AI) has taken the business landscape by storm. If AI’s ability to infer, adapt and act autonomously has already shaken up corporate operations as we know them, how might it shape — and threaten — the future? The EU AI Act risk categories attempt to answer that question.

AI’s capacity for self-learning and its borderless nature challenge governments, businesses and regulators to contain it. In March 2024, the European Union adopted a first-of-its-kind act that regulates AI systems according to the risks they pose. Understanding the risk classifications and whom they apply to is now a mandate for any company that wants to operate compliantly in the EU. This article covers:

  • Risk categories in the EU AI Act
  • Details of the EU AI Act high-risk categories
  • The distinction between AI systems deployers and providers
  • Tools to help you comply with the new act

The state of the EU AI Act

At its core, the EU AI Act takes a risk-based approach, categorising AI systems according to the harm they may cause. It passed in early 2024 and came into force in August 2024. The first obligations, including the prohibitions on unacceptable-risk systems, apply from February 2025, with further requirements phasing in over the following years, making the fourth quarter of 2024 a high-stakes quarter for those who provide and deploy AI.

This applies to many different entities but is of particular concern for:

  • Providers: Also called developers, these entities place AI systems or general-purpose AI models on the EU market, whether the provider is based in the EU or elsewhere.
  • Deployers: Also called users, these are entities located or operating in the EU that use AI systems as part of their business offering.

For a succinct overview of the EU AI Act, download our cheat sheet.

The 4 EU AI Act risk categories

The legislation classifies AI systems into four risk levels based on their intended purpose. Understanding what each level entails and which systems it applies to is essential to compliance (a simple classification sketch follows the list):

  1. Unacceptable risk: AI systems like social scoring and certain forms of biometric categorisation pose a clear threat to people’s safety, livelihoods and rights. To curb their use, the Act prohibits such systems outright, including systems that use subliminal techniques to distort human behaviour and emotion recognition in workplaces and educational settings.
  2. High risk: This category covers AI used in critical infrastructure such as transport, as well as in education and vocational training, safety components of products, employment and worker management, and more. Systems in this category have great potential for good but also significant risk of misuse. The Act mandates the most rigorous rules for high-risk systems. (Read more about these systems below.)
  3. Limited risk: Many AI systems interact with individuals or generate content that can be deceptive without context or appropriate notice of AI involvement. These limited-risk systems are subject to information and transparency requirements to ensure that people know when they are interacting with AI. Providers must enable the marking and detection of the system’s output, while deployers must disclose content that has been artificially generated or manipulated.
  4. Minimal or no risk: This EU AI Act risk category encompasses systems such as AI-enabled video games and spam filters, which are considered risk-free. Providers and deployers can use them freely but are advised to adopt a voluntary code of conduct. Existing data protection regulations still apply, however.
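
To make these tiers operational, many compliance teams inventory their AI systems and tag each one by risk level. The sketch below is a hypothetical Python illustration: the example purposes, the tier mapping and the classify() helper are assumptions for demonstration, not definitions from the Act, and real classification requires legal analysis of each system’s intended purpose.

```python
# Hypothetical sketch: tagging an AI-system inventory by EU AI Act risk tier.
# The tier names mirror the Act; the purpose-to-tier mapping is illustrative.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"      # e.g. social scoring
    HIGH = "strict obligations"      # e.g. worker management
    LIMITED = "transparency duties"  # e.g. customer-facing chatbot
    MINIMAL = "voluntary codes"      # e.g. spam filter

# Illustrative (not exhaustive) mapping of intended purposes to tiers.
PURPOSE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "worker_management": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(intended_purpose: str) -> RiskTier:
    """Look up a system's tier; default to HIGH pending legal review."""
    return PURPOSE_TIERS.get(intended_purpose, RiskTier.HIGH)

for purpose in ("worker_management", "spam_filter", "unreviewed_use"):
    print(purpose, "->", classify(purpose).name)
```

Defaulting unknown purposes to the high-risk tier is a deliberately conservative choice: it forces a legal review before a system is treated as lightly regulated.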

General-purpose AI models and systems

Tools like ChatGPT are among the best-known AI systems on the market. Which risk category applies to these systems, considered general-purpose AI (GPAI)?

The Act recognises that GPAI models operate more broadly and are not designed for the specific purposes outlined in each risk category. This makes them applicable to various use cases, but it also makes them hard to classify.

Instead of pinning them to a single risk category, the Act creates a separate regime for GPAI. All GPAI providers face transparency requirements, and the most capable models, such as those trained using more than 10^25 floating-point operations, are presumed to carry systemic risk, meaning far-reaching risks are inherent in their use; these models trigger additional risk management, reporting and surveillance obligations. It is worth noting that GPAI providers can also be subject to some of the same obligations as high-risk systems if their models are used in high-risk applications.

Key obligations of the EU AI Act high-risk category

High risk is the most heavily regulated of the EU AI Act risk categories. As such, providers and deployers handling high-risk systems must understand and comply with the requirements of the Act, specifically those detailed in Chapter III, Section 2.

Download our guide for a deeper dive into EU AI Act compliance.

For providers

  • Risk management: Providers must establish, document and maintain a continuous risk management system. The system must identify the risks of using the AI system as intended, develop and test risk mitigation measures, and ensure that any remaining risk is acceptable. These approaches should pay particular attention to the system’s impact on persons under 18 and other vulnerable groups.
  • Data and data governance: AI systems cannot exist without data, so sound data protection and management are essential to avoiding risk. Entities developing high-risk AI systems must implement data governance practices covering system design choices, data collection and preparation, assessments of data suitability, measures to detect and prevent bias, identification of data gaps, and data privacy safeguards.
  • Technical documentation: National authorities need specific documentation to evaluate a system’s compliance with the Act. Providers should create and continuously update technical documentation detailing the system’s elements and development process, the training data sets used, how the system is monitored and controlled, and what cybersecurity measures are in place.
  • Record keeping: Providers should design high-risk AI systems to automatically record events (logs) throughout their lifecycle to enable post-market surveillance; a minimal logging sketch follows this list.
  • Transparency: The Act requires that providers avoid the black box of AI: high-risk systems must be designed so that their outputs and operation can be interpreted. This includes providing detailed instructions about the system’s intended use, its accuracy and any known or anticipated risks to human health or safety.
  • Human oversight: Mandating human oversight is central to the regulation. Providers must design high-risk systems to permit human intervention while minimising risks to health and safety. The level of intervention should match the system’s autonomy and context of use: the riskier it is for the system to act on its own, the more human oversight it should require.
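
In practice, the record-keeping duty is typically met with structured, timestamped event logs. The sketch below is a minimal, hypothetical illustration: the event names and field schema (system_id, event, payload) are assumptions, since the Act requires logging capability but does not prescribe a format.

```python
# Minimal sketch of automated event logging for a high-risk AI system.
# Field names and event types are illustrative assumptions, not Act requirements.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

def log_event(system_id: str, event: str, **payload) -> None:
    """Emit one timestamped, machine-readable record per lifecycle event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,  # e.g. "inference", "human_override", "model_update"
        **payload,
    }
    logger.info(json.dumps(record))

# Example lifecycle events that support post-market surveillance.
log_event("cv-screener-v2", "inference", input_hash="ab12", decision="shortlist")
log_event("cv-screener-v2", "human_override", reviewer="hr_ops", decision="reject")
```

One machine-readable record per significant event makes post-market surveillance and later incident reporting far easier to evidence.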

For deployers

Deployers’ obligations under the Act are tied to the instructions of providers. The provider of an AI system must equip deployers with comprehensive instructions for using the system safely and responsibly. Deployers must adopt appropriate measures following those instructions — a burden general counsel can help navigate.

Any deployer that deviates from the provider’s instructions, whether through a new use case or a change to the system, can then be reclassified as a provider, which carries steeper regulatory requirements.

Under the Act, deployers are responsible for:

  • Ensuring adequate AI awareness among staff
  • Conducting due diligence
  • Performing a fundamental rights impact assessment (FRIA)
  • Ensuring compliance and surveillance according to the provider’s instructions
  • Assigning human oversight to natural persons
  • Meeting transparency and information requirements
  • Keeping records and logs
  • Reporting incidents
  • Cooperating with authorities

While using AI systems, deployers must still comply with existing EU and member state laws, such as the GDPR. This includes completing a data protection impact assessment (DPIA) where required, recognising that AI can involve automated decision-making and high volumes of personal data.

Build a proactive EU AI Act compliance toolkit

Organisations have a relatively short runway to comply with the EU AI Act. The tight timeline only underscores the importance of taking a proactive approach to protecting human health, safety and fundamental rights that keeps pace with AI’s rapid growth.

However, complying with the EU AI Act quickly doesn’t mean cutting corners. Deployers, providers and other entities operating in the EU can both stay ahead and stay compliant — if they have the right toolkit.

CALLOUT: "Perhaps the greatest business risk around AI right now is the risk of doing nothing. You can decide how you'd like to approach it, but what I think every company needs to do is have a considered approach..." — Dale Waterman, Principal Solution Designer, Diligent

Tap into Diligent’s toolkit curated specifically for the EU AI Act to ease your compliance burden now and for the future.
