
The EU Artificial Intelligence Act Is Here—With Extraterritorial Reach



LawFlash




July 26, 2024

Regulation almost always follows innovation, and the AI sector is no exception. The EU's Artificial Intelligence Act is a world first. Published in the EU's Official Journal on July 12 after many months of intense debate, it will enter into force on August 1, with most of its provisions phasing in by August 2027. The AI Act will impact a wide range of businesses and impose additional compliance obligations. Although the broad lines of the rules have been set, certain key definitions and concepts remain vague. Guidance from the regulators will be essential for parties to understand the full scope of their obligations and liabilities.

EXTRATERRITORIAL REACH

The AI Act has extraterritorial reach in certain circumstances. Notably, the Act will apply to (1) providers, even those based outside the EU, that place AI systems or general-purpose AI (GPAI) models on the EU market or "put them into service" in the EU and (2) deployers that have their place of establishment, or are located, within the EU. Importantly, the Act will also apply to both providers and deployers to the extent that the "output" of the AI system is "used in the EU."

RISK-BASED APPLICATION

The EU has adopted a four-tier risk-based classification system, with corresponding obligations and restrictions depending on the level of risk as assessed by the EU. As discussed below, some AI systems are prohibited, while a considerable number fall into the minimal and limited risk categories. The core of the AI Act concerns "high-risk" AI systems. AI systems are considered "high-risk" where:

  • The AI system is itself a certain type of regulated product, including medical devices, systems for vehicles, and toys, or
  • The AI system is a safety component of a certain type of regulated product, or
  • The AI system meets the description of listed “high-risk” AI systems (annexed to the AI Act)

However, the classifications of the AI systems in the AI Act are not static. There are procedures to modify the risk level of AI systems, either up or down. Moreover, the AI Act provides a legal basis for the European Commission (EC) to adopt future implementing acts and amendments to keep pace with market and technological developments.
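To make the tiering concrete, the following minimal Python sketch illustrates how the four risk tiers nest in order of severity. It is an illustration only; the flags and names below are our own simplification, not terms or legal tests from the AI Act.

```python
# A minimal, illustrative sketch of the AI Act's four-tier triage.
# The flags below are hypothetical simplifications, not legal tests.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no restrictions)"


@dataclass
class AISystem:
    prohibited_practice: bool = False   # e.g., social scoring
    regulated_product: bool = False     # the system is itself a regulated product
    safety_component: bool = False      # safety component of a regulated product
    on_annex_list: bool = False         # matches a listed "high-risk" use case
    transparency_duty: bool = False     # e.g., a chatbot


def classify(system: AISystem) -> RiskTier:
    """Check the tiers in descending order of severity."""
    if system.prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if system.regulated_product or system.safety_component or system.on_annex_list:
        return RiskTier.HIGH
    if system.transparency_duty:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(classify(AISystem(on_annex_list=True)).value)  # high risk
```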

OBLIGATIONS ON PROVIDERS AND DEPLOYERS OF HIGH-RISK AI SYSTEMS

The enforcement of the AI Act, while primarily undertaken by EU member state enforcement authorities with respect to AI systems, will be coordinated by the European Artificial Intelligence Board (EAIB), specifically created for this purpose by the AI Act. The EAIB will issue codes of conduct and clarifications and coordinate with the relevant EU member state authorities that will be established or designated pursuant to the AI Act. However, providers and deployers of high-risk AI are already expected to comply with the AI Act regarding, in particular:

  • Training Obligations (e.g., AI literacy training of staff and AI overseers within the organization)
  • Operational Duties (e.g., technical and organizational measures to keep the AI safe, appointing overseers within the organization, input data quality management for training)
  • Control Obligations (e.g., measures to avoid prohibited AI, human oversight of the AI, control and monitoring of training data, General Data Protection Regulation compliance)
  • Documentation Obligations (e.g., impact assessments where needed)

The foregoing comprises the immediate matters of concern for business. There are various nuances and exemptions, as well as rules regarding coordination and overlap with other EU legislation. Failure to comply with the AI Act can also trigger significant fines (up to 7% of global group annual revenues or €35 million (approximately $38 million), whichever is greater). Additional enforcement measures may also be adopted by the member states. The practical application of the rules will inevitably take time to develop as they meet the challenges of a sector that evolves extremely rapidly.
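For illustration, the fine ceiling works as a "greater of" formula. The short sketch below, using purely hypothetical revenue figures, shows the arithmetic:

```python
# The ceiling is the greater of EUR 35 million or 7% of global group
# annual revenues; the revenue figures below are purely hypothetical.
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# EUR 1 billion in revenues: 7% = EUR 70M, which exceeds the EUR 35M floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000
# EUR 200 million in revenues: 7% = EUR 14M, so the EUR 35M floor applies.
print(f"{max_fine_eur(200_000_000):,.0f}")    # 35,000,000
```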

THE FINE PRINT: OVERVIEW OF THE AI ACT

Risk Levels and Corresponding Obligations

Minimal Risk AI: No Restrictions

The AI Act allows the unrestricted use of minimal-risk AI in the EU, such as AI-enabled video games or spam filters. The majority of AI systems (approximately 80%) currently used in the EU fall into this category.

Limited Risk AI: Certain Transparency Obligations

Limited risk refers to AI systems such as chatbots. For these AI systems, specific transparency obligations apply: providers must make users aware that they are interacting with a chatbot or machine so they can make an informed decision to continue or abandon the interaction. Providers will also need to ensure that AI-generated content is identifiable. To that end, AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as artificially generated.

High-Risk AI: Restrictions and Obligations

High-risk AI systems are typically found in the following areas:

  • Biometric identification systems
  • Management and operation of critical infrastructure
  • Educational or vocational training that may determine access to education and the professional course of someone's life (e.g., scoring of exams)
  • Employment and management of workers and access to self-employment (e.g., CV-sorting software for recruitment procedures)
  • Access to and enjoyment of essential private and public services
  • Law enforcement that may interfere with people's fundamental rights (e.g., evaluation of the reliability of evidence)
  • Migration, asylum, and border control management (e.g., verification of authenticity of travel documents)
  • Administration of justice and democratic processes (e.g., applying the law to a concrete set of facts, elections)

Certain AI systems that are intended to be used as a safety component of a product, or that are themselves a product, also qualify as high risk, such as certain AI systems involving remote biometric identification.

Providers of high-risk AI systems are subject to strict compliance obligations. In particular, they must:

  • Establish a risk management system throughout the high-risk AI system's lifecycle, requiring regular, systematic review and update (i.e., identifying, analyzing, and evaluating potential risks, as well as adopting appropriate risk management measures)
  • Ensure data quality: use training, validation, and testing data sets that are relevant, sufficiently representative, free of errors, and complete according to the intended purpose
  • Implement a conformity management system that ensures compliance with the AI Act
  • Maintain technical documentation on the AI system's compliance
  • Allow for the automatic recording of events (logs) over the system's lifecycle
  • Ensure that the system's operation is sufficiently transparent to enable deployers to interpret its output and use it appropriately
  • Achieve an appropriate level of accuracy, robustness, and cybersecurity throughout the system's lifecycle
  • Keep the documentation of the AI system for at least 10 years after the system is placed on the market

There are a few exceptions: for example, an AI system that does not pose a significant risk of harm to the health, safety, or fundamental rights of individuals is not considered high risk, provided the AI system is limited to (1) performing a narrow procedural task; (2) improving the result of a completed human activity; (3) detecting deviations from prior human decision-making patterns; and/or (4) performing preparatory tasks in risk assessments.
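As a rough illustration of that derogation, the sketch below (with invented flag names, not terms from the AI Act) captures the structure: an otherwise-listed system escapes the high-risk classification only if it does not pose a significant risk and is limited to one or more of the four narrow functions.

```python
# Illustrative only; flag names are invented, not terms from the AI Act.
def derogation_applies(
    no_significant_risk: bool,
    narrow_procedural_task: bool = False,
    improves_completed_human_activity: bool = False,
    detects_deviation_from_human_patterns: bool = False,
    preparatory_risk_assessment_task: bool = False,
) -> bool:
    """An otherwise-listed system is not high risk only if both limbs hold."""
    limited_to_narrow_functions = any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_deviation_from_human_patterns,
        preparatory_risk_assessment_task,
    ])
    return no_significant_risk and limited_to_narrow_functions
```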

Prohibition of AI with Unacceptable Risk

The AI Act prohibits several AI applications and systems that the EU considers to pose potential threats to fundamental rights and democracy. These include certain:

  • AI systems that are manipulative or misleading, influencing human behavior by deploying deceptive techniques
  • AI systems that exploit a person's vulnerabilities due to age, disability, or social/economic status
  • Biometric categorization systems that use sensitive characteristics (e.g., race, religion, or political convictions)
  • Biometric identification systems in public spaces
  • AI-driven recognition of emotion in the workplace and educational institutions
  • Untargeted scraping of facial images from the internet or CCTV footage for facial recognition
  • Social credit scoring based on private behavior or personal characteristics

General Purpose AI Models

General-purpose AI models that can perform a wide array of tasks and be integrated into a variety of downstream applications, such as large generative AI models, are not considered AI systems. Providers are nonetheless subject to the following obligations regardless of how the models are placed on the market:

  • Perform fundamental rights impact and conformity assessments
  • Implement risk and quality management to continually assess and mitigate systemic risks
  • Inform individuals when they interact with AI; content must be labelled and detectable as AI
  • Test and monitor for accuracy, robustness, and cybersecurity

Certain general-purpose AI models are considered to pose systemic risk (mainly because of the amount of data processed and their reach) and are subject to the relevant obligations under the AI Act. The classification as posing systemic risk is critical, as it rests on a legal presumption that companies need to rebut.

KEY TAKEAWAYS FOR BUSINESS TODAY

While the various provisions will enter into force in stages, businesses must get ready now to comply with their obligations as AI providers or deployers, and the distinction between the two will be an important issue. While deployers (and importers and distributors) of AI systems have less far-reaching obligations, certain operations along the value chain, mirroring the regime for placing products on the EU market in other areas, can transform companies into providers of AI in the EU. The same is true for obligations triggered by modifications of general-purpose AI models.

On the other hand, providers that supply products already subject to EU regulation, notably safety regulation, into which AI systems or models will be incorporated may benefit from presumptions of conformity with the AI Act in some areas but face additional requirements under the AI Act in others.

For the next five years, the EC has the power to adopt so-called delegated acts, which can change key provisions, such as the definitions of high-risk AI, high-impact general-purpose AI models, and the required technical documentation, including documentation offering a presumption of conformity under existing legislation.

Still to come is guidance from the EC on the allocation of responsibilities among the various actors along the AI value chain (in particular, what constitutes a substantial modification); on obligations related to high-risk AI; on prohibited practices; on the transparency obligations; on the details of the relationship with other EU law; and even on the application of the definition of an AI system. In other words, key concepts and aspects of the practical implementation of the AI Act are still outstanding, with no clear deadline for providing such guidance. This places businesses in the challenging position of lacking guidance on key issues as the various obligations start to apply.


[Infographic: EU AI Act Timeline]

At the same time, claims for violations can already be brought, and the AI Act is subject to the Representative Actions Directive, which increases the risk of litigation brought by consumer or civil rights associations, particularly in light of the fact that the new Product Liability Directive now covers all types of software.

HOW WE CAN HELP

Morgan Lewis lawyers are well suited to help companies navigate AI Act and other AI-related compliance, enforcement, and litigation matters. Our team stands ready to assist companies that design, develop, or use AI in navigating this evolving and complex legal landscape.



