The European Union (EU) on Saturday reached provisional agreement on the AI Act – a broad legal framework limiting how artificial intelligence can be used.
“The EU’s AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide,” claimed EU Commission president Ursula von der Leyen.
According to von der Leyen, the agreement focuses on identifiable risks, thereby creating legal certainty for the industry and the technology’s development.
The Commission’s description of the Act outlines a tiered risk-management system, with AI systems categorized according to their potential impact on safety and fundamental rights.
The majority of AI systems fall within the “minimal risk” category, the classification used for automated recommendation systems, spam filters, and the like. Participation in AI codes of conduct will be voluntary for providers of such services.
AI systems deemed “high-risk” – such as those used in critical infrastructure, educational access assessments, law enforcement, or biometric identification – will be subject to stricter requirements.
Those requirements could include detailed documentation, higher-quality data sets, human oversight, and risk-mitigation systems.
Anything presenting a clear threat to fundamental rights falls in the “unacceptable risk” tier and is not permitted.
AIs in this category include predictive policing systems, emotion recognition in the workplace, systems that manipulate human behavior to circumvent free will, and those that categorize people based on characteristics such as political or religious persuasion, race, or sexual orientation.
The Act will also require the labelling of deepfakes and AI-generated content, while chatbot users must be made aware they are conversing with a machine.
Meanwhile, foundation models whose training requires upwards of 10²⁵ FLOPS of compute will face additional regulation in 12 months’ time.
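For a sense of what that threshold means, a common rule of thumb from the scaling-law literature – an assumption here, not anything defined in the Act – estimates dense-transformer training compute as roughly 6 × parameters × training tokens. A minimal sketch using that approximation:

```python
# Rough training-compute estimate using the common 6 * N * D rule of thumb
# (an assumption from scaling-law literature, not a method defined in the AI Act).

THRESHOLD_FLOPS = 1e25  # the compute threshold reported for additional regulation

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPS for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPS")  # ~8.4e23
print("Above threshold:", flops > THRESHOLD_FLOPS)       # False
```

By that rough measure, only the very largest frontier training runs would cross the 10²⁵ FLOPS line.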
With a six-month lead time to compliance, AI developers will need to ensure any feature that falls within the unacceptable-risk category is quickly removed from their products. The requirements for “high-risk” AI systems will follow later, once the Act fully applies two years after it enters into force.
Running afoul of the Act brings substantial fines that AI law expert Barry Scannell described as “a significant financial risk” for businesses.
These fines range from 1.5 percent of global turnover or €7.5 million ($8 million) to seven percent or €35 million ($37.7 million) – depending on the violation and the size of the company.
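To illustrate the scale of that exposure, here’s a minimal sketch assuming the applicable cap is the higher of the fixed sum and the turnover percentage – the usual structure of EU penalty regimes, though the article doesn’t spell it out – for a hypothetical company with €2 billion in global turnover:

```python
# Hypothetical illustration of the fine caps described above, assuming the
# applicable cap is the higher of the fixed sum and the turnover percentage
# (typical of EU penalty regimes; an assumption, not stated in the article).

def fine_cap(turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Return the maximum fine for a given violation tier."""
    return max(turnover_eur * pct, fixed_eur)

turnover = 2e9  # hypothetical company with €2B global turnover

# Top tier, e.g. violations of the outright bans (7% or €35M)
print(f"Top tier cap:    €{fine_cap(turnover, 0.07, 35e6):,.0f}")   # €140,000,000
# Bottom tier, lesser violations (1.5% or €7.5M)
print(f"Bottom tier cap: €{fine_cap(turnover, 0.015, 7.5e6):,.0f}") # €30,000,000
```

For a large company, the percentage-based cap quickly dwarfs the fixed sum, which helps explain Scannell’s warning.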
The Commission did hint it could explore “more proportionate caps” on fines for startups and SMEs.
According to Scannell, there’s a lot for businesses to consider. Those that have invested in biometric and emotion recognition may need “major strategic shifts.”
“Additionally, enhanced transparency requirements might challenge the protection of intellectual property, necessitating a balance between disclosure and maintaining trade secrets,” wrote the Dublin-based attorney.
He also assessed that upgrading data quality and acquiring advanced bias-management tools could raise operational costs, while the increased documentation and human-oversight requirements could cost businesses further time and money.
The agreement was reached after 38 hours of negotiation. European Parliament member Svenja Hahn praised the result for preventing massive overregulation, but said it could have used “more openness to innovation and an even stronger commitment to civil rights.”
The lawmaker was disappointed that a full ban on real-time biometric identification could not be secured, but said she was pleased that biometric mass surveillance was prevented “against the massive headwind from the member states.”
Big Tech was also not entirely satisfied with the Act. The Computer and Communications Industry Association (CCIA) argued that “the outcome seems to indicate that future-proof AI legislation was sacrificed for a quick deal” and that the law “is likely to slow down innovation in Europe.”
The CCIA’s membership is a who’s who of internet services, software, and telecom companies – including Amazon, Apple, Cloudflare, Intel, Google, Samsung, and Red Hat.
The CCIA’s canned statement complained about the increased requirements for some technologies and the full-on bans for those deemed an “unacceptable risk.”
“This could lead to an exodus of European AI companies and talent seeking growth elsewhere,” asserted the CCIA.
While the agreement is fairly set, it’s not an absolutely done deal: it must still be formally approved by the European Parliament and the Council. The Act will enter into force 20 days after its publication in the Official Journal.
Von der Leyen promised to continue the work of AI regulation at an international level – through involvement in the G7, OECD, Council of Europe, G20 and UN. ®