
Chinese tech companies face increased EU compliance costs amid bloc’s new AI rules


The world’s first comprehensive artificial intelligence (AI) rules, which will come into effect throughout the European Union (EU) on August 1, are projected to raise the costs of assessment and compliance for Chinese tech enterprises doing business in the bloc’s 27 member states, according to industry experts.
The Artificial Intelligence Act, which was approved by the EU Council in May after it was passed into law by the European Parliament in March, aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from so-called high-risk AI, while boosting innovation and establishing Europe as a leader in the technology.

Some Chinese AI firms already expect to spend more time and money to comply with the EU’s new rules, while facing the spectre of overregulation potentially hindering innovation.

Hong Kong-based Dayta AI, which provides retail analytics software around the world, expects the EU regulation’s “compliance and assessment requirements [to] increase R&D [research and development] and testing costs” for the firm by around 20 per cent to 40 per cent, company co-founder and chief executive Patrick Tu said. He pointed out that the higher spending will cover “additional documentation, audits, and [certain] technological measures”.
The new EU rules’ passage and implementation reflects the global race to draw up AI guardrails amid the boom in generative AI (GenAI) services since OpenAI released ChatGPT in November 2022. GenAI refers to algorithms that can be used to create new content – including audio, code, images, text, simulations and videos – in response to short prompts.

“The EU’s institutions may give people an impression of overregulating,” said Tanguy Van Overstraeten, a partner at Linklaters and head of the law firm’s technology, media and telecommunications (TMT) group in Brussels, Belgium. “What the EU is trying to do with the AI Act is to create an environment of trust.”

The AI Act establishes obligations for the technology based on its potential risks and level of impact. The regulation consists of 12 main titles that cover areas ranging from prohibited practices, high-risk systems and transparency obligations to governance, post-market monitoring, information sharing and market surveillance.

The regulation will also require member states to establish so-called regulatory sandboxes and real-world testing at the national level. The rules, however, do not apply to AI systems or AI models, including their outputs, which are specifically developed and put into service for the sole purpose of scientific research and development.


If companies “want to test [an AI application] in real life, they can benefit from the so-called sandbox that can last up to 12 months, during which they can test the system within certain boundaries”, Linklaters’ Van Overstraeten said.

Non-compliance with the rules’ prohibition of certain AI practices shall be subject to administrative fines of up to 35 million euros (US$38 million) or up to 7 per cent of the offending firm’s total worldwide annual turnover for the preceding financial year, whichever is higher.

Dayta AI’s Tu said the EU’s mandates surrounding “the quality, relevance, and representativeness of training data will require us to be even more diligent in selecting our data sources”.

“Such focus on data quality will ultimately enhance the performance and fairness of our solution,” he added.

Tu said the AI Act provides “a comprehensive, user rights-focused approach” that “imposes strict limitations on personal data usage”. By comparison, the “rules in China and Hong Kong seem to focus more on enabling technological progress and aligning with the government’s strategic priorities”, he said.

On August 15 last year, Beijing implemented new GenAI regulations. The rules stipulate that GenAI service providers must “adhere to core socialist values” and not generate any content that “incites subversion of state power and the overthrow of the socialist system, endangers national security and interests, damages the image of the country, incites secession from the country, undermines national unity and social stability, promotes terrorism, extremism, national hatred and ethnic discrimination, violence, obscenity and pornography”.

More generally, AI models and chatbots should not generate “false and harmful information”.

“Chinese regulations require companies and products to observe socialist values and ensure that their AI outputs are not perceived as harmful to political and social stability,” said Linklaters’ Shanghai partner Alex Roberts, who also heads the firm’s China TMT group. “For multinational corporations that have not grown up with these concepts, this can cause confusion among compliance officers.”

He added that China’s regulation so far only focuses on GenAI, and “is seen as more of a state or government-led rule book”, while the EU’s AI Act “focuses on the rights of users”.

Still, Roberts described the main principles of the EU and China’s AI regulations as “very similar”. That refers to being “transparent with customers, protecting data, being accountable to the stakeholders, and providing instructions and guidance on the product”.

Beijing has also been pushing to enact a comprehensive AI law. The State Council, China’s cabinet, listed that initiative in its annual legislation plans for 2023 and 2024. A draft law, however, has not yet been proposed.
Other jurisdictions in Asia have also been working on AI regulations. South Korea, for example, last year drafted its “Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI”. This proposed regulation is still under review.

“We’re now seeing some governments in the [Asia-Pacific] region taking large chunks from the EU’s regulation on data and AI, as they work on their own AI legislation,” Linklaters’ Roberts said. “Businesses can certainly consider lobbying their local government stakeholders to achieve more harmony and consistency in cross-market rules.”



