A key committee of lawmakers in the European Parliament has approved a first-of-its-kind regulation on artificial intelligence – bringing it closer to becoming law.

The approval marks a milestone in the race among authorities to get to grips with AI, which is developing at breakneck speed. The law, known as the European AI Act, is the first law for AI systems in the West. China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.

The law takes a risk-based approach to regulating AI, where the obligations of a system are proportionate to the level of risk it poses.

The rules also specify requirements for providers of so-called “foundation models” such as ChatGPT, which have become a key concern for regulators, given how advanced they are becoming and fears that even skilled workers will be displaced.

What do the rules say?

The AI Act categorizes applications of AI into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.

Applications deemed to pose an unacceptable risk are banned by default and cannot be deployed in the bloc.

They include:

  • AI systems that use subliminal techniques, or manipulative or deceptive techniques to distort behavior
  • AI systems that exploit vulnerabilities of individuals or specific groups
  • Biometric categorization systems based on sensitive attributes or characteristics
  • AI systems used for social scoring (evaluating trustworthiness)
  • AI systems used for risk assessments that predict criminal or administrative offenses
  • AI systems that create or expand facial recognition databases through untargeted scraping
  • AI systems that infer emotions in law enforcement, border management, the workplace and education

Several lawmakers had called for the measures to be more expansive to ensure they cover ChatGPT.

To that end, requirements have been imposed on “foundation models”, such as large language models and generative AI.

Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.

They will also need to ensure that the training data used to develop their systems does not breach copyright law.

“The providers of such AI models would need to take steps to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy and the rule of law,” Ceyhun Pehlivan, counsel at Linklaters and co-lead of the law firm’s telecommunications, media and technology and IP practice group in Madrid, told CNBC.

“They would also be subject to data governance requirements, such as examining the appropriateness of data sources and possible biases.”

It is important to stress that, although the act has been approved by lawmakers in the European Parliament, it is still a long way from becoming law.

Why now?

Privately held companies have been left to develop AI technology at breakneck speed, giving rise to systems such as Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Google on Wednesday announced a series of new AI updates, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on some tasks.

New AI chatbots like ChatGPT have captivated many technologists and academics with their ability to produce humanlike responses to user prompts. They are powered by large language models trained on vast amounts of data.

But AI technology has been around for years and is integrated into more applications and systems than you might think. It determines which viral videos or food photos you see on your TikTok or Instagram feed, for example.

The purpose of the EU proposals is to provide some rules of the road for AI companies and organizations that use AI.

Tech industry reaction

The rules have raised concerns in the technology industry.

The Computer and Communications Industry Association said it was concerned that the scope of the AI Act had been broadened too far and that it could capture forms of AI that are harmless.

“It is worrying to see that broad categories of useful AI applications – which pose very limited risks, or none at all – would now face stringent requirements, or perhaps even be banned in Europe,” Boniface de Champris, policy director at CCIA Europe, told CNBC via email.

“The European Commission’s initial proposal for the AI Act takes a risk-based approach and regulates specific AI systems that pose a clear risk,” de Champris added.

“EU lawmakers have now introduced all sorts of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous.”

What the experts say

Dessi Savova, head of continental Europe for the technology group at law firm Clifford Chance, said the EU rules would set a “global standard” for AI regulation. However, she added that other jurisdictions including China, the US and the UK are rapidly developing their own responses.

“The long reach of the proposed AI rules in itself means that AI players in every corner of the world need to care,” Savova told CNBC via email.

“The real question is whether the AI Act will set the single standard for AI. China, the US and the UK, to name a few, are defining their own AI policies and regulatory approaches. Undeniably, they will all be closely following the AI Act negotiations in tailoring their own approaches.”

Savova added that the latest draft AI law from parliament would incorporate many of the ethical AI principles that organizations have been pushing for.

Sarah Chander, senior policy adviser at European Digital Rights, a Brussels-based digital rights campaign group, said the laws would require foundation models like ChatGPT to “undergo testing, documentation and transparency requirements.”

“While these transparency requirements will not eradicate infrastructural and financial problems with the development of these massive AI systems, they do require tech companies to disclose the amounts of computing power required to develop them,” Chander told CNBC.

“There are currently several initiatives to regulate generative AI around the world, such as in China and the United States,” Pehlivan said.

“However, the EU’s AI Act is likely to play a crucial role in the development of such legislative initiatives around the world and lead the EU to once again become a standard-setter on the international stage, in the same way as what happened with the General Data Protection Regulation.”
