
How the EU plans to regulate artificial intelligence

December 9, 2023

The EU Parliament and the member states agreed on a draft of their new AI Act. So what exactly will the landmark regulations entail?

https://p.dw.com/p/4ZyYM
A demonstration of AI facial recognition
Mass facial recognition to classify people will be banned in the EU (Image: David McNew/AFP/Getty Images)

The impact that using artificial intelligence will have in almost all areas of life is enormous. While there are huge opportunities for commercial enterprises, there are also risks for users. Even Sam Altman, the head of OpenAI, the company behind the ChatGPT language model, has issued such warnings. Some scientists even argue that artificial intelligence could pose a threat to humans if it develops aggressive applications beyond our control.

This is why the EU set out to be the first major economic region worldwide to develop comprehensive regulations for AI. The aim is to achieve comprehensible, transparent, fair, safe and environmentally friendly AI, according to the European Commission's draft legislation. But this need not hinder development opportunities for AI startups, EU Industry Commissioner Thierry Breton said after the Commission, representatives of the European Parliament and the Council of member states agreed on the proposal in what is known as a "trilogue" meeting. The agreement must now be approved by a committee vote and confirmed by the plenary.

Industry Commissioner Thierry Breton emphasized that the EU is the first to regulate AI (Image: EU/Lukasz Kobus)

So what will be regulated?

The EU has formulated a technology-neutral definition of artificial intelligence, so that the law can also be applied to future developments and the next generations of AI. Rules for specific AI products can then be issued in the form of simple ordinances.

AI products are divided into four risk classes: unacceptable risk, high risk, generative AI and limited risk.

Prohibited

Systems that manipulate people's behavior, for example toys that encourage children to perform dangerous actions, fall into the unacceptable-risk category. The same goes for remote biometric recognition systems that identify faces in real time. AI applications that sort people into classes based on characteristics such as gender, skin color, social behavior or origin will also be banned.

Exceptions will be made for the military, intelligence services and investigative authorities.

"My Friend Cayla" dolls on a shelf
Toys that observe or direct children's behavior will be bannedImage: Dirk Shadd/ZUMA/picture alliance

Only with approval

AI programs that pose a high risk will be subject to review before they are approved for the market, in order to prevent adverse impacts on fundamental rights. These risky applications include self-driving cars, medical technology, energy supply, aviation and toys.

However, they also include border surveillance, migration control, police work, the management of company personnel and recording biometric data in ID cards.

Programs intended to help with the interpretation and application of the law are also classified as high-risk and subject to regulation.

Transparency for generative AI

According to EU legislators, systems that generate new content and analyze vast amounts of data, such as generative AI products like ChatGPT from the Microsoft-backed company OpenAI, pose a medium risk.

Companies are obliged to be transparent about how their AI works and how it prevents illegal content from being generated. They must also disclose how the AI was trained and which copyright-protected data was used. All content generated with ChatGPT, for example, must be labeled as such.

Limited regulations

According to the new EU rules, programs that manipulate or recreate videos, audio or photos pose only a limited risk. This includes so-called "deepfakes," which are already commonplace on many social media platforms. Customer service programs, such as chatbots, also belong to this risk class, and only minimal transparency rules apply to them.

Users must simply be made aware that they are interacting with an AI application rather than with a human. They can then decide for themselves whether to continue using the program.


When will the new law come into force?

After three long days of negotiations, the three main EU institutions (the European Commission, the Parliament and the Council of Ministers) agreed on a preliminary draft law, which does not yet contain all the technically necessary provisions. The draft must now be formally approved by the European Parliament and by the Council, the body representing the 27 member states; this is due to happen in April 2024, at the end of the Parliament's legislative period. Member states will then have two years to implement the AI law.

Given the rapid developments in artificial intelligence, there is a risk that the EU rules will already be outdated by the time they come into force, as German Christian Democrat MEP Axel Voss warned even before the negotiations began.

OpenAI now offers paid versions of ChatGPT that users can modify according to their own wishes and specifications. According to research by the UK broadcaster BBC, these "toolkits" can, for instance, write fraudulent emails for hackers or others with criminal intentions.

"We need to make sure that everything that has been agreed upon works in practice. The ideas in the AI law will only be workable if we have legal certainty, harmonized standards, clear guidelines and clear enforcement," Voss said in Brussels on Friday..

How has the tech sector reacted?

The Computer and Communications Industry Association in Europe (CCIA) warned on Saturday that the EU's compromise proposal is "half-baked" and could over-regulate many aspects of AI. "The final AI Act lacks the vision and ambition that European tech startups and businesses are displaying right now. It might even end up chasing away the European champions that the EU so desperately wants to empower," CCIA Policy Manager Boniface de Champris told DW.

The European Consumer Organisation (BEUC), a consumer advocacy group, also criticized the draft law. In its initial assessment, it said the law is too lax because it gives companies too much room for self-regulation without providing sufficient guardrails for virtual assistants, toys or generative AI such as ChatGPT.

Only non-binding declarations were made at the AI summit in London in November: OpenAI CEO Sam Altman (left) with UK Prime Minister Rishi Sunak (Image: Alastair Grant/AFP/Getty Images)

How does the EU now compare to other countries?

The United States, the United Kingdom and 20 other countries have issued data protection rules and recommendations for AI developers, but none of these are legally binding; the expectation is that big tech companies working on AI will voluntarily monitor themselves. In the US, an "AI Safety Institute" is meant to assess the risks of AI applications, while President Joe Biden has instructed developers to disclose their tests if national security, public health or safety are at risk.

In China, the use of AI by private customers and companies is severely restricted because the communist regime fears that it will no longer be able to censor learning systems as easily as it censors the internet. ChatGPT, for example, is not available in China. Facial recognition, however, is already being used on a large scale on behalf of the state.

This article originally appeared in German.

Bernd Riegert is a senior European correspondent in Brussels with a focus on people and politics in the European Union.