After a marathon negotiation, estimated to have lasted more than 36 hours, the European Commission, the European Parliament and the Council of the European Union reached a provisional agreement on the Artificial Intelligence Law, or AI Act.
The regulation, whose negotiating mandate Parliament approved last June, is the first of its kind worldwide. And although it has undergone modifications since the European Commission proposed it in 2021, it has now taken a decisive step towards implementation. Something that, let us remember, will not happen immediately.
The political agreement reached in the trilogue to give the green light to the Artificial Intelligence Law will now have to be ratified: both the European Parliament and the Council of the European Union must formally approve it. Its full application, moreover, will come only two years after its publication.
This means that, if the legislation is approved in 2024, it will formally take full effect in 2026, although there are exceptions for some points. The prohibitions established by the standard will begin to apply after 6 months, while the rules established for “general purpose artificial intelligence” will be applicable one year after approval.
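The staggered timeline can be sketched as simple date arithmetic. This is an illustration only: the publication date used below is hypothetical, since the final text had not yet been published when this agreement was reached.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Minimal month-addition helper; assumes the day of the month exists
    # in the target month (safe for first-of-month dates as used here).
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

# Hypothetical publication date, chosen purely for illustration
publication = date(2024, 6, 1)

prohibitions_apply = add_months(publication, 6)   # bans on "unacceptable risk" uses
gpai_rules_apply = add_months(publication, 12)    # general-purpose AI rules
full_application = add_months(publication, 24)    # the rest of the AI Act
```

With a mid-2024 publication, the prohibitions would bite before the end of 2024, while full application would arrive in 2026, consistent with the timeline described above.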
What does the European Union Artificial Intelligence Law consist of?
The Artificial Intelligence Law of the European Union is presented as a rule book unique in the world, since it establishes the parameters that must be met for the development and use of AI at its different levels. This covers a very wide spectrum of possibilities, from the use of an anti-spam filter in an email service to the collection of biometric data in real time.
According to the European Commission, the rules set out in the Artificial Intelligence Law are “future-proof.” That is, the regulations have been designed so that they do not become obsolete in the face of the rapid evolution of the technology. Their direct application will, of course, be identical in all EU member countries.
As was already known in advance, the European Union Artificial Intelligence Law establishes an approach based on risk categorization. This caused quite a lot of controversy at the time, since the legislation considered applications or platforms like ChatGPT to be high risk.
With the Artificial Intelligence Law, four categories are established:
- Minimal risk: According to the European Commission, the “vast majority” of artificial intelligence systems will fall into this category. They do not have to comply with specific obligations because they do not represent a risk to the security and rights of citizens. Despite this, their developers will be able to voluntarily commit to additional codes of conduct. This section mentions cases such as anti-spam filters or AI-based recommendation algorithms.
- High risk: The new law will force the developers of these artificial intelligence systems to comply with a series of very strict requirements. These will range from ensuring that the data used to train the models is of “high quality” to including risk mitigation systems. They must also provide detailed documentation, be accurate, be clear about how user information is used, and have human supervision.
- Specific transparency risk: In this case, the Artificial Intelligence Law focuses on chatbots and generative AI systems. The legislation establishes that all synthetically generated content must be identified as such – something that platforms such as YouTube or Amazon have already begun to implement with watermarks that cannot be altered – and that users will have to be informed when they are interacting directly with a machine. Likewise, the public must be notified when emotion recognition or biometric categorization systems are in use.
- Unacceptable risk: Artificial intelligence systems, or certain implementations thereof, that fall into this category will be strictly prohibited. The Artificial Intelligence Law includes in this section the use of the technology to manipulate people and restrict their free will, as well as the use of AI for emotion recognition in the workplace. Nor will companies or governments be allowed to implement systems that monitor and score the behavior of citizens. The collection of biometric data in public places for police purposes will also be vetoed, although with exceptions.
Rules for general purpose AI models
The Artificial Intelligence Law considers as “general purpose” the models that power some of today’s most popular platforms: for example, GPT-3.5 or GPT-4 (ChatGPT), PaLM 2 or Gemini (Bard), or LLaMA 2 (Code Llama, Audiocraft), just to mention some of the best known.
The regulations will require the companies that develop them to be transparent, as well as to comply with “binding obligations”. These will be established through codes of practice that the European Commission will develop together with experts from the scientific community, the technology industry and civil society.
“For very powerful models that could pose systemic risks, there will be additional binding obligations related to risk management and monitoring of serious incidents, conducting model evaluations and adverse testing,” the European Union’s executive arm said. A message that points directly to firms like OpenAI, Google and Meta.
The use of biometric data: prohibitions and exceptions
The collection and use of citizens’ biometric data has been one of the points of greatest discussion in the trilogue. In the end, the European Commission, Parliament and the Council agreed on limits and exceptions that are important to mention.
First of all, the European Union Artificial Intelligence Law prohibits the creation of facial recognition databases built from video surveillance footage or images available on the web. Biometric categorization, meanwhile, cannot be carried out based on parameters such as race, religion, sexual orientation, or people’s philosophical and political beliefs.
Added to this is what we already mentioned in the “unacceptable risk” section: for example, the recognition of emotions in the workplace, the manipulation of people, or the implementation of scoring systems based on individuals’ social behavior.
And while the collection of biometric data in public places for police or law enforcement purposes is also generally prohibited, there are some important exceptions here. The Artificial Intelligence Law of the European Union will allow it when specific crimes are involved; for this, safeguards will be established and prior judicial authorization will be required. Its use will also be enabled in two other specific cases:
- Biometric identification after the fact, or ex post: Searches will be allowed for people suspected of having committed a serious crime, or who have already been convicted of one.
- Biometric identification in real time: As in the previous point, its use will be enabled to identify or find people suspected or convicted of serious crimes, among them acts of terrorism, homicide, kidnapping, human trafficking and rape. It is not limited to this: it may also be used to find victims of these types of crime. It is worth clarifying, in any case, that the Artificial Intelligence Law will not enable unlimited use of real-time biometric identification. The regulations restrict its implementation to a specific time and place.
How the Artificial Intelligence Law will be applied
Although each country in the European Union will apply the Artificial Intelligence Law through its own regulatory bodies, a European AI Office will be created. According to the European Commission, its mission will be to “ensure coordination” at the continental level. For general-purpose AI models, panels of independent experts will be created to maintain oversight and issue alerts if necessary.
“Together with national market surveillance authorities, the AI Office will be the first body globally to enforce binding rules on artificial intelligence and is therefore expected to become an international reference point.”
The Artificial Intelligence Law establishes a scheme of fines for companies that do not comply with its guidelines:
- €35 million or 7% of global annual turnover (whichever is higher) in the case of violating prohibitions on the use of artificial intelligence.
- €15 million or 3% of global annual turnover (whichever is higher) in the case of other types of violations.
- €7.5 million or 1.5% of global annual turnover (whichever is higher) in the event of incorrect information being provided.
As we said at the beginning, the Artificial Intelligence Law is expected to achieve full application only in 2026. Even so, European authorities want companies to begin adopting some of the regulation’s key guidelines before it comes into force. That is why there is an AI Pact, which aims for companies that develop AI models to voluntarily commit to comply with certain obligations before the regulations are approved and published.