On 14 June 2023, the European Parliament (“EU Parliament”) published its final position on the Artificial Intelligence Act (“AI Act”). Thus, after the initial draft of the European Commission (“EU Commission”) and the position of the Council of the European Union (“EU Council”), all three drafts are now available and the way is clear for the trilogue. Against the backdrop of the EU Parliament elections in 2024, this is expected to take place soon. The most important changes to the Commission’s draft are highlighted below.
Extension of the scope of application
The EU Parliament has broadened the definition of the term “artificial intelligence system” (“AI system”) and thus extended the scope of the AI Act. An AI system is now defined as a machine-based system designed to operate with varying degrees of autonomy that can, for explicit or implicit goals, generate outputs such as predictions, recommendations or decisions influencing the physical or virtual environment. With this, the parliamentarians have endorsed the OECD’s definition. Although this definition is probably one of the most widely recognised, it has drawn criticism for being too broad: even technically simple devices – such as smart home devices or “normal” software – could fall under the AI Act.
Under the parliamentary draft, systems for the imperceptible manipulation of human behaviour, the exploitation of a person’s vulnerabilities and social scoring remain banned. However, a number of other AI systems are added to the list of prohibited practices, including AI applications for predictive policing and risk assessment, biometric categorisation systems, facial recognition databases built by scraping social media or surveillance camera footage, and emotion recognition systems in certain areas such as law enforcement. The ban on biometric identification systems has also been extended.
Changes in the area of high-risk AI
The list of high-risk systems has grown less markedly. One notable addition, however, could have great practical relevance: recommendation systems of very large online platforms (so-called VLOPs) are now also classified as high-risk, as are AI systems intended to influence elections or voter behaviour. Another important innovation for high-risk AI systems is the introduction of a second classification level: AI systems are only to be considered high-risk if they also pose a significant risk to the health, safety or fundamental rights of natural persons.
Important changes are also made on the topic of “generative AI” – the area of AI at the core of public discussion since the release of ChatGPT at the end of November 2022. Consequently, generative AI is regulated separately neither in the EU Commission’s initial draft nor in the Council’s proposed amendments of December 2022. The EU Parliament sets out various requirements for providers of foundation models. These include the establishment of a risk management system, the use of appropriate data sets, ensuring adequate quality (performance, predictability, safety, etc.) through appropriate measures, compliance with energy efficiency standards, the preparation of adequate technical documentation and instructions for use, the establishment of a quality management system, and the registration of the foundation model. Additional obligations apply to providers of generative AI. They must comply with transparency obligations, ensure adequate safeguards against the generation of content that infringes EU law – in accordance with the state of the art and without compromising fundamental rights such as freedom of expression – and publish a sufficiently detailed summary of the use of copyright-protected training data. Since the Commission and the Council have not (yet) defined their positions on generative AI, the outcome of the trilogue on this point is awaited with particular interest.
Also noteworthy are the adjustments to the fines for violations of the AI Act. The EU Parliament has significantly reduced the fines overall. Only for placing prohibited AI systems on the market was the maximum fine raised, from up to 30 million to up to 40 million euros (or 7% of worldwide annual turnover). The upper limits for SMEs were deleted, but the size and economic performance of the offender are to be taken into account when assessing fines. A new fine provision has been added for violations of the rules on data governance and on transparency and the provision of information; here the draft provides for a fine of up to 20 million euros or 4% of worldwide annual turnover.
Source: Taylor Wessing Insights as of 14 June 2023