France and Europe do not want to stand by while the new digital revolution constituted by artificial intelligence (AI) passes them by, but they are concerned about the potential risks AI poses, particularly to individual freedoms. After having long held up the drafting of a European text, France, keen to grow its AI start-ups, finally gave its agreement. Lawyer Xavier Prés (partner at the Varetprès Killy firm) answers six questions raised by this text.
What is the “AI Act”?
This is the proposed European Union (EU) regulation on artificial intelligence. It is a transversal text that is not limited to copyright but broadly embraces all sectors. The text aims to provide the EU with a general framework, unprecedented at the global level, allowing AI systems to develop in a climate of trust and in compliance with fundamental rights and EU values.
The text was adopted unanimously on February 2, 2024 by the 27 EU member states. It was voted on by the European Parliament on March 13, 2024. It will apply gradually, from the end of the year, in order to allow economic operators, private and public, to take the measures necessary to bring their AI systems into compliance.
Who is concerned?
The EU regulation on AI is a transversal text. It is intended to apply broadly to any operator of AI systems (supplier, deployer, distributor, manufacturer, importer) whose head office is located in the EU or, under certain conditions, in a third country when the AI systems are marketed in the EU. It is also intended to apply to all sectors, with the exception, however, of systems used exclusively for military, defense or national security purposes, or exclusively for scientific research and development.
What does the text provide?
In essence, the regulation seeks to promote innovation while protecting society, following a regulatory approach graded according to risk: AI systems are classified by their level of risk, and the legal constraints vary in proportion to that risk.
What are the most dangerous AIs?
The most dangerous are those considered to present an “unacceptable risk”. They are prohibited. In essence, these are AI systems that use techniques to alter a person's decision-making power (e.g. subliminal techniques), that evaluate or rank people (social scoring), or that use “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in special cases.
What comes next?
An intermediate category targets “high-risk” AI systems. These are, on the one hand, AI systems which, under certain conditions, are used in connection with a product covered by EU product safety legislation, according to a list defined in Annex 2 (nearly twenty EU regulations and directives are concerned, covering products as diverse as toys, elevators, cables, medical devices, transport, etc.). On the other hand, these are AI systems falling into the following eight crucial areas: biometric data; critical infrastructure; education and vocational training; employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; the management of migration, asylum and border controls; and the administration of justice and democratic processes.
The applicable constraints are numerous and relate in particular to compliance with harmonized standards, declarations of conformity, registration in an EU database, conformity marking, and the preparation of technical documentation making it possible, in particular, to demonstrate compliance with the various applicable requirements.
What are the least dangerous AIs?
The least dangerous are those considered to present a “limited risk”. This is a residual category, covering all AI systems that are neither prohibited (unacceptable risk) nor heavily regulated (high risk). Among the latter, generative AI systems are particularly targeted, such as ChatGPT, DALL-E, Midjourney (see ill.), Stable Diffusion and Gemini. They are subject to various transparency obligations. One of these concerns copyright and should make it possible to compel those who operate generative AI to identify the content used to feed the AI, starting with creations protected by copyright, which we know are widely used as training data for algorithms, often without the authors' authorization. So far, at least…