"A new era is coming," says Bruce Daley, a researcher at Stanford University. According to him, " technologies built on the model of the human brain, such as deep learning, are performing tasks as varied as medical diagnosis, risk assessment, program trading, fraud detection, product recommendation, image classification, voice recognition, translation or autonomous vehicles. And the first results already speak for themselves."¹
Artificial intelligence (AI) is considered one of the most promising technologies of our time, as it can deliver significant benefits in fields as varied as healthcare, finance, manufacturing and transportation. However, it also raises many concerns, including potential risks to jobs, privacy, security and national sovereignty. Understanding both the benefits and the risks of AI is essential to getting the most out of it while minimizing the dangers².
Artificial intelligence fascinates as much as it worries
Artificial intelligence has been booming since the 2010s. The GAFAM companies (Google, Apple, Facebook, Amazon, Microsoft) quickly invested heavily in the field and set up teams dedicated to researching and developing this technology. Since 2020, in the wake of the COVID-19 pandemic, the use of artificial intelligence has accelerated in areas such as disease detection, vaccine and treatment research, and propagation prediction, but also with the aim of strengthening security, anti-fraud and crime-prevention mechanisms.
The AI market is expected to reach $11.1 billion by 2025. Gartner estimates that by 2035, AI could help increase global productivity by 40%³.
The introduction of certain AI systems was widely acclaimed for their relevance and breadth of knowledge across different fields. However, this initial admiration turned to distrust and fear as users began to realize the risks associated with these technologies. In December 2022, the journal Nature expressed concern about researchers' ability to distinguish abstracts written by artificial intelligence from those written by scientists, highlighting the difficulty of establishing absolute trust in these technologies.
In light of this, it is important to understand that AI mechanisms can infringe on privacy, be discriminatory or manipulative, or even cause physical, psychological or economic harm. It is therefore crucial to carefully assess the benefits and risks associated with AI so that it can be used responsibly and potential dangers can be minimized.
Artificial intelligence: an emerging regulatory framework
There are currently few regulations specific to artificial intelligence. However, many governments and international organizations are working to develop rules to govern its use and prevent risks. It is important to ensure that AI complies with existing laws and ethical standards, including privacy and non-discrimination.
Although AI systems were already partially regulated by the General Data Protection Regulation (GDPR) as of 2018, the European Commission adopted, on April 21, 2021, a proposal for a regulation, the AI Act, aiming to provide "a legal framework for trustworthy AI."
This future regulation, which would be only the second with extraterritorial scope, could come into force between 2023 and 2026. Its objectives are clear⁴:
- Position the European Union as a major player in AI regulation,
- Build a coherent European digital strategy that respects the fundamental rights and freedoms of individuals,
- Promote cooperation between member states and prevent market fragmentation,
- Facilitate the development of a single market for legal, safe and trustworthy AI applications.
A key role for suppliers, distributors and users (companies) to ensure control and implementation of AI systems
Suppliers, distributors and users are all affected by the implementation and monitoring of AI systems, as each has a different role to play in their use and management.
The Artificial Intelligence Act will apply to the following actors⁵:
- AI system providers, whether established in the Union or in a third country, who place AI systems on the market or put them into service in the EU,
- AI system distributors located in a third country when the results generated by the system are used in the European Union,
- Users (legal entities) of AI systems located in the European Union.
Providers responsible for developing and supplying AI technologies to businesses are bound, before an AI system is placed on the market or used within the EU, by numerous obligations regarding conformity assessment, quality management, information to the competent national authorities, and technical documentation.
The Artificial Intelligence Act will not apply to:
- Public authorities of a third country or international bodies using AI systems in the framework of international police and judicial cooperation agreements with the Union or with one or more Member States,
- Purely private and non-commercial use of the AI system.
Contents and impacts of the regulation for suppliers, distributors and users
The proposed regulation is based on a risk-based approach that involves classifying AI systems by level of risk. The regulators distinguish four categories of AI classified according to their risks to individuals:
- AI systems with "unacceptable" risks. They include mechanisms that influence behavior, leading to discrimination, in particular through the classification of people (e.g. refusal of a loan due to bad social behavior). The implementation of such systems is prohibited.
- "Highly risky" AI systems. Eight systems have been defined as "very risky", such as vocational training, access to public services, border controls, etc. These systems require, among other things, the implementation of a risk management system, clear transparency towards individuals or human supervision. These systems require, among other things, the implementation of a risk management system, a clear transparency towards individuals or a human supervision. These systems, once compliant, are subject to a CE mark guaranteeing the conformity of the AI and the protection of the rights of individuals.
- AI systems "with transparency obligation" having interactions with humans, used in particular to detect emotions or generating modified content (e.g.: chabots). These systems can be implemented subject to clear information to the consumer and the implementation of a code of conduct.
- AI systems with minimal or no risk such as predictive maintenance.
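The four-tier approach described above can be sketched as a simple lookup. This is a purely hypothetical illustration: the tier names and obligation lists below are simplified paraphrases of the proposal, not legal text.

```python
# Hypothetical sketch of the AI Act's risk-based classification.
# Tier names and obligations are illustrative simplifications.
RISK_TIERS = {
    "unacceptable": {"allowed": False, "obligations": []},
    "high": {"allowed": True,
             "obligations": ["risk management system",
                             "transparency towards individuals",
                             "human oversight", "CE marking"]},
    "transparency": {"allowed": True,
                     "obligations": ["inform the consumer",
                                     "code of conduct"]},
    "minimal": {"allowed": True, "obligations": []},
}

def obligations_for(tier: str) -> list:
    """Return the indicative obligations for a risk tier,
    or raise if the tier is prohibited outright."""
    info = RISK_TIERS[tier]
    if not info["allowed"]:
        raise ValueError(f"AI systems in the '{tier}' tier are prohibited")
    return info["obligations"]
```

For example, a predictive-maintenance system ("minimal" tier) carries no specific obligations, while a high-risk system must satisfy the full list before receiving its CE marking.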
Failure to comply with the obligations of the Artificial Intelligence Act may result in financial penalties and reputational damage
As with the GDPR, the Artificial Intelligence Act provides for financial penalties for non-compliance with certain obligations⁶:
- Up to €30 million in fines or 6% of worldwide annual turnover for directly or indirectly causing moral or physical harm to individuals or manipulating their behaviour, for discrimination, and for failure to comply with the principles of the risk-based approach,
- Up to €10 million or 2% of worldwide annual turnover in the case of misleading or inaccurate information,
- Up to €20 million or 4% of worldwide annual turnover in the event of other breaches of the obligations of the Artificial Intelligence Act.
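The penalty ceilings above can be illustrated with a short sketch. This is an assumption-laden example, not legal advice: following the GDPR model, the applicable cap is taken to be the higher of the fixed amount and the percentage of worldwide annual turnover, and the breach-category names are invented labels.

```python
# Hypothetical sketch of the AI Act's penalty ceilings.
# Category labels are illustrative; amounts are (fixed cap in EUR, % of turnover).
PENALTY_TIERS = {
    "prohibited_practices": (30_000_000, 0.06),   # harm, manipulation, discrimination
    "misleading_information": (10_000_000, 0.02),
    "other_breaches": (20_000_000, 0.04),
}

def max_fine(breach: str, worldwide_turnover_eur: float) -> float:
    """Return the indicative maximum fine, assumed (as under the GDPR)
    to be the higher of the fixed cap and the turnover percentage."""
    fixed_cap, pct = PENALTY_TIERS[breach]
    return max(fixed_cap, pct * worldwide_turnover_eur)
```

For a company with €2 billion in worldwide turnover, a prohibited-practice breach would thus be capped at max(€30M, 6% of €2B) = €120 million, showing how quickly the percentage-based ceiling overtakes the fixed amount for large players.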
Beyond these financial sanctions, companies can expect reputational damage in the event of non-compliance with the obligations set forth by the legislator. Indeed, the sanctions may be made public by the supervisory authorities, as they are under the GDPR (General Data Protection Regulation).
The "Artificial Intelligence Act" project includes ambitions asserted by the European Union, which will not be without consequences for AI actors acting on the European territory.
With this proposed regulation, the European Union is once again asserting its determination to impose extraterritorial rules in the face of dominant players in Asia and America. After a regulation focused on the protection of personal data and the rights of data subjects (the GDPR), the European Commission is now legislating on a technology and its implementation framework in order to promote innovation and ethics and to position Europe as a major player in artificial intelligence.
In the coming years, companies should therefore expect to deepen their risk-based approach in order to bring their processes into compliance, but also to respond to new issues raised by the rapid evolution of these technologies.
Senior Data privacy & Data governance Consultant
Data compliance & Data privacy Manager
1, 2: Microsoft
3: CNRS
4: CNIL
5, 6: European Commission