Although generative artificial intelligence has radically transformed our practices, defining and implementing ethical AI remains a major and complex challenge. Global convergence on what constitutes ethical AI remains elusive, despite universal recognition of its importance for the responsible development of this technology. Here is an overview of the initiatives taken by governments and the GAFAM companies, reflecting a complex landscape where significant advances towards ethical regulation of AI face persistent challenges and require ongoing coordination on a global scale.
Global AI regulation at several speeds
Let's start with the old continent: after nearly six years of work, the European Commission recently finalized the AI Act, aimed at regulating artificial intelligence. Although this legal framework introduces principles of human responsibility, explainability and transparency, it still leaves some grey areas, notably in the classification of risks associated with the different categories of AI (prohibited AI, high-risk AI, general-purpose AI, "non-risk" AI).
In the U.S., the "U.S. Executive Order on AI" issued by the Biden administration at the end of last year promotes the ethical, safe and reliable development and use of AI through a set of guidelines which, this time, appear to be fairly non-binding. As is often the case with Uncle Sam, civil rights and the freedoms of enterprise and innovation must not be constrained too brutally...
Although this text calls for cross-cutting legislation on AI and data protection, it is above all state-level initiatives that will set the tempo and trajectory. Some states are already beginning to specify locally applicable rules: Florida, for example, would like to strengthen transparency around AI-generated political content, while California is proposing rules around "automation systems". Things are moving forward, however, with the creation of the AI Safety Institute Consortium, a structure whose official mission is to define limits for the "use and development of AI"... and, probably, as a hidden goal, to ensure that the USA remains at the forefront of AI innovation.

In the north of the American continent, Canada announced at the end of 2023 the implementation of a "Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems". This is meant to ensure that companies developing and using these technologies do so in a secure and non-discriminatory way. In short, the beginnings of a voluntary approach to ethics.
And then there's China, which has been investing in AI for many, many years. It's easy when you control the data (including biometric data) of well over a billion people in your own country, and probably a few million more outside its borders!
But surprising as it may seem, efforts are being made in China to define an ethical framework for the use of AI, including algorithmic rules and recommendations to avoid discrimination based on age, gender or origin.
The aim, for the Chinese government, is to ensure that generated content does not contain "false and harmful information" and, above all, that it follows the political line, without hindering development and investment, so that China stays in the race with the USA.
On a global level, organizations such as UNESCO have drawn up recommendations on the ethics of AI, which are gradually influencing national policies.
GAFAM and GenAI players in search of authenticity
To conclude this overview, let's mention the virtuous intentions of the generative AI players and the GAFAM companies, each of which, in turn, is displaying a willingness to set up mechanisms for identifying AI-generated content:
- SynthID at Google, which embeds a watermark and metadata identifying AI-generated images within its ecosystem, to "promote trustworthy information".
- At Meta, the fight against misinformation is taking shape through invisible watermarks added by a deep learning model, as announced by Nick Clegg, Meta's President of Global Affairs (a toy sketch of the watermark idea follows below). Similar initiatives are also underway at OpenAI, Midjourney and others, with the aim of making it possible to trace the origin of content.
- And finally, Microsoft and Adobe are among the founders of the "Coalition for Content Provenance and Authenticity" (C2PA).
These are all initiatives which, while not directly related to ethics, will make it possible to trace and identify AI-generated content and, hopefully, to verify the authenticity of online content.
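To make these two mechanisms concrete, here is a minimal, purely illustrative sketch in Python. It is not SynthID or Meta's method (both are undisclosed, learned systems); it uses the classic textbook technique of hiding a marker in the least significant bits of an image's pixels, plus an unsigned text chunk standing in for C2PA-style provenance metadata. The `MARK` tag, function names and metadata fields are all hypothetical.

```python
# Toy invisible watermark: hide a marker string in the least significant
# bits (LSBs) of an image's red channel. Purely illustrative; production
# watermarks are learned encodings designed to survive edits.
import numpy as np
from PIL import Image
from PIL.PngImagePlugin import PngInfo

MARK = "AI-GENERATED"  # hypothetical provenance tag

def embed(img: Image.Image, mark: str = MARK) -> Image.Image:
    """Hide `mark` in the LSBs of the red channel."""
    pixels = np.array(img.convert("RGB"))
    bits = np.unpackbits(np.frombuffer(mark.encode(), dtype=np.uint8))
    red = pixels[..., 0].ravel().copy()                    # flat copy
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits    # overwrite LSBs
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def extract(img: Image.Image, length: int = len(MARK)) -> str:
    """Read `length` bytes back from the red channel's LSBs."""
    red = np.array(img.convert("RGB"))[..., 0].ravel()
    bits = red[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

marked = embed(Image.new("RGB", (64, 64), "white"))
assert extract(marked) == MARK   # invisible to the eye, readable by code

# C2PA takes the complementary route: provenance travels as signed
# metadata attached to the file rather than as pixel changes. Here we
# merely attach an unsigned, illustrative PNG text chunk.
info = PngInfo()
info.add_text("provenance", '{"generator": "example-model", "ai": true}')
marked.save("marked.png", pnginfo=info)
```

Note that this toy mark is destroyed by JPEG compression or cropping, which is exactly why the real initiatives above rely on robust, learned watermarks combined with cryptographically signed metadata.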

Regulation... but also education: the other major issue behind the development of AI
In the rapidly evolving field of artificial intelligence, ethics is not simply a matter of establishing moral standards; above all, it must ensure that the application of these technologies does not infringe fundamental human rights such as dignity, privacy and freedom of expression. Ethical AI must avoid deception and discrimination, and requires transparent, explainable algorithms, while holding its designers and users accountable. While initiatives such as digital watermarking are promising for identifying manipulated content, they stop neither its spread nor the dissemination of false information from unregulated models.
At a time when technological evolution often outpaces legislation, it's crucial not only to collaborate on the development of a global ethical framework, but also to strengthen education. It's essential that we teach our children, and ourselves, to critically analyze and evaluate information before accepting it as reliable. Understanding and using generative AI responsibly has never been more urgent, particularly in the run-up to major elections likely to be influenced by misinformation...
Let's stop wasting time drafting ever more detailed legal texts that will always lag behind technology... and instead explain, train and educate!
Jérôme Malzac, Innovation Director at Micropole