AI Act: regaining control before AI gets out of hand

The implementation of the European AI Act marks a turning point for companies: what was previously a matter of experimentation or innovation is now a matter of compliance, sovereignty and governance. And yet, in many organizations, generative artificial intelligence is spreading silently, outside any formal framework. Public tools, quick exchanges between colleagues, unsupervised content generation: these spontaneous uses often go unchecked.

In this context, risk is no longer theoretical. It is already there, at the heart of our practices. And in the face of it, there are two urgent needs: to regain control of what lies beyond governance, and to build a control strategy that complies with the European framework. The AI Act is not simply a regulatory constraint: it is an opportunity to restore rigor, clarity and responsibility to the use of AI.

Shadow AI and regulation: a dual threat to be contained

One of today's major flaws lies in what we can't see. While some AI projects are framed, validated and supervised by the IT department or innovation teams, the overwhelming majority of uses escape all governance. This is the logic of "Shadow IT", applied to artificial intelligence: today, any employee can mobilize generative tools to write, translate, summarize or illustrate, without any validation or guarantee. As a result, risks are multiplying. Leakage of sensitive data, reproduction of protected content, undetected errors, lack of traceability: everything is in place to compromise an organization's security and compliance.

But the AI Act makes these gray areas unacceptable. Companies will have to demonstrate their ability to manage their artificial intelligence systems, including those used internally. Data traceability, model documentation, transparency of use, risk assessment: the requirements are clear. Added to this is a major strategic constraint: guaranteeing sovereignty over the data used. Beyond the fines, which can reach 7% of worldwide annual turnover, the integrity of a company's information capital is at stake. Allowing uncontrolled uses to flourish means running the risk of losing control over what we produce, publish or decide based on AI.

Training and governance: two levers for regaining control

This situation calls for two urgent projects.

The first is acculturation. Indeed, the European regulation explicitly requires AI training for all employees (Article 4). It is no longer a question of raising the awareness of a few teams on an ad hoc basis, but of structuring a massive training effort on the fundamentals of AI, and in particular generative AI. Every employee needs to understand the limits of models, their risk zones, their biases, but also the conditions for their responsible use. This is what we call AI Literacy: the ability of every player in the company to understand the principles, uses and limits of artificial intelligence. It encompasses an understanding of algorithms, awareness of ethical issues and knowledge of the legal framework. Without this shared foundation, use cases proliferate in management's blind spot, often with irreversible consequences.

The second lever is governance. AI is no longer a simple optimization tool: it becomes a system to be managed, evaluated and documented. In particular, the AI Act requires use cases to be classified according to their level of risk, with full technical documentation, compliance assessment and post-deployment monitoring. This requirement cannot be met by data or IT teams alone. It calls for a cross-functional approach involving the legal, business, compliance and IT departments. Only then will a company be able to justify its choices in the event of an audit, prevent abuses and maintain a sufficient level of confidence in its tools.

Sharing responsibility, striking a balance

The widespread use of AI in companies calls for a new form of collective responsibility. The challenge is not to pit innovation against control, but to reconcile the two with clear eyes. Individual use and experimentation must be able to coexist with a clear compliance framework, guaranteeing data security, traceability and sovereignty.

This responsibility does not rest on a single player. It involves CDOs, CIOs, business and legal departments in a joint governance effort. Only then can AI become a sustainable lever for transformation.

Eliott Mourier

Senior Manager Data Governance
Micropole, a Talan company
