Key points
The length of tasks AI systems can complete autonomously doubles every 4 to 7 months, while our regulatory cycles remain constant. This exponential progression creates a growing gap between technological advancement and our collective ability to manage it.
The risks are not hypothetical; they are already materializing across multiple domains: childhood and cognitive development, democracy and public debate, employment and concentration of power, cybersecurity and national security. (See sections 3 to 7 for details on risks.)
The root cause: we are deploying "black box" systems whose internal workings we do not understand and whose behaviors we cannot fully control. Current alignment techniques are insufficient and provide no reliable guarantees.
Concrete levers for action exist: proactive regulation, legal liability, mandatory evaluations, investment in AI safety, protection of the EU AI Act, algorithm transparency, international red lines. (See section 8 for detailed recommendations)
Executive Summary
Today, AI systems can autonomously perform tasks that would take a human approximately 2 hours, and this autonomous capability doubles every 4 to 7 months. This exponential progression, sustained by phenomenal growth in energy consumption and investment ($500 billion announced in the United States), is widening at a dizzying speed the gap between technological advancement and our collective ability to manage its consequences.
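To make the pace concrete: taking the 2-hour baseline and the 4-to-7-month doubling period stated above at face value, the trend can be written as a simple exponential projection (an illustrative extrapolation, not a forecast endorsed by the colloquium):

```latex
T(t) = T_0 \cdot 2^{t/d}, \qquad T_0 = 2\text{ h}, \quad d \in [4, 7]\text{ months}
```

After 24 months this gives $2 \cdot 2^{24/4} = 128$ hours of autonomous work at the fast end ($d = 4$), and $2 \cdot 2^{24/7} \approx 21.5$ hours at the slow end ($d = 7$): even the conservative estimate moves from a two-hour task to a multi-day one within two years, while a legislative cycle typically spans several years.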
The October 31, 2025 colloquium at the Senate, organized by the Pause AI association with the support of Senators Ghislaine Senée and Thomas Dossus, brought together experts, civil society and parliamentarians around a dual observation:
1. The risks of AI are not science fiction; they are already materializing. Chatbots encouraging teenagers to commit suicide, cyberattacks paralyzing hospitals, recommendation algorithms fragmenting our shared reality, unprecedented use of resources and concentration of economic power: the concrete impacts affect every aspect of our society. And these risks are amplifying at the pace of capability progression.
2. Regulation is not a brake on innovation; it is a precondition for it. Aviation and the pharmaceutical industry demonstrate this: a reassuring regulatory framework is what builds trust and allows for beneficial deployment of a technology. Faced with a frantic race led by a handful of private actors, public authorities must take back control to ensure that AI serves the public interest.
The identified dangers are serious and numerous
Childhood and cognitive development. 64% of children aged 9 to 17 already use AI. Yet these systems were not designed for developing brains. They create risks of addiction, cognitive atrophy and emotional manipulation through "parasocial relationships" with machines that simulate empathy without feeling it.
Democracy and public debate. Social media recommendation algorithms are opaque and optimized for engagement rather than truth. They polarize our societies and undermine the foundations of democratic debate. As early as 2018, YouTube's algorithms already controlled 700 million hours of viewing per day, the equivalent, each day, of the career-long teaching time of 25,000 teachers. Generative AI can only massively amplify this phenomenon.
Economy and employment. The automation of cognitive tasks threatens entire swaths of skilled employment. Between 17% and 30% of current work could be automated. Without political anticipation, this shock will lead to massive precariousness and a concentration of economic power in the hands of predominantly non-European technology actors, making any redistribution of wealth nearly impossible.
Cybersecurity and biorisks. AI drastically lowers the skill threshold needed to carry out sophisticated cyberattacks or design new biological weapons. Our critical infrastructure is on the front line. Models suggest that the economic damage from cyberattacks could grow four- to eightfold in the coming years.
Current AI systems are inherently uncontrollable
The fundamental problem is that "black box" systems are being deployed at scale whose internal workings no one understands and whose behaviors no one fully controls. No one programmed ChatGPT to encourage teenagers to commit suicide, yet it happens. Current alignment techniques are insufficient and provide no reliable guarantees. The scientific community, including Nobel laureates and Turing Award winners, is sounding the alarm: continuing on this trajectory means accepting a progressive loss of control.
Levers for action
France and Europe can become leaders, not in the race for power, but in the mastery of AI. This requires strong political actions:
- Adopt proactive regulation that legislates for tomorrow's technologies, not yesterday's, anticipating future developments and integrating strict precautionary principles.
- Establish clear legal liability: AI developers and companies that deploy their applications must be held responsible for damages caused. The "black box" nature cannot be an excuse.
- Require mandatory and independent risk assessments before any high-risk AI deployment, modeled on aviation or pharmaceuticals.
- Massively invest in a French and European AI safety sector: support research on AI robustness, transparency and control. Strengthen INESIA's resources and create a center of excellence.
- Protect European regulation: the EU AI Act is under intense lobbying pressure (the tech sector spends more on lobbying than the automotive, pharmaceutical and aeronautics industries combined). Defending and strengthening this regulatory framework is crucial.
- Impose transparency on recommendation algorithms and explore models of democratic governance to protect public debate from manipulation.
- Bring to the international level the establishment of "red lines" on dangerous autonomous capabilities and unacceptable uses of AI.
These measures are not aimed at slowing down the French ecosystem, but at framing the high-risk race toward Artificial General Intelligence led by a handful of international laboratories.
Going further
This colloquium opened an essential dialogue. The Pause AI association and the mobilized experts are fully available to parliamentarians to deepen these subjects, organize hearings and contribute to working groups aimed at translating these levers into concrete legislative proposals.