AI: Our Fears for France
A critical analysis of the AI Commission's report, denouncing conflicts of interest and the minimization of risks.
AI development is progressing at breakneck speed: in 2020, AI systems struggled to count to ten; today they already surpass human capabilities in many domains. While AI holds the potential for positive advances, the flip side is terrifying.
According to AI safety and ethics experts, unchecked development also entails major risks: large-scale disinformation, mass manipulation, devastating cyberattacks, engineered pandemics, and loss of control over autonomous systems — up to threatening the very survival of humanity.
Pause AI, in line with the warnings of AI safety researchers, promotes responsible development of this rapidly evolving technology. We call for a moratorium on training more dangerous systems, to avoid a global catastrophe.
The dangers are real, but hope exists. By acting together, we can shape a future where AI remains a beneficial tool for humanity.
AI labs are planning to automate all human work within 4 years. Whether they fully achieve this goal or not, it's clear we're heading toward a major upheaval in the labor market. We must prepare for an unprecedented economic shock.
The economic and material dangers of AI directly threaten our infrastructure, businesses, and resources. This technology introduces unprecedented vulnerabilities and possibilities for catastrophic failure: imagine devastating AI-facilitated attacks, or the collapse of entire sectors in the face of these disruptive technologies. Though less visible than the dangers that directly threaten human lives, these risks can have equally massive consequences.
We must exercise heightened vigilance and put rigorous regulations in place to protect ourselves.
To learn more about AI risks in the labor market, visit our AI Employment working group.
The largest French-language conference on AI safety, bringing together experts on the eve of the AI Summit.
Autonomous AI agents, the 2026 international safety report, resignations from the safety teams of AI giants, and a recap of the New Delhi summit.
AI self-replication and deception, Mistral's partnership with the French Armed Forces, and the launch of Pause IA local groups.
Growing AI capabilities, insufficient safety audits, and geopolitical tensions over regulation.
An autonomous AI-driven cyberattack, a new definition of AGI, and OpenAI's conversion into a for-profit company.