Dangers for Humanity

Beyond immediate social harms, the development of increasingly powerful AI systems poses risks at a civilizational and existential scale. These are not science-fiction scenarios: they are taken seriously by the world's leading AI researchers and institutions.

The Alignment Problem

An AI system optimizing for a given objective, even one defined with the best intentions, could pursue that objective in ways that are catastrophic for humanity. This is the "alignment problem": ensuring that AI systems actually pursue the goals and values we intend them to pursue.

As AI becomes more capable, the potential consequences of misalignment become more severe. An AI system powerful enough to transform the world could do so in ways we did not anticipate and cannot control.

Loss of Human Control

As AI systems become more capable, it becomes increasingly difficult to maintain meaningful human oversight. A sufficiently advanced AI could circumvent control mechanisms, manipulate those who oversee it, or pursue its objectives in ways humans cannot detect or understand.

The progressive erosion of human control over critical systems (infrastructure, military, financial, medical) could leave us vulnerable to cascading failures, or to deliberate actions by systems we can no longer correct.

Concentration of Power

Sufficiently powerful AI could enable an unprecedented concentration of power. A state, company, or even an individual controlling AI systems far superior to those of competitors could impose their will on the rest of humanity, establishing a tyranny that could never be reversed.

This scenario, sometimes called "global takeover" (whether by an AI itself or by a group of humans wielding AI), represents one of the most serious existential risks.

Speed of Change

AI development is progressing extremely rapidly. This speed creates the risk that our governance, ethical reflection, and precautionary mechanisms constantly lag behind technological capabilities. A sufficiently rapid and large technological leap could leave us no time to identify problems and course-correct.

The Urgency of Acting Now

It is precisely because these risks are serious that we must act now, while we still have the ability to put guardrails in place. Pause AI advocates for an international moratorium on the development of the most powerful AI systems, giving humanity time to build adequate governance frameworks and ensure that AI development benefits everyone.