Artificial Intelligence (AI) is weaving its influence into countless aspects of our lives, aiding scientists in interpreting extensive data sets, spotting financial fraud, navigating our vehicles, recommending music, and interacting through chatbots. This is merely the beginning of its impact.
The swift evolution of AI poses a significant question: are we equipped to grasp its rapid development? And if we are not, could this lack of understanding be part of what is known as the Great Filter?
The Fermi Paradox points out the contradiction between the high probability of other advanced civilizations in the universe and the complete absence of evidence for any. Among the various explanations proposed is the concept of the ‘Great Filter,’ a hypothetical barrier preventing intelligent life from achieving interplanetary and interstellar existence, potentially leading to its extinction due to various catastrophic events.
Could the rapid advancement of AI be a potential Great Filter?
A recent paper in the journal Acta Astronautica presents the idea that the progression from Artificial Intelligence to Artificial Super Intelligence (ASI) could serve as the Great Filter. The paper, authored by Michael Garrett of the Department of Physics and Astronomy at the University of Manchester, is provocatively titled “Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?”
Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations.
Michael Garrett, University of Manchester
Garrett speculates that the Great Filter may be what stops technological civilizations like ours from becoming multi-planetary: such civilizations fail to establish a stable presence beyond their home world, typically enduring less than 200 years.
This proposition could shed light on why we have not detected technosignatures or other evidence of Extraterrestrial Intelligences (ETIs). It also raises crucial questions about our own technological path: if ASI imposes a roughly 200-year limit on civilizations, Garrett argues, the existential stakes make two priorities urgent, establishing regulatory frameworks for AI development and advancing toward a multi-planetary society.
AI’s rise has raised significant concerns among scientists and thinkers, including the potential for job displacement, algorithmic bias, and the erosion of democratic values, while the control and ethical alignment of AI pose complex challenges of their own. Stephen Hawking expressed concern that AI could surpass human intelligence, even evolving into a form of life that outperforms humans.
I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.
Stephen Hawking, Wired magazine, 2017.
Garrett emphasizes the significant concerns surrounding ASI potentially going rogue, underlining the importance of addressing this risk in the coming years.
While AI brings numerous benefits, from medical advancements to safer transportation, the challenge lies in regulating it to foster these benefits while minimizing potential harm. Garrett stresses the need for responsible and ethical development, especially in critical areas like national security and defence.
The unique nature of AI and its trajectory of development present an unprecedented challenge, one that any technologically advanced species would likely encounter. This makes AI and ASI potential universal threats, and therefore plausible candidates for the Great Filter.
Garrett discusses how achieving a technological singularity, where ASI surpasses biological intelligence, could lead to scenarios where ASI evolves beyond human control, potentially posing existential risks. To counter these risks, Garrett advocates for humanity to become a multi-planetary species, thereby enhancing our resilience against AI-induced catastrophes through redundancy and diversified survival strategies.
However, the stark contrast between the rapid advancement of AI and the slower progress in space technology highlights a significant challenge in keeping pace with AI development. Garrett calls for a concerted effort in space exploration and the establishment of international regulations for AI, aiming to secure the future of intelligent life in the universe.
Addressing these challenges through legislative action and international cooperation is crucial for navigating the rapid advancement of AI and ensuring the long-term survival and evolution of intelligent life.