
AI and Existential Risk: Assessing the Real Threats to Humanity

June 3, 2025 · 08:01

Could artificial intelligence really pose a threat to humanity’s future? In this episode of IntelligentPod, we take a clear-eyed look at the probabilities behind AI and existential risk. Drawing from expert surveys, academic research, psychological insights, and real-world examples, we unpack the question: How likely is it that advanced AI—like AGI—could endanger human survival? Explore more episodes, show notes, and bonus content at https://intelligentpod.com


Episode Transcript

Full transcript of this episode

Hello, friends, and welcome back to IntelligentPod, the show where curiosity meets clarity. I’m your host, Sophie Lane, and as always, I’m so grateful you’ve tuned in. Whether you’re out for a walk, commuting, or just relaxing at home, I hope IntelligentPod offers you a thoughtful pause in your day. Today, we’re diving deep into a topic that’s both fascinating and, let’s be honest, a little daunting: “AI and Existential Risk: A Sober Look at the Probabilities.” Now, I know that phrase can sound a bit heavy, but I promise—we’re not here to stoke fear or sensationalize. Instead, we’ll explore the real questions: How likely is it that artificial intelligence could pose a risk to humanity’s long-term future? What does the latest research actually say? And how should we think about these possibilities in our everyday lives?

Let’s start by demystifying some of the terms. When we talk about “existential risk,” we mean a risk that threatens the very survival of humanity or could drastically curtail our potential. Think of catastrophes like global pandemics, nuclear war, or, in this case, an artificial intelligence that becomes so powerful and misaligned with human values that it could cause irreversible harm. And artificial intelligence—well, that’s a broad field. Today, we’re mainly focusing on the future possibility of advanced AI systems—sometimes called “artificial general intelligence,” or AGI—that might match or exceed human capabilities across a wide range of tasks.

This isn’t just science fiction anymore. Over the last decade, AI has gone from beating humans at chess and Go to writing poetry, diagnosing diseases, and even driving cars. According to a 2023 survey by Stanford’s Human-Centered AI Institute, over 70% of top AI researchers believe there’s at least a 10% chance that advanced AI could pose an existential threat to humanity within the next century. Now, 10% might sound small, but when the stakes are this high, even a small probability demands our attention.

But how real is this risk? Let’s look at it from a few different perspectives. First, the psychological angle. Humans are notoriously bad at estimating rare but high-impact events—think about how we worry more about shark attacks than car accidents, even though the latter are much more common. When it comes to something as novel and complex as AI, our intuitions can be all over the map. Some people feel a gut-level certainty that AI will save us all, while others are convinced it’s the ultimate doomsday device. The truth, as usual, is likely somewhere in between.

On the scientific front, there’s a lot of debate. Some leading voices, like the late Stephen Hawking and entrepreneur Elon Musk, have warned about the dangers of unchecked AI development. Others, like computer scientist Andrew Ng, argue that worrying about superintelligent AI is like “worrying about overpopulation on Mars”—it’s so far off that it distracts from more immediate concerns. But the field has matured a lot in the last few years. Let me share an academic example: a 2022 study published in the journal “AI & Society” analyzed expert surveys and concluded that while there’s widespread uncertainty, the median estimate for when we might see human-level AI is around 2050. However, the same study found that most experts agreed on the need for proactive research into AI safety—just in case.

And let’s bring it down to earth with a real-life anecdote. In 2016, Microsoft released a chatbot named Tay on Twitter. Within 24 hours, Tay began spewing racist and offensive remarks. Why? Because it learned from the internet—uncurated and unfiltered. Now, Tay wasn’t an existential threat, but it was a wake-up call about how quickly AI can go sideways if we’re not careful about its training data and objectives.

Culturally, our stories have always reflected our hopes and fears about technology. From the cautionary tale of Frankenstein to the futuristic warnings of movies like “Ex Machina” and “Her,” we grapple with questions about control, ethics, and what it means to be human in a world of intelligent machines. And as AI becomes more present in our daily lives—think voice assistants, recommendation algorithms, even job applications—these questions aren’t just philosophical; they’re practical.

So, what should we actually do with all this information? How can we, as individuals, think clearly about AI and existential risk without falling into panic or complacency? Here are a few actionable ideas.

First, stay informed, but be selective about your sources. There’s a lot of hype and fearmongering out there, but there are also thoughtful, balanced voices. I recommend following organizations like the Centre for the Study of Existential Risk at Cambridge, or the Future of Humanity Institute at Oxford. They regularly publish accessible summaries of the latest research.

Second, support transparency and accountability in AI development. This could mean advocating for better regulation, encouraging companies to publish their safety protocols, or just asking tough questions about how AI is used in your workplace or community.

Third, cultivate a “probabilistic mindset.” Instead of thinking in terms of “AI will definitely save us” or “AI will definitely destroy us,” try to assign probabilities—and update them as you learn more. This is actually a skill that’s useful in all sorts of areas, not just AI.

Fourth, if you’re in a position to influence policy or technology—maybe you work in tech, education, or government—push for the integration of ethics and safety research into every stage of AI development. It’s not enough to ask, “Can we build it?” We also need to ask, “Should we build it? And how can we make it safe?”

And finally, don’t lose sight of the positive possibilities. AI has the potential to help us solve some of our toughest challenges—like climate change, disease, and poverty. By focusing on responsible development, we can steer towards the future we want.

Let’s recap what we’ve covered today. Existential risk from AI is a real possibility, but it’s not a certainty. The probabilities are debated, but the stakes are enormous, so it’s worth taking the topic seriously. By looking at the issue from psychological, scientific, and cultural perspectives, we can avoid both panic and denial. And by staying informed, advocating for transparency, and adopting a probabilistic mindset, we can all play a role in shaping the future of AI.

I’ll leave you with a final, reflective thought. Every generation faces choices about how to use new technology. Ours happens to be facing one of the most profound and far-reaching questions in human history. But with curiosity, humility, and a commitment to collective wisdom, I believe we can rise to the challenge.

Thank you so much for listening to IntelligentPod. If you enjoyed this episode, please consider leaving a review—it helps others find the show. For show notes and more resources, visit intelligentpod.com. And if you have thoughts, questions, or feedback, I’d love to hear from you. You can always email me at sophie@intelligentpod.com. Until next time, stay curious, stay informed, and take care.

* This transcript was automatically generated and may contain errors.
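Supplementary note: as one way to make the “probabilistic mindset” Sophie describes concrete, here is a minimal Python sketch of Bayesian updating—hold a probability rather than a yes/no verdict, then revise it as evidence arrives. The update function and every number below are illustrative assumptions for this note, not figures or methods from the episode.

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: revise the probability of a hypothesis after seeing evidence."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# Illustrative starting credence, echoing the 10% figure quoted in the episode.
belief = 0.10

# Hypothetical new evidence you judge twice as likely to appear
# if the hypothesis is true than if it is false.
belief = update(belief, p_evidence_if_true=0.6, p_evidence_if_false=0.3)
print(f"Updated credence: {belief:.2f}")  # roughly 0.18

The point is not the particular numbers but the habit: small, explicit revisions instead of swinging between certainty and dismissal.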

Episode Information

Duration: 08:01
Published: June 3, 2025
Transcript: Available

Subscribe to IntelligentPod

Stay updated with our latest episodes exploring technology, philosophy, and human experience.
