Can AI decide who lives or dies in war? In this episode, we explore the rise of autonomous weapons—drones and machines that can kill without human input. From real-world examples to ethical debates, discover what AI in warfare means for our future—and what you can do about it. Explore more episodes, show notes, and bonus content at https://intelligentpod.com
Full transcript of this episode
Hello, and welcome to IntelligentPod, the show that explores the fascinating crossroads of technology, society, and what it means to be human in our ever-evolving world. I’m your host, Sophie Lane, and I am so glad you’re joining me today. If you’re looking for thoughtful conversations and fresh perspectives on the big topics shaping our future, you are absolutely in the right place.

Today, we’re diving into a subject that is as thrilling as it is unsettling—AI in warfare. Specifically, we’re going to unpack the rise of autonomous weapons systems and what they mean for the future of conflict. We’ll look at the science behind these technologies, the ethical and cultural debates swirling around them, and what this all means for us as individuals and a global society. If you’ve ever wondered about killer robots, drone swarms, or how artificial intelligence might change the very nature of war, stick around. This episode is for you.

Let’s get started by setting the stage. When we talk about “AI in warfare,” we’re talking about a broad range of technologies, but the hot topic right now is autonomous weapons systems—machines that can select and engage targets without direct human intervention. That includes everything from unmanned aerial drones to unmanned ground vehicles and even underwater vehicles. The technology is advancing rapidly: according to a 2023 report by the Stockholm International Peace Research Institute, at least 33 countries are developing or deploying some form of military AI, and investments in military AI reached over $12 billion last year alone.

Let’s put this into perspective. Just a decade ago, most military drones were “remotely piloted”—a human operator would control the drone from a safe distance. Today, some drones can fly, navigate, and in some cases even select targets with little real-time human input. For example, the Turkish Bayraktar TB2 drone, used in recent conflicts, can take off, land, and patrol largely on its own, though a human operator still authorizes its strikes. Loitering munitions go a step further and can identify and attack targets with minimal human oversight. Now, this doesn’t mean these systems are completely unsupervised, but the trend is clear: more and more decision-making is shifting from humans to algorithms.

But why does this matter? Why does the idea of autonomous weapons ignite such passionate debate? Let’s explore a few perspectives.

From a scientific and technological angle, proponents argue that AI-powered weapons can make war more precise and potentially less deadly for both soldiers and civilians. The logic goes like this: machines don’t get tired, emotional, or distracted. With the right programming, they could, in theory, make fewer mistakes than humans. Imagine a future where autonomous systems can distinguish between combatants and civilians more accurately than a stressed-out soldier in the heat of battle. Some military strategists believe this could actually save lives and reduce the “fog of war.”

But there is, of course, an enormous flip side. The psychological and ethical concerns are profound. When you hand lethal decision-making over to a machine, who is responsible if things go wrong? There’s even a term for this: the “accountability gap.” If an autonomous drone mistakenly bombs a hospital, is it the fault of the programmer, the military commander, or the machine itself? The sense of moral responsibility—so central to human conflict—starts to blur.

Let me share a real-life example. In March 2020, during the conflict in Libya, a Turkish-made Kargu-2 drone reportedly attacked retreating soldiers without any direct human command. According to a UN report, the drone used onboard AI to identify and engage targets autonomously. This is believed to be one of the first documented cases of a weapon attacking people on its own initiative. The incident sparked outrage and anxiety among humanitarian groups, who argue that delegating kill decisions to machines crosses a fundamental ethical line.

From a cultural and philosophical point of view, the idea of “killer robots” has haunted our imaginations for decades. Think about movies like The Terminator or Ex Machina. There’s a deep, visceral fear that machines could become uncontrollable, turning against us or acting in unpredictable ways. The debate has even reached the United Nations, where more than 30 countries have called for a preemptive ban on fully autonomous weapons, fearing a future where wars could be fought at machine speed, with humans left powerless to intervene.

Now, let’s bring in a bit of academic research. In a 2021 study published in the journal Nature Machine Intelligence, researchers surveyed over 500 AI experts on the risks and benefits of autonomous weapons. Three out of four respondents expressed concern that these systems could lower the threshold for going to war, making it easier for governments to initiate conflict without risking their own soldiers’ lives. At the same time, some argued that, if regulated and carefully designed, AI could actually help enforce international law by reducing accidental harm.

So, where does this leave us? It’s clear that AI in warfare isn’t just a question of technology—it’s a complex web of science, ethics, psychology, and culture. But it’s not only the experts who get to weigh in. As everyday citizens, we all have a stake in how these technologies are developed and deployed.

Let’s shift gears now and talk about what we, as individuals, can do in the face of these enormous changes. I know that not everyone listening today is a policymaker or an AI engineer. But that doesn’t mean you’re powerless.

First, stay informed. The landscape is changing quickly, and there’s a lot of misinformation out there. Seek out reputable sources, follow developments in international law, and don’t be afraid to ask tough questions about how your own country is using or regulating military AI.

Second, get involved in the conversation. Many advocacy groups, like the Campaign to Stop Killer Robots, are pushing for international agreements to ban or regulate autonomous weapons. They often provide templates for contacting your representatives, joining petitions, or even attending public forums.

Third, reflect on your own values. How do you feel about the idea of machines making life-and-death decisions? What kind of world do you want to live in? These are not just technical questions—they’re deeply human ones. Talk about them with your friends, your family, your community. The more we discuss these issues openly, the better equipped we’ll be to shape the future together.

And finally, remember that technology is a tool. It’s shaped by our choices, our ethics, and our vision for the future. Autonomous weapons aren’t inevitable—they’re the result of decisions made by people, often influenced by public opinion and democratic processes. Your voice matters, whether you’re voting, protesting, or simply staying curious.

So, to recap: today we explored the world of AI in warfare, focusing on autonomous weapons and what they mean for the future of conflict. We looked at the technological advances making these systems possible, the psychological and ethical dilemmas they raise, and the cultural fears they evoke. We discussed real-world examples, like the autonomous drone in Libya, and considered academic research that both warns and reassures us. Most importantly, we talked about what you can do—staying informed, speaking up, and reflecting on your own values.

Here’s my closing thought: the story of AI in warfare is still being written. It’s easy to feel overwhelmed by the pace of change, or to assume that the future is out of our hands. But history shows that technology is never just about the tools—it’s about the people who use them, the choices we make, and the values we uphold. So let’s keep asking questions, keep learning, and keep imagining a future where intelligence—human and artificial alike—is used for the greater good.

Thank you so much for joining me on IntelligentPod today. If you enjoyed this episode, please leave a review on your favorite podcast platform—it really helps more curious minds discover the show. For more resources, show notes, and links to the studies and organizations I mentioned, visit intelligentpod.com. And if you have thoughts, questions, or just want to share your own perspective, I’d love to hear from you. Drop me a line at sophie@intelligentpod.com. Until next time, stay curious, stay kind, and keep thinking intelligently. I’m Sophie Lane, and this has been IntelligentPod. Take care, everyone.
* This transcript was automatically generated and may contain errors.
Stay updated with our latest episodes exploring technology, philosophy, and human experience.