In late September, Shield AI co-founder Brandon Tseng confidently asserted that fully autonomous weapons—where AI makes the final decision to kill—would never exist in the U.S. “Congress doesn’t want that. No one wants that,” Tseng told TechCrunch. His statement was quickly challenged: just five days later, Anduril co-founder Palmer Luckey publicly signaled a markedly different view on autonomous weapons.
Speaking at Pepperdine University, Luckey questioned blanket opposition to such systems, arguing that in some cases autonomous technology may offer a moral advantage over existing weapons. As a point of comparison, he cited landmines, which cannot distinguish between a school bus and a military target. When asked for clarification, an Anduril spokesperson said Luckey’s concern centered on the potential misuse of AI by bad actors rather than on advocating for fully autonomous lethal systems.