In late September, Shield AI co-founder Brandon Tseng confidently asserted that fully autonomous weapons, in which AI makes the final decision to kill, would never exist in the U.S. “Congress doesn’t want that. No one wants that,” Tseng told TechCrunch. His statement was quickly challenged: just five days later, Anduril co-founder Palmer Luckey signaled openness to autonomous weapons.

Speaking at Pepperdine University, Luckey questioned the opposition to such systems, arguing that in some cases autonomous technology may offer a moral advantage over current weapons. He pointed to landmines, which cannot distinguish between a school bus and a military target, as an example of an indiscriminate weapon already in widespread use. When asked for clarification, an Anduril spokesperson said Luckey’s concern was about the potential misuse of AI by bad actors, not an endorsement of fully autonomous lethal systems.

The broader defense technology sector has generally leaned towards cautious integration of AI in military tools. Luckey’s co-founder, Trae Stephens, previously emphasized the role of AI in helping humans make better decisions rather than replacing human judgment entirely. However, as discussions about AI weapons evolve, the boundary between human decision-making and AI autonomy is becoming increasingly blurred.

The U.S. military currently does not use fully autonomous weapons, though there is no outright ban on developing or selling such systems. Some weapons, like missiles and mines, have limited autonomous capabilities, but they do not operate with the same level of independence as an AI-driven system that identifies and fires on a target without human intervention. The U.S. has issued voluntary AI safety guidelines for military use, which companies like Anduril follow, but there is no legally binding international ban on fully autonomous weapons.

Palantir co-founder and Anduril investor Joe Lonsdale also weighed in, arguing against framing the debate as a simple “yes or no” on AI in weapons. Speaking at an event hosted by the Hudson Institute, Lonsdale explained that while China may fully embrace autonomous weapons, the U.S. would suffer if it hesitated to adopt AI in warfare. He emphasized the need for policymakers to understand the complexities of integrating AI into defense systems before making blanket rules.

Activists and human rights groups have long pushed for international bans on autonomous lethal weapons, but the war in Ukraine has shifted the conversation. Ukraine has been vocal about its need for more automation in weapons to gain an edge against Russia, with Ukrainian officials calling for AI technologies to aid in their fight. As a result, companies working on AI for defense are using the conflict as a testing ground for new innovations, even as they keep a human in the loop for lethal decisions.

In the face of international competition, particularly from China and Russia, many in Silicon Valley and Washington, D.C., fear that falling behind in AI weapon technology could leave the U.S. vulnerable. At a UN debate on AI arms last year, a Russian diplomat made it clear that human control was not a priority for Russia, further stoking concerns that the U.S. might be forced into developing autonomous systems sooner than expected.

Lonsdale’s and Luckey’s companies are actively working to educate Congress and the Department of Defense about the potential benefits of AI in weapons systems. With over $4 million spent on lobbying this year, Anduril and Palantir are pushing for deeper integration of AI into U.S. defense strategy, hoping to stay ahead in the global arms race.

As the debate continues, the question remains: how much autonomy should AI have in making life-and-death decisions on the battlefield? For now, the U.S. is walking a fine line between embracing the potential of AI and ensuring that humans remain accountable in matters of lethal force.

By Impact Lab