The recent reporting from The Guardian regarding the standoff between Anthropic and the U.S. Department of Defense is a watershed moment for the “soul” of Silicon Valley. For years, we’ve watched a slow-motion retreat from the ethical high ground that tech workers claimed during the 2018 Project Maven protests. But today, with Anthropic being labeled a “supply chain risk” simply for holding onto its safety guardrails, we have reached a breaking point.
As Nick Robins-Early captures in his piece, the conflict isn’t just about a contract; it’s about whether we are willing to hand over the keys to lethal force and domestic privacy to an unproven, algorithmic “black box.”
The Ethics of AI in War: The Human Must Remain the Heart
My ethical stance is clear: Artificial Intelligence should never be granted the authority to make a lethal decision without meaningful human intervention.
The Pentagon’s demand that Anthropic permit “any lawful use” of its Claude model is a euphemism for stripping away the guardrails against fully autonomous weapons—so-called “slaughterbots.” The ethical danger here isn’t just a technical glitch; it is a “responsibility gap.” When a machine misidentifies a civilian as a combatant and strikes, who is the murderer? The programmer? The commander? The algorithm?
War is a human tragedy that requires human accountability. By automating the “kill chain,” we don’t just increase efficiency; we decrease the moral friction of killing. AI can be a powerful tool for logistics, intelligence, and even defensive precision, but when we remove human conscience from the moment of impact, we aren’t advancing technology—we are retreating from our humanity.
The Right to Refuse: Corporate Conscience vs. Government Coercion
The most chilling part of The Guardian’s report is the government’s reaction to Anthropic’s refusal. Labeling a domestic company a “supply chain risk”—a designation typically reserved for foreign adversaries like Huawei—because it won’t build “killer robots” is a gross abuse of executive power.
Should companies be able to refuse to do business with governments? Absolutely.
In a free-market democracy, the right of a private entity to operate according to its stated values is foundational. Anthropic was founded as a “Public Benefit Corporation” specifically to prioritize safety. Forcing it to violate its core charter via the Defense Production Act or through blacklist-style bullying is a tactic more aligned with the “autocratic adversaries” the Pentagon claims to be fighting.
If the government can coerce its most innovative companies into building weapons against their will, then the line between the private sector and the state vanishes. We need a diversity of ethical perspectives in tech. If every AI lab is forced to become a defense contractor by default, we lose the very guardrails that might save us from an accidental escalation or a domestic surveillance state.
The Bottom Line
The “move fast and break things” era of AI has officially entered the theater of war. While OpenAI and xAI have chosen to “fall in line” for the sake of billion-dollar contracts, Anthropic’s stand is a necessary one.
We cannot allow national security to become a blank check that cancels out ethics. As the article notes, the law is running decades behind the technology. Until we have international treaties and robust domestic laws governing AI in warfare, the “terms of service” of companies like Anthropic are the only thin red line we have left. We should be protecting that line, not trying to erase it.
