
Anthropic is facing its biggest challenge yet

TheStreet
5 min read
The U.S. government has done something it has never done to an American company before. On Friday, Defense Secretary Pete Hegseth declared Anthropic a "supply chain risk" to national security, a designation historically reserved for foreign adversaries like China's Huawei. The move bans every military contractor, supplier, and partner from doing any commercial business with the AI lab, effective immediately.

It followed a Truth Social post from President Trump directing every federal agency to immediately stop using Anthropic technology, with a six-month wind-down period for the Pentagon and certain other agencies to find alternatives.

Anthropic CEO Dario Amodei called the actions "retaliatory and punitive" in an exclusive interview with CBS News Friday night. Hours later, OpenAI swept in and announced it had secured its own Pentagon contract, stepping into the void Anthropic had just been forced to vacate.

What Anthropic refused to give the Pentagon

The dispute centered on a $200 million contract the Pentagon awarded Anthropic last July to develop AI capabilities for national security. The military wanted broad "lawful purpose" access to Claude, Anthropic's AI model. Anthropic refused, citing two concerns it called non-negotiable.

Anthropic's two red lines

No mass domestic surveillance of Americans. Amodei argued that AI has made surveillance that was previously impractical dangerously easy, putting the technology ahead of existing law. "That actually isn't illegal. It was just never useful before the era of AI," he told CBS News.

No fully autonomous weapons. Amodei said today's AI models are not reliable enough to remove humans from lethal decision-making, citing what he described as the "basic unpredictability" of current systems.

The Pentagon's position was that it already has internal policies against both practices, and that negotiating case by case with a private company over its terms of service was unworkable.
Hegseth gave Anthropic a Friday deadline of 5:01 p.m. to either drop its restrictions or lose its contracts. The company held firm.

In his statement, Hegseth accused Anthropic of trying to "seize veto power" over U.S. military operations. Trump went further on Truth Social, calling Anthropic a "radical left, woke company" whose "selfishness is putting American lives at risk."

Amodei hits back, vows legal fight

In his CBS News interview, Amodei pushed back on nearly every characterization from the administration. He described Anthropic as the AI lab that had been most willing to work with the military. "We are patriotic Americans," he said. "Everything we have done has been for the sake of this country, for the sake of supporting U.S. national security."

He also said the company had not yet received any formal communication from the Pentagon or the White House confirming the supply chain risk designation. But his response was unambiguous: any formal action from the Pentagon, he told reporters, would be challenged in court.

Anthropic's formal statement called the designation "legally unsound" and argued that Hegseth lacks the legal authority to broadly bar military contractors from doing business with the company. The statute, Anthropic said, requires the Pentagon to exhaust all alternative options before invoking the designation, and the company questioned whether the government could claim to have done so given how quickly the standoff escalated.

Amodei also said he is not ready to walk away.
"We are still interested in working with them as long as it is in line with our red lines," he told CBS News.

OpenAI moved in within hours

While Amodei was still speaking to CBS News, OpenAI CEO Sam Altman posted on X that his company had just secured a deal to deploy its models on classified networks. The announcement was striking in both its timing and its framing: Altman said OpenAI had agreed to the same two restrictions Anthropic had been demanding, but embedded them in the technical architecture of its models rather than insisting on explicit contract language.

The distinction may prove to be more about optics than substance. As Fortune reported, it remains unclear how OpenAI's "lawful purpose" agreement and its claimed safety limits can coexist. What is clear is that OpenAI found a way to say yes where Anthropic said no, and the Pentagon rewarded it immediately.

What this means for the broader AI industry

Legal experts quoted by Fortune warned that the damage to Anthropic may outlast any court victory. "It will take years to resolve in court," analyst Shenaka Anslem Perera wrote on X. "And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk?"

The broader implications are just as significant. This is the first time the U.S. government has designated an American company a supply chain risk, a tool previously aimed at foreign adversaries. Axios reported that the government has not yet specified which law it is invoking to impose the ban, raising questions about its legal standing that Anthropic's lawyers are almost certainly already exploring.

For now, Anthropic faces a six-month clock. Its Claude model is the only AI currently deployed on the Pentagon's classified networks; defense officials privately told Axios it would be a "huge pain in the ass" to disentangle.
But with the supply chain designation in place and OpenAI already in the door, the window for Anthropic to reclaim its position is narrowing fast.



Source: TheStreet