The Pentagon-Anthropic feud is quietly obscuring the real fight over military AI

The fight over how the military can use large language models is narrowing the scope of an urgent debate.

The controversy over the Pentagon’s use of Anthropic’s models has become a flashpoint in the national debate over military artificial intelligence, sparking outrage from Washington to Silicon Valley. The Pentagon wanted to buy Anthropic’s AI models without any restrictions on their use; even as it scrapped its flagship safety rule, the company wouldn’t budge on two particular red lines.

And so, just after a Friday evening deadline, the Secretary of War killed the company’s $200 million Pentagon contract and declared the firm was not just “woke” but a “supply chain risk,” banning it from working with defense agencies. Meanwhile, OpenAI CEO Sam Altman had been negotiating his own Pentagon deal, with some, but not all, of the contractual guardrails that Anthropic had wanted. (Anthropic’s CEO would later tell employees that OpenAI’s messaging around the deal was “mendacious.”)

In any case, removing Anthropic won’t be easy: on Saturday the company said it would sue the government over the ban, just as its AI models—already deeply embedded in the Pentagon’s systems—were being used by the US military to carry out strikes on Iran.
