The Trump administration issued a directive on Friday instructing all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed significant penalties on the company, marking a public confrontation between the government and the firm over AI safety. President Donald Trump, Defense Secretary Pete Hegseth, and other officials criticized Anthropic on social media for refusing to grant the military unrestricted access to its AI by a specified deadline, citing national security concerns. Trump put it bluntly: “We don’t need it, we don’t want it, and will not do business with them again!”
Hegseth labeled the company a “supply chain risk,” a designation typically reserved for foreign adversaries and one that could jeopardize Anthropic’s crucial partnerships with other businesses. Anthropic countered that applying such a designation to an American company would be unprecedented and could set a dangerous example for how the government treats domestic firms.
The company had sought specific assurances from the Pentagon that its AI chatbot, Claude, would not be used for mass surveillance of Americans or in fully autonomous weapons. While the Pentagon indicated it would use the technology lawfully, it insisted on unrestricted access, which Anthropic resisted. The clash reflects broader concerns about the role of AI in national security and the potential misuse of advanced technologies in sensitive contexts.
Trump’s decision to halt the use of Anthropic’s AI at most agencies, while allowing the military a six-month transition period to phase out existing deployments, drew both criticism and support from various stakeholders. The move may benefit Elon Musk’s competing chatbot, Grok, as the Pentagon plans to grant it access to classified military networks. The dispute has also drawn attention to the evolving contracts under which other AI companies, including Google and OpenAI, supply technology to the military.
The conflict has triggered a debate within the AI community, with prominent figures expressing differing opinions on the situation. Musk backed the administration’s decision, while OpenAI CEO Sam Altman supported Anthropic and questioned the Pentagon’s actions. The controversy highlights the complexities of AI governance and the challenges in balancing innovation with security considerations in the rapidly evolving technological landscape.