The US government has officially designated **Anthropic** as a supply chain risk due to the company’s refusal to engage in an intelligence agreement with the **Pentagon**. This decision has prompted **Anthropic** to announce plans to contest the ruling in court, with CEO **Dario Amodei** describing the designation as “legally unsound.” Despite this setback, the company’s AI platform, **Claude**, is experiencing a remarkable surge in users, with over **one million** people signing up each day.
The supply chain risk designation signals that US authorities consider doing business with a company a threat to national security. This marks the first instance of a US company receiving the designation. The decision follows a contentious period during which **Anthropic** withdrew from partnership discussions with the military, citing ethical concerns related to mass surveillance and autonomous weapons.
User Growth Amid Controversy
Despite the government’s stance, **Claude** is gaining traction. As of March 5, 2026, more than a million new users are reportedly joining the platform daily. While **Claude** does not publicly disclose its user statistics, it was estimated to have approximately **20 million** monthly active users at the beginning of the year. **Mike Krieger**, a representative for **Anthropic**, indicated that this influx may be partly due to users migrating from **ChatGPT**, which is developed by **OpenAI**. Following **Anthropic’s** decision to withdraw from military partnerships, **OpenAI** signed an agreement with the Pentagon that has drawn significant criticism from its user base.
Amodei has expressed skepticism about the motives behind **OpenAI’s** military deal, suggesting it may be more about “safety theater” than actual security enhancement. **Sam Altman**, CEO of **OpenAI**, has himself characterized the agreement as “rushed,” further fueling debate over the ethical implications of AI in military applications.
Future Implications
While **Anthropic** and the **White House** work to reconcile their differences, there are indications that negotiations for a potential Pentagon deal may not be entirely off the table. The ongoing situation highlights the complexities of integrating AI technology within national defense frameworks and the varied responses from different companies.
As **Claude** continues to attract new users, the broader implications of these developments may unfold in the coming days. With a clear ethical stance on military uses of AI, **Anthropic** appears well-positioned to capitalize on growing demand for AI services that prioritize user values and privacy. The evolving relationship between AI companies and government partners will likely remain a focal point for users and industry observers alike.
