Anthropic Told the Pentagon No, and Now Everyone Wants Claude

Claude Went Down on Monday Morning

On Monday morning, nearly 2,000 users flooded Downdetector with reports that Claude, Anthropic's AI chatbot, was completely unreachable. The outage hit around 6:40 AM New York time, taking down claude.ai and the mobile apps while the core API that businesses use kept running. By 10:50 AM, Anthropic had everything back online.

The company's explanation was blunt: "unprecedented demand." That phrase alone tells a story. Anthropic has been grappling with a surge in usage over the past week, and the infrastructure buckled under the weight. Free users have increased more than 60% since January. Paid subscribers have more than doubled since October. The Claude app topped Apple's App Store charts for several consecutive days.

But the real question isn't why Claude went down. It's why so many people suddenly want to use it.

The Pentagon Fight That Started Everything

The backstory begins with a months-long negotiation between Anthropic and the Department of Defense over two specific use cases: mass domestic surveillance of Americans and fully autonomous weapons. Anthropic wanted binding guarantees that the Pentagon would not deploy Claude for either purpose. The Pentagon wanted flexibility. The talks collapsed.

Anthropic CEO Dario Amodei made the company's position public: no amount of government pressure would change its stance on surveillance and autonomous killing. He framed it as both a safety issue (the technology isn't reliable enough for either task) and an ethical one (some applications shouldn't exist regardless of capability).

The administration's response was swift and punishing. Defense Secretary Pete Hegseth designated Anthropic a "supply-chain risk to U.S. national security," an extraordinary label believed to be the first time an American tech company has received it. The designation bars any military contractor or supplier from doing business with Anthropic. President Trump then ordered all federal agencies to phase out Anthropic technology within six months.

The Streisand Effect, Silicon Valley Edition

What happened next was predictable to everyone except, apparently, the people who made the decision. The attempted punishment became the best marketing campaign Anthropic never paid for.

Silicon Valley rallied around Anthropic almost instantly. Engineers, founders, and venture capitalists publicly praised the company for standing up to what they characterized as government overreach. The narrative crystallized quickly: a small AI company choosing principle over profit, refusing to build tools for mass surveillance even when the most powerful government in the world threatened to destroy its business.

Downloads surged. The Claude app climbed the App Store rankings. Tech workers who had been casually using ChatGPT or Gemini made a point of switching to Claude as a statement of support. The numbers back this up: monthly active users grew from 2.9 million to 18.9 million over the past year, with much of the recent acceleration tied directly to the Pentagon controversy.

The Financial Picture Looks Absurd

Anthropic's business metrics are staggering. The company's revenue run rate hit $9 billion by the end of 2025, surged to $14 billion by February 2026, and the company projects $26 billion for the full year. In February, Anthropic raised $30 billion in Series G funding at a $380 billion valuation. It now serves over 300,000 business customers, and the number of accounts generating over $100,000 in annual revenue grew 7x during 2025.

Claude Code, the company's developer tool, has been a particular standout. Its run-rate revenue has grown to over $2.5 billion, and 4% of all GitHub public commits worldwide are now authored by Claude Code, double the percentage from just one month prior. Weekly active Claude Code users have also doubled since January 1.

The government ban has done nothing to slow this trajectory. If anything, the publicity accelerated it. Federal contracts are a meaningful revenue source for many tech companies, but Anthropic's commercial and consumer business is growing fast enough that the government phase-out, while painful, isn't existential.

OpenAI Moved In, Then Got Embarrassed

The aftermath produced an ironic twist. Within hours of the Trump administration ordering agencies to cut ties with Anthropic, OpenAI announced a deal with the Pentagon. The optics looked terrible: one AI company gets punished for refusing to enable surveillance; its biggest rival immediately fills the void.

But the embarrassment was OpenAI's, not Anthropic's. The Pentagon deal that OpenAI signed quietly included the same core provisions Anthropic had fought for: prohibitions on domestic surveillance and autonomous weapons. In other words, the government blacklisted Anthropic for demanding restrictions that the replacement vendor then agreed to anyway. The entire conflict was, in practical terms, pointless, except for the enormous brand boost it gave Anthropic.

What the Outage Actually Tells Us

Monday's outage is a symptom of success, not failure. Cloud infrastructure doesn't crash because too few people want to use it. Anthropic is dealing with the kind of scaling problem that every hypergrowth company faces: demand is running ahead of capacity.

The timing matters, too. The outage came during a week when global attention was split between the Iran war, market volatility, and tech sector uncertainty. The fact that nearly 2,000 people reported problems on Downdetector at 6:40 AM, a time when most casual users aren't even awake, suggests that a large and growing share of Claude's user base consists of professionals and power users who rely on it as a daily work tool.

Anthropic has the money to fix the infrastructure problem. The $30 billion raise in February gives the company effectively unlimited runway to scale servers, optimize inference, and build redundancy. Monday's outage will likely be a footnote by next month.

The Bigger Pattern

The Anthropic saga is becoming a case study in how government antagonism can backfire spectacularly in the tech sector. The company drew a line on mass surveillance and autonomous weapons. The government tried to make an example of it. The tech industry and the public responded by making Anthropic more successful than ever.

Whether this dynamic holds depends on what comes next. The six-month phase-out deadline for federal agencies is still ticking. If the administration escalates further, say, by pressuring AWS (which hosts Claude's infrastructure) or by extending the supply-chain risk designation to Anthropic's cloud partners, the calculus could change. For now, though, the numbers tell a clear story: Anthropic picked a fight with the Pentagon, and the Pentagon blinked first.
