AI Intel: QuitGPT Hits 2.5M Supporters, Anthropic Doubles to $20B, GPT-5.4 Leaks Surface

OpenAI signed a Pentagon deal last Friday. By Tuesday, 2.5 million people had joined a campaign to delete ChatGPT, uninstalls spiked 295%, and Anthropic's Claude shot to the top of the App Store. It's been the wildest week in AI since DeepSeek dropped V3. Here's what happened and what it means for developers.

The QuitGPT Explosion: 2.5 Million and Counting

The timeline is almost comical in how fast it moved. On February 27, Trump ordered all federal agencies to stop using Anthropic's technology — the Pentagon labeled Claude a "supply risk." Hours later, OpenAI swooped in and signed a Department of Defense contract allowing the military to use its tech for "any lawful purpose." The internet did not take it well.

The #QuitGPT movement, which had been simmering since early February over broader safety concerns, went nuclear. ChatGPT uninstalls surged 295% in the US. The r/ChatGPT subreddit saw one of its most upvoted posts ever — a call for users to show proof of cancelling their subscriptions. "You are training a war machine," it read. Guides on how to export your ChatGPT history and import it into Claude started circulating everywhere.

By March 3, protesters rallied outside OpenAI's San Francisco headquarters. Sam Altman responded by calling the original contract language "sloppy" and amended it to explicitly bar domestic surveillance and NSA use. Too little, too late for the 1.5 million people who'd already signed pledges to leave.

Why it matters for developers: This isn't just consumer drama. Enterprise clients are watching. If your product is built on a single provider's API, you're exposed to reputational risk you didn't sign up for. The companies that hedged their bets with multi-provider setups — routing through API gateways that can switch between Claude, GPT-5, and open-source models — are sleeping better this week. If you're locked into one provider, this is your wake-up call to build in optionality.
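The failover idea is simple enough to sketch in a few lines. This is a minimal illustration, not any particular gateway's API: each "provider" here is just a callable, standing in for a real SDK client, and the names are hypothetical.

```python
# Minimal provider-failover sketch: try providers in priority order,
# fall through to the next one on any error. Provider names and the
# toy stand-in callables below are illustrative, not real clients.

def complete_with_fallback(prompt, providers):
    """Return (provider_name, response) from the first provider that succeeds."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Toy stand-ins for real API clients:
def flaky_primary(prompt):
    raise TimeoutError("primary provider down")

def working_backup(prompt):
    return f"echo: {prompt}"

providers = [("primary", flaky_primary), ("backup", working_backup)]
name, reply = complete_with_fallback("hello", providers)
```

In a real setup the callables would wrap different vendors' SDK clients (or one OpenAI-compatible client pointed at different base URLs), and you would likely add retries and logging, but the control flow is the same.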

Anthropic Hits $20 Billion Revenue Run Rate

While OpenAI was putting out fires, Anthropic quietly dropped a bomb of its own. Bloomberg reported Monday that Anthropic's annual revenue run rate has topped $20 billion — more than double the $9 billion it posted just three months ago. CNBC, Seeking Alpha, and PYMNTS all independently confirmed the numbers.

That's not a typo. They doubled in a quarter. Enterprise adoption of Claude Code is a major driver, along with the broader shift of developer tooling toward Claude models. The timing with the QuitGPT exodus is almost too perfect — Anthropic is absorbing users at exactly the moment people are looking for alternatives.

There's an irony here worth noting: Trump banned Anthropic from federal use, and the company responded by posting the kind of growth numbers that make OpenAI's board nervous. The consumer and enterprise markets clearly don't care what the Pentagon thinks about AI procurement.

Why it matters for developers: Anthropic's growth validates the bet on Claude as a production-grade platform. A company doing $20B/year isn't going anywhere — it means continued investment in model quality, API reliability, and developer tooling. For anyone building on Claude APIs, this is reassuring. The ecosystem is getting deeper, not thinner.

GPT-5.4 Leaks: 2M Context Window and Full-Resolution Vision

OpenAI needs a win right now, and GPT-5.4 might be it. Multiple leaks surfaced this week pointing to an imminent release. On March 1-2, an OpenAI engineer's pull request in the Codex repository referenced "GPT-5.4 or newer" before being hastily deleted. Then on March 3, OpenAI's official account posted: "5.4 sooner than you think."

What the leaks suggest: a 2 million token context window (10x the current 200K), full-resolution vision capabilities, and significant improvements to code generation. NxCode's analysis of the Codex commits points to monthly GPT-5.x update cadence, with 5.4 likely dropping in March or early April.

The 2M context window is the headline feature. That's enough to load an entire medium-sized codebase into a single prompt. For developers working with large monorepos or doing cross-file refactoring, this changes the workflow entirely — no more carefully curating which files to include in context.

Why it matters for developers: If the 2M context window is real, it narrows one of Claude's remaining advantages (Claude already handles 200K well, but Gemini has been the context-length king at 1M+). The model wars are converging on a feature set where context length, reasoning quality, and price are the three axes of competition. For API users, this means more options and better models across the board — especially through gateways like KissAPI that give you access to every major model through one endpoint, so you can switch to GPT-5.4 the day it drops without changing a line of code.

The API Price War Keeps Getting Wilder

The numbers from this week's pricing landscape are staggering. DeepSeek's V3.2 is now available at $0.14 per million input tokens. For context, Claude Sonnet 4.6 costs $3/M input — that's a 21x difference. Gemini Flash undercuts even DeepSeek at $0.10/M. Meanwhile, GPT-5 starts at $1/M, positioning itself as the "premium but not insane" option.
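The gap becomes concrete once you plug the quoted input prices into a monthly bill. A small calculator, using the per-million-token figures above:

```python
# Back-of-envelope monthly input-token cost at the prices quoted above
# (USD per million input tokens).
PRICE_PER_M = {
    "gemini-flash": 0.10,
    "deepseek-v3.2": 0.14,
    "gpt-5": 1.00,
    "claude-sonnet-4.6": 3.00,
}

def monthly_cost(model, input_tokens_per_month):
    """Input-token cost in USD for one month of usage."""
    return PRICE_PER_M[model] * input_tokens_per_month / 1_000_000

# e.g. at 500M input tokens/month:
for model, _ in sorted(PRICE_PER_M.items(), key=lambda kv: kv[1]):
    print(f"{model}: ${monthly_cost(model, 500_000_000):,.2f}")
```

At 500M input tokens a month, that's $50 on Gemini Flash versus $1,500 on Sonnet: the same 30x spread as the per-token prices, just harder to ignore on an invoice.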

The broader trend: inference costs have dropped roughly 40x year-over-year for reasoning-class models. What cost $100 in early 2025 costs about $2.50 today. Open-source models are closing the quality gap fast enough that the pricing pressure on closed-source providers is real and accelerating.

DeepSeek V4 benchmarks show it competing with GPT-4o at 18x lower cost. That's not a rounding error — it's a structural shift in what AI compute costs. For startups and indie developers, the barrier to building AI-powered products has essentially collapsed.

Why it matters for developers: If you're still paying list price for a single provider, you're leaving money on the table. The smart play is a model router pattern — use cheap models (DeepSeek, Gemini Flash) for simple tasks, mid-tier models (Sonnet, GPT-5) for most work, and premium models (Opus) only when you need peak quality. A good API gateway handles this routing for you, and the cost savings compound fast at scale.
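The router pattern described above can be sketched as a tier table keyed on a task-complexity estimate. The tier boundaries and model names here are illustrative assumptions; in practice the complexity score might come from a heuristic, a classifier, or a cheap model grading the request.

```python
# Sketch of the tiered model-router pattern: route each task to the
# cheapest adequate tier. Thresholds and model names are illustrative.

TIERS = [
    (0.3, "gemini-flash"),       # simple: classification, extraction
    (0.7, "claude-sonnet-4.6"),  # most work: coding, summarization
    (1.0, "claude-opus"),        # peak quality: hard reasoning
]

def pick_model(complexity):
    """Map a complexity estimate in [0, 1] to the cheapest adequate model."""
    for ceiling, model in TIERS:
        if complexity <= ceiling:
            return model
    return TIERS[-1][1]  # clamp anything above 1.0 to the top tier
```

The savings come from the distribution: if 70% of traffic scores as "simple," 70% of your requests run at roughly 1/30th the premium-tier price.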

⚡ Quick Hits

  • Claude memory import: Anthropic launched a feature letting users import their conversation history from other AI assistants, perfectly timed with the ChatGPT exodus. Reddit threads are full of migration guides.
  • GPT-5.3 "cringe" backlash: Users on r/ChatGPT are complaining that GPT-5.3's personality has gotten noticeably worse — more sycophantic, more filler phrases, less direct. OpenAI hasn't acknowledged it.
  • Qwen team shakeup: Alibaba's Qwen tech lead departed this week, raising questions about the open-source model's roadmap. The timing is awkward given Qwen's recent momentum in the Chinese AI market.

The Bottom Line

This week crystallized something that's been building for months: the AI industry is splitting into two camps. One camp (OpenAI) is chasing government contracts and scale at any cost. The other (Anthropic, open-source players) is winning developers and enterprises on product quality and trust. The market is voting with its wallets — and its uninstall buttons.

For developers, the practical takeaway is simple: don't bet on one provider. The landscape is moving too fast, the politics are too unpredictable, and the pricing differences are too large to ignore. Build with flexibility. Use model routers. Keep your options open.

We'll be back tomorrow with more from the front lines.

Access Every AI Model Through One API

Claude, GPT-5, DeepSeek, Gemini — all through one OpenAI-compatible endpoint. Switch models in one line of code. Pay only for what you use.

Try KissAPI Free →