AI Intel: Anthropic's $30B Run Rate + Meta's Open-Source Reset + More

The loudest Reddit takeaway this morning was not a flashy demo. It was money. Multiple reports now peg Anthropic's revenue run rate at around $30 billion, and the crowd's conclusion was blunt: coding has stopped being an AI lab feature and become the business itself.

That story sat next to three others that fit together neatly: Meta is leaning back into open source, xAI is going aggressive on price, and local AI tooling is finally good enough that developers treat it as a real option. The market is splitting into premium, cheap, and local lanes fast.

Anthropic isn't just popular anymore. It's turning Claude into a real business.

What happened: Several reports published over the last 24 hours said Anthropic's annualized revenue has climbed to roughly $30 billion, up from about $9 billion at the end of 2025. One of the more interesting numbers inside that pile: Claude Code alone was reported at more than $2.5 billion in run-rate revenue by February. On the pricing side, Anthropic's own docs now list Claude Sonnet 4.6 at $3 in / $15 out per million tokens and Claude Opus 4.6 at $5 in / $25 out. Reddit read all of that as a simple signal: developers are still willing to pay when the model saves them retries.

Why it matters: It cuts through the fake drama of daily leaderboard swings. Claude looks like it is winning where budgets get approved: code, analysis, and long-form work people actually ship. If these numbers are even directionally right, Anthropic has proved that developer trust compounds into revenue.

Developer angle: For teams building with AI, the lesson is not “use Claude for everything.” Price by task difficulty. Sonnet 4.6 is still the daily driver for most workloads, while Opus is the model you bring in for the hard turns where a better first answer is cheaper than repeated retries.
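
To make that concrete, here is a back-of-envelope sketch of why the pricier model can win on hard work. The per-token prices are the listed rates above; the per-request token counts and retry rates are made-up illustrations, not measurements.

```python
# Back-of-envelope cost model: a pricier model that nails the answer
# on the first try can beat a cheaper one that needs retries.
# Prices per 1M tokens are the listed rates; token counts and retry
# rates below are assumptions for illustration only.

def request_cost(in_tok: int, out_tok: int, in_price: float, out_price: float) -> float:
    """Dollar cost of one request; prices are per million tokens."""
    return (in_tok * in_price + out_tok * out_price) / 1_000_000

IN_TOK, OUT_TOK = 8_000, 2_000  # assumed tokens per coding request

sonnet = request_cost(IN_TOK, OUT_TOK, 3.00, 15.00)  # $3 in / $15 out
opus = request_cost(IN_TOK, OUT_TOK, 5.00, 25.00)    # $5 in / $25 out

# Suppose a hard task takes Sonnet ~2.2 attempts on average, Opus ~1.1:
print(f"Sonnet per solved task: ${sonnet * 2.2:.4f}")  # ~$0.119
print(f"Opus per solved task:   ${opus * 1.1:.4f}")    # ~$0.099
```

With those made-up attempt counts, Opus comes out cheaper per solved task, which is the whole argument for pricing by difficulty rather than by sticker price.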

Meta is heading back to open source because that is still its best card.

What happened: Axios reported that Meta plans to release open-source versions of its next AI models, including the first family developed under Alexandr Wang's leadership. That landed right after a rough reception for the last Llama cycle. Meta's own Llama site still pushes Scout and Maverick as its current open model line, but Reddit's mood was clear: Meta is leaning on openness again because it needs a cleaner story than "trust us, the next one will be better."

Why it matters: Open weights are not a side quest anymore. They are Meta's distribution edge. Meta is not going to beat OpenAI on product polish or Anthropic on developer affection by copying the same closed playbook. But it can keep startups and infra teams inside its orbit by making the open route feel alive.

Developer angle: Even if you mostly ship on closed APIs, open models are still strategic insurance. They give you a fallback when pricing changes, a private option when data rules tighten, and a sane path for experiments that do not need frontier quality. The smart move is not to pick a religion. It is to keep your stack loose enough that open and closed models can both plug in.
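
One way to keep that looseness concrete is to make call sites depend on a tiny interface instead of a vendor SDK. A minimal sketch; every name in it is illustrative, not anyone's real API.

```python
# Keeping the stack loose: call sites depend on a minimal interface,
# so open-weights and closed models are both drop-in replacements.
# All class and function names here are illustrative.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


def summarize(model: ChatModel, doc: str) -> str:
    # Call sites never learn which lane is behind `model`.
    return model.complete(f"Summarize in two sentences:\n{doc}")


class HostedModel:
    """Wraps a closed API client (OpenAI, Anthropic, etc.)."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the hosted API here")


class LocalModel:
    """Wraps a local runtime such as Ollama or LM Studio."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the local server here")
```

Swapping lanes then means constructing a different object, not rewriting the app.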

xAI is pushing Grok downmarket on price, and Reddit noticed immediately.

What happened: xAI's current model docs show a real pricing split. The flagship grok-4.20 models are listed at $2 input / $6 output per million tokens, while the cheaper grok-4-1-fast variants come in at just $0.20 input / $0.50 output, all with a stated 2 million token context window. At the same time, a Sensor Tower-based market roundup this week showed ChatGPT first, Gemini second, Claude third, with Grok around 22nd in the App Store.

Why it matters: Cheap matters, but cheap is not the whole game. xAI can undercut plenty of rivals on API cost, yet that has not turned Grok into a consumer winner. The market is separating into low-cost model supply for builders and product trust for end users. Those are connected, but they are not the same business.

Developer angle: Grok's fast tier looks useful for first-pass agents, background classification, and workloads where cost matters more than polish. Just do not build your whole stack around "cheap" as if that were a product strategy. Use it as one lane in a router. If you're testing that split, an OpenAI-compatible layer like KissAPI makes the boring part easier: same client, same API shape, different model economics behind the scenes.
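
A minimal sketch of that router, assuming an OpenAI-compatible gateway. The base URL is a placeholder (check KissAPI's docs for the real endpoint), and the model IDs are the ones quoted above.

```python
# One client, two lanes: the cheap fast model for first-pass work,
# the flagship for hard turns. The base URL is a placeholder; swap in
# your gateway's real endpoint and key.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-gateway.example/v1",  # placeholder endpoint
    api_key="YOUR_KEY",
)

CHEAP, FLAGSHIP = "grok-4-1-fast", "grok-4.20"

def run(task: str, hard: bool = False) -> str:
    resp = client.chat.completions.create(
        model=FLAGSHIP if hard else CHEAP,
        messages=[{"role": "user", "content": task}],
    )
    return resp.choices[0].message.content

label = run("Classify this ticket: 'app crashes on login'")     # cheap lane
plan = run("Plan a migration for our auth service", hard=True)  # flagship lane
```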

The local AI stack is starting to grow up.

What happened: The quieter but more consequential Reddit thread today was about tooling. Gemma 4 already shows 1.2 million pulls on Ollama, and LM Studio and other tooling vendors now treat local models as normal developer infrastructure, not science projects. The feature list is the giveaway: tool use, vision input, reasoning support. That is why the how-to guides are multiplying. People are not reading them for fun. They are reading them because local setups finally do enough to matter.
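
As a sense of how little ceremony a local call takes now, here is a minimal sketch using the ollama Python package. It assumes the Ollama server is running and a model has been pulled; the tag below is a placeholder for whatever you actually installed.

```python
# Minimal local inference via the ollama Python package. Assumes the
# Ollama server is running and the model has been pulled with
# `ollama pull <tag>`; the tag below is a placeholder.
import ollama

response = ollama.chat(
    model="gemma",  # placeholder tag; use the one you pulled
    messages=[{"role": "user", "content": "Extract id and name from: id=7, name=ada"}],
)
print(response["message"]["content"])
```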

Why it matters: Local AI is no longer just for benchmark obsessives and people who like buying GPUs. It is becoming the safe sandbox for private data, cheap iteration, internal copilots, and outage backup. Every cloud vendor now has to compete not only with other APIs, but with “good enough on my own box” for a growing class of jobs.

Developer angle: The practical play here is hybrid. Use local models for evals, retrieval experiments, schema extraction, and anything sensitive that should stay on-device. Use cloud models when you need the last mile of quality or better tool reliability. The teams that win this year will be the ones that can move work between local, cheap cloud, and premium cloud without rewriting the app.
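
The mechanical version of that advice is a lane table: one place in the codebase maps job classes to backends, so moving work between lanes is a one-line change. A sketch with made-up endpoints and model names:

```python
# Hypothetical lane table: job classes map to lanes, call sites stay
# identical. All endpoints and model names are placeholders.
LANES = {
    "evals":      {"base_url": "http://localhost:11434/v1", "model": "local-model"},
    "extraction": {"base_url": "http://localhost:11434/v1", "model": "local-model"},
    "classify":   {"base_url": "https://cheap.example/v1", "model": "fast-model"},
    "hard_code":  {"base_url": "https://prem.example/v1", "model": "frontier-model"},
}

def lane_for(job: str) -> dict:
    # Default to the cheap cloud lane when a job class is unmapped.
    return LANES.get(job, LANES["classify"])
```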

Want one endpoint for premium, cheap, and open-model lanes?

KissAPI gives you OpenAI-compatible access to Claude, GPT, Gemini, Grok, DeepSeek, Qwen, and more, so you can route by cost and task instead of betting everything on one vendor.

Try KissAPI Free →