AI Intel: OpenAI Admits Pricing Was 'Accidental', Mystery 1T Model Sparks DeepSeek V4 Frenzy, Meta Kills the Metaverse
OpenAI just told the world its entire subscription model was an accident. Nick Turley, the company's head of ChatGPT, used that exact word in a Business Insider interview this week — and a hidden $100/month "Pro Lite" tier found in ChatGPT's source code suggests the fix is already underway. Meanwhile, a mysterious trillion-parameter model called Hunter Alpha appeared on OpenRouter with no name attached, and the entire AI community is convinced it's DeepSeek V4 in disguise.
OpenAI Calls Its Own Pricing "Accidental" — and a $100 Tier Is Coming
Here's what we know. Nick Turley confirmed ChatGPT now has 900 million weekly active users and 50 million paying subscribers. January and February 2026 were the biggest months for new subscriber additions in OpenAI's history. And yet, Turley described the current pricing structure as something that happened by accident rather than by design.
The math explains why. ChatGPT has evolved from a simple chatbot into a multi-step agent that runs complex tasks, burns through tokens, and demands serious GPU time. A casual user asking three questions a day and a developer running hour-long coding sessions cost wildly different amounts to serve — but both pay $20/month on Plus. That's not sustainable when per-user compute keeps climbing.
The evidence of change is already in the code. Researcher Tibor Blaho found a $100/month "Pro Lite" tier in ChatGPT's web app, complete with a backend API response listing the price including tax. It slots between the $20 Plus and $200 Pro plans, filling a gap that's been obvious since Pro launched.
OpenAI already added ChatGPT Go at $8/month in February. Now with Pro Lite at $100, the full ladder looks like: Free → Go ($8) → Plus ($20) → Pro Lite ($100) → Pro ($200). That's five tiers, and Turley's comments suggest unlimited access at any of them isn't guaranteed long-term.
Why it matters for developers: If you're building on ChatGPT's consumer features or relying on a Plus subscription for API-adjacent work, the ground is shifting. Usage-based pricing is coming, which actually favors API users who already pay per token. If you're exploring alternatives to subscription lock-in, pay-as-you-go API gateways like KissAPI let you access GPT-5, Claude, and other models without committing to any single provider's pricing experiments.
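The break-even math behind "usage-based pricing favors API users" is worth sketching. Here's a minimal estimate, assuming the $2.50/M input-token figure cited later in this article for GPT-5.4; the output-token price is an illustrative assumption, not a confirmed rate.

```python
# Rough break-even sketch: when does pay-per-token beat a flat subscription?
# The $2.50/M input price matches the GPT-5.4 figure cited in this article;
# the $10/M output price is an illustrative assumption.

def monthly_api_cost(input_tokens: int, output_tokens: int,
                     in_price_per_m: float = 2.50,
                     out_price_per_m: float = 10.00) -> float:
    """Estimated monthly spend in dollars for a pay-as-you-go API user."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# A casual user: ~1M input + 0.2M output tokens a month.
light = monthly_api_cost(1_000_000, 200_000)     # $4.50 -- well under Plus
# A heavy agent user: ~20M input + 4M output tokens a month.
heavy = monthly_api_cost(20_000_000, 4_000_000)  # $90.00 -- nearing Pro Lite

print(f"light: ${light:.2f}/mo, heavy: ${heavy:.2f}/mo")
```

The point is exactly Turley's: the light user is subsidizing the heavy one at a flat $20, and per-token pricing splits them apart.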
Hunter Alpha: The Mystery Model That Might Be DeepSeek V4
A model called "Hunter Alpha" appeared on OpenRouter this week with no company name, no press release, and no documentation. Just a profile page claiming 1 trillion parameters and a 1 million token context window. It's free to use. And it's good.
Reuters, Mashable, and half of AI Twitter are now running the same theory: this is DeepSeek secretly testing V4 before an official launch. The evidence is circumstantial but stacking up fast. A "V4 Lite" variant briefly appeared on DeepSeek's own website days before Hunter Alpha went live. Tests show the model has strong Chinese-language capabilities. DeepSeek has a history of anonymous soft launches followed by official announcements. And the timeline for a full V4 release has been reported as April 2026.
If this really is DeepSeek V4, the specs are staggering. One trillion parameters with a million-token context window, offered for free. For comparison, GPT-5.4 charges $2.50 per million input tokens with a 1M-token context window, and Claude Opus 4.6 charges $15 per million input tokens with a 200K window. A free model matching or approaching those capabilities would reshape the entire pricing landscape.
Why it matters for developers: Don't build production systems on Hunter Alpha — it could disappear tomorrow. But do pay attention. If DeepSeek V4 launches at free or near-free API pricing like its predecessors, it becomes the default choice for cost-sensitive workloads. The smart play is building on OpenAI-compatible endpoints that let you swap models without rewriting code. When V4 officially drops, you want to be one config change away from using it.
Meta Officially Kills the Metaverse
Meta announced it's shutting down Horizon Worlds for Quest VR headsets. The app disappears from Quest by March 31, and VR access ends entirely on June 15, 2026. A mobile-only version will limp along, but let's call this what it is: the metaverse is dead.
This was once the centerpiece of Meta's strategy. Mark Zuckerberg renamed the entire company for it. Reality Labs burned through tens of billions of dollars. And now the VR social network where cartoon avatars could meet up and play games is getting unplugged because nobody used it.
The pivot is obvious. Meta's future is AI, not VR. The company has been pouring resources into Llama models, AI assistants across its apps, and AI-powered advertising. Horizon Worlds was the last visible artifact of the metaverse era, and killing it sends a clear signal about where the money goes next.
Why it matters for developers: If you had any VR/metaverse projects targeting Quest + Horizon Worlds, start your migration now. More broadly, this confirms what the market has been saying for two years: AI is where the platform investment is going. Every dollar Meta pulls from VR goes into AI infrastructure, which means more models, more compute, and eventually more competition on pricing.
Claude Post-Outage: Users Say It Got Dumber
Claude had another rough stretch this week. After a service disruption, users on Reddit reported that the model felt noticeably degraded — slower responses, less coherent reasoning, more generic outputs. The "Claude got dumber after the outage" thread is becoming a recurring genre on r/ClaudeAI.
Anthropic hasn't confirmed any model changes, and it's worth noting that perceived quality drops after outages are partly psychological. When a service goes down and comes back, users scrutinize every response more carefully. That said, the pattern of complaints is consistent enough that something might be happening on the infrastructure side — possibly load balancing changes, request routing adjustments, or capacity constraints that affect response quality under heavy traffic.
Why it matters for developers: If you're running production workloads on Claude, build in quality monitoring. Log a sample of responses and track metrics like response length, coherence scores, or task completion rates. When users say "the model got dumber," you want data, not vibes. And having a fallback model configured — even if you rarely use it — means an outage on one provider doesn't take your whole product down.
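Here's one hedged sketch of "data, not vibes": wrap every model call so it logs crude per-response metrics and falls back to a second model on failure. The `call_primary`/`call_fallback` callables stand in for real SDK calls and are assumptions, not any provider's API.

```python
# Log simple per-response metrics and fall back to a second model when the
# primary call fails. The callables are stand-ins for real SDK calls.
import time
from typing import Callable

def call_with_fallback(prompt: str,
                       call_primary: Callable[[str], str],
                       call_fallback: Callable[[str], str]) -> dict:
    """Try the primary model; on failure use the fallback. Record crude metrics."""
    start = time.monotonic()
    try:
        text, model = call_primary(prompt), "primary"
    except Exception:
        text, model = call_fallback(prompt), "fallback"
    return {
        "model": model,
        "latency_s": time.monotonic() - start,
        "chars": len(text),  # response length: a crude quality proxy
        "text": text,
    }

# Stub usage: a flaky primary (simulated outage) and a working fallback.
def flaky(_prompt: str) -> str: raise TimeoutError("provider outage")
def stable(prompt: str) -> str: return f"answer to: {prompt}"

result = call_with_fallback("2+2?", flaky, stable)
print(result["model"], result["chars"])
```

Ship those metric dicts to whatever logging you already have; when the next "it got dumber" thread appears, you can check whether your own response lengths and latencies actually moved.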
Quick Hits
- OpenAI's tier explosion continues. With Go ($8), Plus ($20), Pro Lite ($100), and Pro ($200), OpenAI now has more subscription tiers than most streaming services. The $20 "unlimited" Plus plan is looking increasingly like a loss leader that won't last.
- GPT-5.4 mini and nano adoption is picking up. Developers are reporting solid results with the budget models for agent scaffolding and classification tasks. At $0.15/1M input tokens, nano is cheap enough to use as a router layer.
- The coding agent market crossed $6.5B. Claude Code ($2.5B+ ARR), Codex ($1B+), and Cursor ($2B+) are all growing fast. NVIDIA's NemoClaw entry at GTC signals that even hardware companies want a piece of the agent economy.
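The router-layer idea in the quick hits above can be sketched in a few lines: a cheap model labels the request, and a lookup maps that label to the cheapest tier that can handle it. Model names here are illustrative assumptions; in practice the labeling step would itself be a call to a budget model like a nano tier.

```python
# Sketch of a router layer. A cheap classifier pass produces a task label;
# this table maps labels to model tiers. Names are illustrative assumptions.

ROUTES = {
    "classification": "gpt-5.4-nano",  # cheap, label-only tasks
    "extraction":     "gpt-5.4-mini",  # structured output, modest reasoning
    "reasoning":      "gpt-5.4",       # multi-step agent work
}

def route(task_label: str) -> str:
    """Map a task label to a model name, defaulting to the strongest tier."""
    return ROUTES.get(task_label, ROUTES["reasoning"])

print(route("classification"))
print(route("unknown-task"))
```

At $0.15/1M input tokens, misrouting a few requests to nano and retrying on the big model still comes out cheaper than sending everything to the flagship.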
Access Every Model Through One API
GPT-5, Claude, DeepSeek, Gemini — one endpoint, one API key, pay-as-you-go. No subscriptions, no lock-in.
Try KissAPI Free →