Last week, OpenAI finally released GPT-5.
The model they promised would be "revolutionary." The breakthrough that would justify their $157 billion valuation and silence the skeptics.
It wasn't.
Don't get me wrong—GPT-5 is technically impressive. It's faster, more accurate, and handles edge cases better than GPT-4. But revolutionary? The kind of leap that transforms entire industries overnight? Not even close. It feels more like GPT-4.5, a solid incremental improvement wrapped in the marketing language of a moonshot.
And that's the sound of the empire cracking.
Everyone thinks OpenAI is a product company. They're wrong.
OpenAI isn't a company at all; it's a thesis. A multi-billion-dollar bet that AGI is coming and they'll be the ones to build it. That's the moonshot that got them the Microsoft money, the best minds in the world, and the cover of every magazine.
But here's the problem: their valuation is based on a ten-year-out promise, while the bills for compute and talent are due today. And those bills are staggering. We're talking about a money furnace for infrastructure and compensation packages that look more like athlete contracts—Meta is reportedly throwing nine-figure deals at top researchers to poach them.
The GPT-5 release was supposed to prove the thesis. Instead, it raised the uncomfortable question: what if the exponential curve is flattening? What if we're hitting the limits of what current approaches can achieve? What if the emperor has no clothes?
They're trying to win a marathon by running a series of desperate, expensive sprints. And GPT-5 just proved that strategy isn't working. The pressure to ship revolutionary breakthroughs on a product timeline is killing the deep research needed for the big prize.
The whole thing feels more fragile than ever.
The Promise That Built an Empire
Let's be clear about what OpenAI actually sold the world. Their $157 billion valuation isn't built on ChatGPT's success or their product strategy. It's built on something much more ambitious: the promise that GPT-5 would be a massive leap toward Artificial General Intelligence.
For months, OpenAI executives hinted that GPT-5 would be transformative. Sam Altman called it "materially better" than GPT-4. The company's safety team said they were preparing for capabilities that would require new governance frameworks. The AI research community held its breath, waiting for the next paradigm shift.
They sold a vision of exponential progress—that each model generation would bring us dramatically closer to human-level AI. And the market bought it, hook, line, and sinker. Microsoft invested $13 billion. Top researchers left prestigious academic positions. Competitors scrambled to catch up.
The problem with building an empire on promises is that eventually, you have to deliver.
The Gravity of Reality
GPT-5's lukewarm reception exposes OpenAI's fundamental problem: the physics of their business model don't work. They're burning through capital at a rate that requires exponential returns, but they just delivered linear progress.
The numbers are staggering. OpenAI estimated they need $10 billion just to build AGI—not to monetize it, just to build it. Their compute costs alone consume hundreds of millions annually. The talent war is even more expensive, with individual AI researchers commanding packages worth $200-300 million over four years.
This isn't a normal business where you can iterate toward profitability. Every day of operation costs tens of millions, and the only justification for those costs is the promise of revolutionary breakthroughs. GPT-5 was supposed to be that breakthrough. Instead, it's proof that throwing more compute and data at the problem isn't yielding exponential improvements anymore.
The emperor's new clothes are looking pretty threadbare.
The Feature Frenzy Desperation
Watch what OpenAI has been doing since GPT-5's muted reception. Suddenly they're sprinting toward every possible revenue stream: voice models that sound almost human, coding assistants, hints at competing with Google Search, enterprise features, custom GPTs.
This isn't innovation—it's desperation disguised as a product roadmap.
The company that promised to revolutionize intelligence is now scrambling to build incremental features that can generate cash flow right now. They're trying to transform from a research moonshot into a profitable SaaS company before investors notice that the AGI timeline just got a lot longer.
It's actually a brilliant survival strategy, but it's terrible for their core mission. Building great products and conducting groundbreaking research require fundamentally different approaches. Products need reliability, consistency, and clear value propositions. Research needs the freedom to fail, explore dead ends, and think in decades rather than quarters.
OpenAI is trying to be both Google (product excellence) and Bell Labs (research moonshots) simultaneously. GPT-5's reception suggests they're failing at both.
The Talent Exodus Accelerates
Nothing reveals cracks in an empire faster than watching the best people leave. And that exodus is accelerating.
Meta's superintelligence team has become a magnet for OpenAI's top researchers, offering compensation packages that would make Premier League transfer fees look modest. Google, Anthropic, and others are systematically poaching talent with offers that OpenAI can't match—not because they lack funds, but because their equity story just got a lot less compelling.
Each departure doesn't just cost OpenAI talent. It costs them the accumulated knowledge, relationships, and institutional memory that researcher built. Worse, it signals to the market that even insiders aren't confident in the company's trajectory.
The narrative on Reddit and tech Twitter has shifted from "when will OpenAI achieve AGI?" to "can they survive the talent war?" That's not the conversation a $157 billion company wants the market having.
The Hot Take: The Crown Slips Within 18 Months
I'm calling it: GPT-5 was OpenAI's iPhone moment, and they delivered a slightly better BlackBerry.
The current strategy is doomed. You can't sustain a $10 billion research budget on incremental improvements. You can't justify a $157 billion valuation on being marginally better than your previous model. And you definitely can't win a talent war when your core value proposition—being first to AGI—just became a lot less credible.
The next 18 months will see a cascade of problems. First, the talent exodus will accelerate as researchers lose faith in the exponential progress narrative. Second, investors will start asking harder questions about the path to AGI and the timeline for returns. Third, competitors who focused on building excellent AI products rather than chasing the AGI white whale will start eating OpenAI's lunch.
I don't think OpenAI will collapse—they have too much funding and infrastructure for a sudden death. But I can easily envision a world where they're no longer the undisputed leader by early 2026. The crown could slip to Anthropic (which stayed focused on safety and research), Meta (which has the resources to sustain massive losses), or a surprise player that makes different bets about the relationship between research and products.
GPT-5 was supposed to silence the doubters. Instead, it gave them ammunition.
The Bottom Line
OpenAI built an empire on the promise of exponential progress toward AGI. GPT-5 just proved that progress might be more linear than anyone wanted to admit. That's not just a technical problem—it's an existential crisis for a company whose entire business model depends on revolutionary breakthroughs happening on schedule.
The emperor's new clothes were supposed to be GPT-5.
Instead, we got a nice outfit that looks suspiciously similar to what he was wearing before. The whole kingdom is starting to notice.
OpenAI's greatest strength—its singular, audacious focus on AGI—has become its greatest vulnerability. They bet everything on being first to the most important prize in human history, but they also created expectations that might be impossible to meet with current approaches.
The next model release isn't just about technology anymore. It's about whether an empire built on promises can survive contact with reality. Based on what I'm seeing, the cracks are already forming.
The question isn't whether OpenAI will fail—it's whether they can transform fast enough to survive their own success. And after GPT-5, I'm not optimistic.
📩 Need help with an AI idea / project?
Have something you’re trying to build, automate, or scale using AI?
Let my team help you figure it out.
We’ll take a look and get back to you with suggestions, resources, or a potential implementation plan — no strings attached.
🚀 Want to Make Money from AI?
We’re quietly launching a private beta program that trains a small group to become AI Implementers — people who help other founders install AI systems and get paid for it.
If you're interested in joining this early group…
👉 Reply with the word “implementer”
Until next time -
Jelani