
Do we even need GPT-5?


Is Bigger Always Better? Why Reasoning (Not Size) Is the Next Big Leap for AI


The AI community is buzzing with speculation about GPT-5—when will OpenAI release it, how big will it be, and will it bring us closer to AGI? The prevailing assumption seems to be that adding more data and increasing model size is the surest path to intelligence on par with humans, or even beyond. But I’m not convinced that’s the whole story.

Are Current Models Already “Big Enough”?

GPT-4 and models of a similar size already pack in a substantial portion of human knowledge. The internet is vast, and these models have essentially “seen” most of what we’ve collectively created. Simply throwing more data at them won’t necessarily yield exponentially better results. In many ways, they’re already nearing the data-saturation point. The question then becomes: If GPT-4 and its contemporaries already contain most of our accumulated knowledge, why aren’t they perfectly intelligent or “human-like” in their reasoning? That’s where the concept of reasoning models comes in.

The Real Breakthrough: Reasoning Models

Some emerging models, such as OpenAI's o1, focus on how a system thinks rather than just how much it has memorized. Instead of producing an answer from a huge pool of data in one shot, these reasoning models iterate. They consider multiple candidate solutions in parallel, analyzing each to determine which response makes the most logical sense.

This is computationally expensive because it’s essentially like running the same prompt repeatedly—and possibly recursively—to home in on the best answer. Instead of one forward pass through a network, you might do dozens or even hundreds in seconds. The more compute you have, the more iterations you can run, and the deeper your reasoning becomes.
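One common way to picture this kind of test-time search is best-of-N sampling: run the same prompt several times, score each candidate, and keep the winner. A minimal sketch, where `generate_answer` and `score_answer` are hypothetical stand-ins for a model's sampling pass and its self-evaluation pass:

```python
import random

def generate_answer(prompt: str, seed: int) -> str:
    # Stand-in for one stochastic forward pass of a language model.
    random.seed(seed)
    return f"candidate-{random.randint(0, 9)} for: {prompt}"

def score_answer(answer: str) -> float:
    # Stand-in for a verifier or self-evaluation pass; here it just
    # reads the numeric tag out of the candidate string.
    return float(answer.split("-")[1].split(" ")[0])

def best_of_n(prompt: str, n: int) -> str:
    # n independent passes instead of one; keep the highest-scoring candidate.
    candidates = [generate_answer(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score_answer)

print(best_of_n("What is 7 * 8?", n=8))
```

The point of the sketch is the shape of the loop: more compute means a larger `n`, which means more candidates examined before committing to an answer.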

Scale Without Limits

What’s exciting is that there’s no clear limit to this approach. Suppose a reasoning model can examine 100 scenarios in 10 seconds today. If you suddenly give it 10 times the compute power, it can examine 1,000 scenarios in the same time. Extrapolate that further: if you scale this up enough, you reach a point where the model can effectively reason through lifetimes’ worth of possibilities in mere seconds.
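The arithmetic behind that thought experiment is just a linear relationship between compute and scenarios explored. A toy calculation, with all numbers illustrative and assuming rollouts parallelize cleanly:

```python
def scenarios_explored(base_rate: float, compute_multiplier: float, seconds: float) -> float:
    # base_rate: scenarios examined per second at today's compute budget.
    # With k times the compute, the model examines k times as many scenarios
    # in the same wall-clock time (assuming independent, parallel rollouts).
    return base_rate * compute_multiplier * seconds

today = scenarios_explored(base_rate=10, compute_multiplier=1, seconds=10)    # 100 scenarios
scaled = scenarios_explored(base_rate=10, compute_multiplier=10, seconds=10)  # 1,000 scenarios
print(today, scaled)
```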

In other words, the real bottleneck isn’t bigger data or bigger models; it’s how we use compute resources to let these models “think” longer and smarter.

The Power of Synthetic Data and Recursive Improvement

Another argument against endlessly expanding model size is the rise of synthetic data and the concept of recursive improvement. We see this in OpenAI’s (and others’) practice of using one model—say “o1”—to generate new, higher-quality datasets for training a subsequent model—say “o3.” Because “o1” can apply reasoning to refine the data, you end up with training material that’s arguably better than the original human-generated dataset.

You then take that improved dataset, give it to “o3” (along with more compute), and get a model that makes even smarter inferences. This process can be repeated almost indefinitely: o1 trains o3, o3 trains o4, and so on. With self-improving, recursive models, you have a direct path to something approaching superintelligence, all without dramatically increasing model size. The key is more inference compute rather than more training compute.
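The generation-over-generation loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not any real OpenAI pipeline: `quality` is an abstract proxy for reasoning ability, and the 10% refinement per pass is an illustrative number:

```python
from dataclasses import dataclass

@dataclass
class Model:
    generation: int
    quality: float  # abstract proxy for reasoning ability

def generate_synthetic_data(teacher: Model) -> float:
    # A stronger teacher, applying reasoning at inference time,
    # produces a cleaner, higher-quality training dataset.
    return teacher.quality * 1.1  # illustrative 10% refinement per pass

def train(data_quality: float, generation: int) -> Model:
    # The student roughly inherits the quality of its training data.
    return Model(generation=generation, quality=data_quality)

model = Model(generation=1, quality=1.0)  # the first reasoning model
for gen in range(2, 5):                   # each generation trains the next
    data = generate_synthetic_data(model)
    model = train(data, generation=gen)

print(model)  # each generation ends up modestly stronger than the last
```

The design choice worth noting: the improvement comes from spending inference compute in `generate_synthetic_data`, not from making `Model` any bigger.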

Conclusion

As we anticipate what might come after GPT-4 (and whether GPT-5 is on the horizon), it’s worth questioning the assumption that bigger is always better. Yes, more training data has led to powerful breakthroughs in AI, but we may already be at the threshold of “enough” data for broad human knowledge.

The real frontier, I believe, lies in advancing how models reason—letting them iterate over multiple possibilities, refine data using their own insights, and recursively improve future generations of AI. It’s a shift from sheer size to smarter, more deliberate thinking. If that’s the path forward, then we’re already well on our way to unlocking AI’s true potential.