The Path to Superintelligence: How AI’s Next Leap Could Happen Sooner Than We Think
Artificial Superintelligence (ASI)—the point at which machines not only match human intelligence but exceed it in ways we can’t fully predict—may be a lot closer than many people realize. In my last post, I discussed why endlessly expanding model size (think GPT-5) might not be the real key to advancing AI. Now, I’d like to outline how I believe these pieces fit together into a path that could rapidly lead us to superintelligence.
This path doesn’t hinge on any single breakthrough. Rather, it emerges from three distinct, critical steps.
1. Scaling Compute for Model Training
It’s easy to see how raw computational power has shaped AI’s capabilities. Just look at the leap from GPT-3.5 to GPT-4—or X’s Grok and Anthropic’s Claude. These models are only possible thanks to enormous GPU clusters that process data at internet scale.
But this has a natural limit. You eventually scrape all the easily accessible, human-generated data on the internet. That’s the point many AI labs believe we’re fast approaching—running out of traditional data sources. Which leads to the next big step.
2. Synthetic Data Powered by Reasoning Models
When you can’t just keep throwing more “human” data at your model, the solution is to create synthetic data. Early attempts ran into a major problem: hallucinations. Models can invent facts, and if those fabrications become training data, you introduce a lot of noise.
The breakthrough came when reasoning models entered the scene—beginning with releases like OpenAI’s o1. Instead of generating one answer, these models can produce dozens or even hundreds of responses, compare them, and synthesize the best outcome. This process dramatically reduces hallucinations and creates synthetic data that’s more precise than what we can scrape off the web.
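The generate-many-then-pick-the-best idea can be sketched in a few lines. This is a minimal toy, not anyone's actual pipeline: the `generate` function is a stub standing in for a real model call, and the answers it returns are invented for illustration.

```python
import random
from collections import Counter

random.seed(0)  # make the toy deterministic

def generate(model, prompt):
    """Hypothetical stand-in for one sampled model response.
    A real reasoning model would be called here; this stub returns a
    mostly-correct answer with an occasional hallucination mixed in."""
    return random.choice(["4", "4", "4", "a made-up answer"])

def best_of_n(model, prompt, n=100):
    """Sample n candidate answers and keep the consensus.
    Majority voting filters out one-off hallucinations, which is why
    the surviving answer is a safer synthetic training example."""
    answers = [generate(model, prompt) for _ in range(n)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / n  # consensus answer plus agreement rate

answer, agreement = best_of_n(model=None, prompt="What is 2 + 2?")
print(answer, agreement)
```

The key design point is that a wrong answer must win a majority vote to survive, so isolated fabrications are overwhelmingly likely to be discarded rather than recorded as training data.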
Side note: I actually think hallucinations have their place in creativity and exploration. But that’s a topic for another blog post!
3. Shifting the Compute Bottleneck to Inference
Traditionally, we’ve focused on the cost of training large models. Now, inference—the process of taking a prompt and generating a response—has become the new bottleneck. If each question requires the model to generate hundreds of potential responses internally, you multiply the compute demands dramatically.
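A quick back-of-envelope calculation shows why. Every number below is an assumption I picked for illustration; none come from any lab's published figures.

```python
# Illustrative assumptions only, chosen to show the scaling, not measured.
flops_per_response = 1e12     # assumed cost of generating one response
samples_per_query = 200       # assumed internal candidates per user query
queries_per_day = 1_000_000   # assumed daily traffic

single_shot_flops = flops_per_response * queries_per_day
reasoning_flops = flops_per_response * samples_per_query * queries_per_day

# Sampling 200 candidates per query multiplies inference compute 200x.
multiplier = reasoning_flops / single_shot_flops
print(f"Inference cost multiplier: {multiplier:.0f}x")
```

However many candidates a model samples internally, inference compute scales linearly with that count, which is what moves the bottleneck from training to serving.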
Yet this is also where the magic happens. By iteratively generating large volumes of reasoned synthetic data, one model (o1) teaches the next (o3), and the next iteration becomes smarter still. We saw this in action when o3 started scoring at a human level on the ARC-AGI benchmark and outperformed PhD-level humans on nearly every type of question thrown at it.
This results in a self-reinforcing cycle:
- Train a reasoning model (o1) on massive compute.
- Generate high-quality synthetic data with o1.
- Train a new model (o3) on this improved dataset with even more compute.
- Rinse and repeat, leveling up each new generation.
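The whole cycle reduces to a short loop. The functions below are hypothetical placeholders (no real training happens); only the control flow and the growing compute budget mirror the steps above.

```python
def distill_synthetic_data(model):
    """Hypothetical: run the model in best-of-n mode over many prompts
    and keep only the consensus answers as training examples."""
    return [f"consensus examples from {model}"]

def train_reasoning_model(dataset, compute):
    """Hypothetical: train the next-generation model on the filtered data."""
    return f"model trained with {compute:.0f} units of compute"

model = "o1"     # start from an existing reasoning model
compute = 1.0    # arbitrary compute units
for generation in range(3):
    data = distill_synthetic_data(model)          # generate better data
    compute *= 10                                 # each cycle demands far more compute
    model = train_reasoning_model(data, compute)  # train the successor
print(model)
```

Note the `compute *= 10` line: the loop itself is trivial, but the budget it consumes grows geometrically, which is exactly why the race described below is a race for capital and GPUs.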
Of course, each iteration requires exponentially more compute. That’s why we’re in an arms race to raise capital and secure enough GPUs to keep pushing forward. Whoever sustains this cycle the longest may be the first to reach superintelligence.
The Race to Superintelligence
It’s wild to think how quickly this could happen. AI labs around the world are hyper-focused on raising funds, scaling up their GPU fleets, and training ever-smarter models. The combination of reasoning models + synthetic data is a recipe for a feedback loop that may push us past human-level intelligence sooner than most expect.
For many, it’s both thrilling and daunting. We’re on the cusp of transformations that could redefine almost every aspect of modern life—from how we work to how we solve scientific problems, even to how we think about consciousness and creativity.
Yet here we are, documenting it all from the front lines. What a time to be alive!
Final Thoughts
- We’ve maxed out easy data. Models the size of GPT-4 and beyond have already absorbed much of humanity’s collective knowledge.
- The next leap is not just building bigger models—it’s building smarter ones that can refine their own training data through reasoning.
- This iterative process—model generates data, data trains the next model—may be the key to achieving superintelligence.
- Compute is the bottleneck: it’s all about who can apply the most resources to sustain these loops of training and inference.
If you think this pace of change is intense now, buckle up. We’re likely only at the beginning of AI’s most accelerated growth curve—and once superintelligence arrives, there may be no going back. Stay tuned for more posts where I’ll dive into the implications of hallucinations, creativity, and how we should (or shouldn’t) prepare for an ASI future.