AGI Roadmap: We’re 57% of the Way There
A landmark study from AI safety researchers finally gives us a concrete answer to the question, “How close is AGI?” and reveals why memory (not intelligence) is the real challenge that lies ahead.
In October 2025, a coalition of AI safety researchers, led by Dan Hendrycks of the Center for AI Safety and including luminaries like Turing Award winner Yoshua Bengio, former Google CEO Eric Schmidt, and UC Berkeley’s Dawn Song, published a groundbreaking framework that finally answers the question everyone has been asking: How close are we to AGI?
Their verdict: GPT-4 scores 27% AGI; GPT-5 hits 57%.
But here’s the twist: The path to 100% isn’t about making AI smarter; it’s about giving AI something it critically lacks:
Memory.
First, let’s revisit the definition of AGI.
AGI is an AI that matches or exceeds the cognitive versatility and proficiency of a well-educated adult human.
Not in one narrow task. Not just economically. Across the full spectrum of human cognition.
The research cuts through years of hype with a refreshingly simple definition, grounded in Cattell-Horn-Carroll (CHC) theory, which measures 10 core cognitive domains:
Knowledge • Reading & Writing • Math • Reasoning • Working Memory • Long-Term Memory Storage • Memory Retrieval • Visual • Auditory • Speed
Each domain receives equal weight (10%), creating an “AGI Score” from 0–100%.
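To make the arithmetic concrete, here’s a minimal sketch of how an equal-weight AGI score rolls up from per-domain scores. Four of the values below mirror the GPT-4 figures quoted in the next section; the other six are invented placeholders chosen so the example averages out to the published 27%, and are not numbers from the paper.

```python
# Minimal sketch: equal-weight AGI score across the 10 CHC domains.
# Six of the ten domain values below are hypothetical placeholders.

DOMAINS = [
    "Knowledge", "Reading & Writing", "Math", "Reasoning",
    "Working Memory", "Long-Term Memory Storage", "Memory Retrieval",
    "Visual", "Auditory", "Speed",
]

def agi_score(domain_scores: dict[str, float]) -> float:
    """Average the ten domain scores; each carries equal (10%) weight."""
    missing = set(DOMAINS) - set(domain_scores)
    if missing:
        raise ValueError(f"missing domains: {missing}")
    return sum(domain_scores[d] for d in DOMAINS) / len(DOMAINS)

# A jagged, GPT-4-like profile: strong knowledge, zero long-term memory.
profile = {
    "Knowledge": 80, "Reading & Writing": 60, "Math": 40, "Reasoning": 0,
    "Working Memory": 30, "Long-Term Memory Storage": 0, "Memory Retrieval": 20,
    "Visual": 20, "Auditory": 10, "Speed": 10,
}
print(f"AGI score: {agi_score(profile):.0f}%")  # -> AGI score: 27%
```

The equal weighting is the whole point of the framework: one saturated domain can’t compensate for a domain stuck at zero, which is why jagged profiles score far lower than their best skills suggest.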
📄 Read the full research: A Definition of AGI (Hendrycks et al., 2025)
Where AI Excels and Where It Fails
AI doesn’t progress uniformly. In fact, it’s wildly uneven.
GPT-4’s cognitive profile is:
📚 General Knowledge: 80% (strongest domain)
🧮 Math: 40% (moderate)
🧠 Reasoning: 0% (critical gap)
💾 Long-Term Memory Storage: 0% (complete amnesia)
It’s like a chess grandmaster who forgets every game they’ve ever played.
AI agents already excel at knowledge-intensive analysis, pattern recognition, and strategic decision-making. But without genuine long-term memory storage, they can’t build authentic relationships, learn from experience, or maintain a consistent identity over time.
Current systems use workarounds (massive context windows, external databases, constant re-prompting) that mask the problem but don’t solve it. For AI to be a responsible economic partner, it needs cognitive continuity.
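To illustrate what such a workaround looks like, here’s the external-database pattern in miniature: a generic sketch, not any particular product’s method, with every name in it hypothetical. Facts live in a store outside the model and get pasted back into the prompt on every call.

```python
# Generic sketch of the "external database" memory workaround.
# All names are hypothetical; call_model() stands in for any chat API.

class ExternalMemory:
    """A note store bolted on outside the model."""

    def __init__(self) -> None:
        self.notes: list[str] = []

    def store(self, note: str) -> None:
        self.notes.append(note)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive relevance: count shared words. Real systems use embeddings.
        words = set(query.lower().split())
        return sorted(self.notes,
                      key=lambda n: -len(words & set(n.lower().split())))[:k]

def call_model(prompt: str) -> str:
    return f"[model response to: {prompt[:60]}]"  # placeholder, no real API

memory = ExternalMemory()
memory.store("The user's name is Ada and she prefers concise answers.")

def answer(user_msg: str) -> str:
    # The "memory" is just retrieved text re-injected into each prompt.
    recalled = "\n".join(memory.retrieve(user_msg))
    return call_model(f"Known facts:\n{recalled}\n\nUser: {user_msg}")

print(answer("What's my name?"))
```

Nothing here changes the model itself; delete the scaffolding and every “memory” vanishes, which is exactly why the framework scores this domain at zero.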
From 57% to 100%: The Path Forward
That remaining 43% isn’t “more of the same.” It’s made up of fundamentally different capabilities:
Mostly Solved:
✅ Knowledge Recall
✅ Pattern Recognition
✅ Mathematical Reasoning
✅ Text and Image Generation
Critical Gaps:
❌ Genuine Long-Term Memory Formation (see the probe sketched after this list)
❌ Consistent Identity Across Time
❌ Robust Multi-Step Planning
❌ True Auditory Processing
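To make “genuine long-term memory formation” concrete in testable terms, here’s a hypothetical two-session probe in the spirit of the paper’s memory evaluations: teach a fact in one conversation, open a fresh one, and ask for it back. The session plumbing below is a stand-in, not a real API.

```python
# Hypothetical delayed-recall probe for long-term memory storage.
# new_session() is a stand-in for starting a fresh chat with any model.

def new_session():
    """Return a send() function scoped to one isolated conversation."""
    history: list[str] = []

    def send(msg: str) -> str:
        history.append(msg)
        # Placeholder reply: the model only ever sees this session's history.
        return f"[reply based on {len(history)} message(s) in this session]"

    return send

# Session 1: teach the model a new fact.
teach = new_session()
teach("Remember this: the project codename is 'Kestrel'.")

# Session 2: a genuinely new conversation. A system with real long-term
# memory storage would recall the codename; today's models cannot, because
# nothing persisted once session 1's context was discarded.
probe = new_session()
print(probe("What is the project codename?"))
```

Context windows and re-prompting can fake a pass within a single session; only persistence that survives without scaffolding passes this probe across sessions.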
Why This Matters Right Now
Progress is rapid (from 27% to 57% in just two years), but critical gaps remain. That’s why Reventlov is building governance frameworks today: creating the legal structures, trust mechanisms, and ethical guidelines that will enable AI agents to participate responsibly as they cross these cognitive thresholds.
When AI closes these gaps and gains true memory and a sustained identity, the infrastructure to support them must already be in place.