- Ex-OpenAI researcher Daniel Kokotajlo has pushed back his prediction for when artificial general intelligence (AGI) might emerge, now placing it in the early 2030s rather than by 2027.
- The revision reflects broader scepticism about rapid AI progress and raises fresh questions about the feasibility and governance of AGI development.
What happened: AGI timeline pushed back
Former OpenAI governance researcher Daniel Kokotajlo, best known for co-authoring the speculative scenario “AI 2027,” has recently updated his forecast for when AGI might realistically be achieved. Kokotajlo’s earlier scenario depicted a rapid advance in AI capabilities, with fully autonomous coding and an intelligence explosion potentially unfolding by 2027. That scenario attracted widespread attention and debate, and was even cited by political commentators in discussions of US-China AI competition.
In light of evolving evidence and the “jagged” progress observed in modern AI systems, Kokotajlo and his collaborators now believe that key milestones such as autonomous coding are likely to occur later than previously envisioned. In his updated outlook, he places the emergence of fully autonomous AI research capabilities in the early 2030s, shifting the speculative arrival of superintelligence to around 2034 rather than the late 2020s.
Kokotajlo has emphasised that even this revised timeline is inherently uncertain and should not be interpreted as a definitive prediction. In commentary shared on social media, he described progress toward the original scenario as “somewhat slower” than anticipated, underscoring the difficulties in forecasting technological breakthroughs with precision.
The new position reflects a growing trend among AI researchers and commentators to temper earlier excitement about imminent AGI. Some experts now argue that while AI systems have demonstrated remarkable capabilities in specific domains, their performance in broader real-world contexts remains uneven, with significant gaps in areas such as planning, reasoning and autonomous decision-making.
Why it’s important
The updated timeline from a prominent figure like Kokotajlo matters for several reasons. First, it influences both public perception and policy discussions about the urgency of addressing AGI risks. Predictions of near-term superintelligence have been used by some policymakers to urge rapid development of governance frameworks designed to safeguard society. Pushing those timelines back may shift the focus toward incremental, safety-oriented progress rather than dramatic, apocalyptic scenarios.
At the same time, this moderation does not imply that the risks associated with advanced AI have disappeared. Kokotajlo and other experts maintain that the potential for highly impactful systems remains, even if the path to AGI is longer and more complex than initially thought. Questions remain about how to balance innovation with ethical oversight, particularly as AI capabilities continue to influence critical sectors such as healthcare, finance and national security.
The debate also highlights deeper challenges in defining and measuring AGI itself. Some critics contend that the very concept of a singular “AGI moment” may be outdated or overly simplistic, arguing that AI progress may instead manifest as a continuum of increasingly general capabilities without a clear tipping point. Others warn that focusing too narrowly on timelines can distract from the more immediate and tangible issues posed by current AI technologies, including bias, privacy concerns and economic disruption.
