When Will the Singularity Happen? Expert Predictions

The most widely cited prediction places the singularity around 2045. That date comes from computer scientist Ray Kurzweil, who has maintained it since his 2005 book The Singularity Is Near and reaffirmed it as recently as 2024. But “the singularity” means something specific, expert opinions vary dramatically, and real-world bottlenecks could push the timeline in either direction.

What the Singularity Actually Means

The term was popularized by mathematician and science fiction author Vernor Vinge in a 1993 essay. He defined it as the moment technology creates entities with greater-than-human intelligence. Not just smart software, but something that surpasses the best human minds and then keeps improving. “For me, the superhumanity is the essence of the Singularity,” Vinge wrote. He called it a point where all our existing models of the future break down and “a new reality rules.”

Vinge outlined four paths that could get us there: computers that become “awake” and superhumanly intelligent, large networks that collectively wake up as an intelligent entity, brain-computer interfaces so seamless that augmented humans qualify as superhuman, or biological enhancements to the human brain itself. The singularity isn’t just about building a really good chatbot. It’s the moment intelligence begins to improve itself faster than humans can follow, triggering changes so rapid and profound that predicting anything beyond that point becomes impossible.

The Major Timeline Predictions

Kurzweil’s roadmap has two milestones. First, human-level AI by 2029, meaning systems that match the most skilled humans across most domains. Second, the singularity itself around 2045, when he expects intelligence to expand “a millionfold” through a merging of human and machine cognition. In a 2024 interview with The Guardian, he confirmed these dates haven’t budged: “I have stayed consistent.”

Vinge’s original 1993 prediction was broader. He expected superhuman intelligence to arrive before 2030, noting he’d “be surprised if this event occurs before 2005 or after 2030.” That window has nearly closed without a clear singularity, though recent AI progress has made his timeline look less implausible than it did a decade ago.

Prediction markets and expert surveys offer a middle ground. The forecasting platform Metaculus currently estimates a “weakly general” AI system by March 2028, with stronger general AI arriving around September 2032. A 2024 survey of 2,778 AI researchers found they estimated a 10% chance of machines outperforming humans at every possible task by 2027, and a 50% chance by 2047. That 2047 median lands remarkably close to Kurzweil’s 2045 target, though it’s worth noting that these same researchers have shifted their estimates earlier with each successive survey.

Where AI Stands Right Now

Current AI systems are closing the gap on some measures of human reasoning faster than many expected. On the ARC-AGI benchmark, a test designed to measure the kind of flexible, abstract reasoning that has historically separated humans from machines, Google’s Gemini 3 Deep Think system scores 98% on the first version, matching the human panel. On the harder second version, it scores 84.6% compared to 100% for humans, a gap of about 15 percentage points.

More relevant to the singularity question is whether AI can improve itself. A key milestone would be systems that autonomously redesign their own architecture to become smarter, creating the “intelligence explosion” that makes the singularity a singularity. Early versions of this are already happening. Researchers have demonstrated a self-improving coding agent that edits its own code to boost performance on benchmark tasks, discovering new tools and strategies without human guidance. In one case, the agent independently invented a more efficient file-editing method and built its own code navigation system. This is a long way from full recursive self-improvement, but it’s no longer theoretical.
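The loop behind such an agent is simple to sketch. Here is a minimal, purely illustrative version in Python: the agent's "code" is reduced to a parameter list, and the benchmark and mutation function are made up for the example. Real self-improving agents edit actual source code and re-run real test suites, but the keep-it-if-it-scores-better structure is the same.

```python
import random

def self_improve(score, mutate, candidate, rounds=50, seed=0):
    """Toy sketch of a self-improvement loop: repeatedly propose
    an edit to the agent's own 'code' (here just a parameter list)
    and keep the edit only if it scores better on the benchmark."""
    rng = random.Random(seed)
    best, best_score = candidate, score(candidate)
    for _ in range(rounds):
        proposal = mutate(best, rng)   # the agent edits itself
        s = score(proposal)            # re-run the benchmark
        if s > best_score:             # keep only strict improvements
            best, best_score = proposal, s
    return best, best_score

# Hypothetical "benchmark": higher is better, peak at parameters (3, -1).
score = lambda p: -((p[0] - 3) ** 2 + (p[1] + 1) ** 2)
mutate = lambda p, rng: [x + rng.uniform(-0.5, 0.5) for x in p]

best, s = self_improve(score, mutate, [0.0, 0.0])
```

The gap between this sketch and full recursive self-improvement is the gap between tuning parameters against a fixed benchmark and redesigning the optimizer itself, which is exactly why the current demonstrations are early steps rather than the intelligence explosion.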

Why Many Experts Think It Won’t Happen Soon

Not everyone buys the timeline, and the skeptics aren’t fringe voices. Yann LeCun, Meta’s chief AI scientist and a pioneer of deep learning, puts the probability of an AI-caused existential shift at effectively zero. His view represents a broader camp that sees future AI systems as powerful tools without their own goals or drives, not as autonomous superintelligences.

The technical objections are specific. Some researchers argue that large language models, despite their impressive breadth, lack the deep reasoning needed to synthesize genuinely new knowledge. They’re pattern-matchers operating on statistical regularities in text, not thinkers. One surveyed expert put it bluntly: “I don’t think it is ever possible for neural techniques, including LLMs, to produce AGI.” Others point out that extrapolating from current progress curves is naive. As one researcher noted, trends are not guarantees. Apply the same logic to Moore’s Law indefinitely and you eventually predict transistors smaller than the Planck length, the smallest physically meaningful scale.
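The Moore’s Law reductio is easy to check with back-of-envelope arithmetic. The sketch below assumes a roughly 5-nanometer feature size today, a two-year doubling period for transistor density, and feature size shrinking by a factor of √2 per density doubling; all three are illustrative round numbers, not measured specs.

```python
import math

feature_m = 5e-9    # assumed current transistor feature size, ~5 nm
planck_m = 1.6e-35  # Planck length, the smallest physically meaningful scale

# Each density doubling shrinks linear feature size by sqrt(2),
# so count how many sqrt(2) shrink steps separate 5 nm from the Planck length.
doublings = math.log(feature_m / planck_m) / math.log(math.sqrt(2))
years = 2 * doublings  # one doubling every two years
```

Under those assumptions, naive extrapolation reaches sub-Planck transistors after roughly 175 doublings, or about three and a half centuries. That physical impossibility is the point: exponential trends must flatten somewhere, and nothing in the curve itself tells you where.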

A separate group of skeptics worries less about superintelligence and more about the slower, structural harms AI is already causing: fake news eroding shared reality, automation deepening inequality, autonomous weapons lowering the threshold for military conflict. In this view, fixating on a dramatic singularity date distracts from the damage that ordinary, non-superintelligent AI is doing right now.

Physical Bottlenecks That Could Delay Progress

Even if the algorithms are ready, the hardware might not be. Training and running frontier AI models requires enormous amounts of electricity, and the power grid is struggling to keep up. AI-driven data centers are expected to add roughly 126 gigawatts of new power demand through 2028. For context, that’s roughly the total electricity generation capacity of a mid-sized country.
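To get a feel for what a gigawatt figure means, it helps to convert capacity into annual energy. The conversion below assumes continuous draw at the full 126 gigawatts, which overstates real consumption (data centers don’t run at peak load every hour) but gives an upper bound.

```python
gw = 126              # additional power demand from the text
hours_per_year = 8760 # 24 hours x 365 days

# 1 terawatt-hour = 1,000 gigawatt-hours
twh_per_year = gw * hours_per_year / 1000
```

That works out to about 1,100 terawatt-hours per year at continuous draw, on the order of the total annual electricity consumption of a large industrialized country.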

Data center developers expect power constraints to hit as early as 2027 or 2028, the result of years of underinvestment in electrical grids. New data center campuses now reach 1 to 4 gigawatts per site, a scale that overwhelms traditional grid connections. Permitting delays, equipment shortages, a 30% rise in grid equipment costs since 2019, and a scarcity of qualified construction engineers are all compounding the problem. Some operators are pursuing off-grid power solutions to avoid waiting for the grid to catch up, but the bottleneck is real and could slow the pace of scaling for years.

The raw computing power needed is also daunting. The human brain processes the equivalent of roughly 100 to 1,000 petaflops of information, a range that today’s top supercomputers can match on paper. But matching the brain in raw operations per second isn’t the same as matching it in intelligence. Brains are extraordinarily efficient, running on about 20 watts of power. Replicating that efficiency in silicon, at scale, remains an unsolved engineering challenge.
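The efficiency gap can be quantified from the figures above plus one added assumption: a roughly one-exaflop supercomputer drawing on the order of 20 megawatts. That power figure is illustrative, not a measured spec for any particular machine.

```python
# Brain: low end of the 100-1,000 petaflop estimate, at ~20 watts.
brain_flops = 100e15
brain_watts = 20

# Assumed supercomputer: ~1 exaflop at ~20 megawatts (illustrative).
machine_flops = 1e18
machine_watts = 20e6

brain_eff = brain_flops / brain_watts      # operations per second per watt
machine_eff = machine_flops / machine_watts
ratio = brain_eff / machine_eff
```

Under these assumptions the brain comes out about five orders of magnitude more energy-efficient, which is why matching it in raw operations per second is the easy half of the problem.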

What the Range of Estimates Looks Like

Pulling these threads together, the realistic window spans roughly two decades. The most aggressive credible estimates place transformative AI (systems that outperform humans across virtually all tasks) in the late 2020s to early 2030s. The median expert estimate lands around 2047. The singularity itself, the moment of recursive self-improvement and runaway intelligence growth, would follow some time after human-level AI is achieved, with Kurzweil’s 2045 being the most specific and optimistic mainstream prediction.

The honest answer is that nobody knows, and the range of informed opinion is wide enough to include “within a decade” and “not in our lifetimes.” What has changed in recent years is the direction of the surprises. AI capabilities have consistently arrived faster than experts predicted, benchmark after benchmark falling years ahead of schedule. That doesn’t guarantee the trend continues, but it explains why the median estimates keep shifting earlier with every new survey.