Five Hypotheses on the Future of AI
Despite the blistering summer of AI we are currently experiencing, most discussions about the field's future still swing between breathless optimism and cautious realism. Behind the noise, there are five distinct hypotheses that attempt to describe where AI might actually be heading. They range from the mundane to the metaphysical, each carrying different assumptions about the capabilities and limits of artificial intelligence.
The terminology and framing in this post are adapted from Richard Susskind’s book How to Think About AI, where he lays out five hypotheses about the long-term future of artificial intelligence. What follows is my attempt to reflect on and expand his categorisation through a practitioner’s lens, particularly in the context of what we’re seeing at the frontier of GenAI and agentic systems.
The first is the Hype Hypothesis. It holds that most of what we’re seeing today is inflated expectation: generative and agentic AI, while impressive, is mostly a flash in the pan. It wows us with its ability to generate fluent language or mimic creativity, but fails to translate into real, long-term transformational change. On this view, we’re at the peak of another AI hype cycle, and disillusionment will follow once the limits of current approaches become painfully clear. If true, investment and attention will eventually shift elsewhere, and AI will settle into a more modest, less disruptive role in society.
The second is the GenAI+ Hypothesis. This view holds that today’s systems, like GPT-4 or Claude, are just the beginning. With more data, better training strategies, and refined architectures, we can build significantly more capable models. These models might not be conscious or self-aware, but they could handle increasingly complex tasks, reshape industries, and augment human capabilities in profound ways. Crucially, this hypothesis does not require the emergence of true general intelligence. It assumes that scaling and alignment of narrow yet powerful models will be enough to drive a second machine age.
The third is the AGI Hypothesis. It holds that we are on a trajectory to create artificial general intelligence. Unlike GenAI+, which assumes better narrow systems, the AGI view anticipates systems that can match or exceed human performance across a wide range of cognitive tasks. The emphasis here is on transfer learning, abstraction, and agency. AGI would not just summarise documents or write code; it would understand goals, reason across domains, and adapt to novel environments in a human-like way. This hypothesis carries serious implications for the economy, governance, and even our conception of what it means to be intelligent.
Then there is the Superintelligence Hypothesis. It goes a step beyond AGI. If intelligence is substrate-independent, and if recursive self-improvement is feasible, then a superintelligence might emerge that far outstrips human intelligence in every dimension. This idea, popularised by thinkers like Nick Bostrom, implies a radical shift in the locus of control. Once a system can improve its own architecture and reasoning ability, we may no longer be able to predict or contain its behaviour. This hypothesis brings with it both existential risk and utopian potential, depending on how the transition is managed.
Finally, the Singularity Hypothesis envisions a radical convergence in human-machine evolution. It assumes that once AI surpasses a certain threshold, change becomes exponential, not just incremental. Technological progress accelerates beyond our ability to forecast it, leading to unpredictable social, economic, and philosophical consequences. This isn’t just about better tools or smarter assistants anymore. It’s about the possibility that intelligence itself becomes decoupled from biology and evolves along paths we can barely imagine (or predict).
Each hypothesis frames a different future. Which one feels closest to reality likely depends on your experience, your priors, and your proximity to the cutting edge. But they all remind us of one thing: we’re not just building tools. We’re standing at the edge of something far more consequential, and the story is still being written.