Why Demis Hassabis, the entrepreneur-scientist who brought AI to the world, was slow to see the significance of language models.
For a period of almost three years, I often met Demis Hassabis at a pub near his home, in a leafy area of North London. We would climb a shabby wooden staircase to a room up on the second floor, which was invariably empty. There, at an octagonal table under a once-grand chandelier, we would sit on leather chairs, order cappuccinos and a carafe of water, and spend two hours talking: me with an obsessively detailed list of topics to get through; Hassabis with his sparky riffs on intelligence and life, neuroscience and games, history and fiction. This was the period following the release of ChatGPT, so language and how to think about it came up repeatedly in our sessions.
“I used to do these thought experiments,” Hassabis told me one day.
“I would ask myself, how much would you know if you read all of Wikipedia?
“And the answer is, well, quite a lot. But would you understand how the physics of the world works?
“I mean, if I drop this glass” — here, Hassabis picked up a tumbler from the octagonal table — “it’s going to smash.
“Would you understand that? Probably not just from Wikipedia.
“How are you going to understand what something weighs? You could read about it, but you probably need to experience it.
“There’s this whole branch of neuroscience called action in perception, which theorizes that you can’t really perceive the world properly in some deep sense unless you act in it. And weight is one of those things that you won’t understand. I know roughly what it’s going to feel like to pick up this glass. But if I’d never picked anything up, how could I imagine the sensation?”
I recalled that, when Hassabis had founded his AI lab, DeepMind, his business plan had referred to “the mistaken yet highly influential hypotheses… that language is intelligence expressed.” The way Hassabis saw things, language was merely a system of symbols, inadequate by itself to teach machines to be intelligent. To understand the world, an intelligent machine would have to experience the world, either by assuming a robotic form or by acting in a game-like simulation.
“An AI system in the nineties would have a big database, and in there you would have this explanation of a dog,” Hassabis elaborated. “It would say, ‘A dog has legs.’ But when the system saw a real dog, how did it map the word ‘legs’ to the pixels representing legs?
“You’ve got these abstract relationships in symbolic space, but how do you relate any of them to the real world unless you interact with it?
“That was what we called the grounding problem. That was the first thing I misjudged. What I’ve realized now is that language is more inherently grounded than we thought.”
Language models like ChatGPT get feedback from humans who are hired to test them. “Of course, humans are grounded — we’ve experienced the world directly,” Hassabis explained. “So, in effect, language models learn from us how to be grounded.”
Grounding was only the first reason why Hassabis had doubted the potential of language models, however. The second concerned the scope of human experience.
“Imagine you’d asked me, five or ten years ago, how complex human civilization is? Or maybe, what is the number of possible human behaviors?
“My answer would’ve been something like, well, it’s semi-infinite. We, humans, like to think of ourselves as having infinite possibilities and infinite variety. There are so many different ways we can act and think and flourish. Earth’s a pretty big place. What you can do on earth is pretty massive.
“So, if the number of possible human behaviors wasn’t infinity, I would definitely have said it was some very large number. Like maybe 10⁵⁰ bits of information.
“But now it turns out that the number of possible human experiences isn’t that vast. It’s on the order of, say, ten trillion — 10¹³ or something. And we know that because there are roughly fourteen trillion words on the internet, and that seems to be enough to capture the vast majority of human behavioral possibilities.”
Even granting that the internet may not capture minority languages or cultures, I could see Hassabis’s point. “We’re less original than we thought?” I asked him.
“Or there’s just less variety. There’s a proverb, right? ‘There’s nothing new under the sun.’”
The proverb had evidently just popped into Hassabis’s head. “I don’t know who said that,” he mused. “Was it Solomon?”
It was Solomon. A fragment of Hassabis’s churchgoing childhood must have stuck in his head. The Book of Ecclesiastes, attributed to King Solomon, tells us, “What has been will be again; what has been done will be done again; there is nothing new under the sun.” It was not the sort of line that you’d expect to hear from the cheerleaders of Silicon Valley.
“Of course, we had to come up with transformers, an architecture that could grow big enough to take in all of the internet.” Hassabis was referring to the algorithmic breakthrough that made large language models possible. “But now that’s been done, we see what the result is. By ingesting a few trillion tokens, these systems have learned enough to understand nearly all of our experience.
“It didn’t have to be that way. It could have been that we downloaded fourteen trillion words, and the result was pathetic. Then we would have said, ‘Oh, we’re many orders of magnitude away from understanding civilization.’
“That is what I would have expected. But that isn’t what happened. That’s why I call these language models unreasonably effective.”
Excerpted from Chapter 13 of The Infinity Machine and reprinted with permission from Penguin Press, an imprint of Penguin Random House LLC, Copyright © 2026 by Sebastian Mallaby.
Sebastian Mallaby
Sebastian Mallaby is a Senior Fellow at the Council on Foreign Relations and one of the most respected chroniclers of the people and forces shaping the modern economy. He is the author of More Money Than God, a landmark history of the hedge fund industry, and The Man Who Knew, a Pulitzer Prize finalist biography of Alan Greenspan. His new book, The Infinity Machine, centers on Demis Hassabis, a Nobel Prize-winning scientist, world-class game designer, and co-founder of DeepMind, as the defining figure of the artificial intelligence era. Mallaby spent years in conversation with Hassabis and dozens of others across the AI landscape to trace how a Cambridge computer science student who was reading Gödel as a teenager, programming games in his dorm room, and skeptical of symbolic logic before most people had heard the word “neural network” ended up leading the lab that beat the world champion at Go, won a Nobel Prize in chemistry, and merged with Google Brain to pull ahead of OpenAI in the frontier model race — all while remaining, at root, a scientist who just can’t stand not knowing.
- The excerpt traces Demis Hassabis’s intellectual reversal on language and AI, from his founding belief that machines could never truly understand the world through words alone to his reluctant recognition that large language models have proven “unreasonably effective” at capturing the surprisingly finite scope of human experience.
