
BOOK EXCERPT: The Infinity Machine

by Sebastian Mallaby
5 min read

What if everything you thought about human originality was wrong? In this excerpt from The Infinity Machine, Sebastian Mallaby’s new book about Demis Hassabis, the scientist who built Google DeepMind, we learn why Demis once believed that words alone could never teach a machine to truly understand the world. The rise of ChatGPT forced him into a startling reckoning: that human experience, in all its seeming vastness, may be far more finite than any of us would like to admit.

Why Demis Hassabis, the entrepreneur-scientist who brought AI to the world, was slow to see the significance of language models.

For a period of almost three years, I often met Demis Hassabis at a pub near his home, in a leafy area of North London. We would climb a shabby wooden staircase to a room up on the second floor, which was invariably empty. There, at an octagonal table under a once-grand chandelier, we would sit on leather chairs, order cappuccinos and a carafe of water, and spend two hours talking: me with an obsessively detailed list of topics to get through; Hassabis with his sparky riffs on intelligence and life, neuroscience and games, history and fiction. This was the period following the release of ChatGPT, so language and how to think about it came up repeatedly in our sessions.

“I used to do these thought experiments,” Hassabis told me one day.

“I would ask myself, how much would you know if you read all of Wikipedia?

“And the answer is, well, quite a lot. But would you understand how the physics of the world works?

“I mean, if I drop this glass” — here, Hassabis picked up a tumbler from the octagonal table — “it’s going to smash.

“Would you understand that? Probably not just from Wikipedia.

“How are you going to understand what something weighs? You could read about it, but you probably need to experience it.

“There’s this whole branch of neuroscience called action in perception, which theorizes that you can’t really perceive the world properly in some deep sense unless you act in it. And weight is one of those things that you won’t understand. I know roughly what it’s going to feel like to pick up this glass. But if I’d never picked anything up, how could I imagine the sensation?”

I recalled that, when Hassabis had founded his AI lab, DeepMind, his business plan had referred to “the mistaken yet highly influential hypotheses… that language is intelligence expressed.” The way Hassabis saw things, language was merely a system of symbols, inadequate by itself to teach machines to be intelligent. To understand the world, an intelligent machine would have to experience the world, either by assuming a robotic form or by acting in a game-like simulation.

“An AI system in the nineties would have a big database, and in there you would have this explanation of a dog,” Hassabis elaborated. “It would say, ‘A dog has legs.’ But when the system saw a real dog, how did it map the word ‘legs’ to the pixels representing legs?

“You’ve got these abstract relationships in symbolic space, but how do you relate any of them to the real world unless you interact with it?

“That was what we called the grounding problem. That was the first thing I misjudged. What I’ve realized now is that language is more inherently grounded than we thought.”

Language models like ChatGPT get feedback from humans who are hired to test them. “Of course, humans are grounded — we’ve experienced the world directly,” Hassabis explained. “So, in effect, language models learn from us how to be grounded.”

Grounding was only the first reason why Hassabis had doubted the potential of language models, however. The second concerned the scope of human experience.

“Imagine you’d asked me, five or ten years ago, how complex is human civilization? Or maybe, what is the number of possible human behaviors?

“My answer would’ve been something like, well, it’s semi-infinite. We humans like to think of ourselves as having infinite possibilities and infinite variety. There are so many different ways we can act and think and flourish. Earth’s a pretty big place. What you can do on Earth is pretty massive.

“So, if the number of possible human behaviors wasn’t infinity, I would definitely have said it was some very large number. Like maybe 10⁵⁰ bits of information.

“But now it turns out that the number of possible human experiences isn’t that vast. It’s on the order of, say, ten trillion — 10¹³ or something. And we know that because there are roughly fourteen trillion words on the internet, and that seems to be enough to capture the vast majority of human behavioral possibilities.”

Even granting that the internet may not capture minority languages or cultures, I could see Hassabis’s point. “We’re less original than we thought?” I asked him.

“Or there’s just less variety. There’s a proverb, right? ‘There’s nothing new under the sun.’”

The proverb had evidently just popped into Hassabis’s head. “I don’t know who said that,” he mused. “Was it Solomon?”

It was Solomon. A fragment of Hassabis’s churchgoing childhood must have stuck in his head. The Book of Ecclesiastes, attributed to King Solomon, tells us, “What has been will be again; what has been done will be done again; there is nothing new under the sun.” It was not the sort of line that you’d expect to hear from the cheerleaders of Silicon Valley.

“Of course, we had to come up with transformers, an architecture that could grow big enough to take in all of the internet.” Hassabis was referring to the algorithmic breakthrough that made large language models possible. “But now that’s been done, we see what the result is. By ingesting a few trillion tokens, these systems have learned enough to understand nearly all of our experience.

“It didn’t have to be that way. It could have been that we downloaded fourteen trillion words, and the result was pathetic. Then we would have said, ‘Oh, we’re many orders of magnitude away from understanding civilization.’

“That is what I would have expected. But that isn’t what happened. That’s why I call these language models unreasonably effective.”
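
As a back-of-envelope check on the orders of magnitude in the conversation above, here is a minimal sketch in Python. The figures are the round numbers Hassabis quotes, not measured values, and the variable names are ours, introduced only for illustration:

```python
import math

# Round figures quoted in the conversation, not measured values.
words_on_internet = 14e12   # "roughly fourteen trillion words"
old_guess_bits = 1e50       # "maybe 10^50 bits of information"

# The internet's word count sits at roughly 10^13:
# log10(1.4 x 10^13) is about 13.1
print(f"internet words ~ 10^{math.log10(words_on_internet):.1f}")

# Gap between the earlier guess and what turned out to suffice:
# 10^50 / 10^13 = 10^37, i.e. about 37 orders of magnitude.
gap = math.log10(old_guess_bits / words_on_internet)
print(f"earlier guess is ~10^{gap:.0f} times larger")
```

If fourteen trillion words really do suffice, the space of human behavior is dozens of orders of magnitude smaller than the semi-infinite figure Hassabis once assumed.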

Want to go deeper into the machine? Hear Sebastian Mallaby take this further on the Invisible Machines podcast.

Excerpted from Chapter 13 of The Infinity Machine and reprinted with permission from Penguin Press, an imprint of Penguin Random House LLC, Copyright © 2026 by Sebastian Mallaby.
Featured image courtesy of Pawel Czerwinski.


Sebastian Mallaby
Sebastian Mallaby is a Senior Fellow at the Council on Foreign Relations and one of the most respected chroniclers of the people and forces shaping the modern economy. He is the author of More Money Than God, a landmark history of the hedge fund industry, and The Man Who Knew, a Pulitzer Prize finalist biography of Alan Greenspan. His new book, The Infinity Machine, centers on Demis Hassabis, a Nobel Prize-winning scientist, world-class game designer, and co-founder of DeepMind, as the defining figure of the artificial intelligence era. Mallaby spent years in conversation with Hassabis and dozens of others across the AI landscape to trace how a Cambridge computer science student who was reading Gödel as a teenager, programming games in his dorm room, and skeptical of symbolic logic before most people had heard the word “neural network” ended up leading the lab that beat the world champion at Go, won a Nobel Prize in chemistry, and merged with Google Brain to pull ahead of OpenAI in the frontier model race — all while remaining, at root, a scientist who just can’t stand not knowing.

Ideas In Brief
  • The excerpt traces Demis Hassabis’s intellectual reversal on language and AI, from his founding belief that machines could never truly understand the world through words alone to his reluctant recognition that large language models have proven “unreasonably effective” at capturing the near-finite scope of human experience.

