
How do human brains inform “thinking” machines?

by Josh Tyson
1 min read

What if your brain could predict the future—not in a mystical sense, but as a highly efficient machine constantly minimizing surprises? That’s the premise behind active inference, a “first principles” approach to understanding behavior and brain function. This week on Invisible Machines, Robb Wilson and Josh Tyson sit down with one of the foremost experts in this field, Dr. Thomas Parr, for a mind-expanding conversation that bridges neuroscience and AI.

Dr. Parr, a practicing physician and researcher at the Nuffield Department of Clinical Neurosciences at Oxford, explores how the free energy principle drives our brains to create models of the world, reducing the gap between what we expect and what we experience. He’s also the co-author of Active Inference: The Free Energy Principle in Mind, Brain, and Behavior, a must-read that connects ideas from physics, biology, and psychology to this revolutionary theory.

Robb’s fascination with our brains as “prediction machines” collides with Dr. Parr’s work as the conversation dives deep into how active inference can influence the development of AI—particularly the design of cognitive architectures for conversational technologies. Can the principles that guide our behavior also shape the evolution of “thinking” machines?

Prepare for a thought-provoking journey into the mechanics of the mind and the future of AI. Now, enjoy this chat with Dr. Thomas Parr.

Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword. 


Related Articles

Discover how personalization crosses the line from serving users to silently shaping them.

Article by Tushar Deshmukh
The Ethics of Personalization: When UX Crosses the Line from Helpful to Harmful
  • The article argues that personalization walks a fine ethical line between empowering users and quietly manipulating them.
  • It exposes how over-filtering doesn’t just limit content; it limits identity, replacing user curiosity with algorithmic compliance.
  • The piece calls on UX practitioners to treat ethical personalization as a foundational responsibility: one that demands transparency, fairness, and respect for human dignity.
4 min read

Learn why your users decide whether to stay or leave before they even understand your product.

Article by Tushar Deshmukh
The Psychology of Onboarding: First Impressions Rule the Brain
  • The article argues that onboarding is not where users begin; it is where they decide whether to stay or leave.
  • It shows that most onboarding failures are not design problems; they are psychological ones.
  • The piece challenges designers to recognize that first impressions are cognitive anchors and that the brain rarely revises its judgments.
5 min read
