
Scaled AI Requires Canonical Truth

by Josh Tyson
2 min read

Before enterprises can deploy AI agents that actually work, they need something most organizations don’t have: a single, authoritative source of truth. Joe DosSantos, VP of Enterprise Data and Analytics at Workday, joins Robb and Josh for a wide-ranging conversation about canonical knowledge, the semantic layer, and why data governance, a concept from the 1990s, has suddenly become essential for AI deployment.

The core challenge? Large language models are predictive engines that “anticipate what you probably would mean,” as DosSantos explains. They’re “pretty good at words, but they suck at math.” For B2C applications where multiple interpretations are acceptable, this works fine. But in enterprise contexts, where revenue was exactly $1.625651 billion last year, organizations need deterministic truth, not probabilistic guesses.

The solution requires three layers: establishing canonical knowledge (the laborious human work of defining what data means in your organization), building a semantic layer (the translation mechanism between human definitions and machine-readable formats like YAML), and using the LLM as an interface to deterministic back-end systems rather than treating AI as the system itself.
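The three layers above can be sketched in a few lines of code. This is a hypothetical illustration, not Workday's implementation: every name here (`CANONICAL_METRICS`, `resolve_metric`, `answer_metric_question`, the table name) is invented for the example.

```python
# Layer 1: canonical knowledge -- the laborious human work of agreeing,
# once, on what a term like "revenue" means in this organization.
CANONICAL_METRICS = {
    "revenue": {
        "definition": "Recognized revenue, prior fiscal year",
        "source_table": "finance.revenue_fy",  # hypothetical table name
        "unit": "USD billions",
    }
}

# Layer 2: semantic layer -- the translation mechanism from a human's
# word to the single authoritative definition (in practice often stored
# in a machine-readable format like YAML).
def resolve_metric(term: str) -> dict:
    key = term.strip().lower()
    if key not in CANONICAL_METRICS:
        raise KeyError(f"No canonical definition for '{term}'")
    return CANONICAL_METRICS[key]

# Layer 3: the LLM is only the interface. It maps the user's question to
# a metric term; the exact figure comes from a deterministic back end,
# never from the model's own prediction.
def answer_metric_question(term: str, backend: dict) -> str:
    metric = resolve_metric(term)
    value = backend[metric["source_table"]]  # exact value, not a guess
    return f"{term} was {value} {metric['unit']}"

backend = {"finance.revenue_fy": 1.625651}
print(answer_metric_question("revenue", backend))
```

The design point is that the probabilistic component is confined to interpretation; once the question is resolved to a canonical metric, the answer is a deterministic lookup.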

DosSantos offers a compelling analogy: trying to deploy AI agents without this foundation is like wanting granite countertops without first building the foundation of the house. You can't skip it; it's non-negotiable infrastructure.

The conversation also tackles AI anxiety, with DosSantos referencing Kate Darling’s framework of thinking about AI as animals rather than human replacements, and Robb Wilson proposing we view AI as simply “smarter machines” — skill saws that know the difference between wood and fingers, stoves that prevent fires, and washing machines that don’t shrink clothes.

For leaders evaluating AI investments, this episode clarifies what actually needs to be built before agents can deliver value: not flashy use cases, but the unglamorous, essential work of data governance and semantic translation.

Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword. 


