Before enterprises can deploy AI agents that actually work, they need something most organizations don’t have: a single, authoritative source of truth. Joe DosSantos, VP of Enterprise Data and Analytics at Workday, joins Robb and Josh for a wide-ranging conversation about canonical knowledge, the semantic layer, and why data governance, a concept from the 1990s, has suddenly become essential for AI deployment.
The core challenge? Large language models are predictive engines that “anticipate what you probably would mean,” as DosSantos explains. They’re “pretty good at words, but they suck at math.” For B2C applications where multiple interpretations are acceptable, this works fine. But in enterprise contexts, where revenue was exactly $1.625651 billion last year, organizations need deterministic truth, not probabilistic guesses.
The solution requires three layers: establishing canonical knowledge (the laborious human work of defining what data means in your organization), building a semantic layer (the translation mechanism between human definitions and machine-readable formats like YAML), and using the LLM as an interface to deterministic back-end systems rather than treating AI as the system itself.
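To make the middle layer concrete: a semantic-layer entry of the kind DosSantos describes might look roughly like the sketch below. The schema, metric name, table, and column here are all hypothetical (real tools vary), but the idea is the same: one human term maps to exactly one deterministic calculation.

```yaml
# Hypothetical semantic-layer definition: maps the human term
# "annual revenue" to a single deterministic calculation, so an
# LLM translating a question resolves to this query, not a guess.
metrics:
  - name: annual_revenue
    label: "Annual Revenue"
    description: "Total recognized revenue for the fiscal year."
    source_table: finance.recognized_revenue   # canonical table (assumed name)
    expression: SUM(amount_usd)                # deterministic aggregation
    grain: fiscal_year
    synonyms: ["revenue", "total sales", "top line"]  # phrases that map here
```

The payoff is that "revenue" always resolves to the same calculation: the LLM supplies the natural-language interface, while the number itself comes from the deterministic back end.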
DosSantos offers a compelling metaphor: trying to deploy AI agents without this foundation is like wanting granite countertops without building the foundation of the house first. You can't skip it; it's non-negotiable infrastructure.
The conversation also tackles AI anxiety, with DosSantos referencing Kate Darling’s framework of thinking about AI as animals rather than human replacements, and Robb Wilson proposing we view AI as simply “smarter machines” — skill saws that know the difference between wood and fingers, stoves that prevent fires, and washing machines that don’t shrink clothes.
For leaders evaluating AI investments, this episode clarifies what actually needs to be built before agents can deliver value: not flashy use cases, but the unglamorous, essential work of data governance and semantic translation.
