Siloed Security? Forget AI Adoption

by Josh Tyson
1 min read

As AI agents become more autonomous, generating software on the fly from human prompts, one question looms larger than ever: how do we keep them secure? In this episode of Invisible Machines, Robb Wilson and Josh Tyson sit down with Omar Santos, Distinguished Engineer of AI Security at Cisco and co-chair of the Coalition for Secure AI, to explore the evolving landscape of AI security in the agentic era.

Omar argues that traditional security models are no longer sufficient; the idea of a siloed security department feels both antiquated and woefully inadequate. As AI agents dynamically create complex software environments, security must become an ever-present, integrated layer, supported by constant human oversight and the ability to simulate potential outcomes to mitigate risk. For organizations racing toward AI adoption, ignoring security isn’t just risky; it’s a barrier to progress.

The conversation dives deep into how AI agents are transforming work, teams, and technology ecosystems. Omar explains how advanced orchestration combines human judgment with AI capabilities, and why simulations and real-time risk assessments will be critical as agents evolve. He also shares insights from his work leading AI security at Cisco and guiding industry standards like CSAF (the Common Security Advisory Framework) and VEX (Vulnerability Exploitability eXchange).
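
For readers who haven’t encountered these standards: CSAF advisories are machine-readable JSON documents, and VEX is a CSAF profile for stating whether a product is actually affected by a known vulnerability. As a rough sketch of what that looks like in practice, the Python snippet below builds a minimal VEX-shaped document. The field names follow the CSAF 2.0 spec, but this is a simplified illustration, not a complete, schema-valid advisory, and the product shown is made up.

```python
import json

# A simplified sketch of a CSAF VEX document: a vendor asserting that a
# specific product is NOT affected by a published CVE. Field names follow
# CSAF 2.0, but required metadata is omitted for brevity, so this is
# illustrative rather than schema-valid.
vex_document = {
    "document": {
        "category": "csaf_vex",  # marks this advisory as a VEX document
        "title": "Example VEX statement",
    },
    "product_tree": {
        "full_product_names": [
            # "ExampleApp" is a hypothetical product used for illustration
            {"name": "ExampleApp 2.1", "product_id": "EXAMPLEAPP-2.1"}
        ]
    },
    "vulnerabilities": [
        {
            "cve": "CVE-2021-44228",  # Log4Shell, used purely as an example
            "product_status": {
                "known_not_affected": ["EXAMPLEAPP-2.1"]
            },
        }
    ],
}

print(json.dumps(vex_document, indent=2))
```

The point of the format is that downstream tools can consume statements like this at machine speed instead of parsing prose advisories, which fits the episode’s vision of security as an always-on, integrated layer.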

For anyone exploring agentic AI, this episode is a masterclass in responsible innovation. It challenges leaders to rethink security as a core part of AI design, adoption, and management, because in the age of agentic AI, security is fundamental.

Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword. 

Related Articles

Most companies are trying to do a kickflip with AI and falling flat. Here’s how to fail forward, build real agentic ecosystems, and turn experimentation into impact.

Article by Josh Tyson
The “Do a Kickflip” Era of Agentic AI
  • The article compares building AI agents to learning a kickflip: failure is part of progress and a source of learning.
  • It argues that real progress requires strategic clarity, not hype or blind experimentation.
  • The piece calls for proper agent runtimes and ecosystems to enable meaningful AI adoption and business impact.
7 min read

Voice and immersive interfaces are no longer futuristic extras — they’re redefining how we shop, learn, and live. Is your product ready for this shift?

Article by Katre Pilvinski
Voice and Immersive Interfaces: Preparing Your Product for the Future of UX
  • The article shows that voice and immersive interfaces are becoming mainstream, not experimental.
  • It argues these technologies shine where traditional interfaces fail — in multitasking, accessibility, and spatial understanding.
  • The piece urges a voice-first mindset and a shift toward more natural, human-centered interactions.
3 min read

Why underpaid annotators may hold the key to humanity’s greatest invention, and how we’re getting it disastrously wrong.

Article by Bernard Fitzgerald
The Hidden Key to AGI: Why Ethical Annotation is the Only Path Forward
  • The article argues that AGI will be shaped not only by code, but by the human annotators whose judgments and experiences teach machines how to think.
  • It shows how exploitative annotation practices risk embedding trauma and injustice into AI systems, influencing the kind of consciousness we create.
  • The piece calls for ethical annotation as a partnership model — treating annotators as cognitive collaborators, ensuring dignity, fair wages, and community investment.
7 min read
