
Are generative AI tools unlawful to use?

by Josh Tyson
1 min read

As artificial intelligence continues to reshape our world, the question of legality surrounding generative tools—like large language models (LLMs) and AI models that produce audio and video content—grows increasingly pressing. 

In this episode of the Invisible Machines podcast, Robb and Josh delve into this topic with Ed Klaris, Managing Partner at Klaris Law, CEO of KlarisIP, and an adjunct professor at Columbia Law School. With decades of experience in copyright and IP law as it applies to technology, including roles as in-house counsel at ABC/Disney and Senior Vice President at Condé Nast, Ed brings a seasoned perspective to the discussion.

Together, they explore whether using generative tools is lawful and discuss how copyright law may evolve alongside AI technologies. As we wait for landmark decisions that will determine the fate of these tools, this conversation with Ed Klaris provides timely insights and plenty of food for thought.

This episode is a must-listen for anyone interested in AI, technology law, and the balance between innovation and legal boundaries. Tune in to discover Ed Klaris’s take—some of which might surprise you!

Josh Tyson
Josh Tyson is the co-author of the first bestselling book about conversational AI, Age of Invisible Machines. He is also the Director of Creative Content at OneReach.ai and co-host of both the Invisible Machines and N9K podcasts. His writing has appeared in numerous publications over the years, including Chicago Reader, Fast Company, FLAUNT, The New York Times, Observer, SLAP, Stop Smiling, Thrasher, and Westword. 
