
What privacy pros can learn from service design

by TJ Harrop, Tim de Sousa
6 min read

While all services have layers, some components act more like connective tissue: they hold the layers together. Things like technology, privacy, and accessibility aren't plug-and-play components; they are a vital part of what makes a service good for users.

What is Service Design?

Privacy is the connective tissue

Meet user needs, not just compliance obligations

Invisible is good

Many hands make light work

So what?

TJ Harrop

TJ Harrop is a Designer and Product Manager from Manchester, UK. He is currently working in Sydney, Australia, where he avoids exposing his milky British complexion to the sun at all costs. TJ specialises in transforming Government services using a designer's approach and an engineer's toolkit. He coaches and leads teams to deliver services that go deeper than websites and apps to include platforms, fees, support, and accessibility. Find him at tjharrop.com.

Tim de Sousa

Tim is a privacy and information governance policy specialist with a focus on innovation and emerging technologies. In 2019, Tim established and ran the NSW Government Policy Lab, Australia's first whole-of-government human-centred policy design lab. He has held senior privacy roles with Westpac, the Commonwealth Bank of Australia and the Office of the Australian Information Commissioner, and serves on the ANZ advisory board of the International Association of Privacy Professionals.

