
We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center

by Caroline Sinders
6 min read

Designers need a methodology that helps them weigh the benefits of using a new technology against its potential harm

Illustration by Erin Aniker.

AI is going to radically change society. It will do so in exciting and even life-saving ways, as we’ve seen in early projects that translate languages (in your own voice!), create assistant chatbots, make new works of art, and more accurately detect and analyze cancer.

But AI will also alter society in ways that are harmful, as evidenced by predictive policing experiments that reinforce bias and disproportionately affect poor communities, and by AI’s failure to recognize different skin tones. The potential of these biases to harm vulnerable populations creates an entirely new category of human rights concerns. As legislation that attempts to curb these dangers moves forward, design will be integral in reflecting those changes.

“We need a new framework for working with AI, one that goes beyond data accountability and creation.”

Indeed, many civil society organizations, nonprofits, think tanks, and companies already understand AI’s effect on society and have been working toward creating ethical standards for this burgeoning field. But designers working with AI need something that goes further than general guidelines and speaks directly to how design often creates and perpetuates bias in technology.

We need a new framework for working with AI, one that goes beyond data accountability and creation. We need Human Rights Centered Design. 

Designing for AI

Caroline Sinders speaking on Human Rights Centered Design at the 2019 AIGA Design Conference.

Here’s why we need this: AI is technology, and technology is never neutral. How we make technology, how we conceptualize it, how we imagine where it fits into culture, and what problems it will solve when placed into product design—these are design choices that can have a deep impact on society. 

Take facial recognition, for example, which seems relatively innocuous when used to unlock an iPhone more easily. That same technology can spell radical harm for another person when used by law enforcement due to its tendency to deliver false matches for certain groups, like women and people of color.

These harms can be curbed at the development stage of these products by asking critical questions both at the onset of the design process and the whole way through. This is where Human Rights Centered Design comes in. I’ve been using this term to describe a design methodology inspired by the UN’s 1948 Universal Declaration of Human Rights, which outlines the basic inalienable rights afforded to all people, including the right to freedom of speech and expression, security, and liberty for all.

Human Rights Centered Design insists on the same sovereignty and protection for the user of a product. In essence, this means respecting a user’s privacy and data, thinking about the digital rights of people across the world (instead of just in our own backyards), and designing for all. 

The six principles of Human Rights Centered Design are:

  1. Human Rights Centered Design is about privacy and data protection first, recognizing that data is human, inherently and always. 
  2. It puts the user’s agency first by always focusing on consent. Always offer a way for a user to say yes or no, without being tricked or nudged. 
  3. It doesn’t design with opt-out as the default; it puts choice at the forefront of design. 
  4. It designs for the Global South first and centers diversity of experiences. 
  5. It actively asks “What could go wrong in this product?”—from the benign to the extreme—and then plans for those use cases. 
  6. It views cases of misuse as serious problems and not as edge cases because a bug is a feature until it’s fixed. 

So again, using the example of facial recognition technology, a Human Rights Centered Design approach would ask: Is the user aware that facial recognition is being used in products like iPhones, CCTV cameras, and hiring software? Can they opt into this usage? What sort of power imbalance, frictions, or harms arise if they try to opt out? And does it work better for one group than another? 

“AI is technology, and technology is never neutral.”

One of the most important tenets of Human Rights Centered Design is to design for vulnerable users and non-Western users first. Amie Stepanovich, executive director of Silicon Flatirons, suggests doing this by expanding the idea of who your user is. 

“A lot of these systems are designed by people who are coming from fairly privileged backgrounds, and they’re designing them for a specific use case based on their own understanding,” she says. “That might not be the best use case for the people that these systems end up serving. If you’re not thinking about those populations in advance and doing real assessments based on them, that’s where a lot of the design decisions end up failing.” 

It can be expensive to conduct large-scale ethnographic research of the kind that large technology companies can organize across the globe. However, small startups and companies can and should think about how their products can contribute to harm when used outside a U.S. and Western context. A great way to engage is to reach out to civil society organizations that conduct extensive research on the harms of technology, ask them for advice and feedback, and inquire about running co-design and co-research sessions with them.

A Human Rights Centered Design methodology also asks designers to consider the worst that could possibly go wrong with a product, and who will be most vulnerable to that error. Sarah Aoun, the director of technology at the Open Technology Fund, suggests thinking, “If there was a power switch at the top [of your product], and all of a sudden a bad actor—be it a country or government—has access to all of the data the product is gathering, what could they do with that information?” 

Facial recognition technology is already used in China to pay for subway rides, where instead of swiping a ticket, a user’s face is scanned and stored in a system tied to credit card information. It could soon be implemented in London. A Human Rights Centered Design approach would ask: If facial recognition was installed across all the subway stations in a major city, what could possibly go wrong? What would happen if someone had access to all of the facial recognition data from a popular subway station? Who would be harmed the most? Do the benefits outweigh the consequences?

Suddenly using facial recognition to get through a turnstile quicker doesn’t seem like such a good idea. And that’s fine because, as designers, we can come up with a new, less harmful solution to this design problem. This is where the Human Rights Centered Design framework is key—it forces us to take stock of how this kind of technology will actually exist in society and what it will look like. 

Applying this framework will look different from company to company, and retooling a design process for a much bigger and more diverse audience can feel like a tall order. The best place for teams to start is by engaging with NGOs, civil society organizations, and the victims of misused technology, to better understand the context the product will exist within and the research that has already been done in that area. From there, a Human Rights Centered Design approach focuses teams on user agency and user privacy from the outset. Expanding the idea of who our users really are, and planning for harm reduction first, can make better, safer, and more ethical technology.


Caroline Sinders
Caroline Sinders is a machine-learning-design researcher, artist, and online harassment expert. For the past few years, she has been examining the intersections of technology’s impact in society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focusing on the intersections of machine learning, user research, designing for the public good, and solving difficult communication problems. As a designer and researcher, she has worked with Facebook, Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others.

