
We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center

by Caroline Sinders
6 min read

Designers need a methodology that helps them weigh the benefits of using a new technology against its potential harm

Illustration by Erin Aniker.

AI is going to radically change society. It will do so in exciting and even life-saving ways, as we’ve seen in early projects that translate languages (in your own voice!), create assistant chatbots, make new works of art, and more accurately detect and analyze cancer.

But AI will also alter society in ways that are harmful, as evidenced by experiments in predictive policing technology that reinforce bias and disproportionately affect poor communities, as well as AI’s inability to recognize different skin tones. The potential of these biases to harm vulnerable populations creates an entirely new category of human rights concerns. As legislation that attempts to curb these dangers moves forward, design will be integral in reflecting those changes.

“We need a new framework for working with AI, one that goes beyond data accountability and creation.”

Indeed, there are many civil society organizations, nonprofits, think tanks, and companies that already understand AI’s effect on society, and have been working toward creating ethical standards for this burgeoning field. But for designers working with AI, we need something that goes even further than general guidelines and speaks directly to how design often impacts and perpetuates the biases in technology. 

We need a new framework for working with AI, one that goes beyond data accountability and creation. We need Human Rights Centered Design. 

Caroline Sinders speaking on Human Rights Centered Design at the 2019 AIGA Design Conference.

Here’s why we need this: AI is technology, and technology is never neutral. How we make technology, how we conceptualize it, how we imagine where it fits into culture, and what problems it will solve when placed into product design—these are design choices that can have a deep impact on society. 

Take facial recognition, for example, which seems relatively innocuous when used to unlock an iPhone more easily. That same technology can spell radical harm for another person when used by law enforcement due to its tendency to deliver false matches for certain groups, like women and people of color.

These harms can be curbed at the development stage of these products by asking critical questions at the outset of the design process and the whole way through. This is where Human Rights Centered Design comes in. I’ve been using this term to describe a design methodology inspired by the UN’s 1948 Universal Declaration of Human Rights, which outlines the basic inalienable rights afforded to all people, including the rights to freedom of speech and expression, security, and liberty.

Human Rights Centered Design insists on the same sovereignty and protection for the user of a product. In essence, this means respecting a user’s privacy and data, thinking about the digital rights of people across the world (instead of just in our own backyards), and designing for all. 

The six principles of Human Rights Centered Design are:

  1. Human Rights Centered Design is about privacy and data protection first, recognizing that data is human, inherently and always. 
  2. It puts the user’s agency first by always focusing on consent. Always offer a way for a user to say yes or no, without being tricked or nudged. 
  3. It doesn’t design with only an opt-out in mind; it puts choice at the forefront of design (see the sketch after this list). 
  4. It designs for the Global South first and centers diversity of experiences. 
  5. It actively asks “What could go wrong in this product?”—from the benign to the extreme—and then plans for those use cases. 
  6. It views cases of misuse as serious problems and not as edge cases because a bug is a feature until it’s fixed. 
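
To make the consent-first principles concrete, here is a minimal sketch, in Python, of what opt-in-by-default data handling might look like inside a product. Every name in it (ConsentLedger, capture_face_embedding, and so on) is hypothetical, invented for illustration rather than taken from any real product or library:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of principles 1-3: data collection stays off until
# the user explicitly opts in, and revoking consent is as frictionless as
# granting it. All names here are illustrative.

@dataclass
class ConsentRecord:
    feature: str                           # e.g. "face_unlock"
    granted: bool = False                  # opt-in: the default is always "no"
    granted_at: Optional[datetime] = None

class ConsentLedger:
    """Tracks per-feature consent; the absence of a record means no consent."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, feature: str) -> None:
        self._records[feature] = ConsentRecord(
            feature, granted=True, granted_at=datetime.now(timezone.utc)
        )

    def revoke(self, feature: str) -> None:
        # Revoking must be as easy as granting: one call, no nudging.
        self._records[feature] = ConsentRecord(feature, granted=False)

    def allows(self, feature: str) -> bool:
        record = self._records.get(feature)
        return record is not None and record.granted

def capture_face_embedding(ledger: ConsentLedger, image: bytes) -> Optional[bytes]:
    # Fail closed: without explicit consent, no data is processed or stored.
    if not ledger.allows("face_unlock"):
        return None
    return _run_embedding_model(image)

def _run_embedding_model(image: bytes) -> bytes:
    # Stand-in for a real model call; returns a dummy value here.
    return b"embedding-of-" + image[:8]

if __name__ == "__main__":
    ledger = ConsentLedger()
    print(capture_face_embedding(ledger, b"pixels"))  # None: user never opted in
    ledger.grant("face_unlock")
    print(capture_face_embedding(ledger, b"pixels"))  # embedding, consent given
    ledger.revoke("face_unlock")
    print(capture_face_embedding(ledger, b"pixels"))  # None again after revocation
```

The point is not these particular classes but the default they encode: the system does nothing with a person’s data until that person has clearly said yes, and saying no later is just as easy.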

So again, using the example of facial recognition technology, a Human Rights Centered Design approach would ask: Is the user aware that facial recognition is being used in products like iPhones, CCTV cameras, and hiring software? Can they opt into this usage? What sort of power imbalance, frictions, or harms arise if they try to opt out? And does it work better for one group than another? 

“AI is technology, and technology is never neutral.”

One of the most important tenets of Human Rights Centered Design is to design for vulnerable users and non-Western users first. Amie Stepanovich, executive director of Silicon Flatirons, suggests doing this by expanding the idea of who your user is. 

“A lot of these systems are designed by people who are coming from fairly privileged backgrounds, and they’re designing them for a specific use case based on their own understanding,” she says. “That might not be the best use case for the people that these systems end up serving. If you’re not thinking about those populations in advance and doing real assessments based on them, that’s where a lot of the design decisions end up failing.” 

It can be expensive to conduct large-scale ethnographic research, the kind that large technology companies can organize across the globe. However, small startups and companies can and should think about how their products could contribute to harm when used outside of a U.S. and Western context. A great way to engage is to reach out to civil society organizations that conduct extensive research on the harms of technology, ask for advice and feedback, and inquire about running co-design and co-research sessions with them.

A Human Rights Centered Design methodology also asks designers to consider the worst that could possibly go wrong with a product, and who will be most vulnerable to that error. Sarah Aoun, the director of technology at the Open Technology Fund, suggests thinking, “If there was a power switch at the top [of your product], and all of a sudden a bad actor—be it a country or government—has access to all of the data the product is gathering, what could they do with that information?” 

Facial recognition technology is already used in China to pay for subway rides: instead of swiping a ticket, a user’s face is scanned and stored in a system tied to credit card information. It could soon be implemented in London. A Human Rights Centered Design approach would ask: If facial recognition were installed across all the subway stations in a major city, what could possibly go wrong? What would happen if someone had access to all of the facial recognition data from a popular subway station? Who would be harmed the most? Do the benefits outweigh the consequences?

Suddenly using facial recognition to get through a turnstile quicker doesn’t seem like such a good idea. And that’s fine because, as designers, we can come up with a new, less harmful solution to this design problem. This is where the Human Rights Centered Design framework is key—it forces us to take stock of how this kind of technology will actually exist in society and what it will look like. 

Applying this framework will look different from company to company, and retooling a design process for a much bigger and more diverse audience can feel like a tall order. The best place for teams to start is by engaging with NGOs, civil society organizations, and the victims of misused technology, to better understand the context the product will exist within and the research that’s already been done in that area. From there, a Human Rights Centered Design approach focuses teams on user agency and user privacy from the outset. Expanding the idea of who our users really are, and planning for harm reduction first, can make for better, safer, and more ethical technology.


Caroline Sinders
Caroline Sinders is a machine-learning-design researcher, artist, and online harassment expert. For the past few years, she has been examining the intersections of technology’s impact in society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focusing on the intersections of machine learning, user research, designing for the public good, and solving difficult communication problems. As a designer and researcher, she has worked with Facebook, Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others.
