
We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center

by Caroline Sinders
6 min read

Designers need a methodology that helps them weigh the benefits of using a new technology against its potential harm

Illustration by Erin Aniker.

AI is going to radically change society. It will do so in exciting and even life-saving ways, as we’ve seen in early projects that translate languages (in your own voice!), create assistant chatbots, make new works of art, and more accurately detect and analyze cancer.

But AI will also alter society in ways that are harmful, as evidenced by experiments in predictive policing technology that reinforce bias and disproportionately affect poor communities, as well as AI’s inability to recognize different skin tones. The potential of these biases to harm vulnerable populations creates an entirely new category of human rights concerns. As legislation that attempts to curb these dangers moves forward, design will be integral in reflecting those changes.

“We need a new framework for working with AI, one that goes beyond data accountability and creation.”

Indeed, there are many civil society organizations, nonprofits, think tanks, and companies that already understand AI’s effect on society, and have been working toward creating ethical standards for this burgeoning field. But for designers working with AI, we need something that goes even further than general guidelines and speaks directly to how design often impacts and perpetuates the biases in technology. 

We need a new framework for working with AI, one that goes beyond data accountability and creation. We need Human Rights Centered Design. 


Caroline Sinders speaking on Human Rights Centered Design at the 2019 AIGA Design Conference.

Here’s why we need this: AI is technology, and technology is never neutral. How we make technology, how we conceptualize it, how we imagine where it fits into culture, and what problems it will solve when placed into product design—these are design choices that can have a deep impact on society. 

Take facial recognition, for example, which seems relatively innocuous when used to unlock an iPhone more easily. That same technology can spell radical harm for another person when used by law enforcement due to its tendency to deliver false matches for certain groups, like women and people of color.

These harms can be curbed at the development stages of these products by asking critical questions both at the outset of the design process and the whole way through. This is where Human Rights Centered Design comes in. I’ve been using this term to describe a design methodology inspired by the UN’s 1948 Universal Declaration of Human Rights, which outlines the basic inalienable rights afforded to all people, including the right to freedom of speech and expression, security, and liberty for all.

Human Rights Centered Design insists on the same sovereignty and protection for the user of a product. In essence, this means respecting a user’s privacy and data, thinking about the digital rights of people across the world (instead of just in our own backyards), and designing for all. 

The six principles of Human Rights Centered Design are:

  1. Human Rights Centered Design is about privacy and data protection first, recognizing that data is human, inherently and always. 
  2. It puts the user’s agency first by always focusing on consent. Always offer a way for a user to say yes or no, without being tricked or nudged. 
  3. It doesn’t design with opt-out as the only option; it puts choice at the forefront of design. 
  4. It designs for the Global South first and centers diversity of experiences. 
  5. It actively asks “What could go wrong in this product?”—from the benign to the extreme—and then plans for those use cases. 
  6. It views cases of misuse as serious problems and not as edge cases because a bug is a feature until it’s fixed. 

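Principles 2 and 3, consent and choice before collection, can be made concrete in code. The sketch below is purely illustrative: the names (`ConsentRecord`, `collect_face_data`, the `"face_unlock"` feature flag) are hypothetical, not from any real product, but they show what a default-deny, explicitly revocable consent model looks like at the data-collection boundary.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch: illustrative names, not a real product's API.
# The point is the default: no explicit opt-in means no collection.

@dataclass
class ConsentRecord:
    # Features the user has explicitly consented to, e.g. "face_unlock".
    # An empty set is the starting state: nothing is assumed.
    opted_in: set = field(default_factory=set)

    def grant(self, feature: str) -> None:
        self.opted_in.add(feature)

    def revoke(self, feature: str) -> None:
        # Revocation must be as easy as granting, with no dark patterns.
        self.opted_in.discard(feature)

def collect_face_data(consent: ConsentRecord, frame: bytes) -> Optional[bytes]:
    """Process biometric data only under explicit, revocable consent."""
    if "face_unlock" not in consent.opted_in:
        return None  # default deny: absence of consent means no collection
    return frame  # placeholder for the actual processing pipeline

consent = ConsentRecord()
assert collect_face_data(consent, b"frame") is None   # opt-in, not opt-out
consent.grant("face_unlock")
assert collect_face_data(consent, b"frame") == b"frame"
consent.revoke("face_unlock")
assert collect_face_data(consent, b"frame") is None   # revocation honored
```

The design choice worth noting is that consent is checked at the point of collection, not buried in a settings page: the system cannot accidentally gather data the user never said yes to.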
So again, using the example of facial recognition technology, a Human Rights Centered Design approach would ask: Is the user aware that facial recognition is being used in products like iPhones, CCTV cameras, and hiring software? Can they opt into this usage? What sort of power imbalance, frictions, or harms arise if they try to opt out? And does it work better for one group than another? 

“AI is technology, and technology is never neutral.”

One of the most important tenets of Human Rights Centered Design is to design for vulnerable users and non-western users first. Amie Stepanovich, executive director of Silicon Flatirons, suggests doing this by expanding the idea of who your user is. 

“A lot of these systems are designed by people who are coming from fairly privileged backgrounds, and they’re designing them for a specific use case based on their own understanding,” she says. “That might not be the best use case for the people that these systems end up serving. If you’re not thinking about those populations in advance and doing real assessments based on them, that’s where a lot of the design decisions end up failing.” 

It can be expensive to conduct large-scale ethnographic research, the kind that large technology companies can organize across the globe. However, small startups and companies can and should still think about how their products might contribute to harm when used outside of a U.S. and Western context. A great way to engage is to reach out to civil society organizations that conduct extensive research on the harms of technology, ask for advice and feedback, and inquire about conducting co-design and co-research sessions with them.

A Human Rights Centered Design methodology also asks designers to consider the worst that could possibly go wrong with a product, and who will be most vulnerable to that error. Sarah Aoun, the director of technology at the Open Technology Fund, suggests thinking, “If there was a power switch at the top [of your product], and all of a sudden a bad actor—be it a country or government—has access to all of the data the product is gathering, what could they do with that information?” 

Facial recognition technology is already used in China to pay for subway rides, where instead of swiping a ticket, a user’s face is scanned and stored in a system tied to credit card information. It could soon be implemented in London. A Human Rights Centered Design approach would ask: If facial recognition were installed across all the subway stations in a major city, what could possibly go wrong? What would happen if someone had access to all of the facial recognition data from a popular subway station? Who would be harmed the most? Do the benefits outweigh the consequences?

Suddenly using facial recognition to get through a turnstile quicker doesn’t seem like such a good idea. And that’s fine because, as designers, we can come up with a new, less harmful solution to this design problem. This is where the Human Rights Centered Design framework is key—it forces us to take stock of how this kind of technology will actually exist in society and what it will look like. 

Applying this framework will look different from company to company, and retooling a design process for a much bigger and more diverse audience can feel like a tall order. The best place for teams to start is by engaging with NGOs, civil society organizations, and the victims of misused technology, to better understand the context the product will exist within and the research that has already been done in that area. From there, a Human Rights Centered Design approach focuses teams on user agency and user privacy from the outset. Expanding the idea of who our users really are, and planning for harm reduction first, can make for better, safer, and more ethical technology.


Caroline Sinders
Caroline Sinders is a machine-learning-design researcher, artist, and online harassment expert. For the past few years, she has been examining the intersections of technology’s impact in society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focusing on the intersections of machine learning, user research, designing for the public good, and solving difficult communication problems. As a designer and researcher, she has worked with Facebook, Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others.
