
We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center

by Caroline Sinders
6 min read

Designers need a methodology that helps them weigh the benefits of using a new technology against its potential harm

Illustration by Erin Aniker.

AI is going to radically change society. It will do so in exciting and even life-saving ways, as we’ve seen in early projects that translate languages (in your own voice!), create assistant chatbots, make new works of art, and more accurately detect and analyze cancer.

But AI will also alter society in ways that are harmful, as evidenced by experiments in predictive policing technology that reinforce bias and disproportionately affect poor communities, as well as AI’s inability to recognize different skin tones. The potential of these biases to harm vulnerable populations creates an entirely new category of human rights concerns. As legislation that attempts to curb these dangers moves forward, design will be integral in reflecting those changes.

“We need a new framework for working with AI, one that goes beyond data accountability and creation.”

Indeed, there are many civil society organizations, nonprofits, think tanks, and companies that already understand AI’s effect on society, and have been working toward creating ethical standards for this burgeoning field. But for designers working with AI, we need something that goes even further than general guidelines and speaks directly to how design often impacts and perpetuates the biases in technology. 

We need a new framework for working with AI, one that goes beyond data accountability and creation. We need Human Rights Centered Design. 

Designing for AI

Caroline Sinders speaking on Human Rights Centered Design at the 2019 AIGA Design Conference.

Here’s why we need this: AI is technology, and technology is never neutral. How we make technology, how we conceptualize it, how we imagine where it fits into culture, and what problems it will solve when placed into product design—these are design choices that can have a deep impact on society. 

Take facial recognition, for example, which seems relatively innocuous when used to unlock an iPhone more easily. That same technology can spell radical harm for another person when used by law enforcement due to its tendency to deliver false matches for certain groups, like women and people of color.

These harms can be curbed during the development stages of these products by asking critical questions, both at the outset of the design process and the whole way through. This is where Human Rights Centered Design comes in. I’ve been using this term to describe a design methodology inspired by the UN’s 1948 Universal Declaration of Human Rights, which outlines the basic inalienable rights afforded to all people, including the rights to freedom of speech and expression, security, and liberty.

Human Rights Centered Design insists on the same sovereignty and protection for the user of a product. In essence, this means respecting a user’s privacy and data, thinking about the digital rights of people across the world (instead of just in our own backyards), and designing for all. 

The six principles of Human Rights Centered Design are:

  1. Human Rights Centered Design is about privacy and data protection first, recognizing that data is human, inherently and always. 
  2. It puts the user’s agency first by always focusing on consent. Always offer a way for a user to say yes or no, without being tricked or nudged. 
  3. It doesn’t design with only an opt-out in mind; it puts choice at the forefront of design. 
  4. It designs for the Global South first and centers diversity of experiences. 
  5. It actively asks “What could go wrong in this product?”—from the benign to the extreme—and then plans for those use cases. 
  6. It views cases of misuse as serious problems and not as edge cases because a bug is a feature until it’s fixed. 
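The six principles above can be read as a checklist a team runs before shipping. As a purely illustrative sketch (the class and field names here are hypothetical, not part of the author’s framework), the review might be captured in code so that any unmet principle blocks a launch discussion:

```python
# Hypothetical sketch of the six HRCD principles as a product-review
# checklist. All names are illustrative assumptions, not from the article.
from dataclasses import dataclass


@dataclass
class HRCDReview:
    """Yes/no answers to the six Human Rights Centered Design principles."""
    treats_data_as_human: bool       # 1. privacy and data protection first
    requires_explicit_consent: bool  # 2. user can say yes or no, no tricks or nudges
    offers_real_choice: bool         # 3. choice at the forefront, not opt-out-only
    designed_for_global_south: bool  # 4. centers diverse, non-Western experiences
    harms_assessed: bool             # 5. "what could go wrong?" asked and planned for
    misuse_treated_as_bug: bool      # 6. misuse is a serious problem, not an edge case

    def failures(self) -> list[str]:
        """Return the names of principles the product does not yet satisfy."""
        return [name for name, passed in vars(self).items() if not passed]


# Example: a product that skipped principles 3 and 4.
review = HRCDReview(True, True, False, False, True, True)
print(review.failures())  # ['offers_real_choice', 'designed_for_global_south']
```

The point of a structure like this is not automation; it is that each "no" answer names the design work still to be done before the product reaches users.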

So again, using the example of facial recognition technology, a Human Rights Centered Design approach would ask: Is the user aware that facial recognition is being used in products like iPhones, CCTV cameras, and hiring software? Can they opt into this usage? What sort of power imbalance, frictions, or harms arise if they try to opt out? And does it work better for one group than another? 

“AI is technology, and technology is never neutral.”

One of the most important tenets of Human Rights Centered Design is to design for vulnerable users and non-Western users first. Amie Stepanovich, executive director of Silicon Flatirons, suggests doing this by expanding the idea of who your user is. 

“A lot of these systems are designed by people who are coming from fairly privileged backgrounds, and they’re designing them for a specific use case based on their own understanding,” she says. “That might not be the best use case for the people that these systems end up serving. If you’re not thinking about those populations in advance and doing real assessments based on them, that’s where a lot of the design decisions end up failing.” 

It can be expensive to conduct large-scale ethnographic research, the kind that large technology companies can organize across the globe. However, small startups and companies can and should think about how their products could contribute to harm when used outside of a U.S. and Western context. A great way to engage is to reach out to civil society organizations that conduct extensive research on the harms of technology, ask for advice and feedback, and inquire about running co-design and co-research sessions with them.

A Human Rights Centered Design methodology also asks designers to consider the worst that could possibly go wrong with a product, and who will be most vulnerable to that error. Sarah Aoun, the director of technology at the Open Technology Fund, suggests thinking, “If there was a power switch at the top [of your product], and all of a sudden a bad actor—be it a country or government—has access to all of the data the product is gathering, what could they do with that information?” 

Facial recognition technology is already used in China to pay for subway rides: instead of swiping a ticket, a user’s face is scanned and stored in a system tied to credit card information. It could soon be implemented in London. A Human Rights Centered Design approach would ask: If facial recognition were installed across all the subway stations in a major city, what could possibly go wrong? What would happen if someone had access to all of the facial recognition data from a popular subway station? Who would be harmed the most? Do the benefits outweigh the consequences?

Suddenly using facial recognition to get through a turnstile quicker doesn’t seem like such a good idea. And that’s fine because, as designers, we can come up with a new, less harmful solution to this design problem. This is where the Human Rights Centered Design framework is key—it forces us to take stock of how this kind of technology will actually exist in society and what it will look like. 

Applying this framework will look different from company to company, and retooling a design process for a much bigger and more diverse audience can feel like a tall order. The best place for teams to start is by engaging with NGOs, civil society organizations, and the victims of misused technology, to better understand the context the product will exist within and the research that has already been done in that area. From there, a Human Rights Centered Design approach focuses teams on user agency and user privacy from the outset. Expanding the idea of who our users really are, and planning for harm reduction from the start, can make for better, safer, and more ethical technology.

About the author

Caroline Sinders is a machine-learning-design researcher, artist, and online harassment expert. For the past few years, she has been examining the intersections of technology’s impact on society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focusing on the intersections of machine learning, user research, designing for the public good, and solving difficult communication problems. As a designer and researcher, she has worked with Facebook, Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others.


