
We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center

by Caroline Sinders


Designers need a methodology that helps them weigh the benefits of using a new technology against its potential harm

Illustration by Erin Aniker.

AI is going to radically change society. It will do so in exciting and even life-saving ways, as we’ve seen in early projects that translate languages (in your own voice!), create assistant chatbots, make new works of art, and more accurately detect and analyze cancer.

But AI will also alter society in ways that are harmful, as evidenced by experiments in predictive policing technology that reinforce bias and disproportionately affect poor communities, as well as AI’s inability to recognize different skin tones. The potential of these biases to harm vulnerable populations creates an entirely new category of human rights concerns. As legislation that attempts to curb these dangers moves forward, design will be integral to reflecting those changes.


Indeed, there are many civil society organizations, nonprofits, think tanks, and companies that already understand AI’s effect on society, and have been working toward creating ethical standards for this burgeoning field. But for designers working with AI, we need something that goes even further than general guidelines and speaks directly to how design often impacts and perpetuates the biases in technology. 

We need a new framework for working with AI, one that goes beyond data accountability and creation. We need Human Rights Centered Design. 

Designing for AI

Caroline Sinders speaking on Human Rights Centered Design at the 2019 AIGA Design Conference.

Here’s why we need this: AI is technology, and technology is never neutral. How we make technology, how we conceptualize it, how we imagine where it fits into culture, and what problems it will solve when placed into product design—these are design choices that can have a deep impact on society. 

Take facial recognition, for example, which seems relatively innocuous when used to unlock an iPhone more easily. That same technology can spell radical harm for another person when used by law enforcement due to its tendency to deliver false matches for certain groups, like women and people of color.

These harms can be curbed at the development stages of these products by asking critical questions both at the outset of the design process and the whole way through. This is where Human Rights Centered Design comes in. I’ve been using this term to describe a design methodology inspired by the UN’s 1948 Universal Declaration of Human Rights, which outlines the basic inalienable rights afforded to all people, including the right to freedom of speech and expression, security, and liberty for all.

Human Rights Centered Design insists on the same sovereignty and protection for the user of a product. In essence, this means respecting a user’s privacy and data, thinking about the digital rights of people across the world (instead of just in our own backyards), and designing for all. 

The six principles of Human Rights Centered Design are:

  1. Human Rights Centered Design is about privacy and data protection first, recognizing that data is human, inherently and always. 
  2. It puts the user’s agency first by always focusing on consent. Always offer a way for a user to say yes or no, without being tricked or nudged. 
  3. It doesn’t treat opt-out as the only option; it puts choice at the forefront of design. 
  4. It designs for the Global South first and centers diversity of experiences. 
  5. It actively asks “What could go wrong in this product?”—from the benign to the extreme—and then plans for those use cases. 
  6. It views cases of misuse as serious problems and not as edge cases because a bug is a feature until it’s fixed. 

So again, using the example of facial recognition technology, a Human Rights Centered Design approach would ask: Is the user aware that facial recognition is being used in products like iPhones, CCTV cameras, and hiring software? Can they opt into this usage? What sort of power imbalance, frictions, or harms arise if they try to opt out? And does it work better for one group than another? 
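For product teams who build as well as design, the consent-first principle above can be expressed directly in code. The sketch below is purely illustrative — `ConsentRecord` and `may_run_face_recognition` are hypothetical names, not a real API — but it shows the default-deny posture a Human Rights Centered Design approach asks for: a user who was never asked, or who declined, is treated as having said no.

```python
# Minimal sketch of a consent-first gate for a hypothetical face-unlock
# feature. All names here are illustrative, not a real API.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ConsentRecord:
    # Consent is explicit and revocable; the default is "no".
    face_recognition_opt_in: bool = False


def may_run_face_recognition(consent: Optional[ConsentRecord]) -> bool:
    # Default-deny: no record at all, or an opt-in of False,
    # means the feature stays off.
    return consent is not None and consent.face_recognition_opt_in
```

The design choice that matters here is the `None` branch: absence of an answer is not consent, so a user who dismissed the prompt is never silently opted in.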


One of the most important tenets of Human Rights Centered Design is to design for vulnerable users and non-Western users first. Amie Stepanovich, executive director of Silicon Flatirons, suggests doing this by expanding the idea of who your user is. 

“A lot of these systems are designed by people who are coming from fairly privileged backgrounds, and they’re designing them for a specific use case based on their own understanding,” she says. “That might not be the best use case for the people that these systems end up serving. If you’re not thinking about those populations in advance and doing real assessments based on them, that’s where a lot of the design decisions end up failing.” 

It can be expensive to conduct the kind of large-scale ethnographic research that large technology companies can organize across the globe. However, small startups and companies can and should think about how their products can contribute to harm when used outside of a U.S. and Western context. A great way to engage is to reach out to civil society organizations that conduct extensive research on the harms of technology, ask them for advice and feedback, and inquire about running co-design and co-research sessions with them.

A Human Rights Centered Design methodology also asks designers to consider the worst that could possibly go wrong with a product, and who will be most vulnerable to that error. Sarah Aoun, the director of technology at the Open Technology Fund, suggests thinking, “If there was a power switch at the top [of your product], and all of a sudden a bad actor—be it a country or government—has access to all of the data the product is gathering, what could they do with that information?” 

Facial recognition technology is already used in China to pay for subway rides: instead of swiping a ticket, a user’s face is scanned and matched in a system tied to credit card information. It could soon be implemented in London. A Human Rights Centered Design approach would ask: If facial recognition were installed across all the subway stations in a major city, what could possibly go wrong? What would happen if someone had access to all of the facial recognition data from a popular subway station? Who would be harmed the most? Do the benefits outweigh the consequences?

Suddenly using facial recognition to get through a turnstile quicker doesn’t seem like such a good idea. And that’s fine because, as designers, we can come up with a new, less harmful solution to this design problem. This is where the Human Rights Centered Design framework is key—it forces us to take stock of how this kind of technology will actually exist in society and what it will look like. 

Applying this framework will look different from company to company, and retooling a design process for a much bigger and more diverse audience can feel like a tall order. The best place for teams to start is by engaging with NGOs, civil society organizations, and the victims of misused technology, to better understand the context the product will exist within and the research that has already been done in that area. From there, a Human Rights Centered Design approach focuses teams on user agency and user privacy from the onset. Expanding the idea of who our users really are and planning for harm reduction first can make for better, safer, and more ethical technology.


Caroline Sinders is a machine-learning-design researcher, artist, and online harassment expert. For the past few years, she has been examining the intersections of technology’s impact on society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focusing on the intersections of machine learning, user research, designing for the public good, and solving difficult communication problems. As a designer and researcher, she has worked with Facebook, Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others.

