
Design like a Human in the Age of Algorithms

by Pamela Pavliscak
7 min read

“Here’s the thing.” Michael leans in closer, angling his phone toward me. “I really think about what I like because I know it might narrow my world. I try to be careful about what I do while I’m logged in too, because that becomes part of the online version of me.” As a design researcher, I try to understand our digital experience from a human point of view. I interview people. I observe what they do. I review diary entries. I look at data. Lately, people have been describing something that can best be called gaming the algorithm.

Gaming the Algorithm

People decide to click on some posts more, some posts less. They like things strategically. They instinctively open up private windows. They follow and unfollow. They block ads. They create different profiles.

Gaming the algorithm rarely involves settings, even though we have more options to participate in our own algorithms and those options are easier to find than they used to be. Facebook lets us choose the people who matter most to us, for example. Yet we are reluctant to spend time curating.

Gaming the algorithm means first inventing a story about how the algorithm works and then, based on that story, behaving differently to create a new algorithmic self. The hope is that a new experience will emerge, one that aligns with our version of our self.

The Uncanny Valley of the Algorithm

Google thinks I’m interested in Toyota and Hip Hop. Facebook lists dental floss and swans as hobbies. The data broker Acxiom thinks I shop for Collectible Antiques on a dial-up connection. Even though I find it comforting that the system doesn’t know me too well, it’s disturbing when it is so far off.

I’m not alone. People who use the devices, sites, and apps we create feel this way too. We don’t want the algorithms to know too much. And yet we take it personally when personalization gets things so wrong.

Why does this happen anyway? Why do algorithms get things so wrong?

Algorithms are backwards-facing.

Algorithms are based on past behaviors—what you liked, what you purchased, what you clicked. Making predictions from past behaviors doesn’t take into account what is essentially human. We change a little every day. We get interested in new things. We change our minds. We feel a little conflicted. Algorithms have a hard time with that.
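
To make the limitation concrete, here is a minimal sketch of a recommender that only looks backward. The click log and categories are invented for illustration; real systems are far more elaborate, but the constraint is the same: nothing outside your recorded history can surface.

```python
# A toy past-behavior recommender: it can only re-rank things
# you have already clicked on. The click log is hypothetical.
from collections import Counter

click_log = ["running shoes", "running shoes", "headphones", "running shoes"]

def recommend(history, k=2):
    """Score each category by how often it appeared in the past."""
    counts = Counter(history)
    return [category for category, _ in counts.most_common(k)]

print(recommend(click_log))  # ['running shoes', 'headphones']
# If you took up cooking yesterday, this model has no way to know.
```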

Algorithms paint in broad strokes.

Algorithms know you as a broad demographic, by gender and age at a minimum. They know a few topics you might be interested in based on your behaviors. And then they match this up against similar profiles or a predetermined profile to try to personalize the information you see. Sometimes it is close enough. Often it’s not.
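
Here is a hedged sketch of that broad-strokes matching, with invented buckets and interests. Everyone who lands in the same demographic bucket gets the same guess, which is exactly why the result is sometimes close enough and often not.

```python
# Personalization by demographic bucket: a deliberately crude sketch.
# The profiles below are assumptions made up for illustration.
PROFILES = {
    ("female", "25-34"): ["fitness", "travel"],
    ("male", "35-44"): ["cars", "hip hop"],
}

def personalize(gender, age_band):
    # Everyone in the same bucket sees the same interests.
    return PROFILES.get((gender, age_band), ["generic news"])

print(personalize("male", "35-44"))  # ['cars', 'hip hop']
```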

Algorithms have incomplete information.

My algorithmic shadow self comprises extensive information about the big purchases in my life, like my home and car. The credit agencies are good with data like that. The data points from my day-to-day life are off the mark, though. Presumably, the sites I visit could have access to some of that information, but there is still a wide gap between data points. The real me lies somewhere in that space between.

Algorithms draw from a variety of sources.

The algorithms that shape the choices we are shown in a personalized or anticipatory experience can draw from just one source—your history on a site—or many sources. More sources seem like a good idea; they render a fuller picture, after all. But they can also produce conflicts. We are complicated individuals living complicated lives, and it’s hard to come up with rules to resolve that.
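
A rough sketch of that merging problem, with hypothetical sources and a deliberately simple resolution rule, shows how quickly conflicts appear:

```python
# Merging signals from several sources. The sources, scores, and the
# tie-break rule are all assumptions for illustration; real systems
# weigh far more inputs, which is where the conflicts come from.
sources = {
    "browsing_history": {"vegetarian recipes": 0.9},
    "purchase_data": {"steak knives": 0.8},        # a gift, perhaps
    "data_broker": {"collectible antiques": 0.7},  # simply wrong
}

def merge(source_signals):
    merged = {}
    for signals in source_signals.values():
        for topic, score in signals.items():
            # Crude rule: keep the highest score per topic. A rule this
            # simple cannot tell a gift from a genuine interest.
            merged[topic] = max(score, merged.get(topic, 0.0))
    return sorted(merged, key=merged.get, reverse=True)

print(merge(sources))
# ['vegetarian recipes', 'steak knives', 'collectible antiques']
```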

Algorithms make a lot of assumptions. When we design with algorithms, the goal is to bring these assumptions closer to a nuanced understanding of an individual. It’s not so easily accomplished, though. So, how can we design to minimize the uncanny valley of personalization?

Humanizing the Algorithm

In some ways, algorithms may know our behaviors better than we know them ourselves. After all, we don’t remember every site we visit, our location a year ago today, or even the posts we made just last month. Algorithms easily collect that data and transform it into recommendations and predictions.

The problem is that we are more than just the sum of our behaviors. We don’t behave rationally, maybe not even predictably irrationally. A human presence can improve the algorithmic experience in a few ways.

Review.

Humans can review the output of the algorithm—your search results, the version of the site you see, or your feed. Facebook relies on a human “feed panel” to inform tweaks to its algorithm. The personality-profiling app Crystal lets you round out the data-based profile it creates.

Curation.

Many sites prefer human curation over algorithms. Some research suggests that human-curated content may be more engaging. Apple Music has human DJs create playlists rather than relying on algorithms the way Spotify does. Designers will certainly be tasked with figuring out how to combine the two rather than relying strictly on one or the other.
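
One possible shape for that combination, sketched with hypothetical tracks and an arbitrary split: human picks seed the list, and the algorithm fills the remaining slots.

```python
# Blending human curation with algorithmic recommendations.
# The 60/40 split and the track names are illustrative, not any
# product's actual logic.
def blend(curated, algorithmic, size=5, curated_share=0.6):
    n_curated = round(size * curated_share)
    picks = curated[:n_curated]
    # Fill the rest from the algorithm, skipping duplicates.
    picks += [t for t in algorithmic if t not in picks][: size - n_curated]
    return picks

playlist = blend(
    curated=["Track A", "Track B", "Track C"],
    algorithmic=["Track B", "Track D", "Track E", "Track F"],
)
print(playlist)  # ['Track A', 'Track B', 'Track C', 'Track D', 'Track E']
```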

Communication.

Virtual assistant apps like Pana rely on human intelligence. Why? Because humans still understand human language, especially tone, better. Humans are better at interpreting complex data that includes lots of choices and individual preferences and, for the time being, are still better at communicating it, too.

A human presence keeps the algorithm from being a little dumb about humans. It makes the experience feel considerate. But it still makes a lot of assumptions based on our algorithmic selves.

Humanizing the algorithm still doesn’t always solve for the disconnect we feel between our version of our selves and our algorithmic doppelgänger. How can we bring these different versions together into something that feels close, but not too close?

Revealing the Algorithm

Rather than inadvertently prompting people to create new mythologies about how technology works, we could design to reveal the algorithm. Not just adding human input to computer algorithms, but making the algorithm present.

What if you could play with the same algorithm that Facebook uses to analyze the emotional timbre of posts? You can. What if you could see yourself through the ads you encounter? You can do that too.

What if you knew your algorithmic self, and like that human who assists the artificial intelligence assistant, you could reflect on it and respond to it and adapt it to express who you are and who you want to be? Call it an algorithmic angel. Maybe that’s coming.

Ultimately, we want to see ourselves as the algorithm sees us, so that we have more say in how the algorithm defines us. Transparent terms and conditions would be good, but not good enough. You can certainly go into the ads panel of your Google or Facebook account and make tweaks. Owning our data is a step forward. Design thinking, in my view, will get us even further.

Cultivating Algorithmic Empathy

Recently, I was having a conversation with a fellow human as part of a research project. As I usually do, I had her start by giving me a tour on her phone. She scrolled through Facebook, and I was struck by the disconnect: here was a life radically different from my own, but also different from the person it was supposed to reflect.

I asked her how the feed matched up with how she sees herself. Together we considered whether we would want to hang out with that person. Privately, I wondered: if I had met this version of her first, before talking in person, would my impression have been different?

This realization has led me to some new design research practices.

  1. Data Role Play. Data is so abstract, algorithms rather mysterious. Role play becomes a good way to start conversations about a new empathy. Think about asking a stranger for private information. Would that conversation seem respectful or rude? Then imagine simply collecting information without that person really being aware of it, and interacting with them based on that knowledge. How does that feel?
  2. Algorithm Swap. The experience we have with our devices is deeply private. We are reluctant to hand over our phone to a friend. Perhaps this will change as we start to interact with our voices rather than our hands. For now, it feels deeply weird to spend time with another’s private algorithmic self. And yet, it can be incredibly meaningful to be exposed to this side of the person you are designing for.
  3. Data Doubles. Another core exercise is listening to the perspective individuals have on their algorithmic doppelgänger. How close is your double? Imagine a stranger only met your algorithmic double. What would that be like? What would you change to make it a more meaningful match? Is that even desirable?
  4. Algorithmic Personas. Design teams typically base personas on a combination of broad-strokes demographic categories, interviews with a small number of individuals, and specific behavioral data collected. Now we’ve started including the algorithmic shadow self to round out the stories we use to guide design.
  5. Shared Mythologies. When confronted with an algorithmic disconnect, as with any technology that doesn’t seem quite right, people will create a mythology about how it really works. We are learning to listen for these stories people tell themselves and the feelings or actions that evolve from these stories.

Algorithms will improve. Machine learning will read our micro-expressions and infer our emotions. Artificial intelligence will become more nuanced. Maybe our Internet of Things devices will know us so well, based on our past behaviors and on physiological cues about our emotions, that there will no longer be an uncanny valley of personalization.

Maybe.

Experience designers will always be able to improve algorithms though. Designing with algorithms requires a new kind of empathy. But empathy is at the core of our practice and we will continue to learn.
