
What I MEANT Was

by Siri Mehus
9 min read

To make voice technology more human, we need to create technology that supports our natural ability to build and negotiate understandings with others.

“I hate you, Siri!”

Tween rage erupts from the backseat of my car. Even now—more than three years after my once-unusual first name became every iPhone owner’s personal assistant—my initial reaction is hurt surprise: “What did I do?” Seconds later, I realize with relief that it’s not about me. My 11-year-old daughter and her friends are talking to the iPhone.

We all get frustrated, but that’s a pretty strong reaction. While expectations may have been high at first, most iPhone users have settled into a less emotional relationship with their electronic assistants. A study conducted by Blink UX (PDF) traced the process users went through during their first month with Siri: initial excitement, optimistic curiosity, frustrated exploration, and finally acceptance and habit formation.

The Problem of Mishearing

Though our skills as individuals may vary, adult humans are generally pretty good at adjusting to others in interaction, even when the ‘other’ is a disappointing smartphone assistant. Most of us also have some sense of what we can reasonably expect a machine to be able to do, based on our (perhaps rough) understanding of how the machine might be doing it. So, after a brief hopeful period, we adjusted the way we talk to Siri (or stopped using it entirely). These days, though we use our voices, we do not attempt or expect conversations. Rather, we produce “input” based on what we expect the system to be able to use, and then do what we can with what we get back.

It’s different for the kids in my backseat. Whether it’s due to a still-developing ability to adopt the perspective of a machine, which can do some things and not others, or simply to less experience with the technology, Siri’s inability to perform as a competent communication partner seems to cause them great difficulty and annoyance. And no one can express annoyance like an 11-year-old girl.

The frustration I’m hearing from the backseat is not just about being misheard or misunderstood by Siri. Sure, it’s going to get things wrong. Their frustration is mainly about the difficulty of getting things back on track when they do go wrong.

Here is a typical breakdown:

  1. Trina: How do you say, “I want the big cheeseburger” in Spanish?
  2. Siri: Here is what I found on the web.
  3. Trina: I mean, can you translate it for me?
  4. Siri: I’m sorry. I can’t find any translation services.
  5. Trina: How do I get translation services?
  6. Siri: I’m sorry. I can’t find any translation services.
  7. Trina: How do I get them?
  8. Siri: Let me think about that … Here is what I found on the web for “How do I get them?”
  9. Trina: Aaaaaah! Forget it!

At one time, I heard it estimated that Siri correctly interprets about 50% of the utterances spoken to it. This has gotten better; some users reported a dramatic improvement in speech recognition accuracy with the release of iOS 8. But that’s not the real problem. Let’s keep in mind that human beings also do not accurately hear and understand every utterance directed to them. Mishearings and conceptual mismatches are common.

The difference is that humans have a robust system for recognizing and repairing these breakdowns.

This system is part of our common heritage and embedded in our languages, interactional norms, and social institutions.

Siri, at this point, does not take advantage of this system.

What if Siri still understood only about half the time, but made it easy to clear up misunderstandings? That would bring us a lot closer to having successful conversations with a digital assistant.

Conversational Repair

Research on language and social interaction has identified methods that are used across cultures and languages to deal with the very common speech errors and misunderstandings that occur when people talk with each other.

Here are a few of those patterns, as relevant for this discussion:

First position: self-repair

This is called “first position” repair because it takes place within the same utterance as the problem that’s being repaired (the first position). It’s when we realize that we haven’t got it quite right and correct ourselves before even finishing the sentence.

  • Person A: How do I get to the ai- Sea Tac airport?
  • Person B: Probably easiest to take the light rail.

  • Person A: How do I get to the ai- Sea Tac airport?
  • Siri: I didn’t find any airports matching “Plasticraft”

Human beings use auditory and contextual cues to determine which parts of the sentence to pay attention to and filter out the rest (“How do I get to [ ] Sea Tac airport?”) – usually without even realizing that we are doing it.

Siri (and speech recognition systems in general) do not deal well with this type of repair, presumably because they attempt to understand the entire utterance as spoken.
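
Out of curiosity, here is what handling this pattern might look like in code: a minimal Python sketch, entirely illustrative and nothing like Siri’s actual pipeline, that filters a restarted fragment out of a transcript. It assumes a hypothetical recognizer that marks cut-off words with a trailing hyphen, as the transcript above does.

```python
import re

def drop_restarted_fragment(transcript: str) -> str:
    """Drop an abandoned fragment that ends in a cut-off word.

    Hypothetical input convention: the recognizer marks a cut-off
    word with a trailing hyphen ("the ai-").
    """
    # Remove the cut-off word plus the word immediately before it,
    # on the rough heuristic that the speaker restarted a phrase
    # ("...to the ai- Sea Tac airport" -> "...to Sea Tac airport").
    return re.sub(r"(\w+\s+)?\w+-\s+", "", transcript)

print(drop_restarted_fragment("How do I get to the ai- Sea Tac airport?"))
# -> "How do I get to Sea Tac airport?"
```

Real disfluency detection is much harder, of course; recognizers rarely hand you such a tidy marker, and restarts can replace more than one word. The point is only that the repair pattern is regular enough to be modeled.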

Most experienced iPhone users anticipate this problem and prevent it by stopping and starting over with a full, well-produced sentence. It is not obvious to the kids in my backseat, however, that Siri is going to have trouble with this sort of thing—after all, no one else ever does.

Second position: other-initiated self-repair

  • Anna: I’m meeting with Mark tomorrow.
  • Mark: Who?
  • Anna: I’m meeting with Mike tomorrow. Sorry! (laughs)

This type of repair happens when, in response to an utterance, a person indicates that there is some source of trouble that is preventing his or her understanding. The initial speaker then corrects the problem in their next utterance.

Repair can be initiated in a number of ways. Some responses like “Huh?” or “What?” only indicate that there is a problem and give no clue as to what it is. In the example above, the question “Who?” suggests that the problem is with the name that has been provided.

Another way of initiating this type of repair is by repeating the problematic part of the initial utterance with a questioning intonation:

  • Anna: I’m meeting with Mark tomorrow.
  • Mark: Mark?
  • Anna: I’m meeting with Mike tomorrow. Sorry! (laughs)

Siri does not do well with these, either.

  • Trina: What’s in enchilada sauce?
  • Siri: Consulting my tea leaves. Here’s what I found on the web for “What’s an enchilada sauce”
  • Trina: Your tea leaves?
  • Siri: You’re certainly entitled to that opinion.

It’s hard to say how Siri parsed “Your tea leaves?”, but it clearly was not taken as the request for clarification it was intended to be.
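
To make the idea concrete, here is a hypothetical sketch of how a dialogue manager might recognize second-position repair initiators instead of treating them as brand-new queries. The function, word lists, and threshold are all invented for illustration; a real system would also need intonation and context, not just text.

```python
import re

# Open-class repair initiators: they signal trouble without saying
# what the trouble is.
OPEN_CLASS_INITIATORS = {"huh", "what", "sorry", "pardon", "come again"}

# Deictic flips, so a user's repeat of the system's words still matches
# ("my tea leaves" comes back as "your tea leaves").
FLIP = {"your": "my", "you": "i", "yours": "mine"}

def classify_reply(reply: str, last_system_turn: str) -> str:
    text = reply.strip().rstrip("?!. ").lower()
    if text in OPEN_CLASS_INITIATORS:
        # "Huh?" / "What?": best move is to repeat or rephrase the
        # whole previous turn.
        return "repeat_last_turn"
    words = [FLIP.get(w, w) for w in text.split()]
    prev_words = set(re.findall(r"\w+", last_system_turn.lower()))
    if (reply.rstrip().endswith("?") and words
            and sum(w in prev_words for w in words) / len(words) > 0.5):
        # Partial repeat with questioning intonation ("Mark?"): the
        # repeated fragment itself is the trouble source.
        return "clarify_fragment: " + " ".join(words)
    return "new_query"

print(classify_reply("Your tea leaves?",
                     "Consulting my tea leaves. Here's what I found on the web"))
# -> "clarify_fragment: my tea leaves"
```

A system that classified “Your tea leaves?” this way could explain its odd phrase, or at least re-answer the original question, instead of offering an opinion on the user’s opinion.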

Third position: self-repair

Third position self-repair occurs when the initial speaker realizes, after hearing the other’s response, that her initial utterance was misunderstood, and attempts to clarify it.

For instance, in the first example above, in which Trina seeks a translation from Siri, Siri makes reference to translation services. In response, Trina asks how she can get them.

  • Trina: How do I get translation services?
  • Siri: I’m sorry. I can’t find any translation services.
  • Trina: How do I get them?

Trina uses a common repair mechanism: a partial repeat of the preceding utterance, with stress placed to clarify how the utterance is to be interpreted (i.e., the important part here is the getting). Siri is unable to recognize this as a repair attempt; instead, after trying to make sense of the phrase in isolation, it offers to search the web for it (a sketch of how an assistant might catch this pattern follows below).

Trina is just like, whatevs. Never mind.
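
Here is a hypothetical sketch of how an assistant might catch this kind of third-position repair: before treating “How do I get them?” as a new query, it checks whether the utterance is a reduced repeat of the user’s previous turn and resolves the trailing pronoun against it. Everything here, names included, is illustrative.

```python
# Pronouns that commonly stand in for a phrase from the previous turn.
PRONOUNS = {"them", "it", "that", "those", "this"}

def resolve_partial_repeat(utterance: str, prev_user_turn: str) -> str:
    """Expand a reduced repeat like "How do I get them?" using the
    previous user turn. Purely illustrative, not a shipping algorithm."""
    cur = utterance.rstrip("?!. ").lower().split()
    prev = prev_user_turn.rstrip("?!. ").lower().split()
    if not cur or cur[-1] not in PRONOUNS:
        return utterance  # no trailing pronoun; treat as a new query
    head = cur[:-1]
    # If everything before the pronoun repeats the start of the previous
    # turn, substitute the earlier turn's trailing phrase for the pronoun.
    if prev[:len(head)] == head and len(prev) > len(head):
        return " ".join(head + prev[len(head):]) + "?"
    return utterance

print(resolve_partial_repeat("How do I get them?",
                             "How do I get translation services?"))
# -> "how do i get translation services?"
```

With that resolution in hand, the assistant could at least recognize that it is being asked the same question again, and say so, rather than running a fresh web search on “How do I get them?”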

What’s Human?

In a way, these kids were set up. We all were. Among all the witticisms that Apple engineers programmed into Siri, the best was the practical joke of providing just enough smart and sassy answers that we believed—some of us for longer than others—that we could actually have conversations with our telephones (not just through them). We started talking to these chunks of glass and metal and expected to be understood. Hilarity, and frustration, ensued.

Google took a different approach with Google Now—emphasizing contextual awareness over chatty interaction. It’s the assistant that knows what you need before you know it yourself. The promised benefit is not that you can ask questions of your phone, but that you don’t have to.

Echo is Amazon’s recent offering in this space—currently available only in limited release. It is a speaker, not a phone, so its potential differentiators are the use cases afforded by its persistent location in the home, far-field voice recognition, and integration with streaming music services. Its dependency on a wake word (“Alexa”) makes natural conversation unlikely: Imagine an interaction in which you have to preface every comment with the name of the person you are talking to. Though a few witty responses have been programmed in (ask Alexa to “show me the money” if you’ve always wanted to hear Cuba Gooding Jr.’s role in Jerry Maguire played by your bank’s automated telephone system), chattiness does not seem to be a primary goal.

Microsoft, on the other hand, sought to imbue Cortana with the sass and humor that Apple has led us to expect from a digital personal assistant. In interviews, the company has indicated that building in “chit chat” is part of a strategy for eliciting natural language input from users, the idea being that people will be more likely to speak in a conversational manner to a digital assistant that has a personality. That natural language input will then form a rich source of data from which Cortana can learn and improve its responses.

But what about Siri? Plenty of sass, but it doesn’t seem like people are spending a lot of time chatting with her anymore. Is there reason to believe that canned responses—no matter how many—are going to elicit rich conversational input from users?

What if the effort these companies put towards giving digital assistants “personality” went instead towards giving them the ability to support evolved, instinctual human methods for making sense of one another? It’s a good bet that users would be more likely to speak informally and conversationally to a digital assistant that is capable of dealing with misunderstandings in a natural, non-frustrating way. This strategy is likely to stimulate greater amounts of more varied conversational input while also providing an improved experience for current users. Canned answers, on the other hand, are likely to elicit canned questions. (“Who’s your daddy?” “Where can I hide a dead body?”)

Doubtless, there are tough technical problems involved. Incredible advances have been made in natural language recognition and production, but there is clearly a great deal more to be done. One of the most valuable things we do as UX professionals is help clients understand where their investments can make the biggest difference. There are other ways, besides smoothly fixing miscommunication, that technologies like Siri could more closely replicate human interaction: recognizing intonational cues, resolving pronominal references, handling nonstandard accents, and distinguishing the vocal streams of separate speakers (and that’s just a start).

But few improvements offer as much immediate and long term benefit as making it easier to fix what’s gone wrong. The ability to smoothly recover from routine misunderstandings is a critical, if often overlooked, element of natural human interaction. A robust, intuitive process for conversational repair can help to:

  • reduce the cognitive load of voice interactions
  • ensure successful outcomes
  • make interactions feel more fluent and satisfying
  • elicit more natural language input, providing richer data for machine learning.

Even if most users have given up on conversations with Siri, the tech world is not giving up on natural interactions between people and computers. DARPA, for instance, recently launched a new program called Communicating with Computers (CwC) aimed specifically at making interactions with machines more like conversations between human beings.

As we move forward into a new era of natural voice interaction, let’s just keep in mind how much we stand to gain by designing machines that can use and understand the universal word: “Huh?”

 

Illustration of cheeseburger courtesy Shutterstock.

Siri Mehus
Siri is a user researcher at Blink UX in Seattle with an academic background in the study of language and social interaction. Around three years ago she considered changing her name, but decided to just go with it.
