
How Humility Helps AI Work Better with Human Users

by Nona Tepper
7 min read

Because the outcome of an AI system is dependent on the information it’s trained on, ensuring a system presents a nuanced perspective can be tricky.

In the 1950s, Alan Turing proposed a contest that would become the gold standard for measuring AI sophistication: the Turing Test, wherein a machine attempts to trick people into thinking it’s human. 

But as AI’s ability to interact with humans has progressed, so, too, has companies’ understanding of what it should feel like to talk to a robot.

“Even if you can make AI feel human, you don’t necessarily want to.” 

“Even if you can make AI feel human, you don’t necessarily want to,” said David Vandegrift, a former investor turned co-founder and CTO of an AI startup. “Because people will feel betrayed if they find that they’ve been talking to something that felt human, but wasn’t actually.”

Vandegrift’s Chicago startup, 4Degrees, leverages AI to help individuals identify their most important professional relationships. The challenge lies in designing an AI system that nudges professionals to network with the right people without being annoying or creepy about it. Networking is a practice that has long proved resistant to disruption by tech because it requires a human touch.

Networking brings up feelings of anxiety for many, Vandegrift said, which can stop someone from reaching out to their old boss to catch up over lunch, for example. It’s hard to use technology to convince someone to do something they feel fundamentally uncomfortable about. It’s also tough when a machine reveals you know fewer people than you thought you did. And who wants AI telling them their connections are shallow? 

Even building a system for predicting who would be a good professional match presents a complication. Because the outcome of an AI system is dependent on the information it’s trained on, ensuring a system presents a nuanced perspective — and not just, say, a venture capitalist’s thoughts on who’s important to know — can be tricky. 

“For the most part, people today are still building relationships the same way they were 10, or 100, or even 1,000 years ago,” Vandegrift said. “It’s a problem that people have been trying to solve for quite a while. But it’s a problem that’s proven very resilient, in terms of like, nobody’s solved it.”

In taking on the challenge, Vandegrift and his team learned some hard-earned lessons about what humans really want from AI. 

  • Even if you could build an AI that seems human, you may not want to. Many people respond poorly to being tricked by computers.
     
  • Think carefully about the “voice” and tone of your application. Don’t make your users feel like they’re being bossed around. 
     
  • Your AI will inevitably get things wrong. Interactions that emphasize humility will make the user more likely to forgive you. 
     
  • Building nuanced AI requires nuanced training data. Your models won’t incorporate perspectives they’ve never been exposed to.

NUANCED AI NEEDS NUANCED TRAINING DATA

4Degrees derives most of its insights from email and calendar data. It currently focuses on metadata, like whom you email, how often they reply and how long it takes them to do so. It also leverages a set of approximately 250 tags to comb data from users’ Twitter, LinkedIn and other sources to find the most accurate way to categorize them by profession, industry, interests and skills. 

“Based on essentially the words that they’re using, we can infer [whether they are VCs or healthtech entrepreneurs] — things like that,” Vandegrift said.
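
For illustration, here is a minimal sketch of what keyword-driven tagging in this spirit could look like. The tag names, keywords and threshold are hypothetical, not 4Degrees’ actual taxonomy; a production system would presumably learn these associations from labeled data rather than hand-written lists.

```python
# A minimal sketch of keyword-driven tagging. Tag names, keywords and the
# threshold are hypothetical illustrations, not 4Degrees' actual tags.
from collections import Counter

# Hypothetical fragment of a tag taxonomy: tag -> indicative keywords.
TAG_KEYWORDS = {
    "venture_capital": {"fund", "portfolio", "term sheet", "limited partner"},
    "healthtech": {"clinical", "patient", "telehealth"},
    "engineering": {"backend", "deploy", "latency"},
}

def infer_tags(text: str, min_hits: int = 2) -> list[str]:
    """Return tags whose keywords appear at least min_hits times in the text."""
    lowered = text.lower()
    hits = Counter()
    for tag, keywords in TAG_KEYWORDS.items():
        for keyword in keywords:
            hits[tag] += lowered.count(keyword)
    return [tag for tag, count in hits.items() if count >= min_hits]

profile = ("Raising a fund focused on telehealth; our portfolio "
           "spans clinical-stage patient platforms.")
print(infer_tags(profile))  # ['venture_capital', 'healthtech']
```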

Vandegrift — who recently published the book The Future of Business, about AI and its practical implications — said the main challenge in building an AI system is being thoughtful about the underlying data the system is trained on. Using training data that is reflective of a variety of perspectives is essential in ensuring that an AI model makes the best suggestions for everyone. 

“What data are you actually feeding into the model? Who are the people you asked, ‘Is this someone worth knowing or not?’” Vandegrift asked. “Because that’s all going to show up in the outcomes of the model.”

4Degrees has steered clear of ranking an individual’s connections because that can introduce bias into the AI system. For example, CEOs of Fortune 500 companies are important people for any venture capitalist or investor to know, Vandegrift said. But the vast majority of company heads are also white men. Training the AI system to prioritize those relationships would inadvertently cause the system to discriminate against women and people of color.   

“We’re being proactive in what we’re not developing,” Vandegrift said.  
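
One lightweight way to act on those questions is to audit where the training labels come from before fitting a model. The sketch below assumes a hypothetical labeled dataset with an annotator_role field; the data and the 50 percent threshold are invented for illustration.

```python
# Hypothetical sanity check: before training on "worth knowing" labels,
# look at who supplied them. Field names and data are invented.
from collections import Counter

labels = [
    {"target": "alice", "worth_knowing": True,  "annotator_role": "vc"},
    {"target": "bob",   "worth_knowing": True,  "annotator_role": "vc"},
    {"target": "carol", "worth_knowing": False, "annotator_role": "vc"},
    {"target": "dan",   "worth_knowing": True,  "annotator_role": "founder"},
]

role_counts = Counter(row["annotator_role"] for row in labels)
total = sum(role_counts.values())
for role, count in role_counts.most_common():
    share = count / total
    warning = "  <- dominates the label set" if share > 0.5 else ""
    print(f"{role}: {count} ({share:.0%}){warning}")
# vc: 3 (75%)  <- dominates the label set
# founder: 1 (25%)
```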

GOOD UX ISN’T ALWAYS ABOUT MAKING THINGS EASIER

4Degrees aims to change what it means to network. In the process, it has altered users’ understanding of their own connections. 

Most people think they have “hundreds or thousands” of connections in their industry, Vandegrift said. But after two years on the market, he’s convinced the reality aligns with Dunbar’s number, which posits a cognitive limit to the number of stable social relationships an individual can maintain. Most people have about 100 strong connections, he said. Beyond that, the quality of an individual’s relationships starts to falter.
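
As a rough sketch, the email metadata signals mentioned earlier (message volume, reply rate, reply latency) could be combined into a strength score, with the ranked list cut off around that limit. The weights, saturation point and cutoff below are illustrative guesses, not 4Degrees’ model.

```python
# Illustrative relationship-strength score built from email metadata.
# Weights and the Dunbar-style cutoff are guesses, not 4Degrees' model.
from dataclasses import dataclass

@dataclass
class ContactStats:
    name: str
    emails_exchanged: int     # messages sent plus received
    reply_rate: float         # fraction of your emails they answered
    median_reply_hours: float

def strength(c: ContactStats) -> float:
    volume = min(c.emails_exchanged / 50, 1.0)       # saturates at 50 messages
    speed = 1.0 / (1.0 + c.median_reply_hours / 24)  # slower replies score lower
    return 0.4 * volume + 0.4 * c.reply_rate + 0.2 * speed

contacts = [
    ContactStats("old boss", emails_exchanged=120,
                 reply_rate=0.9, median_reply_hours=6),
    ContactStats("conference contact", emails_exchanged=3,
                 reply_rate=0.2, median_reply_hours=96),
]

# Rank everyone, then keep roughly the Dunbar-sized head of the list.
strong = sorted(contacts, key=strength, reverse=True)[:100]
for c in strong:
    print(f"{c.name}: {strength(c):.2f}")  # old boss: 0.92, conference contact: 0.14
```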

Vandegrift said most of 4Degrees’ users were surprised — or insulted — when the platform first revealed how few connections they actually had.

When people inflate the size of their networks, they tend to spend less time than they need on growing their professional base. And because networking isn’t a task that requires immediate attention, it’s easy to prioritize other items instead. But Vandegrift cautions against cutting corners. For example, 4Degrees receives a lot of requests from users to build features that simply write and send emails for them — say, congratulating a CEO friend on receiving an award, or an old coworker on starting a new job.

It sounds good in theory, but Vandegrift is reluctant to try it.

“We believe that the relationship component should come from you and it should be thoughtful,” he said. 

PEOPLE DON’T LIKE BEING BOSSED AROUND BY AI

In 2018, Google unveiled its Duplex assistant at its annual developer conference. In front of a crowd numbering in the thousands, the machine called a hair salon and showcased conversational skills so strong that the receptionist on the other end of the line had no idea they were talking with a robot. Attendees of Google’s I/O conference were shocked by Duplex’s technological advances — and a little creeped out.

The machine passed the Turing Test, but was that a good thing? Google eventually committed to notifying all end users of Duplex that they were engaging with a robot.

In Vandegrift’s mind, an AI system should accomplish its task and leave people feeling comfortable after their interaction. For 4Degrees, comfort comes in the form of how it structures its suggestions and illustrates its emails. 

When the company first launched, Vandegrift said, 4Degrees’ system was designed so that, if a user failed to contact a connection within the suggested amount of time, 4Degrees would issue them a warning message, calling them out for missing a deadline the robot had imposed.

Vandegrift said 4Degrees eventually removed the feature because it made people feel bad for missing a deadline they had never really agreed to. Now, instead of admonishing users for missing an opportunity to connect, when the time comes for an individual to reach out, 4Degrees surfaces additional opportunities to call up professional connections, subtly nudging them to invite that old boss to coffee.

“We’re prioritizing updates about those people now, rather than saying, ‘It’s so bad that you haven’t reached out,’” Vandegrift said.  

 

HUMILITY WILL MAKE YOUR AI EASIER TO FORGIVE

4Degrees also frames its findings as suggestions rather than declarative statements. If the system notices a flight booked to San Francisco, for example, it will ask about the upcoming trip rather than jump straight to recommendations. Framing its asks with humility, Vandegrift said, helps users forgive the system when it inevitably gets things wrong — AI operates on educated guesses, after all.
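
One way to encode that principle is to tie phrasing to the system’s confidence in its own inference: hedge low-confidence guesses as questions, and reserve plain statements for near-certainty. The threshold and copy in this sketch are hypothetical, not 4Degrees’ implementation.

```python
# Hedge low-confidence inferences as questions instead of assertions.
# The 0.9 threshold and message copy are hypothetical.
def phrase_inference(event: str, suggestion: str, confidence: float) -> str:
    if confidence < 0.9:
        # Surface the guess as a question the user can easily correct.
        return f"It looks like {event}. Is that right? If so, {suggestion}"
    return f"{event[0].upper() + event[1:]}. {suggestion}"

print(phrase_inference(
    event="you have a trip to San Francisco coming up",
    suggestion="want to see contacts in the Bay Area worth catching up with?",
    confidence=0.7,
))
# It looks like you have a trip to San Francisco coming up. Is that right?
# If so, want to see contacts in the Bay Area worth catching up with?
```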

To Vandegrift, the key to building an AI system’s user experience is to highlight the fact that it’s not human. In the emails 4Degrees sends to users with opportunities to reach out, for example, the firm features an illustrated cartoon robot.

Vandegrift named Seattle-based Textio as an example of an AI company that is thoughtful about its user experience. Textio analyzes companies’ job postings and helps firms improve them, drawing on its own research — which found, for example, that postings containing the word “rockstar” attract half as many women candidates as ads that don’t.

After running its AI over a company’s job posting, Textio presents its suggestions to the client with an explanation of why it chose to eliminate certain words and add others. The customer decides for themselves how to use the AI’s recommendations. By explaining the research behind the system, Vandegrift said, Textio builds human trust in the machine.
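
In code, that flow might look like attaching a human-readable rationale to every suggested edit and leaving the final decision with the user. The structure below is invented for illustration and is not Textio’s actual API.

```python
# Every suggestion carries a rationale; the customer decides what to apply.
# This structure is invented for illustration, not Textio's API.
from dataclasses import dataclass

@dataclass
class Suggestion:
    original: str
    replacement: str
    rationale: str
    accepted: bool = False  # set by the customer, never by the system

edits = [
    Suggestion(
        original="We need a rockstar engineer",
        replacement="We need an experienced engineer",
        rationale=("Research found postings with 'rockstar' attract about "
                   "half as many women candidates."),
    ),
]

for edit in edits:
    print(f"Suggest: {edit.original!r} -> {edit.replacement!r}")
    print(f"Why: {edit.rationale}")
    edit.accepted = True  # e.g. toggled from a review UI, not auto-applied
```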

“You actually need to approach the design of AI systems fundamentally differently than traditional systems,” Vandegrift said. “You have to bring humility to that suggestion versus the confidence of like, ‘No, I know that.’ It’s actually a skill set that you should look for in your product manager, your designer and your engineers.” 

This piece was first published by Built In.

Nona Tepper is a freelance journalist whose work has appeared in The Washington Post, Crain’s Chicago Business, Slate, VICE, MarketWatch and many other outlets. She is a passionate reporter and editor who specializes in trend-spotting, investigative journalism and breaking news.

