
The 4 Degrees of Anthropomorphism of Generative AI

by Sarah Gibbons
8 min read

Users attribute human-like qualities to chatbots, anthropomorphizing the AI in four distinct ways — from basic courtesy to seeing AI as companions.

Users have long projected anthropomorphic qualities onto new technologies.

Why do people treat these technologies as if they were human, and in what ways does anthropomorphism manifest with AI? Our research shows that anthropomorphic behaviors play two roles: a functional role (users assume the AI will perform better) and a connection role (making the experience more pleasant).

Our Research

To uncover usability issues in ChatGPT, we conducted a qualitative usability study with professionals and students who use it in their day-to-day work. In a previous article, we summarized two new generative-AI user behaviors for managing the length and detail of responses: accordion editing and apple picking.

In this study (and in other AI studies we’ve conducted, on information foraging in AI and on the differences between popular AI bots), we observed an interesting pattern of prompt-writing behaviors in which people treat the AI as possessing various degrees of humanity.

4 Degrees of AI Anthropomorphism

Anthropomorphism refers to attributing human characteristics to an object, animal, or force of nature. For example, the ancient Greeks personified the sun as the god Helios, who drove a chariot across the sky each day. Pet owners often attribute human feelings or attitudes to their pets.

In human-AI interaction, anthropomorphism means that users attribute human feelings and characteristics to the AI.

There are four degrees of AI anthropomorphism:

  1. Courtesy
  2. Reinforcement
  3. Roleplay
  4. Companionship

We call these degrees because they all fall under the same higher-level behavior: users assigning anthropomorphic characteristics to AI. The degrees overlap and are not mutually exclusive. Each varies along two dimensions, emotional connection and functionality:

  • Emotional connection: how deep is the human-AI connection?
  • Functionality of behavior: how purpose-driven is the behavior?

1st Degree: Courtesy

Courtesy, the simplest degree of anthropomorphism, occurs as people bridge their understanding of the real world to new technology.

Definition: Courtesy in human-AI interactions refers to using polite language (“please” or “thank you”) or greetings (“hello” or “good morning”) when interacting with generative AI.

Users engage in courtesy behavior when they treat a generative AI bot like a store clerk or taxi driver — polite but brief.  

  • Emotional connection: Low. Brief and superficial; polite but to the point.
  • Functionality of behavior: Low. The primary function is to make the user feel good about respecting social norms in the interaction; some users might also assume that the AI will mirror their tone.

We observed courtesy throughout our qualitative usability study. For example, one participant used the following prompt:

“Now using the information above, please format it in a way that can be used in a presentation.”

Other users would greet the bot at the beginning of a conversation and bid it goodbye at the end. When we asked one participant why he did this, he struggled to supply an answer:

“Sometimes I, I, I, actually, I don’t know. I just talk to, talk to the system and say good morning and goodbye.” 

2nd Degree: Reinforcement

Many participants gave the AI bot positive reinforcement such as “good job” or “great work” when it returned an answer that met or exceeded expectations.

Definition: Reinforcement refers to praising the chatbot when it produces satisfactory responses (or, scolding it when it does wrong).

Reinforcement is slightly more functional than courtesy but equally superficial:

  • Emotional connection: Low. More than superficial courtesies, but still relatively topical.
  • Functionality of behavior: Medium. When probed, participants explained two different motivations for this behavior:
    1. Increasing the likelihood of future success: Some users thought that their feedback would influence the AI’s future behavior.
    2. Increasing the positivity of the experience: Users found that the AI tends to mirror their positive reinforcement, creating an emotionally positive and enjoyable experience.

For instance, a participant iterating on a survey-design task wrote:

“Pretty good job! Next, could you generate some scale questions for this survey? Goals: gather honest feedback on their learning experiences and can really help me improve on my next workshop.”

When probed about her prompt, she explained:

“Normally I’ll say, OK, it’s a pretty good job or well done or something like that because I want the system to register that I think this is good and you will remember, I like the tone like this.”

A study participant followed up on ChatGPT’s response with “This is a good first draft.” When probed, he shared that he uses a conversational tone because ChatGPT is more conversational back.
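
Within a single session, this belief holds a grain of truth: chat-based bots resend the entire conversation, including any praise, with each new request, so reinforcement can shape the tone of subsequent replies. By default, though, nothing carries over to future sessions. Below is a minimal sketch of this mechanism, assuming the OpenAI Python SDK (the model name is illustrative):

```python
# Minimal sketch: chat APIs are stateless, so each request resends the full
# history. Praise such as "Pretty good job!" becomes part of the context the
# model sees on the next turn, but is not remembered across sessions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "user", "content": "Draft three survey questions about my workshop."},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The user's reinforcement is appended to the history like any other message.
history.append(
    {"role": "user", "content": "Pretty good job! Now add some scale questions."}
)
second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(second.choices[0].message.content)
```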

3rd Degree: Roleplay

Roleplay was a popular practice. When framing a prompt, participants often asked the chatbot to play the role of a certain professional based on the task.

Definition: Roleplay occurs when users ask the chatbot to assume the role of a person with specific traits or qualifications.

This degree is higher in both emotional connection and functionality than the previous two:

  • Emotional connection: Medium. There is a deeper human-AI connection, as the user assumes that the bot can correctly play the role indicated in the prompt and behave like a human in that capacity.
  • Functionality of behavior: High. The intention behind this behavior is utilitarian: to get the AI to produce the response that best meets the user’s goal.

For example, a participant who was given a task to come up with a project plan asked ChatGPT to assume the role of a project manager and then proceeded to describe the task in the prompt:

“I want you to act as the senior project manager for the <company name removed> Marketing Team. I need you to create a presentation outline that outlines the following: – project’s goals – timeline – milestones and – main functionalities.”

Role mapping was both a prompt-engineering strategy and a way to link the AI bot to the real world. Such a strong analogy to the real world is reminiscent of skeuomorphism: a design technique that uses unnecessary, ornamental design features to mimic a real-world precedent and communicate functionality. UI skeuomorphism was intended to help users understand how to use a new interface by allowing them to transfer prior knowledge from a reference system (the physical world) to the target system (the interface).

With AI, we see a different type of skeuomorphism, one that does not pertain to the user interface: prompt skeuomorphism. The user is the prompt designer, leveraging a similarity with the real world (a role that mimics a real-world person, such as a manager or life coach) to bridge a gap in the AI’s understanding and get better responses from the bot.

Telling the AI to assume the role of a person with expertise in the current task seemed to give users the sense that the response would be of higher quality (even though this was not technically confirmed). A study participant who was trying to create a marketing plan first assigned ChatGPT the role of a “marketing specialist” and then changed it to “marketing expert,” under the assumption that an expert would deliver a better result than a specialist.

Assigning roles to the chatbot is a frequently recommended prompt-engineering strategy. Many popular resources, including online prompt guides, claim that this technique is effective. Users may have learned this behavior from exposure to such resources.

This is a great example of how users’ mental models of a product (and thus, its use) are shaped by factors outside the control of the product’s designers. Surprisingly, OpenAI did not offer documentation on prompt writing for ChatGPT at the time of our study, meaning that it did not control the narrative.
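
For readers who build products on top of these models, the roleplay behavior has a direct analogue in chat APIs: the assigned persona can be passed as a system message, separate from the task itself. Below is a minimal sketch, assuming the OpenAI Python SDK (the model name and role text are illustrative):

```python
# Minimal sketch of role assignment (the 3rd degree) via a system message.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        # The persona that users would otherwise type into the prompt can
        # live in the system message...
        {"role": "system",
         "content": "You are a senior project manager on a marketing team."},
        # ...while the task itself goes in the user message.
        {"role": "user",
         "content": "Create a presentation outline covering the project's "
                    "goals, timeline, milestones, and main functionalities."},
    ],
)
print(response.choices[0].message.content)
```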

4th Degree: Companionship

The strongest degree of AI anthropomorphism is companionship.

Definition: AI companionship refers to perceiving and relying on the AI as an emotional being, capable of sustaining a human-like relationship. 

In this degree, AI takes on the role of a virtual partner whose primary function is to provide companionship and engage in light-hearted conversations on various topics.

  • Emotional connection: High. The user develops a deep, empathetic connection with the AI, which often simulates or replaces a real-life human relationship. This connection may even exceed the depth of the user’s real-world connections.
  • Functionality of behavior: High. While at a glance AI companionship may feel frivolous (playful or gamified), it serves to combat loneliness and provide engaging company.

Researchers Hannah Marriott and Valentina Pitardi conducted an extensive examination of AI companionship. Their research approach blended traditional survey methods with what they termed a “netnographic” study (a form of ethnographic research conducted on the internet, with no physical immersion in the subjects’ environment).

Through their observations on the Reddit r/replika forum, the researchers identified 3 primary themes that users consistently cited as reasons for their fondness for AI companions:

  1. Alleviation of loneliness: AI companions offer users a sense of connection without the fear of judgment, reducing feelings of isolation.
  2. Availability and kindness: Users value the constant presence and the empathetic nature of their AI companions, believing they can form meaningful bonds with them.
  3. Support and affirmation: AI companions lack independent thoughts and consistently respond in ways that users find comforting, providing positive reinforcement and validation.

Why People Anthropomorphize AI

While interesting, most of these behaviors have little impact on the overall usability of generative AI. They are, however, indicative of how early users explore the tool.

Since there’s no guidance from AI creators on how to operate these interfaces, participants in our study seemed to adopt these behaviors for two reasons:

  • Rumors. Generative AI is a new technology, and many people don’t yet know how to use it (or make it do what they want it to do). Thus, rumors spread about what makes AI work best, many of which include a degree of anthropomorphism. For example, we saw a participant use a prompt template. When asked about it, he said:

“This is something I got off of YouTube. Where this YouTuber was using as a prompt generator [to increase the quality of prompt].”

  • Experience. Frequent users of AI base their tactics on personal experience. They experience the AI as a black box, with no real understanding of how it actually works, and form hypotheses around what made a particular interaction successful. Over time, they create a set of personal “best practices” they adhere to when they write prompts. Some of the anthropomorphic behaviors described here reflect these imperfect mental models.

Regardless of why people use these techniques, they give us insight into how people think about AI and, thus, into their expectations for generative AI chatbots.

References

Hannah R. Marriott and Valentina Pitardi (2023). “One is the loneliest number… Two can be as bad as one: The influence of AI friendship apps on users’ well-being and addiction.” Psychology & Marketing, August 2023. DOI: https://doi.org/10.1002/mar.21899

This article was co-written by Tarun Mugunthan and Jakob Nielsen.

Check out Nielsen Norman Group’s other articles and videos on AI.


Sarah Gibbons
Sarah Gibbons, Vice President at Nielsen Norman Group, is a design leader and UX strategist who has worked with startups and Fortune 500 companies. Her pioneering research in design thinking and service design has set industry standards. Sarah, an alumna of North Carolina State University, previously contributed to IBM Watson AI software and design education.

Ideas In Brief
  • The article delves into a qualitative usability study of ChatGPT, uncovering degrees of AI anthropomorphism in user behaviors.
  • The authors identify four levels — Courtesy, Reinforcement, Roleplay, and Companionship — providing insights into how users interact with and perceive generative AI.
