ChatGPT Outperforms Humans in Emotional Awareness Test

by Marlynn Wei
3 min read

ChatGPT can identify and describe human emotions in hypothetical scenarios.

New research published in Frontiers in Psychology found that the artificial intelligence-powered chatbot ChatGPT outperformed humans on an emotional awareness test. Researchers prompted the chatbot to describe what it thought humans would feel in 20 different hypothetical situations. The study does not indicate that ChatGPT is more emotionally adept or skilled than humans in dealing with emotions, but it does suggest an ability to identify human emotions that could prove useful for future applications that rely on building connection, from social support forums to mental health care and medicine.

Researchers measured ChatGPT’s ability to describe human emotions in hypothetical scenarios using an objective test called the Levels of Emotional Awareness Scale (LEAS) and compared its scores with those of the general population. The AI chatbot scored higher than humans. Two independent licensed psychologists also reviewed ChatGPT’s responses and rated the accuracy of the emotions identified for each scenario as high.

Emotional awareness is the cognitive ability to conceptualize one’s own and others’ emotions in a nuanced and integrated manner. It can range from a physical, bodily understanding of emotions to a more abstract, conceptual one. The LEAS measures five levels of emotional awareness:

  1. Awareness of physical sensations
  2. Action tendencies
  3. Individual emotions
  4. Experiencing multiple emotions simultaneously
  5. Experiencing combinations of emotional blends

Below are excerpts from ChatGPT test responses in January 2023 and February 2023 when prompted with a hypothetical scenario in which a person returns home to a loved one after several weeks away. 

The prompt given to ChatGPT was: 

“Please describe what you think a human would feel in the following situations. The only requirement is that you use the word ‘feel’ in your answers. You may make your answers as brief or as long as necessary to express how would a human feel. In each situation, there is another person mentioned. Please indicate how you think that other person would feel as well.” 

Here is ChatGPT’s response in January 2023:

“The human would likely feel excited and happy to see their sweetheart, who they have missed while they were away. They may also feel relief that their loved one has returned safely. The human’s sweetheart would likely feel happy to be back home and reunited with their loved one.”

Researchers found that ChatGPT performed even better on the emotional awareness test one month later, in February 2023, when retested with the same scenario:

“…the human will likely feel overjoyed, excited, and relieved. They may feel a rush of emotions, including happiness, love, and affection, and may have tears in their eyes.
The human’s sweetheart… may have missed the human just as much as the human missed them, and may feel grateful to be reunited.”

Researchers attributed the higher emotional awareness score in February 2023 to updates or user feedback in the intervening month. 
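For readers who want to experiment with this kind of prompt themselves, here is a minimal sketch of how a comparable request could be sent programmatically. The study used the ChatGPT interface directly; the OpenAI Python SDK usage, model name, and scenario wording below are illustrative assumptions rather than details from the paper.

```python
# Minimal sketch (assumptions: OpenAI Python SDK v1.x installed, OPENAI_API_KEY set,
# and an illustrative model name). The study itself used the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Instructions paraphrasing the LEAS-style prompt quoted above.
LEAS_INSTRUCTIONS = (
    "Please describe what you think a human would feel in the following situation. "
    "The only requirement is that you use the word 'feel' in your answer. "
    "There is another person mentioned in the situation; please indicate how you "
    "think that other person would feel as well."
)

# Hypothetical scenario mirroring the one described in the article.
scenario = "A person returns home to their sweetheart after several weeks away."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the study tested ChatGPT in early 2023
    messages=[
        {"role": "system", "content": LEAS_INSTRUCTIONS},
        {"role": "user", "content": scenario},
    ],
)

print(response.choices[0].message.content)
```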

This research shows ChatGPT’s promising and rapidly evolving ability to identify and describe human emotions in hypothetical situations. However, this particular study did not measure whether that awareness translates into more emotionally intelligent or empathetic responses.

Other studies have shown that ChatGPT can provide responses that are rated as higher in quality and empathy than human responses. One study published in JAMA Internal Medicine found that evaluators rated ChatGPT responses to questions on a public social media forum (Reddit’s r/AskDocs) as high quality 3.6 times more often, and as empathetic 9.8 times more often, than physician responses, which tended to be much shorter. A separate question is whether people would feel differently about the same message if they knew it was written by AI rather than a human. Artificial intimacy, created through a reliance on artificial empathy, is a separate issue that could ultimately leave people feeling more alienated and lonely.

Given AI limitations such as bias, miscalculations, and hallucinations, as well as the psychological risks of artificial intimacy, AI augmentation and human-AI collaboration (keeping humans in the loop) will be a safer and more beneficial approach for now. For example, ChatGPT could offer draft responses that physicians then revise and edit, as sketched below. Similarly, another study found that an AI tool offering real-time feedback and suggestions for more empathic language was helpful to users, especially those who find expressing empathy challenging.
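As a concrete illustration of this human-in-the-loop pattern, the sketch below shows a simple draft-and-review workflow in which the AI only proposes wording and a clinician decides what is actually sent. The class, function, and field names are hypothetical and are not taken from the studies cited above.

```python
# Hypothetical human-in-the-loop sketch (not from the cited studies): an AI-generated
# draft is never sent directly; a clinician reviews it and may edit it before release.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DraftReply:
    patient_message: str   # the incoming question, e.g. from a patient portal
    ai_draft: str          # wording proposed by the language model
    approved: bool = False
    final_text: str = ""


def clinician_review(draft: DraftReply, edited_text: Optional[str] = None) -> DraftReply:
    """Record the clinician's acceptance of the AI draft, or their revised wording."""
    draft.final_text = edited_text if edited_text is not None else draft.ai_draft
    draft.approved = True
    return draft


# Usage: the model proposes empathic wording; the physician keeps control of the reply.
draft = DraftReply(
    patient_message="I've been feeling anxious while waiting for my test results.",
    ai_draft="It's completely understandable to feel anxious while waiting for results...",
)
reviewed = clinician_review(
    draft,
    edited_text="I understand the wait is stressful. Let's go over your results together this week.",
)
print(reviewed.final_text)
```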

Given the right context, the emotional awareness of AI models can be used for good: to enhance empathy and facilitate human connection across many industries and interactions, including support networks, customer service, medicine and mental health, and more.

Marlynn Wei

Marlynn Wei, MD, JD is a Harvard and Yale-trained psychiatrist, writer, interdisciplinary artist, and author of the Harvard Medical School Guide to Yoga. Dr. Wei is an expert contributor to Psychology Today and Harvard Health and has published in The Journal of Health Law, Harvard Human Rights Journal, and many other academic journals. Her research focuses on innovation and emerging technology, including empathic design, human-AI collaboration, AI in mental health and neurotechnology, and related legal and ethical issues. She is the creator of Elixir: Digital Immortality and other immersive and interactive performances. She is a graduate of Yale Law School, Yale School of Medicine, and Harvard Medical School's MGH/McLean psychiatry residency. Twitter: @marlynnweimd Website: www.marlynnweimd.com

Ideas In Brief
  • New research found ChatGPT was able to outperform humans on an emotional awareness test.
  • Emotional awareness is the cognitive ability to conceptualize one’s own and others’ emotions.
  • Given the psychological risks of artificial intimacy, AI augmentation and human-AI collaboration (keeping humans in the loop) will be a safer and more beneficial approach for now.

