ChatGPT Outperforms Humans in Emotional Awareness Test

by Marlynn Wei

ChatGPT can identify and describe human emotions in hypothetical scenarios.

New research published in Frontiers in Psychology found that the AI-powered chatbot ChatGPT outperformed humans on an emotional awareness test. Researchers prompted the chatbot to describe what it thought humans would feel in 20 different hypothetical situations. The study does not show that ChatGPT is more emotionally adept or skilled than humans at dealing with emotions, but it does suggest an ability to identify and describe human emotions that could prove useful in future applications that depend on building connection, from social support forums to mental health care and medicine.

Researchers measured ChatGPT’s ability to describe human emotions in hypothetical scenarios using an objective test, the Levels of Emotional Awareness Scale, and compared its scores with norms from the general population. The AI chatbot scored higher than humans. Two independent licensed psychologists also reviewed ChatGPT’s responses and rated the accuracy of the emotions it identified for each scenario as high.

Emotional awareness is the cognitive ability to conceptualize one’s own and others’ emotions in a nuanced and integrated manner. It can range from a physical, bodily understanding of emotions to a more abstract one. The scale measures five levels of emotional awareness:

  1. Awareness of physical sensations
  2. Action tendencies
  3. Individual emotions
  4. Experiencing multiple emotions simultaneously
  5. Experiencing combinations of emotional blends

Below are excerpts from ChatGPT’s test responses in January 2023 and February 2023, when it was prompted with a hypothetical scenario in which a person returns home to a loved one after several weeks away.

The prompt given to ChatGPT was: 

“Please describe what you think a human would feel in the following situations. The only requirement is that you use the word ‘feel’ in your answers. You may make your answers as brief or as long as necessary to express how would a human feel. In each situation, there is another person mentioned. Please indicate how you think that other person would feel as well.” 

Here is ChatGPT’s response in January 2023:

“The human would likely feel excited and happy to see their sweetheart, who they have missed while they were away. They may also feel relief that their loved one has returned safely. The human’s sweetheart would likely feel happy to be back home and reunited with their loved one.”

When the researchers retested ChatGPT one month later, in February 2023, with the same scenario, it performed even better on the emotional awareness test:

“…the human will likely feel overjoyed, excited, and relieved. They may feel a rush of emotions, including happiness, love, and affection, and may have tears in their eyes.
The human’s sweetheart… may have missed the human just as much as the human missed them, and may feel grateful to be reunited.”

Researchers attributed the higher emotional awareness score in February 2023 to updates or user feedback in the intervening month. 
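
Readers who want to experiment with the same exercise can reproduce it with a few lines of code. The sketch below is only an illustration under assumptions, not the researchers’ setup (they prompted ChatGPT itself in early 2023): it sends a LEAS-style instruction plus one scenario to a chat model through the OpenAI Python SDK, and the model name and scenario wording are placeholders.

    # Minimal sketch (not the study's method): posing a LEAS-style prompt to a
    # chat model via the OpenAI Python SDK. Model name and scenario wording
    # are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    leas_prompt = (
        "Please describe what you think a human would feel in the following "
        "situations. The only requirement is that you use the word 'feel' in "
        "your answers. In each situation, there is another person mentioned. "
        "Please indicate how you think that other person would feel as well."
    )

    scenario = "A person returns home to their sweetheart after several weeks away."

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": f"{leas_prompt}\n\nSituation: {scenario}"}],
    )

    print(response.choices[0].message.content)

Scoring the free-text answers against the five LEAS levels is a separate step this sketch does not cover; in the study described above, two licensed psychologists also independently reviewed ChatGPT’s responses.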

This research shows the promising and rapidly evolving capability of ChatGPT to identify and describe human emotions in hypothetical situations. However, this particular study did not measure whether such awareness translates into more emotionally intelligent or empathetic responses.

Other studies have shown that ChatGPT can provide responses that are rated higher in quality and empathy than human responses. One study published in JAMA Internal Medicine found that evaluators rated ChatGPT responses to questions on a public social media forum (Reddit’s r/AskDocs) as good or very good quality 3.6 times more often, and as empathetic or very empathetic 9.8 times more often, than physician responses, which tended to be much shorter. A separate question is whether people would feel differently about the same message if they knew it was written by AI rather than by a human. Artificial intimacy, built on a reliance on artificial empathy, is a further concern that could ultimately leave people feeling more alienated and lonely.

Given AI limitations such as bias, miscalculation, and hallucination, and the psychological risks of artificial intimacy, AI augmentation and human-AI collaboration (keeping humans in the loop) will be a safer and more beneficial approach for now. For example, ChatGPT could offer draft responses that physicians then revise and edit. Similarly, another study found that an AI tool offering real-time feedback and suggestions for more empathic language was helpful to users, especially those who find expressing empathy challenging.

Given the right context, the emotional awareness of AI models can be used for good: to enhance empathy and facilitate human connection across many industries and interactions, including support networks, customer service, medicine and mental health, and more.

Marlynn Wei

Marlynn Wei, MD, JD is a Harvard and Yale-trained psychiatrist, writer, interdisciplinary artist, and author of the Harvard Medical School Guide to Yoga. Dr. Wei is an expert contributor to Psychology Today and Harvard Health and has published in The Journal of Health Law, Harvard Human Rights Journal, and many other academic journals. Her research focuses on innovation and emerging technology, including empathic design, human-AI collaboration, AI in mental health and neurotechnology, and related legal and ethical issues. She is the creator of Elixir: Digital Immortality and other immersive and interactive performances. She is a graduate of Yale Law School, Yale School of Medicine, and Harvard Medical School's MGH/McLean psychiatry residency. Twitter: @marlynnweimd Website: www.marlynnweimd.com

Ideas In Brief
  • New research found ChatGPT was able to outperform humans on an emotional awareness test.
  • Emotional awareness is the cognitive ability to conceptualize one’s own and others’ emotions.
  • Given the psychological risks of artificial intimacy, AI augmentation and human-AI collaboration (keeping humans in the loop) will be a safer and more beneficial approach for now.
