
Conscious AI models?

by Daniel Godoy

If it looks like a duck…

A few days ago, the Internet was taken over by countless tweets, posts, and articles about Google’s LaMDA AI being conscious (or sentient) based on a conversation it had with an engineer. If you want, you can read it here.

If you read it, you will also realize that it certainly looks like a dialog between two people. But appearances can be deceiving…

What IS LaMDA?

The name stands for “Language Model for Dialogue Applications”. It’s yet another massive language model trained by Big Tech to chat with users, but it’s not even the latest development. Google’s blog has an entry from more than one year ago called “LaMDA: our breakthrough conversation technology”. It’s a model built using Transformers, a popular architecture used in language models. Transformers are simple, yet powerful, and their power comes from their sheer size.

LaMDA has 137 BILLION parameters, and it was trained on 1.5 TRILLION words (for more implementation details, check this post).

To put it in perspective, and give away my age, the Encyclopaedia Britannica has only 40 million words in it. So, LaMDA had access to the equivalent of 37,500 times more content than the most iconic encyclopaedia.
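
For the curious, here is the back-of-the-envelope arithmetic behind that figure, using the numbers quoted above:

```python
lamda_training_words = 1.5e12   # ~1.5 trillion words in LaMDA's training data
britannica_words = 40e6         # ~40 million words in the Encyclopaedia Britannica
print(lamda_training_words / britannica_words)  # 37500.0
```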

In other words, this model had access to pretty much any kind of dialog that has ever been recorded (in English, that is), and it had access to pretty much every piece of information and knowledge produced by mankind. Moreover, once the model is trained, it will forever “remember” everything it “read”.

Is it impressive? Sure! Is it an amazing feat of engineering? Of course! Does it produce human-like dialog? Yes, it does.

But, is it conscious, or sentient? Not really…

“Why not?! If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck, right?”

Yes, it is probably a duck. But that’s because we know what a duck is (and LaMDA knows it too, or at least it can talk about ducks just like you and me).

What IS consciousness?

Unfortunately, the definition of consciousness is not as obvious as the definition of a duck. And, before the late 80s, no one was even trying to define it. The “C-word” — consciousness — was banned from scientific discourse.

Luckily, the scientific community forged ahead, and nowadays it has a much better understanding of consciousness. But the everyday usage of the term consciousness hasn’t changed, and it still covers a lot of different and complex phenomena.

What does “being conscious” usually mean when one uses the term? For one, being awake — “she remained conscious after the crash”. Alternatively, being aware — “he is conscious of his faults”.

In order to properly study a topic, though, one must properly define the object of study. And that’s what Stanislas Dehaene does in his book, “Consciousness and the Brain”, from which I am drawing the majority of the ideas in this section. The author distinguishes three concepts:

  1. vigilance: the state of wakefulness;
  2. attention: “focusing mental resources on a specific piece of information”;
  3. conscious access: “the fact that some of the attended information eventually enters our awareness and becomes reportable to others” (highlight is mine).

Hold this thought: with conscious access, the information becomes reportable. We’ll get back to this soon!

For Dehaene, vigilance and attention are necessary, but not sufficient, and only conscious access qualifies as genuine consciousness.

Conscious Access

It looks trivial, right? You see something, say, a flower, and you instantly become aware of its properties: color, smell, shape, and so on.

So, you can safely say that you’re aware of everything your eyes see, right?

Please watch the short video below (and don’t scroll down past the video, otherwise you’ll spoil the answer!):

Wait for it…

So, did you see the gorilla in the video?

“Gorilla?! What are you talking about?”

Most people will fail to see the gorilla the first time they watch this video. So, if you didn’t see it, watch it again, and look for the gorilla in it.

What happened the first time? Do you think your eyes didn’t pick up the image of the gorilla? Is it even possible? Not really…

Even though you were not able to tell there was a gorilla in the video, the image of the gorilla was perceived by your eyes, transmitted to your brain, processed (to some extent), but, ultimately, ignored.

This is simply to say that there’s a lot going on behind the scenes, even if you’re not aware of it. As Dehaene puts it, a “staggering amount of unconscious processing occurs beneath the surface of our conscious mind”.

Reflexive Processing

This unconscious processing is reflexive in nature: whenever there’s a stimulus — an input, like the image of the gorilla — there’s processing, and an associated output, a thought, is produced. These thoughts are accessible, but not accessed, and they “lay dormant amid the vast repository of unconscious states”, as Dehaene puts it.

Does it look familiar? An input comes in, there’s processing, and an output comes out. That’s what a model does!

What happens if you ask a question to a language model? Roughly speaking, this is what happens (a toy sketch in code follows the list):

  1. It will parse your sentence and split it into its component words;
  2. Then, for each word, it will go over a ginormous lookup table to convert each word into a sequence of numbers;
  3. The sequence of sequences of numbers (since you have many words) will be processed through a ton of arithmetic operations;
  4. These operations result in a probability for every word in the vocabulary, so the model can output the most likely word at each step.
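
To make those four steps concrete, here is a minimal, purely illustrative sketch. The tiny vocabulary, random “embeddings”, and single weight matrix below are made-up stand-ins for this post, nothing like LaMDA’s actual architecture:

```python
import numpy as np

# Toy stand-ins: a tiny vocabulary and randomly initialized "parameters".
# Everything here is made up for illustration only.
vocabulary = ["roses", "are", "red", "violets", "blue", "yellow", "banana"]
word_to_id = {word: i for i, word in enumerate(vocabulary)}

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocabulary), 8))  # step 2: the lookup table (word -> numbers)
weights = rng.normal(size=(8, len(vocabulary)))     # stand-in for the model's parameters

def next_word(prompt: str) -> str:
    tokens = prompt.lower().split()                    # step 1: split the sentence into words
    ids = [word_to_id[t] for t in tokens if t in word_to_id]
    vectors = embeddings[ids]                          # step 2: each word becomes a row of numbers
    hidden = vectors.mean(axis=0)                      # step 3: a (vastly simplified) pile of arithmetic
    logits = hidden @ weights
    probs = np.exp(logits) / np.exp(logits).sum()      # step 4: a probability for every word in the vocabulary
    return vocabulary[int(np.argmax(probs))]           # report the most likely one, no questions asked

print(next_word("roses are red violets are"))  # prints whichever word the random numbers happen to favor
```

A real model swaps the random matrices for billions of trained parameters and the averaging for stacks of Transformer layers, but the reflex is the same: a prompt goes in, the statistically most likely continuation comes out.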

As you can see, there’s no reasoning of any kind in these operations. They follow an inexorable logic and produce outputs according to the statistical distribution of sequences of words the model had access to during training (those 1.5 trillion words I mentioned before).

Once an input is given to it, the model is compelled to produce an output, in a reflexive manner. It simply cannot refuse to provide an answer; it has no volition.

In our brains, an output — a thought produced by some reflexive processing — that fails to enter our awareness cannot possibly be reported.

But, in a language model, every output is reported!

And THAT’s the heart of the question!

To Report or Not To Report

In the human brain, attended information can only be reported IF we have conscious access to it, that is, if it has entered our awareness.

Well, since we’re used to communicating with other humans, it’s only logical that, if someone is reporting something to us, they MUST have had conscious access to it, right?

But what about a language model? We can communicate with it, and that’s amazing, but just because the model is reporting something to us, it does not mean it has conscious access to it. It turns out the model cannot help itself: it must report at all times, because that is what it was built for!

Language models used to be simpler: you’d give one some words, like “roses are red, violets are…”, and it would reply “blue” just because it is, statistically speaking, more likely than “red”, “yellow”, or “banana”. These models would stumble, badly, when prompted with more challenging inputs. So, back then, no one would ever question whether these models were conscious or not.

What changed? Models got so big, training data got so massive, and computing power got so cheap that it is now relatively easy to produce outputs that really look like they were produced by an intelligent human. But they are still models, and we know how they were trained, so why are we asking ourselves whether they have become conscious?

My guess here is because we would like them to be conscious!

Steve, the Pencil

I am a big fan of the series “Community”. In the first episode, Jeff Winger gives a speech that seems quite appropriate in the context of our discussion here:

“… I can pick up this pencil, tell you its name is Steve, and go like this (breaks the pencil in half, people gasp) and part of you dies a little bit on the inside because people can connect with anything. We can sympathize with a pencil…” (highlights are mine)

And that’s true: people can connect with anything, and people want to connect with others, even language models. So, it shouldn’t be surprising that we stare, marveling, at our own creation, and wonder — because it feels good.

And that’s actually a good thing!

A sophisticated language model can be used to address loneliness in the elderly, for example. People can, and will, connect with the model, and treat it as if it were a real person, even if the model itself is not a conscious entity. The applications, both good and bad, are endless.

At this point, you’re probably asking yourself: what would it take for a model to actually be conscious, according to the latest scientific criteria?

Autonomy

If I had to summarize it in one word, it would be that: autonomy.

Unlike any language model, the human “brain is the seat of intense spontaneous activity” and it is “traversed by global patterns of internal activity originated from neurons’ capacity to self-activate in a partially random fashion” (highlights are mine).

This spontaneous activity gives rise to a “stream of consciousness”, described by Dehaene as an “uninterrupted flow of loosely connected thoughts, primarily shaped by our current goals, and occasionally seeking information from the senses”.

The brain is constantly generating thoughts by itself, processing them, and mixing them with external inputs received through our senses, but only a tiny minority of them ever enters our awareness.

The role of consciousness, according to Dehaene, is to select, amplify, and propagate relevant thoughts. The thoughts that “make it” are “no longer processed in a reflexive manner, but can be pondered and reoriented at will”; they can be part of purely mental operations, completely detached from the external world, and they can last for an arbitrarily long duration.

I’m sorry, but our current language models do not do any of these things…

Final Thoughts

This is not an easy topic, and my line of argument here is heavily based on Stanislas Dehaene’s definition of consciousness, as it seems to be the most scientifically sound definition I have found.

In the end, it all boils down to how you define the duck.

Finally, if you find it hard to believe that your brain is running multiple parallel processes without you even realizing it, watch this video — you’ll be surprised!


Daniel Voigt Godoy is a data scientist, developer, speaker, writer, and teacher. He is the author of the “Deep Learning with PyTorch Step-by-Step” series of books, and for several years he taught machine learning and distributed computing technologies at Data Science Retreat, the longest-running Berlin-based bootcamp, helping more than 150 students advance their careers. He has been a speaker at the Open Data Science Conference since 2019, delivering PyTorch and Generative Adversarial Networks (GANs) workshops for beginners. Daniel is also the main contributor to HandySpark, a Python package developed to allow easier data exploration using Apache Spark. His professional background includes 20 years of experience working for companies in several industries: banking, government, fintech, retail, and mobility. He won four consecutive awards (2012, 2013, 2014, 2015) at the prestigious Prêmio do Tesouro Nacional (Brazilian National Treasury Award).

Ideas In Brief
  • The author explains what LaMDA and consciousness are, and how they relate.
  • While exploring conscious AI models, there are a few things that need to be considered:
    • Conscious Access
    • Reflexive Processing
    • The Question of Reporting
    • Autonomy
