
3 Ways ChatGPT is a Lot Like Galaxy Quest

by Robb Wilson
6 min read

Discover the potential and pitfalls of generative AI against a surprising backdrop: Galaxy Quest.

Some of my early work with voice technology was as an assistant ADR editor on the film Galaxy Quest (1999). ADR stands for automated dialogue replacement, which is essentially recording additional voiceovers during post-production. Working on the film was a high note as I transitioned out of Hollywood and into the burgeoning field of experience design. Still, despite its well-deserved cult following, I don’t think about Galaxy Quest all that often. That changed when ChatGPT revealed to the world at large just how powerful a conversational interface can be.

I’ve been working with conversational AI for more than 20 years. In many ways, ChatGPT is something I’ve been waiting a long time for. It’s delivered a light-bulb moment where people everywhere are realizing how easy it can be to interact with machines using a conversational interface. This really is an inflection point in our relationship with technology. (ChatGPT amassed over a million users in its first five days, and it’s only grown from there.) Why not take a moment to learn a few lessons from a screwball sci-fi classic about how to properly leverage generative AI?

1. Like Thermians, GPT Has a Severely Limited Understanding of the World

In Galaxy Quest, a group of actors from a syndicated series that’s a lot like Star Trek encounter an alien race with a deep connection to their work. The Thermians have been streaming television content from Earth for years, believing that it’s depicting real events. They’ve modeled their entire society around the show, but are unable to contextualize what they’ve seen.

“Since we first received transmission of your historical documents, we have studied every facet of your mission and strategies,” the Thermian leader explains to the fictional crew once they’ve been brought on board a fully functional recreation of their ship, the Protector. It quickly becomes evident, however, that Thermians are unable to understand human emotion and the contextual clues that other humans pick up on right away. They are also totally unaware that the source of all their inspiration is a work of fiction.

GPT is similar. It’s been trained on a staggering amount of text from across the internet, but it doesn’t really know anything. It’s a highly predictive best-guess machine, and any grasp it seems to have on the information it’s been fed is illusory at best. While plenty of users have unearthed novel uses for ChatGPT, its primary strength is as a conversational interface. It becomes infinitely more useful as a portal into an ecosystem where other technologies and data are being sequenced to automate sophisticated processes. Without that orchestration layer, ChatGPT, like the Thermians, has a limited knowledge base.
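
If you want to picture what that orchestration layer looks like in practice, here’s a minimal sketch in Python. Everything in it (the ask_llm stand-in, the SKILLS registry) is invented for illustration, not drawn from any particular product: the model only interprets the request, while ordinary code wired to real systems does the actual work.

```python
# A minimal sketch of the orchestration idea (illustrative names only; not a
# real product's API). The model interprets the request; skills do the work.
from typing import Callable, Dict

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model (an assumption, not a real API)."""
    # A real system would send `prompt` to a model; here we fake a routing answer.
    return "check_order_status" if "order" in prompt.lower() else "small_talk"

# Skills are ordinary functions wired to real systems and data.
SKILLS: Dict[str, Callable[[str], str]] = {
    "check_order_status": lambda text: "Your latest order ships tomorrow.",  # e.g., query an order database
    "small_talk": lambda text: "Happy to help -- what do you need?",
}

def handle_turn(user_text: str) -> str:
    # 1. Use the model only to decide which skill the request belongs to.
    intent = ask_llm(f"Route this request to one of {list(SKILLS)}: {user_text}")
    # 2. Let deterministic code and real data produce the answer.
    return SKILLS.get(intent, SKILLS["small_talk"])(user_text)

print(handle_turn("Where is my order?"))
```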

2. Inefficiency and Bias Always Bubble Up, Even in Outer Space

The Protector is a state-of-the-art ship with a voice-activated computer system. Unfortunately, the system only responds to commands from Lt. Tawny Madison (Sigourney Weaver). The irony is thick. The inherent strength in conversational AI is that it enables anyone to communicate with technology, without any special training. Setting up the interface so that only one crew member can use it is almost surgically poor design.

As a result, Madison is forced to repeat everything the computer says to Commander Taggart (Tim Allen) and then repeat his response back to the computer. The reason it functions this way is so that an attractive female character has something to do. “Look, I have one job on this lousy ship,” Madison yells when one of the crew complains about her parroting behavior. “It’s stupid, but I’m going to do it!”

It’s a clever jab at the narrow ways women have been depicted in media for centuries, and points to the kinds of entrenched bias that could be extremely destructive inside of technology this powerful and pervasive. 

As Davey Alba pointed out recently in Bloomberg, “like all AI products, [ChatGPT] has the potential to learn biases of the people training it and the potential to spit out some sexist, racist and otherwise offensive stuff … [OpenAI] has attempted to bake in guardrails that ‘decline inappropriate requests’ [of the kind] that have befallen similar programs run by artificial intelligence. It won’t, for example, offer up any merits to Nazi ideology, if asked.”

These are smart steps in the right direction, but we can’t put the onus on OpenAI to fix the societal biases and horrors that AI reflects back to us. There’s also the troubling news reported recently in Time that OpenAI’s efforts to detoxify their GPT technology were outsourced to a firm that paid Kenyan laborers less than $2 per hour. If AI is going to reach its potential as a great ally to all of humanity, we’ll all have to do better.   

Other disruptive large language models (LLMs) are also bound to pop up in the coming months and years. At the same time, large numbers of individuals are going to unearth opportunities to train generative AI in novel ways. With conversational AI becoming a regular player in all of our lives moving forward, we need to find ways to work together to strip bias from our systems. That can take the form of end users reporting troubling behavior as well as fostering more diversity among the ranks of developers and programmers who are shaping this tech.

To borrow a catchphrase from the Spock-esque Dr. Lazarus (Alan Rickman): By Grabthar’s Hammer, by the Sons Of Warvan, uh, the time for this difficult work is now!

3. Outcomes Improve Once Humans Are Involved

The Thermians have recreated the Protector down to the most minute detail, but they can’t use it on their own. They need human help with that. When the crew is attempting to beam Taggart off of a hostile planet using the ship’s “digital conveyor,” a wrinkle appears. One of the Thermians reveals that “theoretically, the mechanism is fully operational, however, it was built to accommodate your anatomy, not ours.” Thus, Tech Sergeant Chen (Tony Shalhoub) has to take the reins of the dangerous and temperamental device to save his captain.

In a similar fashion, the scheme to defeat Sarris only comes together once the crew puts aside their differences and works as a team. Their ability to use creativity to solve problems is the secret sauce. All of the ship’s impressive technical abilities are useless without humans at the helm.

The same holds true for AI, which frankly can’t create its way out of a wet paper bag. Without human input, generative AI in particular has nothing to generate. This is a good thing. The key to solidifying AI as an ally (and not some dark overlord) is to always keep humans in the driver’s seat (or captain’s chair).

ChatGPT is an extraordinarily powerful interface, but it needs humans to tell it what to do. Like all AI, it needs humans to monitor its activity and make sure it isn’t heading into dangerous areas. It needs humans to act on the choices it presents.
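
One simple way to keep humans in the captain’s chair is an approval gate: the model can propose an action, but nothing runs until a person signs off. The rough sketch below uses made-up names (propose_action, execute) purely to illustrate the pattern.

```python
# A toy human-in-the-loop gate: the model may propose an action, but a person
# must approve it before anything runs. All names here are hypothetical.

def propose_action(user_request: str) -> str:
    """Stand-in for a model suggesting a next step (an assumption, not a real API)."""
    return f"issue_refund(request={user_request!r})"

def execute(action: str) -> None:
    print(f"Executing: {action}")  # in practice, call the relevant system here

def handle(user_request: str) -> None:
    action = propose_action(user_request)
    approval = input(f"AI proposes: {action}. Approve? [y/N] ")  # the human stays in the loop
    if approval.strip().lower() == "y":
        execute(action)
    else:
        print("Declined -- nothing was executed.")

handle("refund my duplicate charge")
```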

Never give up! Never surrender!

Surprisingly, the public reaction to ChatGPT has been far more frenzied and colorful than our reaction to the U.S. military confirming the actual existence of hundreds of unidentified aerial phenomena (or UFOs, colloquially). It seems like the aliens have landed, but they’re not aliens. 

Our unexpected visitor is a large language model that slurped up the internet and now looks back at us with gigantic eyes and no guile. Metaphorically, those eyes might as well be massive and almond-shaped, that head gigantic and teetering on a slender neck. We’re communing with an abstract reflection of humanity swollen with knowledge but lacking in understanding. It needs our help making forward progress.

The next logical step is putting ChatGPT to use as a front end to ecosystems built for creating and evolving elaborate process automation. This is how we pilot the mind-blowing spaceship our real-world Thermians have laid at our feet. There’s plenty of fear and uncertainty in the air, but don’t let it cloud a generational opportunity to push humanity into a new era. To quote Commander Taggart, “Never give up! Never surrender!”


Robb Wilson

Robb Wilson is the CEO and co-founder of OneReach.ai, a leading conversational AI platform powering over 1 billion conversations per year. He also co-authored The Wall Street Journal bestselling business book, Age of Invisible Machines. An experience design pioneer with over 20 years of experience working with artificial intelligence, Robb lives with his family in Berkeley, Calif.

Ideas In Brief
  • In the article, the author draws a parallel between ChatGPT and elements from the iconic film Galaxy Quest, finding remarkable similarities.
    • Just like the aliens in Galaxy Quest, GPT has learned from a massive knowledge base, but it doesn’t really know anything.
    • GPT has been exposed to some of the same biases that the film skewers. It will require a lot of dedicated effort by designers and users to strip the bias out of LLMs.
    • In the film, aliens need humans’ help. Despite their expertise in creating technical marvels, they lack the creative abilities needed to use them. The same holds true for AI: it needs humans to guide it and tell it what to do.
