
Designing for AI Hallucination: Can Design Help Tackle Machine Challenges?  

by Hjortur Hilmarsson
5 min read

Fascinating advancements in the field of Artificial Intelligence (AI) never cease to amaze us. Yet, as intelligent as they are, AI tools do make things up from time to time. Chances are, you may have already encountered what’s known as AI hallucinations: a phenomenon where a large language model (LLM), often a generative AI tool, presents false information as fact despite lacking the knowledge to answer with certainty. In other words, the AI can say nonsensical things while giving the user the impression that it knows what it’s talking about.

AI’s quirky side has real-life implications

AI hallucinations can take many forms. In some situations, the AI may generate a sentence that contradicts the previous one, or produce factual contradictions where it presents false information as authentic. Sometimes there are calculation hallucinations involving simple arithmetic errors, and sometimes it even messes with the sources: ever clicked on one of those extremely convincing links presented as a source, only to realize it doesn’t exist? At times, these mishaps are easy to spot; at other times, not so much.

The cause of hallucinations could be anything from insufficient training data to a lack of context provided by the user, and the implications range from the spread of misinformation to serious consequences such as medical misdiagnosis. Researchers have discovered that LLM hallucinations can even be exploited by bad actors to disseminate malicious code packages among unsuspecting software developers. Needless to say, it’s an important problem to tackle.

Where does design fit in here?

While many solutions that aim to prevent such hallucinations point towards the use of high-quality training data, data templates, or prompt engineering, there’s a lot that design could do to let users decide how much trust they can place in AI. That is to say, design can help by visually showing how certain the AI is about the information it provides: the AI’s confidence level, if you will. This involves clarifying whether the AI is relying on verified data or merely attempting an answer without certainty, thereby minimizing the risk of false information.
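Where might such a confidence level come from? Many LLM APIs expose per-token log-probabilities, and one rough proxy is to average the corresponding token probabilities into a single score the interface can display. This is only a sketch of that idea; the function name and the averaging approach are illustrative assumptions, not a standard or endorsed measure of factual accuracy:

```python
import math

def confidence_from_logprobs(token_logprobs):
    """Average per-token probability as a rough 0-to-1 confidence proxy.

    token_logprobs: list of log-probabilities, one per generated token,
    as returned by many LLM APIs. An empty list yields 0.0.
    """
    if not token_logprobs:
        return 0.0
    # Convert each log-probability back to a probability, then average.
    probs = [math.exp(lp) for lp in token_logprobs]
    return sum(probs) / len(probs)
```

A score like this could then drive any of the visual treatments discussed below, though it reflects the model’s fluency more than its factual reliability, which is exactly why the interface should frame it as a hint rather than a guarantee.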

In this context, let’s think of design not as a solution to prevent AI hallucinations but rather as something that gives the user a considerable degree of control over what they take away from their interactions with AI, a tangible manifestation of reliability. Besides, good design, being intuitive and resourceful, acts as the much-needed human intervention, as digital interactions are increasingly becoming more algorithmic and machine-like. 

How can design help with the hallucination problem? 

The power of design is such that a symbol can speak a thousand words; you just have to be smart with it. One may wonder how exactly design can help make our interactions with AI-powered tools better, or in this case, how design can help with AI hallucinations in particular. This idea can be best illustrated with a few examples as we go along. 

Every time we enter a prompt into a generative AI system like ChatGPT, there is nothing to reassure us that the results are indeed valid or appropriate. If we are venturing into unfamiliar territory, our lack of expertise in the topic can make it worse. Many questions cloud our understanding: how accurate is this data? Can I trust it? Is it based on verified sources? Is the AI making this up? According to a study by Tidio, almost one-third (32%) of users spot AI hallucinations by relying on their instincts, while 57% cross-reference with other resources. Isn’t it ironic that we seek the system’s help to obtain a piece of information, and then have to verify that what it gives back makes sense? That’s double the effort, and that’s not helpful.

Now, imagine if every time ChatGPT generated a response to our prompt, it came with a progress bar or a meter to visually convey the AI’s confidence level. A full meter for high confidence, a partially filled one for a lower level of confidence, you get the idea. Something like this allows users to easily gauge the certainty of the generated response through a simple visual representation. This is only one of the numerous ways in which design can be employed to convey the AI’s confidence level to the user. 
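In its simplest, text-only form, such a meter could be sketched as follows; the function and its formatting are illustrative assumptions, not part of any existing product:

```python
def confidence_bar(confidence, width=10):
    """Render a 0-to-1 confidence value as a simple text meter."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    # Number of filled segments, proportional to the confidence score.
    filled = round(confidence * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {confidence:.0%}"
```

So a high-confidence answer might be accompanied by `[##########] 100%`, while a shakier one shows `[#####-----] 50%`, letting the user calibrate their trust at a glance.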

Icons or symbols can also be utilized to represent different levels of confidence: a checkmark, for instance, could indicate high confidence, whereas a cross mark could imply low confidence. Caution or warning symbols could effectively alert users to exercise care while using the information; the AI could be hallucinating, and now you know.
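A mapping like that reduces to a few thresholds. The cutoff values below are arbitrary assumptions chosen for illustration; in practice they would need testing with real users and real confidence data:

```python
def confidence_icon(confidence):
    """Map a 0-to-1 confidence score to a status symbol."""
    if confidence >= 0.8:
        return "✓"  # high confidence
    if confidence >= 0.5:
        return "⚠"  # medium confidence: use with caution
    return "✗"      # low confidence: possible hallucination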

Effective use of colors is another application of visual design that could offer an intuitive way for users to assess the reliability of the AI’s responses. Think of how we associate green with progress and positive outcomes, and how red demands our attention to matters of importance. For the in-between, there are colors like yellow and orange. In short, color coding is an inherently efficient way to convey crucial information without saying much. 
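Following the same traffic-light intuition, a color treatment could be sketched like this; again, the thresholds and color names are illustrative assumptions, and in a real interface color should never be the only cue, for the sake of color-blind users:

```python
def confidence_color(confidence):
    """Map a 0-to-1 confidence score to a traffic-light style color name."""
    if confidence >= 0.8:
        return "green"   # reliable: proceed
    if confidence >= 0.5:
        return "yellow"  # uncertain: double-check
    return "red"         # unreliable: verify elsewhere
```

Pairing the color with an icon or label keeps the signal accessible while preserving the at-a-glance quality that makes color coding so efficient.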

The potential of design in tackling AI-related challenges

AI-related threats are often overstated; technology itself is not the primary issue, as it’s human intent and actions that create the bigger challenges. Artificial Intelligence is, after all, a tool developed and used by human beings, fundamentally to catalyze productivity and efficiency. Its applications are products of human intent and intellect. Now, consider the undeniable influence of design in our everyday lives: the way we feel, think, and make decisions. As road users, we take cues from traffic signals; as customers, we derive meaning from the packaging of products; the list goes on. As Paul Rand rightly put it, “Everything is design. Everything!” Design is an inextricable part of our lives, and we cannot underestimate its potential to guide human actions in interactions with advanced technologies; AI is no exception.

Hjortur Hilmarsson

Hjortur Hilmarsson is the CEO and Co-Founder of the global design and technology studio 14islands. He is a creative leader in the digital industry, helping companies of all sizes grow and form strong relationships with their audience. Hjörtur is also a Program Director at Harbour Space University in Barcelona, where he teaches about design and technology. In addition to his professional commitments, Hjörtur is a devoted nature lover, CrossFit enthusiast, and lifelong learner about business and life.

Ideas In Brief
  • The article delves into the challenges posed by AI hallucinations, exploring their varied forms and real-world consequences.
  • The author advocates for the pivotal role of design in addressing this issue, proposing visual elements like progress bars and color coding to enhance user control and understanding in interactions with AI.

