
Designing Serendipity

by Kevin Gates
11 min read

How do you design interfaces that feel natural and intuitive in a world driven by AI? Gesture-based navigation, powered by artificial intelligence, is transforming the way users interact with apps and devices. From reducing cognitive load to creating deeply personalized experiences, this innovation offers exciting possibilities. It also reveals the emerging reality of modern digital product creation, where the lines between design, development, and AI are increasingly blurred. Dive into the future of AI-powered navigation and discover how it’s reshaping the digital landscape, one gesture at a time.

My bookshelves are filled with happy accidents. I’ve come to think of stumbling into ‘Aha!’ moments — where you suddenly realize a connection between ideas — as a gift. There’s something revelatory about accidental discoveries. They impart truths in a way that being told what to think never can.

An app I designed and built that reimagines browsing Wikipedia with AI-powered left and right swipes

For a while, I’ve had this vague concept in my head for an app. The idea was to employ AI-augmented left and right swipes to enable a kind of low-friction information foraging[1] through Wikipedia pages. Users would be able to skim through vast swaths of information until they stumble into something that piques their interest. Until now, it’s just been an idea.

As a designer-developer hybrid, I’ve been keen to explore approaches to designing a mobile app that is anchored around AI. (My toolkit for this project included React Native, Python, OpenAI’s API, and Figma.)

A screenshot of my design running in an iOS simulator showing key features

Design principles

From the beginning, I had a few principles I wanted the app to embody:

  • Deemphasize AI: AI should be in the background, subtly working on behalf of users. Users needn’t know about it.
  • The UI should be fast and it should feel infinite: I want users to just swipe away with little or no friction, for as long as they like, with no dead ends.
  • The AI should augment the user experience, but not dictate what the user can do: Users can browse Wikipedia in my app as they normally would, but can also enlist AI features whenever they like with a swipe.
  • The app should be healthy for users to use long-term[2]: The idea is to get users to learn things they didn’t know they wanted to learn. But the same forces that make serendipitous discoveries thrilling are likely the same ones that make conspiracy theories alluring. I want everything users find to be factual to the extent possible. Therefore, Serenwikity is a layer on top of Wikipedia, and only points to Wikipedia pages or summarizes them.

Intro, detail view, and search

Challenges

A big contradiction

There is a fundamental contradiction in the idea of Serenwikity. Design is sometimes described as being intentional about what users experience. I wanted users to experience serendipity, which inherently cannot be willed into existence; it manifests when people make a hitherto unforeseen connection between ideas.

Non-determinism

What’s more, Serenwikity revolves around prompt engineering, which is something of a dark art; LLM output is not always easy to predict. To compound the problem, each swipe would modify the next prompt. What would the user experience after two or three right swipes? What about 10? What about a combination of lefts and rights?


Simulations

Simulation one: concept validation

Because of the challenges inherent in the project, I had doubts about whether the idea would even work.

Before I wrote any code or sketched out ideas, I created a Python-based simulation to see if swiping through Wikipedia pages would yield a coherent and interesting experience. The simulation was fairly straightforward: I wrote LLM prompts for left and right swipes, then Python code to run simulations of users swiping.
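A simplified sketch of that loop looks something like this. The prompts here are placeholders, far shorter than the real ones, and the model name is illustrative:

# Simplified sketch of the swipe simulation. The prompts are placeholders
# (the real ones were much longer), and the model name is illustrative.
import random

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

RIGHT_PROMPT = (
    "The user is reading the Wikipedia page '{page}'. Name one related "
    "Wikipedia page that would be a logical next step. Reply with the "
    "page title only."
)
LEFT_PROMPT = (
    "The user is reading the Wikipedia page '{page}'. Name a broader "
    "Wikipedia page that covers its general subject. Reply with the "
    "page title only."
)

def simulate(start_page: str, swipes: int = 10) -> list[str]:
    """Simulate a user swiping at random and log the resulting path."""
    page, path = start_page, [start_page]
    for _ in range(swipes):
        # Favor right swipes 2:1 to mimic a curious user drilling in.
        template = random.choice([RIGHT_PROMPT, RIGHT_PROMPT, LEFT_PROMPT])
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": template.format(page=page)}],
        )
        page = response.choices[0].message.content.strip()
        path.append(page)
    return path

print(" -> ".join(simulate("Johann Sebastian Bach")))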

From J.S. Bach to 12-tone modern classical

One simulation was particularly encouraging. It started with a page about Johann Sebastian Bach, then right-swiped to The Well-Tempered Clavier, one of Bach’s most famous works, which was intuitive and appropriate.

The simulation then veered to a page about an Austrian pianist, Friedrich Gulda, whose 1950s recordings of The Well-Tempered Clavier are well-regarded. From there, it swiped to a contemporary pianist, and eventually to modern classical music and the 12-tone technique.

A Python simulation of left and right swipes showed the concept had potential

This is exactly what I was envisioning. It also got me to define what a “good right swipe” meant. A good right swipe should be both logical and surprising, i.e., serendipitous.


Simulation two: code performance and the user experience

I created my next simulation in React to help me explore approaches to pre-loading Wikipedia pages. I needed to figure out how to make the UI feel fast and simple to the user when, in fact, there is a lot going on in the background.

A React-based simulation of right and left swipes helped me figure out how to pre-load Wikipedia pages asynchronously
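The simulation itself was React, but the underlying pattern is easy to sketch in Python, with a sleep standing in for the network: as soon as a page is shown, kick off background fetches for both swipe directions, so the next page is usually ready before the user swipes.

# Sketch of the pre-loading pattern, with asyncio.sleep standing in for
# real network requests: fetch both swipe candidates in the background
# while the user reads, so a swipe rarely waits on the network.
import asyncio

async def fetch_page(title: str) -> str:
    await asyncio.sleep(0.5)  # stand-in for a request to Wikipedia
    return f"<page:{title}>"

async def main() -> None:
    # Start both fetches the moment the current page is displayed.
    prefetch = {
        "right": asyncio.create_task(fetch_page("Sfumato")),
        "left": asyncio.create_task(fetch_page("Renaissance art")),
    }
    await asyncio.sleep(2)  # the user reads the current page...
    page = await prefetch["right"]  # ...then swipes; the result is already here
    print(page)

asyncio.run(main())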

Designing the UX

Interaction design

Once I had confidence in my idea, I tackled swipes, which were core to the app’s concept. I started with a quick sketch that conveyed key swipe interaction sequences, then went directly to code. I was especially interested in how the swipes would feel in my hands, so I created a prototype in React Native and started experimenting and refining it.

This was my very first UI design. Beautiful, no? I wanted to get the key concepts out of my head quickly. This sketch shows a right swipe and zoom. The bottom right shows what I call an “explainer”, which shows how the current page is related to the previous one

I worked in rapid, iterative cycles. My workflow went from exploring animations and gestures in code, to sketching ideas in Figma, and back to code. This back-and-forth helped me create ergonomic interactions and explore fun concepts like having right-swipe pages zoom in and left-swipes zoom out.

React Native’s ultra-fast code builds are a powerful feature that designers should exploit. I could tweak swipe animations and test them on my phone instantly. I did a lot of experimenting around zooming in on right swipes and zooming out on left swipes
An early nav exploration mimicked dating apps, but the heart icon in the nav was problematic. What if the Wikipedia page was about, say, the Rwandan genocide? The app would be asking the user to “like” or “heart” genocide

Getting the right swipe prompt right

Once I got the app working with real data, two problems with the right swipes began to emerge. One, they tended to repeat: Leonardo da Vinci would lead to the Mona Lisa, which would lead back to Leonardo da Vinci. Two, when I added code to prevent URLs from repeating, the LLM would make up Wikipedia pages.

The solution I came up with works like this:

  • The app takes the current page, say the Mona Lisa, and extracts all of its URLs, which lead to topics like sfumato (the painting technique da Vinci used), Florence, portrait painting, etc.
  • It then takes AI-generated summaries of the previous Wikipedia pages the user has right-swiped through (from, say, the Italian Renaissance, Renaissance Art, and da Vinci) and compiles a prompt.
  • The compiled prompt basically says, “These Wikipedia page summaries are in order and create a narrative. Read through these URLs and find one that would offer a logical continuation of the summaries”. The actual prompt is much longer, but that is the gist of it (a simplified sketch follows after the image below).
Let’s say the user has swiped from da Vinci to the Mona Lisa. The app extracts all of the links from the Mona Lisa Wikipedia page and has the AI choose a link that continues the sequence. In this case, it takes the user to sfumato, a painting technique da Vinci used to create subtle gradients
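In code, the compilation step looks roughly like this. Link extraction and summarization are stubbed out, the function name is illustrative, and the real prompt is far longer:

# Simplified sketch of the right-swipe prompt compilation. Constraining
# the LLM to links actually present on the current page keeps it from
# inventing Wikipedia pages.
def compile_right_swipe_prompt(summaries: list[str], links: list[str]) -> str:
    narrative = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(summaries))
    candidates = "\n".join(links)
    return (
        "These Wikipedia page summaries are in order and create a narrative:\n"
        f"{narrative}\n\n"
        "Read through these links from the current page and find the ONE "
        "that would offer a logical continuation of the summaries. Reply "
        "with the link only.\n"
        f"{candidates}"
    )

prompt = compile_right_swipe_prompt(
    summaries=[
        "The Italian Renaissance was a period of cultural rebirth...",
        "Renaissance art revived classical forms and techniques...",
        "Leonardo da Vinci was a polymath of the High Renaissance...",
    ],
    links=[
        "https://en.wikipedia.org/wiki/Sfumato",
        "https://en.wikipedia.org/wiki/Florence",
        "https://en.wikipedia.org/wiki/Portrait_painting",
    ],
)
print(prompt)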

A moving narrative

The new right-swipe prompt creates cohesion, connecting each page to the last logically. The result is an open-ended, AI-assisted journey through connected ideas. It works something like a moving average: users can swipe right as long as they like, and the AI will sit in the background, acting as a subtle guide that weaves a constantly emerging narrative.

This set of right swipes takes the user from Back to the Future to time travel to wormholes in logical, incremental steps
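Mechanically, the moving-average effect can be as simple as a fixed-length window of recent summaries, with the oldest dropping off as new pages arrive. A sketch, not the exact implementation:

# Sketch of the moving-average idea: only the last few page summaries
# feed the next prompt, so the narrative drifts with recent swipes
# rather than staying anchored to where the user started.
from collections import deque

WINDOW = 5  # placeholder window size
narrative: deque[str] = deque(maxlen=WINDOW)

for summary in [
    "Back to the Future is a 1985 science fiction film...",
    "Time travel is the hypothetical movement between points in time...",
    "A wormhole is a speculative structure linking disparate points...",
]:
    narrative.append(summary)  # the oldest summary falls off automatically

print(list(narrative))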

Intentional narrative deviation

Another problem with right swipes arose: they could be too predictable (i.e., boring).

I introduced a second prompt that works just like the first, with one small change: it has the LLM find a logical next page that “most people would find surprising or counterintuitive”. The app swaps in this prompt at random intervals.

In this case, the app deviates and takes the user to Lisa del Giocondo (the model for the painting), then to her noble Tuscan family
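Switching between the two prompts is almost trivial in code. Here is a sketch, with the deviation rate a placeholder tuned by feel:

# Sketch of the intentional deviation: most right swipes use the
# "logical continuation" prompt, but at random the app swaps in the
# "surprising or counterintuitive" variant.
import random

DEVIATION_RATE = 0.2  # placeholder; tuned by feel

def right_swipe_instruction() -> str:
    if random.random() < DEVIATION_RATE:
        return (
            "Pick the link most people would find surprising or "
            "counterintuitive, yet still logically connected."
        )
    return "Pick the link that offers the most logical continuation."

print(right_swipe_instruction())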

From Back to the Future to… animal welfare?

The deviating swipes seemed to get the right-swipe narratives out of their ruts. They could also be entertaining: in the ’80s sci-fi classic Back to the Future, Doc Brown, a mad scientist, put his Catalan Sheepdog in his DeLorean time machine and launched it one minute into the future. Apparently, a few people in test audiences in 1985 expressed concerns about the dog’s welfare and saw the demo as a kind of animal testing. They have a point.

The DeLorean time machine from Back to the Future and its test subject. (Image source: WikiCommons)

AI-augmented wayfinding

Everything in this app orbits around the right swipe. The right swipe is the functionality that allows users to skim through vast information spaces until something strikes them. But what if the user does not like where the app is taking them? I created several features that allow users to course-correct when things are not quite right:

  • Left swipe: When a right swipe leads to something that is not intriguing, the user can swipe left and go to a broader, related subject.
  • Undo: If they swipe right and don’t like the page it takes them to, but like the narrative path they’ve been on, they can click <undo>.
  • Random: If they want something totally new, they can click the <random> button.
  • Search: If they have a subject in mind they want to explore, they can search.
Controls such as <left swipe>, <random>, and <undo> let users alter course

All of these features allow for a low-friction way to course-correct based on the user’s immediate intuition about what’s interesting to them. It’s a kind of wayfinding that is semi-free-form: it’s part AI-generated and part user-influenced.
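Under the hood, most of these controls reduce to operations on a simple history stack; a minimal sketch:

# Sketch of the wayfinding controls as operations on a history stack.
class BrowseHistory:
    def __init__(self, start_page: str) -> None:
        self.pages = [start_page]

    @property
    def current(self) -> str:
        return self.pages[-1]

    def visit(self, page: str) -> None:
        """A swipe, search, or random jump pushes a new page."""
        self.pages.append(page)

    def undo(self) -> str:
        """Step back one page while keeping the earlier narrative path."""
        if len(self.pages) > 1:
            self.pages.pop()
        return self.current

history = BrowseHistory("Leonardo da Vinci")
history.visit("Mona Lisa")
history.visit("Sfumato")
print(history.undo())  # back to "Mona Lisa"; the da Vinci path is intact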

Thoughts on design and code

I kept designing tightly coupled with coding. React Native allowed me to instantly see changes on multiple Xcode iOS simulators

When working on my self-directed projects, I find myself doing much of the design in React. Here are a few thoughts:

  • Any value designers create only reaches users through production code. Keeping design tightly coupled with code keeps the design close to what users will actually experience.
  • When designing an app with prompt engineering at the core, there is a lot of very fast back and forth between coding and design. I might alter a prompt, and then modify the UI. Keeping these iterations tight is critical.
  • Our use of the word fidelity in design is peculiar. Its literal meaning is “the degree of exactness with which something is copied or reproduced”. Going from a high-fidelity Figma file to code means the thing that delivers value to users (production code) is of lower fidelity (i.e., a copy) than the Figma file. How design manifests as production code should be our focus, not Figma files.
  • The final touches in the visual design process were always in code. I think this is the way. Designers shouldn’t create refined designs in Figma and then go through the ritual of being disappointed with the production version that reaches users.
I used Figma as a scratchpad to quickly explore design directions, but my focus was always on the final version of the design in React
I refined the visual design for the bottom nav in the code. I explored very subtle changes in transparency, border color, and icon size in React Native. This let me see the actual thing users would see. I could also evaluate the changes in several iOS simulators at once

At the threshold of change

I believe we are at the threshold of fundamental changes in how users interact with computers. The ability to explore ideas and be creative in code will be where new paradigms and UX patterns emerge.

Think about how an app like this might have been created 10 years ago, or even five. It would have required a data science team, which would have spent months ingesting data and training models. What’s more, the team would have needed to dedicate time and money to a vague concept (Wikipedia with dating-app-like features).

In other words, the app would have never existed.

In this new world we are in, designers who can code and know something about prompt engineering can think of an idea and then effectively conjure an app that would have taken a data science team months to build. It’s pretty crazy when you think about it.


Key takeaways

In building Serenwikity, I learned that the boundaries between design, engineering, and AI are increasingly blurred in interesting and exciting ways. Here are a few takeaways:

  • Simulations of the user experience are useful and should be in the UX design toolkit.
  • Designing an AI app requires tight coupling between design and code. Frameworks like React enable hot reloading, which makes design-to-code iterations nearly instant.
  • Prompt engineering could easily be called prompt design, and UX designers should learn to do it[3]. For AI apps, prompt engineering will likely sit at the center of the whole creative process for the foreseeable future.
  • Designers should code. I’ve gone back and forth on this one for years, but it feels like the tide has shifted.

For any aspiring design technologists out there, I hope my project inspires you to learn some new skills and explore your own ideas. The future of how products and apps are created is changing profoundly, and designers who code will be at the center of it.

Notes

The article originally appeared on Medium.

Featured image: AI-generated.


Kevin Gates
Kevin is a designer and technologist who can be creative in code and logical in pixels. Having worked at Google, the Obama 2012 campaign, and Pivotal, Kevin has designed greenfield products in diverse domains, including business intelligence, presidential elections, and cloud computing. He is currently focused on emerging technologies like AI, gesture recognition, and voice, and how they’re enabling fundamentally new ways humans can interact with computers.

Ideas In Brief
  • This article explores the role of AI in enhancing app navigation through gesture-based interactions, emphasizing a shift from traditional menus to intuitive, swipe-driven experiences.
  • It examines the intersection of AI and interaction design, highlighting how machine learning can support user discovery by anticipating needs and surfacing relevant content.
  • The piece critically assesses the potential of gesture-based navigation to improve accessibility, user engagement, and overall app usability, while addressing design challenges and potential pitfalls.

