
Augmented Reality and a Better User Experience

by Michael Marina, Olli Siebelt
5 min read

Long the domain of Tony Stark and his silver-screen pals, AR is fast becoming a reality that user experience designers need to account for.

After decades on the periphery, augmented reality finally appears poised to explode into the mainstream. Consumers will soon be dazzled by products such as the much-hyped Google Glass, the FieldTrip app, and Oculus Rift, prompting many brands to explore this strange new world for the first time.

Successful development still has its challenges, however. While Hollywood has given us some thrilling examples of the possibilities in films like Minority Report, Total Recall, and Iron Man, little substantive headway has been made in real life. Some cool work exists, but there’s been far more gratuitous AR eye candy with little utility or long-term usefulness.

To take some of the mystery out of the process, here are six key lessons for building AR apps that enhance the user experience and improve end results.

1. Categorize the Experience and Plan for It

Not all apps and their use cases are created equal. Think about the environmental conditions in which the campaign will be staged, as well as the type of interaction users will have with it. Rob Manson of AR UX has a great way of thinking about the scenarios AR users are likely to find themselves in:

  • “Public”—The whole body is involved, as are large screens and full movement of limbs and torso; think Nintendo Wii or Microsoft Kinect.
  • “Intimate”—Think of the user in front of their desktop computer with a webcam, generally sitting down with their body an average of 2-3 feet from the camera.
  • “Personal”—Using a smartphone in a public 360º space, as with AR browsers such as Layar and Junaio. Here the user will be standing up and/or walking and interacting with the environment around them.
  • “Private”—The newest category, including wearable tech: glasses like Google Glass or headgear like the Oculus Rift. This will be a personal and intimate experience that will be completely sensory and visual. In the case of the Oculus, it may completely remove the real-world stimulus from the user.

Identifying these scenarios upfront goes a long way toward getting interaction design and technical requirements sorted out before the heavy lifting starts.
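As a rough illustration, the four categories above could be captured in a small lookup table of design assumptions that the team agrees on before development begins. All field names and numbers here are illustrative, not taken from any AR SDK:

```javascript
// Sketch: encode Manson's four AR scenario categories as design
// assumptions a team can check requirements against. Distances are
// rough camera-to-user ranges in meters; all values are illustrative.
const profiles = {
  public:   { distanceMeters: [2, 5],     userIsMobile: true,  occludesRealWorld: false },
  intimate: { distanceMeters: [0.6, 0.9], userIsMobile: false, occludesRealWorld: false },
  personal: { distanceMeters: [0.3, 0.6], userIsMobile: true,  occludesRealWorld: false },
  private:  { distanceMeters: [0, 0.1],   userIsMobile: true,  occludesRealWorld: true  },
};

// Example: a "personal" campaign implies a standing, walking user,
// so interaction design must assume one-handed or hands-free input.
const scenario = profiles.personal;
```

Writing the assumptions down this explicitly makes it harder for interaction design and technical requirements to drift apart once production starts.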

2. Get Out in the Real World before Coding

The concept of “measure twice, cut once” is especially important in building AR apps. Unlike a website, which typically has a reasonably generic set of conditions under which it is viewed, the environmental conditions of an AR app are as broad as they are challenging. Start by being a location scout and collect all the details of the physical environment to be augmented. Details matter: Is there sunlight or are there shadows? Interior or exterior? Static or motion? The more devices and lighting and environmental conditions that can be tested before building starts, the better.

Also keep the user in mind when calculating all the technical details. If the campaign features “real-world” locations in public spaces, be sure to consider issues such as reception quality (nothing is worse than finding a perfect location only to discover it has zero connectivity) and safety (a post-industrial landscape may sound great in the brief, but will the audience really feel safe whipping out their expensive smartphones there?). Legwork done upfront saves headaches down the road.

3. Design for the Small Screen First

Designers may work on the latest 27″ iMac with a native resolution of 2560×1440, but chances are most users will be on something much smaller, so design for their experience. Viewing an app on a small screen means typefaces must be clear and easy to read, colors must be high-contrast, and interface elements must be positioned so they don’t block the user’s view of the world. This becomes especially important when designing for wearable displays, where there’s even less physical space to work with. The risk of users developing eye strain or fatigue from flashing or heavy-duty animations is also much greater than on a desktop.

4. Challenge Users Mentally, not Physically

User fatigue isn’t limited to eye strain. Regardless of how much people might enjoy a single experience, they won’t want to walk around with their hands up, holding a device for an extended period of time. They also shouldn’t be asked to spend the entire time staring at a small mobile screen rather than at their surrounding environment, which is not only distracting but dangerous. Users will generally tolerate a higher degree of interaction while sitting, or when holding their mobile device at chest level or lower (up to 15–20 degrees of tilt above eye level is fine), with the arm and elbow near full extension. It’s surprising how heavy a tablet can get when held outstretched for an extended time.

5. Frame Rates are as Important as Image Density

Director Peter Jackson made a big deal about moving up to 48fps in his recent films, but AR experiences can go as high as 60fps. There’s a big caveat, of course: this level of super-fast animation requires greater processing power. Keep in mind both the complexity of the 3D objects created and the frame rate at which they appear. Given the wide range of user devices, find a sweet spot. Running moderately complex polygons at anywhere from 48fps to 60fps will generally work well as long as GPU acceleration is added to the experience (it’s not recommended to go under 24fps). Taking the time to optimize the application before it goes out will ensure a better experience, regardless of the device.
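Given that sweet spot, one common tactic is to measure real frame times and step the scene’s detail level up or down to stay in the 48–60fps band. A minimal sketch, assuming a render loop that records per-frame durations; the tier names and thresholds are illustrative, matching the guidance above (never sustain under 24fps):

```javascript
// Pick a detail tier from recent frame durations (in milliseconds).
// 48+ fps: full polygon budget; 24-48 fps: reduced detail;
// under 24 fps: drop non-essential geometry and effects.
function chooseDetailLevel(recentFrameMs) {
  const avg = recentFrameMs.reduce((a, b) => a + b, 0) / recentFrameMs.length;
  const fps = 1000 / avg;
  if (fps >= 48) return "high";
  if (fps >= 24) return "medium";
  return "low";
}
```

Sampling over a window of recent frames, rather than a single frame, keeps the quality tier from flickering on momentary stalls.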

That also applies when working on the overall “feel” of the experience. Many experiences are limited to a 180º field of vision. While this can sometimes work for a simple heads-up display, most experiences should seamlessly integrate as closely as possible into the real world. That means thinking in a 3D space, even if a phone screen is flat.

6. Two Hands are Better than One

When designing a mobile AR experience for use in a public space, build a high degree of automation into the app. Users find it awkward to hold a device in one hand, look through the screen, and simultaneously tap or swipe navigation elements with the other. Letting users hold the device with two hands provides a better experience and allows them to focus on the experience rather than the device.

Utility is a core element of user experience, so forget about gimmicky AR and make the app both fun and useful. If today’s savvy, highly demanding users take the time to download your app and learn it, make it worth their time and effort. Invest in users, and they will invest back in you.

Michael Marina

Michael Marina is Director of Interactive at The Famous Group in Culver City, California, a full-service hybrid production company, handling all aspects of live action filming and editing, CG/motion graphics, digital development, and production for broadcast and the web.

Olli Siebelt

Olli Siebelt is Head of Client Solutions at The Famous Group in Culver City, California, a full-service hybrid production company, handling all aspects of live action filming and editing, CG/motion graphics, digital development, and production for broadcast and the web.

