
Looking Into the Screens of the Future

by Ken Yarmosh

Designers face alluring new challenges as users expect more sophisticated interactions with more persistent and pervasive screens.

We’re surrounded by screens. You’re looking at a screen right now. You might look into screens as much or more than you do into the faces of your loved ones, friends, and coworkers.

But this isn’t an article about the cultural and societal impacts of the increasing number of screens in our lives.

Instead, I’m going to outline five forces powering an emerging multiscreen environment, painting a picture of the new world that designers will be crafting experiences for in the not-so-distant future.

Screen Pervasiveness

When the television set was no longer a luxury item and arrived in the average household, there was one set in one room. Today, many homes have large-screen televisions in multiple rooms and smaller sets in others. Similarly, we are now seeing many more interactive screens in many more places, including the kitchen, the car, and even mom-and-pop retail locations.

That’s what screen pervasiveness is all about: the expansion of the number and variety of screens in our lives. As costs continue to decrease, it’s not uncommon to see interactive touch screens on coffee tables and alarm clocks, on maps at malls and amusement parks, and yes, even in the bathroom.

Imagine, for example, if the walls in your bathroom were actually interactive screens. After a long day, you could watch your favorite show, read through emails, and catch up on Twitter (just don’t take that FaceTime call).

Screen Persistency

I saw Steve Jobs at his last WWDC appearance, where the only part of the keynote he delivered himself was the iCloud presentation. You could tell he believed it was the future of Apple.

Apple has iCloud, Amazon has Whispersync, and Google has, well, Google. All of the major technology giants are investing in what I describe as “screen persistency”: the ability to move seamlessly from one screen to another and continue the exact same operation without interruption.

If you have a Kindle, use Netflix, run any iCloud-enabled apps, or sync Chrome with your Google account, you likely already experience screen persistency. When these tools work, they feel almost like magic. The Kindle lets us pick up any device and brings us back to our last reading location. With iMessage, all of our messages exist across our iOS and Mac devices. And Chrome Sync makes our open tabs and browsing history accessible on any platform that supports Google Chrome. These relatively simple implementations are tolerated for now, but as more and more screens appear, screen persistency will need to become more sophisticated.
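To make the Kindle example concrete, here is a minimal sketch of furthest-position sync, assuming Apple’s iCloud key-value store as the backend (NSUbiquitousKeyValueStore is a real API; the class and key scheme around it are illustrative, not how Amazon actually does it):

```swift
import Foundation

// A minimal sketch of Kindle-style reading-position sync on top of
// Apple's iCloud key-value store. The key scheme is made up for
// illustration; Amazon's actual Whispersync protocol is proprietary.
final class ReadingPositionStore {
    private let store = NSUbiquitousKeyValueStore.default

    // Record the furthest point the reader has reached on this device.
    func save(position: Double, forBook bookID: String) {
        let key = "furthestPosition.\(bookID)"
        // Only move forward, so an out-of-date device can't rewind us.
        if position > store.double(forKey: key) {
            store.set(position, forKey: key)
        }
    }

    // Read back the synced position when the book is opened on any
    // device signed into the same iCloud account.
    func position(forBook bookID: String) -> Double {
        store.double(forKey: "furthestPosition.\(bookID)")
    }
}
```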

Consider closing down your work computer at the end of the day. Instead of just accessing open tabs or seeing the same messages, you should be able to pick up completely where you left off on your mobile device, TV, or even in your car. Each device should have the same data available, the same applications installed, the same windows open, and even the same cursor position. This type of experience is why Jobs believed iCloud was critical to the future success of Apple, and it’s why all of these giants are investing in similar technology.
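What would that fuller persistency have to capture? A hedged sketch of the data involved, with every type and field hypothetical, since no platform exposes a session snapshot like this today:

```swift
import Foundation

// Hypothetical shape of a "complete" session snapshot: the data a
// device would need in order to restore another device's exact state.
struct SessionSnapshot: Codable {
    struct OpenWindow: Codable {
        let appID: String         // which application owns the window
        let documentURL: URL?     // what it has open, if anything
        let scrollOffset: Double  // how far the user has scrolled
        let cursorPosition: Int?  // down to the caret, per the ideal above
    }

    let capturedAt: Date
    let sourceDevice: String
    let openWindows: [OpenWindow]
}

// Encoding to JSON makes the snapshot portable to whatever sync
// backend (iCloud, a Whispersync-like service) carries it between screens.
func encode(_ snapshot: SessionSnapshot) throws -> Data {
    try JSONEncoder().encode(snapshot)
}
```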

Screens That Know About You

For years now, cookies have followed us around as we browse the web. In the mobile age, we’ve gone a step further: those cookies now travel in our pockets and bags with our mobile devices. Not only do our browsing habits go with us when we’re mobile, our location is now known too. Combined with data like spending habits and the apps we regularly use, the screens in our lives know us better than we know ourselves.

Google Now combines location data along with information gleaned from email to provide up-to-the-minute details about when to leave for the airport or to pick up the kids on time. It takes into account a person’s current location, the traffic patterns of the area, and even weather conditions.
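The arithmetic behind such an alert is simple even if the inputs are not. A simplified sketch, assuming travel time and weather padding arrive from live traffic and forecast services:

```swift
import Foundation

// Toy version of a "time to leave" calculation. Real inputs would come
// from traffic and weather services; here they're plain parameters.
func departureTime(appointment: Date,
                   estimatedTravel: TimeInterval,
                   weatherDelay: TimeInterval,
                   buffer: TimeInterval = 10 * 60) -> Date {
    // Leave early enough to absorb traffic, weather, and a safety margin.
    appointment.addingTimeInterval(-(estimatedTravel + weatherDelay + buffer))
}

// A 3:00 p.m. flight, 45 minutes of driving, and 10 extra minutes for
// rain, plus the default buffer, yields an alert at 1:55 p.m.
```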

It’s also increasingly common to see biometric sensors built into consumer devices. On Android, facial recognition can determine whether a device should unlock itself. And moving beyond a PIN or passcode, it would be incredibly useful to unlock a device with a simple thumbprint.
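For a sense of what thumbprint unlock looks like in code, here is a minimal sketch using Apple’s LocalAuthentication framework; the framework calls are real, while the wrapper function and prompt text are illustrative:

```swift
import LocalAuthentication

// Minimal biometric-unlock flow. Passcode fallback and detailed error
// handling are omitted for brevity.
func unlockWithBiometrics(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Confirm the device has biometrics enrolled before prompting.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your device") { success, _ in
        completion(success)
    }
}
```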

The point is that devices now know us by our habits, as well as our physical characteristics. In the future, they will share this information with other devices we own or other devices we interact with to better serve our needs.

Screens That Know About Themselves

You may have heard the term “second screen.” It’s a concept whereby a device’s function changes based on another device’s presence. Most commonly, a mobile device—normally a primary device—becomes a secondary controller when interacting with a larger screen like a television.

The reverse is also true as shown by the Nintendo Wii U. The controller is normally the secondary device but it can become a primary device when junior is kicked out of the family room and is forced to take his game elsewhere.
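As a sketch of how that role negotiation might work, assuming every type here is hypothetical (no real SDK exposes them):

```swift
// Hypothetical second-screen role negotiation: a device's role is a
// function of which peers it can currently see.
enum DeviceRole {
    case primary    // renders the main experience
    case secondary  // acts as a controller or companion
}

struct Peer {
    let name: String
    let screenDiagonalInches: Double
}

// The largest visible screen wins primary; everything else becomes a
// controller. A Wii U-style gamepad promotes itself to primary the
// moment the TV disappears from its peer list.
func role(myScreenInches: Double, visiblePeers: [Peer]) -> DeviceRole {
    let largestPeer = visiblePeers.map(\.screenDiagonalInches).max() ?? 0
    return myScreenInches >= largestPeer ? .primary : .secondary
}
```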

Consider more interesting implementations, perhaps even with a basic task like a phone call. Instead of using just the phone, the call could be passed across various devices in a home as someone walked through it. When entering the living room, the voice call would transition to video or a Google Hangout on the TV itself. When leaving the house, the call would transition back to the phone, and then seamlessly to the car audio.
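A purely hypothetical sketch of that handoff, modeled as presence-driven endpoint migration (no current platform offers this API):

```swift
// Hypothetical call-handoff flow: the active endpoint follows the
// caller's location as presence sensors report it.
enum CallEndpoint {
    case phone
    case livingRoomTV   // voice upgrades to video on the big screen
    case carAudio
}

func endpoint(for location: String) -> CallEndpoint {
    switch location {
    case "livingRoom": return .livingRoomTV
    case "car":        return .carAudio
    default:           return .phone
    }
}

// As the caller moves through the house, the call migrates:
// phone → TV (video) → back to phone at the door → car audio.
func migrate(call: inout CallEndpoint, newLocation: String) {
    call = endpoint(for: newLocation)
}
```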

Screen Interactions

Since the dawn of the personal computer age, we’ve largely used the WIMP interaction model with our digital machines: windows, icons, menus, and pointers. Touch, and specifically multi-touch, has come on strong in the last five years, but it’s not the final frontier.

Voice interaction will be the next big shift. It won’t be limited to basic searches or standard command syntax. We’ll move beyond the superficial “Create an appointment” to “Pull up my 2007 tax return and tell me how much my effective tax rate was,” or “Compare the heat index today to 1983.” And combined with the other four forces, voice becomes more powerful. My tax return information will be persistent across devices and can be authorized and accessed by my voice, as well as my wife’s or my CPA’s.
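The hard part is turning free-form utterances into structured intents. Below is a toy sketch of the target data structure; a real system would use natural-language models rather than this naive keyword matcher:

```swift
// Hypothetical intent model for free-form voice commands.
enum Intent {
    case createAppointment
    case lookupTaxReturn(year: Int)
    case unknown
}

// Naive keyword matcher, standing in for real natural-language
// understanding. Only the shape of the output matters here.
func parse(_ utterance: String) -> Intent {
    let lower = utterance.lowercased()
    if lower.contains("appointment") {
        return .createAppointment
    }
    if lower.contains("tax return"),
       let year = lower.split(separator: " ").compactMap({ Int($0) }).first {
        return .lookupTaxReturn(year: year)
    }
    return .unknown
}

// parse("Pull up my 2007 tax return") → .lookupTaxReturn(year: 2007)
```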

There are, of course, other significant advancements in how we interact with screens. For example, the Kinect shows how physical movement can be harnessed in what is sometimes described as a “natural user interface.” More advanced implementations include Oblong Industries’ g-speak, which provides three-dimensional, building-scale work environments for manipulating large data sets. Oblong actually created many of the interfaces in the movie Minority Report. If you thought those were advanced, check out what they have available commercially today.

Conclusion

The design challenge of adapting interfaces to different screen sizes will seem trivial compared to developing experiences that work across mediums and contexts. That’s why it’s an exciting time to be creating digital experiences. I believe we’re up to the task, even if we’ll no longer be able to find those brilliant ideas in our touch-enabled, voice-activated, Internet-connected showers.

Ken Yarmosh

Ken Yarmosh is the Founder & CEO of savvy apps. He is the brains behind multiple chart-topping mobile applications, with honors ranging from Apple’s prestigious Editors’ Choice to the Webby Award.

His full-service mobile agency, savvy apps, helps big brands like the NFL Players Association, as well as mobile-focused startups such as Homesnap, build their apps on iOS, Android, and other mobile platforms. Ken also speaks regularly about application design and development, as well as the future of mobile, at outlets ranging from Bloomberg TV to Google.
