
Looking Into the Screens of the Future

by Ken Yarmosh
5 min read

Designers face alluring new challenges as users expect more sophisticated interactions with more persistent and pervasive screens.

We’re surrounded by screens. You’re looking at a screen right now. You might look into screens as much as, or more than, you look into the faces of your loved ones, friends, and coworkers.

But this isn’t an article about the cultural and societal impacts of the increasing number of screens in our lives.

Instead, I’m going to outline five forces powering an emerging multiscreen environment, painting a picture of the new world that designers will be crafting experiences for in the not-so-distant future.

Screen Pervasiveness

When the television set was no longer a luxury item and arrived in the average household, there was one set in one room. Yet many homes today have large-screen televisions in several rooms and smaller ones in others. Similarly, we are now seeing many more interactive screens in many more places, including the kitchen, the car, and even mom-and-pop retail locations.

That’s what screen pervasiveness is all about: the expansion of the number and variety of screens in our lives. As costs continue to decrease, it’s not uncommon to see interactive touch screens on coffee tables and alarm clocks, on maps at malls and amusement parks, and yes, even in the bathroom.

Imagine, for example, if the walls in your bathroom were actually interactive screens. After a long day, you could watch your favorite show, read through emails, and catch up on Twitter (just don’t take that FaceTime call).

Screen Persistency

I saw Steve Jobs at his last WWDC appearance, and the only part of the keynote he gave was the iCloud presentation. You could tell he believed it was the future of Apple.

Apple has iCloud, Amazon has Whispersync, Google has, well, Google. All of the major technology giants today are investing in something I describe as “screen persistency”: the ability to move seamlessly from one screen to another and continue the exact same operation without interruption.

If you have a Kindle, use Netflix, have any iCloud-enabled apps, or use Google Sync in Chrome, you likely already experience screen persistency. When these tools work, they feel almost like magic. The Kindle lets us pick up any device and return to our last reading location. With iMessage, all of our messages exist across our iOS and Mac devices. And Chrome Sync makes all of our open tabs and browsing history accessible on any platform that supports Google Chrome. These relatively simple implementations are tolerated right now, but as more and more screens appear in our lives, screen persistency will need to become more sophisticated.

Consider shutting down your work computer at the end of the day. Instead of just accessing open tabs or seeing the same messages, you should be able to pick up exactly where you left off on your mobile device, TV, or even in your car. Each device should have the same data available, the same applications installed, the same windows open, and even the same cursor position. This type of experience is why Jobs believed iCloud was critical to Apple's future, and it's why all of these giants are investing in similar technology.
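
To make the idea concrete, here is a minimal sketch, in TypeScript, of the kind of session snapshot a persistency layer might sync between devices. The names (SessionSnapshot, resumeFrom, and so on) and the last-writer-wins merge are illustrative assumptions on my part, not how iCloud, Whispersync, or Chrome Sync actually work.

```typescript
// Hypothetical shape of a "session snapshot" a persistency layer might sync.
// Names and structure are illustrative only, not any vendor's real API.
interface SessionSnapshot {
  deviceId: string;          // device that last wrote the snapshot
  updatedAt: string;         // ISO timestamp, used to pick the newest state
  openApps: string[];        // e.g. ["mail", "browser", "editor"]
  openTabs: { url: string; scrollY: number }[];
  cursor?: { app: string; line: number; column: number };
}

// Last-writer-wins merge: the device with the newest snapshot defines the
// state every other screen resumes from. A real system would need per-field
// merging and conflict handling, but this captures the core idea.
function resumeFrom(local: SessionSnapshot, remote: SessionSnapshot): SessionSnapshot {
  return remote.updatedAt > local.updatedAt ? remote : local;
}

// Usage: when a screen wakes up, it fetches the remote snapshot and resumes
// from whichever state is newer.
const onWake = (local: SessionSnapshot, remote: SessionSnapshot) =>
  resumeFrom(local, remote);
```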

Screens That Know About You

For years now, cookies have followed us around as we browse the web. In the mobile age, we’ve gone a step further with those cookies being carried in our pockets and in our bags with our mobile devices. Not only do our surfing trends go with us when we’re mobile, our location is now known. Combined with data like spending habits and the apps we regularly use, the screens in our lives know us better than we know ourselves.

Google Now combines location data along with information gleaned from email to provide up-to-the-minute details about when to leave for the airport or to pick up the kids on time. It takes into account a person’s current location, the traffic patterns of the area, and even weather conditions.
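
Google's real models are far richer, but the underlying arithmetic can be sketched simply: departure time equals appointment time minus estimated travel time, adjusted for traffic and weather, minus a buffer. The TypeScript below is a toy illustration with invented factors, not Google Now's actual logic.

```typescript
// Toy "when should I leave?" calculation. The multipliers and buffer are
// invented for illustration; Google Now's real models are far more complex.
interface TripContext {
  appointmentAt: Date;        // e.g. a flight time pulled from an email
  baseTravelMinutes: number;  // normal drive time from the current location
  trafficFactor: number;      // 1.0 = normal, 1.5 = heavy traffic
  badWeather: boolean;        // rain or snow slows things further
  bufferMinutes: number;      // parking, security, walking to the gate
}

function suggestedDeparture(ctx: TripContext): Date {
  const weatherFactor = ctx.badWeather ? 1.2 : 1.0;  // assumed 20% slowdown
  const travel = ctx.baseTravelMinutes * ctx.trafficFactor * weatherFactor;
  const leadMinutes = travel + ctx.bufferMinutes;
  return new Date(ctx.appointmentAt.getTime() - leadMinutes * 60_000);
}

// Example: 40-minute drive, heavy traffic, rain, 30-minute buffer.
const leaveAt = suggestedDeparture({
  appointmentAt: new Date("2025-06-01T09:00:00"),
  baseTravelMinutes: 40,
  trafficFactor: 1.5,
  badWeather: true,
  bufferMinutes: 30,
});
```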

It’s also increasingly common to see biometric sensors built into consumer devices. On Android, facial recognition can be used to determine whether a device should unlock itself. And moving beyond a PIN or passcode, it would be incredibly useful to simply unlock a device with a thumbprint.

The point is that devices now know us by our habits, as well as our physical characteristics. In the future, they will share this information with other devices we own or other devices we interact with to better serve our needs.

Screens That Know About Themselves

You may have heard the term “second screen.” It’s a concept whereby a device’s function changes based on another device’s presence. Commonly, second screens relate to mobile devices—normally a primary device—becoming a secondary controller when interacting with a larger screen like a television.

The reverse is also true as shown by the Nintendo Wii U. The controller is normally the secondary device but it can become a primary device when junior is kicked out of the family room and is forced to take his game elsewhere.

Consider more interesting implementations, perhaps even with a basic task like a phone call. Instead of using just the phone, the call could be passed across various devices in a home as someone walked through it. When entering the living room, the voice call would transition to video or a Google Hangout on the TV itself. When leaving the house, the call would transition back to the phone, and then seamlessly to the car audio.
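
A handoff like that ultimately comes down to picking the best nearby screen at any given moment. The sketch below, in TypeScript with entirely hypothetical device names and ranking rules, shows one simple way such a selection might work; it illustrates the idea rather than describing any existing protocol.

```typescript
// Hypothetical sketch of presence-based call handoff: as the user moves,
// the call follows to the most capable screen that can "see" them.
// Device names and the ranking are illustrative assumptions, not a real protocol.
type Capability = "audio" | "video";

interface Screen {
  id: string;                 // "phone", "living-room-tv", "car-audio"
  capabilities: Capability[];
  nearUser: boolean;          // e.g. from proximity sensing in each room
  priority: number;           // higher = preferred when several screens are near
}

// Pick the highest-priority nearby screen; fall back to the phone so the
// call never drops while the user moves between rooms.
function pickCallTarget(screens: Screen[], fallbackId = "phone"): Screen {
  const candidates = screens
    .filter(s => s.nearUser)
    .sort((a, b) => b.priority - a.priority);
  return candidates[0] ?? screens.find(s => s.id === fallbackId)!;
}

// Walking into the living room: the TV is now near the user and outranks
// the phone, so the call upgrades to video there.
const target = pickCallTarget([
  { id: "phone", capabilities: ["audio", "video"], nearUser: true, priority: 1 },
  { id: "living-room-tv", capabilities: ["audio", "video"], nearUser: true, priority: 3 },
  { id: "car-audio", capabilities: ["audio"], nearUser: false, priority: 2 },
]);
```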

Screen Interactions

Since the dawn of the personal computer age, we’ve largely used the WIMP interaction model with our digital machines: windows, icons, menus, and pointers. Touch, and specifically multi-touch, has come on strong in the last five years, but it’s not the final frontier.

Voice interaction will be the next big shift. It won’t be limited to basic searches or standard command syntax. We’ll move beyond the superficial, “Create an appointment,” to “Pull up my 2007 tax return and tell me how much my effective tax rate was,” or “Compare the heat index today to 1983.” And with the other four factors, voice becomes more powerful. My tax return information will be persistent across devices and can be authorized and accessed by my voice as well as my wife’s or my CPA’s.

There are, of course, other significant advancements in how we interact with screens. For example, the Kinect shows how physical movement can be harnessed in what is sometimes described as a “natural user interface.” More advanced implementations include Oblong Industries’ g-speak, which provides three-dimensional, building-scale work environments for manipulating large data sets. Oblong actually created many of the interfaces in the movie Minority Report. If you thought those were advanced, check out what the company offers commercially today.

Conclusion

The design challenge of adapting interfaces to different screen sizes will seem trivial compared to developing experiences that work across mediums and contexts. That’s why it’s an exciting time to be creating digital experiences. I believe we’re up to the task, even if we’ll no longer be able to find those brilliant ideas in our touch-enabled, voice-activated, Internet-connected showers.


Ken Yarmosh

Ken Yarmosh is the Founder & CEO of savvy apps. He is the brains behind multiple chart-topping mobile applications with honors ranging from Apple's prestigious Editor's Choice to the Webby Award.

His full-service mobile agency savvy apps helps big brands like the NFL Player's Association, as well as mobile-focused startups such as Homesnap, build their mobile apps on iOS, Android, and other mobile platforms. Ken also regularly speaks about application design & development, as well as the future of mobile at outlets ranging from Bloomberg TV to Google.

