
Looking Into the Screens of the Future

by Ken Yarmosh
5 min read

Designers face alluring new challenges as users expect more sophisticated interactions with more persistent and pervasive screens.

We’re surrounded by screens. You’re looking at one right now. You might look into screens as much as, or more than, you look into the faces of your loved ones, friends, and coworkers.

But this isn’t an article about the cultural and societal impacts of the increasing number of screens in our lives.

Instead, I’m going to outline five forces powering an emerging multiscreen environment, painting a picture of the new world that designers will be crafting experiences for in the not-so-distant future.

Screen Pervasiveness

When the television set stopped being a luxury item and arrived in the average household, there was one set in one room. Yet many homes today have large-screen televisions in several rooms and smaller ones in others. Similarly, we are now seeing many more interactive screens in many more places, including the kitchen, the car, and even mom-and-pop retail locations.

That’s what screen pervasiveness is all about: the expansion of the number and variety of screens in our lives. As costs continue to decrease, it’s not uncommon to see interactive touch screens on coffee tables and alarm clocks, on maps at malls and amusement parks, and yes, even in the bathroom.

Imagine, for example, if the walls in your bathroom were actually interactive screens. After a long day, you could watch your favorite show, read through emails, and catch up on Twitter (just don’t take that FaceTime call).

Screen Persistency

I saw Steve Jobs at his last WWDC appearance, and the only part of the keynote he gave was the iCloud presentation. You could tell he believed it was the future of Apple.

Apple has iCloud, Amazon has Whispersync, and Google has, well, Google. All of the major technology giants are investing in something I describe as “screen persistency”: the ability to seamlessly move from one screen to another and continue the exact same operation without interruption.

If you have a Kindle, use Netflix, run any iCloud-enabled apps, or use Google Sync in Chrome, you likely already experience screen persistency. When these tools work, they feel almost like magic. The Kindle lets us pick up any device and returns us to our last reading location. With iMessage, all of our messages exist across our iOS and Mac devices. And Chrome Sync makes all of our open tabs and browsing history accessible on any platform that supports Google Chrome. These simpler implementations are tolerated for now, but as more and more screens appear around us, screen persistency will need to become more sophisticated.

Consider closing down your work computer at the end of the day. Instead of just accessing open tabs or seeing the same messages, you should be able to pick up exactly where you left off on your mobile device, TV, or even in your car. Each device should have the same data available, the same applications installed, the same windows open, and even the same cursor position. This type of experience is why Jobs believed iCloud was critical to Apple’s future, and it’s why all of these giants are investing in similar technology.
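To make the idea concrete, here’s a minimal sketch of screen persistency: every device writes a session snapshot to a shared store and restores the latest one on launch. The store, the field names, and the last-writer-wins rule are all illustrative assumptions, not any vendor’s actual sync API.

```typescript
// Hypothetical sketch of screen persistency: each device serializes its
// session to a shared cloud store and restores the latest one on launch.

interface SessionState {
  deviceId: string;
  updatedAt: number;          // epoch milliseconds
  openDocuments: string[];    // e.g. file paths or URLs
  cursor?: { doc: string; line: number; column: number };
}

// Stand-in for a real sync backend (iCloud, Whispersync, etc.).
const cloudStore = new Map<string, SessionState>();

function saveSession(userId: string, state: SessionState): void {
  const current = cloudStore.get(userId);
  // Last-writer-wins: only persist if this snapshot is newer.
  if (!current || state.updatedAt > current.updatedAt) {
    cloudStore.set(userId, state);
  }
}

function restoreSession(userId: string): SessionState | undefined {
  // A new device simply picks up the most recent snapshot.
  return cloudStore.get(userId);
}

// Usage: the work computer saves on shutdown...
saveSession("ken", {
  deviceId: "work-mac",
  updatedAt: Date.now(),
  openDocuments: ["report.pages", "budget.numbers"],
  cursor: { doc: "report.pages", line: 42, column: 7 },
});

// ...and the phone restores the exact same state moments later.
console.log(restoreSession("ken")?.cursor);
```

Real systems face much harder problems, such as merging edits made offline on two devices at once, but the core contract is the same: state lives with the user, not the screen.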

Screens That Know About You

For years now, cookies have followed us around as we browse the web. In the mobile age, we’ve gone a step further: those cookies ride along in our pockets and bags on our mobile devices. Not only do our browsing habits travel with us, but our location is now known too. Combined with data like spending habits and the apps we regularly use, the screens in our lives know us better than we know ourselves.

Google Now combines location data along with information gleaned from email to provide up-to-the-minute details about when to leave for the airport or to pick up the kids on time. It takes into account a person’s current location, the traffic patterns of the area, and even weather conditions.
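A toy version of that “when to leave” calculation makes the mechanics clear. The inputs and the weather padding below are invented for illustration; Google’s actual model is far richer.

```typescript
// Toy "when should I leave?" calculation in the spirit of Google Now.
// All inputs and the buffer rule are illustrative assumptions.

interface TripContext {
  eventStart: Date;      // e.g. flight boarding time or school pickup
  driveMinutes: number;  // current traffic-adjusted travel estimate
  badWeather: boolean;   // triggers extra padding
}

function departureTime({ eventStart, driveMinutes, badWeather }: TripContext): Date {
  const bufferMinutes = 10 + (badWeather ? 15 : 0); // arbitrary safety margin
  const totalMinutes = driveMinutes + bufferMinutes;
  return new Date(eventStart.getTime() - totalMinutes * 60_000);
}

const leaveBy = departureTime({
  eventStart: new Date("2013-06-01T15:30:00"),
  driveMinutes: 35,
  badWeather: true,
});
console.log(`Leave by ${leaveBy.toLocaleTimeString()}`);
```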

It’s also increasingly common to see biometric sensors built into consumer devices. On Android, facial recognition can determine whether a device should unlock itself. And moving beyond a PIN or passcode, it would be incredibly useful to unlock a device with nothing more than a thumbprint.
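Under the hood, any such unlock reduces to a similarity score checked against a threshold. The sketch below is purely illustrative: the cosine-similarity matcher stands in for a real feature-embedding comparison, and the threshold is arbitrary.

```typescript
// Sketch of biometric gating: unlock only when the captured sample
// matches the enrolled template closely enough. The matcher is a stub.

function matchScore(sample: Float32Array, template: Float32Array): number {
  // Real systems compare learned feature embeddings; here, a toy
  // cosine similarity stands in for that comparison.
  let dot = 0, a = 0, b = 0;
  for (let i = 0; i < sample.length; i++) {
    dot += sample[i] * template[i];
    a += sample[i] ** 2;
    b += template[i] ** 2;
  }
  return dot / (Math.sqrt(a) * Math.sqrt(b));
}

function shouldUnlock(sample: Float32Array, template: Float32Array): boolean {
  const THRESHOLD = 0.92; // arbitrary: trades false accepts vs. false rejects
  return matchScore(sample, template) >= THRESHOLD;
}

// Usage with toy three-dimensional "embeddings":
const enrolled = new Float32Array([0.2, 0.8, 0.5]);
const capture = new Float32Array([0.21, 0.79, 0.52]);
console.log(shouldUnlock(capture, enrolled)); // true: close match
```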

The point is that devices now know us by our habits, as well as our physical characteristics. In the future, they will share this information with other devices we own or other devices we interact with to better serve our needs.

Screens That Know About Themselves

You may have heard the term “second screen.” It’s a concept whereby a device’s function changes based on another device’s presence. Commonly, second screens relate to mobile devices—normally a primary device—becoming a secondary controller when interacting with a larger screen like a television.

The reverse is also true as shown by the Nintendo Wii U. The controller is normally the secondary device but it can become a primary device when junior is kicked out of the family room and is forced to take his game elsewhere.

Consider more interesting implementations, perhaps even with a basic task like a phone call. Instead of using just the phone, the call could be passed across various devices in a home as someone walked through it. When entering the living room, the voice call would transition to video or a Google Hangout on the TV itself. When leaving the house, the call would transition back to the phone, and then seamlessly to the car audio.
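No platform offered this at the time of writing, but the routing logic itself is easy to sketch. Everything below is hypothetical: the device names, the capability model, and the preference order.

```typescript
// Hypothetical call handoff: route an active call to the "best"
// screen currently near the user.

type Medium = "audio" | "video";

interface Screen {
  name: string;
  supports: Medium[];
  nearUser: boolean;
}

// Preference order when several screens are available.
const preference = ["living-room-tv", "phone", "car-audio"];

function routeCall(screens: Screen[], wanted: Medium): Screen | undefined {
  const candidates = screens.filter(
    (s) => s.nearUser && s.supports.includes(wanted)
  );
  // Pick the highest-preference nearby screen that can carry the call.
  candidates.sort(
    (a, b) => preference.indexOf(a.name) - preference.indexOf(b.name)
  );
  return candidates[0];
}

// Walking into the living room: the TV takes over as a video call.
const screens: Screen[] = [
  { name: "phone", supports: ["audio", "video"], nearUser: true },
  { name: "living-room-tv", supports: ["audio", "video"], nearUser: true },
  { name: "car-audio", supports: ["audio"], nearUser: false },
];
console.log(routeCall(screens, "video")?.name); // "living-room-tv"
```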

Screen Interactions

Since the dawn of the personal computer age, we’ve largely used the WIMP interaction model with our digital machines: windows, icons, menus, and pointers. Touch, and specifically multi-touch, has come on strong in the last five years, but it’s not the final frontier.

Voice interaction will be the next big shift. It won’t be limited to basic searches or standard command syntax. We’ll move beyond the superficial, “Create an appointment,” to “Pull up my 2007 tax return and tell me how much my effective tax rate was,” or “Compare the heat index today to 1983.” And combined with the other four forces, voice becomes more powerful. My tax return information will be persistent across devices and can be authorized and accessed by my voice as well as my wife’s or my CPA’s.
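Behind the scenes, queries like these stop being keyword matching and become structured intents. A hypothetical parsed form might look like the following; the schema and handler are invented for illustration.

```typescript
// Hypothetical structured intents behind the voice queries above.

type Intent =
  | { kind: "lookup"; document: "tax-return"; year: number; field: "effectiveRate" }
  | { kind: "compare"; metric: "heat-index"; baseline: "today"; against: number };

// What a capable parser might emit for each spoken request:
const pullUpTaxReturn: Intent = {
  kind: "lookup",
  document: "tax-return",
  year: 2007,
  field: "effectiveRate",
};

const compareHeatIndex: Intent = {
  kind: "compare",
  metric: "heat-index",
  baseline: "today",
  against: 1983,
};

function handle(intent: Intent): string {
  switch (intent.kind) {
    case "lookup":
      return `Fetching ${intent.field} from ${intent.year} ${intent.document}`;
    case "compare":
      return `Comparing ${intent.metric}: ${intent.baseline} vs. ${intent.against}`;
  }
}

console.log(handle(pullUpTaxReturn));
console.log(handle(compareHeatIndex));
```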

There are, of course, other significant advancements in how we interact with screens. For example, the Kinect shows how physical movement can be harnessed in what is sometimes described as a “natural user interface.” More advanced implementations include Oblong Industries’ g-speak, which provides three-dimensional, building-scale work environments for manipulating large data sets. Oblong actually created many of the interfaces in the movie Minority Report. If you thought those were advanced, check out what the company offers commercially today.

Conclusion

The design challenge of adapting interfaces to different screen sizes will seem trivial compared to developing experiences that work across mediums and contexts. That’s why it’s an exciting time to be creating digital experiences. I believe we’re up to the task, even if we’ll no longer be able to find those brilliant ideas in our touch-enabled, voice-activated, Internet-connected showers.


Ken Yarmosh

Ken Yarmosh is the Founder & CEO of savvy apps. He is the brains behind multiple chart-topping mobile applications, with honors ranging from Apple’s prestigious Editors’ Choice to the Webby Award.

His full-service mobile agency, savvy apps, helps big brands like the NFL Players Association, as well as mobile-focused startups such as Homesnap, build their apps on iOS, Android, and other mobile platforms. Ken also speaks regularly about application design & development, as well as the future of mobile, at outlets ranging from Bloomberg TV to Google.
