
Vision Is Not About The Goggles; It Is About A New Way Of Looking

by Iskander Smit
6 min read

What does the introduction of the Vision Pro device and visionOS mean? After a week of reflection, and after reading about others’ hands-on experiences, I’d like to offer my take here.

The last time I tried to reflect on a new Apple category here was shortly after the AirPods launched back in 2016. My thoughts back then were about the potential impact of intelligence in an audio device on the phone ecosystem. I am not going to evaluate that ‘prediction’ here; the impact of immersive audio devices as touchpoints has certainly been felt over the past half-decade. In that early period, the AirPods were still seen as weird, even ugly, devices that could never become successful. I don’t have to make the case that it turned out differently. The AirPods are now often brought up in comparison to the responses to Apple’s introduction of the Vision Pro and visionOS, which makes sense from an emotional and receptive standpoint. Much more interesting, though, is the role of the technology in our lives: the computational filter on our hearing is becoming accepted as an integrated part of it. Many people wear AirPods, and other brands too, all the time, even during conversations; it is almost fully accepted, and transparency mode is often even clearer than reality. In the audio domain, we already live in a synthetic context.

To give away my conclusion on the Vision Pro up front: even more than with the AirPods, this is not the introduction of a new device; it is the introduction of a new relationship with technology in our social context, in the way technology mediates our experience and creates a synthetic layer. It makes a lot of sense that it is called visionOS and not RealityOS: it enhances our vision in a new way. The current form factor of the goggles, high-end or not, is not the ultimate device; it is one implementation of the synthetic layer, and it will become the testbed for shaping how we will interact through that layer. That has two aspects: (1) the interaction with a vision-intervening device, and (2) how a synthetic living environment will play out.

First: reality is the foundation to build on.

As Casey Newton framed it on the Hard Fork podcast, the essence is that we are looking at a moment where we start wearing computers on our faces. He might have a different intention with this framing, but it is essential: the current goggles are just one form factor for wearing computing capabilities on our faces. If you believe this is a sensible thing, then the big question is how we will come to interact with this computing (not the computer, the computing capabilities).

The Vision Pro is the fully loaded version that aims to remove every possible barrier to understanding and experiencing what that will mean. I think Apple wants to create a starting position that adds the computational layer as completely as possible without interfering with your sight. Several testers who wore the device describe the most impressive part as how unobtrusive this filter on your head is to the experience of reality. All the cameras, the sensors, and the computing power together create a seamless point of departure.

The next step is to add elements to this computational canvas step by step. The first elements are probably rather mundane. Having an app store is almost a sanity check. Next is the possibility of having large screens all over the place. All of this amounts to finding a nearby iteration of concepts known from the real world. In that sense, Apple is creating a desktop metaphor for spatial computing, just as we did at the start of the desktop computing era. And Apple is trying to stay as close as possible to the viewing concept in the interaction: looking at icons to select them is the proposed interaction method.
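
To make this “nearby iteration of known concepts” a little more concrete, here is a minimal sketch of what such a spatial window could look like in SwiftUI on visionOS. Treat it as an illustrative assumption, not Apple’s own sample code; the names SpatialDemoApp and ContentView are made up for this example.

```swift
import SwiftUI

// A minimal visionOS app sketch: a familiar 2D window floating in the user's space.
// The notable part is how little changes for the developer: gaze highlights a control,
// a pinch activates it, and that look-and-pinch combination arrives as an ordinary tap.
@main
struct SpatialDemoApp: App {            // hypothetical name, for illustration only
    var body: some Scene {
        WindowGroup {                   // the window metaphor, carried into space
            ContentView()
        }
    }
}

struct ContentView: View {
    @State private var selections = 0

    var body: some View {
        VStack(spacing: 16) {
            Text("Selected \(selections) times")
            // No gaze-specific code here: the system maps look-and-pinch
            // onto normal tap handling.
            Button("Select") {
                selections += 1
            }
        }
        .padding(40)
        .glassBackgroundEffect()        // the standard translucent window material on visionOS
    }
}
```

An app can later break out of the window metaphor into an immersive space, but the point of the sketch is how close the starting point stays to today’s desktop concepts.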

We will see new interaction concepts emerge, driven by app makers and companies, and the ones that make sense will be merged into the OS, as always.

Apple has created a style guide for spatial computing. It is interesting to dive into earlier experiments, art projects, and academic research to develop the interaction framework further. I expect that some of the presented concepts will already have been updated by the time the Vision Pro goes on sale next year.

Second: find the right agency in the synthetic environment

So the canvas is created as a blank sheet for new concepts, with fundamentals that are rigid enough to build upon but open enough to stimulate new ideas.

Thinking about a synthetic layer that is truly immersive for the first time delivers the next step in the ongoing developments in synthesizing our experience of the world. The Vision Pro is not only creating a truly immersive canvas; it potentially creates the first real immersive synthetic view of the world. We already filter our audio continuously, and of course there is a lot of discussion about the other side: synthetic media and the potential fake realities we now need to deal with.

I don’t want to dive too deep into the relationship between the synthetic experience and the concept of technological mediation, but it is interesting to think about the consequences here. I had to think of this short introduction to the thinking of Don Ihde by Peter-Paul Verbeek, and I believe there will be a growing need for agency in a fully mediated experience. Who, in the end, controls our perception, our vision, if everything is mediated all the time? This is what is called hyperreality.

One important thing Apple is trying to do is secure trust in the technology. Where Meta’s introduction of VR was met with reluctance, Apple’s privacy reputation and the lack of a primary ad model might open up possibilities to start creating and extending concepts in the synthetic world.

The new Vision is again supported by a trusted framework to develop on. The uncanny valley of synthetic experiences will become part of these explorations.

What is super interesting here is the merger with generative AI tools for vision-related objects. Will Apple integrate generative AI tooling into the Vision toolkit? Will it acquire Midjourney to make an immersive, real-time version of shaping the world and the artifacts of the synthetic experience? I expect different scenarios have already been developed.

Finally, it is interesting to compare this with the boom of ChatGPT and the importance of the interface. A system where interaction with generative AI both creates a popular service and helps make the AI more intelligent could be an outcome of using Vision too. Will interacting with generative images become just as powerful as interacting with large language models? Will spatial computing be for generative vision what chat is for GPT?

So Vision Pro is all about a new way of looking.

When Google Glass was introduced, I found the new model of timely information and interaction the most interesting part. That is also the case with Vision. It is not about the goggles, and not even about the apps themselves, but about the potential new vision we will have and interact with. That might be the biggest impact of Vision. As a side note: the refreshed introduction of interactive widgets in the OS might even be a preparation for a more timely interaction paradigm.

It can still go wrong. The strategy with the Apple Watch and the AirPods, letting them grow into a social position, only worked because people actually used them. The current first iteration might be too expensive for that kind of casual adoption. But maybe an SE version will be introduced alongside the Pro soon after next year’s launch.

Goggles are not the future of computing; a computer on your face might be, though. I won’t say that we have entered the age of goggles, but we may very well have entered the age of a new synthetic vision.


Iskander Smit
Iskander Smit (@iskandr) is the design director at Structural. Before that, he was innovation director at agency INFO in Amsterdam, responsible for research and development and leading LABS. Iskander was also a visiting professor and lab director at Delft University of Technology’s Faculty of Industrial Design, researching Cities of Things, now a research foundation. He is chairman and organizer of the foundation ThingsCon Amsterdam, and earlier co-founded the Behavior Design AMS meetup.

Ideas In Brief
  • The article discusses the introduction of Apple’s Vision Pro and visionOS, highlighting their significance in shaping a new relationship with technology through a synthetic layer that enhances vision.
