
Long Term Memory: Touchscreen Interaction

by Ryan Hunt

As we have new experiences and learn new things, we store away information for recall at a later date. Information is first stored in our short-term memory for easy access and is then consolidated and stored in our long-term memory. Our long-term memory is “our storehouse of facts about the world and how to do things”.[16] Long-term memory is used to store our knowledge, compare new information with old information, and keep track of the order in which things happen. Because of this, when a new product is introduced on the market, if it is designed in such a way that allows people to relate it to the way we use another product, it has a greater chance of being accepted.

We have become accustomed to the interactions that were introduced on touchscreen phones, like swiping, scrolling, pinch-zooming, and tapping. Now that these interactions are available on laptops, they have become an indispensable way of interacting with a device, because this model of interaction has been stored in our long-term memory.

I’ll use the recent rise of touchscreen laptops, specifically the Microsoft Surface, to illustrate this. Because of our familiarity with touchscreens on mobile phones, the transition to touchscreen laptops has been easier than prior attempts to introduce touch input to mobile computing.

Memory Organization

One of the ways we use our long-term memory is to make comparisons between new information and old information. This allows us to make assumptions about new information as we receive it and is commonly referred to as top-down processing. The difference between bottom-up and top-down processing is essentially the difference between providing data for decision making and providing context for it.

Bottom-up processing begins with signal detection but does not depend on prior experiences or cultural meaning. Top-down processing, however, relies on prior knowledge and context. Bottom-up processing can be thought of as data-driven processing, while top-down processing can be thought of as being conceptually guided.[4][14]

Schema, Frames, and Scripts

In cognitive science, schema theory describes a flexible template for inferring information based on the current context and past experiences. Two schematic systems are of particular note: frames and scripts. A frame is used to indicate context when understanding information. For example, if someone brings up “pinch-zoom” in a conversation, your mind frames the conversation around touchscreen interaction because of the semantic cues.

Frames are a generalized structure for understanding typical situations. Scripts are a more focused schematic system than frames, in that they rely on specific information that fulfills requirements and occurs over a duration of time. Schank and Abelson describe a script as “a very boring little story.” The script for changing your ringtone on an Android device would read: go to settings, go to sounds and notifications, go to ringtones and sounds, go to ringtone, choose a new ringtone. The components of the script allude to details about the device’s system settings hierarchy, but each part of the script must occur in its specific order for the task to be accomplished.[6][9][13]
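To make the ordered, all-or-nothing character of a script concrete, the ringtone example can be sketched as a sequence that only succeeds when every step happens in order. The step names below paraphrase the article’s example; they are not the actual Android settings labels.

```python
# A script modeled as an ordered sequence of steps. Step names are
# illustrative paraphrases, not real Android settings paths.
RINGTONE_SCRIPT = [
    "open settings",
    "open sounds and notifications",
    "open ringtones and sounds",
    "open ringtone",
    "choose a new ringtone",
]

def run_script(script, actions):
    """Return True only if the actions match the script step for step."""
    return list(actions) == list(script)

# Performing the steps out of order fails, just as the task would.
print(run_script(RINGTONE_SCRIPT, RINGTONE_SCRIPT))            # True
print(run_script(RINGTONE_SCRIPT, reversed(RINGTONE_SCRIPT)))  # False
```

The point of the sketch is that a script, unlike a frame, encodes order: the same set of actions in a different sequence does not fulfill it.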

Mental Models

The idea of a mental model is that we create a model of information against which we compare new information. If we can integrate the new information into the existing model, we are able to make better sense of it. Mental models are used for understanding complex systems; when we are presented with new complex information, we choose an appropriate existing model to relate it to:

Different models are applied in different contexts (e.g., when the car fails to start, the driver remembers that the battery provides power to the starting motor and hypothesizes that the battery must therefore be dead).[16]

The mental model in this example is of the car’s electrical system. The driver knows enough about this system to understand that power generation starts at the battery, but doesn’t necessarily need to understand the intricacies of automotive electronics to determine the cause of the issue.[7]

Similarly, when Windows 8 was released and touchscreen laptops were more common, interaction designers relied on our experiences with mobile devices to guide their designs. Many of the common mobile interactions were used to manipulate laptop screens. Touch scrolling, tap and drag, multi-touch interactions, and swiping off screen were common interactions on Windows 8.

Interestingly, this seems to have caused a debate about how to scroll using mouse wheels and touchpads. Traditionally, rolling the mouse’s scroll wheel towards the user (or two-finger scrolling downwards on a touchpad) would move the document down so users could read lower on the page. With the release of Apple’s OS X 10.7 Lion, however, designers opted to reverse the scroll direction to mimic the way we interact with mobile devices. This caused a minor kerfuffle, with reviewers saying, “With natural scrolling, Apple was messing with the laws of the universe!”[1] Disrupting people’s mental models and interaction frames so abruptly is a risky design decision.
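The whole debate reduces to a single sign. Traditional scrolling moves the viewport in the direction of the gesture; “natural” scrolling moves the content with the finger, which inverts the viewport’s movement. A minimal sketch, with illustrative sign conventions (positive meaning “towards the user”):

```python
def viewport_delta(wheel_delta: int, natural: bool) -> int:
    """Map an input scroll delta to viewport movement.

    Traditional scrolling moves the viewport with the gesture;
    "natural" scrolling moves the content with the gesture, which
    inverts the viewport delta. Signs here are illustrative.
    """
    return -wheel_delta if natural else wheel_delta

# The same physical gesture (+3) scrolls the viewport down
# traditionally, but up with natural scrolling.
print(viewport_delta(3, natural=False))  # 3
print(viewport_delta(3, natural=True))   # -3
```

A one-character change in code, but a wholesale inversion of a stored interaction model for users.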

Interconnection
Semantic Networks and Priming

A semantic network is a system that is defined by its relationships with other systems. Semantic networks are similar to frames in that the context of one system indicates a connection to another system. Semantic networks work by connecting concepts through their terminology. Certain phrases prime a listener’s understanding of a word’s context. For example, food as a primer separates edible things from non-edible things, and yellow adds a finer layer of connection, narrowing down the possible properties of the food in question. By referring to its reversed scroll direction as “natural scrolling,” Apple was probably trying to prime users into thinking that the old way of scrolling was unnatural and wrong.[3][8]
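The food/yellow example can be sketched as a toy semantic network in which each primer narrows the set of active concepts. The concepts and properties below are invented for illustration, not drawn from any real lexicon:

```python
# A toy semantic network: concepts connected to labeled properties.
NETWORK = {
    "banana":  {"food", "yellow"},
    "lemon":   {"food", "yellow"},
    "apple":   {"food", "red"},
    "taxi":    {"vehicle", "yellow"},
    "granite": {"mineral", "gray"},
}

def prime(*cues):
    """Each additional cue narrows the set of active concepts."""
    return {c for c, props in NETWORK.items() if set(cues) <= props}

print(prime("food"))            # edible things only
print(prime("food", "yellow"))  # narrower still: banana, lemon
```

Each cue acts like the primers in the text: “food” rules out the taxi and the granite, and “yellow” then rules out the apple.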

Spreading Activation

This idea that priming contextualizes semantic networks can also be described as associative processing, or spreading activation. Spreading activation is the waterfall effect of understanding as more context on the matter is gained. As more relationships between different ideas are created, the expanded mental model of various interconnected semantic networks can lead to a very broad and complex association between concepts.[3][11]
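A minimal sketch of spreading activation: activation starts at a primed concept and propagates along association links, weakening with each hop until it falls below a threshold. The graph and edge weights below are invented for illustration; they are not empirical association strengths.

```python
# Toy association graph; weights are illustrative association strengths.
GRAPH = {
    "pinch-zoom":  {"touchscreen": 0.9, "photos": 0.5},
    "touchscreen": {"smartphone": 0.8, "laptop": 0.4},
    "smartphone":  {"ringtone": 0.3},
}

def spread(source, threshold=0.2):
    """Propagate activation outward, decaying by edge weight per hop."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in GRAPH.get(node, {}).items():
            a = activation[node] * weight
            if a >= threshold and a > activation.get(neighbor, 0.0):
                activation[neighbor] = a
                frontier.append(neighbor)
    return activation

# Hearing "pinch-zoom" strongly activates "touchscreen" and, two hops
# away, still weakly activates "smartphone" and "ringtone".
print(spread("pinch-zoom"))
```

This is the waterfall effect described above: one concept lights up its neighbors, which light up theirs, with ever-fainter activation at each remove.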

In early versions of the Microsoft Surface user guide, users were primed with familiar touch interaction methods: “Like a smartphone, you can interact with Surface by touching the screen”.[15] Users were also primed to connect touch interactions with familiar mouse interactions: tapping once performs the same action as a left click, tapping and holding as a right click, and sliding as mouse-like scrolling. Microsoft also added some new interactions, like swiping from the edge to open menus and recently opened apps. By priming these interactions, Microsoft situates its new touchscreen interactions within a mental model that PC and smartphone users have already mastered.[15]
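The guide’s priming can be read as a simple lookup from a familiar mouse action to its new touch equivalent. The gesture names below paraphrase the article’s summary of the guide; they are not an official Microsoft API:

```python
# Mapping familiar mouse actions to their primed touch equivalents,
# paraphrased from the article's summary of the Surface user guide.
TOUCH_EQUIVALENTS = {
    "left click":  "tap once",
    "right click": "tap and hold",
    "scroll":      "slide",
}

def touch_gesture_for(mouse_action):
    """Look up the touch gesture primed for a known mouse action."""
    return TOUCH_EQUIVALENTS.get(mouse_action, "no primed equivalent")

print(touch_gesture_for("right click"))  # tap and hold
```

Framing each new gesture as “the touch version of X” lets learning proceed by assimilation rather than restructuring.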

Evolution
Accretion, Tuning, Restructuring

Mental models grow as we get new information. New information comes in, we categorize it with a frame or semantic network, and the mental model expands in response. This process, known as accretion, is simply the addition of new information to an existing model; it is the most common way we take in information, and fitting new information into existing models is a very efficient way of learning.

However, sometimes new information doesn’t seem to fit into existing models. When this happens, we make adjustments to the categories that are used to organize information and tune the model in order to apply it more effectively. Other times, after a period of confusion, and a subsequent ‘A-HA’ moment of understanding, we are able to find a way to fit the new information into our now expanded mental model. This is the process of restructuring. Learning this way requires tremendous effort but leads to more complex ways of taking in new information.[8][12]

Assimilation and Accommodation

Another way to think about how we learn is through Jean Piaget’s dialectical theories of assimilation and accommodation. Assimilation, like accretion, is when new information fits into an existing structure. Accommodation, as in restructuring, adjusts standing models to allow new information to fit. Piaget emphasizes that assimilation and accommodation are parts of the cycle of learning and development.

Typically, new information is received and assimilated into a current mental model. Some new information may not seem to fit, and we need to make adjustments to maintain the balance of our mental model. Not being able to understand something is not a pleasant experience. This unpleasant feeling is what forces us to make the accommodation in an effort to get back to learning by assimilation as soon as possible.[2][5][10]

Microsoft’s framing of the Surface interactions allowed the idea of a touchscreen laptop to be learned through assimilation. To make it easier for people to use a touchscreen, they needed to be sure the targets were large enough for accurate selection. With Windows 8, Microsoft introduced a tiled menu interface that made it easier to access apps by touch alone. This was a major departure from Windows 7, which was widely adopted at the time, but the majority of users didn’t replace their older, non-touchscreen laptops with new touch-enabled ones. This essentially threw Microsoft’s products into a state of disequilibrium, with newer touchscreen products having better chances of adoption than older products that were originally designed for a more traditional form of interaction.

Conclusion

The success of Microsoft Surface relies on the mental models that have been stored in our long-term memory due to our interaction with our phones. These models allow for contextualization and are flexible enough to help us understand new information or understand new systems of interaction. Some understanding comes easily, while other information causes us to struggle to comprehend it. However, the malleable nature of mental models and their connections to other models can help us make sense of new ways of doing things.

Works Cited
  1. Agger, M. (2011, September 20). Apple’s Mousetrap. Slate. Retrieved from here
  2. Atherton, J. S. (2013). Learning and Teaching; Assimilation and Accommodation. Retrieved March 26, 2016, from here
  3. Cohen, P. R., & Kjeldsen, R. (1987). Information retrieval by constrained spreading activation in semantic networks. Information Processing & Management, 23(4), 255–268.
  4. Lindsay, P. H., & Norman, D. A. (1977). Human information processing: An introduction to psychology (2nd edition). New York: Academic Press.
  5. McLeod, S. A. (2015). Jean Piaget. Retrieved March 22, 2016, from here
  6. Minsky, M. (1974). A Framework for Representing Knowledge. MIT-AI Laboratory Memo 306. Retrieved from here
  7. Moray, N. (1998). Identifying mental models of complex human–machine systems. International Journal of Industrial Ergonomics, 22(4), 293–297.
  8. Norman, D. A. (1982). Learning and memory. San Francisco: W.H. Freeman.
  9. Norman, D. A. (1986). Reflections on cognition and parallel distributed processing. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 2, 531–546.
  10. Piaget, J. (1961). The genetic approach to the psychology of thought. Understanding Children, 52, 35.
  11. Roediger III, H. L., Balota, D. A., & Watson, J. M. (2001). Spreading activation and arousal of false memories. The Nature of Remembering: Essays in Honor of Robert G. Crowder, 95–115.
  12. Rumelhart, D. E., & Norman, D. A. (1976). Accretion, tuning and restructuring: Three modes of learning. DTIC Document.
  13. Schank, R. C., & Abelson, R. P. (1975). Scripts, plans, and knowledge. Yale University, New Haven, CT.
  14. Sincero, S. M. (2013, August 1). Top-Down VS Bottom-Up Processing. Retrieved March 21, 2016, from here
  15. Surface User Guide. (2014, March). Microsoft. Retrieved from here
  16. Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2013). Engineering psychology and human performance (4th edition). Boston: Pearson.
About the author

Ryan Hunt is a Bay Area-based user-centered researcher and designer who has worked on projects ranging from cars and places to apps and services. He holds a degree in Urban Studies from UC Berkeley and is finishing his MS in Human Factors in Information Design at Bentley University.

