
Long Term Memory: Touchscreen Interaction

by Ryan Hunt

As we have new experiences and learn new things, we store away information for recall at a later date. Information is first stored in our short-term memory for easy access and is then consolidated into our long-term memory. Our long-term memory is “our storehouse of facts about the world and how to do things”.[16] Long-term memory is used to store our knowledge, compare new information with old information, and keep track of the order in which things happen. Because of this, a new product introduced on the market has a greater chance of being accepted if it is designed in a way that allows people to relate it to how they already use another product.

We have become accustomed to the interactions that were introduced on touchscreen phones, like swiping, scrolling, pinch-zooming, and tapping. Now that these interactions are available on laptops, they have become an indispensable way of interacting with a device, because this model of interaction has been stored in our long-term memory.

I’ll use the recent rise in touchscreen laptops, specifically the Microsoft Surface, to illustrate this. Because of our familiarity with touchscreens on mobile phones, the transition to touchscreen laptops has been easier than prior attempts to introduce touchscreens to mobile computing.

Memory Organization

One of the ways we use our long-term memory is to make comparisons between new information and old information. This allows us to make assumptions about new information as we receive it and is commonly referred to as top-down processing. The difference between bottom-up and top-down processing is essentially the difference between providing data for decision making and providing context for it.

Bottom-up processing begins with signal detection but does not depend on prior experiences or cultural meaning. Top-down processing, however, relies on prior knowledge and context. Bottom-up processing can be thought of as data-driven processing, while top-down processing can be thought of as being conceptually guided.[4][14]

Schema, Frames, and Scripts

In cognitive science, schema theory describes a flexible template for inferring information based on the current context and past experiences. Two schematic systems are of particular note: frames and scripts. A frame is used to indicate context in understanding information. For example, if someone brings up “pinch-zoom” in a conversation, your mind frames the conversation around touchscreen interaction because of the semantic cues.

Frames are a generalized structure for understanding typical situations. Scripts, however, are a more focused schematic system than frames, in that they rely on specific information that fulfills requirements and occurs over a duration of time. Schank and Abelson describe a script as “a very boring little story.” The script for changing your ringtone on an Android device reads: go to settings, go to sounds and notifications, go to ringtones and sounds, go to ringtone, choose a new ringtone. The components of the script allude to details about the device’s settings hierarchy, but each part of the script must occur in its specific order for the task to be accomplished.[6][9][13]
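The ordering requirement is the essence of a script, and it can be sketched in a few lines of Python. This is a hypothetical illustration, not real device code; the step names paraphrase the ringtone script above.

```python
# A script is an ordered sequence of steps; performing them out of
# order fails the task. The steps paraphrase a generic Android
# settings hierarchy and are illustrative only.
RINGTONE_SCRIPT = [
    "go to settings",
    "go to sounds and notifications",
    "go to ringtones and sounds",
    "go to ringtone",
    "choose a new ringtone",
]

def script_satisfied(script, performed_steps):
    """The task succeeds only when every step occurs in the script's order."""
    return performed_steps == script

print(script_satisfied(RINGTONE_SCRIPT, list(RINGTONE_SCRIPT)))            # True
print(script_satisfied(RINGTONE_SCRIPT, list(reversed(RINGTONE_SCRIPT))))  # False
```

Unlike a frame, which only needs the right cues present, the script cares about sequence: the same five steps in a different order accomplish nothing.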

Mental Models

The idea of a mental model is that we create a model of information against which we compare new information. If we can integrate the new information into the existing model, we are able to make better sense of it. Mental models are used for understanding complex systems, and when we are presented with new, complex information, we choose an appropriate existing model to relate it to:

Different models are applied in different contexts (e.g., when the car fails to start, the driver remembers that the battery provides power to the starting motor and hypothesizes that the battery must therefore be dead).[16]

The mental model in this example is of the car’s electrical system. The driver knows enough about this system to understand that power generation starts at the battery, but doesn’t necessarily need to understand the intricacies of automotive electronics to determine the cause of the issue.[7]
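The driver’s reasoning can be sketched as a lookup over a simplified dependency chain. The component names and the chain itself are assumptions made for illustration; a real automotive model is far more detailed, which is exactly the point: the mental model only needs to be detailed enough to generate a hypothesis.

```python
# A coarse mental model of the car's starting system: each component
# depends on the one before it. Deliberately simplified.
STARTING_CHAIN = ["battery", "starter motor", "engine"]

def suspects(failed_component):
    """When a component fails, hypothesize about everything it depends on."""
    position = STARTING_CHAIN.index(failed_component)
    return STARTING_CHAIN[:position]

# The engine won't turn over, so the driver works backwards:
print(suspects("engine"))  # ['battery', 'starter motor']
```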

Similarly, when Windows 8 was released and touchscreen laptops became more common, interaction designers relied on our experiences with mobile devices to guide their designs. Many common mobile interactions were used to manipulate laptop screens: touch scrolling, tap and drag, multi-touch gestures, and swiping in from the screen’s edge were all common interactions on Windows 8.

Interestingly, this seems to have caused a debate about how to scroll using mouse wheels and touchpads. Traditionally, rolling the mouse’s scroll wheel toward the user (or two-finger scrolling downward on a touchpad) would move the document down so users could read lower on the page. With the release of Apple’s OS X 10.7 Lion, however, designers opted to reverse the scroll direction to mimic the way we interact with mobile devices. This caused a minor kerfuffle, with reviewers saying, “With natural scrolling, Apple was messing with the laws of the universe!”[1] Disrupting people’s mental model and interaction frame so abruptly is a risky design decision.
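The two scrolling conventions differ only in the sign of the mapping from input to content movement. The sketch below is a minimal model of that inversion; the function and its names are invented for illustration.

```python
def content_delta(input_delta, natural=False):
    """Map an input delta (positive = wheel rolled toward the user, or
    fingers moving down a touchpad) to how the page content moves.
    Traditional scrolling moves the content up so you read lower on the
    page; 'natural' scrolling moves the content with the fingers, as on
    a touchscreen."""
    return input_delta if natural else -input_delta

print(content_delta(5))                # -5: content moves up the screen
print(content_delta(5, natural=True))  #  5: content follows the fingers
```

A one-bit change in the mapping, but it inverts a relationship users had rehearsed for years, which is why the reaction was so strong.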

Interconnection
Semantic Networks and Priming

A semantic network is a system that is defined by its relationships with other systems. Semantic networks are similar to frames in that the context of one system indicates a connection to another system. Semantic networks work by connecting concepts through their terminology. Certain phrases prime a listener’s understanding of the context of a word. For example, food as a primer separates edible things from non-edible things, and yellow adds a finer layer of connection, narrowing down the possible properties of the food in question. By referring to its reversed scrolling direction as “natural scrolling,” Apple was probably trying to prime users into thinking that the old way of scrolling was unnatural and wrong.[3][8]
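The food/yellow example can be modeled as set intersection over a toy network: each cue narrows the set of candidate concepts. The items and their tags below are invented for illustration.

```python
# Each item is tagged with the concepts it is associated with.
ASSOCIATIONS = {
    "banana": {"food", "yellow"},
    "apple":  {"food", "red"},
    "bread":  {"food"},
    "taxi":   {"vehicle", "yellow"},
}

def prime(*cues):
    """Each cue narrows the candidates to items matching every cue so far."""
    return {item for item, tags in ASSOCIATIONS.items() if set(cues) <= tags}

print(sorted(prime("food")))            # ['apple', 'banana', 'bread']
print(sorted(prime("food", "yellow")))  # ['banana']
```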

Spreading Activation

This idea that priming contextualizes semantic networks can also be described as associative processing, or spreading activation. Spreading activation is the waterfall effect of understanding as more context on the matter is gained. As more relationships between different ideas are created, the expanded mental model of various interconnected semantic networks can lead to a very broad and complex association between concepts.[3][11]
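Spreading activation is often modeled as a graph traversal in which activation decays along each link. The sketch below uses concepts from this article as nodes, but the graph, weights, and threshold are invented for illustration, not drawn from any cited model.

```python
from collections import defaultdict

# A toy semantic network: concepts linked by association strengths.
EDGES = {
    "pinch-zoom":  [("touchscreen", 0.9), ("photos", 0.5)],
    "touchscreen": [("smartphone", 0.8), ("laptop", 0.6)],
    "smartphone":  [("apps", 0.7)],
}

def spread(source, threshold=0.3):
    """Activation starts at the source and decays along each edge; concepts
    whose activation stays above the threshold join the active context."""
    activation = defaultdict(float)
    activation[source] = 1.0
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbour, weight in EDGES.get(node, []):
            incoming = activation[node] * weight
            if incoming > activation[neighbour] and incoming >= threshold:
                activation[neighbour] = incoming
                frontier.append(neighbour)
    return dict(activation)

# Mentioning "pinch-zoom" activates the whole touchscreen neighborhood.
print(sorted(spread("pinch-zoom")))
```

This is the waterfall effect in miniature: one cue activates its neighbors, those neighbors activate theirs, and the result is a broad, interconnected context rather than a single association.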

In early versions of the Microsoft Surface user guide, users were primed for familiar touch interaction methods: “Like a smartphone, you can interact with Surface by touching the screen”.[15] Users were also primed to connect touch interactions with familiar mouse interactions: a single tap performs the same action as a left click, tap and hold acts as a right click, and sliding scrolls like a mouse. Microsoft also added some new interactions, like swiping from the edge to open menus and recently opened apps. By priming these interactions, Microsoft situated its new touchscreen interactions within a mental model that PC and smartphone users had already mastered.[15]
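The guide’s touch-to-mouse priming amounts to a small lookup table: ground each new gesture in an action users already know, and flag the genuinely new ones. The sketch below paraphrases the gestures described above; the function name and structure are hypothetical.

```python
# Touch gestures mapped onto the mouse actions users already know.
GESTURE_TO_MOUSE = {
    "tap":          "left click",
    "tap and hold": "right click",
    "slide":        "scroll",
}

# Gestures with no mouse equivalent had to be introduced as new concepts.
NEW_GESTURES = {"swipe from edge": "open menus and recent apps"}

def explain(gesture):
    """Ground an unfamiliar gesture in a familiar mouse action when possible."""
    if gesture in GESTURE_TO_MOUSE:
        return "works like a " + GESTURE_TO_MOUSE[gesture]
    return NEW_GESTURES.get(gesture, "unknown gesture")

print(explain("tap"))              # works like a left click
print(explain("swipe from edge"))  # open menus and recent apps
```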

Evolution
Accretion, Tuning, Restructuring

Mental models grow as we receive new information: it comes in, we categorize it with a frame or semantic network, and the model expands. This process, known as accretion, is simply adding more information to an already existing model, and it is the most common way we take in information. Fitting new information into existing models is a very efficient way of learning.

However, sometimes new information doesn’t seem to fit into existing models. When this happens, we adjust the categories used to organize information and tune the model in order to apply it more effectively. Other times, after a period of confusion and a subsequent ‘a-ha’ moment of understanding, we find a way to fit the new information into our now-expanded mental model. This is the process of restructuring. Learning this way requires tremendous effort but leads to more complex ways of taking in new information.[8][12]

Assimilation and Accommodation

Another way to think about how we learn is through Jean Piaget’s dialectical theories of assimilation and accommodation. Assimilation, like accretion, is when new information fits into an existing structure. Accommodation, as in restructuring, adjusts standing models to allow new information to fit. Piaget emphasizes that assimilation and accommodation are parts of the cycle of learning and development.

Typically, new information is received and assimilated into a current mental model. Some new information may not seem to fit, and we need to make adjustments to maintain the balance of our mental model. Not being able to understand something is not a pleasant experience, and this unpleasant feeling is what forces us to make the accommodation in an effort to get back to learning by assimilation as soon as possible.[2][5][10]

Microsoft’s framing of the Surface interactions allowed the idea of a touchscreen laptop to be learned through assimilation. To make a touchscreen easier for people to use, Microsoft needed to be sure that targets were large enough for accurate selection. With Windows 8, Microsoft introduced a tiled menu interface that made it easier to access apps by touch alone. This was a major departure from Windows 7, which was widely adopted at the time, but the majority of users didn’t replace their older, non-touchscreen laptops with new touch-enabled ones. This essentially threw Microsoft’s products into a state of disequilibrium, with newer touchscreen products having better chances of adoption than older products designed for a more traditional form of interaction.

Conclusion

The success of Microsoft Surface relies on the mental models that have been stored in our long-term memory due to our interaction with our phones. These models allow for contextualization and are flexible enough to help us understand new information or understand new systems of interaction. Some understanding comes easily, while other information causes us to struggle to comprehend it. However, the malleable nature of mental models and their connections to other models can help us make sense of new ways of doing things.

Works Cited
  1. Agger, M. (2011, September 20). Apple’s Mousetrap. Slate. Retrieved from here
  2. Atherton, J. S. (2013). Learning and Teaching; Assimilation and Accommodation. Retrieved March 26, 2016, from here
  3. Cohen, P. R., & Kjeldsen, R. (1987). Information retrieval by constrained spreading activation in semantic networks. Information Processing & Management, 23(4), 255–268.
  4. Lindsay, P. H., & Norman, D. A. (1977). Human information processing: An introduction to psychology (2nd edition). New York: Academic Press.
  5. McLeod, S. A. (2015). Jean Piaget. Retrieved March 22, 2016, from here
  6. Minsky, M. (1974). A Framework for Representing Knowledge. MIT-AI Laboratory Memo 306. Retrieved from here
  7. Moray, N. (1998). Identifying mental models of complex human–machine systems. International Journal of Industrial Ergonomics, 22(4), 293–297.
  8. Norman, D. A. (1982). Learning and memory. San Francisco: W.H. Freeman.
  9. Norman, D. A. (1986). Reflections on cognition and parallel distributed processing. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 2, 531–546.
  10. Piaget, J. (1961). The genetic approach to the psychology of thought. Understanding Children, 52, 35.
  11. Roediger III, H. L., Balota, D. A., & Watson, J. M. (2001). Spreading activation and arousal of false memories. The Nature of Remembering: Essays in Honor of Robert G. Crowder, 95–115.
  12. Rumelhart, D. E., & Norman, D. A. (1976). Accretion, tuning and restructuring: Three modes of learning. DTIC Document.
  13. Schank, R. C., & Abelson, R. P. (1975). Scripts, plans, and knowledge. Yale University, New Haven, CT.
  14. Sincero, S. M. (2013, August 1). Top-Down VS Bottom-Up Processing. Retrieved March 21, 2016, from here
  15. Surface User Guide. (2014, March). Microsoft. Retrieved from here
  16. Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2013). Engineering psychology and human performance (4th edition). Boston: Pearson.
About the author

Ryan Hunt is a Bay Area-based user-centered researcher and designer who has worked on projects ranging from cars and places to apps and services. He holds a degree in Urban Studies from UC Berkeley and is finishing his MS in Human Factors in Information Design at Bentley University.
