Priming the Electronic Prototype

by Sarah Martin

Meaningful UX results come from observing users in their natural environment, but money, time, and user availability often constrain traditional in-person UX sessions. Fortunately, remote testing has eased many of these constraints. However, as remote testing becomes ubiquitous, it raises questions about the electronic systems that users engage with during prototype testing. In particular, remote prototype testing shapes user expectations about the functionality, capabilities, and interactivity of electronic systems, and those expectations may affect how users appraise the prototype itself.

This article describes the challenges of a remote, mixed-fidelity prototype UX evaluation. It asks whether researchers and practitioners should give greater consideration to how electronic test systems affect remote paper-prototyping outcomes. Researchers diligently recruit representative users, refine meaningful tasks, follow scripts, and strive to avoid facilitator effects related to culture, gender, and language. But what, if anything, should we tell remote users about the capabilities of the electronic systems that support the prototypes they engage with?

Remote UX testing yields meaningful, measurable results for websites, software, and other electronic interfaces, where the prototype medium is native to the product itself. It's a quick way to reach more participants across broader demographics and to gather fast, low-cost feedback. Even in crowdsourced user testing, with its drawbacks of lower-quality feedback, less interaction, and less-focused users, participants engage with a product that functions in its native medium. But what happens when we use electronic systems to conduct prototyping sessions for non-electronic products, or what I will describe later as hybrid-electronic products?

In the case described here, users focused on the constraints and affordances of the electronic system, Google Sheets, used to conduct paper and medium-fidelity prototype testing. The product being tested was financial content that is currently available to users in three formats: an online table viewed on screen through a personal login; an electronically viewable, printable PDF whose content and format differ from the on-screen version; and a hard-copy paper document whose content and format differ from both electronic versions. Ultimately, this was a hybrid-electronic product: it was accessed via an internet connection and a computer, but it offered no electronic capabilities beyond viewing, printing, and saving. Users could encounter different document-viewing options on screen, but they could not manipulate the actual data.

As a researcher then, I had to determine what I was evaluating and how I should test it. What was the nature of the product? Was it a paper document? An electronic document? A website? Just information? Like Inception’s dream within a dream within a dream, was it content within a system within a system? If so, which system should I test? Did it matter? If paper prototyping sessions are supposed to use “non-functional” products to yield user-generated content and ideas, is it possible to do this remotely without users focusing on the functionality of the system used to administer the prototype itself?

Put another way, when users sit at a computer and interact electronically with a prototype of anything, the experience is functional. They may not use that functionality, but I would argue that users are not immune to the test system's presence. Even when the test system's features are unrelated to the product or concept under evaluation, they can shape users' mental models during prototype testing.

In the case of my hybrid-electronic product (financial document), my UX evaluation goals warranted paper prototype and medium-fidelity prototype testing. Due to geographic restrictions, remote testing was the only option. While I do not think that in-person testing would have given me more accurate results, I was surprised by the way users pointed to the functionality of Google Sheets as they interacted with the prototype.

User comments about the functionality of Google Sheets raise questions about how much guidance, if any, we should give remote users about electronic prototype test systems:

  • How specific should facilitators be about the capabilities of the product vs. the system used to test it? Do we explain to users that, even though they are looking at content via electronic means in a remote test, they should view it “like paper” or “like a website”? When it comes to remote paper prototyping, which is it? Or was this uncertainty just a function of a poor test plan or poor facilitation? In this study, I informed users that they would be looking at a “document.” Should I have specified that it was a document on a website, or a website itself? What product parameters, if any, should users be aware of beforehand? What type of system should they envision as they interact with the prototype? Users clearly cited the benefits and limitations of Google Sheets in their appraisals: the ability to click, sort, merge, and group.
  • Should facilitators clearly define which test-system functionalities users should be aware of or “ignore” in a remote test? The prototype in this study used the pop-up comment function in Google Sheets to showcase some of the test content. One user noticed the comment markers but figured they were the “researcher’s personal notes,” unrelated to the prototype, so they ignored them. Google Sheets attaches an identifier (usually the author’s name, as in Excel) to each comment. The user saw a name followed by a comment and decided it was unrelated to the document. Was I supposed to jump in and say that the comments were in fact part of the prototype? That the identifier was simply a function of Google Sheets itself? Or should we prime users and explain that Google Sheets uses a small triangle marker in the upper right-hand corner of a commented cell to signify that additional content, including an author name, is available? That decision ultimately depends on how informed you want users to be. (The first sketch below shows one possible workaround.)
  • How do we accommodate test-system limitations during remote prototype test sessions? I used Google Sheets to conduct the remote test sessions due to time and financial constraints. Google Sheets is free, doesn’t depend on a bandwidth-hungry shared connection that can degrade session quality (as Skype can), and easily allows screen sharing with live observation of user activities. However, there were limitations that forced users to interact with the prototype in ways that they might not have otherwise. For example, some users had to click on the specific content cell that they were looking at so I could see which content they were referencing on my screen. While this was easier for users than having to state, “I’m looking at cell B36,” I do wonder how it may have influenced the way they interacted with the prototype. Was I introducing faux interactivity (the cell click) that might have colored their view of the content as an interactive, electronic document rather than a paper document? Or were they still viewing it as a paper document, just housed in Google Sheets? (The second sketch below shows one way to log selections without asking for clicks.)
  • How do we account for user comments about the test system vs. the prototype? User comments about Google Sheets indicated an ambiguous relationship between the test system and the prototype content. For example, users who conceptualized the prototype content as a website attempted to double-click text. They quickly realized this was not a capability of Google Sheets. As a researcher, then, how do I account for this? Do users’ expectations about what an electronic prototype might do mimic the expectations that they have about the prototype in general? How does this affect UX?

For example, one user indicated that she wanted the capability to select specific content items to appear on a new screen, with a pop-up calculator. She added: “I know that’s not possible in Google Sheets, so I’m not sure if that is even possible at all.” I documented her thoughts in my evaluation but couldn’t help wondering whether the limitations of Google Sheets constrained her prototype appraisals. The point of a paper prototype is to have as few limitations as possible; did my user escape the limitations of Google Sheets?
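
Regarding the comment markers described above: if the author-name header that Google Sheets attaches to comments proves distracting, anonymous notes are one possible workaround, since a note shows only a small corner marker with no name. The following is a minimal Google Apps Script sketch of that approach; the sheet name (“Prototype”), the cell, and the note text are hypothetical, chosen only for illustration.

    function annotatePrototype() {
      // Attach prototype content as an anonymous note rather than a comment.
      // Notes display a small corner marker but carry no author name.
      const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Prototype');
      if (!sheet) return; // hypothetical sheet name; bail out if it is missing
      sheet.getRange('B36').setNote('Click here to see the year-to-date totals.');
    }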
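
The cell-click workaround could likewise be replaced with passive instrumentation: a simple onSelectionChange trigger can record each cell a participant selects to a separate log sheet, giving the researcher a timestamped trace to review without asking users to click for the facilitator’s benefit. Another minimal sketch, assuming a pre-created sheet named “SelectionLog” (again, a hypothetical name):

    function onSelectionChange(e) {
      // Simple trigger: Sheets runs this automatically whenever the selection moves.
      const log = e.source.getSheetByName('SelectionLog'); // hypothetical log sheet
      if (!log || e.range.getSheet().getName() === 'SelectionLog') return;
      // Record when the participant focused a cell and which cell it was.
      log.appendRow([new Date(), e.range.getA1Notation()]);
    }

One caveat: simple triggers generally fire only for users who can edit the file, so participants would need edit access, which itself signals interactivity and raises its own version of the priming question.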

In paper prototyping, we invite people with little or no technical background into the design process. Yet we cannot ignore the expectations that they might have about what something electronic might be capable of doing. Ultimately, I got the answers that I needed. I was just surprised—and maybe a more seasoned UX researcher simply wouldn’t be—at how the test system influenced user conceptualizations of the product as either an electronic system or straight content as they interacted with it.

Yet whether users conceptualize the content as a “paper” document, a “website,” or an “electronic” document with assumed capabilities is a critical UX inquiry. It might best be determined prior to prototype development, or better defined in the test-session script.
