Review of the New UX Book "Remote Research"
NOTE: UX Magazine is running a giveaway of the book reviewed in this article, and it's not too late to enter!
Dammit. I wanted to write a book like Remote Research (Rosenfeld, 2010), but these guys beat me to it. Though I probably wouldn't have done it as well, or as thoroughly, or as amusingly as Nate Bolt and Tony Tulathimutte did. And if I didn't like those guys and respect them so much, I'd be really upset. I'll admit that my intention was to just skim the book enough to write a minimally credible review. But I couldn't put it down. (If that makes me a geek, I'm okay with that.)
Doing remote research is all the rage. I first started doing it when my clients wanted geographic diversity but didn't want to pay for travel to the geographic locations. This trend continues even as budgets begin to grow back to pre-economic-crisis levels because it just makes good sense; remote research is a smart tool to have in the toolbox.
There are many advantages to conducting remote sessions, and the UX world is now seeing them. It's very inexpensive. You can gather a lot of data without leaving the comfort of your own office. Team members who are remote can easily observe sessions. Even if you don't buy special tools for conducting online sessions, it appears pretty easy to put a study together.
The discussion lists are full of questions about which tools to use and how much they cost. Nate and Tony have anticipated every question and answered each clearly and intelligently. This book is smart, it's plainspoken, and it is clear that these guys know whereof they speak. But they haven't said, “Do it our way or hit the highway.” They present the options, share their experiences, and call on others to tell their stories (I'm one of them). We even get lessons learned.
For example, one of the most difficult things to do in any usability test (especially in automated remote usability tests) is to create tasks for participants to perform. Nate and Tony do a masterful job of using their own experiences to illustrate what to do and what not to do, and why. In fact, this section is worth the price of the book alone. They explain beautifully how to develop task scenarios based on research questions. I actually think their section about task development (or "elicitation," as they call it) is better than the one in my book, Handbook of Usability Testing, Second Edition. I absolutely intend to point people to their explanation every chance I get. This book is loaded with just those kinds of gems.
Though there is a lot of information about tools, applications, and services to use, this book is not masquerading as a third-party user manual for any of them. The undercurrent here is an outline for performing research projects, including advice about methodological subtleties. I just love the section in Chapter 5 called "Quiet, Chatty, Drunk, Bored, and Mean," about how to handle different types of problem participants. And while the authors make no claims of having done "remote ethnography," they share some really useful tips about the lovely qualitative information you might pick up about the participant's physical context if you are very observant.
The heart of the book is a straight read: conversational prose about what remote research is, why you would do it, when it works best and when it might not answer your questions, and how to do it to gather the data you need. If you're wondering whether remote research is the right approach for your research questions, you'll get well-informed answers in this book.
After that, there are also dozens of neat inserts and insets. From excellent, brief case studies to side-by-side comparisons along with samples of deliverables, the book offers tips, tricks, and techniques told by the authors and a cast of others who have trodden these remote waters and lived to tell the tale.
The greatest contribution this book makes to the UX world, however, is in how Nate and Tony handle the definitions, constraints, and advantages of moderated versus automated remote research. I personally am not a fan of automated tests, especially for teams that are new to user research. People who ask me about it seem to be hoping to put a website out there and then sit back and wait for the data to flow in. But as Remote Research points out, performing a good, useful automated test takes careful research design and scripting.
For this public service, I salute the authors. As the guys say, "don't waste your life doing pointless research." Get this book. Then email Nate and ask how he talked Rosenfeld Media into putting a pink cover on it.
You can download a sample excerpt from Remote Research here.
ABOUT THE AUTHOR(S)
Dana has helped hundreds of people make better design decisions by giving them the skills to gain knowledge about users. She's the co-author, with Jeff Rubin, of Handbook of Usability Testing, Second Edition (Wiley, 2008). Visit Dana's website at usabilityworks.net or follow her on Twitter @danachis.