Lee Gingras
UX Research & Strategy

In Defense of Doing It the Hard Way

I’m a published author! My article “In Defense of Doing It the Hard Way” has been published in the March + April issue of Interactions as part of the Evaluation and Usability Forum. ACM allows me to post the author’s version in my personal collection as well, so here it is!

The job of a user-research professional is undoubtedly a hard one. Understanding problems, getting the right sample of people in our labs, extracting insights from data, and evangelizing the user’s needs can make for challenging work. At the same time, rewards abound in this profession: the joy of diving into a new topic, engrossing conversations with some of the hundreds of people that pass through my lab, and of course the aha moments—those glimmers of awesomeness alone more than make up for any difficulties. But every now and then I wish it were all a little…easier.

In the heat of the workweek, I’ve been tempted by quick fixes and shortcuts. A glance at the battlefield of user research tells me I’m not alone. It seems as if every week I read about some paradigm-shattering new tool that promises to blow my mind, crunch all of my data by 5 o’clock, and have dinner on the table by 7. Tools like these are often pitched to us, an eager audience of open-minded, tired, bored, inexperienced, or budget-starved user-experience evaluators.

These promises are rarely fulfilled. I still end up spending hours hunched over my computer, or I don’t get the insights I was hoping for, or the quality of my work just plain suffers. After many failed experiments, I’m starting to think that these gimmicks and borrowed techniques from other fields amount to shortcuts, and shortcuts are not exactly formulas for success. Worse, I’m concerned that the quality of our work as a whole suffers: Every time we cut corners, we deliver subpar work that waters down the value that user research can offer.

We might not intend to skimp on our work, or we might feel pressured to cut corners in our quest to deliver more work more quickly, but no matter how you slice it, shortcuts aren’t actually doing us any favors. Shortcuts don’t help us produce good work, and if we strive to produce good work, shortcuts don’t actually save time. We have to do it the hard way.

What Do Shortcuts and Cut Corners Look Like?

Let’s get something out of the way first: When picking on shortcuts, I’m not targeting appropriate guerrilla user research. I have no issue with designers or one-man bands who just want to know how to improve their products. They don’t need to do it the hard way, and when they are ready to do it the hard way, they’ll approach their work differently, either by learning new skills or bringing in a seasoned researcher.

Rather, this discussion is intended for people whose primary focus is user research, day in and day out, whose job is to learn more about users and to understand their context. Solid user research requires both sweat and diligent work. Whether we are in the lab running usability studies or out in the field conducting ethnographic research, our core value as user-research professionals lies in our deep understanding of context, our analytical skills, and our ability to bring empiricism into the product-development process.

To put it another way, user-research professionals get hired not just because we are good at excavating truth, but also because we have a knack for mapping the knowns and unknowns around those truths, finding new points to investigate, and communicating the core truths that we learn in a way that’s helpful and productive. When we do that, we can help a design plow all the way to the other end of development, through shifting requirements and slippery scopes, without ever losing focus on the needs that the design was built to address.

It’s our job to ensure that rigor backs our process, and that we are actually being as precise in our measurements as we think we are. Unfortunately, in our line of work there are many opportunities to deceive ourselves into thinking we can save time, energy, or money without sacrificing the precision and accuracy of our work. Anything that doesn’t require much sweat, plodding, or careful attention to detail is a shortcut, whatever form that shortcut may take. Sometimes a shortcut promises to reduce the amount of time we spend planning and executing studies. Other times a shortcut claims to make analyzing data easier. Still other times, a shortcut takes the form of a misapplied tool.

The Shotgun Shortcut: Executing Studies Ineffectively

Generally, the more time a shortcut claims to save me, the more suspicious I am of it. A case in point: I’m highly suspicious of unmoderated open card sorts, in which remote participants are given a heap of cards and asked to sort them into categories and label the categories. Hosting card sorts online saves time and resources by allowing users to complete them at their leisure, without the need for professional attention. However, this adaptation comes at a cost. It sacrifices the main benefit of that particular research methodology, namely access to the rich, qualitative verbal report from our participants that helps us understand the way in which they construe the world. With this understanding, we can address the why of things. Unmoderated card sorts can’t give you this why; they can give you only the what and the when.

One rebuttal is that if we run a large enough sample, we can call it quantitative data. But this is still qualitative data; in adapting this methodology poorly, we lose its intent and its strength. When we mechanize it and remove that ability to follow our participant’s train of thought, we shove all that beautiful qualitative data into a quantitative box. The result is a monster pile of data, stripped of context and of any good foothold from which we can understand what these categories and labels mean to the user and their work.
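
To make that concrete, here is a minimal sketch, using invented sorts and made-up card names, of everything an unmoderated study actually hands back: a tally of which cards landed in the same pile together. The what is all there; the why is nowhere to be found.

# A hypothetical sketch: reducing unmoderated card-sort results to a
# card-by-card co-occurrence tally. It captures which cards were grouped
# together, and nothing about why participants grouped them that way.
from collections import Counter
from itertools import combinations

# Each participant's sort: category label -> list of cards (invented data)
sorts = [
    {"Account": ["Login", "Profile"], "Help": ["FAQ", "Contact us"]},
    {"Me": ["Login", "Profile", "Contact us"], "Support": ["FAQ"]},
]

pair_counts = Counter()
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")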

I have put online card sorting to good use before, of course: in validating a preexisting idea. Only after I’ve interviewed enough people, run in-person sorts, and developed a prototype of a navigation tree can I bring the online card sort out of the toolbox and test the ideas I’ve come up with. The context and the why are still missing, but that’s not what I’m trying to get at in this particular study. I fill in those gaps by triangulating data collected from other studies. Of course, once we undertake the difficult task of piecing together data from different sources, we are no longer cutting corners.

The Drive-by Analysis Shortcut: Skimping on Thinking

From fear of analysis paralysis (spending too much time poring over data, needlessly turning over stones, and beating long-dead horses), we can swing to the other extreme and rush through analysis. I have learned that when something does need to be examined thoroughly, nothing can substitute for the grunt work of teasing out answers and squinting our eyes to see if the puzzle is yet complete.

Web analytics readily fall into this trap. They are indispensable and you will have to pry them from my cold, dead fingers, but they are stunningly easy to screw up. Here is a basic example that many of us have grappled with: “Time on page is up 20 percent since last month!” If we take statistics like these at face value, we might consider it a win, but we need to dig deeper to figure out what the numbers mean. What kind of design changes have we made since then? Do people spend more time on that page because it takes longer to get stuff done? Did our South Korean users abandon us, taking with them their stunningly high-speed Internet? Increased time on page is just one of many deceptively simple numbers that, without context, raises more questions than it answers.
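
As a sketch of the digging I mean, with invented numbers and the pandas library assumed, breaking the very same metric out by segment often tells a different story than the headline figure does:

# Hypothetical data: the aggregate time on page jumped, but the per-country
# breakdown shows the average moved because one segment disappeared,
# not because anyone is reading more.
import pandas as pd

visits = pd.DataFrame({
    "month":   ["Jan"] * 4 + ["Feb"] * 3,
    "country": ["KR", "KR", "US", "US", "US", "US", "US"],
    "seconds": [20, 25, 80, 90, 85, 95, 90],
})

print(visits.groupby("month")["seconds"].mean())               # the headline number
print(visits.groupby(["month", "country"])["seconds"].mean())  # the real story

In this made-up example the headline average climbs by roughly two-thirds, yet no individual segment changed much; the fast-Internet, short-visit segment simply left.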

No single bit of analytics data can stand by itself, and analytics is at its most powerful when it’s part of a holistic picture of users, their goals, and their environment. When we bring together insights from other qualitative and quantitative studies, our understanding of the truth sharpens. Because of this, I approach analytics in much the same way that I approach salt: as an essential seasoning to (almost) every main dish.

For example, while planning an in-person usability test, I first sniff out the goals of the study and what we need to learn, and then find ways to season those questions with analytics. If we’re concerned that a new form field will frustrate users, before we conduct a usability test we compare before-and-after numbers on form-completion rates, exit page destinations, and time on page. This helps identify things to watch for in the lab. If analytics show a pattern of folks heading en masse for the “about our company” link, we can keep a special eye on our in-person participants’ behavior and expectations around that link.
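
A rough sketch of that before-and-after comparison, again with invented numbers, might look something like this; the output is not a verdict, only a list of things to probe once participants are in the lab:

# Hypothetical before/after numbers for the form page, used only to flag
# what to watch for in usability sessions, not to declare success or failure.
before = {"form_starts": 1200, "form_completions": 960, "median_seconds": 45}
after  = {"form_starts": 1150, "form_completions": 805, "median_seconds": 70}

def completion_rate(stats):
    return stats["form_completions"] / stats["form_starts"]

drop = completion_rate(before) - completion_rate(after)
print(f"Completion rate: {completion_rate(before):.0%} -> {completion_rate(after):.0%}")
print(f"Median time on form: {before['median_seconds']}s -> {after['median_seconds']}s")
if drop > 0.05 or after["median_seconds"] > before["median_seconds"]:
    print("Flag: watch how participants handle the new field in the lab.")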

At the end of the day, our job is to solve problems. Analytics, like anything else, is a means to an end. Taking an iterative, systematic, and rigorous approach to problem solving yields a clearer connection between the problem, the research done around it, and the design that gets to the root of the issue.

The Square Peg in a Round Hole Shortcut: Using the Wrong Tool for the Job

After moving to a new apartment in an unfamiliar part of town, I drove to work using a familiar route that added five miles to my daily commute. I did this not just once or twice, but every day for over a month. I vaguely knew that there was another road to town, but I was afraid of getting lost, so I didn’t venture out of my comfort zone. In user research, it’s also tempting to stay inside our comfort zone and stick to tried-and-true methodologies even when they are not appropriate for the job. I’ve watched professional user researchers adapt them, stretch them, and hack them together with other methodologies. Inevitably, these “franken-methodologies” resemble a snake with legs stapled onto it, sadly attempting something for which it was never built.

We are not always as careful as we should be when planning research. Once we have identified a research need, it’s fantastically handy to have a wide range of tools and approaches to pick from to address that need. However, different tools answer our questions from different angles, and sometimes we simply pick the wrong angle, ending up with empty or inaccurate answers. All methodologies bias research in some way; when we understand what our bias is, we can take steps to address it. Because of this, it’s essential to understand the strengths and weaknesses of our tools, and what implications that holds for our results.

And here’s a good example: Eye tracking, in all of its popular glory, is a notoriously misapplied methodology. Eye-tracking technology monitors where and for how long people’s eyes fixate on a target. The original idea back in the day was to learn how people read and to correlate eye fixation with cognition. It was long the exclusive tool of labs with very deep pockets, but times have changed, and at UX conferences these days you can’t throw a rock without hitting an eye-tracking vendor. These vendors claim to deliver the power of the eye-tracking lab at dirt-cheap prices. Eye-tracking presentations and seminars (often given by said vendors) spring up like weeds, offering “eye-tracking 101” and “eye-tracking boot camp.” It’s not so expensive, they promise, and not so hard. Anybody can do it.

Great! What’s the catch? Well, eye tracking in UX is based on the premise that the resulting heat maps will reveal thoughts that users don’t verbalize, because they are not conscious of their attention processes. Unfortunately, the heat-map data does not actually represent the user’s mental processes. Like a chocolate cake, it has to be baked before it can be eaten. Cognitive scientists understand this. When they use eye-tracking studies to learn how we process information, they actively take account of all relevant work, no matter the methodology or the discipline. When vendors promote eye tracking as easy and accessible, they gloss over that work, and because the heat maps look scientific, we fall for it.
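
To see how little the map itself contains, here is a minimal sketch, with made-up fixation records, of what a heat map is under the hood: coordinates and dwell times binned onto a grid. Where the eyes rested is all it encodes; what the participant understood, intended, or deliberately skipped is simply not in the data.

# A hypothetical sketch: a "heat map" is just fixation durations accumulated
# into cells of a screen grid. The numbers say where eyes rested, not what
# the participant was thinking while they rested there.
import numpy as np

# Each fixation: x and y in pixels, duration in milliseconds (invented data)
fixations = [(120, 80, 300), (130, 90, 250), (640, 400, 600), (620, 410, 150)]

width, height, cell = 1280, 720, 160           # screen size and grid cell size
heat = np.zeros((height // cell, width // cell))
for x, y, duration in fixations:
    heat[y // cell, x // cell] += duration     # accumulate dwell time per cell

print(heat)  # the entire "finding", before any interpretation is layered on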

It’s easy to understand why eye-tracking maps are so easily mistaken for findings. We intuitively know that raw data is messy, so if something looks this polished, we assume it must already be analysis-ready. Unfortunately, because eye tracking is so deceptively easy, it enables enormous fallacies in user research. It’s marvelous at proving other people wrong (“See, I told you green wouldn’t work”), proving our own points (“If the button were red, people would see it”), drawing shaky conclusions (“It’s not that people don’t want to use it, it’s that they don’t see it”), and discrediting our profession (“This isn’t so hard. Remind me again why we’re paying an expert to do this?”).

Like the other shortcuts I’ve mentioned here, eye tracking gives a dangerous amount of latitude for anybody to make their own guesses and draw their own conclusions. Eye-tracking data seems approachable and looks fun to play with, but it arrives stripped of all meaning and context, and when we take it at face value, we run the risk of drawing unsubstantiated conclusions. Unfortunately, our clients may also mistake eye-tracking data for insights, and it’s our responsibility to ensure that they don’t draw unsubstantiated conclusions either. Our clients (who are not trained in the fine art of considering data in a holistic context) need solid information to make solid business decisions. In supporting that need, we must ensure that our insights are rich and that they provide information our clients can trust.

Certainly, eye-tracking studies can be used constructively in our research. But this requires them to be carefully written, carefully moderated and observed, and very carefully analyzed. The results must be situated in an existing understanding of the user’s intentions and workflows. In short, a successful eye-tracking study calls for a skilled practitioner with a sixth sense for subtleties.

And really, if you’re that good at making sense of patterns of user behavior, you probably don’t actually need eye tracking to succeed. Everything that you can learn from eye tracking at this point you can learn using simpler, cheaper methods. If you actually do all of that work, it’s no longer a shortcut. You’re back to doing sweaty labor.

What Can We Do, Then?

These are pitfalls for new and seasoned user researchers alike. Folks new to the field, including those transitioning from research in guerrilla-style environments, might inappropriately adapt techniques they already know, or they might address weak points in their research by Band-Aiding over them. Seasoned practitioners might tire of dealing with stakeholders who don’t care about deep, rich data, so they might look for ways to develop more insights faster, or yield to bad compromises.

Shortcuts, in all their varied and sneaky and tricky disguises, can entrap anybody along the entire spectrum of experience and cause our work to suffer. Even if our enthusiastic adoption of shiny things distracts us from noticing the weaknesses in our research, others will notice the problems. This seed grows into distrust of our individual work and has the potential to scatter seeds of distrust of the user-research profession.

Many shortcuts share the shiny allure of modern, sophisticated-looking technology, but in the end they are a poor substitute for our critical-thinking skills. They might look good, but they are compromises, and they don’t replace the fundamental skills we should be developing. These skills are not new; they are the skills polished by curious people across all scientific fields: Once we have made an observation and defined the problem, we form a hypothesis and test it. Those skills take a lifetime to perfect, and when we are neck-deep in fads, we can’t hone them. We might suffer the illusion that superficial understandings will suffice, and we might conclude that our restless minds are at their sharpest when wielding the newest of an endless series of gadgets, but in reality we’re letting the best things about us—our curiosity and our intellect—waste away.

This is perhaps the saddest thing about shortcuts. While we’re leaping from gimmick to gimmick, we forget the reason we started poking around and asking questions and knitting our brows in puzzlement. We forget about the basic human need, as old as the wheel, to understand the world and its people. This is a huge undertaking. We should do it right.

About the Author

Leanna is the User Research Coordinator at ITHAKA and a problem-solver by trade. As part of her calling to create holistic and delightful experiences, she manages research studies, conducts social experiments on teammates, and juggles quantitative and qualitative analysis.

© ACM, 2012. This is the author’s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Interactions, Vol. 19, Issue 2 (March + April 2012), http://doi.acm.org/10.1145/2090150.2090168.