When pet peeves discredit the whole field of UX
And supersonic robopilots in the January '22 Edition.
Happy New Year!
And hello from snowy Denver!
Hope you all are well, warm, and staying satisfactorily busy in these dark days of winter.
A thought on… hot takes and newborn onesies
You wouldn’t know it from the state of discourse in our field, but user experience research long ago left its awkward teenage phase. You could date the origin to 1993 when Don Norman coined the term, or a lot further back to Bell Labs and early human factors or HCI research. I’ll go with Joe Dumas, who argued that our kind of work really began to crystallize in the late 80s.
And about 18 years later, as the field reached maturity, Arnie Lund began pushing for a more multidisciplinary field, incorporating a wider range of methods to capture a more holistic view of the user experience. To an extent, that’s exactly what’s happened. We have a lot of methods. Some of them are classics, unlikely to go anywhere. Others pop up every few years, riding the wave of some business trend.
Researchers tend to be more curious and more skeptical than the general population. They love pursuing understanding, which we know is always an asymptotic endeavor, and they know how easy it is to be blinded by bias. So it’s natural for researchers to think twice about any method that hasn’t been time-tested or thoroughly validated.
But what this leads to is a popular genre of writing in UX research: a methodological criticism that proves too much. When you take a close look at the arguments and apply them generally beyond the topic at hand, it ends up invalidating a lot of what we know to work.
Take the much-maligned Net Promoter Score. It’s harmful—indeed, the worst (and in case people don’t know how you really feel, you can dress your baby in the message). Why? Well, because people are so dumb that they can’t differentiate a 6 from a 10. Plus, perverse incentives might lead some who ask it to game the system. And honestly, that weird calculation stresses me out.
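For anyone stressed out by it, the calculation itself is the standard one Reichheld published: the percentage of promoters (9–10) minus the percentage of detractors (0–6), with passives (7–8) counted in the denominator but otherwise ignored. A minimal sketch (the function name and error handling are my own):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count toward
    the total but neither bucket. NPS = %promoters - %detractors,
    so it ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))
```

Note the quirk that fuels the criticism: a 7 and a 0 are treated very differently, but a 0 and a 6 are treated identically.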
But every one of these criticisms could, in principle, apply to all the scales we use on a regular basis, like the SUS or SEQ. Personally, I’ve seen far more participants get confused answering the SUS than the NPS, but we still use it—and rightfully so. A skilled facilitator can help mitigate any confusion. And you can assume that the noise affects every respondent more-or-less equally, making comparisons totally legitimate.
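If anything, the SUS has the fiddlier formula. Brooke’s standard scoring alternates between positively and negatively worded items—the very wording pattern that tends to trip participants up—and then rescales to 0–100. A sketch of that published procedure (the function name is my own):

```python
def sus_score(responses):
    """Score one System Usability Scale questionnaire (Brooke's method).

    `responses` is a list of ten 1-5 agreement ratings. Odd-numbered
    items are positively worded, so their contribution is (response - 1);
    even-numbered items are negatively worded, so theirs is (5 - response).
    The summed contributions (0-40) are multiplied by 2.5 for a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

A respondent who misses the reversal on even items and answers as if every item were positively worded will tank (or inflate) their score—exactly the kind of noise a skilled facilitator helps catch.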
Or you may have heard that surveys are dangerous. After all, you could ask a bad question! Really, you should be trained to ask better questions and to analyze the data properly, checking it for anomalies before you report it out. But… wouldn’t all these “dangers” apply to good ol’ user interviews as well? I sure hope not, otherwise researchers shouldn’t ask anything except validated psychometric scales. Even if a colleague and I are moderating a single study using the same script, we’re never going to conduct it exactly the same.
We largely got the multidisciplinary field that Lund wanted. We’re lucky to have so many methods and approaches as tools in our toolkits. By all means, we should look at them critically. But there’s no universal tool that applies equally well in every situation. And even the best tool for the job could be used poorly.
In many of these cases, we should be holding the mirror up to ourselves. When a researcher doesn’t choose the right method, or perhaps chooses the right one but fails to use it properly, that’s not a judgment on the tool. It’s a reflection of the researcher.
What I’m up to: helping others ease from academia to UXR.
Just one article between last month and now, continuing the series for transitioning grads. More to come here:
Folks get a lot of valuable experiences in grad school—in many cases, doing the same kinds of things they might later do at a company. Unfortunately, this often isn’t recognized, but grads can still find upside if they know where to look.
The Distractor: Maybe, soon, flying will suck less.
Beyond date selectors and the choice architecture of seats and upgrades, air travel doesn’t come up much in user experience—despite aviation’s importance to the foundation of the field, and the multifaceted ways in which flying becomes a more miserable experience each year.
But we may see some major innovation in flight this decade, starting with autonomous commercial jets:
We can't make flying cheaper or increase the traffic density without making it less safe. We can't make it safer without making it more expensive or blocking off more airspace. There is a shortage of pilots, people do want to do more economic activity in the air, nobody wants to compromise the safety, so how do we move the Pareto front forward instead of being constrained to that surface of trade-offs?
Trust is going to be a major hurdle, and a big opportunity for researchers.
Elsewhere, a Denver startup is progressing steadily toward scalable supersonic passenger flights. Think about it: Seattle to Tokyo in four and a half hours. New York to London in three and a half hours. United has already ordered 15 planes!
Tell me what you think.
As always, I’d love to hear your feedback—just hit reply. If you’ve found this issue interesting or useful, please forward it along to someone else.
Last thought: I’m curious how folks are feeling about all the web3 hubbub. How should we think about it, and approach it, as UX researchers?
Until next time,
In fairness, I don’t think Jeff and co. are arguing this, and none of their research has borne that hypothesis out. But the fact that they’ve run a bajillion studies on variations of this question suggests that a sizable population of researchers actually believes this, and needs substantial data to be convinced otherwise.