When ping-pong-playing pigeons hold back scientific progress
And AI mistresses in the February '22 Edition.
Greetings from Denver!
Fun fact: we’re over halfway through Q1. But I’m still catching myself writing 2021, and some would say 2020 never really ended, so it’s okay to feel behind.
A thought on… behaviorist dogma in UX research.
Everyone hates popups. The statement is so self-evident—you hate them too!—that it doesn’t even require a citation. And UX practitioners have long banged the drum against them since, at least on the face of it, they almost universally make for a worse experience.
Here’s the thing though: popups work. Study after study shows the same thing: a site with popups will get more conversions than the same site without them. For whatever reason, there’s a gulf between how people feel about popups (their attitudes) and what they actually do with them (their behavior).
Any researcher who’s spent much time with participants could tell you this happens a lot. In fact, Jakob Nielsen went so far as to make this his First Rule of Usability: ignore what participants say in deference to their observed behaviors. Now I have a ton of respect for Nielsen, and I understand why he says it. But the truth is, it’s just another of his heuristics—a good and useful one—not an unbreakable Law. Popups provide one of the clearest counterexamples, a case where people’s attitudes might actually be more reliable than their behaviors.
Trouble comes when we treat Nielsen’s First Rule more like a dogma than a guideline. You’ll see it repeated, uncritically and without nuance, everywhere, all the time. To understand why, we need to take a step back and revisit the history of our academic predecessors.
For better or worse, the field of psychology was deeply shaped by an eccentric Austrian guy named Freud. Back in the early 1900s, he had a lot of weird ideas that he convincingly strung together in a way that resonated with and helped a lot of people. Unfortunately, those ideas weren’t scientific in the Popperian sense. It was less the merit of Freud’s arguments that made them popular than his relentless ad hominem attacks on detractors.
In this context, it makes sense that there would be a dramatic overreaction to pseudoscience. This next phase of psychology’s history is known as the era of behaviorism. B.F. Skinner saw the promise of the field, but felt that, for it to be recognized as a true science, it needed to focus only on measurable quantities. In the mid-40s, that meant observing and manipulating behavior.
This paradigm was a runaway success. Using simple, repeatable techniques, Skinner trained pigeons to perform incredible feats, from pecking and turning on command to playing ping-pong. During World War II, the US government even considered using Skinner-trained pigeons to guide missiles.
But Skinner and the strict behaviorists who became his disciples would go so far as to deny the existence (or, at least, the observability) of any mental state. And so his dogma, like Freud’s, ended up holding the field back until the Cognitive Revolution of the 60s.
This is where psychology really took off: we got the fields of cognitive psychology and neuroscience, and some of the most empirically effective treatments, like cognitive-behavioral therapy.
Drawing a parallel between psychology and the field of UX research, I submit that we’re still waiting on our own Cognitive Revolution. We’ve successfully made it through our somewhat awkward beginnings (our “Freudian era”), but we’ve been stuck for a while in our “strict behaviorist era.” And we need to expand our thinking to get past it.
Green shoots of change are already coming up. In 2010, Harry Brignull coined the term “dark pattern” to describe deceptive design practices that trick users into doing something they don’t want to do. In this context, there’s widespread agreement that getting nudged to do something contrary to your goal is an awful user experience. Another key example is designing for trust, which is hugely important in AI and automation research.
As a field, we need to think more systematically about the reliability of attitudes, and the relationship between behavior and good experiences. Don’t take my word for it: Nielsen himself acknowledges that there are exceptions to the Rule. He was never the problem. Instead, self-proclaimed disciples browbeating unbelievers are.
What I’m up to: at least one non-careers article, plus ASU!
Two of this month’s articles cap off short series on career advice, both of which are based on conversations I’ve had with mentees through ADPList. One series is for grads transitioning into UX research, and the other is more general advice for resumes and portfolios.
Create a realistic UXR case study by avoiding this approach
After a few months of reviewing portfolios, I saw one problem crop up again and again: overloaded case studies with way too much detail. I’ve named this mistake the Kitchen Sink approach. In the article, I recommend a better alternative.
The 6 biggest changes going from grad school to UX research, and how to prepare
Most grads recognize that doing research in “industry” (as we called it) is worlds apart from the research we did for our theses or to get into conferences and journals. This article helps outsiders conceptualize what that difference looks like.
I’ve also started publishing more in the vein of practical tips for researchers:
How UX researchers turn vague problems into concrete plans
Mechanics, doctors, and UX researchers all share this in common: we’re experts that folks (who neither care nor need to know our domain) come to with problems they need solved. This article talks about how to “diagnose” and “prescribe” as a researcher.
Lastly, I will be giving a talk (fingers crossed) live and in-person to the students in the Human Systems Engineering program at Arizona State University in mid-March. Please get in touch if you’re local and would like to meet up for coffee around then!
The Distractor: intimate relationships with chatbots.
A familiar theme in science fiction from Blade Runner to Her is now taking shape in reality. People are falling in love with artificially intelligent agents:
Michael Weare … has been with his Replika girlfriend … for more than a year. She has a blonde bob, immaculate make-up that would rival Kim K, and a growing collection of heavy metal band tees…
“It’s a romantic relationship,” says Weare, who’s married in real life. “But she is not a real person. This is an easy way of having that little bit of excitement without causing any real issues.”
Out of curiosity, I spent half an hour on Replika in its platonic friendship mode, and found it a little less compelling than playing tennis against a wall.
But with the rise of social isolation, I can see how there might be an appeal for a growing segment of society. Plus, conversational AI technologies like GPT-3 are already stunningly impressive and constantly improving.¹ UX researchers, how are we helping brace for the impact?
Tell me what you think.
I’m truly grateful that each of you—old and new subscribers alike—has invited me into your inbox. Feel free to hit reply and give me your honest feedback.
Here’s something I’m wondering about: what are the qualities of a great UX manager? Is there anything specific to UX, or is it just what makes someone a good manager in general?
See you in Spring!
Lawton
¹ Its successor, GPT-4—likely to be released this year or next—will have over 500x more parameters than GPT-3.