Cobwebbed spreadsheets, UX/business conflicts, and research through litigation
A special link-fest edition of The ¼" Hole

June is upon us again! Meaning: conference season for UX researchers is at its crest. In the span of a month, we get UXRConf and Quant UX Con among many others.1
I hope you’ve had the chance to benefit from all the knowledge and experiences being shared. This week, I’m headed to Austin to deliver two talks at UXPA — so we’ll be resuming our annual tradition of a special link-fest2 edition of The ¼" Hole.
Next month, we’ll resume our regularly scheduled programming with write-ups related to those talks. Until then, sit back, relax, and queue up some tabs for your reading breaks between conference sessions — and if you’re hungry for more, check out last year’s roundup.
There is no one way to do UX research (Gregg Bernstein)
Layoffs have hit the tech sector hard over the past 12 months, and there is some evidence that UX has had an even rougher go. One explanation going around is that we’ve largely brought that upon ourselves, so UX research is facing a reckoning. I’m not convinced by this narrative, as I’ve written elsewhere — and I think Gregg adds valuable perspective to the conversation.
The not-so-subtle art of job searching in the open (The Career Whispers)
An unfortunate consequence of the layoffs and the corresponding drop in available roles is that job seekers are spending more time on the market. Veteran UX leader Andy Warr has shared a wealth of his past and present experiences with the discipline throughout his search, and we’re all better for it. Others may do well to consider the “do things, tell people” approach.
Research is a job that benefits businesses first, users second (Tech Workers Coalition)
Many of us enter UX with the ideal of helping people, removing friction and frustration from their lives. Unfortunately, what’s best for users sometimes conflicts with what’s best for the business, and the final decisions are ultimately out of our hands. Part of our role is balancing these two competing goals — but as Claire writes, that’s not always possible.
Calendar analysis: my day-to-day as a Principal UX Researcher (Lisa Koeman)
When job candidates are given the opportunity to ask questions about the role, they often want to know what a typical day looks like. There are lots of articles that answer that question in a general way, but comparatively few that take Lisa’s approach: using actual weekly schedules as data. I’m inspired (and encourage others) to do something similar sometime.
No one cared about my spreadsheets (Econlib)
Writer Bryan Caplan makes an interesting observation about the most common criticisms of one of his popular books: nobody actually checked his math. Though we in UX research also draw conclusions from data, it is likely true that our conflicts over those conclusions aren’t really about the data. (Previously on The ¼" Hole: what actually persuades.)
The Kano model, revisited (P. 25, Proceedings of the Sawtooth Software Conference)
Since 1984, Dr. Noriaki Kano’s feature prioritization model has been popular among product teams. But like any methodology, it has its shortcomings — and quant UXR pros Chris Chapman and Mario Callegaro point out some serious ones, including questionable item validity and low small-sample reliability. As a Kano fan, I’ll follow this debate with interest.
Human factors guidelines for presenting quantitative data (Human Factors)
Doug Gillan (my advisor) and colleagues note with some irony that many submissions to a journal about designing with human capabilities in mind include visualizations that were seemingly designed without the reader in mind. Using that rubric, they propose principles for effective data presentation, summarized in a helpful flowchart.
Fifty psychological and psychiatric terms to avoid (Frontiers in Psychology)
Far from dogmatic language policing, the authors have compiled a compelling list of “inaccurate, misleading, misused, ambiguous, and logically confused words and phrases,” including some common items like: operational definition, statistically reliable, and scientific proof.
Cognitive bias in LLMs (ArXiv preprint)
Large language models are AI systems trained on vast corpora of human-produced text to model the relationships between words. And though it is true that these systems are in some sense intelligent, it is also true that they may reproduce the flaws in human intelligence, like our many well-documented cognitive biases, as demonstrated in this clever study by Alaina Talboy and Elizabeth Fuller.
Lightning round
The last man without a cell phone • Hollywood professionals give casually sophisticated commentary on the UX of streaming (h/t Steve Portigal) • When do visualizations persuade? • A primer on designing mixed methods studies • What’s the deal with curved lines in design? • The value of inconvenient design • HCI research through litigation • How UX research teams allocate budgets towards their hierarchy of needs • A community for UX freelancers • Greater variability in workplace rhythms may reduce emotional distress in knowledge workers • The seven levels of busy
Drill deeper
Depth is produced by Drill Bit Labs, a leading UX and digital strategy consulting firm working side-by-side with UX and product design leaders to elevate their digital strategy, delight their users, and outperform business goals.
Whether you need guidance on specific methods or broader strategic support, we’re here to help. How we help: user research projects that inform confident design decisions and optimize digital experiences, live training courses that teach teams user research skills, and advisory services that improve UX processes and strategy.
Connect with us to discuss your upcoming projects or ongoing UX needs.
1. A small plea to any readers who may also be conference organizers: might we consider spreading things out across the calendar a bit starting next year?
2. With apologies to Clive Thompson.