When 250,000 participants aren't better than 10
Plus overconfident digital brains in the Inaugural Edition.

Hello, world!
This is the first issue of the newsletter, and… well, it's likely been a bit since you subscribed. In case you forgot: I'm Lawton Pybus.
Thank you for being an “early adopter”! It means a lot. Going forward, I’ll be sending these on the third Sunday of each month.
A thought on… beer money and sampling bias
A lot of my day job involves complex study designs, reaching impossible-to-find participants, and doing it all with massive sample sizes. So I end up thinking a lot about how hard it is to make sure we’re reaching the right people for the best data.
Beyond the standard advice for “how many participants do we need?”, the composition of that sample becomes extremely important. From Nature this month:
A survey of 250,000 respondents can produce an estimate of the population mean that is no more accurate than an estimate from a simple random sample of size 10. Our central message is that data quality matters more than data quantity,1 and that compensating the former with the latter is a mathematically provable losing proposition.
How does something like this happen? In short, no matter the sample size, garbage in, garbage out (GIGO) applies. The authors continue: “When biased samples are large, they are doubly misleading: they produce confidence intervals with incorrect centres and substantially underestimated widths.”
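To make the quoted claim concrete, here’s a quick simulation sketch. It’s mine, not the authors’: the population, the response behavior, and every number in it are invented purely for illustration. A mildly self-selected sample of roughly 250,000 lands a narrow confidence interval around the wrong answer, while a simple random sample of 10 is imprecise but at least brackets the truth.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: 1,000,000 people with a 0-100 "attitude" score.
population = rng.normal(loc=50, scale=15, size=1_000_000)
true_mean = population.mean()

def mean_with_ci(sample, z=1.96):
    """Sample mean with a normal-approximation 95% confidence interval."""
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(len(sample))
    return m, m - z * se, m + z * se

# A simple random sample of 10.
srs = rng.choice(population, size=10, replace=False)

# A "big" self-selected sample: people with higher scores are somewhat more
# likely to opt in, and roughly 250,000 of them end up responding.
propensity = 1 / (1 + np.exp(-(population - 50) / 15))
biased = population[rng.random(population.size) < 0.5 * propensity]

for label, sample in [("SRS, n=10", srs), (f"Biased, n={len(biased):,}", biased)]:
    m, lo, hi = mean_with_ci(sample)
    print(f"{label:>22}: mean={m:6.2f}, 95% CI=({lo:6.2f}, {hi:6.2f}), "
          f"covers the true mean of {true_mean:.2f}? {lo <= true_mean <= hi}")
```

Run it a few times with different seeds: the tiny random sample wobbles but usually covers the true mean, while the quarter-million biased responses reliably report a tight interval centered several points too high. Piling on more respondents only makes that interval tighter, not truer.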
There was a great example of this a few months back, when a viral TikTok video about easy ways to make some beer money ended up throwing off dozens of ongoing academic studies on the popular participant-sourcing platform Prolific. The creator’s predominantly female audience flooded the panel, skewing demographics in studies that didn’t explicitly control for it. (Meanwhile, in our field, using any non-essential demographic data to fill studies is becoming an increasingly controversial practice.)
We don’t necessarily need representative samples, but in my experience, researchers don’t think critically enough about sampling bias beyond asking a few standard behavioral screening questions or (more rarely) including some carefully worded attention checks. In fact, there are many subtle ways you can construct your study to invite bias, whether in the questions and tasks themselves or in how and where you’re recruiting.
I propose an exercise to bring one facet of this closer to home: sign up as a study panelist2—not for the incentives, but to understand the participant experience. Dive into some of the communities where these folks hang out and try to understand their motivations and priorities. What makes a study satisfying or disappointing? Why would they spend 25 minutes doing tasks and answering questions on your site at all?
You’ll naturally start to think about and design your studies differently. It’ll help you understand on a visceral level why, for example, that niche B2B survey isn’t filling as fast as your team would like. And it’ll make you a better researcher.
The Distractor: A new source-monitoring error?
(Right, so this will be a section for neat stuff tangential to the topic of UX research.)
From PNAS: People mistake the Internet’s knowledge for their own
Those who use Google predict that they will know more in the future without the help of the internet, an erroneous belief that both indicates misattribution of prior knowledge and highlights a practically important consequence of this misattribution: overconfidence when the internet is no longer available.
I’ll file this under evidence for the theory that 21st-century minds—whose boundaries we already believe encompass the whole body, and at times even blur into other people—now have a digital element as well.
Drill deeper
Depth is produced by Drill Bit Labs, a leading UX and digital strategy consulting firm working side by side with UX and product design leaders to elevate their digital strategy, delight their users, and outperform business goals.
Whether you need guidance on specific methods or broader strategic support, we’re here to help you achieve your goals. How we help: user research projects to inform confident design decisions and optimize digital experiences, live training courses that teach teams user research skills, and advisory services to improve UX processes and strategy.
Connect with us to discuss your upcoming projects or ongoing UX needs.
1. Emphasis mine, here and in later quoted text.
2. I’m suggesting Prolific here since a lot of those studies are basic academic ones. But generally, you know, be cool, and don’t intentionally contaminate fellow researchers’ data with your domain expertise, etc.