The Science of Uncertainty
Getting comfortable with the unknown — a Q&A with Christie Aschwanden
In addition to co-running a winery in Colorado and having been a pro cross-country skier, Christie Aschwanden is one of my favorite science writers. So I was eager to check out her new Scientific American podcast miniseries, UNCERTAIN, on the science of uncertainty.
I recently listened to all five episodes of the miniseries in a day, and invited Christie to discuss it. Below is our conversation.
David Epstein: I want to jump in with episode two of the series, because it gave the best explanation of the viral dress phenomenon (Is it white and gold or blue and black??) that I’ve ever heard. The star of the episode is neuroscientist Pascal Wallisch. How did you decide to feature the dress and what does it tell us about uncertainty?
Christie Aschwanden: This episode explores some of the ways that we can overlook uncertainty. This usually happens because we make assumptions about the world that are incorrect, and we don’t even realize we’ve made them. We think something is certain, but it’s not.
I wanted to find an example that people would be familiar with and also have some feelings about. The dress itself isn’t uncertain. If you look at it in broad daylight, it’s clearly blue and black. But the photograph of the dress that blew up the internet was taken in ambiguous light. Pascal Wallisch hypothesized that how your brain processed that ambiguity — what assumptions you subconsciously made about the lighting — shaped whether you saw the dress as blue and black or white and gold. Because these assumptions were made subconsciously, he couldn’t just ask people what they’d assumed. Instead, he needed a proxy for determining what assumptions someone’s brain had made.
DE: And what was the proxy?
CA: He settled on chronotype. He thought that if you were a morning lark, you’d spend more of your time in daylight, and your brain would be more likely to default to assuming the photo was taken in bright light, and therefore you’d see it as white (because your eyes would adjust to account for shadow effects). On the other hand, if you were a night owl, you’d probably spend more waking time at night, under incandescent lighting, and your brain would be more apt to assume that’s the kind of light shown in the photo. If that’s the case, you’d see it as blue. And indeed, when he tested this hypothesis in more than a thousand people, he found that the more someone identified as a night owl, the more likely they were to see the dress as blue, and vice versa.
In the broad sense, the assumption we all made about the dress is something that psychologists call “naive realism” — the assumption that we see the world exactly as it is, without bias. I saw the dress as white and gold, and it felt anything but ambiguous. It felt absolutely certain. And that’s the point of this episode: that sometimes things that appear certain to us aren’t.
DE: That issue — of things seeming more certain than they are — is a topic where I think you’ve done some of the best popular coverage, particularly as it relates to science itself. And in this episode you talk about a study in which twenty-nine different research groups were given the same soccer data, and asked to determine whether referees give more red cards to darker-skinned players. Twenty of the research teams found that referees did give more red cards to darker-skinned players, and nine of the research teams found no significant relationship between skin color and red cards. And this is the same data! How is this possible?
CA: A fundamental truth about studies is that the answer you get depends on how you ask the question. What are you counting as evidence? What measurements do you use? Are these measurements really quantifying the thing you care about? What statistical methods are you using to analyze the data? These decisions can have big consequences, and sometimes the researchers may not even realize they’re making them.
DE: We were just discussing how different people’s brains make different assumptions, which leads to different visual perception. And here you’re saying different research teams make different assumptions, and use slightly different methods, which may not feel like a big deal to them, but can lead to different answers.
CA: Right. If you only looked at one of these analyses, you would think: ok, here’s a pretty good answer. But if you look at all of them together, you see that it’s maybe not as certain. In this same episode, I explore a study that presented five research questions to fifteen different research teams and asked them to design studies to answer each of them. The groups used a wide range of different approaches and methods.
DE: And those different approaches presumably led to different answers…
CA: Exactly! Again, the answer you get depends on how the question is framed.
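
To make that concrete, here is a minimal sketch in Python using made-up numbers (not the actual red-card data, and not any of the teams' real analyses): the same synthetic dataset, run through two defensible analysis choices, can point toward two different conclusions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # hypothetical players per group

# Synthetic setup: both groups receive red cards at the SAME per-minute rate,
# but group B happens to play more minutes on average.
minutes_a = rng.normal(2000, 300, n)
minutes_b = rng.normal(2600, 300, n)
rate_per_minute = 0.0004
cards_a = rng.poisson(minutes_a * rate_per_minute)
cards_b = rng.poisson(minutes_b * rate_per_minute)

# Analysis choice 1: compare raw red-card counts per player.
_, p_raw = stats.ttest_ind(cards_a, cards_b)

# Analysis choice 2: compare red cards per minute played (exposure-adjusted).
_, p_adjusted = stats.ttest_ind(cards_a / minutes_a, cards_b / minutes_b)

print(f"raw counts:       p = {p_raw:.4f}")
print(f"per-minute rates: p = {p_adjusted:.4f}")
# With this synthetic setup, the raw comparison tends to look "significant"
# while the exposure-adjusted one tends not to: same data, different framing.

Neither choice is obviously wrong; they simply answer slightly different questions, which is how many careful teams can look at one dataset and come away with different answers.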
DE: In the show you discuss how science is the “process of uncertainty reduction,” but it’s slow and difficult.
CA: One of the reasons I made this podcast was to help people understand that science is a process, and we need to be careful to remember that it’s always provisional and open to updating. As the evidence accumulates, we can greatly reduce the uncertainty, but it never goes to zero. We should be careful about putting too much stock in any single study.
DE: I tend to be extra cautious about studies that are really new if they’re surprising. If they fit with an existing body of work, that’s one thing. But if it’s both new and surprising, I start out wary.
CA: New and surprising makes headlines, but it should also warrant caution.
DE: This is getting at something that has been a central issue for both of us: how to judge what scientific research seems solid enough to promote. I want to share some of what I do, and see if you have suggestions.
First, I tend to avoid relying on cutting-edge findings, which sounds lame. I think new findings are great, but if there’s no larger body of work for context, I want to wait, at least in terms of touting it as clearly true just because it’s published in a peer-reviewed journal. Second, I have a list of general red flags in my head that raise the chances of a study being a false positive: when variance around some average is reported in strange ways (or not at all); when subjects in a study are broken into groups that seem arbitrary, because that’s often a sign of slicing and dicing data in search of some positive result; or when a study claims that some small intervention leads to a suspiciously large effect. Or if scientists declare their hypotheses ahead of time, and then report results for some other question they didn’t declare.
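
To make that subgroup red flag concrete, here is a small illustrative simulation (entirely made-up numbers, not tied to any particular study): even when a treatment truly does nothing, testing enough arbitrary subgroups will often turn up at least one "significant" result by chance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 2000       # simulated null studies
n_per_arm = 200       # hypothetical participants per arm
n_subgroups = 10      # e.g., age bands, sites, arbitrary cutoffs

false_positive_somewhere = 0
for _ in range(n_trials):
    # The treatment has zero true effect in every subgroup.
    treated = rng.normal(0, 1, n_per_arm)
    control = rng.normal(0, 1, n_per_arm)
    subgroup = rng.integers(0, n_subgroups, n_per_arm)  # arbitrary labels
    p_values = [
        stats.ttest_ind(treated[subgroup == g], control[subgroup == g]).pvalue
        for g in range(n_subgroups)
    ]
    if min(p_values) < 0.05:
        false_positive_somewhere += 1

print(f"Chance of at least one 'significant' subgroup under a true null: "
      f"{false_positive_somewhere / n_trials:.0%}")
# Roughly 1 - 0.95**10, i.e. around 40 percent, versus the nominal 5 percent.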
On top of that stuff, I do things like discuss studies with a few scientists who are methodology sticklers, and now I use Scite.ai, which lets me quickly see how other studies are citing my study of interest — like whether they agree or disagree. But beyond all that, I think after reading and interviewing in an area, I just want to be attuned to how (and if) a finding fits in the larger area of work. I think many misleading science headlines, typically based on single studies, could be avoided if the writer (and the scientist) understood that the particular study doesn’t actually make sense in the context of a much larger body of research. After that, the best I figure I can do is be open-minded, realize some findings I think are true won’t hold up, and be ready to update my own beliefs. Do you have any other suggestions for me, or for any of us who try to understand what new research to trust?
CA: I’m glad you asked, because this is much of the focus of my next book. You’ve already covered many of my checklist items. One thing I always try to do is gut-check the methodology. Is the thing they’re measuring a reasonable way to quantify the thing you care about? Is the study really answering the question you care about, or has it created an artificial situation that may not apply to real life? One of the things I think is especially important is to be careful about over-interpreting or over-generalizing the finding. Just because a particular study found something in that particular setting, does that really mean it applies to other scenarios and situations? We really want to find the one concept that explains everything, but it doesn’t always exist.
DE: I was thinking about this while listening to episode three, where you talk to political scientist Brendan Nyhan about the “backfire effect” that he documented, in which if someone is corrected about a politically charged fact, they become even more entrenched.
But the finding turned out not to be real, as Nyhan himself explains in the episode. It turns out that getting factual, corrective information that directly addresses the false information (a “truth sandwich”) actually does move people in the right direction, and the more often the better. How did Nyhan, who had this headline-making discovery, come to decide that it was not, in fact, true?
CA: Well, to begin with, Nyhan and his collaborators only found it in two of the five experiments in their original paper. And then follow-up studies by others and by him did not find it at all. So, being a good scientist, Nyhan updated his beliefs accordingly. The problem is that the original finding had made headlines and gotten a lot of traction. Once it was out in the public mind, it was very hard to reel in.
DE: So that was a new, and pretty surprising or counterintuitive finding, and we’ve talked about being cautious with those. But sometimes they are indeed true! Episode four discusses relativity and quantum mechanics. I mean, quantum entanglement — in which two subatomic particles can be an enormous distance apart and yet instantly influence one another — seems to me like peak counterintuitive. Einstein called it “spooky action at a distance.” But it makes predictions that have been confirmed repeatedly.
In this episode, physicist Chanda Prescod-Weinstein says: “You have to shift your relationship to what you call intuitive.” So how do we balance skepticism with the need, at times, to be able to embrace ideas so counterintuitive they’re barely even imaginable?
CA: Episode 4 is all about the habits of mind that scientists (and the rest of us) need to navigate uncertainty. And Prescod-Weinstein really eloquently describes how important it is for scientists to be open-minded to new findings and discoveries. One of the key traits needed here is intellectual humility, keeping in mind that you might be wrong. In a field like physics, especially, this can mean making room for ideas that might initially seem outrageous. But, of course, cultivating intellectual humility doesn’t mean you can’t ever know anything. You can feel appropriate confidence about the things you have good evidence for, while still being open to updating if new evidence arises.
DE: I think this gets at a larger question, which is how to make decisions under uncertainty. And you discuss that in the final episode. You talk to Nidhi Kalra, a policy analyst at RAND, about her work with the city of Lima, Peru, on making decisions about water infrastructure, given their dry climate and growing population. What they should build depends heavily on their long-term forecasts, which are highly uncertain. So how can they proceed?
CA: Kalra suggests that since you can’t know for certain what will happen, you have to use a more probabilistic approach. Instead of trying to make decisions that optimize for a particular outcome, you look for decisions that will be good or at least good enough for the maximum number of likely scenarios.
DE: This reminds me of an idea I read about in psychologist Barry Schwartz’s writing: “robust satisficing.” Instead of trying to optimize for a single particular scenario, you make the choice that produces an acceptable outcome in the widest array of scenarios.
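
A toy sketch of that idea in Python, with entirely made-up options and payoffs (nothing here comes from RAND's actual Lima analysis): a best-case optimizer and a robust satisficer can pick different options from the same table.

GOOD_ENOUGH = 5  # acceptability threshold, in made-up payoff units

# Hypothetical payoffs for three water-supply options across five possible futures.
payoffs = {
    "big reservoir": [9, 2, 2, 3, 8],  # great if the rains hold, poor otherwise
    "desalination":  [6, 5, 6, 5, 4],
    "conservation":  [5, 5, 5, 6, 5],
}

def optimize_for_best_case(options):
    # Pick the option with the single highest payoff in any one scenario.
    return max(options, key=lambda name: max(options[name]))

def robust_satisfice(options, threshold):
    # Pick the option that clears the "good enough" bar in the most scenarios.
    return max(options, key=lambda name: sum(p >= threshold for p in options[name]))

print("optimizer's pick:        ", optimize_for_best_case(payoffs))         # big reservoir
print("robust satisficer's pick:", robust_satisfice(payoffs, GOOD_ENOUGH))  # conservation

The satisficer gives up the shot at the single best outcome in exchange for acceptability across more futures, which is the trade-off Kalra describes.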
Alright, last question for you: we know that comfort with uncertainty is important for all kinds of good thinking, from the artistic to the scientific; and I remember that in research on forecasting, it was a hallmark of the people who make the best predictions. But we also know that feeling in control of one’s own life — a sense of agency — is really important for well-being. So how should we balance these?
CA: I think the best approach is to work on feeling comfortable sitting with uncertainty. (I recently wrote about how this has manifested in my own life.) One thing that really helps me is focusing on the fact that uncertainty isn’t necessarily negative. Uncertainty can also be exciting. Uncertainty means possibility. It can be an invitation to learn something new. Uncertainty is a very powerful driver of creativity and scientific discovery (which is the subject of episode 1).
DE: On that note, I want to end with a beautiful quote you used in the show, from poet and Nobel laureate Wisława Szymborska. She’s talking about how difficult it is to understand the source of inspiration. And it ends on a note about the beauty of uncertainty:
“When I’m asked about this on occasion, I hedge the question too. But my answer is this: inspiration is not the exclusive privilege of poets or artists generally. There is, has been, and will always be a certain group of people whom inspiration visits. It’s made up of all those who’ve consciously chosen their calling and do their job with love and imagination. It may include doctors, teachers, gardeners – and I could list a hundred more professions. Their work becomes one continuous adventure as long as they manage to keep discovering new challenges in it. Difficulties and setbacks never quell their curiosity. A swarm of new questions emerges from every problem they solve. Whatever inspiration is, it’s born from a continuous ‘I don’t know.’”
Thanks to Christie for her time, and you can check out UNCERTAIN wherever you get podcasts (or here if you’d like written transcripts along with the audio).
And thank you for reading. If you enjoyed this post, please share it.
And if a friend sent this to you because they were certain you’d like it, you can subscribe here:
Until next time…
David