How to Get Better At Learning From Experience
In "wicked learning environments," success can be a poor teacher
“For geoscientists, much of the death and destruction from big quakes is preventable with better building practices. Tragedy can be anticipated with Cassandra-like clarity. But human behavior and investment are often motivated by experience — things that have happened in our lifetimes, or within a few generations.”
That passage from a Washington Post article about the earthquake in Turkey struck me. Normally, I probably would have glanced over it, but three days earlier I had been on a Zoom call with two behavioral scientists; one of them, Emre Soyer, lives in Turkey. At the start of the call, he alerted me to news of the earthquake before I’d heard about it anywhere else.
Beyond the geographic connection, though, the article passage stuck with me because I was talking with Soyer, and his colleague Robin Hogarth, about their fascinating book, The Myth of Experience: Why We Learn the Wrong Lessons and Ways to Correct Them. The book focuses on why experience is not always as good a teacher as we intuit, like in the case of rare but impactful events, such as earthquakes.
The book also dives into research that has shown that people get reliably better with experience in certain tasks, while in others they don’t get better and yet become more confident. A dangerous combination.
That conundrum was central to my last book, Range. I had noticed the issue in scientific literature on expertise — that sometimes experience was an adept teacher, and other times it wasn’t — but I didn’t have a framework for investigating it. Until I came across Hogarth’s description of “kind” and “wicked” learning environments.
In “kind learning environments,” rules are clear, patterns repeat, feedback is quick and accurate, and work next year probably looks like work last year. There, experience generally leads to improved judgment. In “wicked learning environments,” on the other hand, rules might change (if there are any), patterns don’t just repeat, feedback can be delayed or inaccurate, and work next year might not look like work last year. In wicked domains, experience often gives practitioners the feeling of improvement, without the reality.
Hogarth’s decades of research have helped illuminate certain flaws in human judgment and how we learn (and mislearn) from experience. Here’s how, in Range, I wrote about one amazing example that Hogarth shared in his book Educating Intuition:
“In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons. Hogarth noted a famous New York City physician renowned for his skill as a diagnostician. The man’s particular specialty was typhoid fever, and he examined patients for it by feeling around their tongues with his hands. Again and again, his testing yielded a positive diagnosis before the patient displayed a single symptom. And over and over, his diagnosis turned out to be correct. As another physician later pointed out, ‘He was a more productive carrier, using only his hands, than Typhoid Mary.’ Repetitive success, it turned out, taught him the worst possible lesson.”
In 2020, Hogarth co-authored The Myth of Experience with Soyer (his former PhD student). I really enjoyed the book, and meant to talk to the authors about it when it came out, but pandemic life got in the way. Below is an edited version of our conversation about getting a bit better at learning from experience.
David Epstein: I think one theme in The Myth of Experience is that we really need to come up with measures, or definitions to measure against when we try to learn. And I was really intrigued by the “Reclaiming Experience” section of the book, because you apply this idea to our personal lives. Can you talk a little bit about the importance of personally defining success and failure rather than leaving it up to chance?
Emre Soyer: Think of your phone. Somebody has to build these interfaces. But while building these interfaces, inevitably they anchor you on things that they would like you to pay attention to. And hence, you kind of lose some control. Reclaiming that experience means there has to be some way we can defend ourselves, or at least understand what’s going on. If we leave it to other people’s designs, it would be like going to a huge supermarket with a very empty stomach. You would be drawn to all sorts of smells and colors. That's what Robin and I have been discussing. We need to know what success and failure look like to us personally, to have a working definition. I mean, we can update this. But that leads to some kind of definition of a goal and objective. Then we can go into experiences that are valid for those successes and failures. Otherwise, you’re following someone else’s design.
DE: I like that idea, that we have to have a kind of definition of what we want, or what direction we want to go, and that will help us learn whether whatever we’re doing on our phone or computer is helping or hindering that. And we update it, of course, but it gives a benchmark for learning. That feels especially important to me in the social media age, where there’s so much design for attention, and it’s very easy to get carried away without thinking about where you’re going, or why. I took Twitter off my phone last year when I realized I was prone to doomscrolling; I like using it sometimes and find it genuinely useful, but I’d say about 70% of the time I was spending there wasn’t aligned with my personal or professional goals. So now I have to go to the actual website to use it, and it’s much more deliberate, with some reason in mind, not just to see what’s trending.
[Note from David: It didn’t come up in our conversation, but there was a great section in The Myth of Experience about the “remembering self” versus the “experiencing self,” and how “when it comes to happiness, moderation is key. Excessively striving to capture every moment of a given experience can have adverse effects on our experiencing selves. We may reduce the immediate joy those moments provide. We may also start doing things not because they make us genuinely happy but primarily because they produce moments that we can easily capture and share with others.” This reminded me of a talk I recently watched by The Marginalian writer Maria Popova. Popova has a huge Instagram following and posts beautiful nature photos, but in the talk she said: “I made a pact with myself…to not take a picture of anything ever until I’ve actually taken it in first, which means some of the most beautiful things I see I don’t document, because they’re fleeting.” Recent research has found that taking pictures can “undermine enjoyment” of an experience, particularly when the photos are taken “with intention to share.”]
DE: I think one of the most important notions underlying a lot of the book is the idea of basically stealing little tactics here and there from scientific thinking in order to improve our ability to learn in our own lives. What are some examples you can share?
Robin Hogarth: A simple example where [a business] can experiment is understanding base rates in hiring decisions. That’s where they can get a quick win.
DE: Can you explain specifically for people that might not be familiar with base rates?
RH: Hiring decisions are a good example because you’ve got a person in front of you who’s the interviewee, and that person is showing all kinds of signals on various aspects of their abilities. And you have to figure out how able they actually are. And to do that, you should have a base rate, and then build a model from there.
DE: So meaning: you’re examining their qualities, and some of those might seem impressive, or not, but really what you want to know is how common are those qualities among people who perform a certain way at this task in your organization, or your sector in general. So you should determine some of the qualities you’re looking for, and then try to understand the base rate of those qualities in your industry (i.e. how common they are), so you can really understand how to compare candidates.
RH: Exactly, yes.
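[Note from David: A toy calculation can show why the base rate matters here. All of the numbers below are invented, just to illustrate the logic — none of them come from real hiring data. The point is that an impressive-seeming trait in an interview tells you much less than it feels like it does if that trait is also fairly common among people who don’t go on to perform well.]

```python
# Toy base-rate illustration with made-up numbers (not real hiring data).

def p_strong_given_trait(p_strong, p_trait_given_strong, p_trait_given_weak):
    """Bayes' rule: probability a candidate is a strong performer,
    given that they display an impressive-seeming trait."""
    p_weak = 1 - p_strong
    # Overall chance of seeing the trait in any candidate:
    p_trait = p_trait_given_strong * p_strong + p_trait_given_weak * p_weak
    return p_trait_given_strong * p_strong / p_trait

# Suppose 10% of candidates turn out to be strong performers (the base rate),
# 90% of strong performers interview impressively -- but so do 40% of the rest.
posterior = p_strong_given_trait(0.10, 0.90, 0.40)
print(round(posterior, 2))  # 0.2
```

[Even with a very impressive interview, the candidate is still far more likely than not to be an average performer — because average performers are so much more common to begin with.]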
DE: I think that falls in line with the idea that we don’t want to leave everything up to intuition. The person doing the hiring in your example will, just by doing it a lot, feel like their judgment is improving, but it likely isn’t if they don’t understand principles like base rates. So we should actually study the landscape we’re dealing with, and ask about information that isn’t placed right in front of us, which I think is a theme of your book.
ES: Another thing we observe in business is that people kind of benchmark success, and obsess about failure, and we do these things more separately than we should. So coming back to Robin's argument, if we try to think in a more scientific way, we would put those things together. If you only focus on successes, you get bombarded by best practices. And when you focus on failures, you get bombarded by certain common elements of those cases. But when you look at successes and failures at the same time, you see there are some common elements across successes and failures.
DE: This kind of reminds me of some of the marketing around a book Tony Robbins wrote about money, and the marketing material sort of said that his research is better than any normal PhD because he got a personal PhD from the most successful investors, or something like that. But my takeaway was the idea that it only focused on knowledge acquired through individual success cases, and not broader research, and equated that to the best way to study things. And maybe many failures have the same qualities he highlighted.
ES: Let's say you have 100 failures, and you dive into 100 failures, and you kind of bombard them, scrutinize them, inspect them, fire people, you know, you essentially create big accurate data on 100 failures. You'll find lots of stories there as potential reasons for failure, so from that experience you think you’ll learn to avoid failures. But here you have 100 successes that you don't look at. You would be better off if you could take maybe 10 failures and 10 successes, and study them at the same time. You would be more precise about what actually causes the difference. And they may be exactly the same, by the way. And then you will say: ‘Ok, it's very complicated, and chance plays a large role.’ Maybe it's not a given at all that they will be very different.
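[Note from David: Here’s a quick made-up illustration of Soyer’s point — the data and the trait are entirely hypothetical. If you only study failures, a trait that shows up in most of them looks like a cause of failure. Checking the successes can reveal that the same trait is just as common there, so it can’t explain the difference.]

```python
# Hypothetical data: did a "bold bet" strategy appear in each case?
# If you study only the failures, boldness looks like a cause of failure --
# until you notice it is exactly as common among the successes.

failures  = [{"bold_bet": True}] * 8 + [{"bold_bet": False}] * 2
successes = [{"bold_bet": True}] * 8 + [{"bold_bet": False}] * 2

def trait_rate(cases, trait):
    """Fraction of cases in which the trait appeared."""
    return sum(c[trait] for c in cases) / len(cases)

print(trait_rate(failures, "bold_bet"))   # 0.8 -- "aha, a cause of failure!"
print(trait_rate(successes, "bold_bet"))  # 0.8 -- same rate; no signal at all
```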
DE: And that doesn’t feel very satisfactory, but at the same time you won’t be basing your strategy on lessons you think you discovered but that aren’t valid lessons. This brings me to another point I want to discuss: “specific curiosity.” I was reading a paper that suggested that, rather than just aimlessly browsing the web, we should start out with some specific question, like: “How did that magic trick work?” And that having something specific to anchor on improves learning and creativity. Maybe I’m stretching here, but I feel like part of what you’re saying is that if you take this more holistic look at success and failure, you’re going to generate better specific questions that then allow you to look in a more targeted way for lessons from experience. In other words, you’re generating better hypotheses so that you have a question to focus your learning.
RH: Consider doctoral students doing research. The most typical thing for a doctoral student to do is search to actually find the question. Students will say something like, “I'm interested in decision making and explaining how important decision making is.” But that doesn't get you very far. You have to find out what aspect of decision making, which is specific. A lot of the research that gets done and published is our responses to specific questions, which are not necessarily that close to the first issues.
ES: I must say, I did my PhD with Robin and one of the most difficult things was to find that problem. We were not given specific guidelines, or budgets or constraints in the beginning. And that was for good, benevolent reasons, but it was like swimming in the ocean. I don't know where I am. Where am I going? And I think with companies — and I’m going to over-generalize here — the instinct they have is to find a best practice from a success and optimize it. And people become problem solvers within that, but we should also give the time and some structure to come up with certain problems, not just solutions. Especially because the world changes, so the things you optimized can soon become irrelevant.
DE: So what would be an example of a structure that might incentivize people in an organization to spend time hunting for specific problems, rather than just trying to optimize for whatever is right in front of them?
ES: One of the kinds of structures we talk about in the book is the “senate,” which actually works quite well. It creates a bit of space for a group of people in a company to talk about problems quite freely, and they have only two hours every two weeks. They have to bring a problem to explain to others, and everybody has to speak. And it works best if the people are in the middle of the company. They're looking up and down, and they feel stuck. I don't know if you've ever worked for a large company, but you can feel stuck if you're in middle management. You're used like an operator; nobody listens to you; if you express a problem, it might get ignored. But since they’re looking up and down the company, it turns out they have the best understanding of some problems. So, for example, we created a small group of these people across different parts of a company — and the range is really important, the range needs to be wide. You know, one person from HR, one person from operations, one person from IT. And these people need to be about the same level so they don’t outrank each other so much. And they bring a problem of theirs to the table, and the others think about it, and suddenly you get all sorts of synergies. If they bring 10 problems, some of them won’t get solved, but some get solved quickly, you know: “My cousin works at this other company, and they solved it this way.” And suddenly you have an idea you can adapt and at least test. And this takes maybe two, three hours a month. It cuts through the silos, and the bureaucracy and the politics of the company. And people talk freely because the top management is not allowed in there. But they do report to the top management, and then as a group they're harder to ignore if they identified a problem.
DE: Interesting. Sounds like a case of what I’ve called in the past “thinking outside experience.” Our intuition might be that the manager seeing the problem every day would best know how to solve it, but actually you get a diverse group in the room for just a little while, and it can be really powerful. This kind of reminds me of Kevin Dunbar’s work, which I’ve written about, where he studied problem solving in biology labs, and the diversity of analogies that a team could generate was really important in problem solving. Of course, our instinct is probably usually to get together with people who have our same function or department, but as he wrote: “When all the members of the laboratory have the same knowledge at their disposal, then when a problem arises, a group of similar minded individuals will not provide more information to make analogies than a single individual.” Or, as he told me in an interview: “It’s sort of like the stock market. You need a mixture of strategies.”
Ok, one last topic I want to talk about, because it’s a topic I love: self-regulatory learning — basically thinking about your thinking. I’ve seen this in a number of domains, but was just reading a relevant study of cardiac procedures at hospitals. To simplify it: in some instances, hospital staff just racked up experience doing procedures; in others, they got some experience doing procedures, but then also had some time to reflect on and try to articulate what they had actually learned from the experience — whether it went well or poorly. This whole literature on self-regulatory learning, to me, gets at some of the challenge of learning from experience if we leave it up to intuition. It turns out that doing this explicit reflection on lessons is a powerful learning aid, but it’s also not something we intuitively feel we need to do. In the hospital study, the groups that had some experience and some reflection time performed better than the groups that had more procedural experience but no reflection time. That seems like a big deal to me.
RH: Oh, I like that idea. Because there are two ways of learning from experience. One is automatic and it doesn't take any effort on your part, and the other requires effort. And if you just leave it to no effort and rely on what you pick up, you'll pick up strange things. And therefore, what is very helpful is if you actually have to analyze some of these issues and understand why you've learned one thing as opposed to anything else. So I think the general idea of sacrificing, as it were, some experience for some thinking time is really important.
ES: I know I’ve repeated this point, but usually if we just learn from a success or failure, you will think you’re learning and you’ll keep doing something. And you may be blind to what causes the difference. If you stop and think about it, and try an experiment, you will not need as much experience. But, as Robin said, our instinct is thoughtless learning, to just double your efforts doing the same thing over and over. People like the warm feeling of learning from experience and continuing, but sometimes it’s just a feeling and they like that rather than stopping and reflecting on it a bit.
RH: Also, the thing you're learning is how to design a better experiment the next time around. So taking time out, to not only experience things but spend that time analyzing and designing experiments is important. And it’s one of the things that we're not very good at — teaching people how to be experimenters, how to actually design experiments in the real world.
DE: Given that people’s lives aren’t in labs and can’t conform to the formal scientific method, what do you think we can take to get just a little bit better in our own experiments? Maybe trying to isolate particular variables when we experiment so we have a better chance at understanding causality?
RH: Having awareness of different hypotheses is probably the key. This can be done in real life interaction with people. You do experiments with your own family just by interacting with them. You don't necessarily analyze them as experiments, but you can have hypotheses.
DE: You mean about why they behave the way they do or react to you or to some activity a certain way?
DE: So, ok, if you had to share one tip for people to adapt something from scientific thinking to their own lives — and I know there are many in the book — what’s one you’d share here?
RH: I think the thing I would share with them is the notion of generating alternative hypotheses — speculation. That actually speculation is good. And by speculating a lot, you will actually test better. And you wouldn't be jumping to conclusions so fast either. So speculation is good.
DE: I love that idea. Because when we say that someone is “just speculating,” that usually has a negative connotation. And here we're saying: generate some ideas about what might be going on, and you’ll be a better thinker because of it.
RH: Consider your behavior with a group of people. You're speculating on why or how people are behaving in the group. And how do you know you're right? Well, you have to generate lots of speculations, and then see which is better.
DE: Because if you haven't generated the hypothesis that actually is useful or is closer to the answer, then you’re probably not going to recognize the answer or important lesson when it happens. Is that right? Now that I think of it, scientists in general seem highly attuned to alternate possible explanations in life, not just in their work.
Thanks so much to Robin Hogarth and Emre Soyer for talking with me. I’m a big fan of their work. And check out The Myth of Experience if you’re interested in more.
As always, you can support Range Widely by subscribing.
And if you liked this post, please share it.
Until next time…