
Big & Small: Mistakes in Drawing General Conclusions From Specific Instances


[Image: both a real and a model Volkswagen Microbus Type 2]

Back in the early 2000s, American actress and TV personality Jenny McCarthy began lending her celebrity to a claim that there is a causal connection between the measles, mumps, rubella (MMR) vaccine and autism in children. While her mistaken view has complex origins in fraudulent research and online discourse among vaccine sceptics, what launched McCarthy into this world of misinformation—and what became core to the narrative she shares with the public—is her belief that the MMR vaccine caused her son to have autism. That is, McCarthy takes a giant leap from a dubious personal account to a general perspective on the dangers of childhood vaccines (to be clear, it should be noted that extensive research shows no evidence that the MMR vaccine can cause autism; see Hviid, Hansen, Frisch, & Melbye, 2019).


This leap is a typical example of the errant use of anecdotes as evidence for bigger picture, population-level claims. This sort of faulty generalization happens a lot in online discourse about scientific subjects—particularly among those who have not developed the reasoning capacities needed to effectively evaluate what may and may not be used as evidence.


We generalize when we use specific instances as evidence for broad conclusions. It's not always a mistake—in fact, generalization is quite necessary and is often the best way to understand the world around us (see more on this below). The problem is that we don’t always have a very good radar for when such generalizations are erroneous.


Hasty generalization—a fallacious mode of generalization—occurs when the isolated case or sample doesn't offer enough evidence for the general conclusion one wishes to draw. Here's a simple example of this mistake in reasoning (adapted from an example found online):


Tim: Some kids got caught vandalizing the high school the other day.

Eric: Wow, kids are so reckless and destructive.


If we take Eric literally, his mistake is clear: he’s drawing conclusions about all children from the actions of a tiny subset. Given the small and non-random sample, we can assume that this isn’t a very good representation of all children. Really, we can’t even conclude, from this single instance, that these kids are reckless and destructive. The events might be an aberration among otherwise well-behaved children.


A reasonable rebuttal might be that this may just be a casual conversation—Eric probably isn’t really making a serious claim. He appears to be having an informal chat with Tim. Maybe he’s simply trying to connect with Tim by conveying agreement with what he thinks Tim might believe. We say things that aren’t to be taken literally all the time. We can’t really know precisely what Eric's up to unless we understand these people and their social relationship. Fair.


A weaker defence of Eric would be to agree with him: "Eric's right. Kids are reckless and destructive!" What's wrong with this? The statement is too broad to be literally true (that's one problem), but the more important point is that good reasoning is not just about whether our conclusions end up being true or false (that can be hard to grasp!). Good reasoning has a lot to do with how we arrive at those conclusions. If the reasoning backing up our conclusions isn't any good, we have little basis for supposing those conclusions reflect reality.


Below, to get a better idea of where generalization can steer us wrong, we'll compare sensible and erroneous movement from specific observations to general conclusions. We'll then look at typical instances of faulty use of generalization in the Covid-19 pandemic—a context in which hasty generalization runs rampant.


1. When is it sensible to move from small to large?


The ability to move from the specific to the general is something we need—the world often doesn’t give us access to the bigger picture, so we must take what we can from select instances we can perceive. We’re going from the small to large like this all the time, often without any awareness that we're doing it.


Mostly, the stakes of informally leaping from small to large are quite low, and we can hold our conclusions tentatively or with a grain of salt. For example, we make judgements about a new colleague's personality (general) from their behaviour at last Thursday's Zoom meeting (specific). All we have is a tiny sliver of experience with the person, and that experience may not represent their typical ways. Nevertheless, we can't help but take a best guess given the limited data at our disposal, and any potential negative impact of such a judgement is relatively small. Perhaps our later experiences with the person will surprise us, and we'll adjust our views. This tendency to generalize and adjust is, of course, part of the normal process of getting to know someone.


Scientists, too, need to move from the specific to the general. For example, they must often use samples to draw inferences about the nature of larger populations of organisms or events. However, science is—or ought to be—conducted with the aim of minimizing the effect of human error on the collection and interpretation of data. Good scientists know when a generalization is and is not appropriate. And if they too boldly move from the specific to the general, a network of peers is there to tell them where they went wrong.


To take a simple example, a group of psychologists might be interested in whether playing violent video games causes aggression among children (this turns out to be a surprisingly difficult thing to study, but that’s not important here). These researchers want to draw inferences about the causal relationship between playing video games and aggression in some larger population of children (i.e., their interest is not limited to their sample), but they clearly can’t study all kids.


So what do they do? They need to move from small to large—they take a sample to represent that larger population. For example, they might conduct an experiment with two groups of children—a group of 100 kids who play a violent video game for 3 hours and a group of 100 kids who spend 3 hours playing some violence-free game. Then, to compare the groups, the researchers would measure some operationalization of aggressiveness among kids in both groups.


That’s okay. And necessary. There’s generally no problem with inferring that a rigorous study on an appropriate sample provides useful, if tentative, evidence for how things work in the larger population.


However, regardless of the measures put in place by scientists, it is "certain that we will make errors in trying to predict the unobserved based on the observed" (Zimring, 2019, p. 379). Good scientists know this and take great care in interpreting their findings. Further, as part of the scientific process, additional research would need to be done to check whether the initial findings replicate (using the same and/or different methods to study the question), and other researchers are there to criticize the methods and findings of both the original study and subsequent replications.


Contrasting with sampling in scientific research, personal anecdotes generally don't provide enough evidence for even a tentative conclusion about a larger population. For comparison, we'll stick with the same example context. In class, I sometimes ask students, "Do you think violent video games cause aggression in children?" and have them explain their answer. Many students will say something like this response from a fictitious student, Aaron:


“Violent video games surely cause aggression. I saw my niece play some war game once and she got all violent afterwards. She was hitting her brother and yelling at her parents.”


Although this anecdote might provide us with a little info about Aaron’s niece's behaviour, it says nothing about whether violent video games cause aggression. Aaron’s response is typical of what we’re calling hasty generalization.


This might not feel right to you. Can’t what I experience in my environment speak to what’s true about the world? Well, yes and no. To the extent that your perceptions are accurately capturing reality, what you notice may speak to what’s true about your immediate environment.


If we take Aaron’s word for it (maybe we ought not to!), yes, his niece once played a violent video game and, yes, she did hit her brother afterwards. At the same time, inferring relationships between events is hard and often impossible. Specifically, observing that things occurred in sequence is necessary but not sufficient to suggest causality. How’s Aaron to know that it’s the video game that's causing the aggression and not, for example, some unobserved negative interaction with a sibling? Aaron doesn’t have a window into his niece’s mind or her full set of prior experiences. In general, although we may sometimes be fairly good at observing and reporting on events we encounter, we’re often not very good at detecting true causal relationships between one occurrence (playing video game) and the next (aggression).


Further, the question Aaron's trying to answer matters a lot. Aaron was asked whether violent video games cause heightened aggression in a general sense, not whether games make his niece more aggressive. Aaron’s observation doesn’t give us enough information about what we might expect to see in the general population of children, or for any other particular child. Aaron's engaging in hasty generalization.


To see the problem with drawing conclusions from this anecdote, let’s imagine we hear a second hypothetical perspective from class. Aaron’s classmate, Nico, speaks up:


“No, violent video games don’t cause aggression. My two sons play violent video games every day, and they never become more aggressive afterwards.”


Uh oh. What do we do with this? Do we throw our hands up and say we can’t resolve the issue because of competing accounts? Do we ask a third student to respond? Then a fourth, and only then draw conclusions? Do we opt to go with our own personal experience or gut feeling as a tie breaker? Do we assume Nico is right because she’s stated things more confidently or claims a bit more evidence?


No. None of these will do. Nico’s “evidence” and our own personal perspective probably aren’t much better than Aaron’s. They’re just more potentially biased anecdotes that depend on our fallible subjective experience with unique cases.


Above all else, Aaron and Nico’s observations don’t give us enough evidence to draw inferences one way or the other about aggression in the larger population of children. There’s so much complexity and noise at play in individual accounts that they simply don’t offer enough to draw conclusions about the larger world.


To see just how ineffective Nico and Aaron's anecdotes are, think about what it would mean if their family members were part of the hypothetical psychology experiment described above. That is, instead of being observed by Nico and Aaron, these family members are part of the group of kids who got to play violent video games for three hours in the lab.


Aaron and Nico's family members are just drops in the bucket among kids who are demonstrating a wide range of aggression after playing violent games. We need a larger data set to see where particular cases lie relative to all other cases.


We can also see that even if there happens to be evidence of an overall difference between the groups, Nico's possibly unaffected children can still exist among those who played violent games. (We can't know whether Nico's kids, in particular, were affected by the gameplay, because we don't know how they would have behaved had they not played the violent game; they might even have shown lower levels of aggression had they been in the other group.) We don't get to cherry-pick individual cases demonstrating low aggressiveness as evidence that there was no effect. Likewise, we don't get to use Aaron's niece as evidence that kids are aggressive after playing violent games. By themselves, these single cases do not help us understand whether violent games cause aggression.


What we should instead care about is the distribution of scores for this large sample of kids, comparing it to the distribution of scores for kids who didn't play violent video games. As long as this was a well-conducted experiment, the overall effect may allow us to draw tentative inferences—the individual cases do not.
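To make the point concrete, here's a small simulation sketch in Python. All of the numbers below are invented purely for illustration (there is no real aggression scale or real effect size here): we build in a modest true group difference, and then see that many individual kids in the violent-game group still score below the control group's average—exactly why a single case like Nico's sons or Aaron's niece can neither refute nor establish the group-level effect.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical aggression scores (arbitrary 0-100 scale) for two groups of
# 100 kids. We deliberately give the violent-game group a higher true mean.
violent_group = [random.gauss(55, 15) for _ in range(100)]  # played violent game
control_group = [random.gauss(48, 15) for _ in range(100)]  # played non-violent game

mean_v = statistics.mean(violent_group)
mean_c = statistics.mean(control_group)
print(f"Violent-game group mean: {mean_v:.1f}")
print(f"Control group mean:      {mean_c:.1f}")

# Even with a genuine overall difference, the distributions overlap heavily:
# plenty of kids in the violent-game group score BELOW the control group's
# average. Cherry-picking one of them "proves" there's no effect; picking one
# high scorer "proves" there is. Only the distributions settle the question.
calm_in_violent = sum(1 for score in violent_group if score < mean_c)
print(f"Kids in violent-game group below control mean: {calm_in_violent} of 100")
```

The takeaway from the sketch: the group means differ, yet dozens of individual cases point the "wrong" way, which is why anecdotes about any single child are uninformative about the population-level claim.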


2. Hasty generalization in the pandemic context


There are important upshots of the video games and aggression example for the context of Covid-19. We're making a mistake when we use our own personal experiences with Covid-19 to draw broad inferences about the general nature of the disease. Similarly, it’s a mistake to take a single scientist’s opinion about vaccine efficacy to represent the scientific consensus. In both cases, we need more information to make an inference about the larger population.


One of the most common faulty generalizations concerning Covid goes something like this:


I got Covid, and it felt a lot like the common cold for a few days and then it was over. Covid is clearly not a big deal. It’s basically now the common cold.


In this example, the reasoning is largely the same as the above classroom examples concerning violent media and aggression. It’s wonderful that Covid-19 had little to no effect on this person. However, this anecdote does not serve as an effective premise for the conclusion that Covid-19 is relatively inconsequential. We all know that thousands of cases ending in death or requiring hospitalization are out there, so why put so much stock in a personal experience and so little in those of others?


Critically, it doesn’t matter whether your conclusion turns out to be correct. Perhaps the current dominant variant of Covid-19 is much like a bad cold for most people. Even if this is true, we can’t get there from personal anecdotes. We need population data and scientific research to come to that conclusion.


Again, it’s the reasoning behind the conclusion—the generalization—that we’re interested in critiquing, not the conclusion by itself. Despite an opposing conclusion, the following argument contains the same reasoning, and is therefore just as weak:


I got Covid-19 and ended up in the hospital hooked up to a respirator. I thought I was going to die. Getting Covid is a horrendous experience.

Of course, if we alter the conclusion to "Getting Covid was a horrendous experience for me", it's completely fine. See the difference?


3. Uncritical use of anecdotal evidence lets us conclude virtually anything we want about Covid-19, vaccines, and even zombies


The problems run deeper than making the occasional error in reasoning. Given the vast array of information to which we have access, if we accept anecdote as a legitimate foundation for arriving at population-level claims, then we can find support for virtually any conclusion we want to draw.


For example, if you want to “support” your view that something’s wrong with the Covid-19 vaccines, you might turn to the personal experience of Robert Malone, a scientist who became semi-famous after he appeared on Joe Rogan’s podcast. According to the Atlantic, Malone says that Covid-19 symptoms can be made worse by getting vaccinated, sharing his view that the Moderna vaccine made his own symptoms worse.


What’s the problem? First, Malone’s personal account is dubious—there’s no way for him to know how Covid-19 and the vaccine interacted in his body, and he has no way to compare against the alternate universe in which he hadn’t received the vaccine (hmmm... sounds similar to the Jenny McCarthy example above). As a scientist who holds significant sway with a contingent of the general public, he should know better than to share this kind of nonsense.


Again, even if Malone's correct (there's no way for him or anyone else to know), just like in each example above, the single instance simply does not speak to the bigger picture of vaccine efficacy. In short, we'll be better off if we disregard this personal anecdote entirely.


People buy and sell outlandish and uncorroborated stories every day in support of their prior beliefs. Recently, I witnessed a small group of people on Facebook expressing their concerns about vaccines and pregnancy, citing—without sources or any good evidence that these are real instances—vague anecdotes from friends of friends (e.g., "a friend of a friend had a vaccine in their third trimester. Her baby was born deformed") and faraway countries (e.g., "Honest Doctors came out and showed all the mutated babies being born coming out with tails n such since the vaccine").


Scary information. Fortunately, it should now be clear that these second- or third-hand anecdotes are useless for arriving at well-founded beliefs—beliefs that reflect reality and enable us to make good decisions about whether to get vaccinated. This mix of cherry-picked examples (which could well be made up, for all we know), awful causal reasoning, and hasty generalization is, unfortunately, typical of social media conversations about difficult topics.


When we have the world at our fingertips, we can come up with questionable or ridiculous premises to support almost anything. How about this, for example: zombies are real. Here’s how I know:


“A man in Germany informed his doctors that he had drowned in a lake the year before. The only reason he could explain his condition to them was that radiation from cell phones had turned him into a zombie.” (Zimmer, 2021, p. 17)


Fortunately, we have nothing to worry about. This is an example of a person with Cotard’s syndrome, which sometimes involves the delusion that one is dead (look into it—it’s fascinating!). But if we’re mining personal anecdotes from across the world as evidence for the dangers of vaccines, then why not zombies? We can just as easily find “support”—using virtually the same logic—for the view that cell phones cause dead people to become zombies.


It’s important to note that I’m not disparaging people’s vaccine concerns here—that’s entirely beside the point. Rather, I’m criticizing the practice of deriving and supporting beliefs about scientific matters, like vaccine efficacy, using cherry-picked anecdotes. The supposed personal experiences communicated by strangers on the internet are not useful—and can be dangerous—foundations for beliefs and decisions about our health.

4. Take special care with the frightening human-interest story


As we scan the news, sketchy arguments are not all we have to worry about. True human-interest stories can also subtly lead our thinking astray.


A human-interest story can paint a vivid picture of individual experience with, for instance, disease or war. The richness of these stories can be compelling, and for good reason—they offer concrete information and vivid imagery to latch onto, unlike sterile and often yawn-inducing research findings! Unfortunately, our tendency to overgeneralize can sneak up on us as we process these kinds of stories.


This often happens automatically and without our awareness. Humans are equipped with a mental shortcut called the availability heuristic. According to Amos Tversky and Daniel Kahneman, who first studied it in the 1970s, this is a shortcut whereby we "assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind."


When we lack a clear answer or relevant data, we often substitute the question we ought to ask (e.g., "What's the probability of contracting Covid-19?") with a simpler question that harnesses ease of recall ("How easily do instances of Covid-19 come to mind?"). We use the heuristic to estimate the likelihoods of relatively innocuous events, like the chance of getting stuck in traffic on our way home from work, and more serious ones, like whether contracting a particular virus will kill us.


It works relatively well much of the time—particularly when the ease with which something can be brought to mind is tethered to real-world frequency. An example relayed by Kahneman in Thinking, Fast and Slow asks readers to judge which of the following lists of letters can be used to construct more words:


XUZONLCJM

TAPCERHOB


You likely arrived at an answer rather quickly (it’s the second list). You don’t know how many words can be created from either list but your intuitive judgment in this case was probably apt. You don't even have to generate a single word from either one!


However, the availability heuristic can go awry when it's poorly adapted to the environment, such as when it's used to judge the extent to which we're in danger from the kinds of deadly events that receive disproportionate media attention. Exposure to sensational stories, like those involving murder and natural disasters, can make them appear to be greater threats than, say, diabetes and heart disease, which in fact kill more people. Similarly, air travel accidents receive more attention in the news than car accidents despite the relative safety of flying. As a result, we overestimate the danger of flying.


When we try to judge the probability of contracting Covid-19 or getting severely ill from the virus, our minds often substitute a simpler question: "How easy is it for me to recall stories about infection and severe cases?"


Human-interest stories, being memorable and vivid, are easy to recall. As a result, our brains can weight them more heavily than they, perhaps, ought to. Relative to a story about a patient's experience with Covid-19, valuable scientific findings don't generally have the narrative structure or striking imagery that our minds like to recall. Thus, the information we truly ought to weigh most heavily is harder to call to mind and therefore less likely to be used.


So, we're prone to overweighting personal anecdotes in our assessments of how the world works. This is another instance of our fallible use of select instances to support our sense of the bigger picture. The added concern here is that it's ingrained in us, operating in the background automatically and usually in the absence of our awareness.


Conclusion


Obviously, we shouldn’t (and can't) dismiss personal experiences—reflecting on our lives and the lives of others is valuable, both personally and socially. Our personal experiences matter, for instance, when we want to inform other people about how we’re doing or share an amusing story. It's true, too, that anecdotes from those we can trust may indeed tell us something about disease, vaccines, and the like; they're not devoid of information. Anecdotes may, for example, provide information about the range of possibilities (insofar as those anecdotes reflect reality, unlike, say, the causally dubious stories shared by Robert Malone and Jenny McCarthy). A story about Covid might tell us that it's possible to die from the illness. A story might tell us that it's possible for Covid to manifest much like the common cold. In the absence of good evidence, as was the case early in the Covid-19 pandemic, stories can be better than nothing.


But we should be careful with how we’re using our own and others' personal accounts—particularly when we want to make a move from the story to the bigger picture.


