The human mind is a poor processor of large numbers (“large,” in this case, being any figure with more than two digits).
Accounting for fatalities from 1918’s influenza pandemic is, unfortunately, a logical place to begin a discussion of this evolutionary shortcoming. Curiosity about that strain’s impact on the World War I-era population led to a bit of research, which revealed that somewhere between 17 million and 100 million souls succumbed to the disease.
Absent context, a range spanning 83 million seems absurd; but given the extreme variables at play, the relative “margin for error” is understandable. By comparison, consider the old “jellybeans in a jar” guessing game – an estimate of 100+ candies when there are clearly no more than 17 would be mocked; but find a jar large enough to hold 17 million jellybeans and all bets are off.
To make the point on a more practical scale, just fill in the blank (note this is not a riddle or trick question): “The average human adult male has 17,000 _______________.”
Perspective is equally deceiving. Coming of age in the early 1990s, “Generation X” would scoff at the thought of any flu strain representing a greater population threat than AIDS, yet 1918’s H1N1 ended more lives in 24 weeks than HIV/AIDS claimed in 24 years.
Recency bias notwithstanding, mental processing of scale is a major challenge that, in real time, warps society’s response. To compensate, we default to “comparison” as a proxy for direct measurement. The impediment to accurate comparative analysis is that all independent variables must be identical when assessing outcomes – aka “apples-to-apples.”
For example, ask five people born in different decades the following question:
Who is the greatest basketball player of all time? There is no correct answer. Yes, the game is the game, but enough small, accretive changes across eras invalidate the objective performance measures required for scientific comparison (while recency bias alters perspective in more qualitative ways).