A popular joke, variously attributed to a number of historical figures, goes: “It is difficult to make predictions, especially about the future.”
How true. Writing for WIRED magazine, Paul Ford described his recent experience of reading The Book of Predictions, a 1981 anthology about what life would look like in 50 years. Ford’s summary of the Book’s predictions is short and unambiguous: “All predictions are wrong.”
It always amuses me how seemingly smart and competent people can say something that is so blatantly stupid (in hindsight, of course).
“Everything that can be invented has been invented.” (Charles Duell, Commissioner of the U.S. Patent Office, 1899)
“The atomic bomb will not go off, and I speak as an expert in explosives.” (Adm. William Leahy to President Truman, 1945)
“There is no reason anyone would want a computer in their home.” (Ken Olsen, President of Digital Equipment Corp., 1977)
But let’s not ridicule folks living a hundred or even 40 years ago, for the human ability to predict the future has hardly improved since then. Take, for example, the March–April 2020 issue of the MIT Technology Review. Headlined “The Prediction Issue,” it features a dozen or so leading futurologists predicting which technology trends will dominate in 2020–2030. Characteristically, none of the proposed trends even mentions the threat of a worldwide pandemic like the one currently ravaging the globe.
It appears that, with the exception of Bill Gates (who, being neither a professional futurologist nor, for that matter, a professional epidemiologist, predicted a pandemic caused by a highly infectious virus back in 2015), experts have a particularly tough time foreseeing disease outbreaks. A case in point is the 2019 Global Health Security Index, the first comprehensive assessment of health security capabilities across 195 nations. The Index focused specifically on nations’ preparedness for infectious disease outbreaks that can lead to international epidemics and pandemics. (Sounds to me exactly like COVID-19.)
To the credit of its authors, the Index found no country fully prepared: the average overall score among all 195 countries was 40 out of a possible 100.
But what genuinely surprised me were the scores the Index assigned to individual countries. The United States led the world in overall preparedness (with a score of 83.5). The U.S. also scored the highest in several specific categories, including prevention of the emergence of pathogens, early detection and reporting of epidemics, and a sufficient and robust system to protect health workers. The U.S. was second to the U.K., though, in the category of rapid response to the spread of an epidemic.
I wonder what the predictive power of the Index has turned out to be, given that the U.S. has been among the countries with the highest per capita numbers of COVID-19 infections and COVID-19-related deaths. (Of note: New Zealand, a poster child for handling the pandemic, came in only 35th, with an overall score of 54.0.)
Like every normal human being, I love guessing what will happen tomorrow. And I know that organizations need to peek into the future to foresee upcoming threats and opportunities and to plan their next steps. Yet it worries me how fast some self-appointed Cassandras have rushed to tell us about our next future (a.k.a. the “next normal”). Did you guys have enough time to figure out first what happened in the recent past?
* * *
Why can experts be so wrong when predicting the future?
A 2019 neurobiology study provides a useful, if provocative, insight into the issue. A team of scientists analyzed neuronal activity in the brains of mice forced to learn new decision-making skills. As the mice progressed through learning new tricks, more and more neurons in their brains got involved. However, the neuronal activity rapidly became highly selective: individual neurons responded only when the mice made one choice and not another. This pattern grew even stronger as the mice learned to do a task better (i.e., became “experts”). Moreover, once expertise was fully achieved, the mouse’s brain was ready for the expert decision even before the mouse began executing the task.
In other words, “expert” mice knew how to solve the problem even before starting to solve it! In contrast, the neuronal activity in the brains of “non-expert” mice remained non-selective, meaning that those mice approached the task with an “open mind.”
If we risk extrapolating these results to humans, the implication is that experts approach a problem with patterns already pre-formed in their brains by prior experience. So, when predicting the future, they see it as a slightly different version of the past and present they are already familiar with.
* * *
To me, that means predicting the future has a future only if the prediction process begins to systematically engage large groups of people, experts and non-experts alike. I believe that the more people can be brought together (figuratively speaking, of course), the more granular the emerging picture of the future will be.
One useful prototype of such an approach is Wikistrat, a geostrategic business consultancy that leverages crowdsourcing to deliver strategic intelligence. Wikistrat’s clients include government agencies interested in geopolitical scenario planning (and, on occasion, in the location of the next black swan hatching).
Corporations that value operational agility, and are therefore unable to use a large-scale crowdsourcing format all the time, could turn to another, hugely underappreciated way of tapping the proverbial “wisdom of crowds”: prediction markets, internal platforms that allow employees to speculate on future events and outcomes. A few large companies, including Google, have used corporate prediction markets to dramatically improve the quality of their decision-making.
And what is left for regular folks like me, who can’t afford the expense of running a crowdsourcing campaign or a prediction market? I don’t know about others, but I train myself not to be surprised by anything tomorrow may bring, a task made much easier by watching the past five years of U.S. politics.
Image Credit: Javier Allegue Barros on Unsplash