Insurers, take heed of these 3 common forecasting fallacies
Few forecasters predicted the 2008/9 global financial crisis or the ongoing COVID-19 pandemic. Could they have foreseen these crises and their severity?
In behavioural economics, we often come across three misconceptions around forecasting, which also affect insurers' decision making. Let's debunk these myths.
Myth 1: Forecasting models should become more complex as uncertainty rises
In forecasting, complex models such as machine learning (ML) methods can outperform simple models. Sophisticated quantitative models, including those powered by ML, have already improved customer analytics and claims processing.1 When put to the test in the M4 forecasting competition, ML methods beat the benchmark of a simple model by 5%, while the winning method exceeded the simple model's performance by 19.3%.2
But which models should we rely on as the degree of uncertainty increases? Uncertainty is inherent in most prediction problems insurers face. We often do not know all possible outcomes, and the future is usually not like the past. The apparent solution of fine-tuning a model on historic outcomes can therefore lead to more errors.3
Why is that? An example of sophisticated modelling gone wrong in the face of high uncertainty is Google Flu Trends. Meant to predict the spread of flu, it ended up overshooting the peak of the 2013 flu season by 140%.4 Google's big data algorithm picked up seasonal search terms unrelated to the flu. Searches for "high school basketball", for example, peaked during the March flu season in the US but were not related to flu prevalence. Another factor that threw off the algorithm was Google's growing efficiency in helping users find health-related information with fewer key terms.5
A simpler approach offered superior results. A model using only two inputs – the number of flu-related doctor visits during the previous two to three weeks, and the number of last year's doctor visits one week prior to the patient's flu diagnosis, as reported by the US Centers for Disease Control and Prevention – was 30% more accurate than the advanced big data analysis.6
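The intuition behind such a simple model can be sketched as a lagged linear regression. The sketch below is illustrative only: the data is synthetic and the lag choices are assumptions standing in for the study's actual specification, not a reproduction of it.

```python
import numpy as np

# Illustrative only: synthetic weekly flu-related doctor-visit counts.
# The real model used CDC-reported visits; these numbers are made up.
rng = np.random.default_rng(0)
weeks = 120
visits = 100 + 50 * np.sin(np.arange(weeks) * 2 * np.pi / 52) + rng.normal(0, 5, weeks)

# Predict this week's visits from visits 2 and 3 weeks ago, plus
# visits 52 weeks ago (a stand-in for the "last year" term).
lags = (2, 3, 52)
start = max(lags)
X = np.column_stack([visits[start - lag:weeks - lag] for lag in lags])
X = np.column_stack([np.ones(len(X)), X])  # add intercept column
y = visits[start:]

# Ordinary least squares: simple, transparent, easy to audit.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
mae = np.mean(np.abs(pred - y))
print(f"in-sample MAE: {mae:.2f}")
```

A model this small can be inspected coefficient by coefficient, which is exactly the transparency advantage discussed below.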
In addition to their performance, simple forecasting models can offer greater transparency, enabling forecasters and decision-makers to understand what the model captures and what it leaves out.
Myth 2: Machines will put human forecasters out of a job
Another longstanding narrative we encounter reflects the tension between man and machine. Some suggest humans will play a lesser role in data analytics and forecasting with the proliferation of ML and artificial intelligence.7 While many data tasks in insurance have been – or could be – automated, forecasting and decision making in the face of uncertainty still require human judgement.
Human expert judgement remains superior when data is not available or insufficient, for instance, when evaluating a start-up or analysing emerging market macroeconomic trends. The wisdom of the crowd also offers an edge compared to algorithmic selection when it comes to choosing the highest performing models, because humans have the advantage of more easily spotting and avoiding nonsensical, worst-in-class forecasting models.8
However, cognitive biases are abundant in forecasting. Experts tend to adjust upwards rather than downwards (optimism bias), are overly influenced by last year's results (anchoring), and neglect the possibility of exponential growth.
How can forecasters shield against these cognitive biases? Feedback on forecasters' ongoing performance is one of the most potent ways to self-correct for possible biases.9 Not only should forecasters systematically evaluate their outcomes, but they should also assess the method used to produce the forecasts in the first place. This is particularly useful when evaluating forecasts over time is not practical, such as very long-range forecasts – mortality or morbidity projections 50 years into the future – or one-off forecasts of mergers and acquisitions.10
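One concrete form of such feedback is scoring probabilistic forecasts against realised outcomes, for instance with the Brier score (the mean squared gap between predicted probability and what actually happened). The forecasts and outcomes below are invented purely for illustration:

```python
# Hypothetical forecast probabilities (e.g. "chance this claim exceeds
# a threshold") and the observed binary outcomes; numbers are invented.
forecasts = [0.9, 0.7, 0.8, 0.3, 0.6, 0.2, 0.95, 0.4]
outcomes = [1, 1, 0, 0, 1, 0, 1, 1]

def brier_score(probs, actuals):
    """Mean squared difference between forecast probability and outcome.
    0 is a perfect record; 0.25 is what always guessing 50% would earn."""
    return sum((p - a) ** 2 for p, a in zip(probs, actuals)) / len(probs)

score = brier_score(forecasts, outcomes)
print(f"Brier score: {score:.3f}")
```

Tracking a score like this over time gives forecasters the systematic performance feedback the research recommends, rather than relying on memory of past hits and misses.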
Myth 3: Improved forecasting capabilities invariably mean better business decisions
Furthering the science of forecasting leads to better business decisions, right? Not so fast. When insurance sales managers predict expected sales, incentives and culture play a role too. High sales projections can lead to higher bonus payments and better resource allocation. This incentive can act as a catalyst, but it can also backfire. A good example is the Wells Fargo scandal where employees had to achieve unrealistic sales targets – ideally sell eight products per customer – and resorted to fraudulently opening bank accounts on behalf of consumers without their consent.11 So a technically good sales forecast model is far from enough. How these forecasts will be used to motivate or even pressure employees is equally, if not more, important to consider.12
Progress in the science of forecasting also does not automatically mean business leaders will listen. Sadly, even where forecasters are believed, it does not necessarily translate into the right, or any, action.
Before 2020, some epidemiologists suggested a pandemic was on the cards.13 But even when leaders acknowledge such warnings, they need to overcome an "intention-action gap". In the face of other pressing day-to-day matters, leaders struggle to think about and act on future threats with a just-in-case, rather than solely just-in-time, mindset.14
Forecasting in insurance amid a pandemic
As the COVID-19 pandemic and countries' varied responses show, the future is truly uncharted. While insurance is built on making predictions of future events, we should remember history does not always signal what's next. This means that rare and extreme events like the 2008/9 crisis and COVID-19 pandemic and their fallout remain difficult to forecast. Big data and artificial intelligence can do much heavy lifting and offer early warning signals. Yet when it comes to decision-making in an uncharted domain, simple models and step-by-step methodologies for developing expert judgement boosted by behavioural economics can give insurers a much-needed edge when plotting the future.
1. Swiss Re Institute (2020). sigma 5/2020 – Machine intelligence in insurance. Available: https://www.swissre.com/institute/research/sigma-research/sigma-2020-05.html.
2. Makridakis, S. and Petropoulos, F. (2020). The M4 competition: Conclusions. International Journal of Forecasting, 36: 224-227.
3. Makridakis, S., Hyndman, R.J. and Petropoulos, F. (2020). Forecasting in social settings: The state of the art. International Journal of Forecasting, 36: 15-28.
4. Lazer, D. and Kennedy, R. (2015). What We Can Learn From the Epic Failure of Google Flu Trends. Wired. Available: https://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends/.
6. Lazer, D., Kennedy, R., King, G. and Vespignani, A. (2014). The Parable of Google Flu: Traps in Big Data Analysis. Science, 343: 1203-1205.
7. World Economic Forum (2018). The Future of Jobs Report 2018. Available: http://www3.weforum.org/docs/WEF_Future_of_Jobs_2018.pdf.
8. Petropoulos, F., Kourentzes, N., Nikolopoulos, K., and Siemsen, E. (2018). Judgmental Selection of Forecasting Models. Journal of Operations Management, 60: 34-46.
9. Makridakis, S. et al. (2020). Forecasting in social settings: The state of the art. International Journal of Forecasting, 36: 15-28.
10. Goodwin, P. (2017). Forewarned: a sceptic’s guide to prediction. Biteback Publishing.
11. Flitter, E. (2020). The Price of Wells Fargo’s Fake Account Scandal Grows by $3 Billion. New York Times. Available: https://www.nytimes.com/2020/02/21/business/wells-fargo-settlement.html.
12. Makridakis, S. et al. (2020). Forecasting in social settings: The state of the art. International Journal of Forecasting, 36: 15-28.
13. Osaka, S. (2020). These Scientists Saw COVID-19 Coming. Now They’re Trying to Stop the Next Pandemic Before It Starts. Mother Jones. Available: https://www.motherjones.com/environment/2020/05/these-scientists-saw-covid-19-coming-now-theyre-trying-to-stop-the-next-pandemic-before-it-starts/.
14. Heffernan, M. (2020). Uncharted: How to Map the Future. Simon & Schuster.