Making machine intelligence profitable

In a discussion at a conference a little over a year ago, a data scientist lamented that although their predictive model had identified a clear shift in the loss trend for a particular risk category, the shift still ended up costing that insurer a material amount of money.

Unfortunately for them, their business executives had trouble interpreting the analysis, and the derived output that ended up on decision makers' desks was mostly out of date by the time a serious discussion could occur. This is a clear example of failed machine intelligence (MI), which our recent SRI sigma on enterprise-scale MI explores in detail. The example illustrates several root causes of failure, including the lack of a robust, dynamic data engineering architecture, poor data visualization, and haphazard education of business leaders, along with weak processes for executive consumption of output.

The current state of MI in insurance

Most insurers already use some kind of conventional MI, such as generalised linear models (GLM). Many companies have also been experimenting with newer MI such as machine learning (ML) and artificial intelligence (AI). Yet despite these investments of time and effort, almost all MI-enabled system deployments are currently failing to profitably transform insurance companies.
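
To make the conventional end of that spectrum concrete, the sketch below fits a Poisson GLM for claims frequency using the open-source statsmodels library. The rating factor, exposures and claim counts are entirely hypothetical illustrations, not a description of any insurer's actual model.

```python
# Minimal sketch of "conventional MI": a Poisson GLM for claims frequency.
# All rating factors, exposures and claim counts below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy policy-level data: exposure in policy-years, one rating factor, claim counts
policies = pd.DataFrame({
    "exposure": [1.0, 0.5, 1.0, 0.75, 1.0, 0.25, 1.0, 0.5],
    "age_band": ["18-25", "26-40", "26-40", "41-60", "41-60", "60+", "18-25", "60+"],
    "claims":   [1, 0, 1, 1, 0, 0, 2, 1],
})

# Design matrix: one-hot encode the rating factor and keep an intercept
X = sm.add_constant(pd.get_dummies(policies[["age_band"]], drop_first=True)).astype(float)

# Poisson GLM with a log link; exposure enters as an offset so we model claim *rates*
glm = sm.GLM(
    policies["claims"],
    X,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
)
result = glm.fit()
print(result.summary())  # fitted relativities per age band
```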

However, the SRI sigma research noted above argues that insurers should not be tempted to walk away from MI-related investment or disband future MI development plans as a result of these lackluster initial results. Rather, we concluded that the focus should shift to what the industry can do to realise the potential of MI-enabled transformation. Learning from the few successful MI-enabled system deployments within the insurance industry and the many successes in other sectors such as Big Tech, we found that, first, investment should shift from the current focus on models/algorithms to data engineering. In parallel, companies deploying MI-enabled systems should also spend time redesigning their organisation and processes to leverage MI, matching use cases to particular MI categories, and finding ways to retain better-prepared talent.

In almost all cases, addressing the under-investment and misdirected investment in data engineering is the first necessary step to materially improve the productivity of MI-enabled systems. Moreover, every link in the insurance value chain—product development, marketing, underwriting, pricing, client servicing, claims management, portfolio analysis, asset-liability matching, and capital & liquidity management—is increasingly data driven. Growing use of ML models with a greater parameter capacity than their linear counterparts requires ever greater data volumes to function, which in turn compounds this issue. We estimate that successfully implementing enterprise-scale MI systems has the potential to improve insurers' profit margins by 200 to 400 basis points, leading to combined ratio improvements in the range of 6-9% within 2-3 years, according to Galytix Analysis. Data needs to be seen as a strategically important asset that is managed across the organisation. Regardless of where an insurer is today on its data engineering path, the time is rapidly coming when data backbones will be recognized as necessary to retain a basic level of competitiveness.

Inefficiencies and opportunities for improvement

As we highlight in our sigma, insurers are already investing heavily in data-related systems & projects. Due to poor or missing enterprise-wide data strategies and insufficient numbers of qualified data-system architects, data value chain management (i.e., identify, ingest, curate, credential, process, transform, analyse, visualize, use—and store for future processing) is still woefully inefficient. As a result of these circumstances, we have identified specific inefficiencies—and thus, opportunities for improvement:

  • Data professionals in insurance today spend 60%-80% of their time on making relevant data ready for analysis, i.e., cleaning/curating data themselves
  • Data engineering efforts tend to be duplicated across multiple functions
  • Insurers lack comprehensive data ontologies (defining relationships among data) and taxonomies (categories), limiting their ability to scale and operationalize data across the organization (see the sketch after this list)
  • 95% of available but unused data lie in unstructured forms (e.g., text, audio, and video); many institutions have reservoirs of untapped, unstructured data languishing in silos around their organisation, and/or could easily access publicly available sources
  • Useful historical data are often deleted, handicapping future development & back-testing efforts; storage is now cheap enough that data should never be deleted
  • Underinvestment in well-designed data visualization tools leads to poor use of data-derived insight—even when the underlying transformation and analysis is high quality
  • Most data value chain management systems are poorly designed, leading to inefficient MI use, excessive manual processes, and poor human-machine process integration
  • Anecdotally, the journey from data exploration to MI-enabled system deployment averages 12 to 24 months
  • Inadequate data-system architectures result in long lag times between data ingestion and data-derived actionable insight; often decisions are made using surprisingly stale data
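
To illustrate the ontology/taxonomy point above, here is a minimal sketch of how data elements could be categorised (taxonomy) and related to one another (ontology) in a machine-readable way. The entities, categories and relationships are hypothetical and far simpler than any production insurance data model.

```python
# Minimal sketch of the taxonomy/ontology idea from the list above.
# The elements, categories and relationship names are hypothetical.
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    category: str       # taxonomy: which business category the element belongs to
    source_system: str

@dataclass
class Relationship:
    subject: str
    predicate: str       # ontology: how two elements relate
    obj: str

# Taxonomy: flat categorisation of data elements
elements = [
    DataElement("policy_id",   "policy administration", "PAS"),
    DataElement("claim_id",    "claims",                "claims platform"),
    DataElement("loss_amount", "claims",                "claims platform"),
    DataElement("premium",     "pricing",               "rating engine"),
]

# Ontology: machine-readable relationships that let downstream systems
# join and reason over the data consistently across functions
ontology = [
    Relationship("claim_id",    "belongs_to",   "policy_id"),
    Relationship("loss_amount", "attribute_of", "claim_id"),
    Relationship("premium",     "attribute_of", "policy_id"),
]

def related_to(root: str) -> set[str]:
    """Collect every element reachable from a root record via the ontology."""
    out = {root}
    changed = True
    while changed:
        changed = False
        for rel in ontology:
            if rel.obj in out and rel.subject not in out:
                out.add(rel.subject)
                changed = True
    return out

print(related_to("policy_id"))  # {'policy_id', 'claim_id', 'loss_amount', 'premium'}
```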

Peering into the future, we see competitively successful insurance companies shifting their data management paradigm from one of disconnected and stagnating data lakes to networked data rivers. Such automated flows can dynamically update data-communication tools that enable understanding and action far better than static PowerPoint slides.

How to avoid MI pitfalls

In the past few years, myriad powerful tools, systems, open-source algorithms, and vendors have appeared to address many (if not all) of these challenges. Even so, any organisation that haphazardly attempts to implement a mix of new capabilities runs the risk of becoming lost in the thicket of choices, leading to Frankenstein systems that only breed more inefficiencies. What can well-informed institutions do?

  1. Develop a comprehensive data strategy
  2. Create process and organization to maintain dynamic data taxonomies and ontologies
  3. Find and hire quality data-system architects (these are not data scientists or developers)
  4. Emphasize end-to-end, enterprise-scale use cases and plans, not targeted algorithm/model pilots and prototypes
  5. Keep humans in the process loop in the context of organizational re-designs that account for opportunities afforded by newer MI
  6. Plug into data rivers—even when it is not clear how the data will be used
  7. Hire designers (these are also not data scientists or developers) to design compelling data visualization
  8. Implement dynamic data visualization and train non-technical executives to consume the derived output so that it regularly leads to actionable insight (see the sketch below)
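
As an illustration of recommendation 8, the sketch below renders a loss-trend view with the open-source plotly library from a refreshable data source. The data, column names and refresh mechanism are hypothetical stand-ins for an insurer's curated data feed.

```python
# Minimal sketch of a dynamic, self-updating loss-trend view (vs. static slides).
# The data source, column names and refresh cadence are hypothetical.
import pandas as pd
import plotly.express as px

def load_latest_losses() -> pd.DataFrame:
    # In practice this would pull from the curated data pipeline ("data river");
    # here it is stubbed with an illustrative frame.
    return pd.DataFrame({
        "accident_month": pd.date_range("2024-01-01", periods=6, freq="MS"),
        "risk_category":  ["motor"] * 6,
        "incurred_loss":  [1.00, 1.02, 1.05, 1.12, 1.21, 1.33],  # indexed losses
    })

losses = load_latest_losses()
fig = px.line(
    losses,
    x="accident_month",
    y="incurred_loss",
    color="risk_category",
    title="Indexed incurred losses by accident month (illustrative)",
)
fig.show()  # interactive chart; in a dashboard this would re-render on each data refresh
```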

Following all these recommendations is a tall order and impractical for many institutions in the near term. However, some of them can be quickly realised. For example, almost all insurers already invest heavily in "conventional" MI such as generalised linear models. This conventional MI informs risk selection, risk pricing, capital allocation, and risk management—albeit at too high a cost due to poor data-system architectures. Thus, well-placed data engineering investment will almost immediately improve returns on the conventional MI investment that many firms have already made. Even if comprehensive re-engineering of data architectures is not possible now, targeted efforts to improve data ingestion and curation using cost-effective, end-to-end systems (a few particularly good ones are now available) and investment in well-designed data visualisation are two practical recommendations almost all institutions can follow today. Comprehensive data strategy development and data engineering efforts will likely require more time and budget.
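
As a small illustration of what a targeted ingestion-and-curation step might look like in practice, the sketch below standardises, deduplicates and stores a raw claims extract using pandas. The file paths, column names and rules are hypothetical and do not refer to any particular vendor system.

```python
# Minimal sketch of an ingestion + curation step; paths, columns and rules are hypothetical.
import pandas as pd

def ingest_and_curate(raw_path: str, curated_path: str) -> pd.DataFrame:
    raw = pd.read_csv(raw_path)

    curated = (
        raw
        .rename(columns=str.lower)                          # consistent column naming
        .drop_duplicates(subset=["policy_id", "claim_id"])  # remove duplicate records
        .assign(
            loss_date=lambda d: pd.to_datetime(d["loss_date"], errors="coerce"),
            paid_amount=lambda d: pd.to_numeric(d["paid_amount"], errors="coerce"),
        )
        .dropna(subset=["policy_id", "loss_date"])           # drop unusable rows
    )

    # Store the curated extract for reuse; keep the raw file rather than deleting it
    curated.to_parquet(curated_path, index=False)
    return curated
```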

As noted above, and perhaps counter-intuitively, the path to MI success in the risk-transfer industry is not primarily about the algorithms and models that ultimately generate value-adding insights and predictions. Rather, today's challenge is efficiently fuelling that growing analytical capability and enabling business leaders to digest its growing output. This introductory blog will be followed by a three-part series offering our more detailed views on tackling those engineering and consumption challenges.

Tags

buildingsocietalresilience drivingdigitalinsurancesolutions insurancesolutions sigma ai digitalecosystems

Reference

sigma 5/2020 – Machine intelligence in insurance