Algorithms are only human too – opaque, biased, misled

Are computers providing better decisions? Increasing numbers of business processes are driven by algorithms, and algorithms are taking over advanced service functions at an ever faster pace. This development is eminently relevant for the insurance industry, since it has a dramatic effect on several risk environments, namely financial systems, the analysis of risk pools, and the pricing, underwriting and marketing of risk transfer.

Often, algorithms are portrayed as objective and free of human bias. But algorithmic applications are not infallible. Because “intelligent” algorithms base their actions on what they have learned from flawed human judgement, they can reproduce discriminatory effects. Discriminatory bias may also translate into defective modelling and prediction, bringing a two-fold risk to insurance and other industries.
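This “bias in, bias out” mechanism can be illustrated with a minimal, purely hypothetical sketch: a naive model that learns approval rates from biased historical decisions will faithfully reproduce the disparity in those decisions. The data, group labels and threshold below are invented for illustration, not drawn from any real underwriting system.

```python
# Hypothetical training data: (group, approved) pairs from past human
# decisions. Applicants in group "B" were approved far less often,
# regardless of individual merit -- i.e. the labels carry a bias.
historical = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def learned_rate(group):
    # The "model" simply memorises the historical approval rate per group.
    outcomes = [approved for g, approved in historical if g == group]
    return sum(outcomes) / len(outcomes)

def model_decision(group, threshold=0.5):
    # Bias in, bias out: the learned rule approves group "A" (rate 0.75)
    # and rejects group "B" (rate 0.25), replicating the human bias.
    return learned_rate(group) >= threshold

print(model_decision("A"))  # True
print(model_decision("B"))  # False
```

Real systems are far more complex, but the failure mode is the same: a model trained on prejudiced outcomes treats the prejudice as signal.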

The risk can be increased by non-transparent or black-boxed algorithmic services. For example, doctors remain responsible for errors arising from their diagnoses and procedures, even though they may not be able to assess the reliability of the AI device they are using. The problem of black-boxing may also affect parts of insurance underwriting supported by intelligent digital tools, and it challenges auditability. The black-box problem has given rise to a new research area dedicated to reconstructing the reasoning behind algorithmic findings.

Regulatory authorities are increasingly focusing on the “algorithmic economy”. There is also a growing ethical debate around this topic.

Up to now, there has been a lack of clear governance around the development and application of algorithms. That said, antitrust laws have already prompted the filing of charges relating to non-transparent pricing tools applied by airlines.

In October 2017, in the context of the new EU data protection regulation, the EU stressed that profiling and automated decision-making are prohibited unless certain conditions are met. This regulatory risk could have a drastic impact in the field of telematics, as well as in predictive or automated underwriting.

Potential impact

  • Significant impacts on investments in financial markets are likely if automated high-frequency trading can be abused.
  • As far as the insurance industry is concerned, algorithmic risk pooling, assessment and pricing have to be carefully analysed, as overreliance on models may prove costly. In underwriting, reliance on black-boxed underwriting tools may reinforce certain biases.
  • Demand for specialist skills, such as data science, will increase and may lead to shortages in the market.
  • Insurers and/or their clients may be affected by property losses or liabilities, triggering the respective covers.
  • Feedback loops can amplify false inputs, potentially leading to losses.
  • Regulators are paying increased attention to algorithmic business procedures, and pricing tools may come under scrutiny. Initial requirements are emerging to ensure algorithms are “compliant by design” (e.g. under data protection legislation and antitrust laws).
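The feedback-loop point above can be made concrete with a hypothetical sketch: if a pricing signal is fed back into its own next estimate with a gain above one (overreaction to the model's own prior output), a small initial error grows instead of being corrected. The gain value and iteration count below are illustrative assumptions only.

```python
# Hypothetical feedback loop: each round, the next price overreacts to
# the model's own previous output by a fixed gain factor.
def next_price(price, feedback_gain=1.2):
    # A gain > 1 means the loop amplifies rather than damps its input.
    return price * feedback_gain

price = 1.0  # a slightly wrong baseline estimate
for _ in range(10):
    price = next_price(price)

# After ten rounds the error has compounded to 1.2**10, roughly a
# six-fold amplification of the original input.
print(round(price, 2))
```

The same structure appears wherever model outputs re-enter the model's inputs, for example when algorithmically set prices shape the very market data the pricing model is retrained on.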