The future is bright; the future is humanoid chatbots?
Chatbots are increasingly replacing direct interaction channels within the insurance industry. Their functions and capabilities are currently limited, and they struggle with more complex tasks. However, they are evolving and becoming increasingly sophisticated, with an ability to understand human emotions not far over the horizon. Their convenience, 24/7 service and cost advantages suggest chatbots will become integrated across insurance interaction channels.
So, what is a chatbot?
A chatbot (bot for short) aims to conduct chats by mimicking unstructured conversations between humans. There are two widely used types of chatbot systems: (1) the rule-based chatbot and (2) the intent-based chatbot, also called an NLP bot (Jurafsky and Martin, 2020).
- The rule-based chatbot, as the name suggests, conducts a conversation by using predefined rules. The chatbot only comprehends buttons and cannot understand free chat – and is therefore sometimes called click-bot (Figure 1). Online insurer Lemonade offers its entire onboarding process for customers through a rules-based chatbot.
- NLP-based chatbots evaluate human conversations and are extremely data-intensive. Intent- or NLP-based bots work with Natural Language Processing (NLP) components. There are two subcategories of intent-based chatbots: (a) information retrieval and (b) machine-learned sequence transduction. While an information retrieval chatbot responds to a consumer's query by repeating an appropriate answer from natural text, a machine-learned sequence transduction system learns to respond to a question with an answer (Jurafsky and Martin, 2020). Users can also type their questions into rule-based chatbots. If the question is already stored in the chatbot system or, as is often said in the chatbot world, 'the chatbot is already trained for this user question', the bot can answer the user's question directly.
Figure 1: Overview of the functionality of rule-based and NLP-based chatbots
Source: Swiss Re Institute
Developers are also deploying hybrid bots - chatbots that begin NLP-based with an open question and then switch to rule-based flows as the conversation progresses.
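The hybrid pattern can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the intents, keywords and button flows below are invented for the example, and the keyword matcher stands in for a real NLP intent model.

```python
# Hypothetical sketch of a hybrid chatbot: a simple intent matcher handles
# the opening free-text question (standing in for a real NLP model), then
# the flow switches to a rule-based, button-driven dialogue.

INTENTS = {
    "claim": ["accident", "damage", "claim"],
    "quote": ["price", "quote", "premium"],
}

RULE_FLOWS = {
    "claim": ["What was damaged?", "When did it happen?"],
    "quote": ["Which product? [Home] [Car] [Travel]"],
}

def match_intent(user_text: str):
    """Naive keyword-based intent matching on the opening message."""
    words = user_text.lower().split()
    for intent, keywords in INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return None

def start_conversation(user_text: str) -> list:
    intent = match_intent(user_text)
    if intent is None:
        return ["Sorry, I didn't understand. Please choose: [Claim] [Quote]"]
    return RULE_FLOWS[intent]  # switch to the predefined rule-based flow
```

Once an intent is recognised, the remaining steps are clicks on predefined buttons, which is exactly where rule-based bots are reliable.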
The following table provides an overview of key differences between chatbot systems (Table 1).
Table 1: Differences between rule-based and NLP-based chatbots
Source: Weber (2021) (accessed: 9 November 2021)
Enormous market growth expected
The chatbots are coming and many are already here. Emirates Vacation integrated a chatbot into its display ads and increased interaction rates by 87%. Amtrak's chatbot achieved a return on investment (ROI) eight times above expectation and increased revenue per booking by just under 30%. This suggests that narrowly focused bots are more successful than broad, quasi-"omniscient" virtual assistants (CB Insights, 2021). The chatbot industry is moving through early-stage growth, with investment now concentrated in mid- to later-stage rounds as the technology matures.
The global market for intelligent virtual assistants was estimated at around USD 2.5 billion in 2019 and is experiencing strong market growth, particularly in the US (see Figure 2). Globally, the market is forecast to be worth nearly USD 25 billion by 2027, with a compound annual growth rate of 37% (Polaris Research, 2020). Market growth will be fuelled by further automation in sectors such as customer care, and by integration into daily environments such as the household, mobility, finance and health. Efficiency and productivity drivers will further catalyse this growth. On the supply side, increasingly sophisticated technology will have a positive impact, both vertically among existing users and horizontally across different industries.
Figure 2: Expected global market development of intelligent virtual assistants
Source: Polaris Research (Jan 2020) Intelligent Virtual Assistant Market Growth, IVA Industry Report, 2020-2026
In Asia, particularly in the online commerce and marketing sector, personality bots are being developed, based on the virtual presence of celebrities, brand ambassadors or social influencers (see also Grapee, 2021; Unity, 2021; Insider, 2021). Samsung, for example, has begun its exploratory "Neon.life" project (see also Neon.life on Facebook), which could significantly change the world of work as we know it today.
It is probably only a matter of time until these virtual personalities become embedded in companies' entire interaction chains and thereby become more involved in value creation as an integral part of the brand. At the same time, BigTechs with well-known assistants such as Alexa, Google Now, Cortana, Siri, Bixby, Xiaoyu and Genie are also continuously developing their assistants, which will further intensify competition. Insurance bots will ultimately have to interact, coordinate and communicate with these BigTech assistants. This will be easier for purely transaction-oriented tasks and use cases.
An impressive Google Duplex prototype has shown how bot interaction can be very natural and almost human. However, humans may increasingly struggle to know when they are not talking to a human; and regulation of bots is currently light (see also Lamo & Calo, 2019). Finally, while many consumers in many situations accept using bots, they may not choose to do so in more sensitive or personal matters.
What do consumers think of chatbots?
Market research shows that most consumers still prefer to interact with a human rather than a chatbot. A study conducted by CGS (2019) shows that 86% of consumers prefer a human to a chatbot, and as many as 71% of consumers said they would not choose a company if a human employee was not available. Intercom (2021) came to similar conclusions: 87% of consumers surveyed would rather speak to a human than a chatbot during short interactions.
One possible explanation for consumers' reluctance could be that half of consumers believe chatbots cannot solve their problems (CGS, 2019). A report by eMarketer (2018) shows that almost half of all respondents (48%) believe chatbots give too many unhelpful answers; almost 40% say chatbots redirect them to self-service FAQs; and almost 30% believe chatbots only make bad suggestions (eMarketer, 2018). This difficulty in having a productive interaction with a chatbot is also reflected in consumers' emotional associations with chatbots. When asked what they think of chatbots, 57% of consumers expressed negative sentiments such as "crazy", "impersonal", "irrelevant", "premature", "useless" or "doomed", to name but a few (CGS, 2019). Privacy and trust are therefore important factors in customer interactions; even more crucial are the perceived usefulness and convenience of a chatbot, which drive its usage and adoption (Cardona et al., 2021).
How are chatbots used in the insurance industry today?
Our studies showed most insurance chatbots are currently used in the pre-purchase phase, including: marketing; providing offers; calculating premiums; product recommendation; or consumer information/education. Some chatbots were employed in the purchase phase, encompassing: issuing policies; making payments; or uploading documents. A small number are employed in the post-purchase phase, such as processing claims. Chatbots currently cannot fulfil many customer requests themselves, but they can direct customers to where they need to input something, for example a change of address. Within the insurance sector, most chatbots are used in property insurance. Only around a fifth of the total were employed in life and health (L&H).
Chatbots suit the pre-purchase phase because the tasks there are typically repetitive, simple and limited in scope, such as creating a quote or answering frequently asked questions (FAQs). Repetitive tasks can be reduced to buttons, or provide a relatively simple basis for learning and hence prediction by NLP bots. The relative simplicity of property insurance lends itself to bots. Casualty and L&H are both more complex in structure. The post-purchase phase of insurance – such as a claims damage report – is far more nuanced, granular and individual (see Figure 3).
Figure 3: Use cases and embedding chatbots in customer experience in the insurance industry
Source: Swiss Re Institute
As suggested, most of the chatbots we studied are simple. Their functionality was limited to generating a simple quote; relocating the customer to the right place on a website; or recalling an FAQ. They cannot yet engage in a full and unstructured conversation.
Why can't a chatbot do complex tasks with unlimited scope yet?
Some BigTechs have tried to develop chatbots designed to answer anything. Facebook's chatbot "M" was discontinued in January 2018 after it could fulfil only 30% of the requests made (CB Insights, 2021; Simonite, 2017). The other 70% of tasks were completed by humans (Simonite, 2017). The experience with chatbot M was that complex requests were rarely repeated, giving relatively little data to learn from (CB Insights, 2021; Simonite, 2017). Google's 'Allo' faced similar problems and was discontinued in December 2018 (CB Insights, 2021).
These 'all-answer' chatbots not only had insufficient learning data; they could only mimic conversations rather than understand free speech. Both Facebook and Google have made some of their natural language understanding tools open source in the hope of pushing the technology further with the help of a wider audience (Metz, 2016). The AI behind chatbots struggles with words that sound identical but are spelt differently (knight and night), and with identical words that have different meanings in context (might as in possibility, might as in power) (Kumar et al., 2011; Knight, 2016). Language researchers also note that words are learnt in conjunction with vision, as babies associate a word with observations in their environment (Knight, 2016; Finley, 2016). Integrating visual and linguistic systems could be crucial to further developing natural language processing and deep learning capabilities. A further factor in artificial language learning will be emotional and social intelligence.
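The word-sense problem can be made concrete with a toy example. This is purely illustrative (the lookup table is invented): a matcher keyed on the surface form of a word stores only one sense per token, so it returns the same reading for two sentences in which "might" means entirely different things.

```python
# Toy illustration of word-sense ambiguity: a surface-form lookup
# ignores context, so it cannot tell "might" (possibility) from
# "might" (strength).

def naive_sense(sentence: str) -> str:
    # One entry per token; context is never consulted.
    senses = {"might": "modal verb (possibility)"}
    for token in sentence.lower().rstrip(".!").split():
        if token in senses:
            return senses[token]
    return "unknown"

# Different meanings, identical output:
a = naive_sense("It might rain tomorrow.")         # correct reading
b = naive_sense("She pushed with all her might.")  # wrong reading, same answer
```

Resolving this requires modelling the surrounding words, which is precisely what makes free-speech understanding data-hungry.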
Current research initiatives are trying to overcome chatbot limitations. A joint research project between Stanford University and Facebook aims to solve the problem that many consumers find chatbots too generic and insufficiently intelligent. The goal is to develop a self-improving chatbot that can evaluate consumer conversational feedback and learn from interactions that go well (Davis, 2019; Wiggers, 2019). Another interesting area of research is how to make chatbots more "human-like".
Researchers at Stanford University are currently constructing a chatbot that can imitate popular television show characters, giving the chatbot a personality and identity (Nguyen, Morales and Chin, 2017). A further research project seeks to give chatbots more human-like characteristics by equipping them with the ability not only to understand, but also to show, emotions (Wei et al., 2019). The ability of chatbots to recognise different emotional states and provide appropriate responses could also be relevant to chatbots being employed for mental illness (Vaidyam et al., 2019; Clark and Althoff, 2016). A further research challenge is reducing the volume of data needed to teach a chatbot. One promising research project uses machine learning to translate lost languages. With limited datasets available for lost or rarely used languages, this new approach could help find a way to train machines on much smaller datasets (Luo et al., 2019).
How can chatbot maturity levels be characterised?
In recent years, chatbots have developed in two distinct directions. The first, communication competence, manifests itself primarily in functional skills. There are four development stages in language learning, which could lead from basic communication to personalised communication. The second, emotional and empathic capability, is a very young field of research and is intricately linked with local cultural factors. The challenge is to move from functional interaction towards an independent personality. Both development directions are mostly AI-based (Figure 4).
Figure 4: Chatbot competency maturity model
When chatbots become more functionally competent
There are also differences among AI chatbots. Simple versions do not yet work with developed NLU (Natural Language Understanding) but can only read individual keywords from the user input and then give a suitable answer where possible. To avoid incorrect answers, this type of AI bot is used for limited use cases in which the set of keywords the bot needs to know remains manageable.
The next step for a chatbot is to understand contexts within a dialogue. This allows the chatbot to conduct situational communications. These types of chatbots are mainly used when the use case cannot be restricted exclusively to a single topic area. One such chatbot is used by Helvetia Switzerland.
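This kind of dialogue-context tracking can be sketched minimally. The class below is an invented illustration, not Helvetia's system: it simply remembers the topic of an earlier turn so that a follow-up question can be answered situationally.

```python
# Minimal, illustrative sketch of dialogue context: the bot stores the
# topic of the previous turn, so a follow-up that never names the
# product can still be resolved.

class ContextBot:
    def __init__(self):
        self.topic = None  # last recognised topic persists across turns

    def reply(self, text: str) -> str:
        t = text.lower()
        if "home" in t:
            self.topic = "home insurance"
        elif "car" in t:
            self.topic = "car insurance"
        if self.topic is None:
            return "Which product are you asking about?"
        if "deductible" in t:
            return f"The deductible for your {self.topic} is shown in your policy."
        return f"Happy to help with your {self.topic}."

bot = ContextBot()
bot.reply("I have a question about my home insurance.")
bot.reply("What is my deductible?")  # resolved via the remembered topic
```

Without the stored `topic`, the second question would be unanswerable; keyword-only bots at the previous maturity level have exactly this limitation.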
The fourth level of chatbots is still in development. Having mastered personalised communication, these chatbots learn automatically during communication. They "notice", so to speak, whether a user was satisfied or dissatisfied with an answer, and this influences how they answer the same question in the future. Table 2 provides an overview of the stages of chatbot maturity.
Table 2: Chatbot maturity model of communication competence
Note: NLU (Natural Language Understanding)
Source: Swiss Re Institute, Sophie Hundertmark
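The self-improving mechanism at the fourth maturity level can be sketched as a simple feedback loop. The class and data below are hypothetical: each candidate answer carries a score that is nudged up or down by explicit user feedback, so a poorly rated answer loses out over time.

```python
# Illustrative sketch of feedback-driven learning: the bot keeps a
# score per (question, answer) pair and adjusts it when the user
# signals satisfaction or dissatisfaction.

from collections import defaultdict

class LearningBot:
    def __init__(self, candidates):
        self.candidates = candidates      # question -> list of candidate answers
        self.scores = defaultdict(float)  # (question, answer) -> running score

    def answer(self, question: str) -> str:
        # Pick the highest-scoring candidate (ties go to the first listed).
        return max(self.candidates[question],
                   key=lambda a: self.scores[(question, a)])

    def feedback(self, question: str, answer: str, satisfied: bool):
        self.scores[(question, answer)] += 1.0 if satisfied else -1.0

bot = LearningBot({"cancel policy": ["Use the app.", "Call our hotline."]})
first = bot.answer("cancel policy")
bot.feedback("cancel policy", first, satisfied=False)
bot.answer("cancel policy")  # a different answer is now preferred
```

Real systems use far richer signals than a thumbs up/down, but the principle is the same: the conversation itself becomes training data.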
When chatbots become more and more 'human'
Understanding the qualities that make a chatbot more 'human' requires a short digression into research around "empathy" or "compassion": the "ability and willingness to recognise, understand and empathise with sensations, emotions, thoughts, motives and personality traits of another person" (Wikipedia). The literature speaks of emotional, cognitive and applied empathy, that is, the ability to understand an interaction and react situationally (see also de Waal, 2007). The development of empathetic chatbots is still in its infancy but has recently gained more interest, reflecting technological progress towards "empathetic feeling" (Wartburton, 2007), as the empathetic chatbot Wysa is demonstrating in supporting mental health (Inkster et al., 2018).
It is conceivable that bots with empathetic sensibilities could develop their own personalities that could be adapted to their brand values. This remains a big step and is not yet a commercial reality. Training emotional skills is demanding, especially since (visual) emotion is perceived and interpreted differently across cultures (see also Ringeval et al., 2019). The maturity levels of chatbot personality are presented in Table 3.
Table 3: Chatbot maturity model of empathy competence
Source: Swiss Re Institute, Sophie Hundertmark
The development of empathic bots is currently exploratory in nature. Chatbot requirements remain largely functional, and the ROI case for greater chatbot empathy has yet to be made. However, empirical studies are demonstrating the potential worth of empathic bots (see also Naotus et al., 2020). Casas et al. (2021) have shown that empathic chatbots can outperform a benchmark bot and even human-generated responses in terms of perceived empathy.
The march of technology may also pressure companies to make customer interactions more "human". It may be a balancing act to what extent empathic aspects can be incorporated into commercial chatbots.
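A first, very modest step towards such "human" interaction is adapting tone to a detected emotional state. The sketch below is a toy (the word list and wording are invented, and real systems use trained sentiment models, not lexicons): it flags negative wording and prepends an empathetic acknowledgement to the functional answer.

```python
# Toy illustration of emotion-adaptive responses: detect negative
# wording with a tiny lexicon and soften the reply's tone.

NEGATIVE_WORDS = {"frustrated", "angry", "upset", "terrible", "worried"}

def detect_sentiment(text: str) -> str:
    tokens = {w.strip(".,!?") for w in text.lower().split()}
    return "negative" if tokens & NEGATIVE_WORDS else "neutral"

def empathetic_reply(user_text: str, answer: str) -> str:
    # Acknowledge the emotion before giving the functional answer.
    if detect_sentiment(user_text) == "negative":
        return "I'm sorry to hear that. " + answer
    return answer

empathetic_reply("I'm really frustrated, my claim is stuck!",
                 "Let me check the status of your claim.")
```

The commercial balancing act mentioned above shows up even here: the acknowledgement must feel genuine rather than formulaic, which is where lexicons fail and trained models become necessary.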
Bots will be integrated as a highly competent alternative interaction channel along the value chain
Consumer expectations are changing. The 24/7 economy is here across many sectors, and it will spill over into the insurance industry. That always-on expectation can only be met with the use of chatbots. These direct customer interfaces will have to further develop their communication skills to enable personalised interaction. The ability of a chatbot to be empathetic and 'human' could be a valuable differentiator for an insurer vis-à-vis the competition. Furthermore, the chatbot's ability to support other channels as an alternative knowledge source will add further value.
However, chatbots are still in their infancy and are unlikely to become humanly empathetic any time soon. Even deploying AI with the ability to learn from customer interactions and improve services will bring organisational challenges. Nevertheless, change will come. Consumer expectations will shift; the data needs of companies will evolve; new market participants will enter with new levels of technology; and chatbot avatars will become more ubiquitous in social media and online commerce. Those insurers ready for the coming changes will be in a prime position to exploit them. It's time to chat.