Autonomous Decision-Making: Assessing the Technology and its Impact on Industry and Society

Transparency is needed not only in the technical aspects of the algorithms – which is possible only up to a certain point – but also in the driving forces behind them.


“The need for more transparency” was one of the key motivations for the workshop. Participants agreed that more transparency is vital to build the level of trust needed to accept and integrate the technology, especially for autonomous decision-making. For the near future, an ideal scenario would be hybrid decision-making systems – for example, running computer-based systems in parallel with human decision-makers and comparing the results.
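Such a parallel-comparison setup can be sketched in a few lines. Everything here is illustrative – the rule, the field names, and the decisions are invented for the example, not taken from any production system:

```python
# Minimal sketch of a hybrid decision-making loop: a model and a human
# decide in parallel, and disagreements are flagged for review.
# The approval rule and all data below are hypothetical.

def model_decision(claim):
    # Hypothetical rule: approve small claims automatically.
    return "approve" if claim["amount"] <= 1000 else "reject"

def compare(claims, human_decisions):
    """Run the model alongside recorded human decisions and
    collect the cases where the two disagree."""
    disagreements = []
    for claim, human in zip(claims, human_decisions):
        machine = model_decision(claim)
        if machine != human:
            disagreements.append((claim["id"], human, machine))
    return disagreements

claims = [{"id": 1, "amount": 500},
          {"id": 2, "amount": 5000},
          {"id": 3, "amount": 800}]
humans = ["approve", "approve", "approve"]

# Claim 2 is flagged: the human approved it, the model rejected it.
print(compare(claims, humans))
```

The flagged cases are exactly where human review – and an explanation from the system – would be most valuable.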

AI as "a collective weird thing"?

The first part of the event assessed the scope and limits of the technology. In his opening talk, ETH Professor Thomas Hofmann defined intelligence as the ability to understand, or to make sense, and to act accordingly. He characterised intelligence as the "crown of evolution", with the ultimate goal of producing ever more intelligence; on this journey, we humans might be only an intermediate product. Machine intelligence as such is not bound to mimic human skills. Far from it, Hofmann expects that machines will not stop at the human level but will enter new dimensions of intelligence, making use of networked intelligence. With today's knowledge, we may only be able to think of future AI as a “collective weird thing”. In computer vision, machines have already achieved forms of autonomy in perception and superhuman face recognition. Combined with recent achievements in language understanding, such as human voice recognition, machine reading – linking text with knowledge representations – allows some form of reasoning across billions of documents. Recent developments in machine translation, and even more recently in reinforcement learning, show the enormous potential lying ahead of us.

In the second talk, Prof. Thomas Hills from Warwick’s Department of Psychology showed that interacting with social media – clicking, for example – actually means “acting”, and that by using these platforms and processing the information they deliver, we in turn change our personality. Social information is thus a huge influence on our personality, and the algorithms co-evolve along these lines. This creates a feedback process which, most importantly, can also amplify our biases; social risk amplification is one example. According to Hills, we must be aware that whenever we create data (with our biases in it) and have algorithms process these data, the algorithms take over our biases. On building trust in algorithms for decision-making, he pointed to the psychological fact that people want narratives for doing things: in the end, we need reasons to justify an action. Reasons can only be deduced from a system that has a certain level of transparency about itself and about the mechanisms in which it is embedded.
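The amplification mechanism can be made concrete with a toy simulation: an algorithm that always recommends the most-clicked item turns a tiny initial human bias into a large one. The numbers and the winner-take-all recommendation rule are invented for illustration; they are not from the talk:

```python
# Toy feedback loop: clicks feed the recommender, the recommender
# steers further clicks. A 51/49 starting bias toward item A grows
# into a strong skew. All figures are illustrative.

clicks = {"A": 51, "B": 49}   # slight initial human bias toward A

for _ in range(100):
    # The algorithm recommends whichever item has more clicks so far...
    recommended = max(clicks, key=clicks.get)
    # ...and users tend to click what is recommended.
    clicks[recommended] += 1

share_a = clicks["A"] / sum(clicks.values())
print(f"A: {clicks['A']} clicks, B: {clicks['B']} clicks, "
      f"share of A = {share_a:.2f}")  # share of A rises from 0.51 to 0.755
```

The algorithm never "decides" to prefer A; it merely mirrors the data it is fed, which is exactly how human biases in the data become algorithmic biases.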

In his talk, Prof. Christoph Hölscher, Chair of Cognitive Science at ETH Zurich, added that machines today may already “understand” a lot, but in most situations this understanding is highly context-specific, and whenever the context switches, the machine gets into trouble. Through human interaction and the feedback processes in between, the computer also helps us understand what is going on in the human mind. Concerning the acceptance of autonomous decision systems, Hölscher pointed to the psychological fact that people want to predict, understand and be in control, and do not want to be limited in their actions. He also warned that active filtering by an algorithm – for example, withholding information about a specific person in a social network – might lead us to conclude that this was the deliberate action of that person, when in fact it was the algorithm that “unfriended” us. He underlined that we still have problems separating human action from computer-based algorithmic action, and that more transparency is necessary here as well.

Nico Lavarini, Chief Scientist at Expert Systems, gave insights into the state-of-the-art integration of AI in insurance business processes such as claims management and contract matching, as well as property and risk evaluation. Reduced subjectivity, time efficiency and cost savings are favourable outcomes of this integration in cases where the scope of the problem can be framed accordingly. However, a global evaluation of the quality of the integration is complex, owing to cognitive biases reflected in variable inter-rater agreement, sometimes of only about 60%. We should therefore be realistic about how much efficiency we can expect from technological innovations that are ultimately based on human input.
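The simplest version of such an inter-rater figure is percent agreement: the share of cases on which two human raters give the same label. The labels below are made up to illustrate a 60% agreement; the talk did not specify how its figure was computed:

```python
# Percent agreement between two human raters on the same set of cases.
# Low agreement among humans caps how well a model trained on their
# labels can be said to perform. Labels here are invented.

def percent_agreement(rater_a, rater_b):
    """Fraction of items on which both raters gave the same label."""
    assert len(rater_a) == len(rater_b), "raters must label the same items"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_1 = ["high", "low", "high",   "medium", "low",
           "high", "medium", "low", "high",   "low"]
rater_2 = ["high", "low", "medium", "medium", "high",
           "high", "low",    "low", "high",   "medium"]

print(percent_agreement(rater_1, rater_2))  # 0.6
```

More robust measures such as Cohen's kappa also correct for agreement by chance, which matters when one label dominates.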

A painful learning phase for the industry

To close the first part, Prof. Patrick Cheridito, Chair of Mathematics at ETH and member of the Risk Center, hosted a panel discussion. Martin Schürz, Head of Engineering Services at Swiss Re, acknowledged that the “learning phase” was painful and is still ongoing, and that acceptance of the technology has to be increased. He underlined that more transparency would be necessary at first, but that after a while, once a certain level of trust is reached, people will stop caring about explanations. Still, more effort has to be invested in understanding the specific nature of problems in order to make them “solvable” by the computer. Given that businesses have only limited time and money, spending effort on trying to solve, or even frame, a single problem with an uncertain outcome is itself risky. Olivier Verscheure, Executive Director of the Swiss Data Science Center, gave insights into the centre's new platform for delivering the infrastructure and knowledge needed to solve such problems. Asked about future perspectives, Prof. Hills suggested we ask ourselves: What do we really need? What improves our lives, and how do we create products that increase wellbeing?


Historical perspective – is it really different this time? 

The lunch break featured live demonstrations by ETH spin-offs, followed by the second part of the conference, on society and the future of work.

Daniel Castro from the Information Technology and Innovation Foundation (ITIF) in Washington framed the impact of the “algorithm economy” on industries, firms, and workers. Putting the recent debate into historical perspective, he showed that technology has always had an impact on workers. Although the slogan “this time is different” is attractive, history shows a rather stable pattern: many times in the past, people feared that automation would eliminate workers, yet none of these fears fully materialised. On the contrary, technological innovation has consistently boosted productivity and created new tasks and millions of new jobs. According to the ITIF, AI is expected to create 5 to 6 trillion dollars of value annually by automating knowledge work, and much of AI will boost quality rather than eliminate jobs. Inevitably, some jobs will be eliminated, but many occupations – brick masons, machinists, dental laboratory technicians, social science researchers, firefighters – remain very difficult to automate. From a macro perspective, Castro explained, developed countries need higher productivity to maintain their current standard of living. For example, the EU's ratio of working-age people to older people is projected to drop from 3.5 to 2.2 by 2040; in turn, productivity would have to increase by 13% to keep workers' after-tax incomes from declining. Governments have to provide an efficient framework that allows innovation to happen rather than hinder it.
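A back-of-the-envelope model shows how a figure of this order arises. Assume each retiree receives a pension equal to a fixed share of one worker's gross output, funded by a tax on workers; the pension share of 0.57 below is an assumption chosen for illustration, not a parameter from the talk or from ITIF's actual model:

```python
# Rough check of the productivity claim: if the worker-to-retiree
# ratio falls, each worker funds a larger pension bill, so gross
# output per worker must grow to keep after-tax income constant.
# The pension share (0.57) is an illustrative assumption.

def required_productivity_growth(ratio_now, ratio_later, pension_share):
    """Gross-output growth needed to keep after-tax income per worker
    constant when the worker-to-retiree ratio falls."""
    net_now = 1 - pension_share / ratio_now      # after-tax share of output today
    net_later = 1 - pension_share / ratio_later  # after-tax share later
    return net_now / net_later - 1

growth = required_productivity_growth(3.5, 2.2, 0.57)
print(f"required productivity growth: {growth:.0%}")  # ≈ 13% under these assumptions
```

With these stylised assumptions the required growth comes out near the 13% Castro cited, though his figure will rest on a more detailed fiscal model.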


AI as "a collective weird thing"?

The first part of the event assessed the scope and limits of the technology. In his opening talk, ETH Professor Thomas Hofmann, defined intelligence as the ability to understand or to make sense and to act accordingly. Characterising intelligence as the "crown of evolution" with the ultimate goal to get more intelligence, on this journey, we, as humans might only be an intermediate product. Machine intelligence as such is not bound to mimic just human skills. Far from that, Hofmann expects that machines will not stop at the human level but will enter into new dimensions of intelligence, making use of a networked intelligence. With our knowledge of today, we may only think about AI in the future as a “collective weird thing”. In terms of computer vision, machines have already achieved forms of autonomy by perception and super human face recognition. Combining this with recent achievements in language understanding, like human voice recognition, machine reading by linking text with knowledge representation allow some form of reasoning across billions of documents. Recent developments in machine translation and more even more recently in reinforcement learning show the enormous potential lying ahead of us.

In the second talk, Prof. Thomas Hills from Warwick’s Department of Psychology showed that interactions in social media, for example, clicking, actually means “acting” and by using these platforms, or processing the information from social media, we, in turn, change our personality, too. Therefore, social information is a huge influence on our personality and the algorithms are co-evolving along these lines. This creates a feedback process which, most importantly, might also amplify our biases. Social risk amplification is one example. According to Hills, we must be aware of the fact that whenever we create data (with our biases in it), and have algorithms processing these data, the algorithm takes over our biases. When we want to build trust to using algorithms for decision-making, he pointed to the psychological fact that people want narratives for doing things. In the end, we need reasons to justify an action. Reasons are only deductible from the system if it has a certain level of transparency about itself and about the mechanisms in which it is placed.

In his talk, Prof. Christoph Hölscher, Chair of Cognitive Science from ETH Zurich added that machines today may “understand” a lot already but in most situations this understanding is very context specific and, whenever the context switches, the machine gets into trouble. In terms of human interaction and feedback processes in between, the computer also helps to understand what is going on in the human mind. Concerning the acceptance of autonomous decision systems, Hölscher pointed to the psychological fact that people want to predict, understand and be in control and do not want to be limited in their actions. He also warned of active filtering by an algorithm, e.g. refraining information on a specific person in a social network, might lead us to the conclusion that this was the willing action of the specific person. However, it might have been the algorithm who “unfriended” us from the person. He underlined that we have still problems to separate human action from computer based algorithmic action and more transparency is also necessary here.

Nico Lavarini, Chief Scientist from Expert Systems gave insights into state of the art integration of AI in insurance business processes like claims management and contract matching, as well as property and risk evaluation. Reduction of subjectivity and time efficiency, as well as cost saving are favourable outcomes of the integration in cases when the scope of the problem can be framed accordingly. However, a global quality evaluation of the success of the integration is complex, due to cognitive biases because of variable inter-rater agreements, sometimes about only 60%. Therefore, we should be aware of how much efficiency we can actually expect to achieve using technological innovations, which are ultimately based on human input.   

A painful learning phase for the industry

To close the first part, Prof. Patrick Cheridito, Chair of Mathematics at ETH and Member of the Risk Center, hosted a panel discussion. Martin Schürz, the Head of Engineering Services at Swiss Re, acknowledged that the “learning phase” was painful and is still ongoing. However, the acceptance of the technology has to be increased. He underlined that more transparency would be necessary in the first place, but after a while, when a certain level of trust is reached, people will stop caring about explanation. Still more effort has to be invested in understanding the specific nature of problems, in order to make them “solvable” by the computer. Taking into account that business has only limited capacity of time and money, spending time and investing effort on trying to solve, or to frame, one single problem with an uncertain outcome is also risky in itself. Olivier Verscheure, Executive Director of the Swiss Data Science Center gave insights into their new platform to deliver the necessary infrastructure and knowledge to solve problems. Asked about future perspectives, Prof. Hills suggested we ask ourselves: What do we really need? What improves our lives, and how do we create a product to increase wellbeing?   


Historical perspective – is it really different this time? 

Live demonstrations of ETH Spin-offs featured in the lunch break, followed by the second part of the conference, on society and the future of work.

Daniel Castro from the Information Technology and Innovation Foundation (ITIF), Washington, framed the impact of the “algorithm economy” on industries, firms, and workers. Putting the recent debate in a historic perspective he revealed that the impact of technology on workers has been always there. Although the slogan “this time is different” is attractive now, historically, we observe a stable environment. Many times in the past, he added, people expressed their fears that automation will eliminate workers. However, none of these cases became a reality that matched those fears. In contrast, technological innovation always boosted productivity and created new tasks for millions of new jobs. As such, according to the ITIF, AI is expected to create 5 to 6 trillion dollars annually by automating knowledge work, and much of AI will boost quality, not eliminate jobs. Inevitably, some jobs will be eliminated, but most occupations, like brick masons, machinists, dental laboratory technicians, social science research scientists, firefighters, are still very difficult to automate. From a macro-perspective, Castro explained, developed countries need higher productivity to maintain the current standard of living. For example, the EU working age-to-older person ratio drops from 3.5 to 2.2 by 2040. In turn, productivity would have to increase by 13% to keep worker after-tax incomes from declining. Governments have to provide the efficient framework for allowing innovation to happen and not be hindered in the end.