Blame your robot – emerging artificial intelligence legislation

The rising importance and capability of robotics, artificial intelligence (AI) and self-learning machines pose questions about the changing status of intelligent machines. If the latter are able to assume increasing numbers of human tasks, if they can learn and adapt, and ultimately make decisions for human beings, should they still be treated as inanimate objects? Could a robot manufacturer, or indeed a robot or an autonomous car itself, be held accountable for its decisions, and thus be made liable for any damage caused?

Is it reasonable to establish a new legal personality for autonomous systems, such as ‘electronic persons’ with specific rights and obligations, and make them liable for harm caused to third parties?1 Should, in such a case, the manufacturers of these intelligent machines be relieved of responsibility for injuries and damage caused by the machine’s autonomous decisions?

Discussions about future liability regimes and financial security instruments tailored to respond to the liability risks associated with autonomous systems in general, and with robotics/AI/machine learning in particular, have gained momentum in the European Union. Indeed, the EU is one of the industrial
champions in the development and production of such intelligent systems.

The current EU product liability framework focuses on the strict liability of the manufacturer/importer for bodily injury and property damage caused by a defect in the product. The ongoing review of product liability might bring major shifts to the very concept of liability, and could open gaps in consumer protection.

These questions illustrate the ambiguities raised by artificial intelligence. Uncertainty about how stringent future regulations will be is underlined by differing cultural attitudes toward machine intelligence and robotics in different parts of the world.

Potential impact:

  • Discussions on future liability regimes gain importance with the take-off of autonomous systems and robotics in general, and with artificial intelligence and machine learning in particular.
  • Shifts from the current liability regimes could leave consumers more vulnerable.
  • Implementation of mandatory financial security requirements may negatively impact the development of voluntary insurance solutions.
  • AI and increased capabilities of robots highlight the questions regarding the role of human decision making in automated processes and the ways in which human ethical frameworks relate to non-humans.

This text is an excerpt from the "Swiss Re SONAR, New emerging risk insights", June 2017.

_______________________________________________

1 http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+REPORT+A8-2017-0005+0+DOC+PDF+V0//EN
