by Benjamin Teng, TLF

Artificial Intelligence (“AI”) is the ability of machines to imitate human decision-making and behaviour.[1] AI might imitate intelligent human behaviour by simply executing a series of preordained, prioritised protocols. More advanced AI might be self-learning – possessing the ability to rewrite its own code in response to its own experiences (“self-learning AI”).

Self-learning AI poses interesting and difficult questions for the law. In particular, who is legally liable when self-learning AI goes wrong? The programmers? The users? Or even the AI itself?

This article introduces the AI liability conundrum and then offers some possible solutions to it.

 

The AI Liability Conundrum

In one sense, the law is about attributing fault. The law attributes fault through legal mechanisms such as causation and foreseeability. In tort, for example, an individual is only liable in negligence if they caused foreseeable harm.[2] With this in mind, can it really be said that a programmer has been negligent, or has caused foreseeable harm, when self-learning AI learns to act in a way it was not intended to? When it becomes effectively autonomous?

Suppose, for example, that AI-controlled traffic lights, programmed to ensure efficient traffic flow, learned that they could manage traffic more efficiently by turning green one second, rather than three seconds, after the pedestrian crossing lights turned red, but that doing so caused more accidents. It is difficult to see how those accidents were caused by the programmers, or how they were foreseeable.
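To see how such behaviour might arise without any programmer ever choosing it, consider the simplified and entirely hypothetical sketch below, written in Python with an invented reward function standing in for real traffic sensors. The programmer supplies only a goal (maximise simulated traffic flow); the one-second delay emerges from trial and error.

```python
import random

# Entirely hypothetical illustration: a minimal "self-learning" controller that
# chooses how long to wait (in seconds) after the pedestrian lights turn red
# before showing cars a green light. No delay is hard-coded by the programmer;
# the controller simply learns whichever delay maximises a traffic-flow reward.

DELAY_OPTIONS = [1, 2, 3]   # candidate delays in seconds
EPSILON = 0.1               # how often the controller experiments with other delays

average_reward = {d: 0.0 for d in DELAY_OPTIONS}
times_tried = {d: 0 for d in DELAY_OPTIONS}


def simulated_traffic_flow(delay: int) -> float:
    """Invented stand-in for sensor feedback: shorter delays move more cars
    per cycle, so they earn a higher reward. Accident risk is not measured."""
    return 100 - 10 * delay + random.gauss(0, 2)


def choose_delay() -> int:
    """Epsilon-greedy choice: usually exploit the best-known delay,
    occasionally explore an alternative."""
    if random.random() < EPSILON:
        return random.choice(DELAY_OPTIONS)
    return max(DELAY_OPTIONS, key=lambda d: average_reward[d])


for cycle in range(10_000):
    delay = choose_delay()
    reward = simulated_traffic_flow(delay)
    times_tried[delay] += 1
    # incremental update of the running average reward for the chosen delay
    average_reward[delay] += (reward - average_reward[delay]) / times_tried[delay]

# The controller settles on the one-second delay, because its reward only
# measures traffic flow and knows nothing about accidents.
print("learned delay:", max(DELAY_OPTIONS, key=lambda d: average_reward[d]))
```

Because the harmful one-second delay is selected by the learning process rather than written by any human, tracing the resulting accidents back to a negligent act of the programmer becomes considerably harder.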

So, what happens when AI goes wrong?

 

Some Solutions

There are a number of possible solutions to the AI liability conundrum.

First, AI programmers could be held strictly liable. This is a simple solution, but it risks suppressing our exploration of the awesome potential of AI by deterring would-be programmers who fear liability for harms they did not intend and could not foresee.[3] Strict liability makes sense in the paradigm case of manufacturer and consumer compensation, but self-learning AI is unique.

Second, no one could be held liable. Instead, parties who suffer harm caused by AI could be compensated from a ‘claims pool’ maintained by the AI industry.[4] AI programmers and manufacturers could be required to pay a levy commensurate with the risk that their AI will cause harm. A similar no-fault scheme via a claims pool, albeit not in an AI context, already exists in New Zealand under the Accident Compensation Act 2001.

Finally, could the AI itself be held liable? As a practical matter, AI will probably never own money or property, so holding AI liable is unlikely to assist in resolving civil cases requiring the payment of compensation. In the criminal context, our justice system currently seems irreconcilable with actually prosecuting AI. More generally, there are complicated philosophical issues associated with imbuing ‘machina sapiens’ with legal personhood and holding ‘them’ liable under the law.[5]

 

Conclusion

The exponential rate at which AI technologies are developing contrasts with the careful and gradual march of the law.[6] Put in perspective, the liability conundrum, while significant, is just one of the many legal issues engendered by AI.


[1] N P Padhy, Artificial Intelligence and Intelligent Systems (Oxford University Press, 2005) 3.
[2] See, eg, Civil Liability Act 2003 (Qld) s 11; Wyong Shire Council v Shirt (1980) 146 CLR 40.
[3] Gary Lea, ‘Who’s to Blame When Artificial Intelligence Systems Go Wrong?’ (2018) The Conversation <https://theconversation.com/whos-to-blame-when-artificial-intelligence-systems-go-wrong-45771>.
[4] Ibid.
[5] Gabriel Hallevy, ‘The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control’ (2016) 4(2) Akron Intellectual Property Journal 175.
[6] Quinn Emanuel Urquhart & Sullivan LLP, above n 7.

Check out last month’s TLF article, ‘Technology with a Legal Personality’ by Josephine Bird.
