by Milan Gandhi, National Director of The Legal Forecast.

Chief Justice WALL-E? Choosing a humane future for AI in the law

AI is the buzzword that keeps on buzzing. What does it actually mean? What is its relevance to the legal world? And how do we ensure we choose a humane future for AI’s role in law and justice?


Beyond the hype

AI is a term commonly used to describe a machine performing tasks that have traditionally been thought to require human intelligence.

An important distinction is the one between a machine that mimics the fruits of intelligence by performing specified functions (“narrow AI”) and a machine that is approximately your intellectual equal (“general AI”).

Examples of narrow AI are wide-ranging. Existing technologies can discern patterns, identify trends and make predictions from enormous volumes of data, automatically learn and improve from experience, recognise human language and speech, and recognise and respond to human emotion.[1]
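
To make “identify trends and make predictions” a little more concrete, here is a minimal sketch in Python. It shows the simplest possible flavour of learning a pattern from past data and extrapolating it; the task, the figures and the output are invented for illustration, and real systems learn far richer patterns from far larger datasets:

```python
# A toy illustration of "learning" a pattern from past data and
# extrapolating it. All figures are invented for illustration only.
import numpy as np

# Hypothetical data: contracts reviewed per month over a year.
months = np.arange(1, 13)
contracts_reviewed = np.array([40, 44, 47, 52, 55, 61, 64, 70, 73, 79, 83, 88])

# "Learn" the trend by fitting a straight line to the observations.
slope, intercept = np.polyfit(months, contracts_reviewed, deg=1)

# "Predict" by extrapolating the learned trend to month 13.
forecast = slope * 13 + intercept
print(f"Trend: roughly {slope:.1f} extra contracts per month; "
      f"month-13 forecast: about {forecast:.0f}")
```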

Unfortunately, or (according to Elon Musk) very fortunately, we haven’t achieved general AI … just yet.


Today’s legal applications

Law firms are already adopting tools that take over low-level cognitive aspects of the lawyer’s role, as well as tools that augment high-level functions. Examples of the former include intelligent search engines like ROSS Intelligence, algorithmic-assisted document review in eDiscovery,[2] and tools which contribute to the due diligence process by identifying anomalous contract clauses.[3] Examples of the latter include systems that analyse huge amounts of data to predict the outcome of pending litigation,[4] or to generate market insights and industry trends.[5]
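
As a rough illustration of the former category, the sketch below shows the core idea behind algorithmic-assisted document review: a model is trained on a small, lawyer-labelled “seed set” and then ranks the unreviewed documents by predicted relevance, so the likeliest candidates are read first. It is a toy built with scikit-learn and invented documents, not a description of how ROSS Intelligence, Luminance or any other vendor’s product actually works:

```python
# Toy sketch of technology-assisted review (predictive coding):
# learn from a lawyer-labelled seed set, then rank unreviewed
# documents by predicted relevance. All documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

seed_docs = [
    "Email discussing the disputed supply agreement and late delivery",
    "Invoice for catering at the staff Christmas party",
    "Letter alleging breach of the supply agreement's delivery terms",
    "Newsletter about the firm's new office plants",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the dispute, 0 = not relevant

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, seed_labels)

unreviewed = [
    "Memo on delivery schedules under the supply agreement",
    "Reminder to renew staff parking permits",
]
relevance = model.predict_proba(unreviewed)[:, 1]  # probability of relevance

# Review the documents most likely to be relevant first.
for doc, score in sorted(zip(unreviewed, relevance), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```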


The immediate future

Even if a tool like ROSS Intelligence could recognise and extract legal arguments, “[it] could not itself construct the explanation from first principles.”[6] According to Kevin Ashley, the next breakthrough in AI for the legal world will come with AI-generated “explanations and arguments in law”.[7]


Choosing a humane future for AI’s role in law and justice

The makers and beneficiaries of AI are human, and some of AI’s worst failures may mirror our own. The Guardian reported that “[m]achine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use.”[8] Recently, in the US, a trial court’s reliance on a predictive algorithm during criminal sentencing was challenged (albeit unsuccessfully) as a violation of due process rights.[9]
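
The research behind that report examined word embeddings: models that represent each word as a vector learned from huge volumes of text, so that words used in similar contexts end up geometrically close together. The sketch below uses tiny invented vectors purely to show how such an association gap can be measured; real embeddings have hundreds of dimensions and are learned from billions of words of human writing, which is exactly how ingrained prejudices creep in:

```python
# Toy illustration of measuring association bias in word embeddings.
# The vectors are invented for illustration, not learned from any corpus.
import numpy as np

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embedding = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.2, 0.9, 0.1]),
    "he":       np.array([0.8, 0.2, 0.3]),
    "she":      np.array([0.3, 0.8, 0.2]),
}

for occupation in ("engineer", "nurse"):
    gap = (cosine(embedding[occupation], embedding["he"])
           - cosine(embedding[occupation], embedding["she"]))
    print(f"{occupation}: association with 'he' minus 'she' = {gap:+.2f}")
```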

The marriage of law and new tech is a rightly tantalising prospect. But as we rely on machines to solve many of our human problems, we have a duty to ensure that we do not, through tech, create entirely new ones.


TLF Tip

Having a basic understanding of how increasingly capable machines might affect the lawyer’s role is practically important for at least three reasons:

  • let’s be honest… if you’re a law student, it’s a nifty go-to topic for that clerkship interview (just bite your tongue before saying “the professions are dead! Viva la robot revolución!”, as that will not go down so well);
  • if you’re a junior lawyer, it’s the kind of knowledge that can help you build rapport with the IT team over the watercooler, and that you may also want to raise with your seniors. After all, the existing services offered by companies like Neota Logic, Luminance and ROSS Intelligence could save you time, save your firm money and make everyone happier in the process;
  • finally, and this is the big-picture item, understanding AI and where it’s headed might help you “skill up” more wisely and avoid obsolescence… students, maybe pick that abstract “Theories of Law” subject over the inevitably automatable “Research Methods” one.


Special thanks to Michael Bidwell and Benjamin Teng of The Legal Forecast for editorial advice and input. The Legal Forecast aims to advance legal practice through innovation. It is a not-for-profit run by early career professionals passionate about disruptive thinking and access to justice.



[1] Ibid 170, 275.

[2] Milan Gandhi, ‘Technology-assisted review 101 – The Rise of machines in eDiscovery’ (2017) 37(1) Proctor 6.

[3] See, for example, Luminance: https://www.luminance.com/.

[4] Jane Wakefield, ‘AI predicts outcome of human rights cases’, BBC (online), 23 October 2016 <http://www.bbc.com/news/technology-37727387>.

[5] Misa Han, ‘“Bloomberg terminal for lawyers”: Startup set to replace mundane legal research’, Financial Review (online), 12 December 2016 <http://www.afr.com/business/legal/bloomberg-terminal-for-lawyers-startup-set-to-replace-mundane-legal-research-20161204-gt3yrw>.

[6] Ibid 18.

[7] Ibid 31.

[8] Hannah Devlin, ‘AI programs exhibit racial and gender biases, research reveals’, The Guardian (online), 14 April 2017 <https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals>.

[9] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
