Chief Justice WALL-E? Choosing a humane future for AI in the law
by Milan Gandhi, National Director of The Legal Forecast.
AI is the buzzword that keeps on buzzing. What does it actually mean? What is its relevance to the legal world? And how do we ensure we choose a humane future for AI’s role in law and justice?
Beyond the hype
AI is a term commonly used to describe a machine performing tasks traditionally thought to require human intelligence.
An important distinction to appreciate is that between a machine that mimics the fruits of intelligence by performing specified functions (“narrow AI”), and a machine which is approximately your intellectual equal (“general AI”).
Examples of narrow AI are wide-ranging. Existing technologies can discern patterns, identify trends and make predictions on the basis of enormous data sets, automatically learn and improve from experience, recognise human language and speech, and recognise and respond to human emotion.
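To make "learning a pattern from data" concrete, here is a deliberately toy sketch in Python. It is not drawn from any product mentioned in this article: the training sentences, labels and scoring rule are all invented for illustration, and real systems use far richer statistical models. The point is only that the machine infers the pattern (which words go with which category) from examples, rather than being given rules by hand.

```python
from collections import Counter

# Invented training examples: short sentences paired with a category label.
training_data = [
    ("the court dismissed the appeal", "litigation"),
    ("the judge ruled on the appeal", "litigation"),
    ("the parties signed the agreement", "contract"),
    ("the lease agreement was executed", "contract"),
]

# "Learning": count how often each word appears under each label.
word_counts = {}
for sentence, label in training_data:
    counts = word_counts.setdefault(label, Counter())
    counts.update(sentence.split())

def predict(sentence):
    """Pick the label whose training vocabulary best matches the sentence."""
    words = sentence.split()
    scores = {
        label: sum(counts[w] for w in words)
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("the appeal was dismissed"))    # litigation
print(predict("they signed a new agreement")) # contract
```

Feed it more labelled examples and its predictions improve, with no change to the code: that is the "improve from experience" property in miniature.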
Unfortunately, or (according to Elon Musk) very fortunately, we haven’t achieved general AI … just yet.
Today’s legal applications
Law firms are already adopting tools to take over low-level cognitive aspects of the lawyer's role, as well as augment high-level functions. Examples of the former include intelligent search engines like ROSS Intelligence, algorithmic-assisted document review in eDiscovery, and tools which contribute to the due diligence process by identifying anomalous contract clauses. Examples of the latter include systems that analyse huge amounts of data to predict the outcome of pending litigation, or to generate market insights and industry trends.
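The anomalous-clause idea can be sketched in a few lines of Python. This is a hedged illustration of the general principle only, not how Luminance or any other product actually works: the "standard clause" library, the word-overlap measure and the 0.3 threshold are all invented for this example. The intuition is that a clause resembling nothing in the standard library deserves a human's attention.

```python
# Invented library of standard clause wording for illustration.
STANDARD_CLAUSES = [
    "either party may terminate this agreement on thirty days written notice",
    "this agreement is governed by the laws of queensland",
]

def jaccard(a, b):
    """Word-set overlap between two clauses (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def is_anomalous(clause, threshold=0.3):
    """Flag the clause if it resembles no standard clause closely enough."""
    return max(jaccard(clause, s) for s in STANDARD_CLAUSES) < threshold

# A clause matching the library passes; unfamiliar wording gets flagged.
print(is_anomalous("this agreement is governed by the laws of queensland"))        # False
print(is_anomalous("the supplier may vary the price at any time without notice"))  # True
```

Commercial tools replace the crude word-overlap score with trained language models, but the workflow is the same: surface the outliers so the reviewing lawyer spends time where it matters.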
The immediate future
Even if a tool like ROSS Intelligence could recognise and extract legal arguments, “[it] could not itself construct the explanation from first principles.” According to Kevin Ashley, the next breakthrough in AI for the legal world will come with AI-generated “explanations and arguments in law”.
Choosing a humane future for AI’s role in law and justice
The makers and beneficiaries of AI are human, and some of AI’s worst failures may mirror our own. The Guardian reported that “[m]achine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use.” Recently, a trial court’s reliance on a predictive algorithm during criminal sentencing was challenged (albeit unsuccessfully) as a violation of due process rights in the US.
While the marriage of law and new tech is a rightly tantalising prospect, as we rely on machines to solve many of our human problems, we have a duty to ensure that we do not, through tech, create entirely new ones.
Having a basic understanding of how increasingly capable machines might affect the lawyer’s role is practically important for at least three reasons:
- let’s be honest… if you’re a law student, it’s a nifty go-to topic for that clerkship interview (just bite your tongue before saying “the professions are dead! Viva la robot revolución!” as that will not go down so well);
- if you’re a junior lawyer, it’s the kind of knowledge that can help you build rapport with the IT guys over the watercooler… but it’s also the kind of knowledge that you may want to raise with your seniors. After all, the existing services offered by companies like Neota Logic, Luminance, and ROSS Intelligence could save you time, save your firm money and make everyone happier in the process;
- finally, and this is the big-picture item, understanding AI and where it’s headed might assist you to “skill-up” more wisely and avoid obsolescence… students, maybe pick that abstract “Theories of Law” subject over the inevitably automatable “Research Methods” one.
Special thanks to Michael Bidwell and Benjamin Teng of The Legal Forecast for editorial advice and input. The Legal Forecast aims to advance legal practice through innovation. It is a not-for-profit run by early career professionals passionate about disruptive thinking and access to justice.
Endnotes
Ibid 170, 275.
Milan Gandhi, ‘Technology-assisted review 101 – The Rise of machines in eDiscovery’ (2017) 37(1) Proctor 6.
See, for example, Luminance: https://www.luminance.com/.
Jane Wakefield, ‘AI predicts outcome of human rights cases’, BBC (online), 23 October 2016 <http://www.bbc.com/news/technology-37727387>.
Misa Han, ‘“Bloomberg terminal for lawyers”: Startup set to replace mundane legal research’, Financial Review (online), 12 December 2016 <http://www.afr.com/business/legal/bloomberg-terminal-for-lawyers-startup-set-to-replace-mundane-legal-research-20161204-gt3yrw>.
Ibid 18.
Ibid 31.
Hannah Devlin, ‘AI programs exhibit racial and gender biases, research reveals’, The Guardian (online), 14 April 2017 <https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals>.
State v Loomis, 881 N.W.2d 749 (Wis. 2016).