by Richard Gifford, The Legal Forecast

When one thinks of frontier areas of law, criminal law is hardly the first that springs to mind. In recent times, however, criminal justice has seen artificial intelligence enter the courtroom, to great controversy. At least ten US states are using algorithms to estimate the likelihood of a defendant skipping bail or reoffending.[1] Advocates say this technology can ease court congestion while improving fairness and safety;[2] opponents raise significant concerns about transparency, oversight and agency.


COMPAS and the case of Wisconsin v Loomis

Wisconsin v Loomis[3] involved a defendant found guilty of charges arising from his role in a drive-by shooting. At sentencing, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software gave him a “high risk” score, meaning he was deemed highly likely to reoffend, and the judge took this into account. The software’s creator, Northpointe Inc, has not disclosed how its assessments are made, causing an uproar within the legal profession. Loomis’ counsel appealed, unsuccessfully, on the ground that their client should have been allowed to examine the algorithm.[4]


Allegations of racial bias

Criticisms of criminal sentencing algorithms range from bias against ethnic groups to the simple fact that they do not perform as claimed. A ProPublica investigation found that the software “was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants”.[5] The scores were also found to be inaccurate.[6] This seems extraordinary given that then Attorney General Eric Holder had cautioned against rolling out such technologies before thorough testing was undertaken – yet they were rolled out anyway.
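To make the ProPublica finding concrete: the claim concerns false positive rates by group, i.e. the share of defendants who did *not* reoffend but were nonetheless flagged high risk. A minimal sketch of that calculation, using entirely hypothetical data (this is not ProPublica’s analysis code, and the records below are invented for illustration):

```python
# Each record is (group, flagged_high_risk, reoffended) -- hypothetical data,
# constructed so the disparity resembles the one ProPublica reported.
records = [
    ("black", True,  False), ("black", True,  False), ("black", True,  True),
    ("black", False, False), ("black", False, True),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", False, True),  ("white", True,  True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    # In this toy data the black FPR is exactly twice the white FPR.
    print(g, round(false_positive_rate(records, g), 2))
```

The point of the metric is that a tool can look well calibrated overall while its errors fall disproportionately on one group, which is precisely the pattern ProPublica described.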


The ethical arguments for algorithms in sentencing

Interestingly, those tasked with designing criminal sentencing algorithms are motivated by the thought that they can remedy the perceived faults of the current system. A 2011 study found that a person’s chance of being granted parole could depend on whether the judicial officer had eaten lunch, or even on how well their local college football team was doing.[7] Software like COMPAS is thus seen as a corrective to the poor decisions human judges make unwittingly. One study of such algorithms estimated that they could cut crime by up to 24.8% with no change in jailing rates, or reduce jail populations by 42% with no increase in crime rates.[8]


The case for Australia and New Zealand?

So, should this technology be implemented in Australia and New Zealand? Its arrival seems inevitable, so in the interests of transparency and due process, the software must be compatible with the rule of law. First, its assessments must be open to appeal, and legislation should guarantee this from the outset. Second, the thorough studies into the software’s utility, accuracy and bias that were not conducted in the US must be completed prior to implementation here. These are only a couple of suggestions, and much more thought will need to go into how to employ AI in the courtroom successfully. When the day comes for our jurisdictions, we must be prepared to ensure that the rule of law and due process are protected.




[1] Watney, C. (2017). It’s time for our justice system to embrace artificial intelligence. [online] Brookings. Available at: [Accessed 9 Nov. 2017]. Hamilton, M. (2017). We use big data to sentence criminals. But can the algorithms really tell us what we need to know?. [online] The Conversation. Available at: [Accessed 10 Nov. 2017].

[2] Ibid.

[3] 881 N.W.2d 749 (Wis. 2016).

[4] Tashea, J. (2017). Courts Are Using AI to Sentence Criminals. That Must Stop Now. [online] WIRED. Available at: [Accessed 8 Nov. 2017].

[5] Angwin, J., Mattu, S., Larson, J. and Kirchner, L. (2017). Machine Bias. [online] ProPublica. Available at: [Accessed 8 Nov. 2017].

[6] Ibid.

[7] Danziger, S., Levav, J. and Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), pp.6889-6892.

[8] Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J. and Mullainathan, S. (2017). Human Decisions and Machine Predictions. The Quarterly Journal of Economics, 27.
