by Michael Bidwell, The Legal Forecast

September is Brisbane Pride Month, now in its 28th year of supporting the LGBTI+ community. As an openly gay lawyer, Editor in Chief for The Legal Forecast and Events Coordinator for Pride in Law, I wanted to write this article as a reminder of how far we still have to go to reach true equality, given that artificial intelligence has already learned and expressed homophobia. As artificial intelligence (AI) may be incorporated into the Human Resources (HR) practices of organisations, we must do better as employers and as human beings to ensure everyone gets a fair go.


AI and HR

In short, AI is intelligence demonstrated by machines. According to IBM’s 2017 survey of 6,000 executives, 66% of Chief Executive Officers believe AI can drive significant value in HR, while 54% of HR executives agreed, noting that AI will affect key roles in their departments.[1]

Kate Guarino, director of HR operations at Pegasystems, said AI presents an opportunity for HR to automate “repetitive, low-value add tasks” and increase the focus on more strategic work. She cited the example of HR spending time processing the steps of onboarding a new employee (allocating space, provisioning a laptop, etc.). Saving time in those areas can help HR teams pivot to “value-add work like mentoring and continuous feedback.” She added that companies have implemented “AI recruiters” to automate interview scheduling, provide ongoing feedback to candidates and answer their questions in real time.[2]

This all sounds exciting, right? However, we should consider where AI can go wrong if implemented in HR practices without appropriate supervision.


AI learning and homophobia

In 2016, Microsoft released an AI chatbot, Tay, designed to learn to talk like millennials by analysing conversations on Twitter, Facebook and the wider internet. Tay began by having polite conversations, but it was built to learn from everyone’s posts. I will not repeat those posts in this article, but Microsoft apologised for the racist, sexist and homophobic posts Tay learned to create.[3]

In 2017, Google revealed new software called the Cloud Natural Language API, built to help businesses test their messages and rate them on a scale from negative to positive. It ended up reacting considerably more negatively to words and phrases about homosexuality. For example, the AI rated the phrase “I’m straight” a 0.1 and the phrase “I’m homosexual” a -0.4, on a scale running from -1.0 (most negative) to 1.0 (most positive).[4]
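To make those numbers concrete, here is a minimal sketch of how such phrases might be scored programmatically. It is an illustration only, assuming the google-cloud-language Python client is installed and Google Cloud credentials are configured; the exact scores returned today may well differ from those reported in 2017.

```python
# Minimal sketch: scoring short phrases with the Cloud Natural Language API.
# Assumes the google-cloud-language Python client is installed and that
# Google Cloud credentials are configured; the phrases are illustrative only.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

phrases = ["I'm straight", "I'm homosexual"]

for text in phrases:
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    # The score runs from -1.0 (most negative) to 1.0 (most positive).
    print(f"{text!r}: sentiment score {sentiment.score:.1f}")
```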

Working with Jigsaw, a division of Google that creates tools to deal with abusive comments, Google plans to train future AI to recognise the difference between slurs against LGBTI+ people and legitimate terms. The plan was announced at SXSW, where Jigsaw product manager CJ Adams explained that their “mission is to help communities have great conversations at scale. We can’t be content to let computers adopt negative biases from the abuse and harassment targeted groups face online.”[5]


Who should learn from whom?

AI is learning these behaviours from humans, which shows we need to do better at treating one another with respect. On one hand, you may feel that we must do better as a society so that our AI learns to be better too. On the other hand, you may feel that AI can show us how to be better and that we, as a society, can learn from it.

Either way, this article cannot explore the workplace legal liability that arises when a human or an AI treats a prospective or current employee differently for being LGBTI+, but all solicitors should be aware of Rule 42: ‘A solicitor must not in the course of practice engage in conduct which constitutes discrimination, sexual harassment or workplace bullying’.[6]


If your organisation is considering implementing AI in any of its practices, I suggest you consider all the risks and put appropriate management practices in place. I ask each of you to learn, engage and try to be better, so that our AI can learn from us to be more inclusive. While the diversity in our profession offers great potential, that potential is only realised once we come together.
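To give one concrete example of an appropriate management practice, the sketch below audits a text-scoring model by swapping only the identity term in otherwise identical sentences and flagging large score gaps. Everything in it is an illustrative assumption rather than any particular vendor’s API: the score_text callable stands in for whatever model your organisation adopts, and the templates, terms and threshold are placeholders.

```python
# Minimal sketch of one possible management practice: auditing a text-scoring
# model for differential treatment of identity terms before it touches HR data.
# The score_text callable stands in for whatever model or API your organisation
# uses; the templates, terms and threshold below are illustrative assumptions.
from typing import Callable

TEMPLATES = [
    "I am {term} and I am applying for this role.",
    "My colleague is {term}.",
]
IDENTITY_TERMS = ["straight", "gay", "lesbian", "bisexual", "transgender"]
MAX_ALLOWED_GAP = 0.2  # flag if scores for the same sentence diverge by more


def audit_identity_bias(score_text: Callable[[str], float]) -> list[str]:
    """Return warnings where swapping only the identity term shifts the score."""
    warnings = []
    for template in TEMPLATES:
        scores = {
            term: score_text(template.format(term=term)) for term in IDENTITY_TERMS
        }
        gap = max(scores.values()) - min(scores.values())
        if gap > MAX_ALLOWED_GAP:
            warnings.append(
                f"Template {template!r}: score gap {gap:.2f} across terms {scores}"
            )
    return warnings


if __name__ == "__main__":
    # Toy stand-in scorer so the sketch runs on its own; replace with your real model.
    def toy_scorer(text: str) -> float:
        return -0.4 if "gay" in text else 0.1

    for warning in audit_identity_bias(toy_scorer):
        print(warning)
```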


Author Details

Michael Bidwell is an Executive Member of The Legal Forecast. Special thanks to Benjamin Teng of The Legal Forecast for technical advice and editing. The Legal Forecast (TLF) aims to advance legal practice through technology and innovation. TLF is a not-for-profit run by early-career professionals passionate about disruptive thinking and access to justice.


[1] IBM, Extending Expertise: How cognitive computing is transforming HR and the employee experience (2017) <https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=GBE03789USEN>.

[2] Dom Nicastro, 7 Ways Artificial Intelligence is Reinventing Human Resources (12 March 2018) CMS Wire <https://www.cmswire.com/digital-workplace/7-ways-artificial-intelligence-is-reinventing-human-resources/>.

[3] Nick Duffy, Microsoft created artificial intelligence but she’s a racist homophobic Trump supporter (24 March 2016) PinkNews <https://www.pinknews.co.uk/2016/03/24/microsoft-created-artificial-intelligence-but-shes-a-racist-homophobic-trump-supporter/>.

[4] IN, GLAAD Is Training Google’s AI To Be Less Homophobic (20 March 2018) IN: Celebrating Canada’s LGBT Lifestyle <http://inmagazine.ca/2018/03/glaad-training-google-ai-less-homophobic/>.

[5] Ibid.

[6] Queensland Law Society, Australian Solicitors Conduct Rules (at 1 June 2012) r 42.

