Female AI Assistants Are Programmed To Reinforce Harmful Gender Biases

Apple’s Siri, Amazon’s Alexa, Google’s Google Assistant and Microsoft’s Cortana all have one thing in common – their default voice is female, white-sounding and subservient. There is a growing conversation around how female AI assistants reinforce harmful gender biases. In 2019, the UN published a study titled ‘I’d Blush if I Could’ that highlighted how female AI assistants are programmed not to stand up for themselves when they receive sexist and abusive comments. The same year, Caroline Criado Perez published her groundbreaking book Invisible Women: Exposing Data Bias in a World Designed for Men. In the book, Perez set out to expose the gender data gap: the gap in the data that underpins the world’s technology and its algorithms, data that fails to account for 50% of the earth’s population.

Perez explains that the data we use to create everyday technology contains silences – the silence of women, with even larger gaps for women of colour, disabled women and working-class women. The data gap can explain everything from why iPhones are too big to fit comfortably in a woman’s hand to why there are never enough women’s toilet cubicles.

Gendered AI assistants prove that little progress has been made in closing the gender data gap: some of our most cutting-edge technology is still learning from data that is far from impartial. AI assistants learn from algorithms trained on large sets of voice recordings, called corpora. Because these corpora are made up of mostly male voice recordings, the AIs “have been trained on data sets that are riddled with data gaps” – which is ironic, considering that the majority of assistants are female. A 2016 study of the datasets used by Google called the data “blatantly sexist”.

Biases even exist in how the technology responds to female users. Because of the male-heavy corpora, the AI was 70% more likely to understand a command from male speech – despite the fact that women are more likely than men to own an AI assistant. What’s more, according to the UN report, women make up just 12% of AI researchers. We must ask: if women aren’t in the room where the design decisions are made, how can we expect the products to understand the needs of female users? And how can we expect female AI assistants not to reinforce stereotypes, when the data they are programmed with is riddled with misogyny?


In Siri’s early days, she could help you find prostitutes and Viagra, but not an abortion clinic. She also claimed to be able to help during a crisis, but if you told her you’d been raped, Siri would respond, “I don’t know what you mean by ‘I was raped’.” Thankfully, Siri now responds to that statement with a sexual abuse hotline, but perhaps if more female developers had been involved in her creation, Siri would have provided that help from the start.

When you call Siri a slut, she responds, “I’d blush if I could.” The abuse that AI assistants receive could be passed off as playful horsing around, but in reality it is a reflection of the sexism that real women experience every day. When AIs flirt back with borderline catcalls, they perpetuate the male gaze and continue a long tradition of male entitlement to women’s bodies. Subservient female AI assistants not only allow gender stereotypes to persist, but also teach a tolerance of gendered verbal abuse and violence. Safiya Umoja Noble, a sociology professor at the University of Southern California, described gendered AI as “a powerful socialisation tool that teaches about the role of women, girls and people who are gendered female to respond on demand”.


Perez believes that male-oriented design “is at the root of perpetual, systemic discrimination against women”. As AI technology advances, we risk blurring the lines between machine and human voices, and when male-on-female abuse is tolerated by the technology we use in our homes, sexist remarks and abuse remain commonplace. At a time when domestic abuse is on the rise due to the coronavirus pandemic, it is more important than ever that we work to combat gender-based violence. When 96% of murders worldwide are committed by men, should we be allowing female AI assistants to respond to threats with passive and obedient comments?

As artificial intelligence becomes more prominent in our everyday lives, it is vital that we tackle the biases embedded within the data that programs it. There is a growing movement calling for AI assistants to be less subservient in the face of violent and misogynistic comments. The Hey Update My Voice campaign in Brazil is calling on developers to update the assistants’ responses, whilst another group of developers, linguists and technicians is working on a genderless AI voice called Q.


Written by India Lawrence

Follow India on Twitter and Instagram

