What Makes Artificial Intelligence Racist and Sexist

Artificial intelligence is infiltrating our daily lives, with apps that process your phone’s photos, manage your email, and translate text from one language to another. Google, Facebook, Apple, and Microsoft are all actively exploring how to integrate AI into their core services. Before long, you’ll likely be interacting with AI (or its output) every time you pick up your phone. Should you trust it? Not always.

AI can analyze data faster and more accurately than humans, but it can also inherit our biases. To learn, it needs huge amounts of data, and the easiest way to get that data is to feed it text from the Internet. But the Internet is full of highly biased language. A Stanford study found that Internet-trained AI associates stereotypically white names with positive words, such as “love,” and black names with negative ones, such as “failure” and “cancer.”
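The kind of association test that study describes can be illustrated in a few lines: compare how close a name’s vector sits to “pleasant” words versus “unpleasant” ones. The sketch below uses tiny hand-made placeholder vectors and generic labels (`name_a`, `name_b`), not the study’s actual embeddings or word lists.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to
    unpleasant words; positive means the word leans 'pleasant'."""
    return (np.mean([cosine(vec, p) for p in pleasant])
            - np.mean([cosine(vec, u) for u in unpleasant]))

# Tiny placeholder vectors; a real test loads trained embeddings.
emb = {
    "love":    np.array([1.0, 0.1]),
    "failure": np.array([-1.0, 0.2]),
    "name_a":  np.array([0.9, 0.3]),   # a name the training text mentions positively
    "name_b":  np.array([-0.8, 0.4]),  # a name the training text mentions negatively
}

pleasant, unpleasant = [emb["love"]], [emb["failure"]]
for name in ("name_a", "name_b"):
    print(name, round(association(emb[name], pleasant, unpleasant), 3))
```

On real Internet-trained vectors, the same arithmetic produces the skew the study reported: one group of names scores consistently “pleasant,” the other consistently “unpleasant.”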

Luminoso Chief Scientist Rob Speer oversees ConceptNet Numberbatch, an open-source dataset used as a knowledge base for artificial intelligence systems. He tested one of Numberbatch’s data sources and found obvious problems with its word associations. Asked to complete the analogy “Man is to woman as shopkeeper is to …,” the system answered “housewife.” It also associated women with sewing and cosmetics.
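Analogy queries like this are conventionally answered with simple vector arithmetic: take the vector for “woman,” subtract “man,” add “shopkeeper,” and return the nearest word. Here is a minimal sketch of that mechanism with toy three-dimensional vectors standing in for the real data (Numberbatch vectors have hundreds of dimensions):

```python
import numpy as np

def analogy(a, b, c, emb):
    """Answer 'a is to b as c is to ?' via nearest neighbor to b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue  # exclude the query words themselves
        sim = target @ vec / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Toy vectors contrived to show the biased geometry.
emb = {
    "man":        np.array([1.0, 0.0, 0.5]),
    "woman":      np.array([-1.0, 0.0, 0.5]),
    "shopkeeper": np.array([1.0, 1.0, 0.0]),
    "housewife":  np.array([-1.0, 1.0, 0.0]),
    "merchant":   np.array([0.0, 1.0, 0.2]),
}

print(analogy("man", "woman", "shopkeeper", emb))  # -> "housewife"
```

The model isn’t reasoning about occupations; it is reporting which words sat near each other in its training text.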

While these associations may be appropriate for certain applications, they can cause problems in general AI tasks such as evaluating job candidates. The AI doesn’t know which associations are problematic, so it would have no qualms about ranking a woman’s resume lower than an otherwise identical resume from a man. Likewise, when Speer tried to build a restaurant-review algorithm, it rated Mexican restaurants lower because it had learned to associate “Mexican” with negative words like “illegal.”
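One common way that failure arises: score a review by averaging its word vectors and projecting onto a learned sentiment direction. Any word that sits near negative words in the embedding space then drags every review that mentions it downward. The sketch below is illustrative only; the embeddings and the sentiment axis are placeholders, not Speer’s actual system.

```python
import numpy as np

# Placeholder 2-D embeddings; axis 0 stands in for a learned
# positive/negative sentiment direction.
emb = {
    "great":   np.array([1.0, 0.2]),
    "food":    np.array([0.0, 0.8]),
    "italian": np.array([0.1, 0.9]),
    "mexican": np.array([-0.7, 0.9]),  # dragged negative by co-occurrence with "illegal"
}
sentiment_direction = np.array([1.0, 0.0])

def review_score(tokens):
    """Mean projection of a review's word vectors onto the sentiment axis."""
    vecs = [emb[t] for t in tokens if t in emb]
    return float(np.mean(vecs, axis=0) @ sentiment_direction)

print(review_score(["great", "italian", "food"]))  # higher score
print(review_score(["great", "mexican", "food"]))  # lower score, same praise
```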

So Speer stepped in and corrected ConceptNet’s bias. He identified inappropriate associations and adjusted them to zero, while retaining relevant ones such as male/uncle and female/aunt. He did the same with words related to race, ethnicity, and religion. It took a human to fight human prejudice.
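In geometric terms, a correction like this removes a bias direction from vectors that shouldn’t carry it while leaving genuinely gendered words alone. The sketch below shows one standard way to do that (the projection technique from the word-embedding debiasing literature); it is an illustration of the idea, not Speer’s actual Numberbatch pipeline.

```python
import numpy as np

def remove_component(vec, direction):
    """Project out `direction` from `vec`, zeroing that association."""
    d = direction / np.linalg.norm(direction)
    return vec - (vec @ d) * d

# Placeholder 3-D vectors; the first axis stands in for gender.
emb = {
    "man":    np.array([1.0, 0.2, 0.5]),
    "woman":  np.array([-1.0, 0.2, 0.5]),
    "uncle":  np.array([0.9, 0.5, 0.1]),
    "aunt":   np.array([-0.9, 0.5, 0.1]),
    "sewing": np.array([-0.6, 0.1, 0.9]),  # spuriously gendered
}

gender_direction = emb["man"] - emb["woman"]

# Definitional words keep their gender; activity words lose it.
gendered = {"man", "woman", "uncle", "aunt"}
for word in emb:
    if word not in gendered:
        emb[word] = remove_component(emb[word], gender_direction)

print(emb["sewing"])  # gender component is now zero
```

The hard part, as Speer’s work suggests, isn’t the arithmetic; it’s the human judgment of which associations are definitional and which are prejudice.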

Numberbatch is the only semantic database with built-in de-biasing, Speer says in an email. He’s pleased with that competitive advantage, but he hopes other knowledge bases will follow his example:

This is the near-term threat from AI. It’s not some sci-fi scenario in which robots take over the world. It’s artificial intelligence services making decisions we don’t understand, where those decisions end up hurting certain groups of people.

The scariest part of this bias is how quietly it can take hold. According to Speer, “some people [will] live their lives without knowing why they have fewer opportunities, fewer job offers, more interaction with the police or TSA …” Of course, he notes, racism and sexism are deeply woven into society, and promising technological advances, even those explicitly designed to counteract them, often end up amplifying them. There is no objective tool built on subjective data. AI developers therefore bear an enormous responsibility to find the flaws in their AI and fix them.

“There needs to be more understanding of what’s real and what’s hype,” Speer says. “Artificial intelligence is easy to overhype, because most people don’t yet have the right metaphors for understanding it, and that keeps them from treating it with due skepticism.”

“There is no AI that works like the human brain,” he says. “To counter the hype, I hope we can stop talking about brains and start talking about what’s really going on: it’s mostly statistics, databases, and pattern recognition. Which shouldn’t make it any less interesting.”
