Chatbots Magazine

Chatbots, AI, NLP, Facebook Messenger, Slack, Telegram, and more.


We Need to Talk About Biased AI Algorithms


Many people are immediately fascinated when an artificial intelligence (AI) application gives them the answers they need in seconds. Maybe you can relate. Unfortunately, most of us don’t stop to think about how the program’s algorithms were trained or whether the training data contained biased information.

I’m concerned about the negative impacts biased algorithms could have on society and even more worried by the fact that companies don’t seem to want to make their technology more balanced.

AI Learns When Humans Input Data

Researchers at MIT set out to prove a point by training a “psychopathic” AI named Norman. They trained Norman on content from a Reddit community devoted to gruesome deaths and then gave it a Rorschach test. Where a standard AI trained on conventional data described neutral scenes in the inkblots, Norman described gory ones.

So, that experiment raises the question: Are we trusting AI algorithms too much by assuming the people who train them use well-rounded content? I believe many people are. They’re so impressed by the technology that they don’t think about the techniques and training used to make it work.
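The Norman experiment comes down to a simple mechanism: a model can only reflect the data it was fed. Here is a minimal, hypothetical sketch of that idea — a toy “model” that just counts word frequencies in its training text and then describes an ambiguous prompt using whatever it saw most often. The corpora, function names, and candidate words are all invented for illustration; this is not how Norman was actually built.

```python
from collections import Counter

def train(corpus):
    """Count word frequencies in the training text (a stand-in for model training)."""
    words = " ".join(corpus).lower().split()
    return Counter(words)

def describe(model, candidates):
    """Pick the candidate description the model saw most often during training."""
    return max(candidates, key=lambda word: model[word])

# Two skewed training sets (hypothetical data):
violent_corpus = ["man shot dead", "gruesome death scene", "man shot in accident"]
neutral_corpus = ["birds in a tree", "flowers in a vase", "birds over water"]

# The same ambiguous "inkblot" gets opposite descriptions depending on training data.
candidates = ["shot", "birds"]
print(describe(train(violent_corpus), candidates))  # prints "shot"
print(describe(train(neutral_corpus), candidates))  # prints "birds"
```

Nothing about the algorithm changed between the two runs — only the data did, which is exactly the point the MIT researchers were making.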

Some Technology Ignores Certain Dialects or Groups

People who use voice-activated assistants often find that the tech can’t understand them if they have strong accents or use slang. Research indicates that poor and minority groups are especially likely to have trouble and that some widely used tools regularly misunderstand African American users.

However, companies don’t seem compelled to fix the issues, even after analysts shine a light on them. Often, the organizations assert that the content in their algorithms is proprietary information, restricting independent researchers from using their talents to make improvements.

Sometimes, the developers themselves say the algorithms are so advanced that they don’t understand how they work, so coming up with fixes for biases would be like searching for a needle in a haystack.

But the challenges involved in making a valuable algorithm extend beyond the technical. Humans are complex, emotional beings, and a conversation with a chatbot, for example, could go in countless directions.

Sometimes, human behaviors make chatbots biased, as well. That’s exactly what happened with Microsoft’s Tay chatbot. It got taken offline after only a day because people interacted with it in ways that taught it racism.

Also, no regulations exist about the process of training an algorithm or the information companies use while doing so. That means businesses don’t worry about getting fined for releasing biased AI algorithms.

Going back to the issue of certain groups being ignored by some AI algorithms, it’s easy to see why some companies — particularly those fixated on the bottom line — don’t feel it’s financially worthwhile to address the problem. They believe most of the people purchasing their products don’t come from minority or disadvantaged groups, and therefore don’t see sufficient value in making positive changes.

Some AI researchers suggest that the way forward in making algorithms more equalized is to be aware of humans’ cognitive biases and how they could affect the algorithms. However, such a process is neither straightforward nor quick.

Algorithms Could Put People At Risk

Two researchers from Stanford University trained an AI algorithm to guess people’s sexual orientations based on photographs. They found that it was more accurate than humans who tried the task. The scientists say their goal was to prove that it was possible to create such an algorithm.

If consumers ever accessed it, some might target people in their communities and torment them with homophobic slurs, never considering the validity of the data used to create the technology.

Or, if law enforcement officers used something similar to look for criminals, biased algorithms could make people of certain ethnic groups increasingly likely to be targeted for crimes they didn’t commit.

Biased AI Could Cause Job Candidates to Miss Out on Opportunities

Hiring managers regularly use AI algorithms to screen candidates and ensure they don’t overlook the people who are most qualified for open positions. Such technology is the way of the future, but it’s not free from bias.

For example, a poorly written algorithm could ignore people who don’t meet some requirements but are otherwise well equipped to excel in a position. There are also algorithms that evaluate candidates on factors such as gestures and vocal inflection. What if such an algorithm takes the compiled data and makes an incorrect assumption based on what it interprets?

If that happens, people who are thoroughly qualified and ready to work might get passed over because they fiddled with their hands for the first minute of an interview or didn’t speak loudly enough, making the algorithm conclude they were nervous and lacked confidence.
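To make the failure mode concrete, here is a hypothetical sketch of the kind of screening rule described above. Every threshold, field name, and rejection label is invented for illustration — no real hiring product is being quoted — but it shows how a hard requirement filter plus crude behavioral proxies can reject a thoroughly qualified candidate.

```python
def screen_candidate(profile):
    """Toy screening rule (hypothetical thresholds, not any real product's logic)."""
    # Hard requirement filter: can reject people who are otherwise well equipped.
    if profile["years_experience"] < 5:
        return "rejected: requirements"
    # Behavioral proxies: fidgeting or quiet speech get read as 'low confidence',
    # even though they may only reflect interview nerves.
    if profile["fidget_seconds"] > 30 or profile["avg_volume_db"] < 55:
        return "rejected: low confidence"
    return "advance"

# A strong candidate who fidgeted early on and spoke softly:
qualified_but_nervous = {
    "years_experience": 8,
    "fidget_seconds": 45,
    "avg_volume_db": 52,
}
print(screen_candidate(qualified_but_nervous))  # prints "rejected: low confidence"
```

The rule never sees qualifications beyond the proxy signals it was given, so the mistaken inference about confidence is baked in by design.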

AI Bias Must Be Reduced

Racism and other kinds of prejudice are rampant throughout our society, and biased AI is only making it worse. People in the tech sector must stand together and insist that AI developers and the companies they work for take action to make AI bias less problematic.

Fixing the issue will require both taking precautions when developing new algorithms and promptly addressing problems in existing ones.



Written by KaylaMatthews

tech and productivity writer. bylines: @venturebeat, @makeuseof, @motherboard, @theweek, @technobuffalo, @inc and others.
