Google Feeds Prejudice and Bias


When you have a question on your mind, what do you normally do?

You head over to Google. You do this because it’s the easiest, fastest, and cheapest way to get your questions answered. Plus, it’s trustworthy and accurate. But is it?

Unfortunately, there are grounds to assume Google’s search is not completely impartial.

In actuality, bias and prejudice are coded into Google’s search algorithms. Information whose accuracy we rarely question is presented in ways that portray particular social groups, events, or individuals negatively. Could Google’s autocomplete feature be just as dangerous as it is time-saving?

In fact, there are multiple cases showing just this.

Google Is Not What It Seems  

Since the beginning, we’ve been thinking of Google as a transparent, democratic disruptor meant to make information easily accessible to all.

And that was true for a short period of time, when Google was just a public information resource. It didn’t last long, though. It took the company less than two years to turn its search engine into a global advertising platform.

It is crucial to understand what Google really is. When using Google, we should never forget that it is a multinational advertising company, not just an open information resource.

If we start seeing Google for what it is, it becomes a lot easier to stay critical and reasonably alert. Google is a giant in advertising, and it stores so much data on all of us that it would be naive to believe that governments or corporations would never try to leverage it.

Google Is Not Impartial, It Has Views and Opinions of Its Own

If you wonder what Google “thinks” about something, simply start typing your question. It takes less than a second for autocomplete suggestions to appear. Before we even finish typing, the algorithm already has its predictions about what we might be looking for.

It might seem helpful in terms of user experience: it saves us time and energy by freeing us from having to type the entire question or sentence. If we dig deeper, though, we’ll realize the feature is anything but innocent. By finishing our phrases in its own, algorithm-driven way, Google plays a big role in shaping our opinions on a topic.

Before we discuss exactly how Google’s algorithms work and why this is the case, let’s look at some examples of the consequences.

Minorities Are Affected the Most

In 2016, Google had to deal with a search algorithm problem involving Jewish people that made headlines. When people typed in “are jews,” the first question the search suggested was “are jews evil?”

Jewish people were not the only social group affected by the imperfections of Google’s search algorithm. Those typing “are Muslims” saw the suggestion “are Muslims bad?”

When the biggest public source of information signals that such questions are most frequently asked, should we be surprised there is so much racism in the world?

The worst part is that the people most affected by this flaw in the search algorithm are those who deserve it the least: members of minority groups who are already subject to oppression. If you dig deeper into the topic, you’ll find algorithm-related cases about feminists, Muslims, and black people, to name a few.

Global Problems Are Another Target

Type in “global warming is” and see what the algorithm suggests. When I tried, I saw nine autosuggestions, and six of them implied that global warming is not a big deal. It’s no wonder so many people actually think so.


Another interesting case I found relates to people’s food preferences. I started to type “vegans are…” and Google’s autocomplete suggested ten ways to finish my sentence. All of them were judgmental or even offensive to some degree.


Politicians Take Advantage, Too

Another interesting case of Google’s algorithm manipulation is of a political nature. In June 2017, The Washington Beacon published a detailed analysis of search queries about Hillary Clinton made in Google and Bing. Interestingly, the autocompletes were shockingly different, as if there were two different Hillarys: one for those living in Bing’s reality and one for those living in Google’s. For the query “Hillary Clinton li… ” Bing suggested “lies” and “liar,” while Google predicted it was a question about Hillary’s LinkedIn or lipstick. Was it Google trying to bury negative facts about Clinton, or just a difference in how the two search engines’ algorithms work?

In this sense, it’s no longer about bias or prejudice. Now it’s about political manipulation on a global scale.

Flaws in the System and Google’s Explanation

If all this has you feeling a little peeved, it’s worth mentioning that Google has a good excuse (though maybe not an acceptable one). Its algorithm makes suggestions based on the most searched queries on a subject. This means that when we see autocompletes that might offend someone, it’s because lots of people have made “offensive queries” on the topic before. The problem with this system is that Google does not have a good correction process to prevent hate and racism scandals from happening.
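To make that mechanism concrete, here is a minimal, hypothetical sketch of a popularity-driven autocomplete: completions are ranked purely by how often they appear in a query log, so the most-typed phrase surfaces first regardless of whether it is offensive. The query log, function name, and rankings below are invented for illustration; this is not Google’s actual implementation.

```python
from collections import Counter

# Hypothetical query log; in reality this would be billions of searches.
QUERY_LOG = [
    "are vegans healthy",
    "are vegans annoying",
    "are vegans annoying",
    "are vegans annoying",
    "global warming is a hoax",
    "global warming is real",
]

def autocomplete(prefix, log, limit=5):
    """Return the most frequent logged queries that start with the prefix."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(limit)]

print(autocomplete("are vegans", QUERY_LOG))
# -> ['are vegans annoying', 'are vegans healthy']
# Popularity alone decides the ranking, so a frequently typed but
# prejudiced phrase rises to the top of the suggestions.
```

In a scheme like this, nothing about a suggestion’s content matters, only how often it has been typed, which is exactly why heavily searched prejudiced phrases end up being recommended to everyone.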

The biggest problem with such autocompletes is that they show only one side of the story and inevitably perpetuate echo-chamber “debates” and confirmation bias.

Notably, autocompletes don’t convey intent. If, for example, someone’s goal was to debunk the myths about vegans or global warming, that too might drive up the number of queries for those keywords. However, if a climate change denier does a search and sees a “climate change myths” autocomplete, they are more likely to experience confirmation bias based on this result. Such one-sided representation of facts is a perfect tool for feeding stereotypes and bias.

Google Feeds Prejudice. Who’s To Blame?

Given everything covered, it might seem that Google Search is evil. But is it really?

If we think about it, it is not in Google’s best interest to be at the center of scandals related to racism, sexism, or anti-Semitism. It is also highly unlikely that the autocomplete feature was designed with fueling prejudice in mind.

As we mentioned above, autocomplete search queries are just a reflection of the previous search activity of all internet users. They are determined by a few factors, but the popularity of search terms is the biggest.

This means that we, ordinary internet users, are responsible, too. If the autosuggestions we see look racist or biased, it is because we’ve taught the algorithms racism and bias through the sheer volume of prejudiced queries we make. We are the ones in charge.

The truth might be hard to swallow, but autosuggestions are a reflection of our society and its biggest problems. It’s true that one of Google’s social responsibilities is to make sure that its technologies do not exacerbate those problems, but the technologies themselves are not the root cause.

The Controversy Of Google Autocomplete Regulations

There are two things we know for sure. First, the way Google’s autocomplete feature works gives us grounds to say that Google feeds prejudice. It does so by suggesting negative search queries for certain social groups. Second, the search algorithm is learning from us.

In light of this, there are two layers of the problem that need to be addressed. On one hand, we need to remember the long-term effect our “out of curiosity” search queries might have. The more prejudiced and biased queries we make, the more prejudiced the algorithm becomes.

On the other hand, Google needs to take its part of responsibility. As it’s clear that the algorithm reflects the patterns and phenomena society is trying to fight against, it should be a part of Google’s corporate social responsibility strategy to make sure its technologies do not make that fight even more difficult to win.

Google’s responsibility, however, requires a delicate balance. On one side, it must ensure that queries generated by the autocomplete feature are not defamatory or offensive to members of any social group. On the other, keeping the feature under too much control might be considered a form of censorship. If Google’s search algorithm reflects the actual state of things in our society, then correcting the results of autocomplete means trying to sugarcoat the hard truth: our continuing problems with racism, hate, and prejudice.
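As a rough illustration of that trade-off, the toy autocomplete sketched earlier could be extended with a denylist filter that drops suggestions containing blocked terms before they are shown. The blocked terms and function name here are, again, invented; deciding what belongs on such a list is precisely the editorial judgment discussed above.

```python
# Reuses QUERY_LOG and autocomplete() from the earlier sketch.
BLOCKED_TERMS = {"annoying"}  # hypothetical moderation list

def moderated_autocomplete(prefix, log, limit=5):
    """Frequency-ranked suggestions, minus any containing a blocked term."""
    candidates = autocomplete(prefix, log, limit=limit * 2)
    allowed = [s for s in candidates
               if not any(term in s.split() for term in BLOCKED_TERMS)]
    return allowed[:limit]

print(moderated_autocomplete("are vegans", QUERY_LOG))
# -> ['are vegans healthy']
# The offensive suggestion is gone, but so is the evidence of what people
# actually search for -- which is the censorship concern in a nutshell.
```

The filtered list is certainly less offensive, but it no longer mirrors real search behavior, which is why any such correction invites accusations of sugarcoating or manipulation.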

If we think about it this way, what would be the right thing to do?
