Searching for AI Algorithms’ Hidden Biases

Last year, Microsoft launched a chatbot, powered by artificial intelligence, on Twitter as a fun social experiment to see how the bot interacted in the social media sphere. Within 24 hours, the bot had started spewing racist and inflammatory tweets, all learned from the people interacting with it. The experiment was more than a public failure for Microsoft. It was an alarming portrait of how easily artificial intelligence can be tainted with cultural bias, and how readily our machines build on what they learn.

Indeed, it turns out machines can be just as judgmental as humans. That's the consensus surrounding the artificial intelligence (AI) algorithms now being used to sort everything from news and status updates to online searches, hiring processes, and even parole decisions. The issue raises the question: is our goal in using AI algorithms simply to make decisions faster, or to make them better? If the latter, it seems we're currently failing.

Although machines seem completely unemotional and objective in their processing of information, studies continue to show they are riddled with bias, inherited from the humans who created them and from the society their data is drawn from. Says one writer, "AI is just an extension of our existing culture." In the case of machine learning systems, they are reinforcing our culture's biases in many ways, from sexism and racism to political leanings. The following are a few things to keep in mind as more businesses implement AI in their decision-making processes.

Bias Comes in Many Forms

Surprisingly, machines pick up bias from more than just their programmers. Because they continue to learn and hone their assessments, their biases can grow and solidify over time. Biases develop from the data they process, from their interactions with the public (as with Microsoft's chatbot, noted above), and from trends they detect among users themselves. For instance, we've all had that experience on Facebook where we wonder whether a friend has disappeared from the planet because they haven't posted in a while. But when we visit their page, we see they have been posting daily; Facebook has simply chosen not to show us their posts. Facebook's algorithm gives preference to the posts and people a user interacts with most. It's not that a programmer dislikes certain friends' posts; the algorithm predicts, based on your past behavior, that you won't engage with them. Although it's irritating and far from perfect, it's Facebook's way of helping us process the massive amount of content posted every minute.
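
To make that mechanism concrete, here is a minimal sketch of interaction-based feed ranking. The data, function names, and scoring rule are hypothetical and deliberately simplified, not Facebook's actual system; the point is only that posts from friends you rarely engage with can get filtered out entirely.

```python
# A toy illustration of interaction-based feed ranking (hypothetical names,
# not Facebook's actual algorithm): posts from friends you rarely engage
# with sink to the bottom and may never be shown at all.

from typing import Dict, List


def rank_feed(posts: List[dict], interaction_counts: Dict[str, int], limit: int = 10) -> List[dict]:
    """Score each post by how often the viewer has interacted with its author,
    then keep only the top `limit` posts."""
    scored = sorted(
        posts,
        key=lambda post: interaction_counts.get(post["author"], 0),
        reverse=True,
    )
    return scored[:limit]


posts = [
    {"author": "close_friend", "text": "Lunch photo"},
    {"author": "quiet_friend", "text": "Big life update"},  # posts daily, but you never click
]
interaction_counts = {"close_friend": 42, "quiet_friend": 0}

print(rank_feed(posts, interaction_counts, limit=1))
# Only close_friend's post survives; quiet_friend's updates never reach you.
```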

It Doesn’t Have to Be “Fake News” to Be Biased News

Just as Facebook determines which friends’ posts to show us, it also chooses which news articles we see, based on what we tend to interact with most. The effect is more alarming than most people realize. While it might be convenient to read “Posts You Might Like,” it also means you’re constantly being fed news you’re likely to agree with, not news that is unbiased and accurate. Users need to be aware of this bias, which is built into almost every piece of information they now find on the web.

Make Human Judgment Part of the Process

Just because machines process information quickly does not mean humans should give them ultimate decision-making power. As with anything, humans need to be prepared to think through the trends machines find and judge them against their own company’s goals and values. For instance, imagine you work for an engineering company and have a goal of increasing the number of female engineers in your firm. If your HR team uses AI to screen resumes, and that AI has learned to associate engineering with men, simply because most of the engineers it has seen are male, you may falsely believe no women applied for your open positions. The truth is the machine screened them out based on its built-in assumptions. In this case, and in every case, it makes sense to step back and look at the information with your own objective eye.
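
Here is a deliberately simplified sketch of how that can happen. The resumes, tokens, and scoring rule are all hypothetical; the example only illustrates how a screener trained on historical hires can learn a gendered proxy without anyone programming that rule explicitly.

```python
# Toy resume screener trained on (hypothetical) historical hiring data.
# Past engineering hires were overwhelmingly men, so a token that signals
# a female candidate ends up correlated with rejection.

from collections import Counter

past_resumes = [
    ({"python", "robotics"}, "hired"),
    ({"python", "embedded"}, "hired"),
    ({"java", "robotics"}, "hired"),
    ({"python", "women's", "robotics"}, "rejected"),  # e.g. "women's coding club"
]

# "Train" by counting how often each token appears on hired vs. rejected resumes.
hire_counts, reject_counts = Counter(), Counter()
for tokens, label in past_resumes:
    target = hire_counts if label == "hired" else reject_counts
    target.update(tokens)


def score(resume_tokens):
    """Higher score = more 'hire-like' according to the historical data."""
    return sum(hire_counts[t] - reject_counts[t] for t in resume_tokens)


# Two candidates with identical technical skills are ranked differently,
# purely because one resume contains the token "women's".
print(score({"python", "robotics", "women's"}))  # 1  (lower score)
print(score({"python", "robotics"}))             # 2  (higher score)
```

No one wrote "penalize women" into this code; the bias emerges entirely from the skewed history it was trained on, which is exactly why a human needs to audit the output.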

So using our judgment is a short-term solution to the biased-AI issue. But what about the future of AI? Luckily, numerous organizations, such as AlgorithmWatch, have emerged to help educate the public about the issues surrounding AI bias. In fact, AI and algorithm governance may become a new sector of job growth as AI continues to evolve. In any case, we all need to keep reminding ourselves that machines were never meant to replace common sense and solid human judgment, no matter how much easier life would be if we could hand those tasks over.

Daniel Newman

Principal Analyst at Futurum Research
Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world’s largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and technology that companies need to benefit most from their technology projects, which leads to his ideas regularly being cited by CNBC, CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, most recently of “Building Dragons: Digital Transformation in the Experience Economy,” Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native whose speaking engagements take him around the world each year as he shares his vision of the role technology will play in our future.