One thing artificial intelligence shouldn’t be able to be is prejudiced. Machines don’t suddenly decide to hate; they’re all about the facts. But what if the people programming them – and the data they learn from – are prejudiced themselves? A disturbing new report in Science reveals that some AI systems are inadvertently absorbing exactly that prejudice.

Who remembers Microsoft’s Tay, the 2016 chatbot designed to mimic the speech patterns of a 19-year-old American girl? The high-minded idea behind it was, according to Microsoft, to “conduct research on conversational understanding.” But within hours of launching, Tay was claiming that 9/11 was an inside job, that Hitler was right, and that it agreed with Trump’s stance on immigration.

Tay was based on a Chinese chatbot called Xiaoice, which has, as of this writing, yet to turn into a racist xenophobe. But Tay wasn’t given any filters and, according to Wired, was the victim of “massive groups of people trying to game it.”

It’s surprising to think that a dispassionate artificial intelligence can exhibit both gender and racial biases, but it’s a fact of life – and if we don’t stop the problem now, the implications could be catastrophic.

Which Came First, the Chicken or the Egg? Where Did the Prejudice Come From?

Artificial intelligence is moving closer every day to pitch-perfect language comprehension, but along the way it is also picking up learned biases. According to Joanna Bryson, a computer scientist at the University of Bath, “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

The issue at stake seems to be buried inside a machine learning tool known as “word embedding” – something that would ultimately enable machines to acquire common sense and logic, two core elements that define humanity. Word embedding is so important because it helps computers better understand and make sense of language.

But it’s word embedding that also leads to some serious issues. The process itself creates a numerical representation of language, taking the meaning of a word and then distilling it into a word vector – which is actually a series of numbers – based on all the other words that appear most frequently with it. As the Guardian puts it, “Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.”
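To make the idea concrete, here is a minimal sketch of the co-occurrence approach, using a tiny invented corpus. (Real embeddings such as word2vec or GloVe are trained on billions of words with far more sophisticated models than raw counts; this is only an illustration of the principle that words used in similar contexts end up with similar vectors.)

```python
from collections import Counter
from math import sqrt

# Toy corpus, invented for this sketch; real embeddings are
# trained on billions of words of text.
corpus = [
    "the queen rules the kingdom",
    "the king rules the kingdom",
    "the queen wore a crown",
    "the king wore a crown",
    "the dog chased a ball",
]

vocab = sorted({w for line in corpus for w in line.split()})

def vector(word):
    """Co-occurrence vector: counts of the other words that
    appear in the same sentence as `word`."""
    counts = Counter()
    for line in corpus:
        words = line.split()
        if word in words:
            counts.update(w for w in words if w != word)
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Words used in similar contexts get similar vectors.
print(cosine(vector("king"), vector("queen")))  # high
print(cosine(vector("king"), vector("dog")))    # lower
```

The statistics do all the work here: nothing in the code knows what a king or a queen is, yet the two words end up close together purely because they keep the same company.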

So far, so good. But the Science report reveals that implicit biases – the same ones that can be found in human psychology experiments – are easily soaked up by algorithms. For example, words like “female” and “woman” were seen to be associated with arts, humanities, and the home – whereas “man” and “male” were more closely associated with engineering and math.
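The study quantified such associations by comparing cosine similarities between word vectors and sets of attribute words. A toy version of that measurement, using made-up three-dimensional vectors rather than real trained embeddings (real ones have hundreds of dimensions), might look like this:

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Hypothetical vectors, invented for illustration only.
vectors = {
    "woman": [0.9, 0.1, 0.3],
    "man":   [0.1, 0.9, 0.3],
    "arts":  [0.8, 0.2, 0.1],
    "math":  [0.2, 0.8, 0.1],
}

def association(word, attrs_a, attrs_b):
    """Mean similarity with attribute set A minus set B.
    A positive score means the word leans toward set A."""
    sim_a = sum(cosine(vectors[word], vectors[a]) for a in attrs_a) / len(attrs_a)
    sim_b = sum(cosine(vectors[word], vectors[b]) for b in attrs_b) / len(attrs_b)
    return sim_a - sim_b

print(association("woman", ["arts"], ["math"]))  # positive: leans "arts"
print(association("man",   ["arts"], ["math"]))  # negative: leans "math"
```

Run on real embeddings trained on web text, this kind of score reproduces the same associations that implicit-bias experiments find in people.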

Worse still, it was found that European American names were associated with more pleasing words like “happy,” while African American names carried unpleasant connotations. This puts artificial intelligence in danger of absorbing human cultural biases and perpetuating the same social prejudices.

This is nothing new. ProPublica commissioned research into risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014, and checked to see “how many were charged with new crimes over the next two years.” The first issue they found was that only 20% of the people the algorithm predicted would commit violent crimes actually did so. Then racial disparities appeared: “The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.” In fact, white defendants were mislabeled as low risk more often than black defendants.
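The disparity ProPublica describes is a gap in false positive rates – the share of people who did not reoffend but were flagged as high risk anyway. With illustrative counts shaped like their finding (these numbers are invented, not the actual Broward County data), the calculation looks like this:

```python
# Hypothetical counts of non-reoffenders per group, invented to
# illustrate a roughly two-to-one gap in false flags.
groups = {
    "group_a": {"flagged": 45, "not_flagged": 55},
    "group_b": {"flagged": 23, "not_flagged": 77},
}

fprs = {}
for name, g in groups.items():
    # False positive rate: wrongly flagged / all non-reoffenders.
    fprs[name] = g["flagged"] / (g["flagged"] + g["not_flagged"])
    print(f"{name}: false positive rate = {fprs[name]:.0%}")
```

A model can be “accurate” on average and still distribute its mistakes unevenly – which is exactly why overall accuracy figures alone can hide this kind of bias.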

So how do we combat and stop the bias that algorithms are picking up? Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, says, “We can, in principle, build systems that detect biased decision-making, and then act on it.” Wachter, who has suggested establishing an AI watchdog, adds, “This is a very complicated task, but it is a responsibility that we as a society should not shy away from.”

Wachter, along with Brent Mittelstadt and Luciano Floridi, makes up a research team spanning the Alan Turing Institute in London and the University of Oxford. The team has called for a trusted third party that can examine AI decisions on behalf of people who believe they have been discriminated against.

Indeed, the pitfalls that errors in artificial intelligence can bring about include the loss of jobs and driver’s licenses; some people have lost access to the electoral register, and still others have been chased erroneously for missing child support payments.

Does Government Need to Intervene?

Some people point to the General Data Protection Regulation (GDPR) as something that could turn this around, since it is being set up to combat exactly these kinds of problems. Unfortunately, for now at least, the GDPR doesn’t go far enough.

“There is an idea that the GDPR will deliver accountability and transparency for AI, but that’s not at all guaranteed. It all depends on how it is interpreted in the future by national and European courts,” said Wachter’s colleague, Brent Mittelstadt. All the GDPR currently has the ability to do is offer up a “right to be informed” which would force companies to reveal an algorithm’s purpose; the sort of data it utilizes to make decisions; and other fairly basic information.

Luciano Floridi, the third member of the research team, agrees. “We are already too dependent on algorithms to give up the right to question their decisions,” he said. “The GDPR should be improved to ensure that such a right is fully and unambiguously supported.”

This all stems from accountability and transparency by design. We as entrepreneurs and business operators need to take responsibility. This responsibility was brought to my attention by Nozha Boujemaa, Advisor to the CEO of Inria on data science.

“You think that when you say ‘yes’ it will be taken into account, and when you say ‘no’ that it will actually happen,” says Boujemaa of the terms and conditions we agree to. “But recent studies conducted in France found that in some applications, no matter whether the answer was ‘yes’ or ‘no’, the individual’s location was still taken into account. The consent was not respected, whether for economic reasons or because of technical failure.”

It’s going to be tricky going, that’s for sure. Jürgen Schmidhuber, scientific director of the Dalle Molle Institute for Artificial Intelligence, the Swiss AI lab, acknowledges that you can’t eliminate all forms of bias; instead, he suggests making sure humans design the task well. But that’s much easier said than done.

Maybe we need to figure out how to stop being so unconsciously prejudiced ourselves first.