Racism in technology: how our tools acquire bias
A hundred years ago, racism was rampant. But that didn’t mean the toolbox in your garage harboured prejudiced opinions against you.
Nowadays, racism is less accepted. (Though we still have a long way to go to abolish it.) Yet we are facing the peculiar situation in which our tools are discriminating against us.
So, we investigate the bizarre reality of racism in technology. As the prevalence of artificial intelligence grows, how do our tools acquire bias?
Racism in technology tools
It’s possible to find instances of racism in technology throughout the industry. The products and software we use daily are guilty of prejudice.
For example, even simple hygiene needs are subject to racial bias. Automatic taps and soap dispensers often fail to detect darker skin tones. Seemingly, dark skin was not considered when these common sanitation products were designed.
Or take cameras, for instance. From the start, colour film was calibrated to make white people look good. Photo labs relied on reference images known as ‘Shirley cards’ to ensure quality prints. These cards made sure light skin looked great in photographs, but people with darker skin tones found their photos lacking.
Modern cameras haven’t fared much better and still display some racial insensitivity. Take the Nikon camera that flagged photos of smiling Asian faces with the warning ‘Did someone blink?’ Not to mention the camera filters that make you ‘more attractive’ by lightening your skin tone.
And it doesn’t stop at cameras and plumbing, either. Algorithms and AI are also falling foul of racial bias.
Facebook, YouTube and Google all use algorithms that control what information we see. These algorithms create ‘filter bubbles’: you see more of what you already agree with and less of what challenges you. Unfortunately, engagement-driven recommendations are prone to surfacing increasingly radical content, which in turn promotes discriminatory views.
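The feedback loop behind a filter bubble can be sketched in a few lines of Python. This is a deliberately toy model, not any real platform’s algorithm; the catalogue, the `recommend` function, and the viewpoint labels are all hypothetical. It shows how ranking purely by past engagement lets a single early click snowball into a one-sided feed.

```python
from collections import Counter

# Hypothetical catalogue: ten items, evenly split between two viewpoints.
catalogue = ["viewpoint_x"] * 5 + ["viewpoint_y"] * 5

def recommend(history, feed_size=4):
    """Rank items by how often the user engaged with each viewpoint before."""
    counts = Counter(history)
    return sorted(catalogue, key=lambda tag: -counts[tag])[:feed_size]

# One initial click is enough to tip the feed.
history = ["viewpoint_x"]
for _ in range(3):
    feed = recommend(history)
    history.extend(feed)  # the user consumes whatever is served

print(Counter(history))  # Counter({'viewpoint_x': 13})
```

After three rounds, every recommended item shares the user’s original viewpoint; the other half of the catalogue never surfaces at all.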
Redlining is also an issue. For apps like Pokémon Go, or for delivery algorithms, service areas are another instance of racism in technology: minority neighbourhoods are excluded from participation purely because of their location. Pokémon Go, for example, launched with fewer PokéStops in minority neighbourhoods.
Perhaps the most obvious (and concerning) tool displaying the racism in technology is artificial intelligence. As AI rises in use and ability, racial bias could grow into a major issue.
AI has already demonstrated racism in its facial recognition technology. For example, Joy Buolamwini discovered that facial recognition software registered her face better when she wore a white mask. Another example is the webcam that couldn’t track non-white faces. Not to mention the Google Photos slip-up that tagged an African American couple as ‘gorillas’.
It’s not just facial recognition AI, though. Machine learning, in general, is proving prone to discriminatory outcomes.
How is bias embedded in our tech tools?
Much of the racism in technology doesn’t come from malice, but ignorance. It’s born of the way tech tools, like AI, are trained and coded. It involves unconscious biases, the limitations of technology, and racial oversight.
The thing is, with IoT and the continued growth of AI, more and more tools have the potential to display bias. So, how exactly has racism in technology become reality?
• Biased data
AI needs data to learn how to do things. A lot of data. And if that data is biased, the output will be too. Facial recognition showcases this problem: insufficient training data leads to cameras that can’t recognise or identify people of colour.
For example, imagine an AI tool that’s trained to identify people gets 100 images of faces to study, and only 10% of them represent people of colour. The AI learns far more about identifying white faces than any other. So, to alleviate this instance of racism in technology, training data should be as diverse and free from bias as possible.
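To see why a 90/10 training split is dangerous, here is a deliberately simplified, hypothetical Python sketch. Real recognition models are far more complex, but the trap is the same: a model can score well on aggregate accuracy while being useless for the under-represented group.

```python
# Hypothetical training set: 100 labelled face images,
# 90 from group A and only 10 from group B.
training = ["A"] * 90 + ["B"] * 10

# A lazy "model" that minimises overall training error
# by always predicting the majority group.
majority = max(set(training), key=training.count)

overall_accuracy = sum(label == majority for label in training) / len(training)
group_b_accuracy = sum(
    label == majority for label in training if label == "B"
) / training.count("B")

print(majority)          # 'A'
print(overall_accuracy)  # 0.9 -- looks respectable in aggregate
print(group_b_accuracy)  # 0.0 -- useless for the under-represented group
```

A headline figure of 90% accuracy hides a 100% failure rate on group B, which is exactly how imbalanced training data slips past casual evaluation.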
Then, there’s the fact that algorithms can learn too much. A great example is Microsoft’s Tay, the 2016 chatbot that quickly learnt to tweet racist messages after exposure to Twitter users. But it applies to any algorithm.
Worse, this can lead to inappropriate sweeping generalisations. Prejudicial thoughts often have a foundation in generalisation. That is, they come from making general assumptions about a given type of people. When it comes to racism in technology, though, generalisation is a core part of how machines learn.
So, our tools are combining the wrong learnt data with sweeping generalisations. Is it any wonder, then, that outcomes are sometimes racist, sexist, or discriminatory in some way? Fixing this comes down to transparency in AI-powered decisions. We need to make clear the reasoning behind the AI output and be ready to tweak and adjust answers with human understanding.
Finally, there’s artificial stupidity and the limitations of technology. Look at the soap dispensers that wouldn’t recognise dark skin. This came down to the hardware itself: the sensor works by detecting infrared light reflected off a hand, and darker skin reflects less of that light back.
Artificial stupidity is another technology limitation that contributes to racism in technology. It’s the idea that artificial intelligence doesn’t know any better: it has no moral compass. So, it won’t notice that it’s being fed bias, nor alert us to biased results.
So, continuing to tune and track our AI-powered tools is a must.
Alongside being unethical and unfair, racism in technology poses multiple practical problems. It makes everyday tools less accessible to people who aren’t white.
Indeed, there are few areas of life today that don’t involve software at some level. It’s how we communicate, how we see and interact with the world. As Marc Andreessen famously remarked, ‘software is eating the world’.
With tech tools so ingrained in modern life, racism in technology stands to exacerbate prejudicial attitudes. The impact of algorithms on radicalisation is already observable: consider the labelling of the New Zealand terrorist attack as an act created ‘of and for’ the internet.
There are also several safety risks to consider. For instance, could a driverless car with racial bias pose a risk to pedestrians (and drivers) of colour? How about hygiene issues, as seen with the racist soap dispenser? If autonomous factory machinery holds a racial bias, could it put non-white workers at risk?
What to do about racism in technology
The first step to solving the problem is being aware of it. In the fight against racial discrimination, the naivety of our AI tools and algorithms is another hurdle to overcome.
So, let’s start to quell the racism in technology before it gets out of hand.