Technology is developing rapidly to improve our everyday lives, so much so that we may not even notice it. But is it being designed with all of us equally in mind? Our personal opinions and belief systems introduce bias into almost every aspect of our lives. We do not expect that bias to exist in technology, yet sadly it does, and if left unmonitored it could prove dangerous for our society.
Even everyday commodities such as automated soap dispensers were first developed without consideration of darker skin tones: the sensors would recognise, and therefore work for, only white skin. Something as simple as this sanitation technology was tested on just one skin tone, producing technological prejudice and exclusion.
What do examples like this mean for advances in Artificial Intelligence (AI), which is developing faster than the laws and ethics meant to govern it? The racial, age-based and ethnic biases we are slowly becoming aware of could spiral out of control.
In June 2020, IBM abandoned its research on facial recognition technology it was developing for police use, citing fears of racial bias and civil rights abuses. The company stated that AI systems used in law enforcement needed testing “for bias”.
IBM CEO Arvind Krishna wrote a letter to Congress in which he said,
“IBM firmly opposes and will not condone the uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms.”
Many experts are sceptical, arguing it is already too late because we live in a world of mass surveillance and tracking. In one notorious incident, Google’s image-recognition technology tagged two African-American men as gorillas because of its limited training data, sparking outrage. Trust within minority groups needs to be rebuilt.
There is also racial bias in automated speech recognition technology from Apple, Google, IBM, Amazon and Microsoft. Research has shown that the voices of white people are better recognised, with up to 35% of words spoken by black people going unrecognised. Certain accents and African American Vernacular English are not detected, raising fears that cultural identities could be lost. Speech recognition is also known to be more accurate for men than for women, revealing a gender bias as well. More and more examples of racial bias in technology, both conscious and unconscious, are now emerging.
It is not just AI that comes into question, but also machine learning: the ability of machines to learn from new information and solve problems, which they do by using algorithms to organise and sort through data. How can such a methodical process be racist? The data a programmer includes in, or excludes from, the datasets used to develop and test the technology sets its limitations and ultimately its bias. Semi-supervised or unsupervised deep learning in multi-layered artificial neural networks (ANNs) can then carry that bias forward.
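One practical consequence is that a skewed training set is often the earliest visible sign of downstream bias. As a minimal sketch (the data and the `skin_tone` label are hypothetical, not drawn from any real system), a simple audit of group representation can reveal the kind of imbalance described above before a model is ever trained:

```python
from collections import Counter

def audit_representation(samples, group_key):
    """Report what fraction of a dataset each group makes up.

    A model can only learn patterns for groups actually present in
    its training data, so a heavily skewed distribution is an early
    warning sign of bias in whatever is trained on it.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical image metadata: the dataset was built almost
# entirely from one skin-tone group.
training_set = (
    [{"skin_tone": "light"}] * 90 +
    [{"skin_tone": "dark"}] * 10
)

print(audit_representation(training_set, "skin_tone"))
# {'light': 0.9, 'dark': 0.1}
```

An audit like this does not fix bias by itself, but it makes the dataset's limitations explicit rather than leaving them to be discovered by the people the system fails.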
Automation can amplify unintentional bias by incorporating or excluding certain data, which may reflect the racial bias of the programmer. In one recruitment experiment, for example, an algorithm filtered out non-white-sounding names. If such systems were deployed more widely in automated technology, they could create discrimination in housing, education, and employment.
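Outcomes like the recruitment experiment above can be detected with a standard statistical audit. One widely used check is the “four-fifths rule” from the US EEOC’s Uniform Guidelines: the selection rate for a disadvantaged group should be at least 80% of the rate for the most favoured group. The sketch below (with invented group labels and made-up numbers, purely for illustration) shows how such a check might look:

```python
def selection_rate(decisions, group, key="group"):
    """Fraction of candidates in a group who were selected."""
    subset = [d for d in decisions if d[key] == group]
    return sum(d["selected"] for d in subset) / len(subset)

def passes_four_fifths_rule(decisions, group_a, group_b):
    """Disparate-impact check: the lower group's selection rate
    must be at least 80% of the higher group's rate."""
    rate_a = selection_rate(decisions, group_a)
    rate_b = selection_rate(decisions, group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b) >= 0.8

# Hypothetical screening results mirroring the experiment above.
decisions = (
    [{"group": "white-sounding", "selected": True}] * 50 +
    [{"group": "white-sounding", "selected": False}] * 50 +
    [{"group": "non-white-sounding", "selected": True}] * 20 +
    [{"group": "non-white-sounding", "selected": False}] * 80
)

print(passes_four_fifths_rule(
    decisions, "white-sounding", "non-white-sounding"))
# False  (0.20 / 0.50 = 0.4, well below the 0.8 threshold)
```

A failed check does not prove intent, but it flags exactly the kind of disparity that would otherwise stay hidden inside an automated pipeline.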
Thanks to advances in AI, many smaller private companies have been able to jump on the facial recognition bandwagon, but with little to no regulation or legal oversight, no corrections to the algorithms they use can be enforced.
We need to openly recognise the racial bias in technology, because algorithms cannot automatically account for social factors or oppressive history. We can, however, apply the lessons learned from one protected group to others, not only race but also gender and beyond, to ensure that all technological developments are unbiased and inclusive of everyone in our communities.