How human bias is creating racist algorithms
Abstract
It has long been known that human psychology is strongly influenced and characterized by numerous biases. These biases cause severe problems in society: they often produce racist behaviour and increase social inequality in many areas of life.
Artificial Intelligence, Big Data and model-based decision making are meant to help us make more neutral decisions and reduce racial bias. However, human-designed Artificial Intelligence often reproduces the same racial bias in our algorithms. When our algorithms are trained on biased data, the result will only be as good or as bad as the data and as the programmer using it. “Biased data in, leads to biased data out.” (Ben-Aharon, 2019). Artificial Intelligence can offer us immense advantages in many areas. To achieve them, however, it is important to take a differentiated view of the methods and to develop effective mechanisms for debiasing.
Introduction and motivation
Data, Artificial Intelligence, algorithms and model-based decision making have become a tremendously important part of businesses and industries as well as of our private everyday lives. Artificial Intelligence and algorithms are thought to help us make better decisions. Based on Daniel Kahneman’s theory that human behaviour does not correspond to the behaviour of the homo economicus but is guided much more by emotions and shaped by bias, Artificial Intelligence opens up a seemingly new possibility to achieve more rational and neutral results and to create more objective bases for decision-making. Algorithms are thought to be neutral and to find better, not to say the best possible, answers to questions from various fields of interest. “However, in practice, an AI is merely a tool and an algorithm, and as such, it is unlikely to be perfect.” (Hong & Williams, 2019, p. 80).
There are many factors that make an algorithm, and Artificial Intelligence in general, anything but perfect and neutral. With the vast amounts of data used in today’s world, noise and errors are more than common. Just as human thinking, judgement and memory are based on bounded rationality (Kahneman and Tversky, 1974), data can be biased, too. The collection, analysis and interpretation of data are often based on cognitive heuristics and biases.
There are many examples of the huge gains of Artificial Intelligence being undermined by human failure: “The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives.” (O’Neil, 2016, p. 3).
One very common example of bias in Machine Learning and Artificial Intelligence is racist results. There is more than one well-known case in which racism and a lack of social diversity have been displayed in AI. “The problem is that the starting point for artificial intelligence always has to be human intelligence. Humans programme the machines to learn and develop in a certain way – which means they are passing on the unconscious biases. When there is no diversity in the room, it means the machines are learning the same biases and internal prejudices of the majority white workforce that are developing them” (Morris, 2020).
Machine Learning and Artificial Intelligence can be a massive win for our society and lead to great advantages and improvements. However, bias in AI can be harmful and dangerous and can increase inequality within society. Data should therefore be handled carefully, and biases such as racial bias must be well understood in order to work on debiasing and to create a better, more neutral basis for our decision-making processes.
Initial situation
As already introduced, racial bias in AI can be found in numerous examples:
- Google’s AI platform labelled two dark-skinned people as “gorillas”
- Facial analysis systems often perform worse on dark-skinned people
- In weapon recognition systems in the US, dark-skinned people are more likely to be flagged as dangerous
- Technology used in self-driving cars was less reliable at detecting dark-skinned pedestrians
- Crime risk algorithms rate dark-skinned people as higher risk
- The U.S. health care system uses algorithms to decide on the level of care people receive. “Obermeyer et al. find evidence of racial bias in one widely used algorithm, such that Black patients assigned the same level of risk by the algorithm are sicker than White patients. (…) The authors estimated that this racial bias reduces the number of Black patients identified for extra care by more than half.” (Obermeyer et al., 2019)
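The Obermeyer et al. finding can be checked with a simple calibration audit: at equal algorithm risk scores, do patients from different racial groups actually show the same level of health need? The following is a minimal sketch of such an audit; the column names (risk_score, race, active_chronic_conditions) are illustrative assumptions, not the variables of the original study.

```python
# Minimal sketch of a calibration audit across racial groups.
# Assumed columns: "risk_score", "race", "active_chronic_conditions".
import pandas as pd

def calibration_gap_by_race(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Mean health burden per risk-score decile, split by race."""
    df = df.copy()
    # Bin patients into risk-score deciles assigned by the algorithm.
    df["risk_decile"] = pd.qcut(df["risk_score"], q=n_bins,
                                labels=False, duplicates="drop")
    # Average number of chronic conditions per decile and race.
    return (df.groupby(["risk_decile", "race"])["active_chronic_conditions"]
              .mean()
              .unstack("race"))

# Hypothetical usage: if one racial group is systematically sicker at the same
# decile, the algorithm's label (e.g. health cost) understates that group's need.
# audit = calibration_gap_by_race(patients_df)
```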
To give one example in detail: in the context of the coronavirus, hand-held fever thermometers are used more frequently in public places such as airports, bus or train stations in order to prevent the spread of the virus. In an experiment it was found that “Google Vision Cloud, a computer vision service, labeled an image of a dark-skinned individual holding a thermometer ‘gun’ while a similar image with a light-skinned individual was labeled ‘electronic device’.” (Kayser-Bril, 2020).
These kinds of labelling errors occur frequently in vision analysis using Artificial Intelligence. “Deborah Raji, a tech fellow at New York University’s AI Now Institute and a specialist in computer vision, wrote in an email that, in the United States, weapon recognition tools are used in schools, concert halls, apartment complexes and supermarkets. In Europe, automated surveillance deployed by some police forces probably use it as well. Because most of these systems are similar to Google Vision Cloud, “they could easily have the same biases”, Ms Raji wrote. As a result, dark-skinned individuals are more likely to be flagged as dangerous even if they hold an object as harmless as a hand-held thermometer.” (Kayser-Bril, 2020). Companies developing computer vision technologies still produce racially biased products, which poses a substantial risk of unnoticed discrimination and disadvantage for dark-skinned people in many areas of everyday life.
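The experiment described by Kayser-Bril can be reproduced in spirit as a paired-image audit: send otherwise similar images that differ only in the skin tone of the person shown to a labelling service and compare the returned labels. The sketch below assumes a hypothetical label_image helper that wraps whichever vision API is being tested; it is not the interface of Google Vision Cloud itself.

```python
# Minimal sketch of a paired-image labelling audit. label_image() is a
# hypothetical wrapper around the vision API under test, assumed to return
# a list of label strings for a given image path.
from typing import Callable, List, Tuple

def compare_paired_labels(label_image: Callable[[str], List[str]],
                          image_pairs: List[Tuple[str, str]]) -> List[dict]:
    """For each (dark-skinned, light-skinned) image pair, report label differences."""
    results = []
    for dark_path, light_path in image_pairs:
        dark_labels = set(label_image(dark_path))
        light_labels = set(label_image(light_path))
        results.append({
            "pair": (dark_path, light_path),
            "only_dark": sorted(dark_labels - light_labels),   # e.g. "gun"
            "only_light": sorted(light_labels - dark_labels),  # e.g. "electronic device"
        })
    return results

# Systematic differences such as "gun" appearing only for the dark-skinned
# images are exactly the kind of racial bias described above.
```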
Complication
“ ‘As human behavior makes up a large part of AI research, bias is a significant problem,’ says Jason. ‘Data sets about humans are particularly susceptible to bias, while data about the physical world are less susceptible.’” (Murray, 2019).
In this particular case we are mainly talking about racial bias. Programmers working on algorithms simply project the already existing data, and therefore also the already existing bias, into their developments. This does not make the algorithms more neutral, as they are thought to be, but rather reflects human bias. Moreover, the following biases can lead to racism in Artificial Intelligence:
- Confirmation bias: Wason (1968) found that there is a tendency to be less critical of results that confirm our beliefs than of data that contradict them. This may lead to the same racist interpretations of data that already existed before.
- Sampling bias: the training data over- or under-represent certain groups, as in the machine-learning beauty contest that awarded nearly all prizes to white contestants (Doctorow, 2016).
- Selection bias: data are selected in a non-random way, which can reproduce the tendency towards racist results described above.
- Cognitive bias: “These are affective feelings towards a person or a group based on their perceived group membership. More than 180 human biases have been defined and classified by psychologists, and each can affect how individuals make decisions.” (Kantarci, 2020)
- Lack of data: if the data used to train an algorithm are incomplete, the results may not be representative, which leads to biased interpretations (a minimal check for this is sketched after this list).
- Label choice bias: “Label bias occurs when the set of labelled data is not fully representative of the entire universe of potential labels. This is a very common problem in supervised learning, stemming from the fact that data often needs to be labelled by hand (which is difficult and expensive).” (Foust, 2019)
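For the sampling-bias and lack-of-data items above, a first practical step is simply to measure how well each group is represented in the training data before any model is trained. The sketch below is a minimal example of such a check; the column name and the reference shares in the usage comment are assumptions for illustration.

```python
# Minimal sketch of a representativeness check for training data.
import pandas as pd

def representation_report(train: pd.DataFrame, group_col: str,
                          reference_shares: dict) -> pd.DataFrame:
    """Contrast observed group shares in the training data with expected shares."""
    observed = train[group_col].value_counts(normalize=True)
    report = pd.DataFrame({
        "observed_share": observed,
        "expected_share": pd.Series(reference_shares),
    })
    # Ratios far below 1 flag under-represented groups (lack of data).
    report["ratio"] = report["observed_share"] / report["expected_share"]
    return report.sort_values("ratio")

# Hypothetical usage with assumed group labels and population shares:
# representation_report(train_df, "skin_tone", {"dark": 0.3, "medium": 0.3, "light": 0.4})
```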
Incomplete and non-representative data sets in particular are a severe problem leading to racial bias. “Black men, they argued, were six times more likely to be incarcerated than white men and twenty-one times more likely to be killed by police (…). The biased data from uneven policing funnels right into this model. (…) Another way of looking at the same data, though, is that these prisoners live in poor neighborhoods with terrible schools and scant opportunities. And they’re highly policed. (…) In this system, the poor and nonwhite are punished more for being who they are and living where they live.” (O’Neil, 2016, pp. 93, 97)
Conclusion
“There is a saying in computer science: garbage in, garbage out. When we feed machines data that reflects our prejudices, they mimic them – from antisemitic chatbots to racially biased software.” (Buranyi, 2017).
In theory, an algorithm has no prejudices of its own; in practice, it turns out to be very hard to fully debias Artificial Intelligence. For this very reason it is important to be aware of the problem and to develop concrete approaches to fixing bias in AI. “Eliminating forms of bias is critical, not just because stereotyping can be dangerous for specific populations but because anything that skews our picture of reality can be just as detrimental. Algorithms don’t think for themselves. The tools are only as good as we make them.” (Open Data Science, 2019).
In the context of racism, such approaches could include the following (Kantarci, 2020):
- Understand both the data you are using and the algorithm in depth to find out where there is a risk of racial bias.
- Remove or change the labels that lead to racially biased algorithms
- Establish debiasing strategies at several levels:
  - Organizational: transparently present metrics and processes at your workplace
  - Operational: improve the process of data collection
  - Technical: involve tools and processes that identify bias and monitor the accuracy of the model (a minimal example of such a check follows after this list)
- Use the knowledge of human bias in order to check your data and algorithm for data bias. Try to improve training data, process design and the company culture to reduce bias and improve algorithms.
- Build diverse teams to mitigate unwanted biases on minorities.
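As a concrete example of the technical level mentioned in the list, bias-identifying tools usually start from very simple per-group statistics: how often does the model select each group, and how often does it correctly recognise the members of that group who actually belong to the positive class? The sketch below computes both rates; the group labels in the usage comment are assumptions.

```python
# Minimal sketch of per-group rate comparison (selection rate and true-positive rate).
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Return selection rate and true-positive rate for each group."""
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "actual_pos": 0, "true_pos": 0})
    for yt, yp, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1
        s["pred_pos"] += int(yp == 1)
        s["actual_pos"] += int(yt == 1)
        s["true_pos"] += int(yt == 1 and yp == 1)
    return {
        g: {
            "selection_rate": s["pred_pos"] / s["n"],
            "true_positive_rate": (s["true_pos"] / s["actual_pos"]
                                   if s["actual_pos"] else float("nan")),
        }
        for g, s in stats.items()
    }

# Hypothetical usage: large gaps between groups in either rate are a signal to
# revisit labels, training data or decision thresholds before deployment.
# group_rates(y_true=[1, 0, 1, 1], y_pred=[1, 0, 0, 1], groups=["a", "a", "b", "b"])
```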
McKinsey & Company offers a similar six-step approach to reducing racial bias in Artificial Intelligence.
Besides those steps covering processes, awareness and techniques to reduce bias in AI, the most crucial point is the sixth step of that approach: diversity. “We need diversity in the people, creating the algorithms. We need diversity in the data. And we need approaches to make sure that those biases don’t carry on” (Morris, 2020). Test your algorithms and challenge algorithmic decisions by asking whether the result would remain the same if the person were of another ethnicity (a minimal sketch of such a check follows below). “Rather, we must change the data we feed the algorithm – specifically, the labels we give it. Producing new labels requires deep understanding of the domain, the ability to identify and extract relevant data elements, and the capacity to iterate and experiment.” (Obermeyer et al., 2019)
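The counterfactual question raised above can be turned into a simple automated test: re-score every record with the ethnicity attribute swapped and count how often the decision changes. The sketch below assumes a scikit-learn-style model with a predict method and an explicit ethnicity column; both are assumptions for illustration.

```python
# Minimal sketch of a counterfactual "swap the ethnicity" test.
import pandas as pd

def counterfactual_flip_rate(model, X: pd.DataFrame,
                             ethnicity_col: str = "ethnicity",
                             swap: dict = None) -> float:
    """Share of records whose prediction changes when ethnicity is swapped."""
    swap = swap or {"black": "white", "white": "black"}  # assumed encoding
    X_cf = X.copy()
    X_cf[ethnicity_col] = X_cf[ethnicity_col].map(swap).fillna(X[ethnicity_col])
    original = model.predict(X)
    flipped = model.predict(X_cf)
    return float((original != flipped).mean())

# A flip rate well above zero means the model's decisions depend directly on
# ethnicity and should be challenged before the system is used.
```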
In addition, there is one aspect missing that should play a crucial role in connection with the importance of data-driven decisions in the modern world: legal regulations on the handling of data and the use of AI. “In the UK, there are some existing protections. Government services and companies must disclose if a decision has been entirely outsourced to a computer, and, if so, that decision can be challenged.” (Buranyi, 2017). Still, there is a great deal left to do from a governmental perspective to avoid bias in AI and to regulate decision-making based on algorithms.
AI offers a huge chance to remake the world into a much more equal place. But to do that, we need to build it the right way. We need people of different races and ethnicities. We need to think very carefully about what we teach our algorithms and what data we give them, so that they do not just repeat our own past mistakes.
Bibliography
O’Neil, C. (2016). Weapons of Math Destruction. Penguin Books.
Kahneman, D. (2012). Thinking, Fast and Slow. Penguin Books.
Obermeyer, Z., Powers, B., Vogeli, C., Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
Phelps, E. (1972). The Statistical Theory of Racism and Sexism. The American Economic Review, 62(4), 659–661.
Hong, J., Williams, D. (2019). Racism, responsibility and autonomy in HCI: Testing perceptions of an AI agent. Computers in Human Behavior, 100, 79-84.
Ben-Aharon, A. (2019, October 29). Is AI doomed to be racist and sexist?
Retrieved 22.11.2020, from https://uxdesign.cc/is-ai-doomed-to-be-racist-and-sexist-97ee4024e39d
Murray, J. (2019, April 24). Racist Data? Human Bias is Infecting AI Development.
Retrieved 22.11.2020, from https://towardsdatascience.com/racist-data-human-bias-is-infecting-ai-development-8110c1ec50c
Buranyi, S. (2017, August 8). Rise of racist robots – how AI is learning all our worst impulses.
Retrieved 22.11.2020, from https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
Kayser-Bril, N. (2020, April 7). Google apologizes after its Vision AI produced racist results.
Retrieved 22.11.2020, from https://algorithmwatch.org/en/story/google-vision-racism/
Cuthbertson, A. (2019, March 6). Self-driving cars more likely to drive into black people, study claims.
Retrieved 22.11.2020, from https://www.independent.co.uk/life-style/gadgets-and-tech/news/self-driving-car-crash-racial-bias-black-people-study-a8810031.html
Morris, N. (2020, April 1). The race problem with Artificial Intelligence: “Machines are learning to be racist”.
Retrieved 22.11.2020, from https://metro.co.uk/2020/04/01/race-problem-artificial-intelligence-machines-learning-racist-12478025/
Kantarci, A. (2020, November 5). Bias in AI: What it is, Types & Examples, How & Tools to fix it.
Retrieved 22.11.2020, from https://research.aimultiple.com/ai-bias/
Open Data Science (2019, May 20). 9 Common Mistakes That Lead To Data Bias.
Retrieved 22.11.2020, from https://medium.com/@ODSC/9-common-mistakes-that-lead-to-data-bias-a121580c7d1f
Doctorow, C. (2016, September 6). Sampling bias: How a machine-learning beauty contest awarded nearly all prizes to whites.
Retrieved 22.11.2020, from https://boingboing.net/2016/09/06/sampling-bias-how-a-machine-l.html
Foust, T. (2019, March 25). 4 Approaches to Overcoming Label Bias in Positive and Unlabeled Learning.
Retrieved 25.11.2020, from https://blogs.oracle.com/datascience/4-approaches-to-overcoming-label-bias-in-positive-and-unlabeled-learning