London: Artificially intelligent machines could demonstrate prejudice simply by identifying and copying prejudiced behaviour in one another, according to a study. Researchers from Cardiff University in the UK and the Massachusetts Institute of Technology (MIT) in the US found that showing prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by machines.
It may seem that prejudice is a human-specific phenomenon that requires human cognition to form an opinion of, or to stereotype, a certain person or group, according to the study published in the journal Scientific Reports. "It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population," said Roger Whitaker, a professor at Cardiff University.
Though some types of computer algorithms have already exhibited prejudice, such as racism and sexism, learned from public records and other data generated by humans, the research demonstrates that AI systems could evolve prejudiced groups on their own.
The findings are based on computer simulations of how similarly prejudiced individuals, or virtual agents, can form a group and interact with each other. In a game of give and take, each individual decides whether to donate to somebody inside their own group or to someone in a different group.
This decision is based on an individual's reputation as well as their own donating strategy, which includes their level of prejudice towards outsiders. As the game unfolds and a supercomputer racks up thousands of simulations, each individual begins to learn new strategies by copying others either within their own group or across the entire population. "By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it," said Whitaker.
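The simulation described above can be sketched as a toy agent-based model. Everything here is an illustrative assumption rather than the researchers' actual model: the payoff values, the `Agent` class, the imitation rule, and the use of a single prejudice parameter are all hypothetical, and the reputation mechanic from the study is omitted for brevity.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

NUM_AGENTS = 40
NUM_GROUPS = 4
ROUNDS = 200
DONATION_COST = 1.0     # hypothetical cost to the donor
DONATION_BENEFIT = 2.0  # hypothetical benefit to the recipient

class Agent:
    def __init__(self, group):
        self.group = group
        # prejudice in [0, 1]: probability of refusing an out-group recipient
        self.prejudice = random.random()
        self.payoff = 0.0

def play_round(agents):
    """Pair each agent with a random recipient; the donor decides whether to give."""
    for donor in agents:
        recipient = random.choice([a for a in agents if a is not donor])
        same_group = donor.group == recipient.group
        # always donate in-group; refuse out-group with probability = prejudice
        if same_group or random.random() > donor.prejudice:
            donor.payoff -= DONATION_COST
            recipient.payoff += DONATION_BENEFIT

def imitate(agents):
    """Each agent copies the prejudice level of a better-performing random peer."""
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.prejudice = model.prejudice

def run():
    agents = [Agent(i % NUM_GROUPS) for i in range(NUM_AGENTS)]
    for _ in range(ROUNDS):
        play_round(agents)
        imitate(agents)
    # report the population's mean prejudice after all rounds
    return sum(a.prejudice for a in agents) / len(agents)

if __name__ == "__main__":
    print(f"mean prejudice after {ROUNDS} rounds: {run():.3f}")
```

In this sketch, imitation is drawn from the whole population; the study also lets agents copy only within their own group, which is one of the conditions Whitaker's team varied to see what promotes or impedes prejudice.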