AI experts call to ban neural model that claims to predict crime based on a person's face
It claims to predict criminality based on a person's face “with no racial bias”.
Publishing company Springer is set to publish a paper describing an AI system that purportedly predicts whether a person will commit a crime based on their face. Academics and AI experts from Harvard, MIT and tech companies such as Google and Microsoft have written an open letter urging Springer to stop the paper from being published.
The paper, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” claims to “predict if someone is a criminal based solely on a picture of their face” with “80 percent accuracy and with no racial bias.” It is slated for publication in the upcoming book series “Springer Nature — Research Book Series: Transactions on Computational Science and Computational Intelligence.”
The letter, signed by over 1,000 experts across technology, science and the humanities, strongly condemns the paper, arguing that no system can be built to predict or identify a person's criminality without racial bias.
“Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal’,” the letter reads.
Through the letter, the experts also call on Springer to issue a statement condemning the use of criminal justice data to predict criminality, and urge all publishers to refrain from publishing similar studies in the future.
The paper comes amid protests against systemic racism and police violence in the US. As people in the US and other countries protest, tech companies have also stepped up their efforts against racism.