Following protests: An article examining whether AI can identify criminals in advance is shelved

The researchers appealed to the publisher reviewing the article and demanded that it be shelved, along with similar studies in the future. An Israeli researcher who signed the letter against the article: “The problems in the article can lead law enforcement agencies to dangerous decisions”

Source: Pixabay

The use of artificial intelligence algorithms to predict all sorts of phenomena has long ceased to be unusual, but researchers’ use of an algorithm to try to determine people’s level of “criminality” based on facial images and crime data has ignited anger among many researchers, who are trying to stop the publication of the research article.

The argument: Using criminal justice system data creates racial bias

More than 1,700 researchers from the fields of artificial intelligence, data science and sociology signed a letter urging Springer Nature not to publish a forthcoming article that attempts to test the ability of an artificial intelligence algorithm to identify levels of “criminality” (or a propensity to commit crimes) by analyzing images of human faces together with crime data.

In the letter, the researchers urge the publisher not to legitimize the research, arguing that publishing it would set a dangerous precedent because it cannot be defined as “scientific.” In addition, the signatories claim that publishing this type of research will only continue to perpetuate prejudice against dark-skinned people and people who are not “white.”

According to the letter, numerous studies show how the justice system is biased against these groups, so any algorithm trained on its data will inevitably carry a built-in bias that only highlights and deepens social gaps rooted in social prejudice and entrenched racism.

The researchers who signed the letter urge Springer’s peer review committee to withdraw the offer to publish the article and to publish the criteria that were used to evaluate it. In addition, they ask the publisher to issue a statement firmly opposing the use of criminal justice system data to predict criminality, and to acknowledge that it has encouraged such work in the past. The researchers also issue a general call for publishers to refrain from publishing such articles in the future.

The researchers want to make it clear that “there is no way to develop a system that can predict or identify ‘criminality’ that is not racially biased, because the category of ‘criminality’ is itself racially biased. Studies of this kind, and their claims of accuracy, rest on the assumption that data on arrests and convictions are neutral, objective records of criminal activity, when in practice they are far from it.”

The news that the article was slated for publication came in an announcement published by Harrisburg University in the United States. The university’s statement said that the researchers claimed 80% success in identifying criminality using an algorithm they developed.

The researchers won their fight, and Springer published its response to the letter, stating that it would not publish the article: “The article was submitted to an upcoming conference and was rejected after a thorough peer review process.” Harrisburg University removed the announcement of the study from its website, stating that “the faculty are updating the paper to address the concerns raised.”

“The problems in the article can lead law enforcement agencies to dangerous decisions”

Among the researchers who signed the letter against the article are several Israeli scholars. One of them is Dr. Eran Toch from the Department of Industrial Engineering at Tel Aviv University. In a conversation with us, Dr. Toch explains his objection to the article: “I think there are professional and ethical problems in this article: using data that comes from prisons, ignoring the fact that our faces carry many cultural traits, and above all ignoring the processes that lead people to prison in the first place.”

He added that “the problems in the article can lead law enforcement agencies to dangerous decisions: arresting people based on what their faces look like, or factoring such systems into bail decisions. There have been several such articles recently, mainly from China. Because these articles are based on machine learning, they feel ‘objective’, but they rely entirely on the input data and the labels they are given.”

Dr. Toch also recalls a chapter from the field’s past that will sound familiar to most of us: “Using facial features to learn something about a person is nothing new. In the 1930s, a kind of pseudoscience based on skull and facial structure (physiognomy) was practiced in Europe and the US. It did not lead anywhere good.”

From: Xiaolin Wu and Xi Zhang, “Automated Inference on Criminality Using Face Images”

Do you know of Israeli researchers who have considered taking part in this kind of research?

“I don’t know of Israeli researchers working in these areas, but Israeli companies are certainly engaged in various kinds of facial recognition and visual analysis. Unfortunately, the ethical lines in Israeli high-tech are very flexible. We know of Israeli companies that have helped totalitarian regimes spy on citizens and journalists. Keep in mind that developers also have a lot of influence over what a particular company does or does not do. You can see how, at companies like Google and Facebook, engineers apply effective pressure not to cross ethical boundaries.”

We asked Dr. Toch whether an algorithm of this kind, if it were hypothetically applied in Israel, would automatically categorize certain populations as “prone to crime”. He said he is “sure that such an algorithm, if operated in Israel, would simply express the chance of people ending up in jail, with all the built-in racism.”

The issue of built-in bias in machine learning algorithms has been discussed many times, and we have also tried to explore it with researchers from the field. We asked Dr. Toch whether he thinks the problem is solvable: will there be an end to built-in bias in AI algorithms in the future?

“I don’t think there is a purely mathematical way to arrive at perfectly fair algorithms. The work of Jon Kleinberg (one of the world’s most important computer scientists) has shown that there is a built-in tension between two types of fairness, individual fairness and group fairness, and that both cannot be satisfied at the same time.”
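To make that tension concrete: one widely cited formalization is Kleinberg, Mullainathan and Raghavan’s result on risk scores, which shows that when two groups have different base rates, a score that is calibrated within each group cannot, in general, also equalize error rates between the groups. The following minimal Python sketch is not taken from the article; the group base rates, noise level and decision threshold are hypothetical. It only illustrates the effect: both groups receive a calibrated risk score, yet their false positive rates diverge.

import random

random.seed(0)

def simulate_group(mean_risk, n=50_000, threshold=0.5):
    # Simulate a calibrated risk score for one hypothetical group.
    # Each person gets a "true" risk p drawn around the group's mean; the
    # reported score is exactly p, so the score is calibrated by construction
    # (among people with score p, a fraction p turn out positive).
    # Returns the false positive rate of the rule "flag if score >= threshold".
    fp = tn = 0
    for _ in range(n):
        p = min(max(random.gauss(mean_risk, 0.15), 0.0), 1.0)  # true risk = score
        label = random.random() < p                            # observed outcome
        flagged = p >= threshold                               # decision from score
        if not label:
            if flagged:
                fp += 1
            else:
                tn += 1
    return fp / (fp + tn)

# Two hypothetical groups whose recorded base rates differ, for example because
# one is policed more heavily. Both scores are calibrated, yet the false
# positive rates diverge sharply, which is the built-in tension described above.
print("False positive rate, low-base-rate group: ", round(simulate_group(0.3), 3))
print("False positive rate, high-base-rate group:", round(simulate_group(0.6), 3))

The decision rule and distributions here are deliberately simple; the point is only that the gap in false positive rates arises from the difference in recorded base rates, not from any explicit use of group membership by the model.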

Not the first time

It is important to note that this is not the first case in which researchers have claimed to develop an algorithm that can identify criminality from image analysis. A 2016 article by researchers at Shanghai Jiao Tong University presented an algorithm of this kind as well.

That article also drew sharp criticism: researchers from Princeton University and Google published a joint article refuting the Chinese researchers’ conclusions and warned, like Dr. Toch, that the researchers had reached pseudoscientific conclusions reminiscent of racist theories such as physiognomy.