Researchers use AI to predict crime, biased policing in cities

For once, algorithms that predict crime could be used to uncover rather than reinforce bias in the police force.

A group of social and data scientists developed a machine learning tool they hoped would better predict crime. The scientists say they were successful, but their work also revealed substandard police protection in poorer neighborhoods in eight major US cities, including Los Angeles.

Rather than justifying more aggressive policing in these areas, however, the researchers hope the technology will lead to “changes in policy that lead to more equitable, need-based allocation of resources,” including dispatching responders other than law enforcement officers to certain types of calls, according to a report published Thursday in the journal Nature Human Behaviour.

The tool, developed by a team led by University of Chicago professor Ishanu Chattopadhyay, predicts crime by detecting patterns in vast amounts of public data on property and violent crime and learning from the data.
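The article describes the approach only at a high level, but the general recipe it points to (tile the city into small areas, turn a public event log into per-tile time series, and learn to predict events days ahead) can be sketched as follows. This is a rough illustration on synthetic data, not the authors’ model; every name and parameter below is an assumption.

```python
# A minimal sketch of the general approach, assuming (not taken from the
# paper) a grid of city tiles and a classifier over lagged event counts.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a public log of property/violent crime events:
# daily event counts for each spatial tile over two years.
n_tiles, n_days = 50, 730
base_rate = rng.uniform(0.05, 0.6, size=n_tiles)   # per-tile event propensity
counts = rng.poisson(base_rate[:, None], size=(n_tiles, n_days))

horizon, window = 7, 14   # predict one week out from two weeks of history
rows = []
for tile in range(n_tiles):
    for t in range(window, n_days - horizon):
        rows.append({
            "t": t,
            **{f"lag_{k}": counts[tile, t - k] for k in range(1, window + 1)},
            "y": int(counts[tile, t + horizon] > 0),  # any event 7 days ahead?
        })
df = pd.DataFrame(rows).sort_values("t", kind="stable")

# Chronological split so training never peeks at the evaluation period.
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]
features = [c for c in df.columns if c.startswith("lag_")]

model = GradientBoostingClassifier().fit(train[features], train["y"])
probs = model.predict_proba(test[features])[:, 1]
print("week-ahead AUC:", round(roc_auc_score(test["y"], probs), 3))
```

The headline accuracy figure reported later in this article refers to out-of-sample predictions on real city data; the toy above only demonstrates the shape of such a pipeline, including the chronological train/test split that any honest week-ahead evaluation requires.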

Chattopadhyay and his colleagues said they wanted to make sure the system wasn’t abused.

“Rather than simply increasing the power of states by predicting the when and where of expected crimes, our tools allow us to audit them for enforcement bias and gain deep insight into the nature of the (intertwined) processes by which policing and crime evolve together in urban areas,” their report says.

For decades, law enforcement agencies across the country have used digital technology for surveillance and prediction, believing it would make policing more efficient and effective. But in practice, civil liberties advocates and others have argued that such programs rest on biased data that contributes to increased patrols in Black and Hispanic neighborhoods or false accusations against people of color.

Chattopadhyay said previous efforts to predict crimes did not always account for systemic biases in law enforcement and were often based on flawed assumptions about crimes and their causes. Such algorithms gave undue weight to variables such as the presence of graffiti, he said. They focused on specific “hot spots” without considering cities’ complex social systems or the impact of police enforcement on crime, he said. The predictions sometimes resulted in police swamping certain neighborhoods with additional patrols.

His team’s efforts have yielded promising results in some places. According to the report, the tool predicted future crimes up to a week in advance with about 90% accuracy.

Running a separate model led to an equally important discovery, Chattopadhyay said. By comparing arrest data across neighborhoods of different socioeconomic levels, the researchers found that crime in more affluent neighborhoods drew more arrests in those areas, while arrests in disadvantaged neighborhoods dropped.

But the opposite was not true. Crime in poorer neighborhoods did not always result in more arrests, suggesting “enforcement bias,” the researchers concluded. The model is based on multi-year data from Chicago, but the researchers found similar results in seven other major cities: Los Angeles; Atlanta; Austin, Texas; Detroit; Philadelphia; Portland, Ore.; and San Francisco.
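One way to picture that comparison: for groups of neighborhoods binned by socioeconomic status, measure how strongly arrests track crime. The sketch below is an assumption-laden toy (synthetic counts and made-up response rates), not the study’s actual test, but a much weaker crime-to-arrest slope in poorer areas is the kind of asymmetry the researchers describe as enforcement bias.

```python
# Hedged illustration: compare the crime-to-arrest response across
# neighborhood groups. All counts and rates here are synthetic assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
weeks = 104  # two years of weekly counts

def simulate(crime_level, arrests_per_crime):
    """Weekly crime counts and arrests that respond to crime at a given rate."""
    crime = rng.poisson(crime_level, size=weeks)
    arrests = rng.binomial(crime, arrests_per_crime)
    return crime, arrests

# Response rates chosen only to illustrate the asymmetry described above.
groups = {
    "affluent": simulate(crime_level=8,  arrests_per_crime=0.5),
    "poorer":   simulate(crime_level=14, arrests_per_crime=0.2),
}

for name, (crime, arrests) in groups.items():
    # Slope of arrests on crime: extra arrests per additional crime.
    slope, intercept, r, p, se = stats.linregress(crime, arrests)
    print(f"{name:8s} arrests per additional crime = {slope:.2f} (p={p:.1e})")
```

With these made-up rates, the affluent group’s slope comes out near 0.5 and the poorer group’s near 0.2; on real data, the interesting question is whether such a gap persists after accounting for underlying crime levels.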

The danger with any type of artificial intelligence used by law enforcement, according to the researchers, is misinterpreting the results and “generating harmful feedback by sending more police into areas that may already be over-patrolled, but feel underprotected.”

To avoid such pitfalls, the researchers decided to publicly audit their algorithm so anyone can verify that it’s being used appropriately, Chattopadhyay said.

“Often the systems deployed are not very transparent and so there is a concern that bias is built in and there is real risk — because the algorithms themselves or the machines may not be biased, but the input may be,” Chattopadhyay said in a phone interview.

The model his team developed can be used to monitor police performance. “You can turn it around and check bias,” he said, “and check if the guidelines are fair too.”

Most of the machine learning models used by law enforcement agencies today are based on proprietary systems that make it difficult for the public to know how they work or how accurate they are, said Sean Young, executive director of the University of California Institute for Prediction Technology.

Amid criticism of the technology, some data scientists have become more attuned to the possibility of bias.

“This is one of several growing research papers or models that are now trying to find some of these nuances and better understand the complexities of predicting crime and trying to make it more accurate, but also addressing the controversy,” Young, a professor of emergency medicine and computer science at UC Irvine, said of the just-released report.

Predictive policing can also be more effective, he said, when it is used in collaboration with community members to solve problems.

Despite the study’s promising results, it’s likely to raise some eyebrows in Los Angeles, where police critics and privacy advocates have long railed against the use of predictive algorithms.

In 2020, the Los Angeles Police Department stopped using a predictive policing program called PredPol, which critics said led to heavier policing in minority neighborhoods.

At the time, Police Chief Michel Moore insisted he had ended the program because of budgetary issues stemming from the COVID-19 pandemic. He had previously said he disagreed with the view that PredPol wrongly targeted Latino and Black neighborhoods. Later, Santa Cruz became the first city in the country to ban predictive policing outright.

Chattopadhyay said he sees how machine learning evokes “Minority Report,” the Philip K. Dick story set in a dystopian future where people are taken away by police for crimes they haven’t yet committed.

But the impact of the technology is only just beginning to be felt, he said.

“There’s no way to put the cat back in the bag,” he said.

https://www.latimes.com/california/story/2022-07-04/researchers-use-ai-to-predict-crime-biased-policing
