Los Angeles Times: “Researchers use AI to predict crime, biased policing in major U.S. cities like L.A.”

Most machine learning models in use by law enforcement today are built on proprietary systems that make it difficult for the public to know how they work or how accurate they are, said Sean Young, executive director of the University of California Institute for Prediction Technology.

Given the criticism surrounding the technology, some data scientists have become more mindful of potential bias.

“This is one of a number of growing research papers or models that’s now trying to find some of that nuance and better understand the complexity of crime prediction and try to make it both more accurate but also address the controversy,” Young, a professor of emergency medicine and informatics at UC Irvine, said of the just-published report.

Predictive policing can also be more effective, he said, when it is used to work with community members to solve problems.

Read the full story in the Los Angeles Times.