AI policing tools may “amplify” prejudices
Evidence suggests that the absence of consistent guidelines for the use of automation and algorithms in police work may lead to discrimination.
The Royal United Services Institute (RUSI) published a report, commissioned by the Centre for Data Ethics and Innovation (CDEI), for which 50 experts were interviewed, including senior police officers in England and Wales.
The report found that AI policing tools could introduce bias: algorithms trained on prior police data “may replicate (and in some cases amplify) the existing biases inherent in the dataset”, such as the under- or over-policing of certain communities.
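The amplification mechanism the report describes can be illustrated with a toy feedback-loop simulation. The sketch below is not from the RUSI report: all numbers are invented, and it assumes, purely for illustration, that recorded crime grows slightly superlinearly with patrol presence (heavier presence producing proportionally more stops and recorded incidents). Two areas have the same true crime rate, but one starts out over-policed, and patrols are reallocated each round in proportion to recorded crime.

```python
# Toy sketch (illustrative assumptions, not from the RUSI/CDEI report):
# two areas share the SAME true crime rate, but area B starts with more
# patrols. Detection is assumed superlinear in patrol presence (ALPHA > 1),
# and each "retraining" round reallocates patrols by recorded-crime share.

TRUE_RATE = 0.1          # identical underlying crime rate in both areas
ALPHA = 1.2              # assumed superlinear detection exponent
TOTAL_PATROLS = 100.0

patrols = {"A": 40.0, "B": 60.0}   # area B is initially over-policed

history = [patrols["B"]]
for _ in range(10):
    # Recorded crime grows superlinearly with patrol presence.
    recorded = {a: TRUE_RATE * (p ** ALPHA) for a, p in patrols.items()}
    total = sum(recorded.values())
    # "Retrain": next round's patrols follow recorded-crime shares.
    patrols = {a: TOTAL_PATROLS * r / total for a, r in recorded.items()}
    history.append(patrols["B"])

# Despite identical true crime rates, area B's patrol share climbs every
# round: the initial bias in the data is replicated and then amplified.
print([round(h, 1) for h in history])
```

Under these assumptions, area B's share of patrols rises each round even though the two areas are equally crime-prone, which is the kind of self-reinforcing disparity the report warns about.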