Artificial intelligence has the potential to transform our lives, for better or for worse. AI makes our lives better by making shipping and logistics more efficient, helping researchers find cures for diseases, and lessening our burden for tasks that machines can easily take over. But it's not all good news – AI also amplifies our prejudices and existing dysfunctions. When our biases are programmed into these systems, they can quickly become deeply problematic.
One of the most problematic applications of artificial intelligence so far is in policing. AI is being used to identify suspects in security and body cam footage. While this may seem innocuous, facial recognition algorithms have been shown to be biased, regularly misidentifying people of color. Unfortunately, the technology is already being used in policing across the world, and in one town in California a massive drone patrols the streets looking for persons of interest and sending that information along to the police department.
Predictive policing is based on the assumption that crime spreads through a community like a virus. Many criminologists believe that targeting the places where crime occurs is the best way to prevent future crime, but what really happens is that low-income communities and communities of color get singled out for over-policing. That, in turn, erodes trust between these communities and the police who are supposed to be serving and protecting them.
Artificial intelligence can be used for good, but policing probably isn't the place for it. Learn more about AI crime prevention from the infographic below.