UNM Law Professor publishes work examining AI in the criminal justice system
Dr. Sonia Gipson Rankin, assistant professor of law at UNM, recently published an article in the Washington & Lee Law Review about her ongoing research examining the potential impact of untrustworthy or faulty artificial intelligence (AI) in the criminal justice system.
“My scholarship and activism combine with my computer science background and passion for racial justice. So I have centered my work on race, technology and the law, and on Black families and community empowerment. My passion in those areas has led me to study AI and its impacts in the criminal justice system in particular,” Gipson Rankin said.
The paper, published in spring 2021, explains how, for at least the past decade, AI software has been embedded in every stage of the criminal justice process, including policing, sentencing and probation.
“Our [criminal justice] system is overloaded. Many components of criminal justice, from law enforcement and prosecutors to parole boards and even judges, have incorporated proprietary third-party AI-developed software to help them make decisions on the life and liberty of the accused,” Gipson Rankin said. “The initial argument was that this would increase fairness. The idea was to take decisions out of the hands of potentially biased decision makers and put them into the hands of theoretically neutral technology, but that is not what is actually occurring. I wanted to educate the legal community about basic details related to machine learning and AI so they can make informed arguments.”
Gipson Rankin says her paper focuses on three main concerns regarding how AI can fail or be weaponized in the criminal justice system.
“My first concern is cybersecurity: the ease with which AI can be hacked. Second, AI must be evaluated to see if there are equal protection violations built into the code or if faulty data has been used in the algorithm. The third category is rogue AI, meaning we don’t know what the AI is going to do. Even if the algorithm is transparent, explainable and accurate, it may not be reliable because the designer did not anticipate the outcome the AI produces, and you don’t know the outcome until you run a cycle,” Gipson Rankin said.
Of the three, Gipson Rankin said cyber attacks are the most concerning.
“2020 and 2021 saw extreme cyber attacks that impacted the functioning of the United States federal government, transportation, and food and water safety. Cities have been held ransom because of a lack of cybersecurity measures. Cyber attacks are not an if, but a when, so why are we not paying attention to this in the criminal justice system, which is responsible for the liberty and freedom of individuals?” Gipson Rankin said.
Along with this, Gipson Rankin said these AI criminal justice risk assessment tools have produced disproportionately negative outcomes for people of color.
“From the legal side we are trying to think of what to do with AI: where does it fit in our legal structures? The law moves at a different rate of change than technology. While this can be good for many reasons, the failure to keep up with rapid change, the failure to understand the ways data is being used and manipulated, and the failure to protect historically marginalized populations will further embed disparities in treatment under the law. The amount of damage this will cause will be immeasurable. So the point of the paper is to educate the legal community about the ways the software works, or more critically doesn’t work, with the knowledge that it can be hacked, the data can be wrong, and they don’t really know what it’s going to produce, which can include violations of equal protection,” Gipson Rankin said.
Another concern, Gipson Rankin said, is the proprietary nature of the software used.
“They don’t have to be transparent. For example, the AI software Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is used in New Mexico’s probation and parole system, and what is unknown is all the data that is given to the COMPAS software and the basis for the recommendations COMPAS is giving to our corrections system,” Gipson Rankin said.
“The problems with AI are not going to help Black communities, and I do believe there should be stronger oversight for AI. At a minimum, the government should be using open source software so the general population knows what information is going in, and the government should be able to answer what is happening with the AI it’s using,” Gipson Rankin said. “And I argue there needs to be special, carved-out legal remedies for parties harmed by rogue AI or hacks. If you’re deciding someone’s liberty, there needs to be a lot more human checks along the way. This can help establish trust in the system.”