Corrections Today, Fall 2024, Vol. 86, No. 3
Imagine a probation officer accessing her agency’s correctional information management system, which includes a newly implemented AI-powered risk prediction module. A red alert symbol flashes beside a client’s name on her caseload. The system’s algorithm has identified a client at increased risk of a probation violation, specifically, a possible relapse into substance abuse. The assessment was based on information gathered from the client’s social media accounts, case notes, treatment progress reports and recent phone messages, and it has flagged a recent mention of the client attending a party. The risk assessment contradicts the PO’s professional judgment, so she spends several hours wading through social media posts, treatment notes and other information to see what she can learn. As she suspected, the client has made consistent progress and is clearly committed to sobriety. The AI system incorrectly interpreted a social media post about a “party,” which was actually his young son’s birthday, and reached the wrong conclusion about his current level of risk. This is the type of decision POs may increasingly confront: whether to trust their professional judgment or rely on an AI system’s prediction of a client’s imminent relapse.

While the potential of AI to streamline processes, enhance decision-making and improve both client and public safety outcomes is enticing, it must be counterbalanced with the very real risks of harm, including errors. Mistakes by AI systems can undermine the principles of fairness, objectivity and rehabilitation that are fundamental to correctional practice. An AI tool might flag a client for increased supervision based on factors that are statistically correlated with recidivism but do not accurately reflect the individual’s current circumstances or potential for change. This could contribute to a self-fulfilling prophecy, where increased scrutiny and intervention increase the likelihood of reoffending.

Careful planning, diligent design processes, input from diverse stakeholders and meaningful oversight are needed to protect both the humans involved and the integrity of the system. A comprehensive and enforceable ethical framework is the first step in that process, but currently, there are no research-based frameworks to support AI development (Munn, 2022). In short, correctional agencies lack the guidance needed to successfully navigate the specific challenges of AI development and implementation.
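To see how the opening scenario’s error can arise, consider the minimal sketch below. It is our own illustration, not the actual module described above: a context-blind flag that matches trigger keywords fires on a child’s birthday “party” just as readily as on a genuine relapse.

```python
# Minimal sketch of a naive keyword-based risk flag (hypothetical; not
# the vendor system described in the article). It shows why
# context-blind matching misfires: "party" triggers a relapse alert
# even when the post is about a child's birthday.
RELAPSE_KEYWORDS = {"party", "bar", "drinks"}

def flag_relapse_risk(text):
    """Return True if any trigger keyword appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not RELAPSE_KEYWORDS.isdisjoint(words)

post = "Threw a little party for my son's 4th birthday today!"
print(flag_relapse_risk(post))  # True -- a false positive: no relapse here
```

The point of the sketch is simply that keyword co-occurrence is not evidence; the word “party” carries none of the context a human officer would weigh.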
What is AI?

AI encompasses a broad set of mathematical techniques that range from simple rule-based approaches to more complex machine learning and deep learning applications. These mathematical models can identify patterns in very large data sets, make predictions and perform tasks that typically require human thinking to accomplish (IBM, 2023). AI use is wide-ranging, from curating online content to helping doctors diagnose illnesses and even driving autonomous vehicles.

Other forms of AI, such as large language models (LLMs), like ChatGPT, mimic human thought and speech by harnessing the power of statistical probability. They can communicate using natural language by predicting what word should come next in a sentence to create a coherent response that seems quite human-like.

These prediction algorithms and LLMs are powerful and quite useful, but they do not currently understand human contexts, human emotions or moral judgments, and, most importantly, they are subject to error. This is particularly concerning in justice contexts, like corrections, that hold considerable sway over the lives of inmates, probationers and parolees under supervision. An error in the context of client supervision or management can have profound, even life-long consequences, which mandates that highly skilled people make informed, ethical decisions. Therefore, the application of AI to replace or augment human judgment should be
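To make the “predict the next word” idea above concrete, here is a toy sketch of next-word prediction from simple word-pair counts. It illustrates only the statistical principle; real LLMs use neural networks trained on vast corpora, and everything in the snippet is a made-up example.

```python
# Toy illustration of next-word prediction by statistical probability.
# A real LLM uses a neural network trained on vast amounts of text;
# this word-pair (bigram) counter only shows the underlying idea:
# pick the likeliest next word given the current one.
from collections import Counter, defaultdict

corpus = "the client attended the meeting and the client completed treatment".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'client' (seen twice after 'the' in the corpus)
```

Note that the prediction reflects nothing but frequency in the training text, which is exactly why such systems can produce fluent output without any understanding of context or meaning.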