
is based on is also complex, which makes it difficult to root out bias in the AI's decision process and equally difficult to determine the source of that bias. This lack of transparency makes these systems of particular concern in terms of how biased outputs might influence a correctional employee's decision about a client or other important determination. Fundamental rights, such as due process, may be affected, which is unacceptable in such environments.

AI-powered tools also raise concerns about individual privacy and the potential for net widening. While not unique to AI systems, an AI that draws on private and highly sensitive personal information about clients, interventions and other details is yet another target for malicious actors. Similarly, there might also be a net widening effect created by the ease with which AI systems can analyze data and recommend interventions. Clients who would not have been assessed in depth because of their perceived low risk may now be subject to closer scrutiny simply because such analysis requires so few resources. That may lead to recommendations for intervention where none would have been made before.

An AI's recommendation could also trigger unwarranted intrusions and interventions that might disrupt rehabilitation efforts or a client's recovery. AI systems can apply rules to decision-making, but the nuances of human behavior are quite often lost on them (McKendrick et al., 2022). They lack human context and a deep understanding of those ideas. This can create tension between algorithmic assessment and human judgment, underscoring the urgent need for ethical frameworks and rigorous validation processes to guide the responsible and equitable use of AI in corrections.

Finally, overreliance on AI can erode professional discretion and undermine the importance of human judgment in correctional decision-making. Users of AI systems may over-rely on the technology's perceived objectivity and certainty (Leslie, 2019), especially if doing so takes pressure off an already overbooked schedule. The human judgment of correctional employees is irreplaceable, especially when it comes to understanding the nuances of a correctional client, case, or situation. Keeping a human in the loop is a foundational principle of AI ethics, especially in human settings like corrections (Leslie et al., 2021). In fact, human actors become even more critical when an AI system is used: the ability to carefully evaluate and override an AI system's determination is required to ensure the integrity of the correctional system as an enterprise.

The problem with principles

In response to these and other concerns about AI, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights in 2022, offering a set of principles for fair AI. Those principles include developing safe and effective systems that are expertly designed, carefully tested and regularly monitored to ensure they are not causing harm. The Bill of Rights also expresses the need to protect individuals from discrimination caused by AI and to safeguard individual privacy, and it stresses the need for transparency so individuals know when an AI system is being used and why (The White House, 2022).

Hundreds of ethical guidelines have since been developed by technology companies, all of them well-meaning and focused on how best to guide AI development to protect humans and society. In 2023, a review of 200 of these guidelines identified 17 ethical principles, including concepts such as fairness, transparency, beneficence and non-maleficence, all of which are reasonable and appropriate (Nicholas, 2023). These ideas contribute to a helpful vision of the way forward and set the
