
TECHNOLOGY

stage for more thoughtful and careful development of AI systems. They are also problematic for several reasons. Luke Munn, an AI researcher at the Institute for Culture and Society at Western Sydney University, has criticized the development of AI ethics as woefully inadequate on several counts. He argues that principle-based ethical frameworks, such as the AI Bill of Rights, not only lack clear definitions for the terms they use but also focus too heavily on the ethics of the technology itself and not enough on its social impacts. AI systems are sociotechnical, meaning they are technologies embedded in social systems that have complex histories, cultural beliefs and human needs at their core. The focus of AI ethics has primarily been on shaping the technology itself, not on how those systems affect human systems or social arrangements. The principles do little to address that practical problem, especially given the challenge of defining the values expressed in a document such as the AI Bill of Rights. Values-based terms, such as fairness and beneficence, lack any consensus meaning in the context of AI development, which leaves principles built on those concepts open to interpretation. The principles proposed by the U.S. government and technology companies also lack any enforcement mechanism. They may be paper tigers that distract from the actual human costs of AI development and implementation.

Call for responsible AI implementation in corrections

For this reason, principle-based frameworks are a helpful guide but are not enough on their own. A more concrete way forward is to combine principle-based frameworks with an “actual harms” approach to address the real risks that AI systems pose (Leslie, 2021). Actual harms are less focused on the technology and more focused on the risks posed to human beings as the result of a specific aspect of AI use. Where principle-based approaches focus mainly on the general intentions of AI development, an actual-harms approach centers on first identifying the exact harm, measuring and analyzing it, and then carefully and thoroughly addressing it before implementing the system. For instance, there is a real risk of racial bias in AI systems that may significantly harm or otherwise disadvantage individuals under correctional supervision. That may include unwarranted sanctions, such as arrest and incarceration, as the result of a biased AI-based decision. Addressing that actual harm in a thoughtful, focused and complete way is the priority, not just the AI technology.
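
To make the “measure and analyze” step concrete, the short sketch below is one hypothetical illustration, not drawn from the article; the field names, score threshold and example figures are assumptions. It shows how an agency might quantify one specific harm, racially disparate false positives from an AI risk tool, before any deployment decision is made.

# A minimal sketch (hypothetical) of quantifying one "actual harm":
# how often people who did NOT reoffend were still flagged as high risk,
# broken out by the group label recorded for each person.
from collections import defaultdict

def false_positive_rates(records, score_threshold=0.7):
    """Rate of non-reoffenders flagged as high risk, per group."""
    flagged = defaultdict(int)   # non-reoffenders flagged high risk, per group
    total = defaultdict(int)     # all non-reoffenders, per group
    for r in records:
        if r["reoffended"]:
            continue
        total[r["group"]] += 1
        if r["risk_score"] >= score_threshold:
            flagged[r["group"]] += 1
    return {g: flagged[g] / total[g] for g in total if total[g]}

# Example audit with made-up data: a disparity this size would be documented,
# analyzed and addressed before the tool is put in front of correctional staff.
records = [
    {"group": "A", "risk_score": 0.8, "reoffended": False},
    {"group": "A", "risk_score": 0.4, "reoffended": False},
    {"group": "B", "risk_score": 0.3, "reoffended": False},
    {"group": "B", "risk_score": 0.5, "reoffended": False},
]
print(false_positive_rates(records))  # {'A': 0.5, 'B': 0.0}

A disparity like the one in this toy output is exactly the kind of measured, specific harm the actual-harms approach asks agencies to resolve before implementation, rather than a general statement of intent.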

Implementing an actual-harms model could be accomplished via design thinking (Redden et al., 2020), a human-focused problem-solving approach. It places human actors front and center to develop empathy for and an understanding of the problem itself from a human perspective, and then to develop testable AI prototypes that can be rigorously evaluated. It’s a proactive and positive way forward that keeps solutions at the forefront. Designing, testing and continuous improvement that engage communities of stakeholders are hallmarks of design thinking.

The goal of such an approach would be a collaborative effort to develop transparent and explainable AI systems that are rigorously tested to ensure the actual harm under consideration is eliminated before the system ever sees the light of day. More than that, design thinking also incorporates the development of clear policies and procedures, staff training approaches and other human activities related to AI use. Keeping a human in the loop in AI decision-making, for example, requires that users know the system’s capabilities and limitations, how it aligns with laws and regulations, and how it aligns with the agency’s obligations and priorities. Ongoing collaboration with AI researchers, ethicists, members of the public, clients and other stakeholders would further enhance problem-solving in a design thinking approach.
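
As one hypothetical illustration of keeping a human in the loop (again a sketch under assumed names, not a description of any existing system), an agency could require that every AI output is only a recommendation, carries a plain-language rationale, and cannot take effect until a named staff member records a review:

# A minimal sketch (hypothetical) of a human-in-the-loop gate: the model only
# recommends; nothing is finalized until a trained staff member reviews the
# recommendation and records a judgment.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    person_id: str
    risk_score: float      # model output
    model_version: str     # supports transparency and auditability
    rationale: str         # plain-language explanation shown to staff

@dataclass
class Decision:
    recommendation: Recommendation
    reviewer_id: Optional[str] = None
    approved: Optional[bool] = None
    reviewer_notes: str = ""

def finalize(decision: Decision) -> Decision:
    """Refuse to finalize any decision that lacks a documented human review."""
    if decision.reviewer_id is None or decision.approved is None:
        raise ValueError("No action may be taken without staff review.")
    return decision

rec = Recommendation("P-1042", 0.82, "v0.3-pilot",
                     "Score driven mainly by prior technical violations.")
decision = Decision(rec, reviewer_id="officer-17", approved=False,
                    reviewer_notes="Score conflicts with recent conduct record.")
finalize(decision)  # accepted only because a named reviewer recorded a judgment

The design choice in this sketch is that the software refuses to finalize anything on its own; staff judgment, informed by knowledge of the system’s capabilities and limits, is the deciding step, not the model score.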


