
Adobe Stock/Jirapong

Toward the future

The future of correctional AI is an exciting one. It offers tremendous potential to streamline operations, handle routine administrative tasks, provide insights about a facility's safety and security, and assist correctional staff in making complex decisions. In short, corrections should embrace AI as a tool to move the field toward better outcomes at both the system and individual levels.

The risks of doing so are real, though. Bias, lack of transparency, overly intrusive interventions, privacy violations and unfairness are all possible. We can look to principle-based ethical frameworks to help us understand those risks and prepare to address them, but we cannot stop there. We need to consider the actual harms that AI systems may cause and then use a design thinking process to address them carefully, thoroughly and rigorously, in collaboration with all the stakeholders involved. That way, we can benefit from AI while keeping human beings and their needs in sharp focus. CT

REFERENCES

Biometrics | U.S. Customs and Border Protection. (n.d.). www.cbp.gov. https://www.cbp.gov/travel/biometrics

Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Karen, C., Massmann, D., Mambrini, R., Galvão, L., & Terem, E. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), 100857. https://doi.org/10.1016/j.patter.2023.100857

Heaven, W. D. (2021, February 5). Predictive policing is still racist—whatever data it uses. MIT Technology Review. https://www.technologyreview.com/2021/02/05/1017560/predictive-policing-racist-algorithmic-bias-data-crime-predpol/

Henderson, S. (2022, March 16). There's more to AI bias than biased data, NIST report highlights. NIST. https://www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights

IBM. (2023). What is artificial intelligence (AI)? IBM. https://www.ibm.com/topics/artificial-intelligence

IBM. (2023). Shedding light on AI bias with real world examples. IBM Blog. https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/

IPPC Technologies. (n.d.). The containment approach & computer monitoring. Createsend.com. Retrieved May 29, 2024, from https://createsend.com/t/y-272F5EDD39E9110B2540EF23F30FEDED

Jada, I., & Mayayise, T. O. (2023). The impact of artificial intelligence on organisational cyber security: An outcome of a systematic literature review. Data and Information Management, 100063. https://doi.org/10.1016/j.dim.2023.100063

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529

Leslie, D., Burr, C., Aitken, M., Katell, M., Briggs, M., & Rincon, C. (2021). Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4027875

McKendrick, J., & Thurai, A. (2022, September 15). AI isn't ready to make unsupervised decisions. Harvard Business Review. https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions

Redden, J., Inkpen, C., DeMichele, M., & Criminal Justice Testing and Evaluation Consortium. (2020). Artificial intelligence applications in corrections. U.S. Department of Justice, National Institute of Justice, Office of Justice Programs.

Sufyan, M., Shokat, Z., & Ashfaq, U. A. (2023). Artificial intelligence in cancer diagnosis and therapy: Current status and future perspective. Computers in Biology and Medicine, 165, 107356. https://doi.org/10.1016/j.compbiomed.2023.107356

The White House. (2022, October). Blueprint for an AI Bill of Rights. https://www.whitehouse.gov/ostp/ai-bill-of-rights/

van Dijck, G. (2022). Predicting recidivism risk meets AI Act. European Journal on Criminal Policy and Research, 28, 407–423. https://doi.org/10.1007/s10610-022-09516-8

Robert Cameron is a seasoned practitioner, researcher, and educator with over 30 years of experience in correctional systems, emerging technologies, and criminal justice trends.


