When Algorithms Testify: Addressing the Explainability Gap of AI Evidence in Criminal Cases

Authors

  • Yuxin Chen Law School, Beijing Normal University, Beijing, China

Keywords:

artificial intelligence, algorithmic black box, criminal evidence

Abstract

The expansion of generative artificial intelligence evidence into the field of criminal justice has exposed structural risks arising from the unexplainability of algorithms. Although existing studies have identified multiple obstacles, they have not yet reached the fundamental crux: the unexplainability of the algorithm itself. The three predicaments that follow from it, namely the disruption of argumentative logic, the loss of focus in the cross-examination process, and the erosion of judicial trust, stem essentially from the subtle tension between the certainty of machine conclusions and their opacity. The solution lies in establishing a transparent evidence-generation mechanism, introducing an expert-assisted review system, and setting up traceability rules for training datasets. Through such a system, a dynamic balance can be achieved between technological empowerment and procedural justice, preventing algorithmic conclusions from being improperly endowed with transcendent probative force.

Published

2025-05-29

How to Cite

Yuxin Chen. (2025). When Algorithms Testify: Addressing the Explainability Gap of AI Evidence in Criminal Cases. Studies in Law and Justice, 4(3), 1–10. Retrieved from https://www.pioneerpublisher.com/slj/article/view/1342

Section

Articles