Artificial Intelligence and Evidence

Daniel Seng & Stephen Mason

(2021) 33 SAcLJ 241

Abstract:
The proliferation and use of artificial intelligence (“AI”) systems that are powered by machine learning (“ML”) to gather and process information means that admitting such evidence will raise issues not only about the admissibility of electronic evidence but also about the limitations inherent in ML. Treating the presumption of reliability of computer systems, including AI systems, as a conclusive legal presumption fails to recognise that software systems can produce subtle mistakes that are not obvious. This is compounded by the fact that the non-procedural nature of ML and AI systems amplifies the difficulty of proving or disproving their reliability. The fact that ML and AI systems produce results from datasets containing embedded human assertions also means that the application of the hearsay rule to AI output may be more apposite than previously thought. The authentication of electronic evidence should be subject to a clear procedure developed by the courts, especially in an era of “deepfakes” and other digitally manipulated data. The article concludes with a look at the issues in the discovery and disclosure of voluminous electronic evidence, the use of predictive coding to manage it, and the need to conduct an “examination” of software code.