Author ORCID Identifier

0000-0002-5453-6703

Document Type

Article

Publication Date

2023

Keywords

Generative AI, Hallucinations, Cross-border litigation, Regulation of AI, Harmonization

Abstract

In 2023, ChatGPT—an early form of generative artificial intelligence (AI) capable of creating entirely new content—took the world by storm. The first shock came when ChatGPT demonstrated its ability to pass the U.S. bar exam. Soon thereafter, the world learned that ChatGPT was being used by both lawyers and judges in actual litigation.

Some within the legal community find the use of generative AI in civil and criminal litigation entirely unproblematic. Others find generative AI troubling as a matter of due process and procedural fairness due to its propensity not only to misinterpret legitimate legal authorities but to create fictitious sources through a process known as hallucination. These phenomena suggest that judges and litigants cannot rely on anything contained in a document created by generative AI.

Thus far, the legal response to generative AI has been partial, piecemeal, and panicked. No consensus exists as to what can or should be done, let alone who should be responsible for regulating the use of generative AI by lawyers, litigants, judges, and judicial clerks.

This Article analyzes the narrow issue of which public and private bodies are best suited to address the problems associated with generative AI in domestic and cross-border litigation. Rather than proposing specific solutions to the issues facing the criminal and civil justice systems, this Article focuses on identifying who can and should act in the short, medium, and long terms. In so doing, this Article provides the legal profession with a content-neutral blueprint for action.

First Page

165

Publication Title

University of Illinois Law Review Online
