Sidebar: AI and the Law

As AI technologies continue to evolve, so too does the research of many of our faculty. Other Law School faculty members who have engaged in significant AI-related research include the following.

Anup Malani

Anup Malani, Lee and Brena Freeman Professor of Law, is exploring the use of large language models (LLMs) to help lawyers conduct legal research. His research examines how to use LLMs to summarize legal opinions, to find relevant case law, and to suggest legal citations while writing. 

“My team’s research is intended to create practically useful AI tools and a user interface to help lawyers incorporate continual developments in LLMs into their daily workflow, with the goal of helping them perform legal research and writing more quickly and accurately.”

Adam Chilton

Adam Chilton, Howard G. Krane Professor of Law and Walter Mander Research Scholar, coauthored a paper with several other scholars from around the country on the types of legal reasoning LLMs can perform. The paper presents a collaboratively constructed legal reasoning benchmark called LegalBench, which is the first open-source legal benchmarking effort of its kind.

“LegalBench evaluated how well twenty LLMs could perform 162 legal reasoning ‘tasks.’ The results suggest that LLMs are quickly getting better (for instance, ChatGPT-4 was more accurate than ChatGPT-3.5) and that LLMs are notably better at some kinds of tasks, like determining the correct legal outcome for a set of facts given a specific rule, than at other kinds, like recalling what the legal rule currently is in a given jurisdiction. But on many of these legal tasks, some of the LLMs are already over 80 percent as accurate as humans, and the technology is improving quickly.”

Richard McAdams

Richard McAdams, Bernard D. Meltzer Professor of Law, conducted a study for a forthcoming paper testing ChatGPT as a tool for generating evidence of the ordinary meaning of statutory terms. McAdams and his coauthor found that ChatGPT’s distribution of replies is more useful than what ChatGPT regards as the single “best” reply, pointing both to the potential of LLMs to facilitate legal tasks and to the importance of developing best practices for how to leverage LLMs in performing those tasks.

“These issues are developing so rapidly that Eleventh Circuit Judge [Kevin] Newsom has already cited a draft of my paper in a concurring opinion exploring and advocating the use of LLMs for determining the ordinary meaning of legal documents.”

Saul Levmore

Saul Levmore, William B. Graham Distinguished Service Professor of Law, coauthored a paper arguing that lawmakers should not rely on AI and machine learning alone to implement rules and standards in the judicial process.

“AI alone, like humans on their own, is likely to be much worse than the two ‘methods’ combined. One analogy is to chess. Computers now defeat humans, but there is a world of computer chess assisted by humans who are allowed to overrule the computer’s choice of moves. This combination, or teamwork, defeats computers alone. It is interesting and probably relevant to observe that the best human-computer teams involve human chess experts who on their own do not defeat the best human chess players. The same will probably be true for judges and lawmakers. The experts will be those who learn to work alongside AI.”
