AI and the Law

Our faculty are at the forefront of exploring the uncharted legal frontiers of artificial intelligence.


A decade ago, Anthony Casey began researching how artificial intelligence could change the law. At the time, AI technology was nascent but promising, with companies like Google pouring millions of dollars into its development. Researchers predicted that AI would soon enable machines to learn vast swaths of information, then perform human-like tasks. Casey saw that this might include tasks typically reserved for well-trained attorneys. How, he wondered, would the law reckon with these changes?

Anthony Casey, in a blue suit, white shirt, and blue tie, walks among the library stacks.
Photo by Lloyd DeGrane

Casey, Donald M. Ephraim Professor of Law and Economics and faculty director of the Center on Law and Finance, recalls presenting on AI in 2015 to a room full of attorneys. AI might soon be able to write briefs as well as human attorneys, perhaps even better, he told them. He remembers the skeptical looks that greeted the prediction.

Now, it’s no longer a question of whether AI can write briefs, but how well. In 2021, Casey wrote about the possibility of “Self-Driving Contracts,” by which he meant contracts drafted by AI. One year later, the generative AI model ChatGPT launched. ChatGPT creates new content, including contracts and legal briefs, using AI that learns from data across the internet and beyond. “The technology can produce writing that’s equivalent to or better than the average writer,” Casey said.

“What should we be doing to regulate data? And how will law change because of this? These are the most interesting questions we are working to answer.”

Anthony Casey
 

Ability aside, there are still questions about the legality and efficacy of AI-written briefs. In 2023, a New York federal judge sanctioned two attorneys for submitting a ChatGPT-written brief that was filled with fake quotes and made-up court cases. Despite this highly publicized instance demonstrating the pitfalls of outsourcing legal work to ChatGPT, Casey imagines that law firms are using AI in more judicious ways, such as researching cases and writing templates.

The legal limits of how this technology can be used are murky. At a recent judicial conference, Casey heard about judges who have mandated that lawyers must disclose when they’ve used AI on documents submitted to the court.

But “artificial intelligence” is often a slippery term. For example, must lawyers disclose using Westlaw’s AI-embedded search? Casey said that judges simply want to know that flesh-and-blood attorneys have edited and approved a brief. And that, too, may soon change.

“My understanding is that law firms will have their own proprietary program that only uses real cases, and that’s where the technology is going,” Casey said. “Courts, at least for now, want to make sure humans are in the loop. Part of the reason is we want to hold someone responsible if there’s a frivolous argument or the brief cites a case that doesn’t exist.”

Responsibility raises another question, Casey said. As AI improves, pro se litigants who can’t afford lawyers may soon gain access to representation through AI programs trained to serve as legal counsel. On one hand, Casey said, this would improve access to justice for those who need it most. On the other, what happens if people with nefarious intentions get access to this technology?

This falls under the umbrella of a question Casey began mulling a decade ago: How will AI’s advancements change the legal world? Now, that question is being answered in real time. At the Law School, Casey and multiple other professors are studying the implications of AI for the law, government, and society.

AI and Our Moral and Legal Landscape

Aziz Huq sitting in his office with his laptop in front of him.
Photo by Lloyd DeGrane

Over the past five years, Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, has been studying how constitutional rights apply when a decision maker is an AI-driven machine rather than a human. How should discrimination be defined, for example, when predictions are the output of AI-driven machine learning models?

“Many constitutional rules are focused upon intent, and it’s not clear how you think about intent when it’s a machine,” Huq said. “Can machines have bad intent? Is privacy violated when a machine could draw an inference about a private fact, such as your political beliefs or your sexual inclinations?”

The earliest constitutional rules have normative foundations modeled on moral relations with other people, Huq said. Translating these moral intuitions to AI has been difficult, and the stakes for regulators are getting higher as the technology advances.

Five years ago, Huq never imagined that deepfake pornography would become a pressing issue. Today, realistic nude images of people, sometimes underage, are being created with AI technology. States began taking unique legislative paths on this issue last year, but there’s little consistency.

And AI can now create and disseminate disinformation much more quickly than any partisan news source ever could, potentially swaying elections or inciting violence, Huq said. The First Amendment protects the free speech of people; what about machines? And what happens when these efforts cross national borders? Huq foresees more geopolitical balkanization as AI advances.

“Many constitutional rules are focused upon intent, and it’s not clear how you think about intent when it’s a machine.”

Aziz Huq
 

“These are genuinely new problems,” Huq said. “They’re concrete examples of where the law is grappling with AI. We don’t really have a solution for these really hard problems.” 

Huq believes that in 2024, regulation of AI will become “thicker and more consequential,” even as the technology advances far faster than regulators can keep up. In New York, Governor Kathy Hochul introduced legislation that would punish those who use AI to distribute deepfakes or commit identity theft, among other offenses. But some say that too much regulation of AI may hamper the technology’s long-term benefits.

The Risks of Regulation

Omri Ben-Shahar gestures in front of a classroom with slides on a display screen behind him.
Photo by Lloyd DeGrane

The societal conversation around AI has largely focused on preparing for what may go wrong. Omri Ben-Shahar believes the discussion must also include the benefits AI technology could bring to the legal and regulatory world. There are algorithmic tools, for example, that could rightsize fines based on salary, making traffic fines fairer, or create personalized loan documents that make the loan process easier and more just.

And there are tools that can help judges make better predictions. Ben-Shahar, Leo and Eileen Herzel Distinguished Service Professor of Law and Kearney Director of the Coase-Sandor Institute for Law and Economics, said that predictive AI tools can cut down on the amount of time people spend sitting in jail waiting for trial.

“The reality is that human decisions are awful. It’s very hard for humans to make good risk predictions. And we know now that algorithms can sometimes make better predictions.”

Omri Ben-Shahar
 

“An algorithm can identify those who do not pose any risk for criminality or flight and release them, especially since many of those individuals who will be released are from racial minorities,” Ben-Shahar said. “It means less crime, more liberty, and less discrimination.”

Many states are already using AI-based risk-assessment tools for sentencing, parole, and bail, Ben-Shahar said. One study, posted on SSRN, examined 50,000 convictions in Virginia where judges used AI to rate offenders’ likelihood of reoffending; it found that low-risk offenders avoided incarceration more often and that recidivism fell. These tools can create a better justice system, according to Ben-Shahar, but he believes that too much of the societal conversation about AI is rooted in fear.

“The reality is that human decisions are awful,” Ben-Shahar said. “It’s very hard for humans to make good risk predictions. And we know now that algorithms can sometimes make better predictions. As long as regulation is focusing primarily on the downsides and neglects to give the same attention to the upside, I worry that we will sacrifice a lot of social good by excessively slowing the pace of adoption of AI tools.”

In a 2023 paper titled “Privacy Protection, At What Cost?,” Ben-Shahar argued that resistance to new technology is common even when the technology may save lives. He showed that data technology introduced by car insurers to track dangerous driving leads people to drive more safely and therefore dramatically reduces fatal accidents. Yet privacy laws restrict its adoption. This raises an important question, he said: how much of AI’s upside is society willing to sacrifice in the name of privacy protection? It’s a question Ben-Shahar is exploring in a new book he is writing, called Why Fear Data?

US Government as Regulator and Adopter of AI 

Bridget Fahey looks into the camera with her hands folded.
Photo by Lloyd DeGrane

Federal regulation of AI has been “highly minimalist,” according to Bridget Fahey, Assistant Professor of Law. Even the AI in Government Act of 2020 was a mere two pages. That same minimalism has extended to how the government regulates its own use of AI technology—and metes out access to the valuable stores of government data that are used to train it.

In the fall of 2023, following an executive order addressing the use of AI among federal agencies, the US government disclosed 700 AI use cases. The Government Accountability Office later found that there are closer to 1,200 AI projects across federal agencies, in use or in the planning stages.

“Not only are we seeing the conventional regulatory story about technology—the government is slow to recognize and respond to technological changes—but we are also seeing the government itself as an enthusiastic participant in the AI market,” Fahey said.

Although no statute comprehensively regulates the government’s use of AI by name, the general terms of many existing laws should shape how the government acquires and uses it. But it is not clear that federal agencies are complying with the letter—or spirit—of those laws in their early experimentations with AI. The Privacy Act of 1974, for example, mandates that government agencies disclose how they collect and use data about people. By law, the federal government must publicize new data collections and new data uses in the Federal Register, Fahey said. But ongoing research by Fahey and Raul Castro Fernandez, Assistant Professor in the University of Chicago Department of Computer Science, suggests that agencies have generally not disclosed the use of government data to train AI.

“It’s a puzzle,” Fahey said. “The government has an enormous amount of the kind of high-quality data that AI developers covet, even as private data of that quality is becoming more scarce. Our analysis suggests that agencies must have used existing stores of data to train AI—including personal identifying information about individuals—but they have not disclosed those data uses publicly or subjected them to the kind of contestation we might expect.”

“Not only are we seeing the conventional regulatory story about technology . . . but we are also seeing the government as an enthusiastic participant in the AI market.”

Bridget Fahey
 

In an earlier article, “Data Federalism,” Fahey argued that federal, state, and local governments generally lack adequate legal tools to manage the vast stores of data they collect. This is true even as data has come to be regarded in the private sector as the kind of high-value asset that must be conscientiously and deliberately stewarded. AI, she says, presents only the latest example of the gap between the government’s statutory and regulatory treatment of data and its actual acquisition and use of that data.

A New Gilded Age

Eric Posner, in a blue shirt and grey sportscoat, points at the white board behind him.
Photo by Lloyd DeGrane

Eric Posner closely follows the world of antitrust law and believes that the relationships among tech companies are increasingly reminiscent of the Gilded Age, an era whose excesses brought about antitrust legislation that broke up the great trusts.

The best example came when OpenAI, the nonprofit AI company that created ChatGPT, fired its CEO, Sam Altman. OpenAI is controlled by a board but receives investments from multiple companies, including Microsoft, which is also a competitor with its own AI tools. After Microsoft expressed displeasure with Altman’s firing and moved to hire him itself, OpenAI employees revolted, and Altman was soon rehired as OpenAI’s CEO. Microsoft also gained a nonvoting seat on OpenAI’s board. The matter is being investigated by both the US Federal Trade Commission and the UK’s competition authority.

The potential antitrust issues created by AI run deeper than potential big-tech collusion, said Posner, Kirkland & Ellis Distinguished Service Professor of Law and Arthur and Esther Kane Research Chair. Large language models (LLMs), a type of AI that can recognize and generate text, require vast amounts of data to learn and produce good results, but most of the world’s data is controlled by megacorporations such as Google, Facebook, and Amazon.

“In less regulated countries, you’ll presumably get more effective AI tools. But in those countries, there’s also greater risk that you end up with bad AI tools.”

Eric Posner
 

“That’s an antitrust concern, because that means that there may not be as much competition as we’d want among AI firms or firms in AI-related markets,” Posner said.

There could also be new monopolies created by these data powerhouses, Posner said, noting that YouTube, owned by Google, now features thousands of AI-created videos. These videos require no filming or production work, but still give their creators the ability to make money through advertising revenue. But YouTube is likely the only platform with enough video data to train a productive AI, Posner said, meaning it could license the ability to create these AI videos and charge monopoly-level prices.

And then there is the issue of potential collusion between AI algorithms. If companies were to meet and set prices, that would be illegal. But if they were instead to buy AI price-setting tools, instruct them to maximize profits, and let the algorithms scan the market, the algorithms could potentially raise prices in tandem. One company using AI this way may not be a problem, but if every company used the technology, it would essentially allow companies and their algorithms to engage in price fixing without ever exchanging a word or a data packet. This could already be happening without regulators or legislators knowing. In March, the Justice Department announced that it will focus more on companies that deliberately use AI to advance price fixing or market manipulation, taking into account how well a company has managed the risks of AI. “This is going to be an important area of law to develop in some way,” Posner said.

Antitrust laws, as drafted, did not take the speed and efficiency of AI technology into account. Congress may have to step in to adopt new legislation, Posner said, just as it did during the Gilded Age. Or else legislators may wait to see what events unfold in a less regulated market, then act. This is a game that every country’s government will be forced to play in the coming years.

“In less regulated countries, you’ll presumably get more effective AI tools,” Posner said. “But in those countries, there’s also greater risk that you end up with bad AI tools. These countries are experimenting with different trade-offs, and it’s just not clear who’s doing a better job.”

Studying the Future of AI

New technology always brings fear. People once feared automobiles, elevators, and even refrigerators, Casey said. AI is no different, and he doesn’t buy that it will ever reach the world-destroying powers of Skynet in the Terminator film series.

Even so, big questions remain about AI and the law. The technology has shown promise in making predictions in criminal law and in writing briefs, but Casey wonders whether it could create new laws or legal regimes and, an overarching question in the legal world, whether it will ever be able to work in gray areas or understand the objectives of laws.

“You need an objective when you’re using these programs,” Casey said. “You need to say something like, ‘We want the law to achieve X, Y, or Z’ or ‘We want to mimic what judges have done in these cases.’ It’s unclear whether we know exactly what we want law to do in certain areas. It depends on the political system that produces it.”

And then there’s the question of whether AI will give unfair advantages to the wealthy. If a firm has access to proprietary AI technology that gives it an edge in legal proceedings, are regulators and the market okay with that? Or will there need to be regulation to determine who has access to the most powerful AI?

Attorneys may fear that AI will take their jobs, but Casey believes that the technology will simply shift their focus, and perhaps even create new areas of legal work. There has always been technological change, he said. When computer-assisted legal research launched, book research became antiquated. When cryptocurrency grew, lawyers in several fields—securities, contracts, and even bankruptcy—became crypto experts. Attorneys who want to stay sharp would be wise to stay abreast of AI, Casey said: its risks and its promise, as well as how it can help them in their day-to-day tasks.

“Our goal as scholars is to think about the way law interacts with the world and society,” Casey said. “This is absolutely core to that inquiry: What should we be doing to regulate data? And how will law change because of this? These are the most interesting questions we are working to answer.”
