Research Matters: Anthony Casey on “The Death of Rules and Standards”
Research Matters is a regular feature in which a member of the faculty talks about some of his or her latest work and its impact and relevance to law and society.
Anthony Casey, Assistant Professor of Law and Mark Claster Mamolen Teaching Scholar, wrote “The Death of Rules and Standards” with Anthony Niblett, Assistant Professor & Canada Research Chair in Law, Economics, & Innovation at the University of Toronto Faculty of Law. The paper imagines a future world in which predictive technology tells people how to comply with the law, creating personalized “micro-directives” that are communicated in real time. For instance, the speed limit might be tailored to a specific driver based on weather, traffic, and experience and then broadcast via a device in the car. The authors examine how customized, machine-generated instructions, which are neither rules nor standards, could remake the law, transform the work of judges, lawyers, and legislators—and even change the way we think about what’s right.
Q. What drew you and your co-author to this idea?
A. I had been thinking generally about why we choose rules [example: drive 40 mph] in some instances and standards [example: drive reasonably] in others. My colleague Lior Strahilevitz [the Sidley Austin Professor of Law] had written a paper on the use of big data in setting default rules, and my co-author had started a company that uses big data to advise people on tax compliance. And so [Niblett] and I were talking about this at a conference, and one of us said, “If we used big data to advise people on how to comply with a standard in tax, that’s not a standard anymore, that’s a rule.” But then the other said, “Or is it?” Over time, we realized that, really, it was neither. We began considering what it would mean if a prediction about the outcome drove the behavior. You wouldn’t have a lawmaker saying “drive reasonably” and the user hearing “drive reasonably”—you’d have a lawmaker saying “drive reasonably” and the user seeing “drive 44.5 mph.” We realized that this approach might change everything about law—how we make it, how we follow it, how we think about it, how we teach it.
Q. In one example, you describe speed limits that are customized to individuals based on their driving experience, weather and traffic conditions, and other factors. Is this pure theory, or is this a realistic possibility?
A. There are two technologies we’d need: the predictive technology—or big data—to say, “If you were driving at 3 pm on a rainy day on Lake Shore Drive and you’ve been driving for 15 years and you have perfect eyesight, the speed limit should be 44.5 mph.” That’s the first part, and I’m pretty sure we’re already there; I think someone could easily come up with an algorithm that could predict the ideal speed using all that information. But you’d also have to gather that information, process it quickly, and communicate it to the driver. We’re not quite there yet, but we’re getting close. People already walk around with wristbands telling them how far they’ve walked and what their pulse is. It’s not far-fetched to think you could have something in your car telling you how fast to drive.
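To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a system might map contextual inputs to a personalized speed limit. Nothing in it comes from the paper: the factors, weights, and thresholds are invented assumptions, and a real system would presumably use a model trained on traffic and accident data rather than hand-coded adjustments.

```python
# Hypothetical sketch of a micro-directive engine. Every factor, weight,
# and threshold below is an invented assumption for illustration; a real
# system would presumably learn these from data rather than hard-code them.

from dataclasses import dataclass


@dataclass
class DrivingContext:
    base_limit_mph: float    # posted limit for the road segment
    raining: bool            # current weather
    congestion: float        # 0.0 (empty road) to 1.0 (gridlock)
    years_experience: int    # driver's history
    good_eyesight: bool      # driver attribute


def micro_directive_speed(ctx: DrivingContext) -> float:
    """Return a personalized speed limit for this driver, right now."""
    speed = ctx.base_limit_mph
    if ctx.raining:
        speed *= 0.85                      # slow down in bad weather
    speed *= 1.0 - 0.3 * ctx.congestion    # slow down in heavy traffic
    if ctx.years_experience >= 10 and ctx.good_eyesight:
        speed *= 1.1                       # modest allowance for experience
    return round(speed, 1)


if __name__ == "__main__":
    # The Lake Shore Drive example: a rainy afternoon, light traffic,
    # 15 years of driving, perfect eyesight.
    ctx = DrivingContext(base_limit_mph=45, raining=True, congestion=0.2,
                         years_experience=15, good_eyesight=True)
    print(micro_directive_speed(ctx), "mph")  # prints something like 39.6 mph
```

The point of the sketch is only the shape of the computation: many observed inputs go in, and one precise, personalized directive comes out in real time.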
Q. So why would we want our cars to do this? What’s in it for us?
A. The driver has certainty; you know for sure whether your behavior is legal. That’s the benefit of a rule over a standard: you don’t have to guess what “drive reasonably” means or predict what a judge will think is reasonable. But the problem with a rule like the posted speed limit is that it gets it wrong in certain situations: if it’s raining, 45 mph might be too fast. But if it’s sunny and nobody’s on the road, it might be too slow and you end up wasting time. A standard—“drive reasonably”—allows for these differences, but isn’t specific; you have to guess. Another option is that we could have really precise rules for every scenario sitting in a book somewhere, but how would we look those up while driving? We’re not going to stop and figure it out. A micro-directive would offer the best of all worlds. It would give the driver complete certainty, as well as the advantage of knowing that the rule has been correctly calibrated.
Q. Would changes like this come more quickly to certain areas of law?
A. Criminal law might move slowly. People aren’t going to ask an algorithm whether something’s OK, and people tend to commit crimes where they can’t be observed, which means facts can’t be easily verified. An Orwellian fear of government would make it particularly slow; people won’t want cameras everywhere. But in regulatory areas of law, these changes might come more quickly. Most people want to comply with the law; they just need to know how. So traffic, tax—these are areas where people are looking for a rule to follow.
Q. If we had machines telling us how to comply with the law, it sounds like we wouldn’t need judges, lawyers, or lawmakers, at least not in the way we do now. Wouldn’t we lose out on human judgment?
A. This is probably where the paper is most controversial. Most of what we’d lose would be human bias. Our judgment is really pretty bad. Humans are better than random, and maybe better than an algorithm from 10 years ago or even two years ago. But machine-driven predictive technology is getting better—and remember, it doesn’t have to be perfect for this to work. It just needs to be better than a biased judge. You’d have to have a pretty rosy view of human behavior to think that a well-calibrated algorithm deprives us of a “humanness” that we really need. Human judgment is necessary for those times when you don’t have all the facts. But as we get more and more of the facts, the value of human judgment goes down—and the error cost of human judgment goes up.
Q. So what roles would humans have?
A. We’d still need lawmakers to set the policies and goals, and programmers to design the algorithms. Humans would need to think, for instance, about what to maximize. In traffic law, you certainly wouldn’t want an algorithm that focused only on getting you from Point A to Point B as quickly as possible, because you’d have a lot of accidents. But you also wouldn’t want an algorithm aimed only at avoiding accidents, because you’d have gridlock and people driving at 2 mph. You need people to set the goals and then decide what costs we’re willing to incur to achieve them.
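A toy sketch can make this division of labor concrete. In the hypothetical code below, the algorithm mechanically picks the speed that minimizes a combined cost, but the weight that trades travel time against accident risk is a human policy choice supplied from outside; all of the numbers are invented for illustration.

```python
# Illustrative only: the optimizer is mechanical, but the accident_weight
# parameter is the value judgment that lawmakers would have to supply.

def policy_cost(speed_mph: float, accident_weight: float) -> float:
    """Combined cost of a 10-mile trip at a given speed (toy model)."""
    travel_time_hours = 10 / speed_mph        # faster means less time lost
    accident_risk = (speed_mph / 100) ** 2    # faster means riskier (invented)
    return travel_time_hours + accident_weight * accident_risk


def recommended_speed(accident_weight: float) -> int:
    """Pick the whole-number speed (1-100 mph) with the lowest combined cost."""
    return min(range(1, 101), key=lambda s: policy_cost(s, accident_weight))


if __name__ == "__main__":
    print(recommended_speed(accident_weight=0.5))   # weights time heavily: fast
    print(recommended_speed(accident_weight=20.0))  # weights safety heavily: slow
```

Changing that single weight swings the recommendation from fast to slow, which is exactly the kind of trade-off the answer above says people, not machines, must decide.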
Q. Lawmakers and administrative agencies would do that work; what would lawyers and judges do?
A. In the extreme version of what we predict, lawyers would be more like lobbyists, arguing for a new algorithm or policy. Judges might be advisors deciding whether an algorithm achieves the right policy goal, but they wouldn’t spend much time making law ex post, case by case.
Q. Marbury v. Madison gave us judicial review—does that change under this kind of system? Does the balance of power among the three branches shift?
A. If the trend toward micro-directives continues, it absolutely shifts the balance. Judges wouldn’t have cases and controversies to review as under Marbury, but they might have this other policy-advisor role. So if you wanted judges to retain their power, you’d need to re-envision the institution a bit. And this isn’t science fiction: we’re already seeing it with traffic videos. If there is a video of you speeding, you’re much less likely to challenge it, which means a judge never gets the chance to say whether the speed limit was unreasonable. You could argue that when judges interpret laws and apply constitutional norms today, they’re providing a check on the policies that lawmakers have set. But they don’t need to do that sitting ex post on a case. Instead of a judge saying, “That was the wrong rule or standard,” you could employ a judge to look at an algorithm and say, “Is that the right policy?” I’d much rather have a judge saying, “That’s a crazy policy,” than looking at the defendant in the room and saying, “That’s a crazy policy in this case,” and not realizing how much of that is driven by bias rather than rational judgment.
Q. So maybe this is a philosophical question, but if citizens were following machine-generated micro-directives rather than taking time to make their own decisions, wouldn’t our ability to reason gradually erode?
A. People have made this criticism of traditional rules as well. They say the more you have clear guidance for how you’re supposed to behave, the less you engage in your own moral judgment and thought—and moral atrophy results. But I’m pretty skeptical. Most of my moral judgments during the day aren’t about whether to comply with the law, and I think that’s the case for most people. You still have to decide how to react to the server who gives you your coffee a few minutes late, or how you’ll respond to someone who’s being rude. I’m not a philosopher, but my guess is most of the moral judgments we make are not legal.
Q. Let’s talk about two concerns you mention in the paper: privacy and autonomy. In order for this system to work, the government needs to collect information on people—and it would be able to dictate quite specifically how people should behave. How much should we worry about these things?
A. Given what we know about people’s willingness to give up privacy for convenience, that one is a smaller concern. Autonomy is a little different: that’s a big concern. If you have the ability to predict the behavior that would maximize social welfare, there could be a temptation to create laws that would govern all of it. The big role for lawyers and lawmakers in the future would be understanding where to draw those lines. We are not advocating for laws that govern your whole life; our point is that we can do a better job in the areas where law already exists. We want to leave a sphere for human decisions.
Q. What do you most hope readers will take from this paper?
A. Whether or not you think micro-directives are a good thing, I think it is a trend that we’re likely to see—so it is important to talk about issues like autonomy and privacy now. It would be unfortunate if we had those debates at the last minute.