Research Matters: Lior Strahilevitz on “Personalizing Default Rules and Disclosure with Big Data”
Research Matters is a regular feature in which a member of the faculty talks about some of his or her latest work and its impact and relevance to law and society.
Lior Strahilevitz, Sidley Austin Professor of Law, teamed with Ariel Porat, Greenbaum Distinguished Visiting Professor of Law, to write “Personalizing Default Rules and Disclosure with Big Data,” which will appear in an upcoming issue of the Michigan Law Review. The paper shows how Big Data might be harnessed to enable the personalization of default rules and disclosures in many types of consumer contracts, as well as laws related to medical malpractice, inheritance, landlord-tenant relations, and labor. Strahilevitz spoke about the work.
Q. How did this paper originate?
A. Ariel Porat is a regular visitor at the Law School. He teaches here every fall, though his primary appointment is at Tel Aviv University. Ariel and I love to bounce ideas off each other, and I was in his office about two years ago, talking to him about some of the work I was doing on Big Data. He told me about a paper he had just read by a colleague in Israel that talked about how men and women tend to have different preferences for how their estates should be divided up. Men typically share the large majority of their estates with their widows, and women tend not to share as large a portion of their estates with widowers. We started to talk about that finding as an example of a legal setting in which, rather than relying on one-size-fits-all legal rules, the law might do better by trying to tailor legal rules to characteristics that members of particular groups have. Then we started thinking maybe this isn’t just a paper about wills. Maybe there’s more to be said broadly about contract law, landlord-tenant law, medical malpractice, labor law, and all the examples we end up using in the paper.
Q. What was your approach?
A. The world is filled with default rules, which provide that if the contract is silent or ambiguous, if the will is silent, or if the statute is silent or ambiguous, then a particular rule follows. There is, going back several decades, a lot of good writing about how one might fill in those default terms, but almost all of that literature assumes that default rules should be universal rather than tailored to individual circumstances. We think that approach made sense for most of human history, but as society moves into an era where information about you and me and everyone else is readily available in massive databases held by the government or the private sector, we thought that maybe now the time is right to explore the feasibility of default rules that are tailored to individuals’ preferences and characteristics.
The simplest way to describe the project is like this: In consumer contracts, we have a big problem where people sign contracts or click “accept” on a website without actually reading the terms they’re agreeing to, and the courts need to decide whether to hold people to terms that studies show 99 percent of consumers didn’t read. What we say is, well, it’s unrealistic to assume that everyone will actually read those terms. But we can surmise that certain types of people are likely to be drawn to particular terms, and other types of people are likely to be repelled by those terms. So what we’d like to do is select a group of “guinea pigs” and pay them to read through contractual terms carefully, and then to provide the state or corporation with information about their preferences and characteristics. Then, the state or corporation can look for instances in which particular types of people coalesce around preferences for particular types of contract terms. Finally, we can try to match everyone else in the population to the guinea pigs who are just like them.
The idea is similar to what Netflix does for movies. It has an enormous bank of users who rate movies, and it tries to figure out whether I’ll like a movie or you’ll like a movie by finding people who have extremely similar tastes to us, based on movies we have seen and rated. Netflix has done very well, developing the industry’s best algorithm for predicting whether particular individuals will like a movie. And we’re saying, we think that preferences for terms in contracts or wills are not an entirely different beast, and the same sorts of strategies can be used to match particular people with the terms most likely to make them happy.
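To make the matching step concrete, here is a minimal Python sketch of the kind of nearest-neighbor matching the Netflix analogy suggests. The trait vectors, similarity measure, and data below are hypothetical illustrations; the paper does not prescribe any particular algorithm.

    import math

    def cosine_similarity(a, b):
        # Cosine similarity between two trait vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def predicted_preference(consumer, guinea_pigs, k=3):
        # Default to the majority preference among the k most similar guinea pigs.
        ranked = sorted(guinea_pigs,
                        key=lambda gp: cosine_similarity(consumer, gp["traits"]),
                        reverse=True)
        votes = [gp["accepts_term"] for gp in ranked[:k]]
        return sum(votes) > len(votes) / 2

    # Hypothetical trait vectors (age, risk tolerance, tech savviness), scaled 0-1,
    # for paid readers who actually studied the contract term in question.
    guinea_pigs = [
        {"traits": (0.3, 0.9, 0.8), "accepts_term": True},
        {"traits": (0.7, 0.2, 0.3), "accepts_term": False},
        {"traits": (0.4, 0.8, 0.9), "accepts_term": True},
        {"traits": (0.8, 0.1, 0.2), "accepts_term": False},
        {"traits": (0.5, 0.7, 0.7), "accepts_term": True},
    ]

    # A consumer who never read the term gets the default her nearest neighbors chose.
    consumer = (0.35, 0.85, 0.75)
    print(predicted_preference(consumer, guinea_pigs))  # -> True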
Now, it could be that the use of algorithms and data mining and Big Data gets it wrong sometimes, just as Netflix doesn’t always accurately predict whether I’m going to like a movie. And so for that reason, we think it’s important to stress that we’re just talking about default rules. We always think it’s important to give people room to change the contractual terms they’re getting. But in the absence of such a negotiation, we think personalization of the sort we describe is, in some cases, a better starting point than just giving everyone the same default terms.
Q. Has this ever been done before in the real world, or is this a new idea?
A. We can certainly point to instances where the law does tailor itself, but I believe our project is the first to suggest that this can be done in a very sweeping and ambitious way using Big Data and analytics. In the paper, we try to do some proof-of-concept work. One of the problems we analyze is organ donation. In the United States, the dominant legal default rule is that people are not organ donors unless they opt in to donation. This choice of defaults contributes significantly to the shortage of organs available for transplant. There are other countries that assume people consent, rather than assuming they don’t consent, and they have much greater availability of organs for people needing transplants. So I think there’s definitive evidence that the default rule for organ donation makes a difference. And the American default rule kills a lot of people.
We wondered whether there are attributes that correlate strongly with people being willing to donate their organs. I undertook a pilot survey with Matthew Kugler, who’s a second-year student at the Law School and a psychologist. Matthew and I tried to see whether there were any personality correlates of a propensity to donate organs, and we found that in fact there were. Individuals possessing what psychologists call authoritarian-traditionalist personalities are much less likely to donate their organs or to support organ donation than those who score low on these authoritarianism scales. Perhaps if we can identify a couple of other personality characteristics that correlate strongly with a propensity to donate organs, we can take some portion of the population and say, for this group of people who have these characteristics, we will presume they consent to organ donation. For the people who lack these characteristics, the law should continue to presume that they don’t wish to be organ donors, and they’ll only be organ donors if they specify on their driver’s license or in a will or to their next of kin that they wish to donate.
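As a rough illustration of the kind of correlational check such a pilot survey involves, the sketch below computes a Pearson correlation between an authoritarianism score and a binary willingness-to-donate indicator. The numbers are invented for illustration and are not the survey’s actual data.

    import math

    def pearson_r(xs, ys):
        # Pearson correlation coefficient between two equal-length sequences.
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # Invented respondents: authoritarianism score (1-7 scale) alongside
    # willingness to donate organs (1 = willing, 0 = not willing).
    authoritarianism = [6.1, 2.3, 5.8, 1.9, 4.5, 2.8, 6.5, 3.1]
    willing_to_donate = [0, 1, 0, 1, 0, 1, 0, 1]

    # A strongly negative coefficient would fit the reported pattern.
    print(round(pearson_r(authoritarianism, willing_to_donate), 2))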
Q. How do we get from the idea to execution?
A. We think it largely has to be legislative. In particular court cases where there is a dispute over what the appropriate legal default rule should be, a party could cite our paper and argue that personalization of default rules is appropriate. Having said that, courts tend to be reluctant to make dramatic changes, and this is, in many ways, a sharp break from the way the law presently decides the content of default rules.
Another part of the paper proposes personalized disclosure. Under such a regime, asthmatics would automatically be sent pollution warnings, people with peanut allergies would see special notifications at the point of sale that those of us without allergies wouldn’t see, and the contents of the financial disclosures a home buyer receives would be tied to her level of sophistication. We think our proposals in this section of the paper are less controversial than personalized default rules, and our arguments are largely addressed to administrative agencies that could implement such regimes.
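To give a flavor of how simple the mechanics of such a regime could be, here is a toy rule-table sketch. The attributes, notices, and matching logic are hypothetical illustrations, not anything proposed in the paper.

    # Toy rule table: each rule pairs a profile attribute with the notice that
    # should be shown only to people who have that attribute.
    DISCLOSURE_RULES = [
        ("has_asthma", "Air-quality alert: elevated pollution levels in your area."),
        ("peanut_allergy", "Notice: made in a facility that processes peanuts."),
        ("novice_homebuyer", "Plain-language summary of the mortgage terms below."),
    ]

    def disclosures_for(profile):
        # Return only the notices relevant to this person's profile.
        return [notice for attr, notice in DISCLOSURE_RULES if profile.get(attr)]

    profile = {"has_asthma": True, "peanut_allergy": False, "novice_homebuyer": True}
    for notice in disclosures_for(profile):
        print(notice)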
Q. When will this not work?
A. Personalization won’t work where people have overwhelmingly homogeneous preferences, or where people’s preferences are heterogeneous but not in any predictable way. Concerns about cross-subsidization (or the lack thereof) might also form the basis for reasonable opposition to greater personalization. There also might be contexts in which the law ought to have reservations about giving people exactly what they want through default rules. A good example of this is marital name changes. Liz Emens has pointed out that most American women would prefer to take their husband’s name at the time they get married, and yet the law, we think rightly, doesn’t change women’s names by default upon marriage. The anti-majoritarian default rule is appealing because there is a lot of historical baggage that may help explain why women in this country typically change their names upon marriage, and there’s something problematic about the state putting a thumb on the scale in ways that might provide subtle support for the subordination of women. Personalization will often be efficient, but in contexts like this one, equality considerations temper our enthusiasm for embracing greater efficiencies.
Having said that, it’s conceivable that personalization could even help us make progress in these morally fraught domains. Suppose that heterosexual men who subscribe to Mother Jones, have PhDs in the humanities, and show up as highly Agreeable on personality tests adopt their female partners’ surnames upon marriage 65 percent of the time. Perhaps for members of this subgroup, the law should change these men’s surnames to their new spouses’ surnames by default.