How Do We Define Human Rights in the Age of AI?
UChicago event fostered interdisciplinary discussion on the role of AI in democracy
As countless headlines suggest, AI has seemingly boundless potential to reshape every sector of our lives. The recent releases of AI image generators and ChatGPT have captivated our imaginations with thrilling and odd results. They’ve also inspired job loss anxieties, political fears and general existential dread.
“Technology can be terrifically positive or disastrously bad,” said University Librarian Torsten Reimer during an event at the Regenstein Library on March 31. “It's really important that, at every stage of the process, we critically reflect on what it does.”
Hosted by UChicago’s Pozen Family Center for Human Rights, “Democracy + AI” brought together Reimer; Aziz Huq, the Frank and Bernice J. Greenberg Professor of Law; David Gunkel, a Northern Illinois University professor of media studies; and keynote speaker Sheila Jasanoff, a Harvard professor.
In her opening remarks, Jasanoff suggested that a discussion about AI and human rights should begin with the “human” part.
“What is it about the human that we consider worth protecting and worth defending?” asked Jasanoff, a renowned scholar of science and technology studies.
She cautioned against falling into the “traps” of AI—the idea that technological progress is always linear, good and needed, and that faster is better.
Though speed is privileged in the tech and finance worlds, there are other sectors like science and medicine where slow, accumulated knowledge is more valuable. “There are lots of places where we actually value slowness and the work of the tortoise over the work of the hare,” Jasanoff said. “So, are technological ways of doing democracy necessarily better?”
Defining the “I” in AI
Jasanoff claimed that most people are fixated on the “A” part of artificial intelligence—how fast, how complex we can make it. “But what is the ‘I’ of AI?” Jasanoff asked. “Is it individualist or communitarian? Is it traditional or modern? Is it hierarchical or egalitarian?”
In human society, Jasanoff pointed out, we recognize the value of different intelligences. Even if you have a bad sense of direction, maybe you have a great memory for names and faces. But when it comes to AI, some forms of intelligence are getting privileged over others.
“The kinds of intelligence we choose to develop, we don't do that in a vacuum,” Jasanoff said. “There's money attached. There are what you might call political economies of intelligence.”
Huq picked up on this thread during the discussion portion of the event, asking the audience to consider who adopts and develops AI.
“If you look at the agencies responsible for ensuring health, safety, collecting taxes, engaging in the protection of the population, there are some [AI] adoptions, but they're very limited,” said Huq, a scholar of constitutional law and AI regulation.
This isn’t true for what Huq calls “the coercive sector,” which includes the military and police.
“Police have funds, they have a will to use those funds to extend their coercive power. And there’s very little by way of regulatory constraint that stops them from doing so, even when a technology is probably not cost justified.”
For example, Huq cited the Chicago Police Department’s adoption of the costly AI tool ShotSpotter—meant to detect and locate gunshots—which has shown little to no effect on crime reduction.
“AI has already built into it directions and biases that privilege some kinds of ways of life, some kinds of assumptions about the moral world, at the expense of others,” Jasanoff said.
Bias and democracy
Among our greatest hopes for AI is that machines could eliminate the foibles of human judgment. If done right, AI could potentially reduce inequity in the judicial system or ensure that diverse voices are represented in public discourse.
However, we’ve quickly learned that we’ve built a lot of ourselves into AI—including human bias.
“Technology is as much an object as it is a mirror,” said Gunkel, who studies the philosophy of technology. “It's a mirror that reflects back to us what we think about ourselves, our society and our world.”
As AI advances, the line between human and machine continues to blur. “Is it a thing? Or is it a person?” Gunkel asked. “Or is it something that doesn't fit the existing categories?”
This is most evident in places like social media, where it’s increasingly difficult to spot the difference between humans and bots. As we’ve seen in elections over the past several years, AI has had a tremendous impact on online debate and political discourse.
“One of the key questions on my mind is, do we need some regulatory intervention in the use of AI in shaping public opinion?” Reimer asked.
Jasanoff advocated a “deliberate slowness” in both tech development and political discourse.
“In a time of profound American political polarization,” Jasanoff said, “it’s worth keeping in mind that good deliberation often means not allowing things to harden into binaries, in the way that the digital world so beautifully captures with its language of zeros and ones.”
The event was produced in collaboration with the University of Chicago Law School and the University of Chicago Library.
This story was adapted from one that ran on the website of the University News Office.