Artificial intelligence (AI) and machine learning have made modern life dramatically easier, faster and more connected.
But the limits of “smart machines” are also on display every day: self-driving cars that crash and catch fire, facial recognition and law enforcement software biased against minorities and people with darker skin, and algorithms that disqualify swaths of the population from home loans and credit cards.
It’s clear artificial intelligence needs reasonable boundaries, mathematician and computer scientist Moshe Vardi told those gathered for a virtual symposium, “AI & Society,” Sept. 21 and 22. The problem will be reaching agreement on the methods of regulation and implementing them over time, said Vardi, who heads Rice University’s Initiative on Technology, Culture and Society.
“Technology’s driving the future, but who’s doing the steering? The answer is, right now, that society has given up. Tech corporations and the marketplace are doing the steering,” Vardi said. “Public policy has lagged behind technology. We need to figure out how to harness technology with the goal of the public good.”
Vardi’s keynote address kicked off a two-day exploration of the role of artificial intelligence in society, sponsored by the Utah Informatics Initiative (UI2) and the Tanner Humanities Center. The center will continue the theme at events throughout the fall, including a conversation with Shoshana Zuboff, professor emerita at Harvard Business School and author of “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power,” at noon on Oct. 28.
Six University of Utah faculty members talked about their own AI-related research during the symposium’s second day:
- “Literary Forecasting: Speculative Ecologies at Work in DH, EH, and AI,” by Lisa Swanstrom, associate professor in the Department of English
- “Why isn’t this helping? Evaluating Computer-Aided Detection and Augmented Target Recognition Usage,” by Trafton Drew, associate professor in the Department of Psychology
- “Teaching Ethics,” by Eliane Wiese, assistant professor in the School of Computing
- “A Pedagogical Necessity: On the Role of Domain Expertise in the Age of Black Box Models,” by Aniello De Santo, assistant professor in the Department of Linguistics
- “Artificial Intelligence in Robot Assisted Surgery,” by Alan Kuntz, assistant professor in the School of Computing and Robotics Center
- “Unbiased Human Experts in the AI System’s Prediction Loop,” by Bei Wang, assistant professor in the School of Computing
Finally, a closing panel discussed how to enact policy governing big tech companies that have operated largely unfettered for at least two decades.
“We somehow took for granted that more science, more technology, meant more societal good,” Vardi said. “We need to think hard about how science and technology benefits society. It will not happen just because we have more science and technology.”
Tanner Humanities Center Director Erika George suggested the United Nations consider a resolution banning any AI that cannot operate in compliance with international human rights and privacy laws.
“Ethics is not enough. Policy is imperative. The scales of justice are about balancing,” she said. “Perhaps we need to pause. I think we need to be concerned about velocity, slowing down, and doing the due diligence we expect businesses to do.”
And Dan Reed, senior vice president for academic affairs and former vice president for global technology policy at Microsoft, noted that global and cultural differences limit any common understanding of AI policy and ethics, and pointed to the lessons of history.
“Control of information has always rested in the hands of those with power,” Reed said. “That was true in times of village elders. And it was true when reading and writing were restricted to a literate minority. It was true at the time of the printing press. And it’s true now. It is even more important that we talk about how to change that and increase transparency and equity.”