Prof. Ben Zhao Talks AI Dangers on UChicago Big Brains Podcast

Much of what we hear about artificial intelligence today focuses on its potential, with new capabilities from machine learning, neural networks, and data science driving applications in virtually every industry. But as with all technologies, there is also a dark side to AI: vulnerabilities that can be exploited by “bad actors.”

Ben Zhao, Neubauer Professor of Computer Science at UChicago, brings the myriad adversarial uses of AI out of the shadows to expose potential flaws and improve protections. On this week’s episode of the University of Chicago podcast Big Brains, Zhao chats with host Paul Rand about some of his recent projects, from training a neural network to write fake restaurant reviews to finding the “backdoors” that could allow hackers to trick facial recognition security systems or automated vehicles.

In the interview, Zhao argues that it’s up to computer scientists to carefully scrutinize the new AI techniques and applications appearing with startling frequency in the modern world.

“I think this is one of those things where, whether it's atomic fission or the newest gene splicing technique, once science has gone to a certain level you can only hope to make it as balanced as possible,” Zhao says. “Because, in the wrong hands it will get used in the wrong way. And so as long as the science is moving and technology is moving, you have to try to nudge it towards the light. And so in this sense that's what we're trying to do. These techniques are coming. And there's no stopping that. So the only question is, will they be used for good or will they be used for evil? And can we stop it from being used and weaponized in the wrong way?”
