The whole world’s abuzz about the wonders of ChatGPT, Bard, and other generative AI systems. But there’s also buzz about AI’s dangers—from its vulnerability to misinformation all the way to concerns that it could pose “a risk of extinction” for humanity.
At the University of Texas at Arlington, a researcher is taking on that first danger—by working to understand the vulnerabilities of artificial intelligence to online misinformation. And she just got a $567,609 grant from the National Science Foundation to help her do it.
Shirin Nilizadeh, an assistant professor in UTA’s department of computer science and engineering, earned the five-year grant for her work to increase the security of natural language generation (NLG) systems, in order “to guard against misuse and abuse that could allow the spread of misinformation online.”
Adversaries may try to ‘poison these systems’ with false information
AI misinformation is “an important and timely problem to address,” Nilizadeh says.
“These systems have complex architectures and are designed to learn from whatever information is on the internet,” she added in a statement. “An adversary might try to poison these systems with a collection of adversarial or false information.”
“The system will learn the adversarial information in the same way it learns truthful information. The adversary can also use some system vulnerabilities to generate malicious content. We first need to understand the vulnerabilities of these systems to develop detection and prevention techniques that improve their resilience to these attacks.”
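Nilizadeh's point — that a model absorbs injected falsehoods exactly as it absorbs truthful text — can be illustrated with a toy sketch. This is not her method or any real system's training pipeline, just a minimal bigram "language model" trained on a corpus an adversary has salted with a false claim:

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word-to-next-word transitions; a stand-in for an NLG training step."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

clean_corpus = [
    "the vaccine is safe and effective",
    "the vaccine is widely tested",
]

# The adversary floods the training data with a false statement.
poisoned_corpus = clean_corpus + ["the vaccine is dangerous"] * 3

clean_model = train_bigram(clean_corpus)
poisoned_model = train_bigram(poisoned_corpus)

# After poisoning, the most likely continuation of "is" is the injected falsehood.
print(poisoned_model["is"].most_common(1))  # [('dangerous', 3)]
```

The model has no notion of truth: it learned the adversarial transition with the same counting machinery it used for the clean data, which is why detection and prevention have to happen around the training process rather than inside it.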
CAREER grant is the NSF’s most prestigious for junior faculty
Nilizadeh’s grant was awarded through the NSF’s Faculty Early Career Development Program, the foundation’s most prestigious honor for junior faculty. Recipients are selected as outstanding researchers and educators alike, recognized for research and educational excellence and for integrating education and research at their home institutions.
Nilizadeh will use the funding to take a comprehensive look at the types of attacks NLG systems are susceptible to. She’ll also create AI-based optimization methods to probe the systems against different attack models. After an in-depth analysis and characterization of the vulnerabilities that enable these attacks, she’ll work to develop defensive methods to protect systems like OpenAI’s ChatGPT and GPT-4 and Google’s Bard.
Focusing on two key NLG techniques
Two common natural language generation techniques will be the focus of Nilizadeh’s research, UTA said: summarization and question-answering.
In summarization, an AI is given a set of articles and asked to summarize their content. In question answering, the system is given a document and asked to answer questions about it by generating text.
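To make the two tasks concrete, here is a hedged sketch in plain Python. Real NLG systems like the ones Nilizadeh studies generate new text; these functions are simple extractive stand-ins (scoring and selecting existing sentences), and the document and questions are invented for illustration:

```python
import re
from collections import Counter

def split_sentences(text):
    """Naive sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def summarize(text, n=1):
    """Extractive stand-in for summarization: score each sentence by the
    frequency of its words across the whole document, keep the top n."""
    sentences = split_sentences(text)
    freq = Counter(w for s in sentences for w in re.findall(r"\w+", s.lower()))
    scored = sorted(
        sentences,
        key=lambda s: -sum(freq[w] for w in re.findall(r"\w+", s.lower())),
    )
    return scored[:n]

def answer(question, text):
    """Extractive stand-in for question answering: return the document
    sentence sharing the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return max(
        split_sentences(text),
        key=lambda s: len(q_words & set(re.findall(r"\w+", s.lower()))),
    )

doc = (
    "Nilizadeh studies the security of language generation systems. "
    "Her NSF CAREER grant funds work on summarization and question answering. "
    "Cats are popular pets."
)

print(summarize(doc))
print(answer("Which tasks does the CAREER grant fund?", doc))
```

A production summarizer or QA model would instead generate text token by token from a learned distribution — which is exactly what makes it vulnerable to the poisoned training data described above.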
Hong Jiang, chair of Nilizadeh’s department at UT Arlington, says the research will address “serious concerns.”
“With large language models and text-generation systems revolutionizing how we interact with machines and enabling the development of novel applications for health care, robotics and beyond, serious concerns emerge about how these powerful systems may be misused, manipulated or cause privacy leakages and security threats,” Jiang said in a statement.
“It is threats like these that Dr. Nilizadeh’s CAREER Award seeks to defend against by exploring novel methods for enhancing the robustness of such systems so that misuses can be detected and mitigated, and end-users can trust and explain the outcomes generated by the systems,” Jiang added.
Open letter warned of ‘profound risks to society and humanity’
In March, an open letter signed by Elon Musk, Apple co-founder Steve Wozniak, and more than 1,000 artificial intelligence experts, researchers, and backers warned of the “profound risks to society and humanity” posed by AI systems with human-competitive intelligence.
Local signers included UT Dallas professor Sriraam Natarajan and student assistant Charles Averill, according to the Silicon Valley Business Journal. Other signers from Texas included J. Craig Wheeler, the Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, emeritus, at UT Austin and an inaugural fellow and past president of the American Astronomical Society; and Peter Stone, associate chair of computer science and director of robotics at the University of Texas at Austin.