
Padhraic Smyth Receives $900K NSF Grant for Improving Human-AI Decision-Making


Padhraic Smyth, Distinguished Professor of Computer Science and Hasso Plattner Endowed Chair in Artificial Intelligence at the UC Irvine School of Information and Computer Sciences, and co-principal investigator Mark Steyvers, professor and chair of Cognitive Sciences, have received a four-year, $900,000 grant from the National Science Foundation (NSF) for their project “Improving Human-AI Decision-Making Partnerships Through Shared Understanding.”

Padhraic Smyth, UCI Chancellor’s Professor of computer science (left) with Mark Steyvers (right), UCI professor of cognitive sciences. Steve Zylius / UCI

“We are delighted to receive multi-year funding for this research from NSF,” says Smyth. “While there has been significant focus on the autonomous capabilities of AI models in recent years, for example in areas such as image analysis and text generation, there has been far less research on developing AI systems that complement (rather than replace) human expertise. This project will explore multiple open research questions at this human-AI interface and will involve multiple PhD students from both my research group in Computer Science and Mark’s group in Cognitive Sciences; interdisciplinary thinking will be a key aspect of our research work.”

Artificial intelligence (AI) is playing an increasingly central role in critical human decision-making, in applications ranging from medical diagnosis to driving. This research project aims to develop new ways to better understand how humans interact with AI. One aspect of the research will focus on investigating under what conditions AI can be a trusted assistant in the context of human decision-making. The project will improve how people and AI collaborate, making these interactions not only effective but also aligned with human values and expectations.

The interdisciplinary project uses ideas such as Bayesian learning and cognitive modeling to analyze human-AI interaction.

“We are excited to be starting work on this research journey for the next four years, motivated by the importance of creating AI systems that can respect human goals, such as fairness, teamwork, and preserving a sense of personal control,” says Smyth. “The results from our research can lead to a better understanding of how AI can be used in a responsible manner across applications in areas such as medicine, education, science, and business.”

The project will include a number of core research activities: developing Bayesian inference frameworks to evaluate the evolving abilities of both humans and AI agents; creating adaptive optimization algorithms to manage decision policies under uncertainty; and exploring human-centered aspects of AI by integrating subjective metrics such as perceived fairness, teamwork, and agency into multi-objective optimization frameworks.
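As a rough illustration only (the model and names below are assumptions for this sketch, not the project’s actual framework), one minimal Bayesian approach to tracking an agent’s evolving accuracy is a Beta-Bernoulli model, in which each observed correct or incorrect decision updates a posterior belief about that agent’s accuracy:

```python
from scipy.stats import beta as beta_dist

class AccuracyTracker:
    """Illustrative sketch: maintain a Beta(alpha, beta) posterior over an agent's accuracy."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(1, 1) is a uniform prior over accuracy in [0, 1].
        self.alpha = alpha
        self.beta = beta

    def update(self, correct: bool) -> None:
        # Conjugate update: a correct decision increments alpha, an error increments beta.
        if correct:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        # Posterior mean estimate of the agent's accuracy.
        return self.alpha / (self.alpha + self.beta)

    def credible_interval(self, level: float = 0.95) -> tuple[float, float]:
        # Central credible interval from the Beta posterior quantiles.
        lo = (1 - level) / 2
        return (beta_dist.ppf(lo, self.alpha, self.beta),
                beta_dist.ppf(1 - lo, self.alpha, self.beta))

# Hypothetical usage: track a human and an AI agent separately as outcomes arrive.
human, ai = AccuracyTracker(), AccuracyTracker()
for human_correct, ai_correct in [(True, True), (False, True), (True, True), (True, False)]:
    human.update(human_correct)
    ai.update(ai_correct)
print(f"human accuracy ~ {human.mean():.2f}, AI accuracy ~ {ai.mean():.2f}")
```

Running the same bookkeeping separately for a human and an AI agent, and comparing the resulting posteriors, is one simple way such a framework might decide when an AI recommendation deserves more or less weight.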

Behavioral studies across various tasks, including image classification, natural language question answering, visual target tracking, and simulated navigation, will validate the theoretical models and algorithms. Additionally, the project will create openly accessible datasets to support reproducibility and facilitate further research on collaborative human-AI systems.

– Tonya Becerra
