How Language Models Work (and That’s Why They Don’t)
Sameer Singh
Associate Professor, Department of Computer Science, UC Irvine
We are on the cusp of widespread adoption of natural language processing: large language models will fundamentally change how we use and interact with devices. Through examples, we will discuss how language models can be adapted to act as classifiers, summarizers, coders, writers, and conversational assistants with little to no supervision. We will cover the basics of neural networks, the text corpus, and the training pipeline that enable language models to behave as general-purpose AI agents. However, we will also show how this very paradigm of language modeling introduces fundamental limitations into the technology. We will characterize these vulnerabilities in language models and discuss how they affect end-user applications. By the end of the talk, attendees will better understand the capabilities, workings, and limitations of large language models.
BIO: Dr. Sameer Singh is an Associate Professor of Computer Science at the University of California, Irvine (UCI). He works primarily on the robustness and interpretability of machine learning algorithms and on models that reason with text and structure for natural language processing. He has been named a Kavli Fellow by the National Academy of Sciences and selected as a DARPA Riser, and has received the NSF CAREER award, the UCI Distinguished Early Career Faculty award, and the Hellman Faculty Fellowship. His group has received funding from the Allen Institute for AI, Amazon, NSF, DARPA, Adobe Research, Hasso Plattner Institute, NEC, Base 11, and FICO. Sameer has published extensively at machine learning and natural language processing venues and has received numerous paper awards, including at KDD 2016, ACL 2018, EMNLP 2019, AKBC 2020, ACL 2020, and NAACL 2022.