Unlocking Language Models: Controlling LMs to Enable NLP for All

Hila Gonen

Postdoctoral Researcher, University of Washington

Abstract: Large language models (LLMs) have soared in popularity in recent years, thanks to their ability to generate well-formed natural language answers for a myriad of topics. Despite their astonishing capabilities, they still suffer from various limitations. This talk will focus on two of them: the limited control over LLMs, and their failure to serve users from diverse backgrounds. I will start by presenting my research on controlling and enriching language models through the input (prompting). In the second part, I will introduce a novel algorithmic method to remove protected properties (such as gender and race) from text representations, which is crucial for preserving privacy and promoting fairness. The third part of the talk will focus on my research efforts to develop models that support multiple languages, and the challenges faced when working with languages other than English. These efforts together unlock language technology for different user groups and across languages. I will conclude by presenting my vision for safer and more reliable language modeling going forward.

Bio: Hila is a postdoctoral researcher at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research lies at the intersection of Natural Language Processing, Machine Learning, and AI. In her research, she works toward two main goals: (1) developing algorithms and methods for controlling the behavior of language models; (2) making cutting-edge language technology available and fair across speakers of different languages and users from different socio-demographic groups.

Before joining UW, Hila was a postdoctoral researcher at Amazon and at Meta AI. Prior to that, she completed her Ph.D. in Computer Science at the NLP lab at Bar-Ilan University. She obtained her M.Sc. in Computer Science from the Hebrew University. Hila is the recipient of several prestigious postdoctoral awards and an EECS Rising Stars award. Her work received best paper awards at CoNLL 2019 and at the RepL4NLP workshop in 2022.