Aligning Language Model Agents to Environment Dynamics

Kolby Nottingham

PhD Student, Department of Computer Science, University of California, Irvine

Abstract: Language model agents are tackling challenging tasks from embodied planning to web navigation to programming. These models are powerful artifacts of natural language processing research that are now being applied to interactive environments traditionally reserved for reinforcement learning. However, many environments are not natively expressed in language, resulting in poor alignment between language representations and true environment states and actions. Additionally, while language models are generally capable, the biases they acquire during pretraining can be misaligned with the dynamics of a specific environment. In this talk, I cover our research into rectifying these issues through methods such as: (1) mapping high-level language model plans to low-level actions, (2) optimizing language model agent inputs using reinforcement learning, and (3) in-context policy improvement for continual task adaptation.

Bio: Kolby Nottingham is a 5th-year CS PhD student at the University of California, Irvine, co-advised by Roy Fox and Sameer Singh. His research applies algorithms and insights from reinforcement learning to improve agentic applications of language models. He has diverse industry experience from internships at companies such as Nvidia, Unity, and Allen AI. Kolby is also excited by prospective applications of his work in the video game industry and has done research for game studios such as Latitude and Riot Games.