Steering Textual Reasoning with Explanations
Xi Ye
PhD Candidate, University of Texas at Austin
Abstract: Large language models (LLMs) have significantly extended the boundaries of NLP’s potential applications, in part because of their improved ability to perform complex reasoning. However, LLMs have well-documented reasoning failures, such as hallucinations and an inability to generalize systematically. In this talk, I describe my work on making LLMs perform textual reasoning more reliably, with a particular focus on leveraging explanations. I will first introduce a framework for automatically assessing the robustness of black-box models using explanations. The framework extracts features that describe the “reasoning process” disclosed by the explanations, then uses a trained verifier to judge the reliability of predictions from these features. I will then describe how to construct effective explanations for teaching LLMs to reason. My work uses declarative formal specifications as explanations, which makes it possible to use an SMT solver to compensate for the limited planning capabilities of LLMs. Finally, I will outline future directions for further enhancing LLMs to better aid humans in challenging real-world applications that demand deep reasoning.
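To make the verification framework concrete, here is a minimal sketch, assuming hand-crafted features over the explanation text and a logistic-regression verifier; the feature names and the choice of model are illustrative assumptions, not the framework's actual design.

```python
# Sketch of explanation-based verification (illustrative features and model only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def explanation_features(explanation: str, prediction: str, context: str) -> list:
    """Illustrative features describing the 'reasoning process' an explanation discloses."""
    steps = [s for s in explanation.split(".") if s.strip()]
    ctx_words = set(context.lower().split())
    exp_words = set(explanation.lower().split())
    return [
        len(steps),                                         # number of reasoning steps
        len(exp_words & ctx_words),                         # how grounded the explanation is in the input
        float(prediction.lower() in explanation.lower()),   # explanation actually mentions the answer
    ]

# Toy training data: (explanation, prediction, context) triples labeled with
# whether the underlying model's prediction turned out to be correct.
train = [
    ("The text says the meeting is on Friday. So the answer is Friday.", "Friday",
     "The meeting was moved to Friday.", 1),
    ("The answer is probably Tuesday.", "Tuesday",
     "The meeting was moved to Friday.", 0),
]
X = np.array([explanation_features(e, p, c) for e, p, c, _ in train])
y = np.array([label for *_, label in train])
verifier = LogisticRegression().fit(X, y)

# At test time, score how much to trust a new prediction from its explanation.
x = explanation_features("The passage states the deadline is May 1, so May 1.", "May 1",
                         "Submissions are due May 1.")
print(verifier.predict_proba(np.array([x]))[0, 1])          # estimated reliability
```

The second thread, using declarative specifications with an SMT solver, can be illustrated with a toy ordering puzzle (invented here for illustration) encoded as Z3 constraints: the model would only need to state the constraints declaratively, and the solver handles the deduction.

```python
# Toy example of offloading deduction to an SMT solver (Z3).
from z3 import Int, Solver, Distinct, And, sat

alice, bob, carol = Int("alice"), Int("bob"), Int("carol")
s = Solver()
s.add(Distinct(alice, bob, carol))
s.add(And(1 <= alice, alice <= 3, 1 <= bob, bob <= 3, 1 <= carol, carol <= 3))
s.add(alice < bob)   # "Alice finished before Bob"
s.add(bob < carol)   # "Bob finished before Carol"

if s.check() == sat:
    m = s.model()
    print({str(v): m[v] for v in (alice, bob, carol)})   # alice=1, bob=2, carol=3
```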
Bio: Xi Ye is a Ph.D. candidate in the Department of Computer Science at the University of Texas at Austin, advised by Greg Durrett. His research is in natural language processing, particularly in leveraging explanations to steer language models on complex textual reasoning tasks. He is also interested in semantic parsing and program synthesis. He is a co-instructor of the NAACL 2024 tutorial on Explanations in the Era of Large Language Models and a co-organizer of the ACL 2024 workshop on Natural Language Reasoning and Structured Explanations.