July 21, 2022

ICS and Google Researchers Awarded Best Task Paper for Collaboration on Automatically Updating Wikipedia

Sameer Singh and Robert Logan at the ICS Commencement Ceremony.

When Robert Logan, a Ph.D. student in UCI’s Donald Bren School of Information and Computer Sciences (ICS), started a summer internship at Google last year, he says he wanted to “flip things around.” Instead of seeing whether knowledge bases could be used to improve language models, the goal was to explore whether language models could be used to improve knowledge bases.

“In the short span of a summer internship, we decided to look into the novel task of whether language models could be used to automatically update Wikipedia articles when given new information about the article subject,” says Logan. “I have a lot of experience building datasets from Wikipedia that I was able to draw upon in order to create a dataset for this task, and Google has compute infrastructure that makes training large language models fast and painless.”

The results of the work are outlined in the paper, “FRUIT: Faithfully Reflecting Updated Information in Text,” by Logan; his Ph.D. adviser, Sameer Singh; and Google researchers Alexandre Tachard Passos and Ming-Wei Chang. The collaboration earned a “Best New Task Paper” award at the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022), held July 10-15.

According to the award committee, the work presents an “interesting and relevant” new natural language processing (NLP) challenge: “that of updating information from a knowledge base given new evidence.” Stressing the need to automatically update information in many real-world applications, the committee praised the authors for tackling the task of contrasting textual evidence to decide which pieces are outdated and then generating language to produce the new text.

Singh says he was excited to work on this project for several reasons, the first being that it provides a way to help the many volunteers who maintain Wikipedia. “Second, from an NLP perspective, this kind of task really pushes the field: it is only recently that machine-generated text comes close to being good enough to replace human-generated text, and further, generating factually grounded and consistent text is still challenging for current models,” he explains. “This task requires the language models to be able to address these challenges, which will make NLP models useful for many tasks in the real world.” He notes that Meta is also now pursuing similar research directions.

Logan successfully defended his thesis in June and starts work this month as a research scientist at Dataminr, conducting research into real-time event detection and summarization. He highlights that FRUIT differs from other tasks where faithful generation is important (such as automatically summarizing news articles) in that it uses models to edit an existing piece of text rather than write everything from scratch. “This could enable some really cool applications,” he says, “like autocomplete systems that leverage information from external references, or automatic summarization systems that generate live updates of breaking news.”

Shani Murray
