[CSRR] Invited Talk: Greg Durrett

Greg Durrett: "Bridging Implicit and Explicit Knowledge for Explainable Reasoning"

Abstract

Large language models have demonstrated striking capabilities for commonsense reasoning: huge parameter counts enable these models to synthesize information and make commonsense judgments even about scenarios not commonly attested in text. But this "implicit" reasoning, internalized in the Transformer layers of the model, is hard to debug when it goes wrong and cannot easily provide explanations of answers to a user. In this talk, I will discuss bridging the divide between this implicit reasoning and modes of explicit reasoning, namely accessing knowledge in text and building chains of textual inferences. First, I will discuss two new benchmark datasets targeting "entity commonsense": the ability to make inferences about known entities to understand statements like "Harry Potter can teach classes on how to fly on a broomstick." Second, I will discuss paths towards making implicit inferences more explicit. We study the ability of GPT-3 to do multi-step reasoning ("chain-of-thought" prompting) and show that the explanations it produces, even for simple text tasks, are unreliable, suggesting that even scaled-up language models will face challenges on complex reasoning problems. I will then discuss some recent work on combining implicit and explicit inferences for explainable fact-checking of complex political claims, demonstrating the importance of leveraging these two types of reasoning together.
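
For readers unfamiliar with the technique mentioned above, the following is a minimal sketch of what "chain-of-thought" prompting looks like in practice: the few-shot exemplar spells out intermediate reasoning steps before the answer, and the model is asked to continue in the same pattern. The exemplar, question, and the `query_model` stub are illustrative only and are not taken from the talk or the work it describes.

```python
# Minimal chain-of-thought prompting sketch. The exemplar shows worked-out
# reasoning before the final answer; the model is prompted to do the same
# for a new question. `query_model` is a hypothetical stand-in for a call
# to a large language model and is not a real API.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the reasoning exemplar, then pose the new question."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

def query_model(prompt: str) -> str:
    """Hypothetical LM call; replace with a real completion API."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A juggler has 16 balls. Half are golf balls, and half of the "
        "golf balls are blue. How many blue golf balls are there?"
    )
    print(prompt)  # exemplar with reasoning steps, followed by the new question
```

The point of the talk's analysis is that the reasoning text produced this way is not always faithful or correct, which is why such chains need to be checked rather than trusted.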

Bio

Greg Durrett is an assistant professor of Computer Science at UT Austin. His current research focuses on making natural language processing systems more interpretable, controllable, and generalizable, spanning application domains including question answering, textual reasoning, summarization, and information extraction. His work is funded by a 2022 NSF CAREER award and other grants from the NSF, DARPA, Salesforce, and Amazon. He completed his Ph.D. at UC Berkeley in 2016, where he was advised by Dan Klein, and from 2016 to 2017 he was a research scientist at Semantic Machines.

Link

https://csrr-workshop.github.io/