[CSRR] Invited Talk: Raj Ammanabrolu

Raj Ammanabrolu: "Modeling Worlds in Text"


How do we develop interactive agents that can operate via language — agents that can both learn from feedback and generate contextually relevant language grounded in the world around them? One way of doing this is by creating world-modeling agents that have an intrinsic motivation, curiosity, to learn the underlying rules and commonsense axioms of the world they are operating in and use that knowledge to better inform their actions. This talk focuses on the task of building world models of text-based game environments. Text-based games, or interactive narratives, are reinforcement learning environments in which agents perceive and interact with the world using natural language text. These environments contain long, multi-step puzzles or quests woven through a world that is filled with hundreds of characters, locations, and objects. I will first describe what a world model looks like (spoiler: it looks like a knowledge graph), how to dynamically build it via an information-seeking intrinsic motivation, and how automated agents can use it to more effectively effect change via natural language.
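To make the abstract's core idea concrete, here is a minimal sketch (not the speaker's actual system, which is considerably more sophisticated) of a knowledge-graph world model for a text game: the graph stores (subject, relation, object) triples extracted from textual observations, and the number of newly discovered triples can serve as an information-seeking intrinsic reward. All class, method, and relation names here are hypothetical illustrations.

```python
class KGWorldModel:
    """Toy knowledge-graph world model: a set of (subject, relation, object)
    triples built up incrementally from textual observations."""

    def __init__(self):
        self.triples = set()

    def update(self, extracted_triples):
        """Add triples extracted from the latest observation and return the
        novel ones. The count of novel triples is one simple form of an
        information-seeking (curiosity) intrinsic reward."""
        novel = set(extracted_triples) - self.triples
        self.triples |= novel
        return novel

    def objects_at(self, location):
        """Query the graph: which objects does a location contain?"""
        return {o for (s, r, o) in self.triples
                if s == location and r == "contains"}


# Hypothetical observation: "You are in the kitchen. There is a lamp here."
model = KGWorldModel()
intrinsic_reward = len(model.update([
    ("kitchen", "contains", "lamp"),
    ("player", "in", "kitchen"),
]))
# Seeing the same facts again yields no novel triples, hence no reward —
# the agent is driven to seek out unexplored parts of the world.
repeat_reward = len(model.update([("kitchen", "contains", "lamp")]))
```

In a full system, the triple extraction itself would come from a learned model reading the game's text, and the agent would condition its action policy on the graph, e.g. only attempting to take objects the graph says are present.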


Raj Ammanabrolu is a researcher on the Mosaic team at the Allen Institute for AI. He received his PhD in Computer Science from the School of Interactive Computing at Georgia Tech. His research focuses on the question of how to imbue learning agents with the ability to understand and generate contextually relevant natural language through interactivity and environmental grounding, which are vital parts of language learning in humans. His work uses techniques at the intersection of Machine Learning, Reinforcement Learning, and Natural Language Processing.