About

CSRR @ ACL 2022

Common sense is the basic level of practical knowledge commonly shared among people. Such knowledge includes, but is not limited to, social commonsense (“it's impolite to comment on people's weight”) and physical commonsense (“snow is cold”). While humans use commonsense knowledge and reasoning abilities to seamlessly navigate everyday situations, endowing machines with such capabilities has remained an elusive goal of AI research for decades.

Recently, advances in large pre-trained language models (LMs) have shown that machines can directly learn large quantities of commonsense knowledge through self-supervised learning on raw data such as text and images. Additionally, their representations show promise at supporting reasoning when provided with the facts needed to reach a correct answer. However, despite these impressive demonstrations, these models still fall short of human-like understanding: while they achieve strong performance on benchmarks, they make inconsistent predictions, learn to exploit spurious patterns, generate socially stereotyped inferences, and fail to robustly apply learned knowledge to downstream applications.


Consequently, we are organizing this workshop to encourage discussion of current progress on building machines with commonsense knowledge and reasoning abilities. We aim to bring together researchers from different areas (e.g., NLP, computer vision, computational neuroscience, psychology) to share promising research directions in commonsense reasoning.

Topics of interest include, but are not limited to:

  • Methods: methods for commonsense reasoning tasks; methods that integrate commonsense knowledge bases and neural models; methods that improve the interpretability and explainability of neural models for reasoning; and more.
  • Analysis: methods to probe commonsense knowledge in NLP models; methods to understand the reasoning mechanisms of existing models; methods that identify limitations of existing methods for AI applications (including but not limited to NLP, CV, and robotics) due to the lack of commonsense knowledge.
  • Resources: acquiring commonsense knowledge (from text corpora, images, videos, pre-trained neural models, etc.); constructing and completing (semi-)structured commonsense knowledge bases (CKBs); consolidating CKBs under unified schemas.
  • Benchmarks: designing challenging tasks and building datasets to evaluate models’ commonsense knowledge and reasoning abilities; designing new evaluation schemas and metrics for commonsense reasoning tasks, particularly for open-ended and generative tasks.

The CSRR Workshop will be co-located with ACL 2022! Previously, we also hosted a workshop at AKBC 2021, named CSKB.

Program

Workshop Program (27th May)


Please join us on Slack (#csrr-acl2022) for the latest news and further discussion.
Underline: https://underline.io/events/284/sessions?eventSessionId=10977.
Zoom: https://us06web.zoom.us/j/88455110936?pwd=M081eFlRbFdKb0tmREcyNXNaU3N2UT09

Time (Dublin) Event
  9:00 -   9:10 Opening remarks
  9:10 -   9:55 Invited Talk | Mor Geva: "Debugging Transformer Language Models Inside Out"
  9:55 - 10:25 10 lightning talks
10:25 - 10:30 Best Paper Talk - 5 minutes
10:30 - 10:45 Break
10:45 - 11:30 Invited Talk | Marie-Francine Moens: "Can We Build a Machine That Has a Commonsense Memory?"
11:30 - 12:00 10 lightning talks
12:00 - 14:00 Lunch Break (extended)
14:00 - 14:45 Invited Talk | Ev Fedorenko: "The language system in the human brain"
14:45 - 15:30 Invited Talk | Greg Durrett: "Bridging Implicit and Explicit Knowledge for Explainable Reasoning"
15:30 - 15:45 Break (shortened)
15:45 - 16:30 Invited Talk | Raj Ammanabrolu: "Modeling Worlds in Text"
16:30 - 17:15 Invited Talk | Tobi Gerstenberg: "The counterfactual simulation model: A novel approach to understanding causal explanations"
17:15 - 17:25 Closing Statements

Accepted Papers

Archival Papers
  1. Identifying relevant common sense information in knowledge graphs
    Authors: Guy Aglionby, Simone Teufel
  2. Cloze Evaluation for Deeper Understanding of Commonsense Stories in Indonesian
    Authors: Fajri Koto, Timothy Baldwin, Jey Han Lau
  3. Psycholinguistic Diagnosis of Language Models’ Commonsense Reasoning
    Authors: Yan Cong
  4. Bridging the Gap between Recognition-level Pre-training and Commonsensical Vision-language Tasks
    Authors: Yue Wan, Yueen Ma, Haoxuan You, Zhecan Wang, Shih-Fu Chang
  5. Materialized Knowledge Bases from Commonsense Transformers
    Authors: Tuan-Phong Nguyen, Simon Razniewski
  6. Knowledge-Augmented Language Models for Cause-Effect Relation Classification
    Authors: Pedram Hosseini, David A. Broniatowski, Mona T. Diab
  7. CURIE: An Iterative Querying Approach for Reasoning About Situations
    Authors: Dheeraj Rajagopal, Aman Madaan, Niket Tandon, Yiming Yang, Shrimai Prabhumoye, Abhilasha Ravichander, Peter Clark, Eduard H Hovy
Non-Archival Papers
  1. CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm
    Authors: Hongming Zhang, Yintong Huo, Yanai Elazar, Yangqiu Song, Yoav Goldberg, Dan Roth
  2. CIS2: A Simplified Commonsense Inference Evaluation for Story Prose
    Authors: Bryan Li, Lara J. Martin, Chris Callison-Burch
  3. Analyzing the Contribution of Commonsense Knowledge Sources for Why-Question Answering
    Authors: Yash Kumar Lal, Horace Liu, Niket Tandon, Nathanael Chambers, Ray Mooney, Niranjan Balasubramanian
  4. Commonsense Reasoning for Question Answering with Explanations
    Authors: Wenting Zhao, Alexander M Rush, Claire Cardie
  5. Memory-assisted prompt editing to improve GPT-3 after deployment
    Authors: Aman Madaan, Niket Tandon, Peter Clark, Yiming Yang
  6. Distilling Relation Embeddings from Pre-trained Language Models
    Authors: Asahi Ushio, Jose Camacho-Collados, Steven Schockaert
  7. BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies?
    Authors: Asahi Ushio, Luis Espinosa-Anke, Steven Schockaert, Jose Camacho-Collados
  8. Advanced Semantics for Commonsense Knowledge Extraction
    Authors: Tuan-Phong Nguyen, Simon Razniewski, Gerhard Weikum
  9. Revisiting Generative Commonsense Reasoning: A Pre-Ordering Approach
    Authors: Chao Zhao, Faeze Brahman, Tenghao Huang, Snigdha Chaturvedi
  10. Few-shot Subgoal Planning with Language Models
    Authors: Lajanugen Logeswaran, Yao Fu, Moontae Lee, Honglak Lee
  11. Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning
    Authors: Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, Kai-Wei Chang
  12. Numeracy enhances the Literacy of Language Models
    Authors: Avijit Thawani, Jay Pujara, Filip Ilievski
  13. Learning Instance-Level N-Ary Semantic Knowledge At Scale For Robots Operating in Everyday Environments
    Authors: Weiyu Liu, Dhruva Bansal, Angel Daruna, Sonia Chernova
  14. E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
    Authors: Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei Li, Yanghua Xiao, Hao Zhou

Talks

Invited Speakers

Evelina Fedorenko

Associate Professor of Neuroscience at MIT

Tobias Gerstenberg

Assistant Professor of Psychology at Stanford

Greg Durrett

Assistant Professor of CS at UT Austin

Prithviraj Ammanabrolu

Young Investigator, AI2

Mor Geva

Ph.D. candidate at Tel Aviv University

Marie-Francine Moens

Professor at KU Leuven

Organization

Workshop Organizers

Antoine Bosselut

Assistant Prof. at EPFL

Xiang Lorraine Li

PhD Candidate at UMass

Bill Yuchen Lin

PhD Candidate at USC

Vered Shwartz

Assistant Prof. at UBC

Bodhisattwa Prasad Majumder

PhD Candidate at UCSD

Yash Kumar Lal

PhD student at SBU

Rachel Rudinger

Assistant Prof. at UMD

Xiang Ren

Assistant Prof. at USC

Niket Tandon

Research Scientist at AI2

Vilém Zouhar

PhD student at ETHz

Program Committee

  • Maarten Sap (AI2)
  • Jack Hessel (AI2)
  • Keisuke Sakaguchi (AI2)
  • Prithviraj Ammanabrolu (AI2)
  • Tuhin Chakrabarty (Columbia)
  • Liwei Jiang (UW)
  • Alisa Liu (UW)
  • Rowan Zellers (UW)
  • Lianhui Qin (UW)
  • Ximing Lu (UW)
  • Michi Yasunaga (Stanford)
  • Xikun Zhang (Stanford)
  • Deniz Bayazit (EPFL)
  • Silin Gao (EPFL)
  • Aman Madaan (CMU)
  • Khyathi Chandu (CMU)
  • Yanai Elazar (Bar-Ilan)
  • Avijit Thawani (USC)
  • Pei Zhou (USC)
  • Yu Hou (USC)
  • Anurag Acharya (Florida International)
  • Sarah Wiegreffe (Georgia Tech)
  • Neha Srikanth (UMD)
  • Yue Dong (McGill)
  • Denis Emelin (Edinburgh)
  • Simon Razniewski (MPI)
  • Filip Ilievski (USC ISI)
  • Mayank Kejriwal (USC ISI)
  • Jeff Da (AI2)
  • Sumit Bhatia (Adobe Research)
  • Manuel Ciosici (USC ISI)
  • Emily Allaway (Columbia)
  • Shaobo Cui (EPFL)
  • Faeze Brahman (UCSC)

Sponsors

Allen Institute for AI

Contact us

Email us at csrr2022@googlegroups.com
Join our Slack Channel for more discussion!

Calls

Call for Papers

Important Dates
  • Regular submission deadline (never-published work): Mar 3, 2022 (extended from Feb 28)
  • ARR submission deadline (submissions with ARR reviews): Mar 21, 2022
  • Notification of Acceptance (for regular and ARR submissions): March 26, 2022
  • Camera-ready papers due: April 10, 2022
  • Published submission deadline (published at other venues): April 8, 2022
  • Workshop Dates: May 27, 2022
  • All deadlines are AoE time.
Submission Instructions

We solicit two categories of papers.

Workshop papers (regular/ARR): papers describing new, previously unpublished research in this field. Submissions should follow the ACL-ARR style guidelines. We accept both short papers (4 pages of content) and long papers (8 pages of content). Submissions will undergo a double-blind review process (i.e., they must be anonymized). Final versions of accepted papers will be allowed one additional page of content so that reviewer comments can be taken into account.

Published papers: papers on topics relevant to the workshop theme that were previously published at NLP or ML conferences. These papers can be submitted in their original format without hiding the author names. Submissions will be reviewed for fit to the workshop topics.
Please fill out this Google form to submit your paper: https://forms.gle/AToY6HYZ6buydSVv5

In both categories, accepted papers:
  • may be non-archival
  • will be published on the workshop website
  • will be presented at the workshop as a lightning talk

Cross-submission Policy: As long as it does not conflict with the cross-submission policies of the other venue (e.g., the ARR policy), you may submit your paper to CSRR as a regular workshop paper. Please feel free to email us if you are unsure about your case!

Please submit your paper via OpenReview: openreview.net/group?id=aclweb.org/ACL/2022/Workshop/CSRR

Best Paper Award
A Best Paper Award, sponsored by the Allen Institute for AI, will honor an exceptional paper published at the CSRR workshop.