NeurIPS 2024 Workshop on

Compositional Learning: Perspectives, Methods, and Paths Forward


Time: December 14 or 15, 2024

Location: Vancouver, BC, Canada





Overview

Compositional learning, inspired by the innate human ability to understand and generate complex ideas from simpler concepts, seeks to imbue machines with a similar capacity for understanding, reasoning, and learning. Compositional learning naturally improves machine generalization to out-of-distribution samples in the wild through the recombination of learned components. This attractive property has led to vibrant research in fields such as object-centric learning, compositional generalization, and compositional reasoning, with broad applications across diverse tasks including machine translation, cross-lingual transfer, semantic parsing, controllable text generation, factual knowledge reasoning, image captioning, text-to-image generation, visual reasoning, speech processing, and reinforcement learning.


Despite notable advancements in these domains, significant gaps in compositional generalization and reasoning persist under the dynamic, frequently changing distributions of the real world, challenging even advanced LLMs. Among the remaining challenges and new opportunities ahead for compositional learning, we propose the following four foci for this workshop, informed by recent progress in the field:


  • (Perspectives) In which contexts, and why, should we expect foundation models to excel in compositional generalization or reasoning? This question is pivotal for assessing the inherent capabilities and understanding the learning dynamics of such models. Our goal is to unite researchers from various fields to explore the empirical and theoretical factors that might contribute to and influence compositionality in foundation models (e.g., architecture, scale, composition type, input).
  • (Methods) Can we identify or design compositional learning methods that are transferable across different domains and compatible with existing foundation models? This initiative seeks to foster discussion among researchers from various domains to develop more reliable and model-agnostic strategies for compositional learning. Possible directions for further exploration include data augmentation and added modularity via mixture-of-experts layers (see the sketch after this list).
  • (Methods and Perspectives) Modular learning strategies have been investigated as a means to achieve compositionality. Yet an intriguing question remains largely unanswered: does such structural modularity guarantee compositional generalization, and is there any correspondence between the two? This dialogue will encompass various modular learning approaches (e.g., adapters, prompts, sparsity) and welcomes both theoretical and empirical contributions.
  • (Paths Forward) What unique challenges arise when extending compositional learning strategies to continual learning environments, and what are the possible solutions? The ultimate objective of compositional learning is to continually adapt to a dynamically changing world through novel combinations of learned components while mitigating the risk of performance degradation over time. We aim to engage researchers in a discussion of the specific hurdles existing compositional learning methods encounter, such as issues related to memory and consolidation, and of potential solutions.
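
As a concrete illustration of the mixture-of-experts direction mentioned under (Methods), below is a minimal sketch of a sparsely gated mixture-of-experts layer in PyTorch. All names here (MixtureOfExperts, d_model, n_experts, k) are illustrative assumptions for this sketch, not part of any workshop material.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixtureOfExperts(nn.Module):
        """Sparsely gated mixture-of-experts layer: a learned router sends
        each input to its top-k expert MLPs and mixes their outputs."""

        def __init__(self, d_model: int, n_experts: int = 4, k: int = 2):
            super().__init__()
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, 4 * d_model),
                              nn.ReLU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            ])
            self.router = nn.Linear(d_model, n_experts)  # gating network
            self.k = k

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, d_model); pick the top-k experts for each input.
            weights, idx = self.router(x).topk(self.k, dim=-1)
            weights = F.softmax(weights, dim=-1)   # mix only the selected experts
            out = torch.zeros_like(x)
            for slot in range(self.k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e       # inputs routed to expert e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    # Route a batch of 8 embeddings through 4 experts, 2 active per input.
    layer = MixtureOfExperts(d_model=16)
    print(layer(torch.randn(8, 16)).shape)  # torch.Size([8, 16])

The compositional recombination here comes from the router: different inputs activate different subsets of experts, so new behaviors can arise from new combinations of the same learned modules.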


Call for Papers

We primarily target machine learning researchers and practitioners interested in the questions above. Specific target communities include, but are not limited to, compositional generalization, compositional reasoning, modular deep learning, transfer learning, continual learning, and foundation models. We also invite submissions from researchers who study neuroscience, to offer attendees a broader perspective. To summarize, the topics include but are not limited to:

  • Empirical analysis of compositional generalization/reasoning capacity in various foundation models
  • Mechanism understanding of compositional generalization/reasoning in foundation models
  • Reliable and model-agnostic compositional generalization methods
  • Modular and dynamic architectures
  • Theoretical foundations and empirical findings of connections between modular structures and compositional generalization
  • Continual/transfer learning through compositionality
  • Compositional learning for various application domains, such as computer vision, natural language processing, reinforcement learning, and science

Submission URL:   https://openreview.net/group?id=NeurIPS.cc/2024/Workshop/Compositional_Learning

Format:  All submissions must be in PDF format and anonymized. Submissions are limited to four content pages, including all figures and tables; unlimited additional pages for references and supplementary materials are allowed. Reviewers may choose to read the supplementary materials but are not required to. Camera-ready versions may extend to five content pages.

Style file:   You must format your submission using the NeurIPS 2024 LaTeX style file. For your convenience, we have modified the main conference style file to refer to the compositional learning workshop: compositional_learning.sty. The maximum file size for submissions is 50MB. Submissions that violate the NeurIPS style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review.
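
As a rough sketch of how such a style file is typically used (the details below, such as a [final] camera-ready option, are assumptions carried over from the standard NeurIPS style and should be checked against the file itself):

    \documentclass{article}

    % Workshop style file provided by the organizers; keep
    % compositional_learning.sty next to this .tex file.
    % Assumption: like the main NeurIPS 2024 style, it anonymizes
    % submissions by default and accepts a [final] option for the
    % camera-ready version.
    \usepackage{compositional_learning}

    \title{Your Submission Title}

    \begin{document}
    \maketitle

    \begin{abstract}
      ...
    \end{abstract}

    \end{document}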

Dual-submission policy:  We welcome ongoing and unpublished work. We also accept papers that are under review at the time of submission, or that have recently been accepted at venues without published proceedings.

Non-archival:  The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.

Visibility:  Submissions and reviews will not be public. Only accepted papers will be made public.

Contact:  For any questions, please contact us at compositional-learning-neurips2024@googlegroups.com.

If you would like to become a reviewer for this workshop, please let us know at https://forms.gle/nYwXVRhL6QK8eR2y6.


Important Dates

 

    Extended submission deadline: Sep 15, 2024, AOE

    Late-breaking paper (i.e., a rejected NeurIPS'24 paper) submission deadline: Sep 27, 2024, AOE

    Notification to authors: Oct 8, 2024, AOE

    Video recording deadline (contributed talk only): Oct 20, 2024

    Final workshop program, camera-ready deadline: Oct 30, 2024

Schedule

This is the tentative schedule of the workshop. All times are listed in Eastern Time (ET).

Morning Session


[8:55 - 9:00] Introduction and opening remarks
[9:00 - 9:30] Invited talk 1: TBD
[9:30 - 9:45] Contributed talk 1: TBD
[9:45 - 10:30] Poster Session 1
[10:30 - 11:00] Coffee Break
[11:00 - 11:30] Invited talk 2: TBD
[11:30 - 12:00] Invited talk 3: TBD
[12:00 - 13:30] Lunch Break

Afternoon Session


[13:30 - 14:00] Invited talk 4: TBD
[14:00 - 14:30] Invited talk 5: TBD
[14:30 - 14:45] Contributed talk 2: TBD
[14:45 - 15:30] Poster Session 2
[15:30 - 16:00] Coffee Break
[16:00 - 16:30] Invited talk 6: TBD
[16:30 - 17:00] Invited talk 7: TBD
[17:00 - 18:00] Panel Discussion
 

Invited Speakers




Claudia Clopath

Imperial College London

Colin Raffel

University of Toronto

Thomas Kipf

Google DeepMind



Chuang Gan

UMass Amherst & MIT-IBM Watson AI Lab

Rajeev Alur

University of Pennsylvania

Irina Rish

University of Montreal & Mila-Quebec AI Institute

Workshop Organizers




Ying Wei

Nanyang Technological University

Jonathan Richard Schwarz

Harvard University

Laurent Charlin

Mila-Quebec AI Institute & HEC Montréal




Mengye Ren

New York University

Matthias Bethge

University of Tübingen & Tübingen AI Center