The 2nd Workshop on

Compositional Learning: Safety, Interpretability, and Agents

ICML 2026, COEX Convention & Exhibition Center (room TBD), July 2026


Compositionality, defined as the ability to construct and reason about complex concepts from reusable components, is a hallmark of human cognition and the key to robust generalization. Despite the astonishing progress of modern AI systems, it remains an open question whether they truly capture and leverage the compositional nature of many real-world domains. The workshop explores this pressing challenge across multiple critical dimensions. We invite contributions focusing on the theoretical foundations of compositionality, its central role in the age of foundation models and agents, and its impact on achieving robustness and systematic out-of-domain generalization. Among the most promising topics related to compositionality, we identify three timely and impactful foci for this edition of the workshop.

  • How can systems generalize beyond training distributions to improve safety? Exploring the representational structures and learning dynamics that enable true compositional understanding under distribution shift. This involves examining the roles of inductive biases, abstraction, and modularity to identify strategies that allow systems to generalize where simple statistical correlations fail, thereby improving the safety of real-world deployment.
  • How do models internally represent and learn compositionality? Investigating the mechanisms by which LLMs represent, acquire, and generalize compositional structures. This focuses on interpreting internal states, analyzing emergent behaviors, and establishing rigorous benchmarks to evaluate whether models leverage compositionality or rely on memorization.
  • How can agents leverage compositionality for complex tasks? Studying how compositional principles drive the development of robust, generalizable agents. Key areas include the composition of skills and sub-goals for long-horizon planning, the systematic integration of retrieved knowledge and tools, and the creation of architectures that dynamically combine neural and symbolic modules to execute multi-step tasks.
Furthermore, through an interdisciplinary dialogue featuring high-profile guests from both academia and industry, we aim to catalyze new research directions that push the boundaries of compositional learning in advanced AI systems.



Speakers

Parisa Kordjamshidi
Michigan State University
Samy Bengio
Apple, EPFL
Noémi Éltető
Google DeepMind
David Cox
IBM Research

Panelists

Noam Brown
OpenAI
Tailin Wu
Westlake University

Call for Papers

Authors are welcome to submit a 4-page (short) or 8-page (long) paper based on work in progress, or a relevant paper being presented at the main conference, on the following topics:

  • Compositionality in Action: Agentic AI, Planning, and Tool Use
  • Safety: Architectures and Representations for Robust and Generalizable Systems
  • Explainability: Representations and Reasoning in Foundation Models
  • Theoretical foundations and general principles of compositionality in AI
  • Multimodal compositionality
  • Modular and dynamic architectures
  • Continual/transfer learning through compositionality
  • Compositional learning for various application domains, such as computer vision, natural language processing, reinforcement learning, and science

We also welcome review and position papers that may foster discussion. Accepted papers will be presented during poster sessions, with exceptional submissions selected for spotlight oral presentations. Accepted papers will be made publicly available as non-archival reports, allowing future submission to archival conferences or journals. Important dates for submission:

  • Paper submissions open: March 30, 2026 (AoE)
  • Paper submission deadline: April 24, 2026 (AoE)
  • Notification to authors: May 15, 2026 (AoE)


Submission Guidelines

  • Submission URL: https://openreview.net/group?id=ICML.cc/2026/Workshop/CompLearn
  • Format: All submissions must be in PDF format and anonymized. Submissions are limited to 4/8 content pages, including all figures and tables; unlimited additional pages containing references and supplementary materials are allowed. Reviewers may choose to read the supplementary materials but will not be required to. Camera-ready versions may go up to 5/9 content pages.
  • Style file: You must format your submission using the ICML 2026 LaTeX style file. The maximum file size for submissions is 20MB. Submissions that violate the ICML style (e.g., by decreasing margins or font sizes) or page limits may be rejected without further review.
  • Dual-submission policy: We welcome ongoing and unpublished work. We will also accept papers that are under review at the time of submission, or that have been recently accepted without published proceedings.
  • Non-archival: The workshop is a non-archival venue and will not have official proceedings. Workshop submissions can be subsequently or concurrently submitted to other venues.
  • Visibility: Submissions and reviews will not be public. Only accepted papers will be made public.
  • Mandatory Reciprocal Reviewing: At least one author per submission must commit to reviewing for the workshop. You will need to designate the reviewing author(s) on OpenReview. Submissions without a nominated reviewer may be desk-rejected.
  • Contact: For any questions, please contact us at compositional-learning-icml-2026@googlegroups.com.
If you would like to become a reviewer for this workshop, please sign up via this form: https://forms.gle/nG4idePAF4Qp6TNk8.


Schedule

Morning session

8:45 AM Opening Remarks
8:50 AM Spotlight Talks (4)
9:30 AM Invited Talk: Francesco Locatello
10:05 AM Invited Talk: Nouha Dziri
10:40 AM Coffee Break
10:50 AM Invited Talk: Parisa Kordjamshidi
11:25 AM Invited Talk: Samy Bengio
12:00 PM Poster Session 1
1:00 PM Lunch Break

Afternoon session

2:00 PM Invited Talk: Noémi Éltető
2:35 PM Invited Talk: David Cox
3:10 PM Panel Discussion
4:00 PM Poster Session 2

Organizers

Giacomo Camposampiero
IBM Research, ETH Zürich
Pietro Barbiero
IBM Research
Martha Lewis
University of Amsterdam
Yilun Du
Harvard
Ying Wei
Zhejiang University