There has been ever-growing interest in tasks targeting Natural Language Understanding and Reasoning. Although deep learning models have achieved human-like performance on many such tasks, it has also been repeatedly shown that they lack the precision, generalization power, reasoning capabilities, and explainability found in more traditional, symbolic approaches. Current research has therefore begun to employ hybrid methods that combine the strengths of each tradition while mitigating their weaknesses. This workshop aims to promote this research direction and foster fruitful dialog between the two disciplines by bringing together researchers working on hybrid methods in any subfield of Natural Language Understanding and Reasoning.
NALOMA began by focusing on bridging the gap between machine learning and natural logic but has since broadened to include work integrating symbolic methods and machine learning. Even so, combining deep learning with natural logic remains a highly promising direction. The remarkable capabilities of LLMs, which operate directly on natural language expressions, offer a distinct advantage to natural logic, a family of logics whose formulas resemble natural language expressions.
The NALOMA workshop is endorsed by SIGSEM.
Call for Papers
The NALOMA workshop invites submissions on any (theoretical or computational) aspect of hybrid methods concerning Natural Language Understanding and Reasoning (NLU&R). The topics include but are not limited to:
- Hybrid NLU&R systems that integrate logic-based/symbolic methods with neural networks
- Explainable NLU&R (with structured explanations)
- Opening the black box of deep learning in NLU&R
- Downstream applications of hybrid NLU&R systems
- Probabilistic semantics for NLU&R
- Comparison and contrast between symbolic and deep learning work on NLU&R
- Creation, criticism, refinement, and augmentation of NLU&R datasets
- (Dis)Alignment of humans and machines on NLU&R tasks
- Addressing inherent human disagreements in NLU&R tasks
- Generalization of NLU&R systems
- Fine-grained evaluation of NLU&R systems
We invite two types of submissions:
- Archival (long or short) papers should report on complete, original, and unpublished research. Accepted papers will be published in the workshop proceedings and appear in the ACL Anthology. Short and long papers may consist of up to 4 and 8 pages of content, respectively, plus unlimited references. Camera-ready versions will be given one additional page of content so that reviewers' comments can be taken into account.
- Extended abstracts may report on work in progress or on work that was recently published or accepted at another venue. Extended abstracts will not be included in the workshop proceedings, so unpublished work retains its unpublished status and can still be submitted to another venue. Accepted extended abstracts will be linked from this webpage. Extended abstracts should not contain an abstract section and may consist of up to 2 pages of content, plus unlimited references.
Both accepted papers and extended abstracts are expected to be presented at the workshop. Extended abstracts will be presented as talks or posters at the discretion of the program committee.
Submissions will be reviewed double-blind, and all long/short papers and extended abstracts must be anonymous, i.e., they must not reveal the author(s) on the title page or through self-references. Both papers and extended abstracts must be formatted according to the ACL style files or the ACL Overleaf template. All submissions must adhere to the ARR guidelines on Anonymized Review, Authorship, Citation and Comparison, and the Ethics Policy (without requiring completion of the Responsible NLP Research checklist).
Both papers and extended abstracts should be submitted via OpenReview.
Workshop participants must register for ESSLLI 2025.
Important Dates
- Deadline for papers & extended abstracts: 25 April
- Notification: 27 May
- Early ESSLLI registration deadline: 31 May
- Camera-ready due: 20 June
- Workshop: 4-8 August
- All dates are AoE
Keynotes

- Aaron Steven White, University of Rochester
- Kyle Richardson, Allen Institute for AI

Program Committee
- Lasha Abzianidze (co-chair), Utrecht University
- Valeria de Paiva (co-chair), Topos Institute
- Stergios Chatzikyriakidis, University of Crete
- Aikaterini-Lida Kalouli, Ludwig Maximilian University of Munich
- Katrin Erk, University of Texas at Austin
- Hai Hu, Shanghai Jiao Tong University
- Thomas Icard, Stanford University
- Lawrence S. Moss, Indiana University
- Hitomi Yanaka, University of Tokyo and RIKEN