Special Issue on Explainable Neurosymbolic AI (X-NeSy)
Building learning systems that are both highly performant and truly understandable remains a central challenge in AI. While deep learning models excel at perceptual and statistical tasks, their black-box nature can be a significant barrier to trust and deployment in critical applications. A promising and technically rich direction for overcoming this barrier is the integration of continuous, sub-symbolic learning with discrete, symbolic structures such as logic, programs, and knowledge graphs.
This special issue aims to provide a dedicated forum for researchers exploring this “neurosymbolic” intersection to advance the state of the art in Explainable AI (XAI). While XAI has become a major field of research, we specifically invite contributions that leverage symbolic methods as a core component of the modeling and interpretation process. The focus is on technical contributions that go beyond feature attribution and general-purpose XAI methods, investigating how the synergy between the neural and symbolic paradigms leads to models with built-in transparency and high-fidelity explanations.
We also explicitly welcome submissions from a diverse range of related fields, including mechanistic interpretability, causal inference, formal methods, and program synthesis. The special issue seeks to foster a cross-community dialogue on the shared challenges and opportunities in creating AI systems whose reasoning processes are open to inspection, verification, and human-level understanding.
Topics of Interest
We invite submissions of high-quality, original research on topics including, but not limited to:
- Interpretable Representations: Probing and aligning latent neural representations with human-interpretable concepts, structures, or programs.
- Knowledge Extraction: Inducing explicit symbolic knowledge, such as logical rules or programs, that faithfully captures the behavior of a trained neural model.
- Mechanistic Interpretability: Using symbolic methods to guide, constrain, or verify the discovery of concepts and functional circuits in neural networks.
- Explanations over Structured Data: Generating faithful explanations for models operating over graphs (e.g., GNNs) or knowledge bases by extracting symbolic patterns or logical sub-structures.
- Causal Learning and Explanation: Architectures and methods that integrate causal modeling with deep learning to produce robust causal explanations.
- Formal Verification for Explainability: Applying formal methods to certify the faithfulness or logical consistency of explanations for neural-based systems.
- Transparent-by-Design Models: Novel architectures that are inherently interpretable by virtue of their hybrid neuro-symbolic design.
- Generating Structured Explanations: Moving beyond common feature attribution methods (e.g., LIME/SHAP) to generate more expressive explanations, such as counterfactuals constrained by symbolic knowledge or natural language justifications grounded in logic.
- Foundations of Evaluation: Development of novel benchmarks, datasets, and rigorous protocols for evaluating the fidelity and quality of symbolic explanations.
Important Dates
- Paper Submission Deadline: February 28, 2026. Papers submitted earlier will enter review immediately.
- Author Notification: within 6–8 weeks of submission (on a rolling basis)
Guest Editors
- Gustav Šír, Czech Technical University, Czechia
- Giuseppe Marra, KU Leuven, Belgium
- Roberto Confalonieri, University of Padova, Italy
Contact email for the guest editors: x-nesy@googlegroups.com
Author Guidelines
We invite full papers, dataset descriptions, survey papers, application reports, and reports on tools and systems. Submissions must be original and must not have been published previously or be under consideration for publication elsewhere while being evaluated for this special issue. Authors may extend previously published conference or workshop papers; see the submission guidelines at https://neurosymbolic-ai-journal.com/content/author-guidelines for details. Submissions must be made through the journal website at https://neurosymbolic-ai-journal.com/.
Note that you need to request an account on the journal website in order to submit a paper. Please indicate in the cover letter that the submission is for the "Special Issue on X-NeSy". All manuscripts will be reviewed under the journal's open and transparent review policy and will be made available online during the review process.