A Survey on Neuro-Symbolic Auditing: A Framework for Verification, Traceability, and Correction in High-Stakes AI

Tracking #: 935-1958

Flag: Reject (Pre-Screening)

Authors: 

Xiaoming Guo
Shenglin Li
Jiacheng Cao
Jiaqi Gong

Submission Type: 

Survey

Cover Letter: 

Dear Editor,

We are pleased to submit our manuscript, "A Survey on Neuro-Symbolic Auditing: A Framework for Verification, Traceability, and Correction in High-Stakes AI," for consideration for publication in Neurosymbolic Artificial Intelligence. Given the journal's focus on neurosymbolic AI topics, we believe this systematic survey offers a timely and critical perspective on the intersection of technical architecture and regulatory compliance.

The rapid deployment of Large Language Models (LLMs) in high-stakes domains such as healthcare, finance, and autonomous systems has precipitated a profound paradox: unprecedented advances in capability accompanied by an escalating crisis in reliability. While governance frameworks like the NIST AI RMF have emerged to address this, our research highlights a persistent "Audit Gap": current standards largely verify processes (governance checklists) rather than validate the technical correctness of system decisions at inference time.

To address this gap, our paper defines and systematizes the emerging field of Neuro-Symbolic Auditing. Unlike traditional surveys that focus solely on model performance, we frame Neuro-Symbolic AI (NSAI) as an enabling technology for inherently governable systems. Key contributions of this work include:

  • The VTC Framework: We introduce a novel auditing framework based on Verification (formal safety proofs), Traceability (logic-based audit trails), and Correction (surgical model editing).
  • Systematic Taxonomy: Following PRISMA 2020 guidelines, we provide a comparative taxonomy that maps specific NSAI architectures to their ability to satisfy these auditing requirements, distinguishing them from purely neural "black box" approaches.
  • Bridge to Accountability: We synthesize evidence showing how symbolic components can move AI systems from mere interpretability to rigorous accountability, preventing "audit washing," in which superficial checks mask underlying failures.

This survey provides a design roadmap for researchers and practitioners who must move beyond theoretical risk management to tangible, architectural guarantees of safety. We believe this work will be of significant interest to your readership, particularly those working at the convergence of AI safety, neuro-symbolic methods, and policy.

We confirm that this manuscript has not been published elsewhere and is not under consideration by another journal. All authors have approved the manuscript and agree with its submission.

Thank you for your time and consideration.

Sincerely,
Shenglin Li
Postdoctoral Researcher
sli90@ua.edu
jiaqi.gong@ua.edu
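To make the letter's "Audit Gap" claim concrete, the following is a minimal illustrative sketch, not taken from the manuscript, of what inference-time neuro-symbolic auditing in the VTC sense could look like: a hypothetical symbolic rule layer verifies a neural model's output against declarative constraints (Verification), logs every check it performs (Traceability), and flags violations for downstream handling (Correction). All names, rules, and thresholds below are invented for illustration.

```python
# Illustrative sketch of inference-time neuro-symbolic auditing (VTC).
# All rules, names, and thresholds are hypothetical examples, not the
# survey's actual framework or any real library's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditRecord:
    """Logic-based audit trail entry: which rule was checked, and the verdict."""
    rule_name: str
    passed: bool
    detail: str

@dataclass
class SymbolicAuditor:
    """Wraps a neural decision with declarative safety rules checked at inference time."""
    rules: list[tuple[str, Callable[[dict], bool], str]] = field(default_factory=list)

    def add_rule(self, name: str, predicate: Callable[[dict], bool], detail: str) -> None:
        self.rules.append((name, predicate, detail))

    def audit(self, decision: dict) -> tuple[bool, list[AuditRecord]]:
        # Verification: every symbolic rule must hold for the decision to pass.
        # Traceability: each check is recorded, whether it passes or fails.
        trail = [AuditRecord(name, predicate(decision), detail)
                 for name, predicate, detail in self.rules]
        return all(record.passed for record in trail), trail

# Hypothetical high-stakes example: a loan-approval decision from a neural model.
auditor = SymbolicAuditor()
auditor.add_rule("confidence_floor",
                 lambda d: d["confidence"] >= 0.90,
                 "Approvals require model confidence >= 0.90")
auditor.add_rule("debt_to_income_cap",
                 lambda d: not (d["action"] == "approve" and d["dti"] > 0.45),
                 "No approval when debt-to-income ratio exceeds 0.45")

decision = {"action": "approve", "confidence": 0.93, "dti": 0.52}  # stubbed neural output
ok, trail = auditor.audit(decision)
for record in trail:
    print(f"[{'PASS' if record.passed else 'FAIL'}] {record.rule_name}: {record.detail}")
if not ok:
    # Correction hook: in a full VTC pipeline, a failed audit would route the
    # case to human review or trigger targeted model editing.
    print("Decision blocked pending correction.")
```

In the letter's terms, the returned trail is the traceability artifact an auditor could inspect after the fact, while the failure branch marks the point where correction mechanisms such as surgical model editing would attach.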

Approve Decision: 

Approved

Tags: 

  • Reviewed

Decision: 

Reject (Pre-Screening)