CausalNeSy 2025: Workshop on Causal Neuro-symbolic Artificial Intelligence

June 1-2, 2025, Portoroz, Slovenia
Call for Papers: Workshop on Causal Neuro-symbolic Artificial Intelligence (CausalNeSy)

As artificial intelligence systems become increasingly complex and integrated into critical decision-making processes, ensuring their interpretability, robustness, and capacity for causal understanding is essential. The emerging field of Causal Neuro-symbolic AI bridges data-driven learning with symbolic reasoning, empowering systems to both learn and reason about causes and effects within a structured framework.

The CausalNeSy workshop aims to bring together researchers and practitioners from academia and industry to share innovative ideas, research, and practical insights on combining causality with neuro-symbolic AI. The focus is on enriching AI systems with explicit representations of causality, integrating causal and domain knowledge, and leveraging neuro-symbolic techniques for advanced causal reasoning tasks.

Topics of Interest
------------------
We invite submissions that explore, but are not limited to, the following themes:

1. Core Methods and Frameworks
- Causal Knowledge Representation: Approaches for representing causal knowledge using neuro-symbolic AI methods.
- Causal Reasoning in Neuro-symbolic Systems: Implementing causal reasoning within neuro-symbolic frameworks.
- Neuro-symbolic Methods for Causal Structure Learning: Techniques for learning causal structures in neuro-symbolic systems.
- Causal Representation Learning: Approaches to learning causal representations using neuro-symbolic AI.

2. Integration of Techniques and Paradigms
- Causal Knowledge Graph Embeddings: Utilizing embeddings of causal knowledge for graph completion and discovery.
- Causal Reasoning and Neural Networks: Harmonizing causal symbolic reasoning with neural networks to improve interpretability and robustness.
- Integration of Causality, Logic, and Probability: Combining causality, logic, and probabilistic reasoning within neuro-symbolic AI.
- Causal Generative Models: Development and application of causal generative models in machine learning.
- Causal Neuro-Symbolic AI in Large Language Models (LLMs): Enhancing reasoning capabilities in LLMs by integrating causality.
- Causal Foundation Models: Building foundational models that incorporate causal reasoning within a neuro-symbolic framework.

3. Explanation, Trust, Fairness, and Accountability
- Neuro-symbolic Methods for Causal Explanation: Techniques for elucidating cause-effect relationships.
- Fairness, Accountability, Transparency, and Explainability: Ensuring ethical and transparent AI systems.
- Trustworthiness, Grounding, Instruct-ability, and Alignment: Addressing challenges related to the trust and reliability of causal neuro-symbolic AI systems.

4. Applications
- Causal Discovery in Complex Environments: Strategies for identifying causal relationships in complex domains.
- Causal Neuro-symbolic AI in Use: Real-world applications in areas such as healthcare, finance, autonomous systems, natural language processing, and more.

Important Dates
---------------
- Workshop Paper Submissions Due: 6 March 2025 (23:59 AoE)
- Notification to Authors: 3 April 2025 (23:59 AoE)
- Camera-Ready Version Due: 17 April 2025 (23:59 AoE)
- Early-bird Registration: To be announced
- Workshop Date: June 1-2, 2025

Submission Guidelines
---------------------
Papers should be submitted via the OpenReview submission site. We welcome original contributions in the following formats:
1. Full Research Papers: 12-14 pages
2. Position Papers: 6-8 pages
3. Short Papers: 4-6 pages

All submissions must adhere to the CEUR workshop template and will undergo a double-blind review process. A multidisciplinary program committee will evaluate submissions based on originality, technical quality, and relevance to the workshop's theme. Accepted papers will be presented at the workshop and published as open-access archival content in the workshop proceedings through CEUR.

Workshop Organizers
-------------------
- Utkarshani Jaimini, University of South Carolina
- Cory Henson, Bosch Center for AI
- Amit Sheth, University of South Carolina
- Yuni Susanti, Fujitsu Inc

Program Committee
-----------------
The program committee is currently being finalized and will be announced soon.

We look forward to receiving your submissions and to a stimulating workshop that advances the field of Causal Neuro-symbolic Artificial Intelligence!

For further details and updates, please visit https://sites.google.com/view/causalnesy/.