ATRACC 2025: AAAI Fall Symposium on AI Trustworthiness and Risk Assessment for Challenged Contexts
AI systems, including those built on large language and foundation/multi-modal models, have proven their value across many aspects of human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, and often unanticipated, beneficial behaviors.
However, the rapid embrace of AI in critical systems introduces new classes of errors that increase risk and limit trustworthiness. Designing AI-based critical systems therefore requires demonstrating their trustworthiness, and such systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons.
Trustworthiness should be assessed both at the full-system level and at the level of individual AI components. At the theoretical and foundational level, assessment methods must go beyond explainability to deliver uncertainty estimates and formalisms that can bound the limits of the AI, provide traceability, and quantify risk.
This symposium focuses broadly on AI trustworthiness and on methods that help bound fairness, reproducibility, reliability, and accountability when quantifying AI-system risk, spanning the entire AI lifecycle from theoretical research formulations through system implementation, deployment, and operation.
The symposium will bring together researchers and practitioners from industry, academia, and government who are vested stakeholders in addressing these challenges in applications where an a priori understanding of risk is critical.
Topics of interest include, but are not limited to:
- AI autonomy and safety: challenges in autonomous and multi-agent systems, with an emphasis on robustness, reliability, accountability, and emergent behaviors in risk-averse contexts.
- Pluralistic alignment: approaches to AI alignment for addressing the diverse and often conflicting perspectives, values, and needs of different users.
- AI benchmarking and evaluation: theoretical and empirical methods for analyzing the capabilities of foundation models, including benchmark design, formal guarantees, and multimodal AI evaluation.
- Methods and approaches for enhancing and evaluating reasoning in general-purpose AI systems, e.g., causal reasoning techniques and outcome verification approaches.
- Assessment of non-functional requirements such as explainability, accountability, and privacy, as well as assessment from the pilot stage through systematic evaluation and monitoring.
- Approaches for verification and validation of AI systems, including evaluation of aspects such as factuality and trustworthiness.
- Evaluation of AI system vulnerabilities and risks, including adversarial and red-teaming approaches.
- Links between performance and trustworthiness, drawing on AI sciences, systems and software engineering, metrology, and the social sciences and humanities.
- User studies and evaluation of governance mechanisms in organizations and communities.
Symposium Details:
- Duration: 2 1/2 days
- Features: Keynote and invited talks from accomplished experts in the field of Trustworthy AI, panel sessions, presentation of selected papers, student papers, and a poster session.
Submission Details:
- Full papers: Maximum 8 pages
- Poster/short/position papers: Maximum 4 pages
- Deadline for submission: August 1st
- Notification of acceptance or rejection: August 15th
- Camera-ready papers for symposium proceedings: August 29th
- Submission Link: https://easychair.org/my/conference?conf=fss25
All accepted papers will be included in the AAAI Fall 2025 proceedings.
For the provisional schedule, program committee, and practical information, please visit the symposium website (note: this is the 2024 version and will be updated shortly).
Program Chairs: Bertrand Braunschweig ([email protected]) and Brian Hu ([email protected])