Final CfP: AxAI’23 – Special Session on Actionable Explainable AI at CD-MAKE 2023
Dear GK members,
We would like to inform you about the final CfP for the Special Session on Actionable Explainable AI (AxAI’23), to be held at the Cross Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE 2023).
When: August 29 – September 01, 2023
Where: University of Sannio in Benevento, Italy
Submission Deadline: April 17, 2023 (AoE, hard deadline; see also Important Dates below)
Invited Contributions: full research papers and short research papers. Extended versions of accepted papers will be solicited for a special issue to be published in the Machine Learning and Knowledge Extraction (MAKE) journal.
Topics of interest include, but are not limited to:
– Approaches for context-aware explainability
– Concepts and methods for human-centered explainable artificial intelligence
– Experimental and empirical studies on ML systems that show the suitability of novel explanatory approaches in different application domains (e.g., environmental studies, agriculture, forestry, medicine, autonomous driving)
– Methods that allow for corrective human-in-the-loop feedback to improve explanatory approaches and ML models
– Methods to integrate human knowledge into automated decision-making in ML systems
– Novel data set benchmarks to validate ML systems from a user and application domain perspective
– Novel techniques for evaluating ML systems with respect to the fidelity and robustness of explanations
– Human-centered user interfaces that integrate or compare various XAI methods to facilitate and improve the evaluation of ML systems
– XAI methods that support safer and more effective human-AI decision making (e.g., action correction and the choice of actions that are more beneficial in a given context)
Full description:
Methods of Explainable Artificial Intelligence (XAI) are developed primarily with the goal of making the decisions of opaque machine-learned models (e.g., Deep Learning) transparent, interpretable, and comprehensible. However, merely establishing transparency, interpretability, and comprehensibility is not enough to derive value from explanations. In addition, it is important to improve models by creating opportunities to act on and learn from explanations. This can be achieved through so-called actionable concepts, methods, measures, and metrics for explainable learning and reasoning (Gunning and Aha, 2019). An important aspect of actionable XAI is the incorporation of psychological insights into the design of explanations and interactive interfaces for the purposes of model understandability, validation, and correctability. Similarly, it is important to develop evaluation criteria that enable meaningful and generalizable comparisons of explanations from a user and application perspective. The goal is to find the best possible explanatory approach for the respective context of use. In this special session, we want to bring together interdisciplinary researchers who are working on exactly these aspects of Explainable Artificial Intelligence and who want to present and discuss new, groundbreaking research that goes beyond testing existing work in new application areas.
Important Dates:
Submission Deadline: April 17, 2023 (AoE, hard deadline)
Author Notification: June 01, 2023
Proceedings Version: June 22, 2023 (AoE)
Conference: August 29 – September 01, 2023
Session and Track Chairs:
Bettina FINZEL, University of Bamberg, Germany
Anna SARANTI, Human-Centered AI Lab, University of Natural Resources and Life Sciences, Vienna, Austria
Program Committee 2023:
Tim Miller, The University of Melbourne, Australia
Fabrizio Silvestri, Sapienza University of Rome, Italy
Christian Geißler, Technische Universität Berlin, Germany
Giovanna Nicora, University of Pavia, Italy
Lukas-Valentin Herm, Julius-Maximilians-Universität Würzburg, Germany
Kary Främling, Umeå University, Sweden
Andreas Hinterreiter, Johannes Kepler University Linz, Austria
Hadi Khorshidi, The University of Melbourne, Australia
Andrew Silva, Georgia Institute of Technology, USA