
eXplainable AI (XAI) has attracted considerable research attention in recent years [1-5]. In AI4EU, XAI is a foundational AI component towards more human-centred systems. XAI refers to methods and techniques applied to artificial intelligence (AI) systems that aim to provide humans with concrete justifications for the systems' decisions. Beyond system verification, bias identification and system improvement, XAI can bridge the mistrust gap between AI systems and end users, accelerating the integration of AI technologies into everyday life. At the same time, XAI helps ensure that AI systems comply with regulations such as the GDPR (see the right to explanation, Article 22 of the European Union's GDPR). The AI systems most at odds with XAI are the so-called "black boxes", whose decisions at times even their own designers cannot understand or explain, particularly in the fast-growing subfields of AI, machine learning and deep learning. For more information and a simple guide to XAI, see: https://www.ai4eu.eu/simple-guide-explainable-artificial-intelligence.
The RuleML4XAI Technical Group will seek to identify the various types of AI system explanations in use and then attempt to standardise an interchange format for these explanations between systems and/or humans. To this end, the TG will reach out to similar groups and projects (notably the AI4EU H2020 project, which is developing a European AI on-Demand Platform to support an AI-based ecosystem of stakeholders and to share AI resources) in order to identify:
- Types of AI systems that produce and can exchange explanations (e.g. rule-based systems, planning systems, machine learning systems, etc.)
- Forms that these explanations can take (e.g. proofs, narratives, numerical feature importance, rules, saliency maps or even dialogues; see the sketch after this list)
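To make the idea of an interchange format more concrete, the Python snippet below sketches a minimal JSON envelope carrying two of the explanation forms listed above. This is purely illustrative: the TG has not yet defined any such format, and every field name and system identifier here is a hypothetical assumption, not an agreed standard.

```python
import json

# Hypothetical envelope for exchanging explanations between systems;
# all field names and producer ids are illustrative, not standardised.
explanations = [
    {
        "producer": "credit-scoring-model-v1",   # hypothetical ML system
        "decision": "loan_denied",
        "form": "feature_importance",            # one of the forms above
        "content": {"income": -0.41, "debt_ratio": 0.35, "age": 0.02},
    },
    {
        "producer": "eligibility-rule-engine",   # hypothetical rule-based system
        "decision": "loan_denied",
        "form": "rule",
        "content": "IF debt_ratio > 0.4 AND income < 30000 THEN deny",
    },
]

# Serialise for exchange with another system or a human-facing front end.
print(json.dumps(explanations, indent=2))
```

A shared envelope of this kind would let heterogeneous producers (rule-based, planning, machine learning) emit explanations that downstream systems or user interfaces can consume uniformly, which is the interoperability problem the TG aims to address.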
The RuleML4XAI TG can be found here: https://cutt.ly/Sa19orr and here: https://cutt.ly/Ga58Ela
- RuleML
- XAI
- Explainable AI
- Technical Group