<div class="">***********************************************************************</div>
*Call for papers: Special Issue on <br class="">
The Role of Ontologies and Knowledge in Explainable AI*<br class="">
<br class="">
<div class="">to be published in the Semantic Web journal, IOS Press.<br class="">
<br class="">
Paper submission: December 10th, 2021
<div class=""><br class="">
<a href="https://sites.google.com/view/special-issue-on-xai-swj" class="">https://sites.google.com/view/special-issue-on-xai-swj</a></div>
<div class=""><br class="">
</div>
<div class="">***********************************************************************</div>
<div class=""><br class="">
<br class="">
Explainable AI (XAI) has been identified as a key factor for developing <br class="">
trustworthy AI systems. The reasons for equipping intelligent systems <br class="">
with explanation capabilities are not limited to user rights and acceptance. <br class="">
Explainability is also needed for designers and developers to enhance <br class="">
system robustness and enable diagnostics to prevent bias, unfairness, <br class="">
and discrimination, as well as to increase trust by all users in why <br class="">
and how decisions are made. <br class="">
<br class="">
The interpretability of AI systems has been studied since the mid-1980s,
but it has only recently become an active research focus in the computer
science community, driven by advances in big data and by data-protection
regulations that govern the development of AI systems, such as the GDPR.
For example, according to the GDPR, citizens have the legal right to an
explanation of decisions made by algorithms that may affect them
(see, e.g., Article 22). This policy highlights the pressing importance
of transparency and interpretability in algorithm design.

XAI focuses on developing new approaches to explaining black-box models,
aiming at good explainability without sacrificing system performance.
One typical approach is the extraction of local and global post-hoc
explanations. Other approaches are based on hybrid or neuro-symbolic
systems, advocating a tight integration between symbolic and non-symbolic
knowledge, e.g., by combining symbolic and statistical methods of reasoning.

The construction of hybrid systems is widely seen as one of the grand
challenges facing AI today. However, there is no consensus on how to
achieve this, with techniques proposed in the literature ranging from
knowledge extraction and tensor logic to inductive logic programming
and other approaches. Knowledge representation, in its many incarnations,
is a key asset for enacting hybrid systems, and it can pave the way
towards the creation of transparent and human-understandable intelligent
systems.

This special issue will feature contributions dedicated to the role played
by knowledge bases, ontologies, and knowledge graphs in XAI, in particular
with regard to building trustworthy and explainable decision support systems.
Knowledge representation plays a key role in XAI. Linking explanations to
structured knowledge, for instance in the form of ontologies, brings multiple
advantages. It not only enriches explanations (or the elements therein)
with semantic information, thus facilitating evaluation and effective
knowledge transmission to users, but also creates the potential to customise
the levels of specificity and generality of explanations for specific user
profiles or audiences. However, linking explanations, structured knowledge,
and sub-symbolic/statistical approaches raises a multitude of technical
challenges from the reasoning perspective, both in terms of scalability
and in terms of incorporating non-classical reasoning approaches, such as
defeasibility, methods from argumentation, or counterfactuals, to name
just a few.

**Topics of Interest**

Topics relevant to this special issue include, but are not limited to, the following:

- Cognitive computational systems integrating machine learning and automated reasoning
- Knowledge representation and reasoning in machine learning and deep learning
- Knowledge extraction and distillation from neural and statistical learning models
- Representation and refinement of symbolic knowledge by artificial neural networks
- Explanation formats exploiting domain knowledge
- Visual exploratory tools for semantic explanations
- Knowledge representation for human-centric explanations
- Usability and acceptance of knowledge-enhanced semantic explanations
- Evaluation of transparency and interpretability of AI systems
- Applications of ontologies for explainability and trustworthiness in specific domains
- Factual and counterfactual explanations
- Causal thinking, reasoning, and modeling
- Cognitive science and XAI
- Open-source software for XAI
- XAI applications in finance, medical and health sciences, etc.

**Deadlines**

- Submission deadline: December 10th, 2021 (papers submitted before the deadline will be reviewed upon receipt)
- Acceptance/rejection notification: March 31st, 2022
- Revision due: May 31st, 2022
- Estimated publication date: July 2022

**Author Guidelines**

Submissions shall be made through the Semantic Web journal website at
http://www.semantic-web-journal.net.
Prospective authors must take note of the submission guidelines posted at
http://www.semantic-web-journal.net/authors.

We welcome four main types of submissions: (i) full research papers,
(ii) reports on tools and systems, (iii) application reports,
and (iv) survey articles. The submission types are described at
http://www.semantic-web-journal.net/authors#types.
While there is no upper limit, paper length must be justified by content.

Note that you need to request an account on the website in order to submit
a paper. Please indicate in the cover letter that the submission is for the
"The Role of Ontologies and Knowledge in Explainable AI" special issue.
All manuscripts will be reviewed according to the SWJ open and transparent
review policy and will be made available online during the review process.

Also note that the Semantic Web journal is open access.

http://www.semantic-web-journal.net/blog/call-papers-special-issue-role-ontologies-and-knowledge-explainable-ai

**Guest Editors**

The guest editors can be reached at ontologies-knowledge-in-xai-swj@googlegroups.com.

- Roberto Confalonieri, Free University of Bozen-Bolzano, Faculty of Computer Science, Italy
- Oliver Kutz, Free University of Bozen-Bolzano, Faculty of Computer Science, Italy
- Diego Calvanese, Department of Computing Science, Umeå University, Sweden, and Free University of Bozen-Bolzano, Faculty of Computer Science, Italy
- Jose M. Alonso, University of Santiago de Compostela, CiTIUS, Spain
- Shang-Ming Zhou, University of Plymouth, Faculty of Health, UK