<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
International Workshop on Ontologies for Autonomous Robotics (<a
href="https://robontics2022.github.io/">ROBONTICS
2022</a>) <br>
<div class="moz-forward-container"> @ JOWO 2022, 15-19 August 2022,
Jönköping, Sweden<br>
<br>
<br>
We encourage researchers interested in the fields of robotics and
knowledge engineering to submit short (5-6 pages) or long (10-12
pages) research papers by May 24th, 2022; accepted papers will be
published in the JOWO proceedings volume in 2022.<br>
<br>
Researchers with accepted papers will be invited to present at the
RobOntics 2022 workshop, which is planned as a hybrid event (online
and, if conditions allow, in person).<br>
<br>
**WORKSHOP MOTIVATION**<br>
<br>
ROBONTICS focuses on robot autonomy enabled by knowledge-driven
approaches, in particular formal ontologies. It
aims to foster interaction across robotics, ontology, and
knowledge representation and reasoning; to match open problems to
promising approaches; and to review progress in knowledge-driven
robotics.<br>
<br>
Today, ontologies are used in robotics and in standardization
efforts for robotics knowledge management. Many open problems
involve autonomous robotic agents operating in natural or human
environments, and several research projects in healthcare
assistance, logistics, autonomous driving, etc., aim to bring
robots into realistic human environments.<br>
<br>
One of the difficulties is the large amount of real-world
knowledge that an agent needs in order to act competently and
autonomously. Further, any item of knowledge is often relevant
for many agents and behaviors, and as such should be reusable.
To garner trust and enable debugging, knowledge should also be
accessible to human operators, both in terms of explaining what
knowledge is present in a system and of providing ways to easily
amend it if necessary.<br>
<br>
**IMPORTANT DATES**<br>
<br>
- Submission deadline: May 24th, 2022<br>
- Notification: July 1st, 2022<br>
- Camera ready: July 22nd, 2022<br>
- Workshop: August 15th-19th (TBD), 2022<br>
<br>
<br>
**LIST OF TOPICS (partial)**<br>
<br>
Participants are invited to submit original papers for oral
presentation, including, but not limited to, topics such as:<br>
<br>
<b>- Foundational issues:</b><br>
- are some ontological approaches better suited than others
for autonomous robotics? why?<br>
- how should we ontologically model notions like capability,
action, interaction, context etc. in robotics?<br>
<b>- Robustness:</b><br>
- how can ontologies be used to help robots cope with the
variety and relatively fluid structure of human environments?<br>
- is ontology a scalable tool in robotics applications?<br>
- what are good benchmarks for robot autonomy?<br>
<b>- Ontologies in the perception-action loop:</b><br>
- what roles can ontology play in autonomous manipulation? <br>
- how can we help robots autonomously cope with manipulation
problems using ontology?<br>
- how can ontology be used to support machine learning for
object classification?<br>
<b>- Interactivity:</b><br>
- how can knowledge about other agents present in the
environment be modelled?<br>
- how should we ontologically model the flow of an interaction,
such as a conversation or shared task?<br>
- how can model-driven methods play a role in human-robot
interaction?<br>
- how can ontology-based reasoning play a role in developing
trust in human-robot interaction scenarios?<br>
<b>- Normed behavior:</b><br>
- how should we ontologically represent, and then have a robot
act according to, norms on behavior such as cultural expectations?<br>
- how can these expectations be acquired, and would they be the
same for robots as they are for humans?<br>
<b>- Explainability:</b><br>
- decision chains are very complex; how can these be organized
and presented at various levels of detail for the benefit of a
human user?<br>
- what, ontologically, is an explanation? what is a good
explanation, and how can one be generated from a collection of
knowledge items?<br>
<br>
<br>
**WORKSHOP CO-CHAIRS (alphabetical order)**<br>
<br>
- Daniel Beßler, Institute for Artificial Intelligence, University
of Bremen, Germany<br>
- Stefano Borgo, Laboratory for Applied Ontology (LOA), ISTC CNR,
Trento, Italy<br>
- Mohammed Diab, Institute of Industrial and Control Engineering,
Universitat Politècnica de Catalunya, Barcelona, Spain<br>
- Aldo Gangemi, University of Bologna and ISTC-CNR, Italy<br>
- Alberto Olivares-Alarcos, Institut de Robòtica i Informàtica
Industrial (CSIC-UPC), Barcelona, Spain<br>
- Mihai Pomarlan, Faculty of Linguistics and Literature,
University of Bremen, Germany<br>
- Robert Porzel, Digital Media Lab, University of Bremen, Germany<br>
<br>
<br>
**SUBMISSION INFORMATION**<br>
<br>
Papers presenting initial or ongoing research are welcome, as are
position and survey papers delineating robotics problems and/or
discussing the suitability of knowledge engineering approaches
for solving such problems.<br>
<br>
All contributions to the workshop must be submitted in the
CEUR-Art format. Submitted papers must be 10-12 pages for long
papers or 5-6 pages for short papers (not including
references).<br>
<br>
Papers will be refereed and accepted on the basis of their merit,
originality, and relevance to the workshop. Each paper will be
reviewed by at least two Program Committee members.<br>
<br>
Papers must be submitted electronically in PDF format to <a
href="https://easychair.org/conferences/?conf=jowo2022"
target="_blank" class="moz-txt-link-freetext">https://easychair.org/conferences/?conf=jowo2022</a><br>
<br>
<br>
**PUBLICATION**<br>
<br>
Accepted contributions to the workshop will be published in the
JOWO proceedings.
<div class="moz-signature"><br>
<br>
-- <br>
<b>Laboratory for Applied Ontology</b> (LOA), ISTC-CNR <br>
Trento, Italy <br>
<a class="moz-txt-link-freetext"
href="http://www.loa.istc.cnr.it">http://www.loa.istc.cnr.it</a></div>
</div>
</body>
</html>