CALL FOR PAPERS: Workshop on Robust AI for High-Stakes Applications, co-located with KI 2022

Workshop on Robust AI for High-Stakes Applications (RAI), co-located with KI 2022 (https://ki2022.gi.de/)

Website: https://rai2022.sme.uni-bamberg.de/

 

Introduction

Robustness is widely understood as the property of a method, algorithm, or system whose performance degrades only gradually as the assumptions about its input are increasingly violated. This makes robustness a crucial property for dependable and trustworthy applications of AI in open-world environments, in particular in high-stakes applications in which human well-being is at risk. However, this common definition of robustness raises several questions, including:

§  What performance measures should be used to evaluate the decrease in performance, i.e., which shortcomings are acceptable and which are not?

§  How do we identify the degree to which assumptions about input characteristics are violated, in particular if those assumptions are hard to specify?

Depending on the application area and technique considered, various approaches have been taken to measure or benchmark both performance and the abnormality of input characteristics. Sometimes the requirements on input data are unknown, and only later experiments reveal that an approach is not robust (one-pixel attacks on CNN-based object classification being one infamous example).

There has been a lot of progress in AI over the past few years, with many successful examples in perception and reasoning, which has encouraged the integration of the resulting technologies into important and high-stakes real-world applications such as autonomous mobile systems (e.g., self-driving cars, autonomous drones, service robots), automated surgical assistants, electrical grid management systems, and control of critical infrastructure, to name a few. However, for such an integration to constitute a beneficial socio-technical system, safety and reliability are key, and robustness is essential to avert potentially catastrophic events due to unconsidered phenomena or situations. The aim of this workshop is to bring together researchers from basic and applied AI across all sub-fields of AI to discuss approaches and challenges for developing robust AI. In particular, we envisage a dialogue between the Machine Learning and the Symbolic AI communities for the benefit of critical real-world applications. Our aim is to foster exchange between the various AI sub-fields present at KI and to discuss future research directions.

 

Topics

Robustness refers to the capability of coping with unforeseen phenomena or situations. Gearing AI towards robustness has always been an aim of open-world AI, and it becomes a pressing requirement as AI makes its way into the control of high-stakes applications. Robustness is addressed in many sub-fields of AI using various working definitions and measures. This workshop aims to bring together researchers from all sub-fields of AI working on robust methods.

In this workshop, we invite the research community in Artificial Intelligence to submit position statements and technical papers related to the theme of Robust AI for High-Stakes Applications, in order to develop a joint understanding of robustness in AI and to foster exchange on robust AI. Topics of interest include:

§  Explainable Artificial Intelligence

§  Benchmarking, evaluation, and regularization

§  Regularization in Machine Learning

§  Robust optimization

§  Robust inference algorithms

§  Causal model learning

§  Neuro-symbolic integration; Logic as a referee

§  Anomaly detection

§  Open-world planning and decision-making

§  AI in socio-technological systems

The list above is by no means exhaustive, as the aim is to foster the debate around all aspects of the suggested theme.

 

Submission

Guidelines

We invite submissions of regular research papers (up to 12 pages in KI format), position papers (up to 6 pages), or abstracts of recently published papers (3 pages) on the topic of robustness. Accepted papers will be published as a collection of working papers. The workshop is also open to people who would like to attend without submitting a paper, as discussion of the topic will play a major role. During the workshop, perspectives on proposing a special issue of the KI journal on robust AI will be discussed. Workshop submissions and camera-ready versions will be handled by EasyChair; the submission link is as follows: https://easychair.org/conferences/?conf=ki2022

Important Dates

July 15, 2022: Workshop paper submission deadline

August 5, 2022: Notification of paper acceptance

August 19, 2022: Camera-ready papers due

Note: All deadlines are Central European Time (CET), UTC+1 (Paris, Brussels, Vienna).

 

Organizing Committee

Prof. Dr. Ulrich Furbach, University of Koblenz-Landau, Germany / wizAI solutions GmbH

Dr. Alexandra Kirsch, Independent Scientist

Dr. Michael Sioutis, University of Bamberg, Germany

Prof. Dr. Diedrich Wolter, University of Bamberg, Germany

 

Contact

All questions about submissions should be emailed to the workshop co-organizers.

 
