BEWARE-23 @ AIxIA, Rome, 6-9 November 2023

 

BEWARE-23

BEWARE-23 is the 2nd international workshop on the emerging ethical aspects of AI, with a focus on Bias, Risk, Explainability and the role of Logic and Computational Logic. It is co-located with the AIxIA 2023 conference. Note that the submission deadline has been extended to the **17th of September**.
Aims and Scope

Current AI applications do not guarantee objectivity and are riddled with biases and legal difficulties. AI systems need to perform safely, but problems of opacity, bias and risk are pressing. Definitional and foundational questions about which kinds of bias and risk are involved in opaque AI technologies remain very much open. Moreover, AI challenges Ethics itself and calls for a rethinking of its foundations.

 

In this context, it is natural to look for theories, tools and technologies to address the problem of automatically detecting biases and implementing ethical decision-making. Logic, Computational Logic and formal ontologies have great potential in this area of research, as logic rules are easily comprehensible to humans and favour the representation of causality, which is a crucial aspect of ethical decision-making. Nonetheless, their expressivity and transparency need to be integrated within conceptual taxonomies and socio-economic analyses that place AI technologies in their broader context of application and determine their overall impact.

 

This workshop addresses issues of a logical, ethical and epistemological nature in AI through interdisciplinary approaches. We aim to bring together researchers in AI, philosophy, ethics, epistemology, social science and related fields to promote collaboration and foster discussion towards the development of trustworthy AI methods and solutions that users and stakeholders consider technologically reliable and socially acceptable.

 

The workshop invites submissions from computer scientists, philosophers, economists and sociologists who wish to discuss contributions ranging from the formulation of epistemic and normative principles for AI and their conceptual representation in formal models to their development into formal design procedures and their translation into computational implementations.

 

Topics of interest include, but are not limited to:

 

Conceptual and formal definitions of bias, risk and opacity in AI 

Epistemological and normative principles for fair and trustworthy AI 

Ethical AI and the challenges brought by AI to Ethics

Explainable AI

Uncertainty in AI

Ontological modelling of trustworthy as opposed to biased AI systems

Defining trust and its determinants for implementation in AI systems

Methods for evaluating and comparing the performance of AI systems

Approaches to verification of ethical behaviour

Logic Programming applications in Machine Ethics

Integrating Logic Programming with methods for Machine Ethics and Explainable AI

Submission

 

The workshop invites (possibly non-original) submissions of FULL PAPERS (up to 15 pages) and SHORT PAPERS (up to 5 pages). Short papers are particularly suitable for presenting work in progress, extended abstracts, doctoral theses, or general overviews of research projects. Note that all papers will undergo a careful peer-review process and, if accepted, camera-ready versions will be published in the AIxIA subseries of the CEUR proceedings (Scopus indexed).

 

Manuscripts must be formatted using the 1-column CEUR-ART style (an Overleaf template is available). For more information, please see the CEUR website: http://ceur-ws.org/HOWTOSUBMIT.html. Papers must be submitted through EasyChair: https://easychair.org/conferences/?conf=beware23.

Proceedings

CEUR Workshop Proceedings.

 

Please refer to the workshop website for updates regarding the proceedings and a potential special issue.

Organizers

Guido Boella, Università di Torino

Fabio Aurelio D'Asaro, Università degli Studi di Verona

Abeer Dyoub, Università degli Studi dell'Aquila

Laura Gorrieri, Università di Torino

Francesca A. Lisi, University of Bari “Aldo Moro”

Chiara Manganini, Università degli Studi di Milano

Giuseppe Primiero, Università degli Studi di Milano

Important Dates
Submission deadline (final extension): 17 September 2023

Notification: 6 October 2023

Camera ready: 20 October 2023

 
