The WACV 2022 2nd Workshop on Explainable & Interpretable Artificial Intelligence for Biometrics (xAI4Biometrics Workshop 2022) aims to promote research on explainable and interpretable AI that facilitates the adoption of AI/ML in the biometrics domain and, in particular, fosters transparency and trust.
This workshop will include two keynote talks by:
• Walter J. Scheirer, University of Notre Dame, USA
• Speaker TBA
The xAI4Biometrics Workshop 2022 is organized by INESC TEC, Porto, Portugal and co-organized by the European Association for Biometrics (EAB).
For more information, please visit http://vcmi.inesctec.pt/xai4biometrics
IMPORTANT DATES
Abstract submission (mandatory): October 04, 2021
Full Paper Submission Deadline: October 11, 2021
Acceptance Notification: November 15, 2021
Camera-ready & Registration: November 19, 2021
Conference: January 04-08, 2022 | Workshop Date: January 04, 2022
TOPICS OF INTEREST
The xAI4Biometrics Workshop welcomes works that focus on biometrics and promote the development of:
• Methods to interpret biometric models in order to validate their decisions, improve the models, and detect possible vulnerabilities;
• Quantitative methods to objectively assess and compare different explanations of the automatic decisions;
• Methods and metrics to study/evaluate the quality of explanations obtained by post-model approaches and to improve those explanations;
• Methods to generate model-agnostic explanations;
• Transparency and fairness in AI algorithms, including methods to avoid bias;
• Methods that use post-model explanations to improve the models’ training;
• Methods to achieve/design inherently interpretable algorithms (rule-based, case-based reasoning, regularization methods);
• Studies on causal learning, causal discovery, causal reasoning, causal explanations, and causal inference;
• Natural Language generation for explanatory models;
• Methods for adversarial attack detection, explanation, and defense (“How can we interpret adversarial examples?”);
• Theoretical approaches to explainability (“What makes a good explanation?”);
• Applications of all of the above, including proofs of concept and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes.
ORGANIZING COMMITTEES
GENERAL CHAIRS
o Jaime S. Cardoso, INESC TEC and University of Porto, Portugal
o Ana F. Sequeira, INESC TEC, Porto, Portugal
o Arun Ross, Michigan State University, USA
o Peter Eisert, Humboldt University of Berlin & Fraunhofer HHI, Germany
o Cynthia Rudin, Duke University, USA
PROGRAMME CHAIRS
o Christoph Busch, NTNU, Norway & Hochschule Darmstadt, Germany
o Tiago de Freitas Pereira, IDIAP Research Institute, Switzerland
o Wilson Silva, INESC TEC and University of Porto, Portugal
CONTACT
Ana Filipa Sequeira, PhD (ana.f.sequeira@inesctec.pt)
Assistant Researcher
INESC TEC, Porto, Portugal