************************************************************************************************************
xAI4Biometrics Workshop @ WACV 2022 :: Call for Papers
************************************************************************************************************
The WACV 2022 2nd Workshop on Explainable & Interpretable Artificial Intelligence for
Biometrics (xAI4Biometrics Workshop 2022) aims to promote research on Explainable &
Interpretable AI that facilitates the adoption of AI/ML in the biometrics domain and, in
particular, fosters transparency and trust.
This workshop will include two keynote talks by:
- Walter J. Scheirer, University of Notre Dame, USA
- Speaker TBA
The xAI4Biometrics Workshop 2022 is organized by INESC TEC, Porto, Portugal.
For more information, please visit http://vcmi.inesctec.pt/xai4biometrics
IMPORTANT DATES
Abstract submission: October 04, 2021
Full Paper Submission Deadline: October 25, 2021 (extended from October 11)
Acceptance Notification: November 15, 2021
Camera-ready & Registration: November 19, 2021
Conference: January 04-08, 2022 | Workshop Date: January 04, 2022
TOPICS OF INTEREST
The xAI4Biometrics Workshop welcomes works that focus on biometrics and promote the development of:
- Methods to interpret the biometric models to validate their decisions as well as to improve the models and to detect possible vulnerabilities;
- Quantitative methods to objectively assess and compare different explanations of the automatic decisions;
- Methods and metrics to study/evaluate the quality of explanations obtained by post-model approaches and improve the explanations;
- Methods to generate model-agnostic explanations;
- Methods promoting transparency and fairness in AI algorithms and avoiding bias;
- Methods that use post-model explanations to improve the models’ training;
- Methods to achieve/design inherently interpretable algorithms (rule-based, case-based reasoning, regularization methods);
- Studies on causal learning, causal discovery, causal reasoning, causal explanations, and causal inference;
- Natural language generation for explanatory models;
- Methods for adversarial attack detection, explanation, and defense (“How can we interpret adversarial examples?”);
- Theoretical approaches to explainability (“What makes a good explanation?”);
- Applications of all the above, including proofs-of-concept and demonstrators of how to integrate explainable AI into real-world workflows and industrial processes.
ORGANIZING COMMITTEES
GENERAL CHAIRS
- Jaime S. Cardoso, INESC TEC and University of Porto, Portugal
- Ana F. Sequeira, INESC TEC, Porto, Portugal
- Arun Ross, Michigan State University, USA
- Peter Eisert, Humboldt University & Fraunhofer HHI, Germany
- Cynthia Rudin, Duke University, USA
PROGRAMME CHAIRS
- Christoph Busch, NTNU, Norway & Hochschule Darmstadt, Germany
- Tiago de Freitas Pereira, IDIAP Research Institute, Switzerland
- Wilson Silva, INESC TEC and University of Porto, Portugal
CONTACT
Ana Filipa Sequeira, PhD (ana.f.sequeira@inesctec.pt)
Assistant Researcher
INESC TEC, Porto, Portugal