There is ever-growing research interest in the computer vision and machine learning community in modeling human facial and gestural behavior for clinical applications. However, the current state of the art in face and gesture analysis has not yet achieved reliable use of behavioral indicators in clinical contexts. One main challenge is the lack of available archives of behavioral observations of individuals with clinically relevant conditions (e.g., pain, depression, autism spectrum disorder). Well-labeled recordings of clinically relevant conditions are necessary to train classifiers, and interdisciplinary efforts are needed to address this necessity. The workshop aims to discuss the strengths and major challenges of using computer vision and machine learning for automatic face and gesture analysis in clinical research and healthcare applications. We invite scientists working in related areas of computer vision and machine learning for face and gesture analysis, affective computing, human behavior sensing, and cognitive behavior to share their expertise and achievements in the emerging field of face and gesture analysis for health informatics.
Organizers
- Zakia Hammal (Carnegie Mellon University)
- Di Huang (Beihang University)
- Liming Chen (Ecole Centrale De Lyon)
- Mohamed Daoudi (IMT Lille Douai, CRIStAL UMR CNRS)
- Kévin Bailly (Sorbonne University)