From enhancing data collection and annotation processes to advancing visual causality in diagnostic medical imaging, our past workshops have demonstrated the pivotal role of gaze in several domains. The GMCV 2025 workshop aims to continue this momentum by fostering collaboration among experts in neuroscience, machine learning, computer vision, medical imaging, natural language processing (NLP), and other related fields, with a strong focus on computer vision applications. Together, we will explore how bridging human and machine attention can drive more efficient, reliable solutions for computer vision tasks.
For more details, please refer to the Call for Papers below.
Sincerely,
The GMCV Workshop Organizing Committee
********************************************************************************
The 2025 Gaze Meets CV workshop in conjunction with WACV 2025
********************************************************************************
Webpage: https://sites.google.com/view/gmcv-workshop-wacv2025
Twitter Handle: https://twitter.com/Gaze_Meets_ML
Submission site: https://cmt3.research.microsoft.com/GMCV2025
Submission deadline: November 30th, 2024
Date: Feb 28th – Mar 4th, 2025
Location: Tucson, Arizona, USA
** Overview **
We are excited to host the Gaze Meets Computer Vision (GMCV) Workshop, in conjunction with WACV 2025 (Feb 28th – Mar 4th). The workshop will take place in person in Tucson, Arizona! We’ve got a great lineup of speakers.
** Background **
The rise of big data and human-centered technologies has brought exciting advancements along with challenges such as data annotation, multimodal fusion, and human-computer interaction. Wearable eye-tracking devices like the Meta Quest 3 and Apple Vision Pro promise to revolutionize the field by enabling eye-gaze data collection in real-world settings, offering new ways to study human cognition and develop gaze-aware ML models.
Eye gaze is a cost-effective way to gather physiological data, revealing attentional patterns in domains such as radiology, marketing, and UX. Recently, it has been used for data labeling and analysis in computer vision, with growing interest in using gaze as a cognitive signal to train models. Key challenges remain, including data quality and decoding, but advances in eye-tracking are opening new possibilities for egocentric perception, embodied AI, and multimodality. This workshop aims to bring together experts to address core issues in gaze-assisted computer vision.
** Call for Papers **
We invite submissions to the “Gaze meets Computer Vision (GMCV): Bridging Human Attention and Machine Perception” workshop at WACV 2025. The workshop seeks original research contributions, as well as surveys and position papers, that focus on the integration of gaze data with computer vision tasks. We welcome papers addressing a broad range of topics, including but not limited to:
- Gaze-Informed Visual Understanding
- Gaze-Based Human-AI Interaction
- Attention Modeling in Vision Systems
- Gaze-Driven Annotation and Labeling
- Egocentric Vision and Embodied AI
- Gaze-Enhanced Medical Imaging
- Understanding Human Intention and Goal Inference
- Eye-Tracking in Visual Search and Navigation
- Explainable AI and Trustworthy Vision Systems
- Ethical Considerations of Using Eye-Tracking Data
- Gaze Data Quality and Integration
- State-of-the-Art Methods for Integrating Gaze in ML
- Gaze Applications in Cognitive Psychology, Radiology, Neuroscience, AR/VR, Autonomous Cars, Privacy, etc.
- Gaze-Driven Behavioral Analytics
- Cross-Modal Learning with Gaze and Vision
- Real-Time Gaze Prediction and Analysis
- Gaze-Guided Object Detection and Recognition
- Learning from Noisy Gaze Data
- Temporal Dynamics of Gaze in Video Analysis
- Privacy-Preserving Gaze Analysis
- Gaze in Low-Light and Challenging Environments
- Personalization in Vision Systems via Gaze Data
- Other Applications of Gaze and Computer Vision
Submission Tracks:
We are accepting submissions for two distinct tracks: Full Paper Track and Extended Abstract Track. Both offer unique opportunities to showcase your work at the workshop.
- Full Paper Track (Archival). This track is for original research contributions that will be published in the workshop proceedings and included in IEEE Xplore. Full papers in this track undergo rigorous peer review and are indexed separately from the main conference proceedings, ensuring visibility and recognition in the field.
- Page Limit: Up to 8 pages (excluding references and appendices)
- Review Process: Double-blind peer review
- Publication: IEEE Xplore, archival indexing
- Extended Abstract Track (Non-Archival). This track is for late-breaking research, preliminary results, or previously published work you wish to present. Submissions in this track will also undergo double-blind peer review, without committing your work to archival publication. This means that presenting at GMCV does not preclude future submissions to journals or other conferences.
- Page Limit: Up to 4 pages (excluding references and appendices)
- Review Process: Double-blind peer review
- Publication: Non-archival, no restrictions on future publication
Submission Guidelines:
- Formatting: All submissions must adhere to the WACV template and guidelines.
- References & Appendices: Include references and any appendices within the same PDF document. These sections are excluded from the page count limit.
- Review Process: All submissions, regardless of track, will undergo a double-blind peer review to ensure quality and fairness.
** Awards and Funding **
We are offering two GP3 SD UX eye-tracking devices from Gazepoint as Best Paper Awards. In addition, depending on funding availability, we will cover the registration fees of presenting authors, with a focus on supporting underrepresented minorities.
** Important dates for Workshop paper submission **
- Paper submission deadline: November 30th, 2024 (extended from November 22nd, 2024)
- Notification of acceptance: December 18th, 2024
- Camera-ready: January 10th, 2025
- Workshop: Feb 28th or Mar 4th, 2025
** Organizing Committee **
Dario Zanca (FAU Erlangen-Nürnberg)
Ismini Lourentzou (Illinois Urbana-Champaign)
Joy Tzung-yu Wu (Stanford)
Bertram Emil SHI (HKUST)
Elizabeth Krupinski (Emory School of Medicine)
Jimin Pi (Google)
Alexandros Karargyris (MLCommons)
Amarachi Mbakwe (Virginia Tech)
Satyananda Kashyap (IBM)
Abhishek Sharma (Google)
** Contact **
All inquiries should be sent to dario.zanca@fau.de or akarargyris@gmail.com.