ICCV 2021 Workshop on Computer Vision in Human-Robot Collaboration (CVinHRC)

CALL FOR PAPERS

 

ICCV 2021 Workshop on: “Computer Vision in Human-Robot Collaborative Factories of the Future” (CVinHRC 2021)

https://cvinhrc.iti.gr/

 

In Conjunction with ICCV 2021 – International Conference on Computer Vision

11-17 October 2021, Montreal, Canada

http://iccv2021.thecvf.com/home

 

The workshop, like ICCV 2021 itself, will be held virtually

 

Scope and Topics Covered

Technological breakthroughs in robotics and the needs of the factories of the future (Industry 4.0) are bringing robots out of their cages to work in close collaboration with humans, aiming to increase productivity, flexibility and autonomy in production. To enable true and effective human-robot collaboration, the perception systems of such collaborative robots should be endowed with advanced computer vision methods that will transform them into active and effective co-workers.

Recent advances in the field of computer vision are anticipated to resolve several complex tasks that require human-robot collaboration in the manufacturing and logistics domains. However, the applicability of existing computer vision techniques in such factories of the future is hindered by the challenges that real, unconstrained industrial environments with cobots impose, such as variability in the position and orientation of manipulated objects, deformation and articulation, occlusions, motion, dynamic environments, human presence and more.

In particular, the variability of manufactured parts and the lighting conditions in realistic environments render robust object recognition and pose estimation challenging, especially when collaborative tasks demand dexterous and delicate grasping of objects. Deep learning can further advance existing methods to cope with occlusions and other incurred challenges, while combining learning with visual attention models could reduce the need for data redundancy by selecting the most prominent and context-rich viewpoints to be memorized, boosting the overall performance of vision systems. Moreover, close-distance collaboration with humans requires accurate SLAM and real-time monitoring and modelling of the human body, to be applied to robot manipulation and AGV navigation tasks in unconstrained environments, ensuring safety and human trust in the new automation solutions. In addition, more advanced semantic SLAM methods are needed to endow cobots with robust long-term autonomy requiring no or minimal human intervention. Furthermore, the fusion of deep learning with multimodal perception can offer solutions to complex manufacturing tasks that require powerful vision systems to deal with challenges such as articulated objects and deformable materials handled by the robots. This can be achieved not only by using vision systems as passive observers of the scene, but also through the active involvement of collaborative robots endowed with visual search and view planning capabilities, drastically increasing their knowledge of their surroundings.

The goal of this workshop is to bring together researchers from academia and industry in the field of computer vision and enable them to present novel methods and approaches that lay the basis for further advances in robotic perception, addressing the significant challenges of human-robot collaboration in the factories of the future.

 

We encourage submissions of original and unpublished work that addresses computer vision for robotic applications in the manufacturing and logistics domains, including but not limited to the following:

  •     Deep learning for object recognition and pose estimation in manufacturing and logistics
  •     6-DoF object pose estimation for grasping
  •     Real-time object tracking and visual servoing
  •     Vision-based object affordance learning
  •     Vision-based manipulation skill modelling and knowledge transfer
  •     View planning with robot active vision
  •     Human presence modelling, detection and tracking in real factory environments
  •     Human-robot workspace modelling for safe manipulation
  •     Semantic SLAM and lifelong environment learning
  •     Safe AGV navigation based on visual input
  •     Multi-AGV perception and coordination for multiple tasks
  •     Visual search for AGVs and manipulators in industrial environments
  •     Sensor fusion (Camera, Lidar, Haptic, etc.) for enhanced scene understanding
  •     Vision-based attention modeling for collaborative tasks

Invited Speakers

  •     Prof. Lydia Kavraki, Rice University, USA
  •     Prof. John Tsotsos, York University, Canada
  •     Prof. Markus Vincze, Technical University of Vienna, Austria
  •     Prof. Danica Kragic, Royal Institute of Technology (KTH), Sweden
  •     Prof. Antonios Argyros, University of Crete, Greece
  •     Dr. Georgia Gkioxari, Facebook Research

 

Important Dates

Paper Submission Deadline:    July 2, 2021

Author Notification:          July 23, 2021

Camera Ready Submission:      August 1, 2021

 

Workshop Paper Submissions

Workshop papers should be submitted electronically, in PDF format, through the workshop submission website:
https://cmt3.research.microsoft.com/CVINHRC2021

Papers should be properly anonymized and should follow the guidelines and template of ICCV 2021: iccv2021.thecvf.com/node/4#submission-guidelines

For further information on the paper submission process, please visit the workshop website: https://cvinhrc.iti.gr/

 

Workshop Organizers

Dimitrios Giakoumis, Senior Researcher, Grade C' at CERTH/ITI, dgiakoum@iti.gr
Ioannis Kostavelis, Senior Researcher, Grade C' at CERTH/ITI, gkostave@iti.gr
Ioannis Mariolis, Postdoctoral Research Associate at CERTH/ITI, ymariolis@iti.gr
Dimitrios Tzovaras, Senior Researcher, Grade A' at CERTH/ITI and President of the Board of CERTH, dimitrios.Tzovaras@iti.gr
