CALL FOR PAPERS 3D-DLAD-v4 2022
The 3D-DLAD-v4 (Fourth 3D Deep Learning for Autonomous Driving) workshop is the sixth workshop in the DLAD workshop series. It is organized as part of the flagship automotive conference, the Intelligent Vehicles Symposium (IV 2022): iv2022.com/.
Deep learning has become a de facto tool in computer vision and 3D processing, boosting performance and accuracy on diverse tasks such as object classification, detection, optical flow estimation, motion segmentation, and mapping. Lidar sensors play an important role in the development of autonomous vehicles, as they overcome several drawbacks of camera-based systems, such as degraded performance under changes in illumination and weather conditions. In addition, Lidar sensors capture a wider field of view and directly provide 3D information, which is essential for ensuring the safety of the different agents and obstacles in the scene. At the same time, processing more than 100k points per scan in real time within modern perception pipelines is computationally challenging. Motivated by these considerations, and to address the growing interest in deep representation learning for Lidar point clouds in both academic and industrial autonomous-driving research, we invite submissions to this workshop to disseminate the latest research.
We are soliciting contributions on deep learning for 3D data applied to autonomous driving in (but not limited to) the following topics. Please feel free to contact us if you have any questions. Workshop papers are reviewed under the same procedure as conference papers and will be published in the proceedings together with the conference papers.
TOPICS:
Deep Learning for Lidar-based clustering, road extraction, object detection, and/or tracking
Deep Learning for Radar point clouds
Deep Learning for ToF sensor-based driver monitoring
New Lidar-based technologies and sensors
Deep Learning for Lidar localization, VSLAM, meshing, and point cloud inpainting
Deep Learning for odometry and map/HD-map generation with Lidar cues
Deep fusion of automotive sensors (Lidar, camera, Radar)
Design of datasets and active learning methods for point clouds
Synthetic Lidar sensors & simulation-to-real transfer learning
Cross-modal feature extraction for sparse-output sensors such as Lidar
Generalization techniques for different Lidar sensors, multi-Lidar setups, and point densities
Lidar-based maps, HD maps, prior maps, and occupancy grids
Real-time implementation on embedded platforms (efficient design & hardware accelerators)
Challenges of deployment in a commercial system (functional safety & high accuracy)
End-to-end learning of driving with Lidar information (single model & modular end-to-end)
Deep Learning for dense Lidar point cloud generation from sparse Lidars and other modalities
Workshop link: sites.google.com/view/3d-dlad-v4-iv2022/home
Submission instructions: iv2022.com/program/workshops
Location : Aachen, Germany
Workshop papers submission deadline: March 8th, 2022
Acceptance/Rejection Notification: April 22nd, 2022
Final paper submission: May 1st, 2022
Workshop Organizers:
Abhinav Valada, University of Freiburg, Germany
Varun Ravi Kumar, Qualcomm
B Ravi Kiran, Navya, France
Senthil Yogamani, Qualcomm
Patrick Perez, Valeo.AI, France
Bharanidhar Duraisamy, Daimler, Germany
Dan Levi, GM, Israel
Lars Kunze, Oxford University, UK
Markus Enzweiler, Daimler, Germany
Sumanth Chennupati, Wyze Labs, USA
Stefan Milz, Spleenlab.ai, Germany
Hazem Rashed, Valeo AI Research, Egypt
Jean-Emmanuel Deschaud, MINES ParisTech, France
Victor Vaquero, Research Engineer, IVEX.ai
Kuo-Chin Lien, Appen USA
Naveen Shankar Nagaraja, BMW Group, Munich