We cordially invite you to attend the IROS 2023 workshop on “Robotic Perception and Mapping: Frontier Vision & Learning Techniques”. Researchers are invited to submit short papers, extended abstracts, posters, and/or videos to the workshop.
Key Dates
=========
Workshop Date: October 5, 2023 (IROS 2023)
Location: Detroit, USA
Submission Deadline: Sunday, Aug 20, 2023, 11:59 PM EDT
Acceptance Notification: Sunday, Sep 10, 2023
Workshop URL: https://sites.google.com/view/ropem/
Scope
=====
This workshop aims to present the latest advancements and frontier techniques in computer vision and machine learning that are expected to have a significant impact on robotic perception and mapping and to set the direction of research for the next 5-10 years. Through a series of invited and contributed talks by renowned academic leaders and researchers, the event will discuss frontier technologies for robotic perception and mapping. Particular focus will be placed on existing computer vision challenges, such as handling dynamic environments and non-rigid objects, and on the trade-off between scalability (capturing large environments over long periods of operation without running out of memory) and expressivity (capturing precise details of environment characteristics, including geometry, semantics, dynamics, and topology). The workshop will also address machine learning challenges, such as reducing the training and inference time of machine learning models, fitting large models on small robotic platforms, trading off pre-training against fine-tuning of environment models, and ensuring generalization and robustness. To encourage interaction among participants, the workshop will feature panel discussions, posters, and spotlight talks. The event will adopt a hybrid format with both in-person and remote participants. All talks and accepted contributions will be published on the workshop's webpage to expand its reach and impact.
This workshop is a follow-up to the well-received ICRA 2022 workshop on “Robotic Perception and Mapping: Emerging Techniques,” which drew the largest attendance of any workshop at the conference, with over 1,000 participants. The follow-up workshop offers a new perspective by inviting a new set of speakers and focusing on frontier research in computer vision and machine learning (rather than mapping, the focus of the previous workshop). To differentiate the workshop from traditional computer vision and machine learning conferences, the talks will focus on techniques with direct applications in robotic perception and autonomy.
Invited Speakers
================
– Konstantinos Alexis (Norwegian University of Science and Technology)
– Wolfram Burgard (University of Technology Nuremberg)
– Maurice Fallon (University of Oxford)
– Katerina Fragkiadaki (Carnegie Mellon University)
– Golnaz Habibi (University of Oklahoma)
– Christoffer Heckman (University of Colorado Boulder)
– Chad Jenkins (University of Michigan)
– Michael Kaess (Carnegie Mellon University)
– Jana Kosecka (George Mason University)
– Lingjie Liu (University of Pennsylvania)
Call for Papers
===============
We cordially invite researchers to submit short papers, extended abstracts, posters, and/or videos. We accept original papers as well as in-review or recently accepted manuscripts. Submitted contributions can describe work in progress, preliminary results, novel concepts, or industrial applications. All manuscripts are limited to 4+n pages (i.e., pages beyond the fourth are allowed only for references), should use the IEEE standard two-column conference format (see the IROS 2023 website), and must be submitted as PDF files smaller than 20 MB. We encourage authors to submit a video as supplementary material; all videos must be in mp4 format and smaller than 100 MB. All submissions will be peer-reviewed, and authors who submit a paper are expected to provide up to 3 single-blind reviews of other papers submitted to the workshop. Contributions will be selected by the workshop organizers based on the reviews and on originality, relevance to the workshop topics, contribution, technical clarity, and presentation. All accepted manuscripts will be presented as posters, which will be displayed throughout the day of the workshop. The two top contributions will be selected for 10-minute oral presentations in spotlight sessions. Accepted posters and videos will be posted on the workshop website.
Topics of Interest
==================
– Novel 3D representations including implicit and explicit neural representations, compressed point clouds, meshes, signed distance functions, occupancy, and semantic maps
– Dynamic and non-rigid 3D reconstruction
– Language models for perception and spatial reasoning
– Test-time incremental neural network training for perception and mapping
– Self-supervised feature and environment model training
– Semantic scene understanding, detection, and segmentation
– Multi-modal sensing for multi-modal scene reconstruction
– Vision-based navigation and estimation
– Robust localization in uncertain and dynamic environments
– Uncertainty estimation and introspective failure detection in machine learning and perception
– Few-shot generalization and robustness to distribution shift in mapping and SLAM domains
– Certifiable and interpretable learning techniques for perception
– Optimization and graphical models for perception
– Synergies between learning-based and model-based techniques for perception
Organizers
==========
– Nikolay Atanasov (University of California San Diego)
– Luca Carlone (Massachusetts Institute of Technology)
– Kevin Doherty (Massachusetts Institute of Technology)
– Kaveh Fathian (corresponding organizer)
– Golnaz Habibi (University of Oklahoma)
– Jonathan How (Massachusetts Institute of Technology)
– John Leonard (Massachusetts Institute of Technology)
– Carlos Nieto (US Army Research Laboratory)
– Hasan Poonawala (University of Kentucky)
– David Rosen (Northeastern University)
– Sebastian Scherer (Carnegie Mellon University)
– Chen Wang (University at Buffalo)
– Shibo Zhao (Carnegie Mellon University)
You can contact the corresponding organizer with any questions: fathian@ariarobotics.com
More information at:
https://sites.google.com/view/ropem/