General-purpose autonomy requires robots to interact with a constantly
changing and uncertain world.
We are organizing an ICRA 2021 workshop on perception and action in
dynamic environments, bringing together keynote speakers from computer
vision, machine learning, and robotics.
We encourage the submission of full research papers or extended
abstracts; please submit even if your work is only preliminary!
In conjunction with the workshop, we will hold the DodgeDrone Challenge,
where participants can build navigation algorithms for drones flying
through a forest! The winner of the competition will receive a Skydio 2
drone, awarded directly by Skydio!
Please visit the workshop website for further details:
https://uzh-rpg.github.io/PADE-ICRA2021/
==============
Important dates
==============
* Paper submission deadline: 10.05.2021 (AoE)
* Notification date: 24.05.2021 (AoE)
* Challenge submission deadline: 01.06.2021 (AoE)
* Workshop date: 04.06.2021 from 3pm to 8pm GMT (London time),
online.
=================
Overview and topics
=================
Humans and animals have an innate capacity to make predictions about
their surroundings, which allows them to react to both static and
dynamic obstacles while acting. Thanks to this ability, a seagull, for
example, can catch a fast-moving fish in a short amount of time. In
contrast, artificial agents struggle to interact with complex and
dynamic environments and often rely either on the assumption that the
world is static or on simplified motion models of their surroundings.
This workshop will bring together researchers from different
backgrounds (computer vision, machine learning, and robotics) and
applications to discuss existing solutions, open research problems, and
the way forward to make robots interact with a constantly moving world.
Besides the usual mix of invited talks and poster presentations, we will
organize the DodgeDrone Challenge, in which participants will develop
perception and control algorithms to navigate a drone in a highly
dynamic environment. Topics of interest include:
* Perception: state estimation, object detection, free-space
detection, etc.
* Simulation and modeling
* Transfer from simulation to reality
* Machine learning for robotics: end-to-end learning, learning from
demonstration, reinforcement learning
* Control: from high-level planning to high-fidelity tracking
* Manipulation in unstructured environments
* Application-specific challenges: interaction with humans,
navigation in the wild, AR/VR in dynamic scenes, etc.
==========
Submission
==========
All submitted papers will be reviewed by at least two international
experts on the basis of technical quality, relevance, significance, and
clarity. We accept extended abstracts (2-4 pages), experience reports
(2-4 pages), and full research papers (up to 6 pages). We also encourage
the submission of live demos and working systems (up to 2 pages).
Submissions should be in PDF, using the standard IEEE conference format.
All accepted papers will be made available on the workshop website, and
authors will be invited to give a presentation about their work.
Submission website:
https://easychair.org/conferences/?conf=pade2021.
==========
Challenge
==========
The DodgeDrone Challenge revisits the popular dodgeball game in the
context of autonomous drones. Specifically, participants will have to
code navigation policies to fly drones between waypoints while avoiding
dynamic obstacles. Drones are fast but fragile systems: as soon as
something hits them, they will crash! Since objects will move towards
the drone with different speeds and accelerations, smart algorithms are
required to avoid them!
The competition consists of two challenges: (i) navigation in a static
environment, and (ii) navigation in a dynamic environment. The
navigation policy can rely only on on-board perception: dense depth
images and the agent and goal locations. These two input modalities will
help participants concentrate on different aspects of the navigation
algorithm. Two environments are used for the competition: a simple,
non-photorealistic one, which should be reserved for training and
development, and a test environment consisting of a photorealistic
forest, where drones have to avoid the vegetation, as well as the rocks
and birds that will obstruct their path.
A demo video can be found at the following link:
https://youtu.be/ZC1jfh2074o
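To make the input/output structure described above concrete, here is a
purely illustrative sketch of such a navigation policy in Python. The
function name, the NumPy depth array, the world-frame position vectors,
and the velocity-command output are all assumptions made for this
example and are not the official challenge interface; please refer to
the workshop website for the actual interface and rules.

import numpy as np

def navigation_policy(depth, agent_pos, goal_pos):
    """Toy reactive policy: fly towards the goal and dodge sideways
    when the depth image shows a nearby obstacle.

    depth     -- H x W array of metric depth values in metres (hypothetical)
    agent_pos -- 3-vector, drone position in a world frame (hypothetical)
    goal_pos  -- 3-vector, goal position in the same frame (hypothetical)
    returns   -- desired velocity command (3-vector, m/s)
    """
    # Unit vector pointing from the drone to the goal.
    to_goal = goal_pos - agent_pos
    v_goal = to_goal / (np.linalg.norm(to_goal) + 1e-6)
    cmd = 2.0 * v_goal              # nominal 2 m/s towards the goal

    # Compare the closest obstacle in the left and right image halves.
    h, w = depth.shape
    left_min = depth[:, : w // 2].min()
    right_min = depth[:, w // 2:].min()

    safety_dist = 3.0               # start dodging below 3 m clearance
    if min(left_min, right_min) < safety_dist:
        # Side-step towards the image half with more free space.
        sign = 1.0 if left_min < right_min else -1.0
        cmd += 2.0 * sign * np.array([0.0, 1.0, 0.0])
    return cmd

A reactive heuristic like this only shows the expected inputs and
outputs; as noted above, avoiding objects that approach with different
speeds and accelerations will require considerably smarter algorithms.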
=======================
Confirmed invited speakers
=======================
* Hayk Martiros, Skydio Autonomy
* Katherine J. Kuchenbecker, Max Planck Institute for Intelligent
Systems
* Aleksandra Faust, Google Brain
* Chelsea Finn and Annie Xie, Stanford
* Wolfram Burgard, University of Freiburg and Toyota Research
Institute
* Raquel Urtasun, University of Toronto
* Richard Newcombe, Facebook Reality Labs
=========
Organizers
=========
* Antonio Loquercio, University of Zurich and ETH Zurich, Switzerland
* Davide Scaramuzza, University of Zurich and ETH Zurich, Switzerland
* Luca Carlone, Massachusetts Institute of Technology (MIT), USA
* Markus Ryll, Technical University of Munich, Germany
On behalf of the organizers,
Antonio Loquercio