CLVISION CVPR 2020: Workshop on Continual Learning in Computer Vision
OVERVIEW
During the past few years, we have witnessed a renewed and growing attention to Continual Learning (CL). The interest in CL is essentially twofold. From the artificial intelligence perspective, CL can be seen as another important step towards the grand goal of creating autonomous agents which can learn continuously and acquire new and complex skills and knowledge. From a more practical perspective, CL looks particularly appealing because it enables two important properties: adaptability and scalability. One of the key hallmarks of CL techniques is the ability to update models using only recent data (i.e., without accessing old data). This is often the only practical solution when learning on the edge from high-dimensional streaming or ephemeral data, which would be impossible to keep in memory and process from scratch every time a new piece of information becomes available. Unfortunately, when (deep or shallow) neural networks are trained only on new data, their weights are rapidly overwritten, a phenomenon known in the literature as catastrophic forgetting.
To address this challenge, the goal of the CVPR 2020 Workshop on Continual Learning (CLVision) is to explore methods that generalize to a continuous stream of tasks, incrementally consolidating their knowledge without interfering with previously learned information. We therefore encourage submissions that address learning from few examples, catastrophic forgetting, online learning, large-scale realistic benchmarks, and bio-inspired mechanisms for continual learning, such as memory and synaptic plasticity. In this one-day workshop, we will have regular paper presentations, invited speakers, and technical benchmark challenges to present the current state of the art, as well as the limitations and future directions for continual learning in computer vision, arguably one of the most crucial milestones of computer vision and AI in general.
We solicit paper submissions on novel methods and application scenarios of Continual Learning.
TOPICS OF INTEREST (include but are not limited to):
- Continual/Lifelong learning: Models that are able to adapt to new tasks without forgetting previously learned ones.
- Few-shot learning: Models that learn from a few examples.
- Transfer learning: Models that use new information to improve performance on both previous and novel tasks.
- Online learning: Models that can learn online.
- Bio-inspired learning: Works that take inspiration from nature to propose fundamental mechanisms for continual learning, such as memory or synaptic plasticity.
- Curiosity: Works where the model identifies the most important pieces of information to incorporate new knowledge efficiently. Unsupervised/self-supervised models are welcome.
- Metrics: Metrics and benchmarks for continual learning of visual representations.
- Experience replay: Experience replay for learning systems and robots.
All accepted papers will be presented as posters. Two papers will be selected for oral presentation, and one paper will receive the best paper award.
CLVision CHALLENGE:
The CLVision workshop also provides a comprehensive two-phase challenge track to thoroughly assess novel continual learning solutions in the computer vision context, based on three different continual learning (CL) protocols. With this challenge we aim to:
- Invite the research community to scale up CL approaches to natural images and, possibly, to video benchmarks.
- Invite the community to work on solutions that can generalize over multiple CL protocols and settings (e.g., with or without a supervised “task” signal).
- Provide the first opportunity for comprehensive evaluation on a shared hardware platform for a fair comparison.
- Provide the first opportunity to show the generalization capabilities (as opposed to over-learning) of the proposed approaches on a hidden continual learning benchmark.
More details on the CLVision Workshop Challenge can be found here: https://sites.google.com/view/clvision2020/challenge.
SUBMISSION GUIDELINES:
- Submitted manuscripts should follow the CVPR 2019 paper template. Papers should be submitted through: https://cmt3.research.microsoft.com/CONTVISION2020
- The page limit is 8 pages for full papers and 4 pages for short papers (excluding references).
- We accept dual submissions to CVPR 2020 and CLVision 2020, but the manuscript must contain substantial original content not submitted to any other conference, workshop, or journal.
- Submissions will be rejected without review if they:
- contain more than 8 pages (excluding references).
- violate the double-blind policy or the dual-submission policy.
- Accepted papers will be linked on the workshop webpage and, if the authors agree, also included in the main conference proceedings.
- Papers will be peer-reviewed under the double-blind policy.
IMPORTANT DATES:
- Workshop paper submission deadline: March 20, 2020 (11:59 pm Pacific Time)
- Notification to authors: April 2, 2020
- Camera-ready deadline: April 10, 2020 (11:59 pm Pacific Time)
- Workshop date: June 14, 2020
INVITED SPEAKERS:
- Dr. Razvan Pascanu, DeepMind.
- Prof. Chelsea Finn, Assistant Professor, Stanford University.
- Prof. Cordelia Schmid, INRIA Research Director, Head of the THOTH Project Team.
- Prof. Davide Maltoni, Professor, Università di Bologna.
- Prof. Christopher Kanan, Paige, RIT, and Cornell Tech.
- Prof. Gemma Roig, Assistant Professor, SUTD and MIT.
- Subutai Ahmad, VP Research, Numenta.
ORGANIZERS:
- Pau Rodriguez, Element AI.
- German Parisi, University of Hamburg.
- David Vazquez, Element AI.
- Vincenzo Lomonaco, University of Bologna.
- Nikhil Churamani, University of Cambridge.
- Zhiyuan (Brett) Chen, Google.
- Marc Pickett, Google Research.
WORKSHOP WEBSITE: https://sites.google.com/view/clvision2020
PAPER SUBMISSION: https://cmt3.research.microsoft.com/CONTVISION2020
—————————
Thanks and Regards
Nikhil Churamani
PhD Student
University of Cambridge
Department of Computer Science and Technology
William Gates Building
15 JJ Thomson Avenue
Cambridge CB3 0FD
Phone: +44 1223 767024
Email: Nikhil DOT Churamani AT cl.cam.ac.uk