Call For Papers 

IEEE Transactions on Pattern Analysis and Machine Intelligence 

Special Issue on Learning with Fewer Labels in Computer Vision 

 

 

1. Abstract and Motivation

The past several years have witnessed an explosion of interest in, and dizzyingly fast development of, machine learning, a subfield of artificial intelligence. Foremost among these approaches are Deep Neural Networks (DNNs), which can learn powerful feature representations with multiple levels of abstraction directly from data when large amounts of labeled data are available. Object classification, one of the core computer vision tasks, achieved a significant breakthrough with a deep convolutional neural network trained on the large-scale ImageNet dataset, a result that arguably reignited the field of artificial neural networks and triggered the recent revolution in Artificial Intelligence (AI). Nowadays, AI has spread across almost all fields of science and technology. Yet computer vision remains at the heart of these advances when it comes to visual data analysis, offering some of the largest sources of big data and enabling advanced AI solutions to be developed.

Undoubtedly, DNNs have shown remarkable success in many computer vision tasks, such as recognizing, localizing, and segmenting faces, persons, objects, scenes, actions, and gestures, and recognizing human expressions and emotions as well as object relations and interactions in images or videos. Despite this wide range of impressive results, current DNN-based methods typically depend on massive amounts of accurately annotated training data to achieve high performance, and are brittle in that their performance can degrade severely with small changes in their operating environment. Collecting large-scale training datasets is generally time-consuming, costly, and in many applications even infeasible, since in certain fields (such as visual inspection or the medical domain) only very limited examples, or none at all, can be gathered, although for some computer vision tasks large amounts of unlabeled data may be relatively easy to collect, e.g., from the web or via synthesis. Nevertheless, labeling and vetting massive amounts of real-world training data is difficult, expensive, and time-consuming, as it requires the painstaking efforts of experienced human annotators or experts, and in many cases it is prohibitively costly or impossible due to privacy, safety, or ethical issues (e.g., endangered species, drug discovery, medical diagnostics, and industrial inspection).

DNNs also lack the ability to learn from limited exemplars and to generalize quickly to new tasks, yet real-world computer vision applications often require models that can (a) learn from few annotated samples and (b) continually adapt to new data without forgetting prior knowledge. By contrast, humans can learn from just one or a handful of examples (i.e., few-shot learning), can learn over very long time horizons, and can form abstract models of a situation and manipulate these models to achieve extreme generalization. As a result, one of the next big challenges in computer vision is to develop learning approaches that address these important shortcomings of existing methods. To overcome the current inefficiency of machine learning, there is therefore a pressing need for methods that (1) drastically reduce the requirements for labeled training data, (2) significantly reduce the amount of data necessary to adapt models to new environments, and (3) ultimately use as little labeled training data as people need.

 

2. Topics of Interest

This special issue focuses on learning with fewer labels for computer vision tasks such as image classification, object detection, semantic segmentation, instance segmentation, and many others. Topics of interest include (but are not limited to) the following areas:

  • Self-supervised learning methods 
  • New methods for few-/zero-shot learning 
  • Meta-learning methods 
  • Life-long/continual/incremental learning methods 
  • Novel domain adaptation methods 
  • Semi-supervised learning methods 
  • Weakly-supervised learning methods 

3. Submission Deadline

Paper Submission Deadline: April 15, 2021. 

4. Guest Editors

  • Li Liu 

Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland 

li.liu@oulu.fi 

  • Timothy Hospedales 

Professor 

University of Edinburgh, UK 

Principal Scientist at Samsung AI Research Centre; Alan Turing Institute Fellow

t.hospedales@ed.ac.uk 

  • Yann LeCun 

Silver Professor 

New York University, United States  

VP and Chief AI Scientist at Facebook  

yann@fb.com 

  • Mingsheng Long 

Tsinghua University, China  

mingsheng@tsinghua.edu.cn 

  • Jiebo Luo 

Professor 

University of Rochester, United States  

jluo@cs.rochester.edu 

  • Wanli Ouyang 

University of Sydney, Australia  

wanli.ouyang@sydney.edu.au 

  • Matti Pietikäinen 

Professor (IEEE Fellow) 

Center for Machine Vision and Signal Analysis, University of Oulu, Finland

matti.pietikainen@oulu.fi 

  • Tinne Tuytelaars 

Professor 

KU Leuven, Belgium  

Tinne.Tuytelaars@esat.kuleuven.be 

 

Main Contact: 

Dr. Li Liu 

Email: li.liu@oulu.fi, dreamliu2010@gmail.com 

National University of Defense Technology, China 

Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland 
