CVPR 2023 Workshop on Event-based Vision – Call for papers and contributions

Full day workshop at CVPR 2023. June 19th, Vancouver, Canada.

Website: https://tub-rip.github.io/eventvision2023/

**************
Important Dates
**************
Paper submission deadline: March 20, 2023.
Submission website: https://cmt3.research.microsoft.com/EVENTVISION2023
Demo abstract submission: March 20, 2023
Notification to authors: April 3, 2023
Camera-ready paper: April 8, 2023 (firm deadline by IEEE)
Workshop day: June 19, 2023 (2nd day of CVPR; full-day workshop).

**************
Objective
**************
This workshop is dedicated to event-based cameras, smart cameras, and algorithms processing data from these sensors. Event-based cameras are bio-inspired sensors with the key advantages of microsecond temporal resolution, low latency, very high dynamic range, and low power consumption. Because of these advantages, event-based cameras open frontiers that are unthinkable with standard frame-based cameras, which have been the main sensing technology for the past 60 years. These revolutionary sensors enable the design of a new class of algorithms to track a baseball in the moonlight, build a flying robot with the agility of a bee, and perform structure from motion in challenging lighting conditions and at remarkable speeds. The sensors became commercially available in 2008 and are slowly being adopted in computer vision and robotics. In recent years they have received attention from large companies: Sony, Samsung, and Omnivision, among others, are now producing event sensors.

The workshop also considers novel vision sensors, such as pixel processor arrays (PPAs), which perform massively parallel processing near the image plane. Because early vision computations are carried out on-sensor, the resulting systems achieve high speed and low power consumption, enabling new embedded vision applications in areas such as robotics, AR/VR, automotive, gaming, and surveillance. The workshop will cover the sensing hardware as well as the processing and learning methods needed to take advantage of these novel cameras.

**************
Call for Papers
**************
Research papers and demos are solicited in, but not limited to, the following topics:
 – Event-based / neuromorphic vision.
 – Algorithms: motion estimation, visual odometry, SLAM, 3D reconstruction, image intensity reconstruction, optical flow estimation, recognition, feature/object detection, visual tracking, calibration, sensor fusion (video synthesis, visual-inertial odometry, etc.).
 – Model-based, embedded, or learning-based approaches.
 – Event-based signal processing, representation, control, bandwidth control.
 – Event-based active vision, event-based sensorimotor integration.
 – Event camera datasets and/or simulators.
 – Applications in: robotics (navigation, manipulation, drones…), automotive, IoT, AR/VR, space science, inspection, surveillance, crowd counting, physics, biology.
 – Biologically-inspired vision and smart cameras.
 – Near-focal plane processing, such as pixel processor arrays (PPAs).
 – Novel hardware (cameras, neuromorphic processors, etc.) and/or software platforms, such as fully event-based systems (end-to-end).
 – New trends and challenges in event-based and/or biologically-inspired vision (SNNs, etc.).
 – Event-based vision for computational photography.

**************
Paper/Demo Submission
**************
Research papers and demos are solicited in, but not limited to, the topics listed above. Paper submissions must adhere to the CVPR 2023 paper submission style, format, and length restrictions; see the author guidelines and template provided by the CVPR 2023 main conference. Please also review the dual/double-submission policies of concurrently-reviewing conferences, such as ICCV. If those policies apply to them, authors may wish to limit their submission to four pages (excluding references).

Submissions undergo a double-blind peer-review process via CMT. Accepted papers will be published open access through the Computer Vision Foundation (CVF); see the examples from the 2019 and 2021 editions of the workshop. We encourage authors of accepted papers to include a paragraph on the ethical considerations and impact of their work.

**************
Courtesy Presentations
**************
We also invite courtesy presentations of relevant papers accepted at the CVPR main conference or at other peer-reviewed conferences or journals. These presentations give visibility to your work and help build a community around the topics of the workshop. Such contributions will be checked for relevance to the workshop, but will not undergo a full review and will not be published in the workshop proceedings. Please contact the organizers to make arrangements to showcase your work at the workshop.

Organizers:
– Guillermo Gallego, TU Berlin, ECDF and SCIoI, Germany.
– Davide Scaramuzza, University of Zurich, Switzerland.
– Kostas Daniilidis, University of Pennsylvania, USA.
– Cornelia Fermüller, University of Maryland, USA.
– Davide Migliore, Prophesee, France.

Workshop on Signal processing and machine learning to foster accessibility in cultural environments (SPACE)

Dear colleagues,

We cordially invite you to the workshop:

Signal processing and machine learning to foster accessibility in cultural environments (SPACE)

Organized in Conjunction with the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023),

taking place on the island of Rhodes, Greece, on one of the following dates: June 4, 5, or 10, 2023

https://sites.google.com/view/space-workshop

 

The main topics covered include:

– Perception signals: tracking of human motion for signing or HCI, visual object recognition, localization in cultural spaces
– Actions: navigation in cultural spaces
– Processing for content presentation: retrieval of cultural or educational content, graphics for sign language avatars, VR for the blind, content-based recommendation, content summarization
– Processing for interfaces: speech interfaces, sign language interpretation, gaze tracking
– Audio/speech processing: techniques for improving the accessibility of audio/speech content for people with hearing impairments, such as speech-to-text or audio captioning
– Natural language processing: techniques for improving the accessibility of text content for people with visual impairments, such as text-to-speech or text magnification
– Accessibility for virtual and augmented reality: techniques for making virtual and augmented reality experiences more accessible for people with disabilities, such as haptic feedback or audio-based navigation
– Machine learning for accessibility: techniques for using machine learning to improve the accessibility of cultural content and spaces, such as personalized recommendations, automatic summarization, or automatic audio description of visual content
– Human-computer interaction: techniques for improving the usability of cultural content and spaces for people with disabilities, such as gesture-based or touch-based interfaces
– Assistive technology: techniques for developing and evaluating assistive technology for people with disabilities, such as wearable devices or mobile apps
– Ethical considerations: discussions of the ethical implications of using signal processing and machine learning for accessibility in cultural environments, such as issues of privacy and security
– Relevant datasets

These topics are indicative; feel free to contact us if you are working on a related topic that falls within the broad area of the workshop but is not listed here.

 

Review Process

All submissions will be reviewed by the workshop's program committee, which is composed of experts in the relevant fields. The review process is single-blind. The criteria for evaluating submissions will include originality, significance, and relevance to the workshop themes. In addition, reviewers may consider the quality of the writing, the clarity of the presentation, and the potential of the research to make a meaningful contribution to the field.

 

Submissions will be reviewed and evaluated according to the announced schedule (see Important Dates below), and authors will be notified of the outcome of the review process via email. We aim to release decisions to authors on April 10, 2023. Authors of accepted submissions will be required to revise their papers based on the reviewers' feedback and to submit a final version for inclusion in the conference proceedings. The camera-ready submission deadline is April 28, 2023.

 

Accepted submissions will be invited to present at the workshop as a poster together with a short spotlight presentation. Please note that, for a paper to be included in the conference proceedings, at least one author of each accepted paper must register for the conference and present the paper. Details on special workshop registration rates can be found on the main conference website.

 

Important Dates

Submission: 10 March 2023, 23:59 Pacific Time

Decision: 10 April 2023

Camera-ready: 28 April 2023

 

ICIP 2023 Special Session on Autonomous Vehicle Vision (AVVision)

Early abstract submissions are required! Send your abstract (including a tentative title, the abstract, the author list, and the corresponding author's affiliation and email) to Rui Fan (rui.fan@ieee.org) before January 29, 2023! If your abstract is within the scope of our special session, we will invite you to submit a full paper (4 pages). Please note: the review process for Special Session papers is handled by the Technical Program Committees (TPCs), together with the regular papers. The important dates and paper instructions are the same as for regular papers.

 

Call for Papers 

Due to the recent boom in artificial intelligence technologies, there are growing expectations that fully autonomous driving may become a reality in the near future, bringing fundamental changes to our society. Fully autonomous vehicles offer great potential to improve efficiency on roads, reduce traffic accidents, increase productivity, and minimize our environmental impact in the process.

As a key component of autonomous driving, autonomous vehicle vision (AVVision) systems are typically developed based on cutting-edge computer vision, machine/deep learning, image/signal processing, and advanced sensing technologies. With recent advances in deep learning, AVVision systems have achieved compelling results. However, many challenges remain. For instance, perception modules do not perform well in poor weather and/or illumination conditions or in complex urban environments. Developing robust, all-weather visual environment perception algorithms is a popular research area that requires more attention. In addition, most perception methods are computationally intensive and cannot run in real-time on embedded, resource-limited hardware. Fully exploiting parallel-computing architectures, such as embedded GPUs, for real-time perception, prediction, and planning is therefore another hot subject in autonomous driving research. Furthermore, while existing supervised learning approaches have achieved compelling results, their performance depends entirely on the quality and amount of labeled training data, and labeling such data is a time-consuming and labor-intensive process. Un/self-supervised learning approaches and domain adaptation techniques are therefore becoming increasingly crucial for real-world autonomous driving applications.

Research papers are solicited in, but not limited to, the following topics:

• 3D geometry reconstruction for autonomous driving;

• Driving scene understanding;

• Self-supervised/unsupervised visual environment perception;

• Driver status monitoring and human-car interfaces;

• Deep/machine learning and image analysis for autonomous vehicle perception;

• Adversarial domain adaptation for autonomous driving.

Organizers 

Dr. Rui Ranger Fan, Tongji University

Dr. Wenshuo Wang, McGill University

Important Dates

Paper Submission Deadline: February 15, 2023

Paper Acceptance Notification: June 21, 2023

Final Paper Submission Deadline: July 5, 2023

Submission

Paper Submission Instructions: https://cmsworkshops.com/ICIP2023/papers.php. The review process for Special Session papers is handled by the TPCs, together with the regular papers. The important dates and paper instructions are the same as for regular papers.

ICIAP 2023 – 22nd International Conference on Image Analysis and Processing

Call for Papers

ICIAP 2023 is the 22nd edition of a series of conferences organized biennially by CVPL, the Italian Member Society of the International Association for Pattern Recognition (IAPR).

The focus of the conference is on both classic and recent trends in computer vision, pattern recognition, and image processing. It covers both theoretical and applied aspects, with particular emphasis on the following topics:

Pattern Recognition  
Machine Learning and Deep Learning 
3D Computer Vision and Geometry 
Image Analysis: Detection and Recognition 
Video Analysis & Understanding
Biomedical and Assistive Technology 
Digital Forensics and Biometrics
Multimedia 
Cultural Heritage 
Robot Vision and Automotive
Shape Representation, Recognition and Analysis
Augmented and Virtual Reality
Geospatial Analysis
Computer Vision for UAVs

The conference will be held in Udine, Italy, on 11–15 September 2023.
The conference is structured in oral and poster sessions and offers invited lectures from distinguished speakers. Satellite workshops and tutorials are also organised.


Dates

Paper Submission 1st round: 15 February 2023
Notifications to Authors 1st round: 15 April 2023
Paper Submission 2nd round: 1 May 2023
Notifications to Authors 2nd round: 1 July 2023
Camera Ready papers due: 15 July 2023
Main Conference: 12–14 September 2023
Workshops and Tutorials: 11 and 15 September 2023

Submission


All submissions will be handled electronically via the conference’s CMT Website: 
https://cmt3.research.microsoft.com/ICIAP2023
Papers will be selected through a double-blind review process, taking into account originality, significance, clarity, soundness, relevance and technical contents. 
Each submission will be managed by two Area Chairs and reviewed by at least three reviewers. 
Accepted papers will be published in indexed conference proceedings.


People

General Chairs: Gian Luca Foresti (U. Udine), Andrea Fusiello (U. Udine), Edwin Hancock (U. York)
Program Chairs: Michael Bronstein (U. Oxford), Barbara Caputo (Politecnico Torino), Giuseppe Serra (U. Udine)
Workshop Chairs: Federica Arrigoni (Politecnico Milano), Lauro Snidaro (U. Udine)
Tutorial Chairs: Christian Micheloni (U. Udine), Francesca Odone (U. Genova)
Publication Chairs: Claudio Piciarelli (U. Udine), Niki Martinel (U. Udine)
Industrial Liaison Chair: Pasqualina Fragneto (STM)
Publicity/Social Chairs: Matteo Dunnhofer (U. Udine), Beatrice Portelli (U. Udine)
Local Organization Chairs: Eleonora Maset (U. Udine), Andrea Toma (U. Udine), Emanuela Colombi (U. Udine), Alex Falcon (U. Udine)

EarthVision Workshop @ CVPR 2023

 

CALL FOR PARTICIPANTS & PAPERS

 

EarthVision 2023 – Large Scale Computer Vision for Remote Sensing Imagery Workshop 

in conjunction with CVPR 2023, June 2023, Vancouver, Canada. 

 

Website: https://www.grss-ieee.org/earthvision2023/

 

AIMS AND SCOPE

Earth Observation (EO) and remote sensing are fast-growing fields of investigation where computer vision, machine learning, and signal/image processing meet. The general objective of EO is to provide large-scale and consistent information about processes occurring at the surface of the Earth by exploiting data collected by airborne and spaceborne sensors. EO covers a broad range of tasks, from detection to registration, data mining, and multi-sensor, multi-resolution, multi-temporal, multi-modal fusion and regression, to name just a few. It serves numerous applications such as location-based services, online mapping, large-scale surveillance, 3D urban modeling, navigation systems, natural hazard forecast and response, climate change monitoring, virtual habitat modeling, food security, etc. The sheer amount of data calls for highly automated scene interpretation workflows.


The EarthVision workshop, now in its seventh edition, is held at CVPR 2023 and aims at fostering collaboration between the computer vision, machine learning, and remote sensing communities to boost automated analysis of EO data. EarthVision strives to build cooperation within the CVPR community in this highly challenging and quickly evolving field, which has a significant impact on society, the economy, industry, and the environment.

 

We invite contributions in the fields of (not exhaustive list):

  • Super-resolution in the spectral and spatial domain

  • Hyperspectral and multispectral image processing

  • Reconstruction and segmentation of optical and LiDAR 3D point clouds

  • Feature extraction and learning from spatio-temporal data 

  • Analysis of UAV / aerial and satellite images and videos

  • Deep learning tailored for large-scale Earth Observation

  • Domain adaptation, concept drift, and the detection of out-of-distribution data

  • Evaluating models using unlabeled data

  • Self-, weakly, and unsupervised approaches for learning with spatial data

  • Human-in-the-loop and active learning

  • Multi-resolution, multi-temporal, multi-sensor, multi-modal processing

  • Fusion of machine learning and physical models

  • Explainable and interpretable machine learning in Earth Observation applications

  • Applications for climate change, sustainable development goals, and geoscience

  • Public benchmark datasets: training data standards, testing & evaluation metrics, as well as open source research and development.

 

IMPORTANT DATES

Full paper submission: March 9, 2023

Notification of acceptance: March 30, 2023

Camera-ready paper: April 6, 2023

Workshop (full day): June 18, 2023

 

SUBMISSION GUIDELINES

A complete paper should be submitted using the EarthVision templates provided on the workshop website. The paper length must not exceed 8 pages (excluding references), and the formatting follows the CVPR 2023 instructions. All manuscripts will be subject to a double-blind review process, i.e., authors must not identify themselves in the submitted papers. The reviewing process is single-stage, meaning there will be no rebuttal to reviewers.

 

Papers are to be submitted using the dedicated submission platform on the workshop website. By submitting a manuscript, the authors guarantee that it has not been previously published or accepted for publication in substantially similar form. CVPR rules regarding plagiarism, double submission, etc. apply.  

 

WORKSHOP ORGANIZERS

Ronny Hänsch, German Aerospace Center, Germany

Devis Tuia, EPFL, Switzerland

Jan Dirk Wegner, University of Zurich & ETH Zurich, Switzerland

Bertrand Le Saux, ESA/ESRIN, Italy

Nathan Jacobs, Washington University in St. Louis, USA

Loïc Landrieu, IGN, France

Charlotte Pelletier, UBS Vannes, France

Hannah Kerner, Arizona State University, USA

Beth Tellman, University of Arizona, USA

 

CHALLENGE

EarthVision 2023 will feature the African Biomass Challenge, whose goal is to accurately estimate aboveground biomass in different cocoa plantations in Côte d'Ivoire. The dataset consists of ESA Sentinel-2 images, NASA GEDI data, and ground-truth biomass. All AI practitioners, experts, and enthusiasts are invited to take part in the competition, organized on Zindi.

 

SPONSORING

The event is co-organized by the Image Analysis and Data Fusion Technical Committee of the IEEE-GRSS, and it is sponsored by Blacksky, Exolabs, Picterra, and Kitware.

 
