First International Workshop on “AI-based All-Weather Surveillance System”, AWSS 2024 in conjunction with ACCV 2024

Main Organizers

Thierry Bouwmans, Associate Professor (HDR), Laboratoire MIA, La Rochelle Université, France, Email: tbouwman@univ-lr

Santosh Kumar Vipparthi, Associate Professor, Dept. of Computer Science & Engineering, MNIT, Jaipur, India, Email: kvipparthi@iitrpr.ac.in

Subrahmanyam Murala, Trinity College Dublin, Ireland, Email: muralas@tcd.ie

Sajid Javed, Khalifa University of Science and Technology, UAE, Email: sajid.javed@ku.ac.ae

 

Description

Advances in computer vision and the falling cost of camera hardware have enabled the massive deployment of cameras for monitoring physical premises. The extensive deployment of fixed and movable cameras for control and safety has resulted in visual data collection for online and post-event analysis. However, environmental conditions such as haze or fog, snow, dust, raindrops, and rain streaks degrade the perceptual quality of the data, ultimately affecting performance on high-level computer vision tasks such as change detection, object detection, traffic monitoring, border surveillance, behavior analysis, video synopsis, action recognition, anomaly detection, object tracking, and motion magnification. In the literature, modeling methods based on deep learning (CNNs, GNNs) and graph signal processing concepts have been employed to address weather-specific applications only (removal of rain, fog, snow, or haze in isolation). Only a few algorithms can handle multiple weather conditions with a unified network. Moreover, these algorithms have high computational complexity, which leads to poor inference performance in real-world scenarios, and they are often unsuitable for unseen scenarios. In addition, very few algorithms perform simultaneous image/video restoration and static/moving object detection in these challenging multi-weather scenarios.

Most of the time, these algorithms employ two-stage architectures to address these challenges. In the first stage, an application-specific image/video restoration (degradation-removal) algorithm is applied, and in the second stage, high-level video processing tasks such as static/moving object detection are performed. Thus, there is an immense need to design and develop end-to-end unified learning architectures that restore the images/videos and detect the static/moving objects under sparse to extreme multi-weather conditions.
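The contrast above can be sketched in a few lines. This is a minimal illustrative sketch, not any specific method from the literature: `restore`, `detect`, and the pixel-threshold logic are hypothetical placeholders standing in for learned networks, used only to show how a two-stage pipeline composes two separate passes while a unified model fuses both tasks into a single pass.

```python
def restore(frame, rain=2):
    # Stage 1 (placeholder for a learned restoration network):
    # subtract an assumed rain-streak intensity from each pixel.
    return [max(p - rain, 0) for p in frame]

def detect(frame, thresh=5):
    # Stage 2 (placeholder for a detection network):
    # flag pixel indices above a threshold as "objects".
    return [i for i, p in enumerate(frame) if p > thresh]

def two_stage(frame):
    # Two separate inference passes; any restoration error
    # propagates uncorrected into the detection stage.
    return detect(restore(frame))

def unified(frame, rain=2, thresh=5):
    # A unified end-to-end network would share features and optimize
    # restoration and detection jointly; sketched here as one fused pass.
    return [i for i, p in enumerate(frame) if p - rain > thresh]
```

In a real architecture the fused pass would be a single network trained with a joint loss, avoiding the duplicated computation and error propagation of the two-stage design.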

 
Goal of This Workshop

The goals of this workshop are three-fold:

  1. Designing a unified framework that handles low- and high-level computer vision applications such as intelligent transportation, intelligent surveillance systems, and conventional/aerial image or video enhancement.

  2. Proposing new algorithms that can fulfil the requirements of real-time applications.

  3. Proposing robust and interpretable deep learning methods to handle the key challenges in these applications.

 
 
 

Broad Subject Areas for Submitting Papers

Papers are solicited that address deep learning methods for AI-based all-weather surveillance systems, including but not limited to the following:

  • Graph Machine Learning for Computer Vision

  • Transductive/Inductive Graph Neural Networks (GNNs)

  • GNNs Architectures

  • Zero-shot Learning

  • Graph Signal Processing for Computer Vision

  • Graph Spectral Clustering for Computer Vision

  • Ensemble learning-based methods

  • Meta-knowledge Learning methods

  • RGB-D and event-based cameras

 
Important Dates

Full Paper Submission Deadline:       August 30, 2024

Decisions to Authors:                         September 20, 2024

Camera-ready Deadline:                    Same as ACCV 2024.

Selected papers, after extensions and further revisions, will be published in a special issue of an international journal.

 
