Satellite Workshop “Real-Time Implementation and Lightweight GNNs for Conventional and Event-based Cameras” (RT-GNNs 2025) at IEEE ICIP 2025

Call for Papers 

Website: https://sites.google.com/view/rt-gnns-2025/accueil

Description of Topic

Object classification and detection from a video stream captured by conventional or event-based cameras is a fundamental step in applications such as visual surveillance of human activities, observation of animal and insect behavior, human-machine interaction, and all kinds of advanced mobile robotics perception systems. A large number of graph neural networks (GNNs) for the detection and classification of moving objects have been published, outperforming conventional deep learning approaches. Many scientific efforts have been reported in the literature to progressively extend their use to applications whose challenges are increasingly complex. However, no algorithm is yet able to simultaneously address all the key challenges present in long video sequences, as encountered in real-world cases.

The top background subtraction methods currently ranked on CDnet 2014 are based on deep convolutional neural networks. However, their main drawbacks are their computational and memory requirements, as well as their supervised nature, which requires labeling a large amount of data. In addition, their performance decreases significantly on unseen videos. Thus, the current top algorithms are not practicable in real applications despite their high performance in moving object detection.

In recent years, GNNs have also been increasingly used in object detection, object tracking, and mobile robot navigation. Their ability to model spatial and temporal dependencies makes them well-suited for these applications, especially in dynamic environments, where relationships between objects and scene elements must be continuously updated. However, real-time deployment of GNN-based solutions remains a challenge, as they often require significant computational resources, limiting their practicality in embedded and resource-constrained environments. Recently, only a few works have addressed real-time and lightweight GNN algorithms.

The goals of this workshop are thus three-fold:

1) designing lightweight and practicable GNN algorithms that handle low- and high-level computer vision applications using conventional or event-based cameras;

2) proposing new algorithms that can fulfil the requirements of real-time applications;

3) proposing robust and interpretable graph learning methods to handle the key challenges in these applications.


Papers are solicited addressing deep learning methods applied to image and video processing, including but not limited to the following topics:

Graph Signal Processing for Computer Vision

Graph Machine Learning for Computer Vision

Transductive/Inductive Graph Neural Networks (GNNs)

GNNs Architectures

Zero-shot Learning

Ensemble learning-based methods

Meta-knowledge Learning methods

RGB-D cameras

Event-based cameras

Hardware Architectures for Graph Processing


Main Organizers

Thierry Bouwmans, Associate Professor (HDR), Laboratoire MIA, La Rochelle Université, France.

Tomasz Kryjak, Assistant Professor, Embedded Vision Systems Group, Computer Vision Laboratory, AGH University of Krakow, Poland.

Mohamed S. Shehata, Associate Professor, UBC Okanagan, Canada.

Ananda S. Chowdhury, Professor, Jadavpur University, India.

Badri N. Subudhi, Associate Professor, Indian Institute of Technology Jammu, India.


Important Dates

Workshop Paper Submission Deadline: 4 June 2025

Workshop Paper Acceptance Notification: 2 July 2025

Workshop Final Paper Submission Deadline: 9 July 2025

Workshop Author Registration Deadline: 16 July 2025
