
2025 Second Workshop on Explainable Artificial Intelligence for the Medical Domain – EXPLIMED
within the 28th European Conference on Artificial Intelligence (ECAI 2025)

25-30 October 2025, Bologna, Italy
https://sites.google.com/view/explimed-2025/home-page
=============================================================================

AI has the potential to revolutionize medical care, but there are concerns about fairness and transparency. Explainable AI (XAI) is necessary to enhance transparency, accountability, and trustworthiness in medical AI systems. XAI provides understandable insights into AI-powered clinical decision-making, enabling healthcare professionals to trust recommendations and empowering patients to participate actively in their healthcare decisions. By addressing ethical concerns related to biased or discriminatory outcomes, XAI ensures fair and equitable healthcare practices. Finally, XAI aids in the validation process, offering insights into model predictions and facilitating the integration of AI technologies into clinical workflows.
This workshop aims to explore and exhibit research, methodologies, and case studies on the integration of Explainable Artificial Intelligence (XAI) in the medical domain, providing a platform for researchers, practitioners, and policymakers to share insights and advancements that improve transparency and trust in medical AI systems. In particular, it will highlight the importance of XAI in medical decision-making, showcase innovative approaches and technologies that enhance interpretability in medical AI, and discuss regulatory implications and compliance strategies for incorporating XAI into healthcare AI applications.
Possible topics related to application in the healthcare domain include (but are not limited to):

– eXplainable Artificial Intelligence
– Post-hoc methods for explainability
– Ante-hoc methods for explainability
– Rule-based XAI systems
– Uncertainty modeling
– XAI methods for neuroimaging and neural signals
– Case-based explanations for AI systems
– Fuzzy systems for explainability
– Interpreting and explaining neural networks
– Model-specific vs model-agnostic methods
– Transparent and explainable learning methods
– Interpretable representational learning
– Causal inference and explanations
– Bayesian modeling for interpretability

**** IMPORTANT DATES ****

– Abstract submission deadline: May 10, 2025
– Paper submission deadline: May 21, 2025
– Notification of acceptance: July 21, 2025
– Final paper submission: September 12, 2025
– Early registration deadline: TBA
– Workshop date: 25-26 October 2025

**** SUBMISSIONS ****

Please ensure submissions follow the CEUR style guidelines, utilizing a one-column layout. Accepted paper formats include regular papers (10-20 pages) and short papers (5-9 pages) such as work-in-progress or position papers. Note: Both formats, if accepted, will be part of the workshop proceedings and must be written in English. Papers should be submitted in PDF format through the submission system (https://sites.google.com/view/explimed-2025/authors-guidelines/submissions). We encourage authors to use the LaTeX template and to include ORCIDs in their submissions. 
The workshop will be held in person. Each accepted paper will be assigned either an oral or a poster presentation. In case of a high number of submissions, some papers may be allocated as posters instead of oral presentations.

**** PUBLICATIONS ****
All papers submitted to the EXPLIMED workshop will undergo a double-blind review by independent reviewers. They will be published in the CEUR Workshop Proceedings under a CC-BY 4.0 license (http://ceur-ws.org/) upon acceptance. CEUR-WS proceedings are typically indexed in Scopus.

**** REGISTRATION ****
Please refer to the conference website: https://ecai2025.org/

**** KEYNOTE SPEAKER ****
Alberto Fernández Hilario, Full Professor of Computer Science and Artificial Intelligence, University of Granada

**** ORGANIZING COMMITTEE ****
– Gabriella Casalino, University of Bari, Italy
– Giovanna Castellano, University of Bari, Italy
– Katarzyna Kaczmarek-Majer, Polish Academy of Sciences, Poland
– Raffaele Scaringi, University of Bari, Italy
– Gianluca Zaza, University of Bari, Italy

**** CONTACTS ****
Any inquiries can be directed to gianluca.zaza@uniba.it and raffaele.scaringi@uniba.it

 

Satellite Workshop “Real-Time Implementation and Lightweight GNNs for Conventional and Event-based Cameras” (RT-GNNs 2025) at IEEE ICIP 2025

Call for Papers 

Website: https://sites.google.com/view/rt-gnns-2025/accueil

Description of Topic

Object classification and detection from a video stream captured by conventional or event-based cameras is a fundamental step in applications such as visual surveillance of human activities, observation of animal and insect behavior, human-machine interaction, and all kinds of advanced mobile robotics perception systems. A large number of graph neural networks for the detection and classification of moving objects have been published that outperform conventional deep learning approaches, and many scientific efforts have been reported in the literature to extend them progressively to applications with increasingly complex challenges. However, no single algorithm can simultaneously address all the key challenges present in long video sequences as encountered in real-world cases.

Moreover, the top background subtraction methods currently compared in CDnet 2014 are based on deep convolutional neural networks. Their main drawbacks are high computational and memory requirements, as well as their supervised nature, which demands labeling of large amounts of data. In addition, their performance decreases significantly in the presence of unseen videos. Thus, despite high performance in moving object detection, the current top algorithms are not practicable in real applications.

In recent years, GNNs have also been increasingly used in object detection, object tracking, and mobile robot navigation. Their ability to model spatial and temporal dependencies makes them well-suited for these applications, especially in dynamic environments, where relationships between objects and scene elements must be continuously updated. However, real-time deployment of GNN-based solutions remains a challenge, as they often require significant computational resources, limiting their practicality in embedded and resource-constrained environments. Recently, only a few works have addressed real-time and lightweight GNN algorithms.

The goals of this workshop are thus three-fold:

1) Designing lightweight and practicable GNN algorithms that handle low- and high-level computer vision applications using conventional or event-based cameras;

2) Proposing new algorithms that can fulfil the requirements of real-time applications;

3) Proposing robust and interpretable graph learning to handle the key challenges in these applications.

 

Papers are solicited to address deep learning methods to be applied in image and video processing, including but not limited to the following:

Graph Signal Processing for Computer Vision

Graph Machine Learning for Computer Vision

Transductive/Inductive Graph Neural Networks (GNNs)

GNNs Architectures

Zero-shot Learning

Ensemble learning-based methods

Meta-knowledge Learning methods

RGB-D cameras

Event-based cameras

Hardware Architectures for Graph Processing

 

Main Organizers

Thierry Bouwmans, Associate Professor (HDR), Laboratoire MIA, La Rochelle Université, France.

Tomasz Kryjak, Assistant Professor, Embedded Vision Systems Group, Computer Vision Laboratory, AGH University of Krakow, Poland

Mohamed S. Shehata, Associate Professor, UBC Okanagan, Canada.

Ananda S. Chowdhury, Professor, Jadavpur University, India.

Badri N. Subudhi, Associate Professor, Indian Institute of Technology Jammu, India.

 

Important Dates

Workshop Paper Submission Deadline: 4 June 2025

Workshop Paper Acceptance Notification:  2 July 2025

Workshop Final Paper Submission Deadline: 9 July 2025

Workshop Author Registration Deadline: 16 July 2025

Open sourcing the Neuro-SAN multiagent software

Cognizant AI Lab (CAIL) recently open sourced the NeuroAI Multiagent Accelerator (Neuro-SAN) software for researchers. This software allows you to rapidly build coordinated multiagent systems consisting of multiple LLMs and other AI agents, and includes tools for visualization and performance analysis. See
   https://medium.com/@evolutionmlmail/neuro-san-is-all-you-need-0925aa7ae3d6
for a description, demos, and the repo. And if you are interested, we’d be happy to give a demo, e.g. at IEEE CEC, ICML, IJCNN, ITU AI for Good, GECCO, CogSci, or IJCAI.

iEDGE 2025 – Edge Intelligence & Trustworthy Decentralized AI – Dubrovnik, Oct 14–17

We are pleased to share the Call for Papers for the International Symposium on Edge Intelligence, Trustworthy and Decentralized Artificial Intelligence (iEDGE 2025), to be held in Dubrovnik, Croatia, co-located with FLTA 2025.
Please find the full CFP below. We welcome your submissions and encourage you to share this opportunity with your networks.
Best regards,  
Siba Haidar  
Publicity Chair – iEDGE 2025
––––––––––––––––––––––––––––––––––––––––––  
Call for Papers – International Symposium on Edge Intelligence, Trustworthy and Decentralized Artificial Intelligence (iEDGE 2025)
October 14-17, 2025, Dubrovnik, Croatia
iEDGE 2025 is a premier international venue dedicated to advancing research and innovation at the intersection of trustworthy, decentralized, and distributed AI systems. As the scale, complexity, and societal impact of AI systems grow, ensuring trust, privacy, robustness, and decentralization becomes critical.
We invite researchers, practitioners, and innovators from academia and industry to join us in exploring the new frontiers of Decentralized Artificial Intelligence, Trustworthy Machine Learning, and Secure Distributed Systems. Co-located with FLTA 2025, iEDGE focuses on scaling trustworthiness principles beyond federated learning into the broader decentralized AI ecosystem in the edge and the computing continuum.
The International Symposium on Edge Intelligence, Trustworthy and Decentralized Artificial Intelligence (iEDGE 2025) addresses the use of advanced intelligent systems in providing trustworthy decentralized AI solutions across many fields, along with the associated challenges, approaches, and future directions. We invite the submission of original papers on all topics related to trustworthy and decentralized artificial intelligence in the edge and the computing continuum, with special interest in but not limited to:
Decentralized Artificial Intelligence
* Decentralized foundation model training and serving
* Peer-to-peer (P2P) inference networks
* Blockchain-based machine learning platforms
* Distributed optimization for large AI models
* Incentive mechanisms in decentralized AI systems
Trustworthy AI Systems
* Bias mitigation, fairness, and accountability
* Verifiable decentralized model updates
* Explainability in distributed learning
* Robustness against adversarial attacks
Security and Privacy
* Secure aggregation and differential privacy
* Trust mechanisms for model collaboration
* Resilient learning under malicious participants
* Secure Edge AI deployments
Edge Intelligence
* Edge deployment of open foundation models
* Collaborative edge-cloud AI systems
* Privacy-preserving AI for constrained environments
* 5G/6G architectures for decentralized AI
* Resource allocation and scheduling
=====================IMPORTANT DATES=============================
Paper Submission Deadline:  August 1, 2025
Notification of Acceptance:    August 30, 2025
Camera-Ready Submission:  September 15, 2025
Symposium Dates:                 October 15-17, 2025
=====================SUBMISSION INSTRUCTIONS======================
Authors are invited to submit original research work that has not previously been submitted to or published in any other venue. Submitted papers may be up to 8 pages; two additional pages may be added for references. Submitted papers (PDF format) must use the A4 IEEE Manuscript Templates for Conference Proceedings. The Word template is available at https://flta-conference.org/iedge-2025/20251/files/conference-template-a4.docx. The LaTeX template is available at https://flta-conference.org/iedge-2025/20251/files/Conference-LaTeX-template_10-17-19.zip.
Please submit your paper via the EasyChair submission system at: https://easychair.org/conferences/?conf=flta2025 (please select iEDGE track).
All accepted papers will be published in the IEEE proceedings. Best Paper Awards will be granted to high-quality papers. Extended versions of selected high-quality papers will be invited for submission to prestigious journals (to be announced).
At least one of the authors of any accepted paper is requested to register and present the paper at the conference. If it is not possible for any of the authors to present the paper in person, online presentation will be allowed.
================= ORGANIZING COMMITTEE ===========================
General Chairs

PETS2025 at AVSS2025 – Call for Participation

We are pleased to announce the 2025 International Challenge on Performance Evaluation of Tracking and Surveillance (PETS2025), which will take place in conjunction with the IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2025). The focus of the PETS2025 challenge is multi-authority, multi-sensor maritime border surveillance, and it comprises the following three tasks:
  1. Object Detection: Detecting and classifying targets (vessels, vehicles, and people) in maritime and coastal environments using multi-platform, multi-spectral sensors (e.g., RGB, SWIR, UV, and Thermal from ground sensors and UAVs).
  2. Long-term Multi-Target Tracking: Tracking multiple targets across diverse terrains.
  3. Geolocation Approximation: Estimating the geolocations of specific targets using UAV imagery and telemetry data alone.
The deadline for submitting challenge results is 15th June 2025. All participating teams will be invited to co-author a joint challenge summary paper, which will be published at AVSS 2025. Note that only the Object Detection task is mandatory; the other two tasks are optional.
Further details can be found on the official challenge website: pets2025.net
We would be very grateful if you would consider participating or kindly help us disseminate this invitation within your network.
If you require further information, please let me know.
Thank you for your time and consideration.
Sincerely,
Thanet Markchom
Postdoctoral Research Assistant
Computer Vision Group
University of Reading