Chicago, USA
June 30-July 3, 2025
Website: https://mad2025.aimultimedialab.ro/
**CALL FOR PAPERS**
Modern communication no longer relies solely on mainstream media such as newspapers or television; it increasingly takes place over social networks, in real time, with live interactions among users. The faster distribution and the sheer amount of information available, however, have also led to a growing amount of misleading content, disinformation and propaganda. Consequently, the fight against disinformation, in which news agencies and NGOs (among others) take part on a daily basis to prevent citizens' opinions from being distorted, has become even more crucial and demanding, especially for sensitive topics such as politics, health and religion.
Disinformation campaigns leverage, among other means, AI-based tools for content generation and modification: hyper-realistic visual, speech, textual and video content has emerged under the collective name of "deepfakes", and more recently through the use of Large Language Models (LLMs) and Large Multimodal Models (LMMs), undermining the perceived credibility of media content. It is, therefore, even more crucial to counter these advances by devising robust and trustworthy AI tools, accessible to journalists and fact-checkers, that can detect inaccurate, synthetic and manipulated content.
Future multimedia disinformation detection research relies on the combination of different modalities and on the adoption of the latest advances in deep learning approaches and architectures. These raise new challenges and questions that need to be addressed in order to reduce the effects of disinformation campaigns. The workshop, now in its fourth edition, welcomes contributions related to different aspects of AI-powered disinformation detection, analysis and mitigation.
Topics of interest include but are not limited to:
* Disinformation detection in multimedia content (e.g., video, audio,
texts, images)
* Multimodal verification methods
* Synthetic and manipulated media detection
* Multimedia forensics
* Disinformation spread and effects in social media
* Analysis of disinformation campaigns in societally-sensitive domains
* Robustness of media verification against adversarial attacks and
real-world complexities
* Fairness and non-discrimination of disinformation detection in
multimedia content
* Explaining disinformation detection results to non-expert users
* Temporal and cultural aspects of disinformation
* Dataset sharing and governance in AI for disinformation
* Datasets for disinformation detection and multimedia verification
* Open resources, e.g., datasets, software tools
* Large Language Models for analyzing and mitigating disinformation
campaigns
* Large Multimodal Models for media verification
* Multimedia verification systems and applications
* System fusion, ensembling and late fusion techniques
* Benchmarking and evaluation frameworks
**IMPORTANT DATES**
* Paper submission due: April 10, 2025
* Acceptance notification: April 29, 2025
* Camera-ready papers due: May 5, 2025
* Workshop @ ICMR 2025: June 30, 2025
**SUBMISSIONS**
When preparing your submission, please adhere strictly to the ACM ICMR 2025 instructions (see https://www.icmr-2025.org/authors/paper-submissions) to ensure a proper reviewing process and inclusion in the ACM Digital Library proceedings.
Please use the following link to submit papers: https://easychair.org/my/conference?conf=mad2025.
Submissions to the MAD workshop are expected to be long papers (8-page limit, plus additional pages for references) and to comply with a double-blind review process. Details on ensuring this compliance can be found on the website linked above.
**ORGANIZERS**
* Dan-Cristian Stanciu, Politehnica University of Bucharest, Romania
* Milica Gerhardt, Fraunhofer IDMT, Germany
* Symeon Papadopoulos, Centre for Research and Technology Hellas, Greece
* Vera Schmitt, Technical University Berlin, Germany
* Bogdan Ionescu, Politehnica University of Bucharest, Romania
* Roberto Caldelli, CNIT and Mercatorum University, Italy
* Giorgos Kordopatis-Zilos, Czech Technical University, Czechia
* Adrian Popescu, CEA LIST, France