LAVA’25: Call for Papers and Challenge Participation

We invite researchers, practitioners, and enthusiasts to contribute to the Workshop and Grand Challenge on Large Vision–Language Model Learning and Applications (LAVA), to be held in conjunction with ACM Multimedia 2025.

 

🔬 LAVA Workshop Overview

The LAVA Workshop explores innovations and challenges in Large Vision–Language Models (LVLMs). We welcome contributions across a broad spectrum of topics, including but not limited to:

  • Data preprocessing and prompt engineering for LVLMs
  • Training and compression techniques for LVLMs
  • Self-supervised, unsupervised, few-shot, and zero-shot learning
  • Generative AI and multimodal generation
  • Trustworthy and explainable LVLMs
  • Security, privacy, and ethical concerns in LVLMs
  • Evaluation and benchmarking methodologies
  • LVLMs for downstream tasks and applications
  • LVLMs in virtual, augmented, and mixed reality
  • LVLMs for low-resource scenarios
  • Multimodal integration beyond vision and language

Submission Types

  • Short papers (non-archived): Up to 4 pages, excluding references
  • Long papers (archived in ACM Digital Library): Up to 8 pages, excluding references

All submissions should follow the official ACM MM format.

Workshop Important Dates

  • 📄 Paper submission deadline: June 15, 2025
  • 🚀 ACM MM fast-track submission: July 11, 2025
  • Notification of acceptance: July 24, 2025
  • 🖋️ Camera-ready deadline: August 1, 2025
  • 📅 Workshop date: October 27–28, 2025

🔗 More info: https://lava-workshop.github.io/workshop

 

🏆 LAVA Grand Challenge 2025

This year's LAVA Challenge focuses on enhancing LVLM capabilities in interpreting complex visual documents, including Data Flow Diagrams (DFDs), Class Diagrams, Gantt Charts, and Architectural and Building Design Drawings.

The 2025 challenge emphasizes Japanese government and business documents in PDF format, each accompanied by multiple-choice (10-option) questions requiring deep visual–linguistic understanding.

Challenge Important Dates

  • Registration opens: March 15, 2025
  • 📂 Public data release: April 17, 2025
  • Registration closes: May 31, 2025
  • 🔐 Private test data release: no separate release; the same test data is used for both the public and private leaderboards
  • 📝 Final results, report & paper submission deadline: June 30, 2025
  • 📢 Notification of acceptance: July 24, 2025
  • 🖋️ Camera-ready deadline: August 26, 2025
  • 📅 Challenge presentation date: October 27–31, 2025

🔗 More info: https://lava-workshop.github.io/grandchallenge

 

We look forward to your contributions and participation in pushing the frontiers of vision–language learning!

CfP: ACM Multimedia 2025 Grand Challenge “MultiMediate”


––––––––––––––––––––––––––––––––––––––––––––––––––––––
Dr. Philipp Müller 
Senior Researcher

Deutsches Forschungszentrum für Künstliche Intelligenz GmbH (DFKI)
Stuhlsatzenhausweg 3, Campus D3 1
66119 Saarbrücken
Germany
+49 681 85775 7752
–––––––––––––––––––––––––––––––––––––––––––––––––––––– 


1st CFP due July 15: CAIS 2025 Automated and Intelligent Systems, Oct 1–2, Online & Oklahoma City, USA

Posted April 30, 2025, by Daniela Lopez de Luise

FairBench: a Python library for comprehensive AI fairness exploration


We would like to announce the release of FairBench, a comprehensive Python library for exploring and understanding AI biases and fairness.

Why an(other) AI fairness library?

As AI systems become ingrained in everyday life, it is important to ensure their fairness across different demographic groups and their intersections.
However, navigating the many algorithmic bias/fairness definitions and metrics in a standardized way can be challenging.
 
Introducing FairBench: a partner in fair AI development
FairBench provides a robust and flexible platform for in-depth AI fairness exploration, for example, as part of your AI fairness compliance plan.
It composes fairness definitions from a growing list of simpler building blocks, and lets you combine and run them, automatically or manually, in a couple of lines of code.
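
To make the "couple of lines of code" promise concrete, here is a minimal sketch of producing a fairness report for a binary classifier. The entry points shown (fb.Fork, fb.categories, fb.multireport, fb.describe) follow the library's documented quickstart and may change between versions, so treat them as illustrative and consult the documentation linked below for the current API.

    # Minimal quickstart-style sketch; entry-point names may differ
    # between FairBench versions -- check the documentation linked below.
    import fairbench as fb

    # Toy ground truth, model predictions, and a sensitive attribute.
    y      = [1, 0, 1, 1, 0, 0, 1, 0]
    yhat   = [1, 0, 0, 1, 0, 1, 1, 0]
    gender = ["m", "f", "f", "m", "f", "m", "m", "f"]

    # Declare the sensitive attribute; fb.categories expands a categorical
    # column into one branch per value, so groups (and, with more
    # attributes, their intersections) are analysed separately.
    sensitive = fb.Fork(fb.categories @ gender)

    # Compose base measures across all branches into a single report,
    # then print a textual summary to the console.
    report = fb.multireport(predictions=yhat, labels=y, sensitive=sensitive)
    fb.describe(report)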

Take advantage of features designed to help you grasp a broad picture of your systems:

🧱 Measures are built from simpler blocks through a standard scheme that makes it easier to understand what each one represents.
📈 Generate fairness reports and stamps for classification, recommendation, ranking, or scoring tasks. Reports contain descriptions of their values, can be saved, and can be compared to track progress as datasets evolve or between different algorithm versions.
⚖️  Perform analyses across multiple multi-value sensitive attributes and their intersections. Visualize the results in the console or in the browser, or export them in various formats (HTML, text, JSON, etc.) for integration into your pipelines.
🧪 Filter reports (simplify/transform/extract ad-hoc summaries) to get insights about where to start your investigation, and backtrack to the intermediate computations of worrisome values to get a feel for algorithmic issues at play.
🖥️  ML-compatible: handles lists, arrays, dataframes, and tensors from popular frameworks. These can come from any modality you are working with (tabular data, images, graphs, text, etc.).
📦 Comes together with exploratory datasets and algorithms, currently from the tabular and vision data modalities, for out-of-the-box experimentation.
 
We invite you to explore FairBench
Documentation, including the ability to try the library in your browser: https://fairbench.readthedocs.io/
If you are interested in more direct discussion about AI fairness, join us on our Discord server: https://discord.gg/WwQWFSjSWZ
We are eager to hear your feedback and receive feature requests and bug reports via Discord, GitHub issues, or email. We welcome pull requests, and encourage you to join our community in advancing the field of AI fairness. 

Ready to assess AI fairness?
 
Sincerely,
Emmanouil Krasanakis

The 6th GENEA Workshop @ACM Multimedia 2025 – Generation and Evaluation of Non-verbal Behaviour for Embodied Agents

📢 Call for Papers

The 6th Generation and Evaluation of Non-verbal Behaviour for Embodied Agents (GENEA) Workshop
October 27 or 28, 2025 (in person)
Held in conjunction with ACM Multimedia 2025, Dublin, Ireland
Website: https://genea-workshop.github.io/2025/workshop/

Paper submissions are now open for the 6th edition of the GENEA Workshop, focusing on the generation of non-verbal behaviours such as gesticulation, facial expressions, and gaze—a crucial component of natural interaction with embodied agents, including virtual agents and social robots.

Currently, behaviour generation is typically powered by rule-based systems, data-driven approaches such as generative AI, or hybrid models. For evaluation, both objective and subjective methods are used, though their application and validity are often debated. This workshop aims to bring together researchers from diverse disciplines working on various aspects of non-verbal behaviour generation, facilitating discussion on advancing both generation techniques and evaluation methodologies.


Topics of Interest

We invite original contributions on topics including (but not limited to):

  • Automated synthesis of facial expressions, gestures, and gaze movements, including multimodal synthesis
  • Audio-, music-, emotion-driven, or stylistic non-verbal behaviour synthesis
  • Closed-loop / end-to-end non-verbal behaviour generation (from perception to action)
  • Non-verbal behaviour synthesis in two-party and group interactions
  • Use of LLMs/VLMs in the context of non-verbal behaviour synthesis
  • New datasets, annotation methods, and analyses of existing datasets related to non-verbal behaviour
  • Cross-cultural and multilingual influences on non-verbal behaviour generation
  • Cognitive and affective models for non-verbal behaviour generation
  • Social perception and attribution of synthesised non-verbal behaviour
  • Ethical considerations and biases in non-verbal behaviour synthesis
  • Subjective and objective evaluation methods for any of the above topics

📝 Submission Types

We welcome:

  • Long papers (8 pages)
  • Short papers (4 pages)

All submissions should follow the double-column ACM conference format used by ACM Multimedia (https://acmmm2025.org/call-for-papers/). Pages containing only references do not count toward the page limit. Papers must be submitted in PDF format via OpenReview and formatted for double-blind review. Accepted papers will be presented at the workshop and included in the companion proceedings.

Submission site: https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/GENEA

🗓️ Important Dates (Anywhere on Earth, AoE)

  • Paper abstract deadline: 9 July 2025
  • Full submission deadline: 11 July 2025
  • Notification of acceptance: 1 August 2025
  • Camera-ready deadline: 10 August 2025
  • Poster submission deadline: 19 September 2025
  • Notification of poster acceptance: 3 October 2025
  • Workshop date: 27 or 28 October 2025

Invited Speakers

  • Catherine Pelachaud – CNRS-ISIR, Sorbonne University, France
  • Asli Ozyurek – Radboud University, The Netherlands

👥 Organisers

  • Taras Kucherenko – Electronic Arts (EA), Sweden
  • Rajmund Nagy – KTH Royal Institute of Technology, Sweden
  • Alice Delbosc – Davi, The Humanizers, France
  • Oya Celiktutan – King's College London, United Kingdom
  • Youngwoo Yoon – ETRI, South Korea
  • Gustav Eje Henter – KTH Royal Institute of Technology / Motorica AB, Sweden
  • Laura Hensel – University of Glasgow, Scotland, United Kingdom

For more information, visit our website, contact us at genea-contact@googlegroups.com, or follow us:
🔵 @geneaworkshop.bsky.social (BlueSky)
🐦 @genea_workshop (X)
💼 LinkedIn Group

We look forward to your contributions!

 
