DiversityOne Open Challenge at Ubicomp 2025: Exploring People’s Everyday Life Behavior with Mobile Data

DiversityOne Open Challenge: Exploring Diversity in People’s Everyday Life Behavior with Mobile Data

held at 

The ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp)

Espoo, Finland, October 12-16, 2025

Workshop website: https://datascientiafoundation.github.io/diversityone-2025/

The open challenge aims to explore the DiversityOne dataset, one of the largest and most geographically diverse datasets for everyday life behavior modeling. The dataset combines questionnaires on demographic and psychosocial variables from 18K participants with passive smartphone sensor data and self-reported annotations from 782 students across eight universities in eight countries. The study followed ethical approval procedures at each participating institution and complies with the European General Data Protection Regulation (GDPR). The dataset is a rich, flexible, and valuable research resource that can be used to answer research questions in multiple fields: machine learning, mobile sensing, computational social science, behavior recognition, and many others. This challenge offers the opportunity to work on the dataset and receive useful feedback on your research. We welcome contributions from researchers of diverse backgrounds and geographical provenances. In particular, we welcome contributions that address aspects including, but not limited to:

  • AI/ubiquitous computing/mobile sensing

  • data-centric AI

  • interactive machine learning

  • noisy annotation detection and correction

  • domain adaptation

  • transfer learning

  • activity and mood recognition

  • responsible and ethical AI

  • Computational social science

  • network analysis of social systems

  • sequence analysis of diary data

  • analysis of communities of practices

  • machine learning or rule-based analysis of social behavior

  • Designing with data

  • studies focusing on the design and documentation of the dataset collection

  • studies focusing on the design affordances of the dataset

  • data-centric design

  • user-centered design
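
For a concrete, purely illustrative picture of how the two data sources described above could be combined, the sketch below joins per-participant questionnaire variables with passive sensing records using pandas. All file, column, and label names here are hypothetical; the real schema is documented in the data catalog linked below.

```python
import pandas as pd

# Hypothetical column names and values -- consult the DiversityOne
# data catalog for the actual schema.
questionnaire = pd.DataFrame({
    "participant_id": [1, 2],
    "country": ["IT", "DK"],
    "big5_extraversion": [3.5, 4.1],
})
sensor = pd.DataFrame({
    "participant_id": [1, 1, 2],
    "timestamp": pd.to_datetime(
        ["2021-05-01 09:00", "2021-05-01 10:00", "2021-05-01 09:30"]),
    "activity_label": ["walking", "studying", "walking"],
})

# Attach per-participant questionnaire variables to each sensing record,
# so behavior models can condition on demographic/psychosocial features.
merged = sensor.merge(questionnaire, on="participant_id", how="left")
print(merged[["participant_id", "activity_label", "country"]])
```

This kind of join is the typical first step for the behavior-modeling and computational-social-science topics listed above.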

Why join?

  • Explore a rich, large-scale dataset for research

  • Receive feedback for your work from an expert program committee 

  • A selection of the accepted papers will be invited to submit an extended version to IEEE Pervasive Computing.

Important dates

  • From April 3, 2025: Submit your research proposal using the web form and request the datasets that you need to answer your research questions. The full list of available datasets and documentation is accessible on the data catalog. More details are on the workshop website.

  • July 1, 2025 (extended from June 8): Abstract deadline.

  • July 4, 2025 (extended from June 25): Submission deadline.

  • July 21, 2025 (extended from June 29): Author notification.

  • July 30, 2025: Deadline for camera-ready version of workshop papers to be included in the ACM DL

  • October 12 or 13, 2025: Full-day workshop.

Important links

Submission platform: https://new.precisionconference.com/submissions (Society: SIGCHI  >  Conference: Ubicomp/ISWC 2025  >  Track: Ubicomp/ISWC 2025 DiversityOne-Challenge).

Workshop website: https://datascientiafoundation.github.io/diversityone-2025/ 

Dataset paper: https://dl.acm.org/doi/10.1145/3712289

Data catalog: https://datascientiafoundation.github.io/LivePeople-ws/datasets/

Dataset webpage: https://datascientia.eu/projects/diversityone/

Paper submission

Short paper (max 4 pages, excluding references). The paper should report the motivation, methodology, results, future analyses, and an ethical statement highlighting potential societal impacts. Submitted works should reflect on, analyze, or test the DiversityOne dataset.


Organizers

  • Andrea Bontempelli (University of Trento)

  • Matteo Busso (University of Trento)

  • Lakmal Meegahapola (ETH Zurich)

  • Amalia de Götzen (Aalborg University)

  • Fausto Giunchiglia (University of Trento)

  • Daniel Gatica-Perez (Idiap Research Institute & EPFL)

Registrations for the 12th Iberian Conference on Pattern Recognition and Image Analysis are open!

Coimbra, Portugal • June 30 – July 03, 2025
We are delighted to announce that registration for IbPRIA 2025, the 12th Iberian Conference on Pattern Recognition and Image Analysis, is now open! Co-organized by the Portuguese APRP and Spanish AERFAI chapters of the IAPR and technically endorsed by the IAPR, IbPRIA is your forum to share and discuss cutting-edge research in pattern recognition and image analysis.
Why Attend IbPRIA 2025?
1. High-Quality, Original Research
IbPRIA brings together 67 previously unpublished papers to be presented at seven oral sessions and two poster sessions. The work is from leading research groups, engineers, and practitioners. Discover the latest algorithmic improvements and promising future directions in our field!
2. Doctoral Consortium
PhD students will present the extended abstracts of their work in progress in a dynamic poster session alongside professionals, fellow PhD students, faculty, and researchers. It’s the perfect opportunity to receive feedback, network, and showcase your work. Submissions are still open!
3. Keynote Speakers
We are honored to host four distinguished keynote presentations:
“Challenges for Automated Face Recognition Systems”
Prof. Christoph Busch (NTNU, Norway & HDA, Germany)
This talk covers current challenges for face recognition: the impact of image quality on accuracy, vulnerability to presentation attacks (e.g., masks) and enrollment morphing attacks, the need for biometric template security, and ensuring algorithmic fairness across demographics.
“Inner Thoughts: Interpreting Deep Networks with Causality and Tuning Contributions”
Prof. João Henriques (University of Oxford, UK)
This talk presents two approaches to interpret neural networks: 1) Constraining CNNs via causal learning to represent real physical variables (e.g., object positions), enabling direct interpretability and error prediction. 2) Analyzing how LLM pretraining vs. fine-tuning affects responses, allowing safety control, behavior steering, and understanding jailbreaks.
“Vision-based Autonomous Driving by Imitation Learning”
Prof. António M. López (Autonomous University of Barcelona, Spain)
CVC/UAB reduces the need for labeled data in autonomous vehicles using sensorimotor models and imitation learning. Drawing on seven years of experience (from the CARLA simulator to Pyrenees/UAB deployments), their research includes a comparison of human and AI attention. The talk covers the journey, achievements, and open challenges.
“Building Innovative AI-Driven Capabilities for Law Enforcement: From R&I to Compliance”
Dr. Luísa Proença & Dr. Filipe Rodrigues (Law Enforcement – Polícia Judiciária, Portugal)
A talk on how Portuguese law enforcement combats technologically evolving crime through AI R&D: a strategic approach to innovation, operational tool development, and addressing legal challenges under the EU AI Act.
4. Hands-On Tutorials
Deepen your skills with four tutorials, each chaired by a senior expert:
“Data-Efficient Strategies for Object Detection” organized by a team from the Center for Informatics and Systems of the University of Coimbra.
“On the Turning Away: Enhancing Stroke Survivors Rehabilitation with Virtual Reality” organized by a team from University of Aveiro.
“pyMDMA: An Open-Source Multimodal Framework for Enhanced Auditing of Real and Synthetic Data” organized by a team from Fraunhofer-AICOS.
“Error Estimation in Pattern Recognition” organized by a team from Polytechnic University of Valencia.
The conference will be held at:
Quinta das Lágrimas Hotel
Rua António Augusto Gonçalves
3041-901 Coimbra, Portugal
View the schedule and session details at https://www.ibpria.org/2025/?page=program
We look forward to welcoming you to Coimbra this summer!
Best regards,
Nuno Gonçalves.
Local Chair

Call For Papers — 2nd AI-CogDev workshop @ICDL2025

=======================================================

CALL FOR PAPERS – IEEE ICDL 2025 2nd AI-COGDEV WORKSHOP

=======================================================

 

Call for AI-CogDev – “Architecting Intelligence: Exploring Intersections in Cognitive Robotics and Developmental Learning” workshop.

 

IEEE International Conference on Development and Learning (ICDL 2025)

September 16th, Prague, Czech Republic

 

=== Important dates ===

Submission deadline: June 27th, 2025

Notification of acceptance: July 11th, 2025

Camera-ready deadline: August 1st, 2025

Workshop date and time: September 16th, 2025 – 14:00 – 17:30

 

Workshop website: https://sites.google.com/view/aicogdev-workshop

 

=== Why Developmental Cognitive Architectures Matter ===

Humans are perhaps the most adaptable species on Earth—not because we are inherently the best at any one thing, but because we excel at learning. We don’t come equipped with a perfect model of the world; instead, we continuously update our partial understanding through experience. We can’t read minds, but we learn to interpret others’ behaviors and intentions through interaction and empathy.

As researchers, we build models that capture some of the brain’s inner workings, describe its functional components, and replicate isolated behaviors in controlled environments. But to bring these models into the complex, ever-changing real world, we must go beyond static systems. We must design adaptive, developmental architectures—systems that grow, learn, and evolve, much like human infants do.

This workshop seeks to revitalize the conversation around Developmental Cognitive Architectures, creating a space for both established experts and emerging researchers to exchange ideas, share insights, and explore how we can shape the future of intelligent, autonomous systems.

Workshop Objectives:

  • Explore how developmental principles—such as sensorimotor learning, curiosity-driven exploration, and social interaction—can inform the design of intelligent robotic systems.

  • Highlight cutting-edge work in cognitive robotics that draws inspiration from developmental processes.

  • Identify key challenges and open research questions that must be addressed to advance the field.

  • Foster interdisciplinary collaboration across AI, robotics, psychology, neuroscience, and engineering.

We warmly invite all those curious about the intersection of learning, cognition, and robotics to join us in this important discussion.

 

Topics of interest include, but are definitely not limited to:

·  Cognitive Architectures applied to real or simulated embodiments

·  Developmental Learning and Developmental Robotics

·  Symbol grounding and concept emergence in autonomous systems

·  Open-ended Reinforcement Learning

·  Intrinsically Motivated Learning

·  Affection and Cognition in Development

·  Ethical and trust considerations of developmental agents in real-world scenarios

·  Brain- and psychology-inspired development and computational intelligence

 

=== Call for Contributions ===

To encourage rich and thought-provoking discussions on the theme of learning, especially at the intersection of Cognitive Architectures and Developmental AI, we warmly invite participants to submit their contributions to the workshop.

We welcome a variety of submission formats, including:

  • Extended abstracts (2 to 4 pages)

  • Short position papers (1 page)

In addition to state-of-the-art research, we strongly encourage submissions that include works in progress, preliminary findings, and critical reflections or position papers. This inclusive format aims to foster dialogue and highlight emerging ideas that may serve as valuable springboards for the workshop's panel discussions.

Please submit your work in PDF format via the AI-CogDev submission form (https://forms.gle/HQeovAuEQwoXpgF26).
Accepted contributions will be featured in poster sessions and may also be presented orally, and accepted papers will be uploaded to the workshop website.

We look forward to your insights and to shaping this conversation together.

 

=== Organizers ===

 

Dr. Letícia Mara Berto, University of Campinas, Brazil

Marco Gabriele Fedozzi, University of Genova and Italian Institute of Technology, Italy

Renan Baima, University of Luxembourg, Luxembourg

 

 

 


Best Regards,

Renan LIMA BAIMA 
Doctoral Researcher – FINATRAX/ FiReSpARX group
SnT – Interdisciplinary Centre for Security, Reliability, and Trust


UNIVERSITÉ DU LUXEMBOURG


CAMPUS KIRCHBERG
29, Avenue John F. Kennedy
L-1855 Luxembourg Kirchberg
renan.limabaima@uni.lu
 

 


FIRE 2025 – Hateful Memes in Bengali, Hindi, Gujarati and Bodo – Registration open


************************

[CFP] HASOC-meme: Hate Speech and Offensive Content Identification in Memes in Bengali, Hindi, Gujarati and Bodo at FIRE 2025 

************************************************************************************

https://hasocfire.github.io/hasoc/2025/call_for_participation.html

We are excited to announce the 7th edition of HASOC, featuring a range of engaging shared tasks. We warmly invite you to participate in this edition. HASOC 2025 will introduce classification tasks on memes, focusing on the identification of abuse, sentiment, sarcasm, vulgarity, and target. The task will primarily include three binary classification tasks, one multi-class classification task, and one multi-label classification task on memes in Bangla, Hindi, Gujarati, and Bodo languages.

Track Description: This task involves analyzing multimodal data (image and text) to detect abuse, identify targeted communities, assess vulgarity and sarcasm, and assign sentiment labels. The task comprises five parts.

Sentiment detection:

• Positive – The meme conveys a supportive, humorous, or appreciative tone.

• Neutral – The meme is neither overtly positive nor negative in tone.

• Negative – The meme expresses hostility, mockery, or criticism.

Sarcasm Detection:

• Sarcastic – The meme presents statements or visuals that imply the opposite of their literal meaning, often to mock or ridicule.

• Non-Sarcastic – The meme directly conveys its message without sarcasm or irony.

Vulgarity Detection:

• Vulgar – The meme contains explicit or offensive words, gestures, or depictions.

• Not Vulgar – The meme does not include any such content.

Abuse Detection:

• Abusive – The meme includes offensive, harmful, or derogatory language, imagery, or implications targeting an individual or a group.

• Non-abusive – The meme does not contain any offensive, harmful, or derogatory content.

Target Community Identification:

• Gender – Any reference to male, female, non-binary, or transgender identities.

• Religion – Mentions or imagery related to any religious belief, deity, or practice.

• Individual – Specifically mentions or portrays a particular person.

• Political – Targets political ideologies, parties, politicians, or policies.

• National Origin – Targets people based on their country or ethnicity.

• Social Sub-groups – Groups based on socio-economic status, occupation, cultural identity, or other affiliations.

• Others – Any target that does not fall into the above categories.

• None – If the meme does not target any specific community, no target label is assigned.
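
Taken together, the five parts map to three binary labels (sarcasm, vulgarity, abuse), one multi-class label (sentiment), and one multi-label field (target community). As a purely illustrative sketch (the official HASOC 2025 data format is defined by the organizers; record and field names below are hypothetical), the annotation schema could be represented as:

```python
from dataclasses import dataclass, field

# Hypothetical label vocabularies mirroring the five-part task description.
SENTIMENTS = {"positive", "neutral", "negative"}            # multi-class
TARGETS = {"gender", "religion", "individual", "political",
           "national_origin", "social_subgroup", "others"}  # multi-label

@dataclass
class MemeAnnotation:
    """One annotated meme; an empty `targets` set corresponds to 'None'."""
    meme_id: str
    sentiment: str                 # one of SENTIMENTS
    sarcastic: bool                # binary task
    vulgar: bool                   # binary task
    abusive: bool                  # binary task
    targets: set = field(default_factory=set)

    def validate(self) -> bool:
        # Sentiment must be a known class; targets must be known labels.
        return self.sentiment in SENTIMENTS and self.targets <= TARGETS

ann = MemeAnnotation("m001", "negative", sarcastic=True,
                     vulgar=False, abusive=True, targets={"political"})
assert ann.validate()
```

Modeling the target field as a set makes the multi-label nature explicit, while the three booleans and the sentiment class correspond to the binary and multi-class subtasks.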

Important dates

  • Registration starts: 15th May, 2025

  • Hindi, Gujarati and Bodo Training Data Release: 17th May, 2025

  • Bangla Training data release: 24th May, 2025

  • Release of the test set: 15th June, 2025

  • Run submission deadline: 30th June, 2025

  • Announcement of results: 15 July, 2025

  • Working notes due:  30th August, 2025

  • Camera-ready copies of notes and overview paper: 30th September, 2025

Task organizers

  • Prof. Dr. Thomas Mandl – University of Hildesheim, Germany

  • Prof. Dr. Utpal Garain – Indian Statistical Institute, India

  • Prof. Dr. Debasis Ganguly – University of Glasgow, United Kingdom

  • Prof. Dr. Sandip Modha – University of Milano-Bicocca, Italy & LDRP-ITR, Gandhinagar, India

  • Prof. Dr. Animesh Mukherjee – Indian Institute of Technology Kharagpur, India

  • Dr. Koyel Ghosh – University of Hildesheim, Germany

  • Dr. Mithun Das – Indian Institute of Technology Kharagpur, India

  • Shubhankar Barman – BITS Pilani, India

  • Mwnthai Narzary – Central Institute of Technology, Kokrajhar, India

  • Saptarshi Saha – Indian Statistical Institute, Kolkata, India

Website: https://hasocfire.github.io/hasoc/2025/call_for_participation.html

IEEE TBIOM Special Issue on Generative AI and Large Vision-Language Models for Biometrics

IEEE Transactions on Biometrics, Behavior, and Identity Science (T-BIOM)
Special Issue on
Generative AI and Large Vision-Language Models for Biometrics

Submission Deadline (extended to): 31 August 2025 (firm)
Targeted Publication: Q2 2026

Paper submission: https://ieee.atyponrex.com/journal/tbiom

*********************************************************************************************

*** Motivation ***

In the rapidly advancing field of artificial intelligence, generative AI
and large-scale vision-language models are becoming key areas of
interest, revolutionizing numerous research fields, including natural
language processing and computer vision. Generative AI models are
designed and trained to approximate the underlying distribution of a
dataset, enabling the generation of new samples that reflect the
patterns and regularities within the training data. Among the various
types of generative models, such as Generative Adversarial Networks
(GANs), Variational Autoencoders (VAEs), flow-based, autoregressive, and
diffusion models, GANs and diffusion models have gained significant
attention and are widely applied to tasks such as image synthesis, image
manipulation, text generation, and speech synthesis. These models have
shown remarkable success in modeling and interpreting the probability
distributions of real-world data. Vision-language models, on the other
hand, integrate visual and textual data, learning to associate these
modalities to enhance understanding and enable multimodal
reasoning-based applications.

The advancements in generative AI and large vision-language models (LVMs) are
also making a significant impact on biometrics, offering new
possibilities for addressing longstanding challenges. Generative AI,
with its ability to synthesize highly realistic data, has the potential
to address privacy concerns related to collecting, sharing, and using
sensitive biometric data. This synthetic data can also be used to
increase diversity and variation in training datasets through
augmentation, thus improving model generalizability and reducing
potential bias induced by imbalanced training data. At the same time,
large vision-language models offer the capability to process and
understand multimodal information by combining visual features with
contextual data, such as semantic insights from natural language.
Furthermore, large-scale vision-language models can be optimized for
downstream tasks, such as template extraction, using zero or few-shot
learning approaches, making them highly versatile for biometric
applications.

Although generative AI and vision-language models offer a rich set of
tools that can be utilized to address challenges in biometrics, the
misuse of these technologies presents a threat to the field. Generative
AI models have the ability to incorporate conditions in the generation
process to take control over the generated samples. This enables a wide
range of applications such as image-to-image translation, text-to-image
synthesis, and style transfer. However, this capability also allows for
creating deepfake attacks, e.g., images, videos, and audio that are
indistinguishable or nearly indistinguishable from real content. The
increased realism and widespread public accessibility of generative AI
have raised concerns about the potential misuse of this technology for
malicious purposes. This highlights the need for solutions to detect
generated AI content and mitigate the potential misuse of generative AI
models.

The proposed TBIOM special issue will provide a platform to discuss the
latest advancements and technical achievements related to Generative AI
and Large vision-language models when applied to problems in biometrics.
The topics of interest of the special issue include, but are not limited to:

+ Novel generative AI models for responsible synthesis of biometric data
+ Novel generative models for conditional data synthesis
+ Biometrics interpretability and explainability through large
language-vision models
+ Few-shot learning from large language-vision models
+ Generative AI and LVMs for detecting attacks on biometrics systems
+ Generative AI-based image restoration
+ Information leakage of synthetic data
+ Data factories and label generation for biometric models
+ Quality assessment of AI generated data
+ Synthetic data for data augmentation
+ Detection of generated AI contents
+ Bias mitigation using synthetic data
+ LLMs and VLMs for biometrics
+ Watermarking AI generated content
+ New synthetic datasets and performance benchmarks
+ Security and privacy issues regarding the use of generative AI methods
for biometrics
+ Ethical considerations regarding the use of generative AI methods for
biometrics
+ Parameter efficient fine-tuning of VLMs for biometrics applications

*** Important Dates ***

Submission deadline:                               31 August 2025
First round of reviews completed (first decision): November 2025
Second round of reviews completed:                 January 2026
Final papers due:                                  March 2026
Publication date:                                  Q2 2026

*** Paper Submission ***

Papers should be submitted through the TBIOM submission portal before
the deadline using the TBIOM journal templates:
https://ieee.atyponrex.com/journal/tbiom and selecting the article type:
“Generative AI and Large Vision-Language Models for Biometrics”.

*** Guest Editors: ***

+ Fadi Boutros, Fraunhofer IGD, Germany
+ Hu Han, Institute of Computing Technology, Chinese Academy of Sciences
(CAS), China
+ Tempestt Neal, University of South Florida, United States
+ Vishal M. Patel, Johns Hopkins University, United States
+ Vitomir Štruc, University of Ljubljana, Slovenia
+ Yunhong Wang, Beihang University, China
