Special Issue on Heritage Preservation in the Digital Age


*** Aims and Scope


This special issue focuses on analyzing, processing and valorizing all types of data related to cultural heritage, including tangible and intangible heritage. As stated by UNESCO, cultural heritage provides societies with a wealth of resources inherited from the past, created in the present for the benefit of future generations. The massive digitization of historical analogue resources and production of born digital documents provide us with large volumes of varied multimedia heritage data (images, maps, text, video, 3D objects, multi-sensor data, etc.), which represent an extremely rich heritage that can be exploited in a wide variety of fields, from research in social sciences and computational humanities to land use and territorial policies, including urban modeling, digital simulation, archaeology, tourism, education, culture preservation, creative media and entertainment. 
In terms of research in computer science, artificial intelligence, and digital humanities, these data raise challenging problems related to the diversity, specificity, and volume of the media, the veracity of the data, and the varied user needs with respect to engaging with this rich material and extracting value from it. These challenges are reflected in the corresponding sub-fields of machine learning, signal processing, mono/multi-modal techniques, and human-machine interaction.
The objective of this special issue is to present and discuss the latest and most significant trends in the analysis, understanding, and promotion of heritage content, focusing on advances in machine learning, signal processing, mono/multi-modal techniques, and human-machine interaction. We welcome research contributions on (but not limited to) the following topics: 
  • Monomodal analysis: image, text, video, 3D, music, sensor data and structured referentials
  • Information retrieval for multimedia heritage
  • AI assisted archaeology and heritage data processing
  • Multi-modal deep learning and time series analysis for heritage data
  • Heritage modeling, visualization, and virtualization
  • Smart digitization and reconstruction of heritage data
  • Open heritage data and benchmarking
The scope of targeted applications is extensive and includes:
  • Analysis and archaeometry of artifacts
  • Diagnosis and monitoring for restoration and preventive conservation
  • Geosciences / Geomatics for cultural heritage
  • Education
  • Smart and sustainable tourism
  • Urban planning
  • Digital Twins
*** Important Dates
  • Submission deadline: 31 March 2024
  • Review period: 1 April – 30 July 2024
  • Notification: 31 July 2024
  • Author revision deadline: 15 October 2024
  • Final notification: 31 October 2024
*** Guest Editors
*** Submission Guidelines


Authors should prepare their manuscript according to the Instructions for Authors available from the Multimedia Tools and Applications website. Authors should submit through the online submission site at https://www.editorialmanager.com/mtap/default.aspx and select “SI 1250- Heritage Preservation in the Digital Age: Advances in machine learning, monomodal and multimodal processing, and human-machine interaction” when they reach the “Article Type” step in the submission process. Submitted papers should present original, unpublished work, relevant to one of the topics of the special issue. All submitted papers will be evaluated on the basis of relevance, significance of contribution, technical quality, scholarship, and quality of presentation, by at least three independent reviewers. It is the policy of the journal that no submission, or substantially overlapping submission, be published or be under review at another journal or conference at any time during the review process.
SUMAC 2023: Authors of selected papers presented at the SUMAC 2023 workshop (ACM Multimedia 2023) are invited to submit an extended version of their contributions, taking into consideration both the reviewers’ comments on their conference paper and the feedback received during presentation at the conference. Note that the extended version is expected to contain a substantial scientific contribution, e.g., in the form of new algorithms, experiments, or qualitative/quantitative comparisons; neither verbatim transfer of large parts of the conference paper nor reproduction of already published figures will be tolerated. The extended versions of SUMAC 2023 papers will undergo the standard, rigorous journal review process and will be accepted only if they are well-suited to the topic of this special issue and meet the scientific level of the journal. Final decisions on all papers are made by the Editor-in-Chief.

2nd Call for Papers for 2nd Edition of CRUM 2024

This is the second call for papers (CfP) for the second edition of the workshop on Context Representation in User Modelling (CRUM), co-located with the ACM Conference on User Modeling, Adaptation, and Personalisation (UMAP), taking place on 1 July 2024 in Cagliari, Sardinia, Italy.

Website: https://crum-workshop.github.io/

Submission deadline: 17 April 2024 AoE

Submission: https://easychair.org/conferences/?conf=umap24 – select the track “UMAP 24 Workshop CRUM”

For further information: crum.workshop@gmail.com

*** Abstract ***

The evolving landscapes of user modelling, adaptation, and personalisation necessitate a nuanced exploration of context and its impact on human-computer interaction. This evolution represents a paradigm shift towards placing the user at the centre of context representation, acknowledging the multifaceted nature of context as it intertwines with user needs, environmental changes, and technological advancements. The second edition of the Context Representation in User Modeling (CRUM) workshop, themed “Human-Centric Context,” seeks to foster a comprehensive understanding of context by focusing on the dynamic interplay between subjective and objective contexts in enhancing user experience. We welcome submissions that explore the nuances of human-centric context across various domains, aiming to standardise context modelling practices that enhance user engagement, privacy, and trust in multi-stakeholder environments.

*** Topics ***

Evolving from last year's workshop, CRUM 2024 invites multiple submission types: long papers (up to 7 pages) and provocation/opinion papers (up to 2 pages). Details on the content expectations for both submission types are given below.

Topics considered relevant to the theme of this workshop include, but are not limited to:

  • Capturing and storing contextual information;
  • Situation-aware user modelling and adaptive systems;
  • Algorithmic relevance of situational, temporal, location, and hypermedia context;
  • Context representation for personalisation;
  • Adaptation of user models based on spatial, temporal, or situational context;
  • Capturing and ranking application-specific context in hypermedia user applications;
  • Context as relevance of static and dynamic external characteristics within recommendation systems;
  • Contextualising proactive behaviour;
  • Context-aware personalised pervasive computing;
  • The role of context for user modelling in recommender systems;
  • Evaluation frameworks for capturing, representing, and using contextual information in agent decision-making;
  • Role of context and context representation within explainable adaptation;
  • Scrutability of contextual representation in personalised systems.

Additionally, topics related to this year's human-centric theme include, but are not limited to:

  • Representing human-specific context information;
  • Examining human- and domain-specific context, including similarities and differences in how it is captured, stored, or represented;
  • Privacy and context;
  • User control and management of context-aware systems;
  • Evaluation frameworks for human-centred context;
  • Impact of context on performance, efficiency, and usability; and
  • Role of context information in natural language processing, information retrieval, human-agent interaction, and robotics.

*** Submission, presentation, and publication ***

CRUM 2024 accepts two types of submissions – long papers (up to 7 pages excluding references and appendices) and provocation/opinion papers (up to 2 pages excluding references and appendices).

All submissions to the workshop should use the same ACM template (single-column format) and formatting adopted by the main UMAP conference. The templates and instructions are available here.

We encourage authors to submit works in progress, negative results, insights, position papers, and case studies on context and its role in user modelling and adaptive systems.

CRUM follows a rigorous double-blind peer review policy. Please ensure that all workshop submissions are anonymised.

At least one author is expected to personally attend the conference and present the paper for it to be published. In line with the UMAP 2024 policy, hybrid activities cannot be supported.

ACM publishes accepted papers in the UMAP adjunct proceedings.

*** Organisation ***

The workshop is co-chaired by:

  • Owen Conlan, ADAPT Centre, Trinity College Dublin, owen.conlan@tcd.ie
  • Judy Kay, University of Sydney

The workshop is co-organised by Dipto Barman (barmand@tcd.ie), Jovan Jeromela, Alok Debnath, Anouk Van Kasteren and Marloes Vredenborg.

*** Important Dates ***

Submission deadline: 17 April 2024

Notification: 8 May 2024

Camera-Ready (TAPS System): 18 May 2024

Workshop date: 1 July 2024 (Cagliari, Sardinia, Italy)

Best,
The CRUM Organisers

CVPR’24 Rhobin Workshop on Reconstruction of Human-Object Interactions

2nd Call for participants and papers with an extended deadline 

Second Rhobin Challenge – Reconstruction of human-object interaction in conjunction with CVPR 2024, June 2024, Seattle, USA

Website: https://rhobin-challenge.github.io/


Important dates 

Full-paper submission deadline: March 20, 2024

Notification to authors: April 5, 2024

Camera-ready deadline: April 12, 2024

Workshop: June 17/18, 2024

Aims and scope 

This half-day workshop will provide a venue to present and discuss state-of-the-art research on the reconstruction of human-object interactions from images. We invite papers on topics broadly related to human-centered interaction modeling, including but not limited to: 

  • Estimation of 3D human pose and shape from a single image or video

  • 3D human motion prediction 

  • Interactive motion sequence generation

  • Shape reconstruction from a single image

  • Object 6-DoF pose estimation and tracking

  • Human-centered object semantics and functionality modeling

  • Joint reconstruction of both bodies and objects/scenes

  • Contact detection/estimation from visual input

  • Interaction modeling between humans and objects, e.g., contact, physics properties

  • Detection of human-object interaction semantics

  • New datasets or benchmarks that have 3D annotations of both humans and objects/scenes.

Participation details of the Rhobin challenge can be found below. 

Submission guidelines 

We invite submissions of a maximum of 8 pages, excluding references, using the CVPR template. Submissions should follow CVPR 2024 instructions. All papers will be subject to a double-blind review process, i.e. authors must not identify themselves on the submitted papers. The reviewing process is single-stage without rebuttals.

If you have any questions, feel free to reach out to us.

Workshop organizers

Xi Wang, ETH Zurich, Switzerland

Xianghui Xie, MPI Informatics, Germany

Nikos Athanasiou, MPI for Intelligent Systems, Germany

Ilya Petrov, University of Tübingen, Germany

Kaichun Mo, NVIDIA, USA

Bharat Lal Bhatnagar, Meta, Switzerland

Julien Valentin, Microsoft

Dimitrios Tzionas, University of Amsterdam, Netherlands

Otmar Hilliges, ETH Zurich, Switzerland

Luc Van Gool, ETH Zurich, Switzerland

Gerard Pons-Moll, University of Tübingen and MPI Informatics, Germany


The Second Rhobin Challenge 

We propose a challenge on reconstructing 3D humans and objects, and estimating 3D human-object and human-scene contact, from monocular RGB images. In this workshop, we continue to examine how well existing human and object reconstruction and contact estimation methods work under more realistic settings and, more importantly, how they can benefit each other for accurate interaction reasoning. The recently released BEHAVE (CVPR'22), InterCap (GCPR’22), and DAMON (ICCV’23) datasets enable joint reasoning about human-object interactions in real settings and evaluation of contact prediction in the wild. We use these datasets in the second Rhobin challenge to spark research in human-object interaction modeling. Challenge winners will be awarded on the day of the workshop. 

Challenge tracks:
  3D human reconstruction (https://codalab.lisn.upsaclay.fr/competitions/17571)

  6DoF pose estimation of rigid objects (https://codalab.lisn.upsaclay.fr/competitions/17524)

  Joint reconstruction of human and object (https://codalab.lisn.upsaclay.fr/competitions/17522)

  Tracking human-object interaction in a video (https://codalab.lisn.upsaclay.fr/competitions/17572)

  3D contact prediction from RGB images (https://codalab.lisn.upsaclay.fr/competitions/17561)

Challenges are open and more information can be found on the website. 


Important dates 

Challenge open: February 5, 2024

Submission deadline: May 30, 2024

Winner award: June 17/18, 2024

Challenge organizers

Xianghui Xie, MPI Informatics, Germany

Shashank Tripathi, MPI for Intelligent Systems, Germany

Dimitrios Tzionas, University of Amsterdam, Netherlands

Gerard Pons-Moll, University of Tübingen and MPI Informatics, Germany

Call for papers – held in conjunction with IEEE FG 2024

1st International Workshop on Synthetic Data for Face and Gesture
Analysis (SD-FGA 2024)
Held in the scope of IEEE FG 2024
27 or 31 May 2024 (TBD), Istanbul, Turkey
https://sites.google.com/view/sd-fga2024/

Paper submission: 17 March 2024, 11:59pm PST
*****************************

*** Call for Papers ***
Recent advancements in generative models within the realms of computer
vision and artificial intelligence have revolutionized the way
researchers approach data-driven tasks. The advent of sophisticated
generative models, such as GANs (Generative Adversarial Networks), VAEs
(Variational Autoencoders), and, more recently, diffusion models, has
empowered practitioners to create synthetic data that closely mirrors
real-world scenarios. These models enable the generation of
high-fidelity images and sequences, laying the foundation for
groundbreaking applications in face and gesture analysis. The
significance of these generative models lies in their ability to produce
synthetic data that is remarkably realistic, thereby mitigating
challenges associated with data scarcity and privacy concerns. As a
result, the utilization of synthetic data has become increasingly
prevalent in various research domains, offering a versatile and ethical
alternative for training and testing machine learning algorithms.

This workshop aims to delve into the diverse applications of synthetic
data in the realm of face and gesture analysis. Participants will
explore how synthetic datasets have been instrumental in training facial
recognition systems, enhancing emotion detection models, and refining
gesture recognition algorithms. The workshop will showcase exemplary use
cases where the integration of synthetic data has not only overcome data
limitations but has also fostered the development of more robust and
accurate models. As researchers increasingly recognize the potential of
synthetic datasets in shaping the future of computer vision and machine
learning, there arises a demand for a collaborative platform where ideas
can be exchanged, methodologies shared, and challenges addressed. This
workshop aims to bridge the gap between theoretical knowledge and
practical implementation, fostering a community of experts and
enthusiasts dedicated to advancing the frontiers of synthetic data in
face and gesture analysis.

Topics of interest include, but are not limited to:
+ Novel generative models for face and gesture synthesis
+ Label generation for synthetic data
+ Information leakage in synthetic data
+ Data factories for training biometric (detection, landmarking,
recognition) models
+ Synthetic data for data augmentation
+ Data synthesis for bias mitigation and fairness
+ Quality assessment for synthetic data
+ Synthetic data for privacy protection
+ Novel applications of synthetic data
+ New synthetic datasets and performance benchmarks
+ Applications of synthetic data, e.g., deepfakes, virtual try-on, face
and gesture editing

*** Paper format and submission ***

5th workshop on Intelligent Cross-Data Analysis and Retrieval

Deadline extended: 29 March 2024


We invite submissions for our workshop, which focuses on the relevance and significance of multimedia analytics and retrieval in the broader societal landscape. Over the past decade, significant advances in multimedia analytics and retrieval have enabled precise and rapid extraction of data insights, leading to numerous applications that enhance various aspects of human life. However, the diverse perspectives embedded in multimedia and other data types present a complex puzzle that must be assembled to address human-centered challenges effectively.

The workshop aims to bring together people working with multimedia and other data types across diverse research domains and disciplines, such as wellbeing, disaster prevention & mitigation, mobility, food computing, security, and smart cities. We encourage original and novel contributions on the integration of diverse multimodal data.

The current era witnesses the exponential growth of sensors, communication technologies, and social networks, enabling individuals to collect data swiftly from themselves and their environments. Coupled with artificial intelligence and advanced application techniques, data has evolved into a more intelligent form, providing valuable information and knowledge for near-human cognitive analytics and retrieval. This intelligent data offers new opportunities to better understand the intricate associations between human beings and their surroundings.

The workshop specifically calls for submissions addressing the analysis and retrieval of cross-data from different perspectives, focusing on wearable and ambient sensors, lifelog cameras, social networks, and surrounding sensors. While several investigations have explored individual perspectives, there has been limited focus on analyzing and retrieving cross-data to maximize the benefits for human beings. 
Researchers are invited to contribute to this endeavor, aiming to create a smart and sustainable society by efficiently utilizing intelligent cross-data analysis and retrieval. Possible application domains for submitted works include, but are not limited to, well-being, disaster prevention & mitigation, mobility, and food computing. We encourage researchers from various backgrounds to engage in this workshop, fostering collaboration and innovation in the field of intelligent cross-data analysis and retrieval. 
Example topics of interest include, but are not limited to, the following: 
– Event-based cross-data retrieval, data mining, and AI technology 
– Complex event processing for dynamically linking sensory data from individuals and regions to broad areas 
– Transfer learning and transformers 
– Hypothesis development on the associations within heterogeneous data
– Realization of a prosperous and independent region in which people and nature coexist 
– Applications leveraging intelligent cross-data analysis for a particular domain 
– Cross-datasets for repeatable experimentation 
– Federated analytics, federated learning, and edge AI for cross-data 
– Privacy-public data collaboration 
– Integration of diverse multimodal data