Final CfP: CBMI2024 Special Session on “Multimedia Indexing for eXtended Reality” – deadline extended to Apr 12

Call for Papers: Special Session on “Multimedia Indexing for eXtended Reality” at CBMI 2024

https://cbmi2024.org/?page_id=100#MmIXR

21st International Conference on Content-based Multimedia Indexing (CBMI 2024).
18-20 September 2024, Reykjavik, Iceland –
https://cbmi2024.org/

DESCRIPTION:
Extended Reality (XR) applications rely not only on computer vision for navigation and object placement but also require a range of multimodal methods to understand the scene or assign semantics to objects being captured and reconstructed. Multimedia indexing for XR thus encompasses methods for processes during XR authoring, such as indexing content to be used for scene and object reconstruction, as well as during the immersive experience, such as object detection and scene segmentation.
The intrinsic multimodality of XR applications involves new challenges like the analysis of egocentric data (video, depth, gaze, head/hand motion) and their interplay. XR is also applied in diverse domains, e.g., manufacturing, medicine, education, and entertainment, each with distinct requirements and data. Thus, multimedia indexing methods must be capable of adapting to the relevant semantics of the particular application domain.

TOPICS OF INTEREST:

·       Multimedia analysis for media mining, adaptation (to scene requirements), and description for use in XR experiences (including but not limited to AI-based approaches)

·       Processing of egocentric multimedia datasets and streams for XR (e.g., egocentric video and gaze analysis, active object detection, video diarization/summarization/captioning)

·       Cross- and multi-modal integration of XR modalities (video, depth, audio, gaze, hand/head movements, etc.)

·       Approaches for adapting multimedia analysis and indexing methods to new application domains (e.g., open-world/open-vocabulary recognition/detection/segmentation, few-shot learning)

·       Large-scale analysis and retrieval of 3D asset collections (e.g., objects, scenes, avatars, motion capture recordings)

·       Multimodal datasets for scene understanding for XR

·       Generative AI and foundation models for multimedia indexing and/or synthetic data generation

·       Combining synthetic and real data for improving scene understanding

·       Optimized multimedia content processing for real-time and low-latency XR applications

·       Privacy and security aspects and mitigations for XR multimedia content

IMPORTANT DATES:
Submission of papers: 12 April 2024
Notification of acceptance: 3 June 2024
CBMI conference: 18-20 September 2024

SUBMISSION:
The session will be organized as an oral presentation session. Contributions may be either long papers, describing novel methods or their adaptation to specific applications, or short papers, describing emerging work or open challenges.

SPECIAL SESSION ORGANISERS:
Fabio Carrara, Artificial Intelligence for Multimedia and Humanities Laboratory, ISTI-CNR, Pisa, Italy

Werner Bailer, Intelligent Vision Applications Group, JOANNEUM RESEARCH, Graz, Austria

Lyndon J. B. Nixon, MODUL Technology GmbH and Applied Data Science School at MODUL University, Vienna, Austria

Vasileios Mezaris, Information Technologies Institute / Centre for Research and Technology Hellas, Thessaloniki, Greece

 

Last CFP with Final Deadline Extension for CBMI 2024 in Reykjavik, Iceland

Last Call for Papers (with Final Deadline Extension) for the 21st International Conference on Content-Based Multimedia Indexing — CBMI 2024

September 18 – 20, 2024 in Reykjavik, Iceland


**** The CBMI 2024 submission deadline has been extended to April 12, 2024

**** The conference proceedings will be published by IEEE


After successful editions across Europe in France, Austria, Italy, the UK, the Czech Republic, and Hungary, the Content-Based Multimedia Indexing (CBMI) conference will take place in Reykjavík, Iceland, in September 2024. CBMI aims to bring together the various communities involved in all aspects of content-based multimedia indexing for retrieval, browsing, management, visualisation and analytics. We encourage contributions on both theoretical aspects and applications of CBMI in the new era of Artificial Intelligence. Authors are invited to submit previously unpublished research papers highlighting significant contributions addressing these topics. In addition, special sessions on specific technical aspects or application domains are planned.

Conference Website: http://cbmi2024.org/

The conference proceedings will be published by IEEE. Authors can submit full papers (6 pages + references), short papers (4 pages + references), special session papers (6 pages + references) and demonstration proposals (4 pages + 1 page demonstration description + references). Authors of high-quality papers accepted to the conference may be invited to submit extended versions of their contributions to a special journal issue in MTAP. Submissions to CBMI are peer reviewed in a single blind process. All types of papers must use the IEEE templates at https://www.ieee.org/conferences/publishing/templates.html. The language of the conference is English.

CBMI 2024 proposes eight special sessions:

  • AIMHDA: Advances in AI-Driven Medical and Health Data Analysis
  • Content-Based Indexing for Audio and Music: From Analysis to Synthesis
  • ExMA: Explainability in Multimedia Analysis
  • IVR4B: Interactive Video Retrieval for Beginners
  • MIDRA: Multimodal Insights for Disaster Risk Management and Applications
  • MmIXR: Multimedia Indexing for XR
  • Multimedia Analysis and Simulations for Digital Twins in the Construction Domain
  • Multimodal Data Analysis for Understanding of Human Behaviour, Emotions and their Reasons

Submission Deadlines

  • Full and short research papers are due April 12, 2024
  • Special session papers are due April 12, 2024
  • Demonstration submissions are due April 26, 2024

 

CBMI 2024 seeks contributions on the following research topics:

Multimedia Content Analysis and Indexing:

  • Media content analysis and mining
  • AI/ML approaches for content understanding
  • Multimodal and cross-modal indexing
  • Activity recognition and event-based multimedia indexing and retrieval 
  • Multimedia information retrieval (image, audio, video, text)
  • Conversational search and question-answering systems
  • Multimedia recommendation
  • Multimodal analytics, summarization, visualisation, organisation and browsing of multimedia content
  • Multimedia verification (e.g., multimodal fact-checking, deep fake analysis)
  • Large multimedia models, large language models and vision language models
  • Explainability in multimedia learning
  • Large scale multimedia database management
  • Evaluation and benchmarking of multimedia retrieval systems

Multimedia User Experiences:

  • Extended reality (AR/VR/MR) interfaces
  • Mobile interfaces
  • Presentation and visualisation tools
  • Affective adaptation and personalization
  • Relevance feedback and interactive learning

Applications of Multimedia Indexing and Retrieval:

  • Multimedia and sustainability
  • Healthcare and medical applications
  • Cultural heritage and entertainment applications
  • Educational and social applications
  • Egocentric, wearable and personal multimedia
  • Applications to forensics, surveillance and security
  • Environmental and urban multimedia applications
  • Earth observation and astrophysics

 On behalf of the CBMI 2024 organisers,

Announcing New DFIR 6, 7 & 8 Streams

The Association of Cyber Forensics and Threat Investigators invites you to join our next webinars:

“DFIR Stream 0x6” on Tuesday, April 16 · 4:00 – 5:00 pm (GMT+00:00) UK Time
Title: Operationalizing Machine Learning for Networks,
by Shinan Liu, University of Chicago.
Register@ https://www.acfti.org/news-events/dfir-stream-0x6

“DFIR Stream 0x7” on Tuesday, May 7 · 1:30 – 2:30 pm (GMT+00:00) UK Time
Title: Malware Detection in Memory Forensics: Open Challenges and Issues,
by Dr. Ricardo J. Rodríguez, University of Zaragoza.
Register@ https://www.acfti.org/news-events/dfir-stream-0x7

“DFIR Stream 0x8” on Monday, May 13 · 4:00 – 5:00 pm (GMT+00:00) UK Time
Title: Low-Level Hardware Information Assisted Approach Towards System Security,
by Dr. Chen Liu, Clarkson University.
Register@ https://www.acfti.org/news-events/dfir-stream-0x8

======Housekeeping Notes======

– Note that this event is online only, so you must register to receive a link to connect. Due to limited availability, we kindly ask you to register as soon as possible to secure your place in the webinar of your choice.

– For students, a certificate of successful participation will be issued free of charge upon request (after attendance is verified), indicating the number of hours of the seminar (please make sure you enter the correct name in the registration form). This should be sufficient for participants who plan to request ECTS recognition from their home university.

Join Us & stay tuned! #CyberSecurity #MemoryForensics #MachineLearning #AnomalyDetection

Finally, I would like to remind you that the call for speakers is currently open on the dedicated DFIR stream website, https://dfir.stream/call-for-guest-speakers

To get more news about our events, please join our low-traffic announcement group @ https://groups.google.com/g/acfti

This event is brought to you by CFTIRC (Cyber Forensics & Threat Investigations Research Community).

Best regards,
Andrew Zayin Ph.D., CISSP, CISM, CRISC, CDPSE, PMP
ACFTI Secretariat

The 2nd International Conference on Foundation and Large Language Models (FLLM2024), 26-29 November, 2024 | Dubai, UAE

The 2nd International Conference on Foundation and Large Language Models (FLLM2024)

https://fllm2024.fllm-conference.org/index.php

26-29 November, 2024 | Dubai, UAE

Technically Co-Sponsored by IEEE UAE Section

FLLM 2024 CFP:

With the emergence of foundation models (FMs) and Large Language Models (LLMs), which are trained on large amounts of data at scale and are adaptable to a wide range of downstream applications, artificial intelligence is undergoing a paradigm shift. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP now serve as the foundation for new applications ranging from computer vision to protein sequence analysis, and from speech recognition to coding. Earlier models typically had to be trained from scratch for each new task. The capacity to experiment with, examine, and understand the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its path. Nevertheless, these models are currently largely inaccessible: the resources required to train them are highly concentrated in industry, and even the assets (data, code) required to replicate their training are frequently not released because of their commercial value. At the moment, mostly large technology companies such as OpenAI, Google, Facebook, and Baidu can afford to construct FMs and LLMs. Despite the anticipated widespread use of FMs and LLMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are truly capable of, owing to their emergent global qualities. To address these problems, we believe that much of the critical research on FMs and LLMs will necessitate extensive multidisciplinary collaboration, given their fundamentally sociotechnical structure.

The International Conference on Foundation and Large Language Models (FLLM) addresses the architectures, applications, challenges, approaches, and future directions. We invite the submission of original papers on all topics related to FLLMs, with special interest in but not limited to:

  •     Architectures and Systems
    • Transformers and Attention
    • Bidirectional Encoding
    • Autoregressive Models
    • Massive GPU Systems
    • Prompt Engineering
    • Multimodal LLMs
    • Fine-tuning
  •     Challenges
    • Hallucination
    • Cost of Creation and Training
    • Energy and Sustainability Issues
    • Integration
    • Safety and Trustworthiness
    • Interpretability
    • Fairness
    • Social Impact
  •     Future Directions
    • Generative AI
    • Explainability and eXplainable AI (XAI)
    • Retrieval Augmented Generation (RAG)
    • Federated Learning for FLLM
    • Large Language Models Fine-Tuning on Graphs
    • Data Augmentation
  •     Natural Language Processing Applications
    • Generation
    • Summarization
    • Rewrite
    • Search
    • Question Answering
    • Language Comprehension and Complex Reasoning
    • Clustering and Classification
  •     Applications
    • Natural Language Processing
    • Communication Systems
    • Security and Privacy
    • Image Processing and Computer Vision
    • Life Sciences
    • Financial Systems

Submissions Guidelines and Proceedings

Manuscripts should be prepared in 10-point font using the IEEE 8.5″ x 11″ two-column format. All papers must be in PDF format and submitted electronically at the paper submission link. A full paper can be up to 8 pages (including all figures, tables and references). Submitted papers must present original, unpublished research that is not under consideration for any other conference or journal. Papers not following these guidelines may be rejected without review; submissions received after the due date, exceeding the length limit, or not appropriately structured may also not be considered. Authors may contact the Program Chair for further information or clarification. All submissions are peer-reviewed by at least three reviewers. Accepted papers will appear in the FLLM proceedings, published by the IEEE Computer Society Conference Publishing Services and submitted to IEEE Xplore for inclusion. Please include up to 7 keywords, as well as the complete postal address, email address, and fax and phone numbers of the corresponding author. Authors of accepted papers are expected to present their work at the conference. Submitted papers that are deemed of good quality but that cannot be accepted as regular papers will be accepted as short papers.

Important Dates:

  • Paper submission deadline: June 30, 2024
  • Notification of acceptance: September 15, 2024
  • Camera-ready Submission: October 10, 2024

 

Contact:

Please send any inquiry on FLLM to info@fllm-conference.org

Big Visual Data Analytics (BVDA) Workshop at ICIP, 27-30 October 2024, Abu Dhabi, UAE

CALL FOR PAPERS

 

Big Visual Data Analytics (BVDA) Workshop at ICIP 2024

 

IEEE International Conference on Image Processing, 27-30 October 2024, Abu Dhabi, UAE

 

We invite researchers and practitioners working on various aspects of big visual data analytics to submit their work to the Big Visual Data Analytics (BVDA) Workshop, organized in conjunction with the IEEE International Conference on Image Processing (ICIP) 2024. The ever-increasing availability of visual data leads to repositories and streams characterized by big data volume, velocity (acquisition and processing speed), variety (e.g., RGB, RGB-D, or hyperspectral images) and complexity (e.g., video data and point clouds). Processing them necessitates novel and advanced visual analysis methods in order to unlock their potential across diverse domains.

The BVDA Workshop aims to explore this rapidly evolving field, encompassing cutting-edge methods, emerging applications, and significant challenges in extracting meaning and value from large-scale visual datasets. From high-throughput biomedical imaging and autonomous driving sensors to satellite imagery and social media platforms, visual data has permeated nearly every aspect of our lives. Analyzing this data effectively requires efficient tools that go beyond traditional methods, leveraging advancements in machine learning, computer vision and data science. Exciting new developments in these fields are already paving the way for fully and semi-automated visual data analysis workflows at an unprecedented scale. This workshop will provide a platform for researchers and practitioners to discuss recent breakthroughs and challenges in big visual data analytics, to explore novel applications across diverse domains (e.g., environment monitoring, natural disaster management, robotics, urban planning, healthcare, etc.), and to foster interdisciplinary collaborations between computer vision, data science, machine learning, and domain experts. Its ultimate goal is to help identify promising research directions and pave the way for future innovations.

The BVDA Workshop delves deeper into specific aspects of big visual data, complementing the broader ICIP themes. Thus, it can generate new research interest and collaborations within the main conference community, while attracting researchers and practitioners specifically interested in big visual data analytics. Its interdisciplinary nature, its focus on cutting-edge areas (e.g., large Vision-Language Models, distributed deep neural architectures, fast generative models, etc.) and its synergies with neighboring fields (e.g., privacy-preserving analytics, real-time visual analytics, ethical considerations, etc.) broaden the discussion.

 

Topics of interest include, but are not limited to, the following:

  • Scalable algorithms and architectures for big visual data processing and analysis.
  • High-performance computing, distributed and parallel processing, efficient data storage and retrieval for big visual data analysis.
  • Deep learning architectures for large-scale visual content understanding, search & retrieval: Convolutional Neural Networks (CNNs), Transformers, Self-Supervised Learning, etc.
  • Big visual data summarization.
  • Decentralized/distributed DNN architectures for big visual data analysis.
  • Cloud/edge computing architectures for big visual data analysis.
  • Multimodal big visual data analysis.
  • Large Vision-Language Models/Foundation Models.
  • Fast generative models for visual data: Synthesizing realistic images/videos, data augmentation, in-painting and manipulation.
  • Fast Interpretability and eXplainability (XAI) of visual analytics models: Understanding and communicating model decisions, trust and bias in AI systems.
  • Privacy-preserving analytics in the context of big visual data: Secure data processing, differential privacy, federated learning.
  • Visual analytics for real-time applications: Efficient analysis of visual streaming data, edge/fog computing.
  • Visual analytics for specialized domains: Remote sensing, natural disaster management, medical imaging, social media analysis, etc.
  • Ethical considerations in big visual data analytics: Data ownership, fairness, accountability, societal impact.

 

The regular ICIP paper template/style must be used for submission. All accepted contributions will be published in IEEE Xplore. The paper submission deadline is April 25, 2024.

 

For further details and submission instructions visit: https://icarus.csd.auth.gr/cfp-bvda-icip24-workshop/

 

 

Organizers

 

Prof. Ioannis Pitas: Chair of the International AI Doctoral Academy (AIDA), Director of the Artificial Intelligence and Information Analysis (AIIA) Lab,

Aristotle University of Thessaloniki, Greece.

 

Prof. Massimo Villari: University of Messina, Italy.

 

Dr. Ioannis Mademlis: Postdoctoral researcher at the Harokopio University of Athens.
