Competition on Detection and Recognition of Greek Letters on Papyri

Searching for an object detection challenge in an unusual setting?

Please check out our competition on object detection of Greek letters in papyri data.

The training set and more information are available below:
https://lme.tf.fau.de/competitions/2023-competition-on-detection-and-recognition-of-greek-letters-on-papyri/

The final evaluation set will be released on April 1st and evaluated on
CodaLab. A summary will be presented at the International Conference on
Document Analysis and Recognition (ICDAR) in San José.

Best regards,
Competition organizers

Call for Participation – Deep Video Understanding Grand Challenge 2023 (ACM Multimedia)

Call for participation in the ACM Multimedia Deep Video Understanding Grand Challenge
Where: Ottawa, Ontario, Canada (https://www.acmmm2023.org/grand-challenges-2/)
When: Oct. 29 – Nov. 3, 2023

Background:

Deep video understanding is a difficult task that requires systems to develop a deep analysis and understanding of the relationships between different entities in video, to use known information to reason about other, more hidden information, and to populate a knowledge graph (KG) representation with all acquired information. To work on this task, a system should take all available modalities into consideration (speech, image/video, and in some cases text). The aim of this challenge series is to push the limits of multimodal extraction, fusion, and analysis techniques to address the problem of analyzing long-duration videos holistically and extracting useful knowledge for solving different types of queries. The target knowledge includes both visual and non-visual elements. As video and multimedia data become ever more widespread across domains and contexts, the approaches and techniques developed in this Grand Challenge will only grow in relevance.

Challenge Overview:

Interested participants are invited to apply their approaches and methods to an extended novel Deep Video Understanding (DVU) dataset made available by the challenge organizers. The dataset is split into development data of 14 movies from the 2020-2022 editions of this challenge with Creative Commons licenses, and a new set of 5 movies licensed from the KinoLorberEdu platform. The development data includes: original whole videos, segmented scene shots, image examples of main characters and locations, movie-level KG representations of the relationships between main characters and between characters and key locations, scene-level KG representations of each scene in a movie (location type, characters, interactions between them, order of interactions, sentiment of the scene, and a short textual summary), and a global shared ontology of locations, relationships (family, social, work), interactions, and sentiments. The testing dataset consists of the 5 KinoLorberEdu-licensed movies.
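To make the structure of these annotations concrete, a single scene-level KG entry might be organized roughly as follows. This is a hypothetical Python sketch based only on the field list above; the authoritative schema, ontology labels, and file formats are those published on the dataset webpage, and all names and values here are invented for illustration.

    # Hypothetical scene-level KG entry (all identifiers and labels invented);
    # see the dataset webpage for the actual DVU schema and ontology.
    scene_kg = {
        "scene_id": "movie01_scene_042",
        "location_type": "kitchen",          # drawn from the shared location ontology
        "characters": ["Anna", "Ben"],
        "interactions": [                    # ordered list of interactions
            {"order": 1, "pair": ("Anna", "Ben"), "type": "asks"},
            {"order": 2, "pair": ("Ben", "Anna"), "type": "explains"},
        ],
        "sentiment": "tense",                # scene-level sentiment label
        "summary": "Anna confronts Ben about the missing letter.",
    }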

The organizers will support evaluation and scoring for a mix of main query types, at the overall movie level and at the individual scene level, distributed with the dataset. Participants may submit results for the movie-level queries, the scene-level queries, or both, and within each category queries are grouped for more flexible submission options (please refer to the dataset webpage for more details; illustrative sketches of the graph-oriented queries follow the lists below):

Example Question types at Overall Movie Level:
Multiple-choice question answering on part of the knowledge graph for selected movies.
Fill in the Graph Space – Given a partial graph, systems will be asked to fill in the graph space.

Example Question types at Individual Scene Level:
Find next or previous interaction, given two people, a specific scene, and the interaction between them.
Find a unique scene given a set of interactions and a scene list.
Fill in the Graph Space – Given a partial graph for a scene, systems will be asked to fill in the graph space.
Match selected scenes against a set of scene descriptions written in natural language.
Scene sentiment classification.
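
As noted above, the following are purely illustrative sketches of the two graph-oriented query styles. The actual query and answer formats are specified on the dataset webpage; every name and label below is invented.

    # Multiple-choice question over part of a movie-level knowledge graph
    # (hypothetical format and content):
    mc_question = {
        "question": "What is the relationship between Anna and Ben?",
        "choices": ["siblings", "co-workers", "married", "strangers"],
    }

    # Fill-in-the-graph-space query: a partial graph in which one edge label
    # is masked and must be predicted by the system.
    partial_graph = {
        "nodes": ["Anna", "Ben", "the diner"],
        "edges": [
            ("Anna", "Ben", "?"),            # edge label to be filled in
            ("Ben", "the diner", "works_at"),
        ],
    }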

A new addition to the 2023 challenge is that systems may also submit their results against a secondary dataset in which real-world noise and various types of perturbations and corruptions are introduced (in the visual and audio channels). This makes it possible to measure multimodal robustness in this context.
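
For intuition, corruptions of this kind are commonly implemented as simple signal-level transforms. The following is a generic NumPy sketch of one visual and one audio corruption; the challenge's actual perturbation types and severity levels are defined by the organizers, not here.

    import numpy as np

    def corrupt_frame(frame: np.ndarray, sigma: float = 10.0) -> np.ndarray:
        # Add zero-mean Gaussian pixel noise to a uint8 HxWx3 video frame.
        noise = np.random.normal(0.0, sigma, frame.shape)
        noisy = frame.astype(np.float64) + noise
        return np.clip(noisy, 0, 255).astype(np.uint8)

    def corrupt_audio(samples: np.ndarray, snr_db: float = 20.0) -> np.ndarray:
        # Mix white Gaussian noise into a float audio signal at a target SNR (dB).
        signal_power = np.mean(samples.astype(np.float64) ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        noise = np.random.normal(0.0, np.sqrt(noise_power), samples.shape)
        return samples + noise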

IMPORTANT DATES
DVU development data release: Available now (more updates will be added by April 15)
Testing dataset release: April 15, 2023
Testing queries release: June 2, 2023
Paper submission deadline: July 14, 2023
Submissions of solutions to organizers: July 14, 2023
Results released back to participants: July 24, 2023
Notification to authors: July 24, 2023
Camera-ready submission: August 6, 2023
Grand Challenge at ACM Multimedia: TBD



DVU 2023 Grand Challenge Organizers

Call for Papers – 4th International Conference on NLP & Big Data (NLPD 2023)

Shared by the International Journal on Natural Language Computing (IJNLC) via Academia.edu.


ICMI 2023 Call for tutorial proposals

https://icmi.acm.org/2023/call-for-tutorials/

25th ACM International Conference on Multimodal Interaction

9-13 October 2023, Paris, France

=====================================

 

ACM ICMI 2023 seeks half-day (3-4 hour) tutorial proposals addressing current and emerging topics within the scope of “Science of Multimodal Interactions”. Tutorials are intended to provide a high-quality learning experience to participants from a varied range of backgrounds, and are expected to be self-contained.

 

Prospective organizers should submit a 4-page (maximum) proposal containing the following information:

 

1. Title

2. Abstract appropriate for possible Web promotion of the Tutorial

3. A short list of the distinctive topics to be addressed

4. Learning objectives (specific and measurable objectives)

5. The targeted audience (student / early-stage / advanced researchers, prerequisite knowledge, field of study)

6. Detailed description of the Tutorial and its relevance to multimodal interaction

7. Outline of the tutorial content with a tentative schedule and its duration

8. Description of the presentation format (number of presenters, interactive sessions, practicals)

9. Accompanying material (repository, references) and equipment, emphasizing any required material from the organization committee (subject to approval)

10. Short biography of the organizers (preferably from multiple institutions) together with their contact information and a list of 1-2 key publications related to the tutorial topic

11. Previous editions: If the tutorial was given before, describe when and where it was given, and if it will be modified for ACM ICMI 2023.

 

Proposals will be evaluated using the following criteria:

 

– Importance of the topic and the relevance to ACM ICMI 2023 and its main theme: “Science of Multimodal Interactions”

– Presenters' experience

– Adequacy of the presentation format for the topic

– Targeted audience interest and impact

– Accessibility and quality of accompanying materials (open access)

 

Proposals that focus exclusively on the presenters' own work, as well as commercial presentations, are not acceptable.

 

Unless explicitly mentioned and agreed upon by the Tutorial chairs, the tutorial organizers are responsible for any tutorial-specific requirements, such as handouts, mass storage, rights of distribution (material, handouts, etc.), and copyrights.

 

Important Dates and Contact Details: see the full call at https://icmi.acm.org/2023/call-for-tutorials/

CFP: 18th International Conference on Machine Vision Applications

The Eighteenth International Conference on Machine Vision Applications (MVA2023) will be held at ACT CITY Hamamatsu, Shizuoka, Japan, from July 23 through 25, 2023. The conference is sponsored by the MVA Organization, co-organized by IEICE PRMU and IPSJ SIG-CVIM, and endorsed by IAPR.

As the deadline is approaching, we would like to remind you of it.
We look forward to your submission.
Full Paper (4 pages) Submission Deadline: March 31, 2023

The aim of this conference is to bring together researchers and practitioners from both academia and industry, and to stimulate the exchange of knowledge through intensive discussions on cutting-edge research topics.

Topics of interest include, but are not limited to, sensing, algorithms, and applications (factory automation and robotics, intelligent transport systems, human-computer interaction, biomedical, multimedia, and life) concerning image media.

Papers should be prepared in the designated four-page format and submitted electronically by the deadline.

Accepted papers will be presented in English, either as an oral presentation or a poster presentation.
Note that at least one author of an accepted paper must present their work at the conference.
Authors who cannot travel, for example due to illness, may participate online.

Details can be found on the MVA2023 website or via our Twitter and Facebook accounts.
https://www.mva-org.jp/mva2023/
https://twitter.com/MVA_ORG/
https://www.facebook.com/Mva-org/

Important Dates:
– Full Paper (4 pages) Submission Deadline: March 31, 2023
– Notification of Acceptance: June 13, 2023
– Camera Ready Manuscript Deadline: July 4, 2023

We are also pleased to announce the MVA2023 IAPR invited speakers and tutorial speakers.

IAPR Invited speakers:
– Dima DAMEN (Univ. of Bristol, UK)
– Chung-Chieh Jay KUO (Univ. of Southern California, USA)
– Kensaku MORI (Nagoya Univ., Japan)

Tutorial speakers:
– Michael S. Ryoo
   (SUNY Empire Innovation Associate Professor, Stony Brook University, USA.
    Staff Research Scientist, Robotics at Google, USA)
– Shunsuke Saito
   (Research Scientist, Reality Labs Research (Pittsburgh), USA)

Contact:
MVA2023 Organizing Committee (mva2023-sec-AT-mva-org.jp)

General Chairs
– Kyoko Sudo (Toho University)
– Shunsuke Kudo (The University of Electro-Communications)

Program Chairs
– Ichiro Ide (Nagoya University)
– Wei-Ta Chu (National Cheng Kung University)

Challenge:
In conjunction with MVA2023, we are hosting the Small Object Detection Challenge for Spotting Birds.
This challenge focuses on the Small Object Detection (SOD) problem,
which has been a hot topic in the computer vision community in recent years.
Please check the details here:
        https://www.mva-org.jp/mva2023/challenge

We look forward to your participation.
