TRECVID 2023 : 2nd Call for Participation in the 21st TREC Video Retrieval Evaluation

CALL FOR PARTICIPATION in the 2023 TREC VIDEO RETRIEVAL EVALUATION (TRECVID 2023)  February 2023 - December 2023
Conducted by the National Institute of Standards and Technology (NIST) with additional funding from other US government agencies. Below you can find an overview of the datasets and tasks to be used, and how to apply to participate. All teams are encouraged to apply early to get access to the data and join the Slack workspace of active teams.
Please consult the guidelines for each task for more details, including the General Schedule.
Application URL:  http://ir.nist.gov/tv-submit.open/application.html
Application deadline: June 1. [apply early to get access to task discussions, participant mailing lists, and datasets]

Introduction:
The TREC Video Retrieval Evaluation series (trecvid.nist.gov) promotes progress in content-based analysis of and retrieval from digital video via open, metrics-based evaluation. TRECVID is a laboratory-style evaluation that attempts to model real world situations or significant component tasks involved in such situations. In its 23rd annual evaluation cycle, TRECVID will evaluate participating systems on 5 different video analysis and retrieval tasks (ad-hoc video search, video-to-text captioning, movie QA, medical video QA, and activities detection) using various types of real world datasets. More details about the main datasets and tasks to be used across the 5 proposed tasks in 2023 are given below.
Data:
In TRECVID 2023 NIST will use at least the following data sets:
* Vimeo Creative Commons Collection (V3C)
The V3C is a large-scale video dataset collected from high-quality web videos spanning several years in order to represent videos in the wild. It consists of 28,450 videos with a total duration of 3,801 hours. In 2023, the V3C2 subcollection (1,300 hr and 1.4 million shots) will be used as the testing dataset, while V3C1 (1,000 hr and 1 million shots), previously adopted at TRECVID from 2019 to 2021, will serve as a development dataset. The V3C2 subcollection is planned to be in use from 2022 to 2024.
* IACC.3
The IACC.3 was introduced in 2016 and consists of approximately 4,600 Internet Archive videos (144 GB, 600 h) under Creative Commons licenses in MPEG-4/H.264 format, with durations ranging from 6.5 min to 9.5 min and a mean duration of almost 7.8 min. Most videos have some donor-provided metadata available, e.g., title, keywords, and description. The IACC.3 is provided as a development dataset for teams.
* Kino Lorber Edu Movies
A set of 10 movies licensed from Kino Lorber Edu (https://www.kinolorberedu.com/) will be available to support the deep video understanding (DVU) task. Five movies will be assigned to the training dataset with annotations at the movie and scene levels, while the other five movies will be employed as the testing dataset for the DVU task. All movies are in English, with durations between 1.5 and 2 hours each. Participants will be able to download the whole original movies and use the data for research-only purposes within TRECVID tasks.
* Deep Video Understanding (DVU)
A set of 14 movies (total duration of 17.5 hr) with Creative Commons licenses, previously used at the ACM Multimedia Grand Challenges in 2020 and 2021, will be available as a development dataset for the DVU task. The dataset contains movie-level and scene-level annotations. The movies have been collected from public websites such as Vimeo and the Internet Archive. In total, the 14 movies consist of 621 scenes, 1,572 entities, 650 relationships, and 2,491 interactions.
* TV_VTT
This dataset serves as the training dataset for the Video-to-Text (VTT) task. It contains short videos (ranging from 3 to 10 seconds) from the TRECVID VTT task from 2016 to 2022. There are 12,870 videos with captions. Each video has between 2 and 5 captions, written by dedicated annotators.
* MedVidQA Collections
The VCVAL (Video Corpus Visual Answer Localization) task is supported by the MedVidQA training dataset, consisting of 3,010 human-annotated instructional questions and visual answers from 900 health-related videos. In addition, an automatically created HealthVidQA dataset consists of ~50,000 instructional questions and visual answers from 15,000 health-related videos. A validation dataset consisting of 50 questions and their answer timestamps created from 25 medical instructional videos will also be available. Finally, the testing dataset will contain 50 questions and their answer timestamps created from 25 medical instructional videos.
The MIQG (Medical Instructional Question Generation) task is supported by a training dataset consisting of 2,710 questions and visual segments, formulated from 800 medical instructional videos from the MedVidQA collections. The provided validation dataset will contain 145 questions and answer timestamps created from 49 medical instructional videos, while the test dataset will contain 100 questions and answer timestamps created from 45 medical instructional videos.
* Gatwick and i-LIDS MCT airport surveillance video
The data consist of about 150 hours of airport surveillance video (courtesy of the UK Home Office). The Linguistic Data Consortium has provided event annotations for the entire corpus. The corpus was divided into development and evaluation subsets. Annotations for the 2008 development and test sets are available.
* MEVA dataset
The TRECVID ActEV 2023 Challenge is based on the Multiview Extended Video with Activities (MEVA) Known Facility (KF) dataset. The large-scale MEVA dataset is designed for activity detection in multi-camera environments. It was created under the Intelligence Advanced Research Projects Activity (IARPA) Deep Intermodal Video Analytics (DIVA) program to support DIVA performers and the broader research community. You can download the public MEVA resources (training video, training annotations, and the test set) at mevadata.org.

Tasks:
In TRECVID 2023 NIST will evaluate systems on the following tasks using the [data] indicated:
* AVS: Ad-hoc Video Search (automatic, manually-assisted, relevance feedback) [V3C2]
The ad-hoc search task started in TRECVID 2016 and will continue in 2023 to model the end-user search use case, in which a user is looking for segments of video containing persons, objects, activities, locations, etc., and combinations of these. Given about 40 textual queries created at NIST, systems return for each query all the shots that meet the video need expressed by it, ranked in order of confidence. Although all evaluated submissions will be for automatic runs, interactive systems will have the opportunity to participate in the Video Browser Showdown (VBS) in 2024 using the same testing data (V3C2). A rough illustration of this ranked-retrieval setting is sketched below.
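As a minimal sketch of the ranked-retrieval setting (the run layout, query and shot identifiers, and the plain average-precision function below are illustrative assumptions, not the official AVS submission format; the official measure is extended inferred average precision):

from typing import Dict, List


def average_precision(ranked_shots: List[str], relevant: set) -> float:
    """Plain (non-inferred) average precision over one ranked shot list."""
    hits, precision_sum = 0, 0.0
    for rank, shot_id in enumerate(ranked_shots, start=1):
        if shot_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(relevant), 1)


# Hypothetical run: query id -> shots ranked by decreasing confidence.
run: Dict[str, List[str]] = {
    "query_701": ["shot1234_5", "shot0042_17", "shot7890_2"],
}
# Hypothetical relevance judgments for the same query.
ground_truth = {"query_701": {"shot1234_5", "shot7890_2"}}

for qid, ranked in run.items():
    print(qid, round(average_precision(ranked, ground_truth[qid]), 3))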
* ActEV: Activities in Extended Video [MEVA]
ActEV is a series of evaluations to accelerate development of robust, multi-camera, automatic activity detection algorithms for forensic and real-time alerting applications. ActEV is an extension of the annual TRECVID Surveillance Event Detection (SED) evaluation in which systems also detect and track objects involved in the activities. Each evaluation will challenge systems with new data, system requirements, and/or new activities. An illustrative sketch of a single detection record follows.
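As an illustrative sketch only (the record layout, activity label, and frame-based overlap below are assumptions for exposition, not the official ActEV output schema or scoring protocol):

from dataclasses import dataclass


@dataclass
class ActivityDetection:
    activity: str      # hypothetical activity label, e.g. "person_enters_vehicle"
    video_id: str      # source camera / video file
    start_frame: int
    end_frame: int
    confidence: float


def temporal_iou(a: ActivityDetection, b: ActivityDetection) -> float:
    """Intersection-over-union of two detections' frame spans."""
    inter = max(0, min(a.end_frame, b.end_frame) - max(a.start_frame, b.start_frame))
    union = (a.end_frame - a.start_frame) + (b.end_frame - b.start_frame) - inter
    return inter / union if union > 0 else 0.0


# A system detection matched against a made-up reference annotation.
system = ActivityDetection("person_enters_vehicle", "cam_05.avi", 1200, 1320, 0.87)
reference = ActivityDetection("person_enters_vehicle", "cam_05.avi", 1180, 1300, 1.0)
print(round(temporal_iou(system, reference), 2))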
* MedVidQA: Medical Video Question Answering [MedVidQA Collections]
Many people prefer instructional videos for teaching or learning how to accomplish a particular task through a series of step-by-step procedures in an effective and efficient manner. Similarly, medical instructional videos are well suited to answering consumers' healthcare questions that demand instruction, delivering key information through visual and verbal communication. With the aim of providing visual instructional answers to consumers' first aid, medical emergency, and medical education questions, this new TRECVID task on medical video question answering introduces a challenge to foster research toward systems that can understand medical videos, provide visual answers to natural language questions, and generate instructional questions from medical video. Following the success of the 1st MedVidQA shared task in the BioNLP workshop at ACL 2022, MedVidQA 2023 at TRECVID expands the tasks and introduces a new track on language-video understanding and generation. This track comprises two main tasks, Video Corpus Visual Answer Localization (VCVAL) and Medical Instructional Question Generation (MIQG); an illustrative sketch of their outputs is given below. For detailed information, please refer to the task guidelines page.
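As a minimal sketch of what the two subtask outputs might look like (all field names, identifiers, questions, and timestamps below are made up for illustration; the official submission formats are defined in the task guidelines):

# VCVAL: for each instructional question, localize a visual answer span
# (a video id plus start/end times) within the video corpus.
vcval_prediction = {
    "question": "How do I apply a pressure bandage to a bleeding wound?",
    "video_id": "health_video_0123",
    "answer_start_sec": 45.0,
    "answer_end_sec": 112.5,
}

# MIQG: given a video and a visual segment, generate the instructional
# question that the segment answers.
miqg_prediction = {
    "video_id": "health_video_0456",
    "segment_start_sec": 30.0,
    "segment_end_sec": 95.0,
    "generated_question": "How can I perform the Heimlich maneuver on an adult?",
}

print(vcval_prediction["question"], "->",
      (vcval_prediction["answer_start_sec"], vcval_prediction["answer_end_sec"]))
print(miqg_prediction["video_id"], "->", miqg_prediction["generated_question"])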
* DVU: Deep Video Understanding [Kino Lorber Edu Movies]
Deep video understanding is a difficult task that requires computer vision systems to develop a deep analysis and understanding of the relationships between different entities in video, and to use known information to reason about other, more hidden information. The aim of the task is to push the limits of multimedia analysis techniques to analyze long-duration videos holistically and extract useful knowledge for answering different kinds of queries. The knowledge in the target queries includes both visual and non-visual elements. Participating systems should take into consideration all available modalities (speech, image/video, and in some cases text).
The task for participating researchers is as follows: given a whole original movie (e.g., 1.5-2 hours long), image snapshots of the main entities (persons, locations, and concepts) per movie, and an ontology of relationships, interactions, locations, and sentiments used to annotate each movie at the global movie level (relationships between entities) as well as at the fine-grained scene level (scene sentiment, interactions between characters, and locations of scenes), systems are expected to generate a knowledge base of the main actors and their relations (such as family, work, social, etc.) over the whole movie, and of the interactions between them at the scene level; a simplified sketch of such a knowledge base appears below. This representation can be used to answer a set of queries at the movie level and/or scene level per movie. The task will support two tracks (subtasks), and teams can join one or both: the Movie track, where participants are asked queries at the whole-movie level, and the Scene track, where queries are targeted toward specific movie scenes. New this year is a subtask where systems can also submit results against the same queries but on a modified testing dataset to which natural corruptions and perturbations have been introduced to simulate real-world noisy data.
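A simplified, hypothetical knowledge-base layout for this task might look like the following (entity names, relation and interaction labels, and the query helper are illustrative assumptions, not the official ontology or query format):

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class MovieKnowledgeBase:
    # Movie-level facts: (subject entity, relation, object entity).
    relationships: List[Tuple[str, str, str]] = field(default_factory=list)
    # Scene-level facts: scene id -> list of (subject, interaction, object).
    scene_interactions: Dict[int, List[Tuple[str, str, str]]] = field(default_factory=dict)
    # Scene-level sentiment: scene id -> sentiment label.
    scene_sentiment: Dict[int, str] = field(default_factory=dict)

    def who_is(self, relation: str, target: str) -> List[str]:
        """Answer a simple movie-level query, e.g. 'who is the sister of Mark?'."""
        return [s for s, r, o in self.relationships if r == relation and o == target]


# Populate the knowledge base with made-up facts from an imaginary movie.
kb = MovieKnowledgeBase()
kb.relationships.append(("Anna", "sister_of", "Mark"))
kb.scene_interactions[12] = [("Anna", "argues_with", "Mark")]
kb.scene_sentiment[12] = "tense"
print(kb.who_is("sister_of", "Mark"))  # -> ['Anna']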
* VTT: Video to Text Description [V3C3]
Automatic annotation of videos using natural language text descriptions has been a long-standing goal of computer vision. The task involves understanding of many concepts such as objects, actions, scenes, person-object relations, the temporal order of events, and many others. In recent years there have been major advances in computer vision techniques which have enabled researchers to begin working on such problems in practice. Given a set of short video clips, systems are asked to submit results for a main task: the "Description Generation" task requires systems to automatically generate a text description (one sentence) for each video clip based on who is doing what, where, and when. A second subtask proposed this year is to generate text descriptions for the same testing dataset after natural corruptions and perturbations have been introduced to simulate real-world noisy data. A simple sketch of the expected output is shown below.
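As a minimal sketch of the description-generation output (the video identifiers, captions, and the naive word-overlap score below are illustrative only; the official VTT evaluation relies on standard captioning and similarity metrics rather than this toy measure):

def word_overlap(candidate: str, reference: str) -> float:
    """Fraction of reference words that also appear in the candidate sentence."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / len(ref) if ref else 0.0


# Hypothetical submission: one generated sentence per test clip.
submission = {
    "video_001": "a man rides a bicycle down a busy city street during the day",
}
# Hypothetical reference caption written by an annotator.
references = {
    "video_001": "a man rides his bicycle along a busy street",
}

for vid, caption in submission.items():
    print(vid, round(word_overlap(caption, references[vid]), 2))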
In addition to the data, TRECVID will provide uniform scoring procedures, and a forum for organizations interested in comparing their approaches and results.
Participants will be encouraged to share resources and intermediate system outputs to lower entry barriers and enable analysis of various components' contributions and interactions.
**************************************************
* You are invited to participate in TRECVID 2023 *
**************************************************
The evaluation is defined by the Guidelines. A draft version is available, and further feedback from participants is welcome until April 2023.
You should read the guidelines carefully before applying to participate in one or more tasks: Guidelines
Please note
1) Dissemination of TRECVID work and results other than in the (publicly available) conference proceedings is welcomed, but the
conditions of participation specifically preclude any advertising claims based on TRECVID results.
2) All system output and results submitted to NIST are published in the Proceedings or on the public portions of TRECVID web site archive.
3) The workshop is open to participating groups that submit results for at least one task, to selected government personnel from sponsoring agencies, data donors, and interested researchers who may never have participated before and would like to know more about TRECVID.
4) Each participating group is required to submit before the workshop a notebook paper describing their experiments and results. This is true even for groups who may not be able to attend the workshop.
5) It is the responsibility of each team contact to make sure that information distributed via the call for participation and the tv23.list@list.nist.gov email list is disseminated to all team members with a need to know. This includes information about deadlines and restrictions on use of data.
6) By applying to participate you indicate your acceptance of the above conditions and obligations.
There is a tentative schedule for the tasks included in the Guidelines webpage:
Schedule
Workshop format

Whether the 2023 workshop will be held in person, hybrid, or virtually is still to be decided. Details will be provided to participants as soon as they are available.
The TRECVID workshop is used as a forum both for presentation of results (including failure analyses and system comparisons), and for more lengthy system presentations describing retrieval techniques used, experiments run using the data, and other issues of interest to researchers in information retrieval and computer vision. As there is a limited amount of time for these presentations, the evaluation coordinators and NIST will determine which groups are asked to speak and which groups will present in a poster session. Groups that are interested in having a speaking slot during the workshop will be asked to submit a short abstract before the workshop describing the experiments they performed. Speakers will be selected based on these abstracts.

How to respond to this call
Organizations wishing to participate in TRECVID 2023 must respond to this call for participation by submitting an on-line application no later than 1 June (the earlier the better). Only ONE APPLICATION PER TEAM please, regardless of how many organizations the team comprises.
*PLEASE* only apply if you are able and fully intend to complete the work for at least one task. Taking the data but not submitting any runs threatens the continued operation of the workshop and the availability of data for the entire community.
Here is the application URL:
http://ir.nist.gov/tv-submit.open/application.html
You will receive an immediate automatic response when your application is received. NIST will respond with more detail to all applications submitted before the end of March. At that point you'll be given the active participant's userid and password, be subscribed to the tv23.list email discussion list, and can participate in finalizing the guidelines as well as sign up to get the data, which is controlled by separate passwords. All active teams will also be added to a Slack workspace to encourage more communication and facilitate announcements.
TRECVID 2023 email discussion list
The tv23.list email discussion list (tv23.list@list.nist.gov) will serve as the main forum for discussion and for disseminating information about TRECVID 2023. It is each participant's responsibility to monitor the tv23.list postings. It accepts postings only from the email addresses used to subscribe to it. An archive of past postings is available using the active participant's userid/password.
Questions?
Any administrative questions about conference participation, application format/content, subscriptions to the
tv23.list, etc. should be sent to george.awad at nist.gov.
Best regards,
TRECVID 2023 organizers team
