Dear colleague,
Apologies for possible cross-posting. We are proud to launch the SdSV (Short-duration Speaker Verification) Challenge 2020.
Best regards,
Jahangir Alam, PhD
Researcher, CRIM, Montreal (Quebec) Canada
*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
Short-duration Speaker Verification (SdSV) Challenge 2020
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
Are you searching for new challenges in speaker recognition? Join the SdSV Challenge 2020 – the first challenge with a broad focus on systematic benchmarking and analysis of short-duration speaker verification under varying degrees of phonetic variability.
CHALLENGE TASKS:
The SdSV Challenge 2020 consists of two tasks. Task 1 is speaker verification in text-dependent mode, where the lexical content of the utterance (in both English and Persian) is also taken into consideration. Task 2 is speaker verification in text-independent mode, with same-language and cross-language trials. A baseline system will be provided for each task. The SdSV Challenge corpus is drawn from the DeepMine corpus, which includes voice recordings of 1800 speakers and is the largest public corpus designed for short-duration speaker verification.
BACKGROUND:
Text-dependency is known to improve the accuracy of speaker recognition on short-duration speech segments. However, recent breakthroughs in accuracy and robustness for text-independent speaker verification (e.g., deep speaker embeddings such as x-vectors) were achieved at the cost of intensive use of training data. This leads to practical difficulties in adapting existing methods to the specific case of text-dependent speaker verification. The aim of the SdSV Challenge is to draw researchers' attention to the importance of speaker recognition under conditions where test utterances are of short duration and of variable phonetic content. Both text-dependent and text-independent speaker recognition stand to benefit from progress in this area, and we expect modern machine learning methods (neural networks, sequence models, etc.) to play a key role.
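For newcomers, the minimal Python sketch below illustrates how a verification trial is commonly scored with deep speaker embeddings such as x-vectors: each utterance is mapped to a fixed-dimensional vector, the enrollment embeddings are averaged, and the trial score is the cosine similarity against the test embedding, compared to a threshold tuned on development data. This is only an illustration under loose assumptions, not the official baseline; the embedding extractor is left out and random vectors stand in for real embeddings.

    # Minimal sketch: scoring a verification trial with speaker embeddings.
    # Not the official SdSV baseline; real systems replace the random vectors
    # below with embeddings from a trained extractor (e.g., an x-vector network).
    import numpy as np

    def cosine_score(enroll_emb, test_emb):
        """Cosine similarity between an enrollment and a test embedding."""
        enroll = enroll_emb / np.linalg.norm(enroll_emb)
        test = test_emb / np.linalg.norm(test_emb)
        return float(np.dot(enroll, test))

    def score_trial(enroll_embs, test_emb):
        """Average multiple enrollment embeddings, then score against the test utterance."""
        enroll_mean = np.mean(np.stack(enroll_embs), axis=0)
        return cosine_score(enroll_mean, test_emb)

    # Example with random vectors standing in for real 512-dimensional embeddings.
    rng = np.random.default_rng(0)
    enrollment = [rng.standard_normal(512) for _ in range(3)]  # three enrollment utterances
    test = rng.standard_normal(512)                            # one short test utterance
    print(score_trial(enrollment, test))  # compare against a decision threshold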
PRIZES:
There will be three prizes for each task. Top performers will be determined based on the results of the primary systems on the evaluation subset. In addition to the cash prizes, winners will receive certificates of achievement.
Rank 1: 500 EUR
Rank 2: 300 EUR
Rank 3: 100 EUR
SCHEDULE:
Jan 10, 2020 Release of evaluation plan
Jan 10, 2020 Release of train, development and evaluation sets
Jan 15, 2020 Evaluation platform open
Mar 13, 2020 Challenge deadline
Mar 20, 2020 Release of challenge results
Mar 30, 2020 Interspeech submission deadline
Sep 14, 2020 Post-challenge evaluation
Sep 14-18, 2020 SdSV Challenge 2020 special session at Interspeech
REGISTRATION:
The challenge leaderboards are hosted on CodaLab. Participants need a CodaLab account to submit results. When creating an account, the team name can be the name of your organization or any anonymous identity. The same account should be used for both Task 1 and Task 2. More details here: https://sdsvc.github.io/registration/
EVALUATION PLAN:
The evaluation plan and license agreement of the dataset can be downloaded via the following:
Evaluation plan: https://sdsvc.github.io/assets/SdSV_Challenge_Evaluation_Plan.pdf
License agreement: https://sdsvc.github.io/assets/SdSV_Challenge_License_Agreement.pdf
ORGANIZERS:
Hossein Zeinali, Amirkabir University of Technology, Iran.
Kong Aik Lee, NEC Corporation, Japan.
Jahangir Alam, CRIM, Canada.
Lukáš Burget, Brno University of Technology, Czech Republic.
FURTHER INFORMATION: