OLR Challenge 2018



Oriental Language Recognition (OLR) 2018 Challenge

Oriental languages present interesting characteristics. The OLR challenge series aims at boosting language recognition technology for oriental languages. Following the success of OLR Challenge 2016 and OLR Challenge 2017, the new challenge in 2018 follows the same theme, but sets up more challenging tasks:

  • Short-utterance identification task: This is a closed-set identification task, which means the language of each utterance is among the 10 known target languages. The utterances are as short as 1 second.
  • Confusing-language identification task: This task identifies the language of utterances from 3 highly confusing languages (Cantonese, Korean and Mandarin).
  • Open-set recognition task: In this task, the test utterance may be in none of the 10 target languages.
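The three tasks above differ mainly in the decision rule applied to per-language scores. As an illustrative sketch only (not the challenge's official scoring protocol, which is defined in the evaluation plan), the closed-set and open-set decisions could look like this, with hypothetical language labels, scores, and rejection threshold:

```python
import math

# Hypothetical target-language labels and per-language scores (e.g. model
# log-likelihoods) for one utterance; names and values are illustrative only.
LANGUAGES = ["Cantonese", "Indonesian", "Japanese", "Korean", "Russian",
             "Vietnamese", "Mandarin", "Kazak", "Tibetan", "Uyghur"]

def closed_set_decision(scores):
    """Tasks 1 and 2: the utterance is known to be one of the target
    languages, so simply pick the highest-scoring one."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LANGUAGES[best]

def open_set_decision(scores, threshold=0.5):
    """Task 3: the utterance may be in none of the target languages.
    One common (illustrative) strategy: reject the utterance if even the
    best softmax posterior falls below a confidence threshold."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # stable softmax numerators
    total = sum(exps)
    best = exps.index(max(exps))
    return LANGUAGES[best] if exps[best] / total >= threshold else "out-of-set"

scores = [0.1, 0.2, 0.1, 2.5, 0.3, 0.2, 2.3, 0.1, 0.0, 0.1]
print(closed_set_decision(scores))  # the top-scoring target language
print(open_set_decision(scores))    # rejected: no language is confident enough
```

Note the design point this sketch makes: the closed-set tasks need only an argmax over language scores, while the open-set task additionally requires calibrated confidences and a rejection threshold, which is what makes it harder.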

We will publish the results at a special session of APSIPA ASC 2018.


  • Ground truth for test data released, download here.
  • Test data for 3 tasks released, download here.


The challenge is based on two multilingual databases: AP16-OL7, designed for OLR Challenge 2016, and AP17-OL3, designed for OLR Challenge 2017.

AP16-OL7 is provided by SpeechOcean (www.speechocean.com), and AP17-OL3 is provided by Tsinghua University, Northwest Minzu University and Xinjiang University, under the M2ASR project supported by NSFC.

AP16-OL7 has the following features:

  • Mobile channel
  • 7 languages in total
  • 71 hours of speech signals in total
  • Transcriptions and lexica are provided
  • The data profile is here
  • The License for the data is here

AP17-OL3 has the following features:

  • Mobile channel
  • 3 languages in total
  • Tibetan provided by Prof. Guanyu Li@Northwest Minzu Univ.
  • Uyghur and Kazak provided by Prof. Askar Hamdulla@Xinjiang University.
  • 35 hours of speech signals in total
  • Transcriptions and lexica are provided
  • The data profile is here
  • The License for the data is here

Evaluation plan

Refer to the following paper:

Zhiyuan Tang, Dong Wang, Qing Chen: AP18-OLR Challenge: Three Tasks and Their Baselines, submitted to APSIPA ASC 2018 (arXiv)

Evaluation tools

  • The Kaldi-based baseline scripts here

Participation rules

  • Participants from both academia and industry are welcome
  • Publications based on the data provided by the challenge should cite the following papers:

Dong Wang, Lantian Li, Difei Tang, Qing Chen, AP16-OL7: a multilingual database for oriental languages and a language recognition baseline, APSIPA ASC 2016. pdf

Zhiyuan Tang, Dong Wang, Yixiang Chen, Qing Chen: AP17-OLR Challenge: Data, Plan, and Baseline, APSIPA ASC 2017. pdf

Zhiyuan Tang, Dong Wang, Qing Chen: AP18-OLR Challenge: Three Tasks and Their Baselines, submitted to APSIPA ASC 2018. pdf

Important dates

  • May 1, AP18-OLR training/dev data release.
  • Sep 1, registration deadline.
  • Oct 8, test data release, download here.
  • Oct 15, 24:00, Beijing time, submission deadline.
  • APSIPA ASC 2018, results announcement.

Registration procedure

If you intend to participate in the challenge, or if you have any questions, comments or suggestions about it, please send email to the organizers:

  • Prof. Dong Wang (wangdong99@mails.tsinghua.edu.cn)
  • Dr. Zhiyuan Tang (tangzhiyuan12@mails.ucas.ac.cn)
  • Ms. Qing Chen (chenqing@speechocean.com)



Organizers

  • Dong Wang, Tsinghua University [home]
  • Zhiyuan Tang, Tsinghua University [home]
  • Qing Chen, SpeechOcean

Ranking list

The Oriental Language Recognition (OLR) Challenge 2018, co-organized by CSLT@Tsinghua University and Speechocean, concluded with great success. The results were published at APSIPA ASC, Dec 12-15, 2018, Hawaii, USA.


A total of 25 teams registered for the challenge, and 17 of them had submitted results by the deadline. The submissions were ranked on each of the 3 language recognition tasks separately: short-utterance identification, confusing-language identification, and open-set recognition. Only the results and details of the top 10 teams are presented here.

Task 1




Task 2




Task 3




OLR 2018 Workshop

The workshop was successfully held on Sunday afternoon, Mar 24, 2019, in FIT building, Tsinghua University.





Prof Qingyang Hong: Associate Professor at Xiamen University. [pdf]


Prof Zhengda Tang: Associate Professor at CASS. [pdf]


Prof Ming Li: Associate Professor at Duke Kunshan University. [pdf]


Briefly, the workshop featured several excellent talks and presented awards to the winners of OLR Challenge 2018.

The hall was filled to capacity.


First, Prof Dong Wang, the main organizer of OLR challenge 2018, introduced the OLR Challenge and the workshop.


Then, Prof Qingyang Hong from Xiamen University gave a talk about how his team achieved the best performance on Task 1 of the challenge. Prof Hong's slides can be found at the end of this page.


Next, Prof Zhengda Tang from the Chinese Academy of Social Sciences gave a talk on the world's languages. Prof Tang's slides can be found at the end of this page.


After that, Prof Ming Li from Duke Kunshan University shared his team's end-to-end technologies for speaker and language recognition. Prof Li's slides can be found at the end of this page.


Finally, Prof Thomas Fang Zheng, the director of CSLT, and Liming Song, Marketing Director of Speechocean, presented the awards to the winners of the challenge.

The xmuspeech team from Xiamen University achieved the best performance on the first task.


The NetEase AI-Speech Group achieved the best performance on both the second and third tasks.