
WAT

The Workshop on Asian Translation
Evaluation Results


BLEU


# | Team | Task | Date/Time | DataID | BLEU (moses-tokenizer) | Method | Other Resources | System Description

(Scores are from the moses-tokenizer column; the juman, kytea, mecab, stanford-segmenter-ctb, stanford-segmenter-pku, indic-tokenizer, myseg, and kmseg columns are empty for this task.)

1 | NLPHut | INDIC21 hi-en | 2021/03/19 16:11:51 | 4589 | 23.65 | NMT | No | Transformer trained using all languages' PMI data, then fine-tuned using all hi-en data.
2 | ORGANIZER | INDIC21 hi-en | 2021/04/08 17:21:24 | 4793 | 28.21 | NMT | No | Bilingual baseline trained on PMI data. Transformer base. LR = 10^-3.
3 | NICT-5 | INDIC21 hi-en | 2021/04/21 15:41:58 | 5278 | 35.80 | NMT | No | Pretrain mBART on IndicCorp and fine-tune on bilingual PMI data. Beam search. Model is bilingual.
4 | SRPOL | INDIC21 hi-en | 2021/04/21 19:31:15 | 5327 | 39.49 | NMT | No | Base Transformer on all WAT21 data.
5 | NICT-5 | INDIC21 hi-en | 2021/04/22 11:51:35 | 5353 | 36.20 | NMT | No | mBART+MNMT. Beam 4.
6 | gaurvar | INDIC21 hi-en | 2021/04/25 17:35:48 | 5531 | 20.90 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
7 | gaurvar | INDIC21 hi-en | 2021/04/25 18:08:20 | 5532 | 20.90 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
8 | gaurvar | INDIC21 hi-en | 2021/04/25 18:38:48 | 5554 | 21.33 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
9 | gaurvar | INDIC21 hi-en | 2021/04/25 18:41:08 | 5555 | 20.63 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
10 | gaurvar | INDIC21 hi-en | 2021/04/25 18:58:16 | 5567 | 21.33 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
11 | sakura | INDIC21 hi-en | 2021/04/30 22:40:56 | 5872 | 41.58 | NMT | No | Fine-tuning of multilingual mBART many-to-many model with training corpus.
12 | NLPHut | INDIC21 hi-en | 2021/05/03 00:15:59 | 5985 | 24.55 | NMT | No | Transformer with source and target language tags trained using all languages' PMI data, then fine-tuned using all hi-en data.
13 | IIIT-H | INDIC21 hi-en | 2021/05/03 18:12:58 | 6017 | 43.23 | NMT | No | MNMT system (XX-En) trained by exploiting lexical similarity on the PMI+CVIT parallel corpus, then improved using back-translation on PMI monolingual data, followed by fine-tuning.
14 | CFILT | INDIC21 hi-en | 2021/05/04 01:11:55 | 6054 | 39.71 | NMT | No | Multilingual (many-to-one, XX-En) NMT model based on Transformer with shared encoder and decoder.
15 | coastal | INDIC21 hi-en | 2021/05/04 01:44:10 | 6099 | 26.41 | NMT | No | seq2seq model trained on all WAT2021 data.
16 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:52:55 | 6115 | 30.90 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model with shared encoder-decoder and shared BPE vocabulary, trained using all Indo-Aryan language data converted to the same script.
17 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:57:40 | 6126 | 33.70 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model trained using all Indo-Aryan languages with shared encoder-decoder and shared BPE vocabulary, with all Indic-language data converted to the same script.
18 | coastal | INDIC21 hi-en | 2021/05/04 05:41:33 | 6164 | 36.47 | NMT | No | mT5 trained only on PMI.
19 | sakura | INDIC21 hi-en | 2021/05/04 13:13:06 | 6204 | 42.61 | NMT | No | Pre-training multilingual mBART many-to-many model with training corpus, followed by fine-tuning on PMI parallel data.
20 | SRPOL | INDIC21 hi-en | 2021/05/04 15:25:38 | 6244 | 46.93 | NMT | No | Ensemble of many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
21 | SRPOL | INDIC21 hi-en | 2021/05/04 16:30:25 | 6270 | 45.61 | NMT | No | Many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
22 | IITP-MT | INDIC21 hi-en | 2021/05/04 17:49:10 | 6284 | 40.08 | NMT | No | Many-to-one model trained on all training data with base Transformer. All Indic-language data is romanized. Model fine-tuned on back-translated PMI monolingual corpus.
23 | mcairt | INDIC21 hi-en | 2021/05/04 19:12:14 | 6333 | 40.05 | NMT | No | Multilingual model (many-to-one) trained on all WAT 2021 data using base Transformer.
24 | NICT-5 | INDIC21 hi-en | 2021/06/21 11:55:03 | 6475 | 41.31 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs.
25 | NICT-5 | INDIC21 hi-en | 2021/06/25 11:48:12 | 6495 | 42.44 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs. MNMT model.
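
The BLEU figures above come from WAT's automatic evaluation server, which for English output applies Moses tokenization before scoring. As a rough sanity check against scores like those in the table, sacrebleu's built-in "13a" tokenization closely mirrors that preprocessing for English; the sketch below is a minimal approximation, not WAT's official pipeline, and the file names hyp.en and ref.en are hypothetical.

```python
# Minimal sketch: approximate the BLEU scoring used above with sacrebleu.
# Assumptions: "hyp.en" holds system output and "ref.en" the reference
# translations, one sentence per line; sacrebleu's "13a" tokenizer stands
# in for the Moses tokenization used by WAT's official scorer.
import sacrebleu

with open("hyp.en", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("ref.en", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes the hypothesis list and a list of reference streams
# (one stream per reference set; here a single reference).
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="13a")
print(f"BLEU = {bleu.score:.2f}")  # comparable in spirit to the table above
```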


RIBES


# | Team | Task | Date/Time | DataID | RIBES (moses-tokenizer) | Method | Other Resources | System Description

(Scores are from the moses-tokenizer column; the juman, kytea, mecab, stanford-segmenter-ctb, stanford-segmenter-pku, indic-tokenizer, myseg, and kmseg columns are empty for this task.)

1 | NLPHut | INDIC21 hi-en | 2021/03/19 16:11:51 | 4589 | 0.782386 | NMT | No | Transformer trained using all languages' PMI data, then fine-tuned using all hi-en data.
2 | ORGANIZER | INDIC21 hi-en | 2021/04/08 17:21:24 | 4793 | 0.782146 | NMT | No | Bilingual baseline trained on PMI data. Transformer base. LR = 10^-3.
3 | NICT-5 | INDIC21 hi-en | 2021/04/21 15:41:58 | 5278 | 0.828390 | NMT | No | Pretrain mBART on IndicCorp and fine-tune on bilingual PMI data. Beam search. Model is bilingual.
4 | SRPOL | INDIC21 hi-en | 2021/04/21 19:31:15 | 5327 | 0.844792 | NMT | No | Base Transformer on all WAT21 data.
5 | NICT-5 | INDIC21 hi-en | 2021/04/22 11:51:35 | 5353 | 0.832916 | NMT | No | mBART+MNMT. Beam 4.
6 | gaurvar | INDIC21 hi-en | 2021/04/25 17:35:48 | 5531 | 0.729188 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
7 | gaurvar | INDIC21 hi-en | 2021/04/25 18:08:20 | 5532 | 0.729188 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
8 | gaurvar | INDIC21 hi-en | 2021/04/25 18:38:48 | 5554 | 0.759034 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
9 | gaurvar | INDIC21 hi-en | 2021/04/25 18:41:08 | 5555 | 0.764441 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
10 | gaurvar | INDIC21 hi-en | 2021/04/25 18:58:16 | 5567 | 0.759034 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
11 | sakura | INDIC21 hi-en | 2021/04/30 22:40:56 | 5872 | 0.856469 | NMT | No | Fine-tuning of multilingual mBART many-to-many model with training corpus.
12 | NLPHut | INDIC21 hi-en | 2021/05/03 00:15:59 | 5985 | 0.785027 | NMT | No | Transformer with source and target language tags trained using all languages' PMI data, then fine-tuned using all hi-en data.
13 | IIIT-H | INDIC21 hi-en | 2021/05/03 18:12:58 | 6017 | 0.853267 | NMT | No | MNMT system (XX-En) trained by exploiting lexical similarity on the PMI+CVIT parallel corpus, then improved using back-translation on PMI monolingual data, followed by fine-tuning.
14 | CFILT | INDIC21 hi-en | 2021/05/04 01:11:55 | 6054 | 0.837668 | NMT | No | Multilingual (many-to-one, XX-En) NMT model based on Transformer with shared encoder and decoder.
15 | coastal | INDIC21 hi-en | 2021/05/04 01:44:10 | 6099 | 0.797755 | NMT | No | seq2seq model trained on all WAT2021 data.
16 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:52:55 | 6115 | 0.807304 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model with shared encoder-decoder and shared BPE vocabulary, trained using all Indo-Aryan language data converted to the same script.
17 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:57:40 | 6126 | 0.820716 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model trained using all Indo-Aryan languages with shared encoder-decoder and shared BPE vocabulary, with all Indic-language data converted to the same script.
18 | coastal | INDIC21 hi-en | 2021/05/04 05:41:33 | 6164 | 0.840014 | NMT | No | mT5 trained only on PMI.
19 | sakura | INDIC21 hi-en | 2021/05/04 13:13:06 | 6204 | 0.859128 | NMT | No | Pre-training multilingual mBART many-to-many model with training corpus, followed by fine-tuning on PMI parallel data.
20 | SRPOL | INDIC21 hi-en | 2021/05/04 15:25:38 | 6244 | 0.872874 | NMT | No | Ensemble of many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
21 | SRPOL | INDIC21 hi-en | 2021/05/04 16:30:25 | 6270 | 0.867712 | NMT | No | Many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
22 | IITP-MT | INDIC21 hi-en | 2021/05/04 17:49:10 | 6284 | 0.851601 | NMT | No | Many-to-one model trained on all training data with base Transformer. All Indic-language data is romanized. Model fine-tuned on back-translated PMI monolingual corpus.
23 | mcairt | INDIC21 hi-en | 2021/05/04 19:12:14 | 6333 | 0.850322 | NMT | No | Multilingual model (many-to-one) trained on all WAT 2021 data using base Transformer.
24 | NICT-5 | INDIC21 hi-en | 2021/06/21 11:55:03 | 6475 | 0.851512 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs.
25 | NICT-5 | INDIC21 hi-en | 2021/06/25 11:48:12 | 6495 | 0.851808 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs. MNMT model.
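
Unlike BLEU's local n-gram matching, RIBES rewards correct global word order, which matters for language pairs with divergent syntax such as Hindi-English. For reference, a sketch of the definition from Isozaki et al. (2010); the parameter values are the defaults of the reference implementation, and the exact settings used by WAT are assumed rather than confirmed here.

```latex
% RIBES (Isozaki et al., 2010): NKT is Kendall's tau over aligned word
% ranks between hypothesis and reference, normalized to [0, 1]; P is
% unigram precision and BP a BLEU-style brevity penalty. alpha = 0.25
% and beta = 0.10 are the reference-implementation defaults (assumed
% here for WAT's scoring).
\[
  \mathrm{RIBES} = \mathrm{NKT} \cdot P^{\alpha} \cdot \mathrm{BP}^{\beta},
  \qquad
  \mathrm{NKT} = \frac{\tau + 1}{2}.
\]
```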


AMFM


# | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description

(The original layout carried ten unused tokenizer columns; a single AMFM score is reported per entry.)

1 | NLPHut | INDIC21 hi-en | 2021/03/19 16:11:51 | 4589 | 0.714917 | NMT | No | Transformer trained using all languages' PMI data, then fine-tuned using all hi-en data.
2 | ORGANIZER | INDIC21 hi-en | 2021/04/08 17:21:24 | 4793 | 0.736131 | NMT | No | Bilingual baseline trained on PMI data. Transformer base. LR = 10^-3.
3 | NICT-5 | INDIC21 hi-en | 2021/04/21 15:41:58 | 5278 | 0.808180 | NMT | No | Pretrain mBART on IndicCorp and fine-tune on bilingual PMI data. Beam search. Model is bilingual.
4 | SRPOL | INDIC21 hi-en | 2021/04/21 19:31:15 | 5327 | 0.825005 | NMT | No | Base Transformer on all WAT21 data.
5 | NICT-5 | INDIC21 hi-en | 2021/04/22 11:51:35 | 5353 | 0.805716 | NMT | No | mBART+MNMT. Beam 4.
6 | gaurvar | INDIC21 hi-en | 2021/04/25 17:35:48 | 5531 | 0.714649 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
7 | gaurvar | INDIC21 hi-en | 2021/04/25 18:08:20 | 5532 | 0.714649 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
8 | gaurvar | INDIC21 hi-en | 2021/04/25 18:38:48 | 5554 | 0.722822 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
9 | gaurvar | INDIC21 hi-en | 2021/04/25 18:41:08 | 5555 | 0.724694 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
10 | gaurvar | INDIC21 hi-en | 2021/04/25 18:58:16 | 5567 | 0.722822 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
11 | sakura | INDIC21 hi-en | 2021/04/30 22:40:56 | 5872 | 0.834172 | NMT | No | Fine-tuning of multilingual mBART many-to-many model with training corpus.
12 | NLPHut | INDIC21 hi-en | 2021/05/03 00:15:59 | 5985 | 0.721805 | NMT | No | Transformer with source and target language tags trained using all languages' PMI data, then fine-tuned using all hi-en data.
13 | IIIT-H | INDIC21 hi-en | 2021/05/03 18:12:58 | 6017 | 0.823007 | NMT | No | MNMT system (XX-En) trained by exploiting lexical similarity on the PMI+CVIT parallel corpus, then improved using back-translation on PMI monolingual data, followed by fine-tuning.
14 | CFILT | INDIC21 hi-en | 2021/05/04 01:11:55 | 6054 | 0.822034 | NMT | No | Multilingual (many-to-one, XX-En) NMT model based on Transformer with shared encoder and decoder.
15 | coastal | INDIC21 hi-en | 2021/05/04 01:44:10 | 6099 | 0.786466 | NMT | No | seq2seq model trained on all WAT2021 data.
16 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:52:55 | 6115 | 0.775032 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model with shared encoder-decoder and shared BPE vocabulary, trained using all Indo-Aryan language data converted to the same script.
17 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:57:40 | 6126 | 0.791408 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model trained using all Indo-Aryan languages with shared encoder-decoder and shared BPE vocabulary, with all Indic-language data converted to the same script.
18 | coastal | INDIC21 hi-en | 2021/05/04 05:41:33 | 6164 | 0.824040 | NMT | No | mT5 trained only on PMI.
19 | sakura | INDIC21 hi-en | 2021/05/04 13:13:06 | 6204 | 0.834538 | NMT | No | Pre-training multilingual mBART many-to-many model with training corpus, followed by fine-tuning on PMI parallel data.
20 | SRPOL | INDIC21 hi-en | 2021/05/04 15:25:38 | 6244 | 0.847064 | NMT | No | Ensemble of many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
21 | SRPOL | INDIC21 hi-en | 2021/05/04 16:30:25 | 6270 | 0.843456 | NMT | No | Many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
22 | IITP-MT | INDIC21 hi-en | 2021/05/04 17:49:10 | 6284 | 0.831265 | NMT | No | Many-to-one model trained on all training data with base Transformer. All Indic-language data is romanized. Model fine-tuned on back-translated PMI monolingual corpus.
23 | mcairt | INDIC21 hi-en | 2021/05/04 19:12:14 | 6333 | 0.832119 | NMT | No | Multilingual model (many-to-one) trained on all WAT 2021 data using base Transformer.
24 | NICT-5 | INDIC21 hi-en | 2021/06/21 11:55:03 | 6475 | 0.000000 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs.
25 | NICT-5 | INDIC21 hi-en | 2021/06/25 11:48:12 | 6495 | 0.000000 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs. MNMT model.
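
AM-FM (Banchs and Li, 2011) scores adequacy and fluency separately: the adequacy component (AM) compares source and translation in a cross-lingual continuous vector space, while the fluency component (FM) scores the output with an n-gram language model. The two are combined into the single AMFM value reported above; a weighted interpolation is one common formulation, sketched below, though the exact combination and weight used by WAT are assumptions here.

```latex
% AM-FM sketch: AM = cross-lingual adequacy score, FM = language-model
% fluency score, both scaled to [0, 1]. A weighted interpolation is one
% common way to combine them; the combination rule and lambda actually
% used by WAT's scorer are assumptions, not confirmed by this page.
\[
  \mathrm{AMFM} = \lambda \cdot \mathrm{AM} + (1 - \lambda) \cdot \mathrm{FM},
  \qquad \lambda \in [0, 1].
\]
```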


HUMAN (WAT2022)

No entries.

HUMAN (WAT2021)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description

1 | NICT-5 | INDIC21 hi-en | 2021/04/21 15:41:58 | 5278 | Underway | NMT | No | Pretrain mBART on IndicCorp and fine-tune on bilingual PMI data. Beam search. Model is bilingual.
2 | NICT-5 | INDIC21 hi-en | 2021/04/22 11:51:35 | 5353 | Underway | NMT | No | mBART+MNMT. Beam 4.
3 | gaurvar | INDIC21 hi-en | 2021/04/25 18:08:20 | 5532 | Underway | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
4 | gaurvar | INDIC21 hi-en | 2021/04/25 18:58:16 | 5567 | Underway | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages.
5 | sakura | INDIC21 hi-en | 2021/04/30 22:40:56 | 5872 | Underway | NMT | No | Fine-tuning of multilingual mBART many-to-many model with training corpus.
6 | NLPHut | INDIC21 hi-en | 2021/05/03 00:15:59 | 5985 | Underway | NMT | No | Transformer with source and target language tags trained using all languages' PMI data, then fine-tuned using all hi-en data.
7 | IIIT-H | INDIC21 hi-en | 2021/05/03 18:12:58 | 6017 | Underway | NMT | No | MNMT system (XX-En) trained by exploiting lexical similarity on the PMI+CVIT parallel corpus, then improved using back-translation on PMI monolingual data, followed by fine-tuning.
8 | CFILT | INDIC21 hi-en | 2021/05/04 01:11:55 | 6054 | Underway | NMT | No | Multilingual (many-to-one, XX-En) NMT model based on Transformer with shared encoder and decoder.
9 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:52:55 | 6115 | Underway | NMT | No | Multilingual NMT (many-to-one): Transformer-based model with shared encoder-decoder and shared BPE vocabulary, trained using all Indo-Aryan language data converted to the same script.
10 | CFILT-IITB | INDIC21 hi-en | 2021/05/04 01:57:40 | 6126 | Underway | NMT | No | Multilingual NMT (many-to-one): Transformer-based model trained using all Indo-Aryan languages with shared encoder-decoder and shared BPE vocabulary, with all Indic-language data converted to the same script.
11 | coastal | INDIC21 hi-en | 2021/05/04 05:41:33 | 6164 | Underway | NMT | No | mT5 trained only on PMI.
12 | SRPOL | INDIC21 hi-en | 2021/05/04 15:25:38 | 6244 | Underway | NMT | No | Ensemble of many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
13 | SRPOL | INDIC21 hi-en | 2021/05/04 16:30:25 | 6270 | Underway | NMT | No | Many-to-one on all data. Pretrained on BT, fine-tuned on PMI.
14 | IITP-MT | INDIC21 hi-en | 2021/05/04 17:49:10 | 6284 | Underway | NMT | No | Many-to-one model trained on all training data with base Transformer. All Indic-language data is romanized. Model fine-tuned on back-translated PMI monolingual corpus.
15 | mcairt | INDIC21 hi-en | 2021/05/04 19:12:14 | 6333 | Underway | NMT | No | Multilingual model (many-to-one) trained on all WAT 2021 data using base Transformer.


HUMAN (WAT2020)

No entries.

HUMAN (WAT2019)

No entries.

HUMAN (WAT2018)

No entries.

HUMAN (WAT2017)

No entries.

HUMAN (WAT2016)

No entries.

HUMAN (WAT2015)

No entries.

HUMAN (WAT2014)

No entries.

EVALUATION RESULTS USAGE POLICY

When you use the WAT evaluation results for any purpose, such as:
- writing technical papers,
- making presentations about your system, or
- advertising your MT system to customers,
you may use the information about translation directions, the scores (both automatic and human evaluations), and the rank of your system among the others. You may also use the scores of other systems, but you MUST anonymize those systems' names. In addition, you may show links (URLs) to the WAT evaluation result pages.

NICT (National Institute of Information and Communications Technology)
Kyoto University
Last Modified: 2018-08-02