
WAT

The Workshop on Asian Translation
Evaluation Results


BLEU


All BLEU scores in this table were computed with the indic-tokenizer; the other tokenizer columns of the original table (juman, kytea, mecab, moses-tokenizer, stanford-segmenter-ctb, stanford-segmenter-pku, unuse, myseg, kmseg) contain no entries for this task and are omitted below.

#  | Team         | Task          | Date/Time           | DataID | BLEU  | Method | Other Resources | System Description
---|--------------|---------------|---------------------|--------|-------|--------|-----------------|--------------------
1  | BITS-P       | MMCHMM23en-hi | 2023/07/08 13:44:39 | 7124   | 52.10 | NMT    | Yes             | NLLB model finetuned on captions + object tags of original & synthetic images using the DETR model
2  | Volta        | MMCHMM23en-hi | 2021/05/25 13:56:03 | 6430   | 51.60 | NMT    | Yes             | Finetuned mBART (used IITB for data augmentation) and added object tags to the input using Mask R-CNN
3  | ODIAGEN      | MMCHMM23en-hi | 2023/07/06 03:54:04 | 7106   | 42.80 | NMT    | No              | Image features extracted as object tags appended to the text, plus mBART fine-tuning
4  | CNLP-NITS-PP | MMCHMM23en-hi | 2022/07/11 12:39:25 | 6741   | 39.30 | NMT    | No              | Transliteration-based phrase-pair augmentation and visual features in training, using a BRNN encoder and doubly-attentive RNN decoder
5  | CNLP-NITS-PP | MMCHMM23en-hi | 2021/04/27 23:56:01 | 5730   | 39.28 | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) and visual features in training
6  | SILO_NLP     | MMCHMM23en-hi | 2022/07/18 17:33:08 | 6959   | 39.10 | NMT    | Yes             | Object tags (image) + Flickr8 dataset as an additional resource + finetuned mBART
7  | iitp         | MMCHMM23en-hi | 2021/05/01 23:25:56 | 5942   | 37.50 | NMT    | No              | Removed special characters at the start and end of each sentence; (1) pre-trained with HindEnCorp and trained with Visual Genome, (2) trained with Visual Genome only; selected the best of the two for each sentence according to translation
8  | CNLP-NITS    | MMCHMM23en-hi | 2020/09/18 15:52:35 | 3894   | 33.57 | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data, with visual features in training using a BRNN encoder and doubly-attentive RNN decoder
9  | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:35:40 | 4179   | 20.34 | NMT    | No              |
10 | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:39:32 | 4178   | 20.34 | NMT    | No              |
11 | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:47:44 | 4180   | 20.34 | NMT    | No              |
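
For context, the sketch below computes a corpus-level BLEU score with the sacrebleu Python library. It is a minimal illustration with placeholder sentences, not the WAT evaluation server's pipeline; in particular, the scores above were computed on indic-tokenizer output, so numbers from this sketch are not directly comparable to the table.

    # Minimal corpus-level BLEU sketch using sacrebleu (pip install sacrebleu).
    # The hypothesis/reference strings are placeholders, not WAT data.
    import sacrebleu

    hypotheses = [
        "a man is riding a horse",
        "two dogs play in the snow",
    ]
    # sacrebleu takes a list of reference streams; each stream holds one
    # reference per hypothesis.
    references = [[
        "a man rides a horse",
        "two dogs are playing in the snow",
    ]]

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU = {bleu.score:.2f}")  # score on the 0-100 scale used above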


RIBES


All RIBES scores in this table were computed with the indic-tokenizer; the other tokenizer columns of the original table (juman, kytea, mecab, moses-tokenizer, stanford-segmenter-ctb, stanford-segmenter-pku, unuse, myseg, kmseg) contain no entries for this task and are omitted below.

#  | Team         | Task          | Date/Time           | DataID | RIBES    | Method | Other Resources | System Description
---|--------------|---------------|---------------------|--------|----------|--------|-----------------|--------------------
1  | Volta        | MMCHMM23en-hi | 2021/05/25 13:56:03 | 6430   | 0.859645 | NMT    | Yes             | Finetuned mBART (used IITB for data augmentation) and added object tags to the input using Mask R-CNN
2  | BITS-P       | MMCHMM23en-hi | 2023/07/08 13:44:39 | 7124   | 0.853388 | NMT    | Yes             | NLLB model finetuned on captions + object tags of original & synthetic images using the DETR model
3  | ODIAGEN      | MMCHMM23en-hi | 2023/07/06 03:54:04 | 7106   | 0.815156 | NMT    | No              | Image features extracted as object tags appended to the text, plus mBART fine-tuning
4  | CNLP-NITS-PP | MMCHMM23en-hi | 2021/04/27 23:56:01 | 5730   | 0.792097 | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) and visual features in training
5  | CNLP-NITS-PP | MMCHMM23en-hi | 2022/07/11 12:39:25 | 6741   | 0.791468 | NMT    | No              | Transliteration-based phrase-pair augmentation and visual features in training, using a BRNN encoder and doubly-attentive RNN decoder
6  | iitp         | MMCHMM23en-hi | 2021/05/01 23:25:56 | 5942   | 0.790809 | NMT    | No              | Removed special characters at the start and end of each sentence; (1) pre-trained with HindEnCorp and trained with Visual Genome, (2) trained with Visual Genome only; selected the best of the two for each sentence according to translation
7  | SILO_NLP     | MMCHMM23en-hi | 2022/07/18 17:33:08 | 6959   | 0.784169 | NMT    | Yes             | Object tags (image) + Flickr8 dataset as an additional resource + finetuned mBART
8  | CNLP-NITS    | MMCHMM23en-hi | 2020/09/18 15:52:35 | 3894   | 0.754141 | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data, with visual features in training using a BRNN encoder and doubly-attentive RNN decoder
9  | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:35:40 | 4179   | 0.644230 | NMT    | No              |
10 | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:39:32 | 4178   | 0.644230 | NMT    | No              |
11 | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:47:44 | 4180   | 0.644230 | NMT    | No              |
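
RIBES (Rank-based Intuitive Bilingual Evaluation Score) rewards correct word order, which makes it informative for reordering-heavy pairs such as English-Hindi. As a reminder of what the numbers above measure, the standard definition from Isozaki et al. (2010), with the default exponents of the reference implementation, is:

    \mathrm{RIBES} = \mathrm{NKT} \cdot P^{\alpha} \cdot \mathrm{BP}^{\beta},
    \qquad \mathrm{NKT} = \frac{\tau + 1}{2},
    \qquad \alpha = 0.25, \; \beta = 0.10

where \tau is Kendall's tau computed over the ranks of hypothesis words aligned to the reference, P is unigram precision, and \mathrm{BP} is a BLEU-style brevity penalty. The exponent values are the defaults of the official RIBES script; the exact settings used by the WAT server are not stated on this page.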


AMFM


The original AMFM table's ten unnamed ("unuse") sub-columns are empty for this task and are omitted below.

#  | Team         | Task          | Date/Time           | DataID | AMFM     | Method | Other Resources | System Description
---|--------------|---------------|---------------------|--------|----------|--------|-----------------|--------------------
1  | Volta        | MMCHMM23en-hi | 2021/05/25 13:56:03 | 6430   | 0.877000 | NMT    | Yes             | Finetuned mBART (used IITB for data augmentation) and added object tags to the input using Mask R-CNN
2  | iitp         | MMCHMM23en-hi | 2021/05/01 23:25:56 | 5942   | 0.823429 | NMT    | No              | Removed special characters at the start and end of each sentence; (1) pre-trained with HindEnCorp and trained with Visual Genome, (2) trained with Visual Genome only; selected the best of the two for each sentence according to translation
3  | CNLP-NITS-PP | MMCHMM23en-hi | 2021/04/27 23:56:01 | 5730   | 0.817356 | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) and visual features in training
4  | CNLP-NITS    | MMCHMM23en-hi | 2020/09/18 15:52:35 | 3894   | 0.787320 | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data, with visual features in training using a BRNN encoder and doubly-attentive RNN decoder
5  | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:35:40 | 4179   | 0.669760 | NMT    | No              |
6  | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:39:32 | 4178   | 0.669760 | NMT    | No              |
7  | ORGANIZER    | MMCHMM23en-hi | 2020/11/07 01:47:44 | 4180   | 0.669760 | NMT    | No              |
8  | CNLP-NITS-PP | MMCHMM23en-hi | 2022/07/11 12:39:25 | 6741   | 0.000000 | NMT    | No              | Transliteration-based phrase-pair augmentation and visual features in training, using a BRNN encoder and doubly-attentive RNN decoder
9  | SILO_NLP     | MMCHMM23en-hi | 2022/07/18 17:33:08 | 6959   | 0.000000 | NMT    | Yes             | Object tags (image) + Flickr8 dataset as an additional resource + finetuned mBART
10 | ODIAGEN      | MMCHMM23en-hi | 2023/07/06 03:54:04 | 7106   | 0.000000 | NMT    | No              | Image features extracted as object tags appended to the text, plus mBART fine-tuning
11 | BITS-P       | MMCHMM23en-hi | 2023/07/08 13:44:39 | 7124   | 0.000000 | NMT    | Yes             | NLLB model finetuned on captions + object tags of original & synthetic images using the DETR model
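
AMFM is the adequacy-fluency metric of Banchs et al.: an adequacy component (AM) compares hypothesis and reference in a latent semantic space, while a fluency component (FM) scores the hypothesis with a target-language language model. As an illustration only (the combination rule and weight actually used by the WAT server are not stated on this page), a simple convex combination of the two components looks like:

    \mathrm{AMFM} = \lambda \cdot \mathrm{AM} + (1 - \lambda) \cdot \mathrm{FM},
    \qquad 0 \le \lambda \le 1

Note that the 0.000000 entries above all belong to submissions from 2022 and 2023; this pattern suggests the AMFM scorer was not run for those submissions rather than that they genuinely scored zero.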


HUMAN (WAT2022)


#  | Team         | Task          | Date/Time           | DataID | HUMAN    | Method | Other Resources | System Description
---|--------------|---------------|---------------------|--------|----------|--------|-----------------|--------------------
1  | CNLP-NITS-PP | MMCHMM23en-hi | 2022/07/11 12:39:25 | 6741   | Underway | NMT    | No              | Transliteration-based phrase-pair augmentation and visual features in training, using a BRNN encoder and doubly-attentive RNN decoder
2  | SILO_NLP     | MMCHMM23en-hi | 2022/07/18 17:33:08 | 6959   | Underway | NMT    | Yes             | Object tags (image) + Flickr8 dataset as an additional resource + finetuned mBART


HUMAN (WAT2021)


#  | Team         | Task          | Date/Time           | DataID | HUMAN    | Method | Other Resources | System Description
---|--------------|---------------|---------------------|--------|----------|--------|-----------------|--------------------
1  | CNLP-NITS-PP | MMCHMM23en-hi | 2021/04/27 23:56:01 | 5730   | Underway | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) and visual features in training
2  | iitp         | MMCHMM23en-hi | 2021/05/01 23:25:56 | 5942   | Underway | NMT    | No              | Removed special characters at the start and end of each sentence; (1) pre-trained with HindEnCorp and trained with Visual Genome, (2) trained with Visual Genome only; selected the best of the two for each sentence according to translation


HUMAN (WAT2020)


#  | Team         | Task          | Date/Time           | DataID | HUMAN    | Method | Other Resources | System Description
---|--------------|---------------|---------------------|--------|----------|--------|-----------------|--------------------
1  | CNLP-NITS    | MMCHMM23en-hi | 2020/09/18 15:52:35 | 3894   | Underway | NMT    | Yes             | Pretrained on monolingual data (IITB) using GloVe and fine-tuned with parallel data, with visual features in training using a BRNN encoder and doubly-attentive RNN decoder


HUMAN (WAT2014-WAT2019)

No human evaluation results were recorded for this task in WAT2014 through WAT2019.

EVALUATION RESULTS USAGE POLICY

When you use the WAT evaluation results for any purpose, such as:
- writing technical papers,
- giving presentations about your system, or
- advertising your MT system to customers,
you may use the translation directions, the scores (both automatic and human evaluations), and the rank of your system among the others. You may also cite the scores of the other systems, but you MUST anonymize their names. You may also link (by URL) to the WAT evaluation result pages.

NICT (National Institute of Information and Communications Technology)
Kyoto University
Last Modified: 2018-08-02