
WAT

The Workshop on Asian Translation
Evaluation Results


BLEU


All BLEU scores for this task are reported on output segmented with the indic-tokenizer; the remaining tokenizer columns of the original table (juman, kytea, mecab, moses-tokenizer, stanford-segmenter-ctb, stanford-segmenter-pku, unuse, myseg, kmseg) contain no scores and are omitted below.

| # | Team | Task | Date/Time | DataID | BLEU (indic-tokenizer) | Method | Other Resources | System Description |
|---|------|------|-----------|--------|------------------------|--------|-----------------|---------------------|
| 1 | 00-7 | MMCHTEXT24en-hi | 2024/08/09 18:03:29 | 7313 | 54.10 | NMT | No | |
| 2 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/03 02:56:51 | 7088 | 53.60 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 3 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/06 12:31:36 | 7110 | 53.10 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 4 | Volta | MMCHTEXT24en-hi | 2021/05/25 13:55:36 | 6429 | 51.66 | NMT | Yes | Fine-tuned mBART (used the IITB corpus for data augmentation) |
| 5 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/12 23:38:25 | 7358 | 44.10 | Other | No | LLM-based (fine-tuned Mistral 7B) |
| 6 | nlp_novices | MMCHTEXT24en-hi | 2022/07/10 22:51:34 | 6725 | 41.80 | NMT | Yes | Fine-tuned Transformers additionally on OPUS corpora |
| 7 | ODIANLP | MMCHTEXT24en-hi | 2020/09/14 19:07:05 | 3713 | 38.50 | NMT | Yes | Transformer model (used IITB as an additional training resource) |
| 8 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2022/07/11 12:43:47 | 6742 | 37.20 | NMT | No | Transliteration-based phrase-pair augmentation in training using BRNN-based NMT |
| 9 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2021/04/28 00:15:03 | 5732 | 37.16 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) using a BRNN model |
| 10 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 22:57:15 | 7349 | 35.90 | NMT | No | En-Hi system trained using back-translated data from Flickr |
| 11 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:12:48 | 4188 | 30.72 | NMT | No | |
| 12 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:15:08 | 4189 | 30.72 | NMT | No | |
| 13 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:18:17 | 4190 | 30.72 | NMT | No | |
| 14 | SILO_NLP | MMCHTEXT24en-hi | 2022/07/12 04:15:44 | 6838 | 29.60 | NMT | No | Fine-tuning with the pre-trained mBART-50 model |
| 15 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 23:00:10 | 7350 | 29.20 | NMT | No | Baseline text-only model trained only on the provided data |
| 16 | ORGANIZER | MMCHTEXT24en-hi | 2020/08/27 21:08:36 | 3587 | 28.35 | NMT | No | Transformer base model |
| 17 | CNLP-NITS | MMCHTEXT24en-hi | 2020/09/18 16:12:32 | 3898 | 27.75 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data using a BRNN model |
| 18 | | MMCHTEXT24en-hi | 2020/09/19 06:17:42 | 4031 | 20.52 | NMT | No | For the text-only English-Hindi translation task, we use an adaptation of the NMT-Keras code to suit our problem. The focus is on long-term translation as well as active learning strategies. The t… |
| 19 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/15 17:16:06 | 7402 | 1.10 | Other | No | LLM-based (fine-tuned multimodal LLaVA model for a region-specific instruction set in Hindi) |
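The scores above can be reproduced, at least approximately, outside the official WAT pipeline. Below is a minimal sketch assuming the sacrebleu and indic-nlp-library Python packages; the file names are placeholders, and the organizers' exact tokenization and BLEU settings may differ.

```python
# Minimal sketch (not the official WAT evaluation script): corpus-level BLEU
# for English-to-Hindi output, with Indic segmentation applied before scoring.
# Assumes `pip install sacrebleu indic-nlp-library`; file names are placeholders.
import sacrebleu
from indicnlp.tokenize import indic_tokenize

def tokenize_hi(line: str) -> str:
    """Segment a Hindi sentence with the Indic NLP trivial tokenizer."""
    return " ".join(indic_tokenize.trivial_tokenize(line.strip(), lang="hi"))

with open("system_output.hi", encoding="utf-8") as f:
    hypotheses = [tokenize_hi(line) for line in f]
with open("reference.hi", encoding="utf-8") as f:
    references = [tokenize_hi(line) for line in f]

# Segmentation is already applied, so sacreBLEU's own tokenizer is disabled.
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="none")
print(f"BLEU = {bleu.score:.2f}")
```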


RIBES


As with BLEU, all RIBES scores are reported on output segmented with the indic-tokenizer; the unused tokenizer columns of the original table are omitted below.

| # | Team | Task | Date/Time | DataID | RIBES (indic-tokenizer) | Method | Other Resources | System Description |
|---|------|------|-----------|--------|--------------------------|--------|-----------------|---------------------|
| 1 | 00-7 | MMCHTEXT24en-hi | 2024/08/09 18:03:29 | 7313 | 0.858322 | NMT | No | |
| 2 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/03 02:56:51 | 7088 | 0.858033 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 3 | Volta | MMCHTEXT24en-hi | 2021/05/25 13:55:36 | 6429 | 0.855410 | NMT | Yes | Fine-tuned mBART (used the IITB corpus for data augmentation) |
| 4 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/06 12:31:36 | 7110 | 0.854334 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 5 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/12 23:38:25 | 7358 | 0.815457 | Other | No | LLM-based (fine-tuned Mistral 7B) |
| 6 | nlp_novices | MMCHTEXT24en-hi | 2022/07/10 22:51:34 | 6725 | 0.812500 | NMT | Yes | Fine-tuned Transformers additionally on OPUS corpora |
| 7 | ODIANLP | MMCHTEXT24en-hi | 2020/09/14 19:07:05 | 3713 | 0.785252 | NMT | Yes | Transformer model (used IITB as an additional training resource) |
| 8 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2022/07/11 12:43:47 | 6742 | 0.770640 | NMT | No | Transliteration-based phrase-pair augmentation in training using BRNN-based NMT |
| 9 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2021/04/28 00:15:03 | 5732 | 0.770621 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) using a BRNN model |
| 10 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 22:57:15 | 7349 | 0.762839 | NMT | No | En-Hi system trained using back-translated data from Flickr |
| 11 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:12:48 | 4188 | 0.736262 | NMT | No | |
| 12 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:15:08 | 4189 | 0.736262 | NMT | No | |
| 13 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:18:17 | 4190 | 0.736262 | NMT | No | |
| 14 | SILO_NLP | MMCHTEXT24en-hi | 2022/07/12 04:15:44 | 6838 | 0.728801 | NMT | No | Fine-tuning with the pre-trained mBART-50 model |
| 15 | CNLP-NITS | MMCHTEXT24en-hi | 2020/09/18 16:12:32 | 3898 | 0.714980 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data using a BRNN model |
| 16 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 23:00:10 | 7350 | 0.709336 | NMT | No | Baseline text-only model trained only on the provided data |
| 17 | ORGANIZER | MMCHTEXT24en-hi | 2020/08/27 21:08:36 | 3587 | 0.691315 | NMT | No | Transformer base model |
| 18 | | MMCHTEXT24en-hi | 2020/09/19 06:17:42 | 4031 | 0.623644 | NMT | No | For the text-only English-Hindi translation task, we use an adaptation of the NMT-Keras code to suit our problem. The focus is on long-term translation as well as active learning strategies. The t… |
| 19 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/15 17:16:06 | 7402 | 0.151195 | Other | No | LLM-based (fine-tuned multimodal LLaVA model for a region-specific instruction set in Hindi) |
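For reference, RIBES (Isozaki et al., 2010) measures word-order agreement between a hypothesis and its reference. The formula below is the form used by the reference implementation (RIBES.py) with its default weights; whether the scores above were computed with exactly these settings is an assumption, as it is not stated on this page.

$\mathrm{RIBES} = \mathrm{NKT} \cdot P_1^{\alpha} \cdot \mathrm{BP}^{\beta}, \qquad \mathrm{NKT} = \frac{\tau + 1}{2}$

where $P_1$ is unigram precision, $\mathrm{BP}$ is the brevity penalty, $\tau$ is Kendall's rank correlation over the positions of aligned words, and the default weights are $\alpha = 0.25$ and $\beta = 0.10$.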


AMFM


All AM-FM scores appear in a single column; the ten unlabeled ("unuse") sub-columns of the original layout are omitted below.

| # | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description |
|---|------|------|-----------|--------|------|--------|-----------------|---------------------|
| 1 | Volta | MMCHTEXT24en-hi | 2021/05/25 13:55:36 | 6429 | 0.876300 | NMT | Yes | Fine-tuned mBART (used the IITB corpus for data augmentation) |
| 2 | ODIANLP | MMCHTEXT24en-hi | 2020/09/14 19:07:05 | 3713 | 0.804880 | NMT | Yes | Transformer model (used IITB as an additional training resource) |
| 3 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2021/04/28 00:15:03 | 5732 | 0.797409 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) using a BRNN model |
| 4 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:12:48 | 4188 | 0.773230 | NMT | No | |
| 5 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:15:08 | 4189 | 0.773230 | NMT | No | |
| 6 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:18:17 | 4190 | 0.773230 | NMT | No | |
| 7 | CNLP-NITS | MMCHTEXT24en-hi | 2020/09/18 16:12:32 | 3898 | 0.750320 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data using a BRNN model |
| 8 | ORGANIZER | MMCHTEXT24en-hi | 2020/08/27 21:08:36 | 3587 | 0.727010 | NMT | No | Transformer base model |
| 9 | | MMCHTEXT24en-hi | 2020/09/19 06:17:42 | 4031 | 0.698600 | NMT | No | For the text-only English-Hindi translation task, we use an adaptation of the NMT-Keras code to suit our problem. The focus is on long-term translation as well as active learning strategies. The t… |
| 10 | nlp_novices | MMCHTEXT24en-hi | 2022/07/10 22:51:34 | 6725 | 0.000000 | NMT | Yes | Fine-tuned Transformers additionally on OPUS corpora |
| 11 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2022/07/11 12:43:47 | 6742 | 0.000000 | NMT | No | Transliteration-based phrase-pair augmentation in training using BRNN-based NMT |
| 12 | SILO_NLP | MMCHTEXT24en-hi | 2022/07/12 04:15:44 | 6838 | 0.000000 | NMT | No | Fine-tuning with the pre-trained mBART-50 model |
| 13 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/03 02:56:51 | 7088 | 0.000000 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 14 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/06 12:31:36 | 7110 | 0.000000 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 15 | 00-7 | MMCHTEXT24en-hi | 2024/08/09 18:03:29 | 7313 | 0.000000 | NMT | No | |
| 16 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 22:57:15 | 7349 | 0.000000 | NMT | No | En-Hi system trained using back-translated data from Flickr |
| 17 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 23:00:10 | 7350 | 0.000000 | NMT | No | Baseline text-only model trained only on the provided data |
| 18 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/12 23:38:25 | 7358 | 0.000000 | Other | No | LLM-based (fine-tuned Mistral 7B) |
| 19 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/15 17:16:06 | 7402 | 0.000000 | Other | No | LLM-based (fine-tuned multimodal LLaVA model for a region-specific instruction set in Hindi) |
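For context, AM-FM (Banchs et al., 2015) combines an adequacy component (AM), obtained by comparing source and translation in a cross-lingual continuous vector space, with a fluency component (FM) derived from an n-gram language model. A common way of combining the two is a weighted interpolation, sketched below; the weight $\lambda$ used for the scores above is not given on this page.

$\mathrm{AMFM} = \lambda \cdot \mathrm{AM} + (1 - \lambda) \cdot \mathrm{FM}, \qquad 0 \le \lambda \le 1$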


HUMAN (WAT2022)


| # | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description |
|---|------|------|-----------|--------|-------|--------|-----------------|---------------------|
| 1 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2022/07/11 12:43:47 | 6742 | 3.078 | NMT | No | Transliteration-based phrase-pair augmentation in training using BRNN-based NMT |
| 2 | SILO_NLP | MMCHTEXT24en-hi | 2022/07/12 04:15:44 | 6838 | 2.715 | NMT | No | Fine-tuning with the pre-trained mBART-50 model |
| 3 | nlp_novices | MMCHTEXT24en-hi | 2022/07/10 22:51:34 | 6725 | Underway | NMT | Yes | Fine-tuned Transformers additionally on OPUS corpora |


HUMAN (WAT2021)


| # | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description |
|---|------|------|-----------|--------|-------|--------|-----------------|---------------------|
| 1 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2021/04/28 00:15:03 | 5732 | Underway | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) using a BRNN model |


HUMAN (WAT2020)


| # | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description |
|---|------|------|-----------|--------|-------|--------|-----------------|---------------------|
| 1 | ODIANLP | MMCHTEXT24en-hi | 2020/09/14 19:07:05 | 3713 | Underway | NMT | Yes | Transformer model (used IITB as an additional training resource) |
| 2 | CNLP-NITS | MMCHTEXT24en-hi | 2020/09/18 16:12:32 | 3898 | Underway | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data using a BRNN model |
| 3 | | MMCHTEXT24en-hi | 2020/09/19 06:17:42 | 4031 | Underway | NMT | No | For the text-only English-Hindi translation task, we use an adaptation of the NMT-Keras code to suit our problem. The focus is on long-term translation as well as active learning strategies. The t… |


HUMAN (WAT2019)


No submissions for this task.


HUMAN (WAT2018)


No submissions for this task.


HUMAN (WAT2017)


No submissions for this task.


HUMAN (WAT2016)


No submissions for this task.


HUMAN (WAT2015)


No submissions for this task.


HUMAN (WAT2014)


No submissions for this task.


EVALUATION RESULTS USAGE POLICY

When you use the WAT evaluation results for any purpose, such as:
- writing technical papers,
- making presentations about your system, or
- advertising your MT system to customers,
you may use the information about translation directions, the scores (both automatic and human evaluations), and the rank of your system among the others. You may also use the scores of other systems, but you MUST anonymize those systems' names. In addition, you may show links (URLs) to the WAT evaluation result pages.

NICT (National Institute of Information and Communications Technology)
Kyoto University
Last Modified: 2018-08-02