
WAT

The Workshop on Asian Translation
Evaluation Results


BLEU


All BLEU scores in this table are computed with the moses-tokenizer; the other segmenter columns (juman, kytea, mecab, stanford-segmenter-ctb, stanford-segmenter-pku, indic-tokenizer) are not used for the hi-en task.

# | Team | Task | Date/Time | DataID | BLEU | Method | Other Resources | System Description
1 | ORGANIZER | HINDEN hi-en | 2016/07/26 10:04:53 | 1031 | 21.37 | Other | Yes | Online A (2016)
2 | ORGANIZER | HINDEN hi-en | 2016/07/26 13:25:18 | 1048 | 15.58 | Other | Yes | Online B (2016)
3 | ORGANIZER | HINDEN hi-en | 2016/07/26 15:44:20 | 1054 | 10.32 | SMT | No | Phrase-based SMT
4 | IITP-MT | HINDEN hi-en | 2016/08/29 15:10:41 | 1289 | 9.62 | SMT | No | Hierarchical SMT
5 | XMUNLP | HINDEN hi-en | 2017/07/24 08:47:29 | 1427 | 13.30 | NMT | No | single nmt model
6 | XMUNLP | HINDEN hi-en | 2017/07/26 22:54:46 | 1488 | 20.61 | NMT | No | single nmt model + monolingual data
7 | XMUNLP | HINDEN hi-en | 2017/07/27 23:00:46 | 1511 | 22.44 | NMT | No | ensemble of 4 nmt models + monolingual data
8 | IITB-MTG | HINDEN hi-en | 2017/08/01 15:10:09 | 1726 | 11.55 | NMT | No | NMT with ensemble (last 3 + best validation)
9 | cvit | HINDEN hi-en | 2018/09/14 13:21:46 | 2331 | 20.63 | NMT | Yes | ConvS2S Model Uses External Data
10 | CUNI | HINDEN hi-en | 2018/09/15 03:10:30 | 2381 | 17.80 | NMT | No | Transformer big, transfer learning from CS-EN 1M steps, only original HI-EN, beam=8; alpha=0.8; averaging of last 8 models after 230k steps
11 | cvit | HINDEN hi-en | 2018/11/06 15:51:54 | 2563 | 21.04 | NMT | Yes |
12 | ORGANIZER | HINDEN hi-en | 2018/11/13 14:57:12 | 2567 | 15.44 | NMT | No | NMT with Attention
13 | cvit | HINDEN hi-en | 2019/03/15 01:23:17 | 2643 | 22.62 | NMT | Yes | massive-multi
14 | cvit | HINDEN hi-en | 2019/03/15 01:33:22 | 2645 | 20.66 | NMT | Yes | massive-multi + ft
15 | cvit | HINDEN hi-en | 2019/03/22 05:52:47 | 2658 | 10.76 | NMT | Yes | many to en (Transformer model) trained on WAT2018 data. Detokenized!
16 | cvit | HINDEN hi-en | 2019/05/27 16:04:36 | 2681 | 22.91 | NMT | Yes | massive-multi + bt
17 | NICT-5 | HINDEN hi-en | 2019/07/23 17:36:36 | 2865 | 19.06 | NMT | No | HiEn and TaEn mixed training NMT model. Transformer on t2t (Hi-En is external data)
18 | LTRC-MT | HINDEN hi-en | 2019/07/27 04:04:09 | 3117 | 16.32 | NMT | No | Transformer Baseline, Only IIT-B data
19 | LTRC-MT | HINDEN hi-en | 2019/07/27 04:49:36 | 3119 | 18.64 | NMT | No | Transformer Model with Backtranslation
20 | LTRC-MT | HINDEN hi-en | 2019/07/27 05:34:14 | 3121 | 17.44 | NMT | No | LSTM with attention, Backtranslation, Reinforcement Learning for 1 epoch
21 | LTRC-MT | HINDEN hi-en | 2019/07/27 05:58:33 | 3124 | 17.07 | NMT | No | LSTM with global attention & Backtranslation
22 | cvit | HINDEN hi-en | 2020/06/10 15:37:12 | 3418 | 23.83 | NMT | Yes | XX-to-EN model, uses PIB-V0 dataset
23 | cvit | HINDEN hi-en | 2020/07/06 02:29:11 | 3419 | 24.65 | NMT | Yes | XX-to-EN Model, uses PIB-V1 Data
24 | cvit | HINDEN hi-en | 2020/07/06 06:22:02 | 3422 | 21.94 | NMT | Yes | Multilingual model, mm-all-iter0
25 | cvit | HINDEN hi-en | 2020/07/06 06:38:14 | 3423 | 22.48 | NMT | Yes | Multilingual Model, Uses PIB-V0 data. (mm-all-iter1)
26 | cvit | HINDEN hi-en | 2020/07/10 04:28:17 | 3434 | 24.82 | NMT | Yes | xx-to-en model uses PIB-v2 data
27 | cvit | HINDEN hi-en | 2020/07/20 20:38:09 | 3441 | 24.85 | NMT | Yes | xx-en model, uses PIB-v2 data
28 | cvit | HINDEN hi-en | 2020/08/18 05:27:08 | 3446 | 25.26 | NMT | Yes |
29 | WT | HINDEN hi-en | 2020/09/03 18:12:32 | 3638 | 29.59 | NMT | No | Used 5M Back translation news crawl data to train. Method: Transformer NMT; Preprocessing: 1. Removed mixed language sentences 2. moses tokeniser for English and for Hindi indicnlp normaliser and toke
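
For readers who want to reproduce a comparable number, here is a minimal sketch of corpus-level BLEU scoring with the sacrebleu library. WAT's official pipeline tokenizes English output with the Moses tokenizer before scoring; sacrebleu's built-in "13a" (Moses-compatible) tokenizer is a close stand-in, not the exact official scorer, and the file names below are hypothetical placeholders.

```python
# Minimal sketch: corpus-level BLEU comparable to the 0-100 scores above.
# Assumes one detokenized sentence per line; file names are placeholders.
import sacrebleu

with open("system.hi-en.en", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("reference.en", encoding="utf-8") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes a list of reference streams (here, a single one).
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="13a")
print(f"BLEU = {bleu.score:.2f}")
```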


RIBES


All RIBES scores in this table are likewise computed on moses-tokenizer output; the other segmenter columns are not used for the hi-en task.

# | Team | Task | Date/Time | DataID | RIBES | Method | Other Resources | System Description
1 | ORGANIZER | HINDEN hi-en | 2016/07/26 10:04:53 | 1031 | 0.714537 | Other | Yes | Online A (2016)
2 | ORGANIZER | HINDEN hi-en | 2016/07/26 13:25:18 | 1048 | 0.683214 | Other | Yes | Online B (2016)
3 | ORGANIZER | HINDEN hi-en | 2016/07/26 15:44:20 | 1054 | 0.638090 | SMT | No | Phrase-based SMT
4 | IITP-MT | HINDEN hi-en | 2016/08/29 15:10:41 | 1289 | 0.628666 | SMT | No | Hierarchical SMT
5 | XMUNLP | HINDEN hi-en | 2017/07/24 08:47:29 | 1427 | 0.697707 | NMT | No | single nmt model
6 | XMUNLP | HINDEN hi-en | 2017/07/26 22:54:46 | 1488 | 0.743656 | NMT | No | single nmt model + monolingual data
7 | XMUNLP | HINDEN hi-en | 2017/07/27 23:00:46 | 1511 | 0.750921 | NMT | No | ensemble of 4 nmt models + monolingual data
8 | IITB-MTG | HINDEN hi-en | 2017/08/01 15:10:09 | 1726 | 0.682902 | NMT | No | NMT with ensemble (last 3 + best validation)
9 | cvit | HINDEN hi-en | 2018/09/14 13:21:46 | 2331 | 0.751883 | NMT | Yes | ConvS2S Model Uses External Data
10 | CUNI | HINDEN hi-en | 2018/09/15 03:10:30 | 2381 | 0.731727 | NMT | No | Transformer big, transfer learning from CS-EN 1M steps, only original HI-EN, beam=8; alpha=0.8; averaging of last 8 models after 230k steps
11 | cvit | HINDEN hi-en | 2018/11/06 15:51:54 | 2563 | 0.755941 | NMT | Yes |
12 | ORGANIZER | HINDEN hi-en | 2018/11/13 14:57:12 | 2567 | 0.718751 | NMT | No | NMT with Attention
13 | cvit | HINDEN hi-en | 2019/03/15 01:23:17 | 2643 | 0.766180 | NMT | Yes | massive-multi
14 | cvit | HINDEN hi-en | 2019/03/15 01:33:22 | 2645 | 0.758910 | NMT | Yes | massive-multi + ft
15 | cvit | HINDEN hi-en | 2019/03/22 05:52:47 | 2658 | 0.667353 | NMT | Yes | many to en (Transformer model) trained on WAT2018 data. Detokenized!
16 | cvit | HINDEN hi-en | 2019/05/27 16:04:36 | 2681 | 0.768324 | NMT | Yes | massive-multi + bt
17 | NICT-5 | HINDEN hi-en | 2019/07/23 17:36:36 | 2865 | 0.741197 | NMT | No | HiEn and TaEn mixed training NMT model. Transformer on t2t (Hi-En is external data)
18 | LTRC-MT | HINDEN hi-en | 2019/07/27 04:04:09 | 3117 | 0.729072 | NMT | No | Transformer Baseline, Only IIT-B data
19 | LTRC-MT | HINDEN hi-en | 2019/07/27 04:49:36 | 3119 | 0.735358 | NMT | No | Transformer Model with Backtranslation
20 | LTRC-MT | HINDEN hi-en | 2019/07/27 05:34:14 | 3121 | 0.735357 | NMT | No | LSTM with attention, Backtranslation, Reinforcement Learning for 1 epoch
21 | LTRC-MT | HINDEN hi-en | 2019/07/27 05:58:33 | 3124 | 0.729059 | NMT | No | LSTM with global attention & Backtranslation
22 | cvit | HINDEN hi-en | 2020/06/10 15:37:12 | 3418 | 0.770450 | NMT | Yes | XX-to-EN model, uses PIB-V0 dataset
23 | cvit | HINDEN hi-en | 2020/07/06 02:29:11 | 3419 | 0.774354 | NMT | Yes | XX-to-EN Model, uses PIB-V1 Data
24 | cvit | HINDEN hi-en | 2020/07/06 06:22:02 | 3422 | 0.763418 | NMT | Yes | Multilingual model, mm-all-iter0
25 | cvit | HINDEN hi-en | 2020/07/06 06:38:14 | 3423 | 0.766637 | NMT | Yes | Multilingual Model, Uses PIB-V0 data. (mm-all-iter1)
26 | cvit | HINDEN hi-en | 2020/07/10 04:28:17 | 3434 | 0.775515 | NMT | Yes | xx-to-en model uses PIB-v2 data
27 | cvit | HINDEN hi-en | 2020/07/20 20:38:09 | 3441 | 0.774830 | NMT | Yes | xx-en model, uses PIB-v2 data
28 | cvit | HINDEN hi-en | 2020/08/18 05:27:08 | 3446 | 0.777445 | NMT | Yes |
29 | WT | HINDEN hi-en | 2020/09/03 18:12:32 | 3638 | 0.792065 | NMT | No | Used 5M Back translation news crawl data to train. Method: Transformer NMT; Preprocessing: 1. Removed mixed language sentences 2. moses tokeniser for English and for Hindi indicnlp normaliser and toke
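
RIBES (Isozaki et al., 2010) measures global word-order correlation between hypothesis and reference, which n-gram metrics like BLEU largely miss for distant pairs such as Hindi-English; the corpus score is the average of per-sentence scores. Below is a simplified per-sentence sketch of the published form RIBES = NKT * P1^alpha * BP^beta with alpha = 0.25 and beta = 0.10. Unlike the official RIBES.py, which also disambiguates repeated words by context, this sketch only aligns words that occur exactly once in both sentences.

```python
# Simplified per-sentence RIBES sketch (not the official RIBES.py).
from collections import Counter
from itertools import combinations
from math import exp

def ribes_sentence(hyp: list[str], ref: list[str],
                   alpha: float = 0.25, beta: float = 0.10) -> float:
    # Align only words that are unique in both hypothesis and reference.
    ref_pos = {w: i for i, w in enumerate(ref) if ref.count(w) == 1}
    order = [ref_pos[w] for w in hyp if hyp.count(w) == 1 and w in ref_pos]
    if len(order) < 2:
        return 0.0
    # With no ties, concordant/total equals the usual (tau + 1) / 2.
    pairs = list(combinations(order, 2))
    nkt = sum(1 for a, b in pairs if a < b) / len(pairs)
    ref_counts = Counter(ref)
    matched = sum(min(c, ref_counts[w]) for w, c in Counter(hyp).items())
    p1 = matched / len(hyp)                        # clipped unigram precision
    bp = min(1.0, exp(1.0 - len(ref) / len(hyp)))  # brevity penalty
    return nkt * (p1 ** alpha) * (bp ** beta)

ref = "the cat sat on the mat".split()
print(ribes_sentence(ref, ref))                           # 1.0: perfect match
print(ribes_sentence("the mat sat on the cat".split(), ref))  # low: bad order
```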


AMFM


# | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description
1 | ORGANIZER | HINDEN hi-en | 2016/07/26 10:04:53 | 1031 | 0.621100 | Other | Yes | Online A (2016)
2 | ORGANIZER | HINDEN hi-en | 2016/07/26 13:25:18 | 1048 | 0.590520 | Other | Yes | Online B (2016)
3 | ORGANIZER | HINDEN hi-en | 2016/07/26 15:44:20 | 1054 | 0.574850 | SMT | No | Phrase-based SMT
4 | IITP-MT | HINDEN hi-en | 2016/08/29 15:10:41 | 1289 | 0.567370 | SMT | No | Hierarchical SMT
5 | XMUNLP | HINDEN hi-en | 2017/07/24 08:47:29 | 1427 | 0.568010 | NMT | No | single nmt model
6 | XMUNLP | HINDEN hi-en | 2017/07/26 22:54:46 | 1488 | 0.627190 | NMT | No | single nmt model + monolingual data
7 | XMUNLP | HINDEN hi-en | 2017/07/27 23:00:46 | 1511 | 0.629530 | NMT | No | ensemble of 4 nmt models + monolingual data
8 | IITB-MTG | HINDEN hi-en | 2017/08/01 15:10:09 | 1726 | 0.557040 | NMT | No | NMT with ensemble (last 3 + best validation)
9 | cvit | HINDEN hi-en | 2018/09/14 13:21:46 | 2331 | 0.623240 | NMT | Yes | ConvS2S Model Uses External Data
10 | CUNI | HINDEN hi-en | 2018/09/15 03:10:30 | 2381 | 0.611090 | NMT | No | Transformer big, transfer learning from CS-EN 1M steps, only original HI-EN, beam=8; alpha=0.8; averaging of last 8 models after 230k steps
11 | cvit | HINDEN hi-en | 2018/11/06 15:51:54 | 2563 | 0.628600 | NMT | Yes |
12 | ORGANIZER | HINDEN hi-en | 2018/11/13 14:57:12 | 2567 | 0.586360 | NMT | No | NMT with Attention
13 | cvit | HINDEN hi-en | 2019/03/15 01:23:17 | 2643 | 0.637230 | NMT | Yes | massive-multi
14 | cvit | HINDEN hi-en | 2019/03/15 01:33:22 | 2645 | 0.631250 | NMT | Yes | massive-multi + ft
15 | cvit | HINDEN hi-en | 2019/03/22 05:52:47 | 2658 | 0.554700 | NMT | Yes | many to en (Transformer model) trained on WAT2018 data. Detokenized!
16 | cvit | HINDEN hi-en | 2019/05/27 16:04:36 | 2681 | 0.641730 | NMT | Yes | massive-multi + bt
17 | NICT-5 | HINDEN hi-en | 2019/07/23 17:36:36 | 2865 | 0.566490 | NMT | No | HiEn and TaEn mixed training NMT model. Transformer on t2t (Hi-En is external data)
18 | LTRC-MT | HINDEN hi-en | 2019/07/27 04:04:09 | 3117 | 0.563590 | NMT | No | Transformer Baseline, Only IIT-B data
19 | LTRC-MT | HINDEN hi-en | 2019/07/27 04:49:36 | 3119 | 0.594770 | NMT | No | Transformer Model with Backtranslation
20 | LTRC-MT | HINDEN hi-en | 2019/07/27 05:34:14 | 3121 | 0.594550 | NMT | No | LSTM with attention, Backtranslation, Reinforcement Learning for 1 epoch
21 | LTRC-MT | HINDEN hi-en | 2019/07/27 05:58:33 | 3124 | 0.587060 | NMT | No | LSTM with global attention & Backtranslation
22 | cvit | HINDEN hi-en | 2020/06/10 15:37:12 | 3418 | 0.601810 | NMT | Yes | XX-to-EN model, uses PIB-V0 dataset
23 | cvit | HINDEN hi-en | 2020/07/06 02:29:11 | 3419 | 0.609500 | NMT | Yes | XX-to-EN Model, uses PIB-V1 Data
24 | cvit | HINDEN hi-en | 2020/07/06 06:22:02 | 3422 | 0.596650 | NMT | Yes | Multilingual model, mm-all-iter0
25 | cvit | HINDEN hi-en | 2020/07/06 06:38:14 | 3423 | 0.596890 | NMT | Yes | Multilingual Model, Uses PIB-V0 data. (mm-all-iter1)
26 | cvit | HINDEN hi-en | 2020/07/10 04:28:17 | 3434 | 0.610650 | NMT | Yes | xx-to-en model uses PIB-v2 data
27 | cvit | HINDEN hi-en | 2020/07/20 20:38:09 | 3441 | 0.610910 | NMT | Yes | xx-en model, uses PIB-v2 data
28 | cvit | HINDEN hi-en | 2020/08/18 05:27:08 | 3446 | 0.614060 | NMT | Yes |
29 | WT | HINDEN hi-en | 2020/09/03 18:12:32 | 3638 | 0.637410 | NMT | No | Used 5M Back translation news crawl data to train. Method: Transformer NMT; Preprocessing: 1. Removed mixed language sentences 2. moses tokeniser for English and for Hindi indicnlp normaliser and toke
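
AM-FM (Banchs et al.) combines an adequacy metric (AM), the similarity of hypothesis and reference in a latent semantic space trained on monolingual target-side text, with a fluency metric (FM) derived from a language-model score. The sketch below illustrates only the idea: the toy corpus, the two-dimensional SVD space, the fixed FM value, and the interpolation weight lam = 0.5 are illustrative assumptions, not WAT's actual configuration.

```python
# Rough sketch of the AM-FM idea; not WAT's official implementation.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy target-side corpus used to train the latent semantic space (AM side).
corpus = [
    "the parliament approved the bill",
    "the committee passed the new law",
    "the cat sat on the mat",
]
vec = TfidfVectorizer().fit(corpus)
svd = TruncatedSVD(n_components=2, random_state=0).fit(vec.transform(corpus))

def adequacy(hyp: str, ref: str) -> float:
    # Cosine similarity of hypothesis and reference in the latent space.
    h, r = svd.transform(vec.transform([hyp, ref]))
    return float(np.dot(h, r) / (np.linalg.norm(h) * np.linalg.norm(r) + 1e-9))

def amfm(am: float, fm: float, lam: float = 0.5) -> float:
    # FM should be a language-model score of the hypothesis scaled to [0, 1];
    # a fixed placeholder value is used in the demo call below.
    return lam * am + (1 - lam) * fm

am = adequacy("the parliament passed the law",
              "the parliament approved the bill")
print(f"AM = {am:.3f}, AMFM = {amfm(am, fm=0.8):.3f}")
```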


HUMAN (WAT2022)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no entries)


HUMAN (WAT2021)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no entries)


HUMAN (WAT2020)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
1 | WT | HINDEN hi-en | 2020/09/03 18:12:32 | 3638 | 3.720 | NMT | No | Used 5M Back translation news crawl data to train. Method: Transformer NMT; Preprocessing: 1. Removed mixed language sentences 2. moses tokeniser for English and for Hindi indicnlp normaliser and toke


HUMAN (WAT2019)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
1 | cvit | HINDEN hi-en | 2019/05/27 16:04:36 | 2681 | Underway | NMT | Yes | massive-multi + bt
2 | NICT-5 | HINDEN hi-en | 2019/07/23 17:36:36 | 2865 | Underway | NMT | No | HiEn and TaEn mixed training NMT model. Transformer on t2t (Hi-En is external data)
3 | LTRC-MT | HINDEN hi-en | 2019/07/27 04:49:36 | 3119 | Underway | NMT | No | Transformer Model with Backtranslation


HUMAN (WAT2018)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
1 | cvit | HINDEN hi-en | 2018/09/14 13:21:46 | 2331 | 72.250 | NMT | Yes | ConvS2S Model Uses External Data
2 | CUNI | HINDEN hi-en | 2018/09/15 03:10:30 | 2381 | 67.250 | NMT | No | Transformer big, transfer learning from CS-EN 1M steps, only original HI-EN, beam=8; alpha=0.8; averaging of last 8 models after 230k steps


HUMAN (WAT2017)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
1 | XMUNLP | HINDEN hi-en | 2017/07/27 23:00:46 | 1511 | 68.250 | NMT | No | ensemble of 4 nmt models + monolingual data
2 | IITB-MTG | HINDEN hi-en | 2017/08/01 15:10:09 | 1726 | 21.000 | NMT | No | NMT with ensemble (last 3 + best validation)


HUMAN (WAT2016)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
1 | ORGANIZER | HINDEN hi-en | 2016/07/26 10:04:53 | 1031 | 44.750 | Other | Yes | Online A (2016)
2 | ORGANIZER | HINDEN hi-en | 2016/07/26 13:25:18 | 1048 | 14.000 | Other | Yes | Online B (2016)


HUMAN (WAT2015)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no entries)


HUMAN (WAT2014)


# | Team | Task | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no entries)


EVALUATION RESULTS USAGE POLICY

When you use the WAT evaluation results for any purpose, such as:
- writing technical papers,
- giving presentations about your system, or
- advertising your MT system to customers,
you may use the translation directions, the scores (both automatic and human evaluations), and the rank of your system among the others. You may also cite the scores of other systems, but you MUST anonymize those systems' names. In addition, you may link (URL) to the WAT evaluation result pages.
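
As a practical sketch of the anonymization rule above, the hypothetical helper below keeps your own team's name and assigns anonymous labels to every other system before scores are published. The team names and scores in the demo call come from the BLEU table on this page, except "MyTeam" and its score, which are invented.

```python
# Hypothetical helper: anonymize all team names except your own.
def anonymize(results: list[tuple[str, float]], own_team: str):
    labels: dict[str, str] = {}
    out = []
    for team, score in results:
        if team != own_team:
            # Assign "System A", "System B", ... on first occurrence.
            labels.setdefault(team, f"System {chr(ord('A') + len(labels))}")
            team = labels[team]
        out.append((team, score))
    return out

print(anonymize([("WT", 29.59), ("cvit", 25.26), ("MyTeam", 24.00)],
                own_team="MyTeam"))
# [('System A', 29.59), ('System B', 25.26), ('MyTeam', 24.0)]
```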

NICT (National Institute of Information and Communications Technology)
Kyoto University
Last Modified: 2018-08-02