
WAT

The Workshop on Asian Translation
Evaluation Results


BLEU


All entries below are for the ALT2en-my task. Scores fall under the "myseg" (Myanmar word segmentation) column; the other tokenization columns (juman, kytea, mecab, moses-tokenizer, stanford-segmenter-ctb, stanford-segmenter-pku, indic-tokenizer, unuse, kmseg) were empty and are omitted.

 # | Team      | Date/Time           | DataID | BLEU  | Method | Other Resources | System Description
 1 | FBAI      | 2019/07/27 14:38:04 | 3203   | 39.25 | NMT    | Yes | 5 model ensemble, (BT iter1 + Self-Training) + (BT+ST) iter2 + fine tuning + noisy channel
 2 | FBAI      | 2019/07/27 08:10:07 | 3149   | 36.77 | NMT    | No  | ensemble 5 models, (BT iter1 + Self-Training)
 3 | NICT-4    | 2019/07/26 11:33:55 | 2979   | 31.33 | Other  | Yes | Same as last year but with cleaner monolingual data
 4 | NICT      | 2019/07/22 17:15:42 | 2818   | 30.84 | NMT    | Yes | Single model + language model pre-training
 5 | NICT-4    | 2019/07/23 14:09:56 | 2855   | 30.79 | NMT    | Yes | -
 6 | UCSMNLP   | 2019/07/26 17:00:18 | 3021   | 28.20 | SMT    | Yes | -
 7 | NICT      | 2019/07/22 17:16:44 | 2819   | 25.88 | NMT    | No  | Single model + language model pre-training
 8 | ORGANIZER | 2019/07/22 19:12:36 | 2827   | 22.53 | NMT    | No  | NMT with Attention
 9 | UCSYNLP   | 2019/07/23 15:28:07 | 2858   | 20.86 | NMT    | No  | NMT with attention
10 | ORGANIZER | 2019/07/24 16:18:02 | 2900   | 20.63 | Other  | Yes | Online A
11 | sarah     | 2019/07/27 08:01:01 | 3146   | 19.94 | NMT    | No  | Ensemble. Transformer. 5 layers, 2 heads, random 0 + 0 + 1
12 | sarah     | 2019/07/27 08:05:39 | 3147   | 19.75 | NMT    | No  | Ensemble. Transformer. 5 layers, 2 heads, random 0 + 1
13 | sarah     | 2019/07/27 07:28:01 | 3137   | 19.64 | NMT    | No  | Single model. Transformer. 5 layers, 2 heads, random 0
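The BLEU scores above were produced by the official WAT evaluation server with its own segmentation scripts. Purely as an illustration of what the metric measures, here is a minimal, unofficial sketch of corpus-level BLEU over pre-tokenized sentences (the function name `corpus_bleu` and the single-reference simplification are this sketch's assumptions, not the WAT implementation):

```python
import math
from collections import Counter

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU (single reference per sentence):
    geometric mean of modified n-gram precisions (n=1..max_n)
    multiplied by a brevity penalty."""
    clipped = Counter()   # clipped n-gram matches per order
    totals = Counter()    # hypothesis n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
            ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            # Clip each hypothesis n-gram count at its reference count.
            clipped[n] += sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
            totals[n] += max(len(hyp) - n + 1, 0)
    if min(clipped[n] for n in range(1, max_n + 1)) == 0:
        return 0.0  # any zero precision collapses the geometric mean
    log_prec = sum(math.log(clipped[n] / totals[n]) for n in range(1, max_n + 1)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)
```

A hypothesis identical to its reference scores 100; any n-gram mismatch or length difference lowers the score, which is why the submissions above are separated by fractions of a point.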


RIBES


All entries below are for the ALT2en-my task. Scores fall under the "myseg" (Myanmar word segmentation) column; the other tokenization columns (juman, kytea, mecab, moses-tokenizer, stanford-segmenter-ctb, stanford-segmenter-pku, indic-tokenizer, unuse, kmseg) were empty and are omitted.

 # | Team      | Date/Time           | DataID | RIBES    | Method | Other Resources | System Description
 1 | FBAI      | 2019/07/27 14:38:04 | 3203   | 0.782472 | NMT    | Yes | 5 model ensemble, (BT iter1 + Self-Training) + (BT+ST) iter2 + fine tuning + noisy channel
 2 | FBAI      | 2019/07/27 08:10:07 | 3149   | 0.769902 | NMT    | No  | ensemble 5 models, (BT iter1 + Self-Training)
 3 | NICT      | 2019/07/22 17:15:42 | 2818   | 0.760247 | NMT    | Yes | Single model + language model pre-training
 4 | NICT-4    | 2019/07/23 14:09:56 | 2855   | 0.750353 | NMT    | Yes | -
 5 | NICT-4    | 2019/07/26 11:33:55 | 2979   | 0.732998 | Other  | Yes | Same as last year but with cleaner monolingual data
 6 | NICT      | 2019/07/22 17:16:44 | 2819   | 0.727299 | NMT    | No  | Single model + language model pre-training
 7 | sarah     | 2019/07/27 08:05:39 | 3147   | 0.719219 | NMT    | No  | Ensemble. Transformer. 5 layers, 2 heads, random 0 + 1
 8 | sarah     | 2019/07/27 08:01:01 | 3146   | 0.716243 | NMT    | No  | Ensemble. Transformer. 5 layers, 2 heads, random 0 + 0 + 1
 9 | sarah     | 2019/07/27 07:28:01 | 3137   | 0.711348 | NMT    | No  | Single model. Transformer. 5 layers, 2 heads, random 0
10 | UCSYNLP   | 2019/07/23 15:28:07 | 2858   | 0.698507 | NMT    | No  | NMT with attention
11 | ORGANIZER | 2019/07/24 16:18:02 | 2900   | 0.680398 | Other  | Yes | Online A
12 | ORGANIZER | 2019/07/22 19:12:36 | 2827   | 0.670375 | NMT    | No  | NMT with Attention
13 | UCSMNLP   | 2019/07/26 17:00:18 | 3021   | 0.596811 | SMT    | Yes | -


AMFM


All entries below are for the ALT2en-my task. Scores fall under the "myseg" (Myanmar word segmentation) column; the other tokenization columns (juman, kytea, mecab, moses-tokenizer, stanford-segmenter-ctb, stanford-segmenter-pku, indic-tokenizer, unuse, kmseg) were empty and are omitted.

 # | Team      | Date/Time           | DataID | AMFM     | Method | Other Resources | System Description
 1 | FBAI      | 2019/07/27 14:38:04 | 3203   | 0.789410 | NMT    | Yes | 5 model ensemble, (BT iter1 + Self-Training) + (BT+ST) iter2 + fine tuning + noisy channel
 2 | FBAI      | 2019/07/27 08:10:07 | 3149   | 0.782200 | NMT    | No  | ensemble 5 models, (BT iter1 + Self-Training)
 3 | NICT      | 2019/07/22 17:15:42 | 2818   | 0.757590 | NMT    | Yes | Single model + language model pre-training
 4 | NICT-4    | 2019/07/26 11:33:55 | 2979   | 0.756290 | Other  | Yes | Same as last year but with cleaner monolingual data
 5 | NICT-4    | 2019/07/23 14:09:56 | 2855   | 0.753200 | NMT    | Yes | -
 6 | NICT      | 2019/07/22 17:16:44 | 2819   | 0.746480 | NMT    | No  | Single model + language model pre-training
 7 | sarah     | 2019/07/27 08:01:01 | 3146   | 0.714740 | NMT    | No  | Ensemble. Transformer. 5 layers, 2 heads, random 0 + 0 + 1
 8 | sarah     | 2019/07/27 07:28:01 | 3137   | 0.713620 | NMT    | No  | Single model. Transformer. 5 layers, 2 heads, random 0
 9 | sarah     | 2019/07/27 08:05:39 | 3147   | 0.712580 | NMT    | No  | Ensemble. Transformer. 5 layers, 2 heads, random 0 + 1
10 | UCSMNLP   | 2019/07/26 17:00:18 | 3021   | 0.693420 | SMT    | Yes | -
11 | UCSYNLP   | 2019/07/23 15:28:07 | 2858   | 0.687260 | NMT    | No  | NMT with attention
12 | ORGANIZER | 2019/07/22 19:12:36 | 2827   | 0.661920 | NMT    | No  | NMT with Attention
13 | ORGANIZER | 2019/07/24 16:18:02 | 2900   | 0.622430 | Other  | Yes | Online A


HUMAN (WAT2019)


All entries below are for the ALT2en-my task.

# | Team    | Date/Time           | DataID | HUMAN    | Method | Other Resources | System Description
1 | NICT    | 2019/07/22 17:15:42 | 2818   | Underway | NMT    | Yes | Single model + language model pre-training
2 | UCSYNLP | 2019/07/23 15:28:07 | 2858   | Underway | NMT    | No  | NMT with attention
3 | NICT-4  | 2019/07/26 11:33:55 | 2979   | Underway | Other  | Yes | Same as last year but with cleaner monolingual data
4 | UCSMNLP | 2019/07/26 17:00:18 | 3021   | Underway | SMT    | Yes | -
5 | sarah   | 2019/07/27 07:28:01 | 3137   | Underway | NMT    | No  | Single model. Transformer. 5 layers, 2 heads, random 0
6 | sarah   | 2019/07/27 08:01:01 | 3146   | Underway | NMT    | No  | Ensemble. Transformer. 5 layers, 2 heads, random 0 + 0 + 1
7 | FBAI    | 2019/07/27 08:10:07 | 3149   | Underway | NMT    | No  | ensemble 5 models, (BT iter1 + Self-Training)
8 | FBAI    | 2019/07/27 14:38:04 | 3203   | Underway | NMT    | Yes | 5 model ensemble, (BT iter1 + Self-Training) + (BT+ST) iter2 + fine tuning + noisy channel


HUMAN (WAT2018)


# | Team | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no submissions)

HUMAN (WAT2017)


# | Team | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no submissions)

HUMAN (WAT2016)


# | Team | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no submissions)

HUMAN (WAT2015)


# | Team | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no submissions)

HUMAN (WAT2014)


# | Team | Date/Time | DataID | HUMAN | Method | Other Resources | System Description
(no submissions)

EVALUATION RESULTS USAGE POLICY

When you use the WAT evaluation results for any purpose, such as:
- writing technical papers,
- making presentations about your system,
- advertising your MT system to customers,
you may use the translation directions, the scores (both automatic and human evaluations), and the rank of your system among the others. You may also use the scores of the other systems, but you MUST anonymize those systems' names. In addition, you may show links (URLs) to the WAT evaluation result pages.

NICT (National Institute of Information and Communications Technology)
Kyoto University
Last Modified: 2018-08-02