WAT
The Workshop on Asian Translation
Evaluation Results
BLEU
Notice:
- This table is sorted by the leftmost segmenter's scores. To sort by a different segmenter, click its link.
RIBES
Notice:
- This table is sorted by the leftmost segmenter's scores. To sort by a different segmenter, click its link.
AMFM
Notice:
- This table is sorted by the leftmost segmenter's scores. To sort by a different segmenter, click its link.
- Adequacy-Fluency Metrics (AMFM) is a two-dimensional automatic evaluation metric for machine translation, designed to operate at the sentence level.
It decouples the semantic (adequacy) and syntactic (fluency) components of the translation process to provide a balanced view of translation quality.
- AMFM is computed without tokenization.
- AMFM is described in detail in the following paper:
"Adequacy–Fluency Metrics: Evaluating MT in the Continuous Space Model Framework"
[pdf].
The invited talk at WAT2015 is also helpful for understanding the metric [slide].
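The idea of scoring adequacy and fluency separately and then combining them can be sketched as follows. This is a minimal, hypothetical illustration only: the bag-of-words cosine stands in for the paper's continuous-space adequacy model, the character-bigram score stands in for a real fluency language model, and the weight `alpha` is an assumed parameter, not the official AMFM implementation.

```python
# Hypothetical sketch of the AMFM idea: a separate adequacy score and
# fluency score, combined with a weighted average. All scoring functions
# here are illustrative stand-ins, not the official AMFM components.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def adequacy(reference: str, hypothesis: str) -> float:
    # Semantic component: word-overlap cosine as a stand-in for the
    # continuous-space model used by AMFM.
    return cosine(Counter(reference.split()), Counter(hypothesis.split()))

def fluency(hypothesis: str) -> float:
    # Syntactic component: fraction of character bigrams seen in a tiny
    # sample corpus (a real system would use a language model).
    corpus = "the cat sat on the mat . the dog ran ."
    seen = {corpus[i:i + 2] for i in range(len(corpus) - 1)}
    bigrams = [hypothesis[i:i + 2] for i in range(len(hypothesis) - 1)]
    return sum(b in seen for b in bigrams) / len(bigrams) if bigrams else 0.0

def amfm(reference: str, hypothesis: str, alpha: float = 0.5) -> float:
    # Weighted combination of adequacy (AM) and fluency (FM); alpha is
    # an assumed interpolation weight.
    return alpha * adequacy(reference, hypothesis) + (1 - alpha) * fluency(hypothesis)
```

A perfect match scores 1.0 on both components, while an unrelated hypothesis scores near 0 on both; the two dimensions can also disagree, which is the point of keeping them separate.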
HUMAN (WAT2022)
Notice:
- HUMAN (WAT2022) is the result of the Pairwise Crowdsourcing Evaluation at WAT2022.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
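The majority-vote scheme described above can be sketched as follows. This is a hypothetical illustration of the voting step only; the label names ("win", "loss", "tie") and function name are assumptions, not the actual WAT evaluation code.

```python
# Sketch of a pairwise crowdsourcing decision: several workers each judge
# one system pair, and the most frequent judgement wins.
from collections import Counter

def pairwise_decision(judgements: list[str]) -> str:
    # Each worker labels the pair (e.g. "win", "loss", or "tie" for
    # system A against system B); the majority label is the decision.
    counts = Counter(judgements)
    label, _ = counts.most_common(1)[0]
    return label

# Five workers' judgements, as in the WAT2022 setting:
print(pairwise_decision(["win", "win", "tie", "win", "loss"]))  # → win
```

With five workers a strict majority always exists for a two-label split, which is one reason an odd number of judges is convenient.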
HUMAN (WAT2021)
Notice:
- HUMAN (WAT2021) is the result of the Pairwise Crowdsourcing Evaluation at WAT2021.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
HUMAN (WAT2020)
Notice:
- HUMAN (WAT2020) is the result of the Pairwise Crowdsourcing Evaluation at WAT2020.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
HUMAN (WAT2019)
Notice:
- HUMAN (WAT2019) is the result of the Pairwise Crowdsourcing Evaluation at WAT2019.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
HUMAN (WAT2018)
Notice:
- HUMAN (WAT2018) is the result of the Pairwise Crowdsourcing Evaluation at WAT2018.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
HUMAN (WAT2017)
Notice:
- HUMAN (WAT2017) is the result of the Pairwise Crowdsourcing Evaluation at WAT2017.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
HUMAN (WAT2016)
Notice:
- HUMAN (WAT2016) is the result of the Pairwise Crowdsourcing Evaluation at WAT2016.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
HUMAN (WAT2015)
Notice:
- HUMAN (WAT2015) is the result of the Pairwise Crowdsourcing Evaluation at WAT2015.
- Each translation was evaluated by five different workers, and the final decision was made by majority vote of their judgements.
- Details of the evaluation can be found in the accompanying document [pdf].
HUMAN (WAT2014)
Notice:
- HUMAN (WAT2014) is the result of the Pairwise Crowdsourcing Evaluation at WAT2014.
- Each translation was evaluated by three different workers, and the final decision was made by majority vote of their judgements.
- Details of the evaluation can be found in the accompanying document [pdf].
EVALUATION RESULTS USAGE POLICY
When you use the WAT evaluation results for any purpose, such as:
- writing technical papers,
- making presentations about your system, or
- advertising your MT system to customers,
you may use the translation directions, the scores (both automatic and
human evaluations), and the rank of your system among the others. You
may also use the scores of the other systems, but you MUST anonymize
the other systems' names. In addition, you may show links (URLs) to the
WAT evaluation result pages.
NICT (National Institute of Information and Communications Technology)
Kyoto University
Last Modified: 2018-08-02