
WAT

The Workshop on Asian Translation
Evaluation Results


EVALUATION RESULTS

+ Document-level translation tasks:
[ASPEC]
Task: ASPECen-ja [Evaluation Result]
Task: ASPECja-en [Evaluation Result]
Task: ASPECzh-ja [Evaluation Result]
Task: ASPECja-zh [Evaluation Result]
[BSD Corpus (Document-level Business Scene Dialogue Translation)]**
Task: BSDen-ja [Evaluation Result]
Task: BSDja-en [Evaluation Result]
[JIJI Corpus]
Task: JIJIen-ja [Evaluation Result]
Task: JIJIja-en [Evaluation Result]
Task: JIJICen-ja [Evaluation Result]
Task: JIJICja-en [Evaluation Result]
[NICT-SAP (Multilingual Multi-domain evaluation task: Unstructured document translation (IT tasks 2022))]**
Task: SOFTWARE22en-hi [Evaluation Result]
Task: SOFTWARE22hi-en [Evaluation Result]
Task: SOFTWARE22en-id [Evaluation Result]
Task: SOFTWARE22id-en [Evaluation Result]
Task: SOFTWARE22en-ja [Evaluation Result]
Task: SOFTWARE22ja-en [Evaluation Result]
Task: SOFTWARE22en-ko [Evaluation Result]
Task: SOFTWARE22ko-en [Evaluation Result]
Task: SOFTWARE22en-ms [Evaluation Result]
Task: SOFTWARE22ms-en [Evaluation Result]
Task: SOFTWARE22en-th [Evaluation Result]
Task: SOFTWARE22th-en [Evaluation Result]
Task: SOFTWARE22en-vi [Evaluation Result]
Task: SOFTWARE22vi-en [Evaluation Result]
Task: SOFTWARE22en-zf [Evaluation Result]
Task: SOFTWARE22zf-en [Evaluation Result]
Task: SOFTWARE22en-zh [Evaluation Result]
Task: SOFTWARE22zh-en [Evaluation Result]
[NICT-SAP (Multilingual Multi-domain evaluation task: Structured document translation (IT tasks 2022))]** ***
Task: SWSTR22en-ja [Evaluation Result]
Task: SWSTR22ja-en [Evaluation Result]
Task: SWSTR22en-ko [Evaluation Result]
Task: SWSTR22ko-en [Evaluation Result]
Task: SWSTR22en-zf [Evaluation Result]
Task: SWSTR22zf-en [Evaluation Result]
Task: SWSTR22en-zh [Evaluation Result]
Task: SWSTR22zh-en [Evaluation Result]
[NICT-SAP (Multilingual Multi-domain evaluation task (IT tasks 2021))]
Task: SOFTWAREen-hi [Evaluation Result]
Task: SOFTWAREhi-en [Evaluation Result]
Task: SOFTWAREen-id [Evaluation Result]
Task: SOFTWAREid-en [Evaluation Result]
Task: SOFTWAREen-ms [Evaluation Result]
Task: SOFTWAREms-en [Evaluation Result]
Task: SOFTWAREen-th [Evaluation Result]
Task: SOFTWAREth-en [Evaluation Result]
[NICT-SAP (Multilingual Multi-domain evaluation task (Wikinews tasks) (Asian Language Treebank (ALT)))]**
Task: ALT20en-hi [Evaluation Result]
Task: ALT20hi-en [Evaluation Result]
Task: ALT20en-id [Evaluation Result]
Task: ALT20id-en [Evaluation Result]
Task: ALT20en-ms [Evaluation Result]
Task: ALT20ms-en [Evaluation Result]
Task: ALT20en-th [Evaluation Result]
Task: ALT20th-en [Evaluation Result]
------------------------------
+ Multimodal translation tasks:
[Video Guided MT Task for Ambiguous Subtitles (2022)]
Task: VIDEOGASja-en [Evaluation Result]
[Bengali, Hindi, and Malayalam Visual Genome (2022)]**
Task: HINDENMMEVTEXT22en-bn [Evaluation Result]
Task: HINDENMMEVHI22en-bn [Evaluation Result]
Task: HINDENMMEVMM22en-bn [Evaluation Result]
Task: HINDENMMCHTEXT22en-bn [Evaluation Result]
Task: HINDENMMCHHI22en-bn [Evaluation Result]
Task: HINDENMMCHMM22en-bn [Evaluation Result]
Task: HINDENMMEVTEXT22en-hi [Evaluation Result]
Task: HINDENMMEVHI22en-hi [Evaluation Result]
Task: HINDENMMEVMM22en-hi [Evaluation Result]
Task: HINDENMMCHTEXT22en-hi [Evaluation Result]
Task: HINDENMMCHHI22en-hi [Evaluation Result]
Task: HINDENMMCHMM22en-hi [Evaluation Result]
Task: HINDENMMEVTEXT22en-ml [Evaluation Result]
Task: HINDENMMEVHI22en-ml [Evaluation Result]
Task: HINDENMMEVMM22en-ml [Evaluation Result]
Task: HINDENMMCHTEXT22en-ml [Evaluation Result]
Task: HINDENMMCHHI22en-ml [Evaluation Result]
Task: HINDENMMCHMM22en-ml [Evaluation Result]
[Flickr30kEnt-JP (Multimodal task)]
Task: MMTen-ja [Evaluation Result]
Task: MMTja-en [Evaluation Result]
[Ambiguous MS COCO]**
Task: MSCOCOen-ja [Evaluation Result]
Task: MSCOCOja-en [Evaluation Result]
------------------------------
+ Indic tasks:
[MultiIndicMT (Indic Languages Multilingual Parallel Corpus 2022)]**
Task: INDIC22en-as [Evaluation Result]
Task: INDIC22as-en [Evaluation Result]
Task: INDIC22en-bn [Evaluation Result]
Task: INDIC22bn-en [Evaluation Result]
Task: INDIC22en-gu [Evaluation Result]
Task: INDIC22gu-en [Evaluation Result]
Task: INDIC22en-hi [Evaluation Result]
Task: INDIC22hi-en [Evaluation Result]
Task: INDIC22en-kn [Evaluation Result]
Task: INDIC22kn-en [Evaluation Result]
Task: INDIC22en-ml [Evaluation Result]
Task: INDIC22ml-en [Evaluation Result]
Task: INDIC22en-mr [Evaluation Result]
Task: INDIC22mr-en [Evaluation Result]
Task: INDIC22en-ne [Evaluation Result]
Task: INDIC22ne-en [Evaluation Result]
Task: INDIC22en-or [Evaluation Result]
Task: INDIC22or-en [Evaluation Result]
Task: INDIC22en-pa [Evaluation Result]
Task: INDIC22pa-en [Evaluation Result]
Task: INDIC22en-sd [Evaluation Result]
Task: INDIC22sd-en [Evaluation Result]
Task: INDIC22en-si [Evaluation Result]
Task: INDIC22si-en [Evaluation Result]
Task: INDIC22en-ta [Evaluation Result]
Task: INDIC22ta-en [Evaluation Result]
Task: INDIC22en-te [Evaluation Result]
Task: INDIC22te-en [Evaluation Result]
Task: INDIC22en-ur [Evaluation Result]
Task: INDIC22ur-en [Evaluation Result]
[MultiIndicMT (Indic Languages Multilingual Parallel Corpus 2021)]**
Task: INDIC21en-bn [Evaluation Result]
Task: INDIC21bn-en [Evaluation Result]
Task: INDIC21en-gu [Evaluation Result]
Task: INDIC21gu-en [Evaluation Result]
Task: INDIC21en-hi [Evaluation Result]
Task: INDIC21hi-en [Evaluation Result]
Task: INDIC21en-kn [Evaluation Result]
Task: INDIC21kn-en [Evaluation Result]
Task: INDIC21en-ml [Evaluation Result]
Task: INDIC21ml-en [Evaluation Result]
Task: INDIC21en-mr [Evaluation Result]
Task: INDIC21mr-en [Evaluation Result]
Task: INDIC21en-or [Evaluation Result]
Task: INDIC21or-en [Evaluation Result]
Task: INDIC21en-pa [Evaluation Result]
Task: INDIC21pa-en [Evaluation Result]
Task: INDIC21en-ta [Evaluation Result]
Task: INDIC21ta-en [Evaluation Result]
Task: INDIC21en-te [Evaluation Result]
Task: INDIC21te-en [Evaluation Result]
------------------------------
+ ALT+ tasks:
[+UCSY]
Task: ALT2en-my [Evaluation Result]
Task: ALT2my-en [Evaluation Result]
------------------------------
+ Patent task:
[JPC3 (JPO Patent Corpus3 (2022 -))]* **
Task: JPC22zh-ja [Evaluation Result]
Task: JPC22ja-zh [Evaluation Result]
Task: JPC22ko-ja [Evaluation Result]
Task: JPC22ja-ko [Evaluation Result]
Task: JPC22en-ja [Evaluation Result]
Task: JPC22ja-en [Evaluation Result]
Task: JPCN1zh-ja [Evaluation Result]
Task: JPCN1ja-zh [Evaluation Result]
Task: JPCN1ko-ja [Evaluation Result]
Task: JPCN1ja-ko [Evaluation Result]
Task: JPCN1en-ja [Evaluation Result]
Task: JPCN1ja-en [Evaluation Result]
Task: JPCN2zh-ja [Evaluation Result]
Task: JPCN2ja-zh [Evaluation Result]
Task: JPCN2ko-ja
Task: JPCN2ja-ko
Task: JPCN2en-ja [Evaluation Result]
Task: JPCN2ja-en [Evaluation Result]
Task: JPCN3zh-ja [Evaluation Result]
Task: JPCN3ja-zh [Evaluation Result]
Task: JPCN3ko-ja [Evaluation Result]
Task: JPCN3ja-ko [Evaluation Result]
Task: JPCN3en-ja [Evaluation Result]
Task: JPCN3ja-en [Evaluation Result]
Task: JPCN4zh-ja [Evaluation Result]
Task: JPCN4ja-zh [Evaluation Result]
Task: JPCN4ko-ja [Evaluation Result]
Task: JPCN4ja-ko [Evaluation Result]
Task: JPCN4en-ja [Evaluation Result]
Task: JPCN4ja-en [Evaluation Result]
------------------------------
+ News Commentary task:
[JaRuNC (News Commentary task)]
Task: NCPDru-ja [Evaluation Result]
Task: NCPDja-ru [Evaluation Result]
------------------------------
+ Restricted Translation task:
Task: ASPECRTen-ja [Evaluation Result]
Task: ASPECRTja-en [Evaluation Result]
Task: ASPECRTja-zh [Evaluation Result]
Task: ASPECRTzh-ja [Evaluation Result]
------------------------------
+ Speech Translation Task:
[Low-Resource Khmer --> English/French Speech Translation Task]**
Task: ECCCkm-en [Evaluation Result]
Task: ECCCkm-fr [Evaluation Result]

* JPCN1{zh-ja,ja-zh,ko-ja,ja-ko,en-ja,ja-en} are the same tasks as JPC{zh-ja,ja-zh,ko-ja,ja-ko,en-ja,ja-en}, respectively.

** AMFM is not calculated for JPC{N,N2,N3,N4}, MMT, ALT20, BSD, MSCOCO, INDIC22, ECCC, MM{EV,CH}{TEXT,HI,MM}22, SOFTWARE22 and SWSTR22 tasks.

*** The BLEU values for the SWSTR22 tasks are XML-BLEU. RIBES is not calculated for these tasks. Please refer to this page for more information.


EVALUATION RESULTS (2014 - 2021)

[MultiIndicMT (Indic Languages Multilingual Parallel Corpus 2020)]
Task: INDIC20en-bn [Evaluation Result]
Task: INDIC20bn-en [Evaluation Result]
Task: INDIC20en-hi [Evaluation Result]
Task: INDIC20hi-en [Evaluation Result]
Task: INDIC20en-gu [Evaluation Result]
Task: INDIC20gu-en [Evaluation Result]
Task: INDIC20en-ml [Evaluation Result]
Task: INDIC20ml-en [Evaluation Result]
Task: INDIC20en-mr [Evaluation Result]
Task: INDIC20mr-en [Evaluation Result]
Task: INDIC20en-ta [Evaluation Result]
Task: INDIC20ta-en [Evaluation Result]
Task: INDIC20en-te [Evaluation Result]
Task: INDIC20te-en [Evaluation Result]
[IITB Corpus (2020)]
Task: HINDENen-hi [Evaluation Result]
Task: HINDENhi-en [Evaluation Result]
Task: HINDENhi-ja [Evaluation Result]
Task: HINDENja-hi [Evaluation Result]
Task: UFALta-en [Evaluation Result]
Task: UFALen-ta [Evaluation Result]
Task: ODIAENen-od [Evaluation Result]
Task: ODIAENod-en [Evaluation Result]
[Timely Disclosure tasks: Timely Disclosure Documents Corpus (2020)]
Task: TDDCITMen-ja [Evaluation Result]
Task: TDDCITMja-en [Evaluation Result]
Task: TDDCTXTen-ja [Evaluation Result]
Task: TDDCTXTja-en [Evaluation Result]
[Asian Language Treebank (ALT) Project 2019]
Task: ALT2en-km [Evaluation Result]
Task: ALT2km-en [Evaluation Result]
Task: ALT2en-my [Evaluation Result]
Task: ALT2my-en [Evaluation Result]
[JPO Patent Corpus2 (-2021)]
Task: JPCNzh-ja [Evaluation Result]
Task: JPCNja-zh [Evaluation Result]
Task: JPCNko-ja [Evaluation Result]
Task: JPCNja-ko [Evaluation Result]
Task: JPCNen-ja [Evaluation Result]
Task: JPCNja-en [Evaluation Result]
Task: JPCN2ko-ja [Evaluation Result]
Task: JPCN2ja-ko [Evaluation Result]
Task: JPCEPzh-ja [Evaluation Result]
[JPO Patent Corpus (-2017)]
Task: JPCzh-ja* [Evaluation Result]
Task: JPCja-zh* [Evaluation Result]
Task: JPCko-ja* [Evaluation Result]
Task: JPCja-ko* [Evaluation Result]
Task: JPCen-ja* [Evaluation Result]
Task: JPCja-en* [Evaluation Result]
[BPPT Corpus (2016)]
Task: BPPTen-id [Evaluation Result]
Task: BPPTid-en [Evaluation Result]
[RECIPE Corpus]
Task: RECIPEALLen-ja [Evaluation Result]
Task: RECIPEALLja-en [Evaluation Result]
Task: RECIPETTLen-ja [Evaluation Result] (titles only)
Task: RECIPETTLja-en [Evaluation Result] (titles only)
Task: RECIPESTEen-ja [Evaluation Result] (steps only)
Task: RECIPESTEja-en [Evaluation Result] (steps only)
Task: RECIPEINGen-ja [Evaluation Result] (ingredients only)
Task: RECIPEINGja-en [Evaluation Result] (ingredients only)
[Indic Languages Multilingual Parallel Corpus]
Task: INDICen-bn [Evaluation Result]
Task: INDICbn-en [Evaluation Result]
Task: INDICen-hi [Evaluation Result]
Task: INDIChi-en [Evaluation Result]
Task: INDICen-ml [Evaluation Result]
Task: INDICml-en [Evaluation Result]
Task: INDICen-ta [Evaluation Result]
Task: INDICta-en [Evaluation Result]
Task: INDICen-te [Evaluation Result]
Task: INDICte-en [Evaluation Result]
Task: INDICen-ur [Evaluation Result]
Task: INDICur-en [Evaluation Result]
Task: INDICen-si [Evaluation Result]
Task: INDICsi-en [Evaluation Result]
Task: HINDENMMEVTEXTen-hi [Evaluation Result]
Task: HINDENMMEVHIen-hi [Evaluation Result]
Task: HINDENMMEVMMen-hi [Evaluation Result]
Task: HINDENMMCHTEXTen-hi [Evaluation Result]
Task: HINDENMMCHHIen-hi [Evaluation Result]
Task: HINDENMMCHMMen-hi [Evaluation Result]
[Asian Language Treebank (ALT) Project]
Task: ALTen-my [Evaluation Result]
Task: ALTmy-en [Evaluation Result]

AUTOMATIC EVALUATION PROCEDURES

Tools

Procedures
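The results above are reported primarily as corpus-level BLEU (with RIBES and AMFM where noted). For orientation, here is a minimal, self-contained sketch of corpus-level BLEU; it assumes a single reference per sentence and uses no smoothing, so it is an illustration only and may differ from the official WAT scoring pipeline linked from this page:

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Counter of the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) multiplied by the brevity penalty. One reference
    per hypothesis, whitespace tokenization, no smoothing."""
    matches = [0] * max_n   # clipped n-gram matches, per order
    totals = [0] * max_n    # hypothesis n-gram counts, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            # Counter intersection implements the "clipping" in BLEU.
            matches[n - 1] += sum((ngrams(h, n) & ngrams(r, n)).values())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if hyp_len == 0 or min(matches) == 0:
        return 0.0  # unsmoothed BLEU is zero if any order has no match
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * math.exp(log_prec)
```

An identical hypothesis and reference score 1.0; partial overlaps score strictly between 0 and 1. Published scorers (e.g. the tools linked from this page) add tokenization rules and multi-reference handling on top of this core computation.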



EVALUATION RESULTS USAGE POLICY

When you use the WAT evaluation results for any purpose, such as:
- writing technical papers,
- giving presentations about your system, or
- advertising your MT system to customers,
you may use the information about translation directions, the scores of your system (both automatic and human evaluations), and its rank among the other systems. You may also use the scores of the other systems, but you MUST anonymize those systems' names. In addition, you may link (by URL) to the WAT evaluation result pages.

TECHNICAL COLLABORATOR


NICT (National Institute of Information and Communications Technology)
Kyoto University
Last Modified: 2018-08-13