| # | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Bering Lab | JPCNzh-ja | 2021/05/04 11:01:52 | 6200 | 0.975588 | NMT | Yes | Transformer ensemble with an additional crawled parallel corpus |
| 2 | tpt_wat | JPCNzh-ja | 2021/04/27 01:40:19 | 5689 | 0.972917 | NMT | No | Base Transformer trained on a shared 8k vocabulary |
| 3 | ORGANIZER | JPCNzh-ja | 2018/08/16 16:06:18 | 1995 | 0.000000 | NMT | No | NMT with attention |
| 4 | USTC | JPCNzh-ja | 2018/08/31 17:20:53 | 2205 | 0.000000 | NMT | No | tensor2tensor, 4-model average, r2l reranking |
| 5 | EHR | JPCNzh-ja | 2018/08/31 18:44:03 | 2209 | 0.000000 | NMT | No | SMT-reranked NMT |
| 6 | sarah | JPCNzh-ja | 2019/07/22 14:45:06 | 2802 | 0.000000 | NMT | No | Transformer, single model |
| 7 | sarah | JPCNzh-ja | 2019/07/25 12:11:56 | 2920 | 0.000000 | NMT | No | Transformer, ensemble of 4 models |
| 8 | ryan | JPCNzh-ja | 2019/07/25 21:58:10 | 2949 | 0.000000 | NMT | No | Base Transformer |
| 9 | KNU_Hyundai | JPCNzh-ja | 2019/07/27 08:27:13 | 3152 | 0.000000 | NMT | Yes | Transformer (base) with relative position, back-translation, multi-source, r2l reranking, 5-model ensemble; used the ASPEC corpus |
| 10 | goku20 | JPCNzh-ja | 2020/09/18 17:10:40 | 3915 | 0.000000 | NMT | No | Transformer, ensemble of 3 models |
| 11 | goku20 | JPCNzh-ja | 2020/09/18 17:24:39 | 3925 | 0.000000 | NMT | No | mBART pre-training, ensemble of 3 models |
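Several of the system descriptions above rely on multi-model ensembling ("ensemble of 3 models", "4-model average", "5-model ensemble"). Below is a minimal, illustrative sketch of the usual recipe, averaging each member's next-token distribution at every greedy decoding step. The toy models, vocabulary size, and token ids here are placeholder assumptions for the sake of a runnable example, not any team's actual system.

```python
import numpy as np

VOCAB = 8   # toy vocabulary size; real systems use far larger shared vocabularies
EOS = 0     # placeholder end-of-sequence token id

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def ensemble_greedy_decode(models, src, max_len=20):
    """Greedy decode: at each step, average the next-token distributions
    produced by every member model, then take the argmax."""
    tgt = []
    for _ in range(max_len):
        # Each member maps (source, partial target) -> next-token logits.
        step_probs = np.mean([softmax(m(src, tgt)) for m in models], axis=0)
        tok = int(np.argmax(step_probs))
        if tok == EOS:
            break
        tgt.append(tok)
    return tgt

# Toy stand-ins for trained decoders: fixed random logit tables indexed by
# target length, only so the sketch runs end to end.
rng = np.random.default_rng(0)
tables = [rng.normal(size=(21, VOCAB)) for _ in range(3)]
models = [lambda src, tgt, t=t: t[len(tgt)] for t in tables]

print(ensemble_greedy_decode(models, src="dummy source"))
```

Averaging the per-step probabilities (rather than picking one model's output) is the standard NMT ensembling recipe; it typically yields a small but consistent quality gain over any single member, which is consistent with the ensembled entries clustering at the top of this table.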