| # | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description |
|---|------|------|-----------|--------|------|--------|-----------------|--------------------|
| 1 | goku20 | BSDen-ja | 2020/09/15 20:27:00 | 3753 | 0.000000 | NMT | No | mBART pre-training, doc-level single model |
| 2 | goku20 | BSDen-ja | 2020/09/15 20:33:23 | 3756 | 0.000000 | NMT | Yes | mBART pre-training, doc-level ensembled model, JESC parallel corpus |
| 3 | ut-mrt | BSDen-ja | 2020/09/17 12:04:19 | 3793 | 0.000000 | NMT | Yes | Transformer-base trained on large batches of the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0, and jParaCrawl; tuned on the full BSD corpus. |
| 4 | ut-mrt | BSDen-ja | 2020/09/17 13:50:02 | 3803 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 |
| 5 | ut-mrt | BSDen-ja | 2020/09/17 13:55:20 | 3805 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k) |
| 6 | ut-mrt | BSDen-ja | 2020/09/17 16:51:27 | 3810 | 0.000000 | NMT | No | Transformer-small (4-layer) trained on the BSD corpus from GitHub (20k). |
| 7 | ut-mrt | BSDen-ja | 2020/09/18 17:19:52 | 3920 | 0.000000 | SMT | No | SMT baseline trained on the BSD corpus from GitHub (20k) |
| 8 | ut-mrt | BSDen-ja | 2020/09/18 17:57:22 | 3937 | 0.000000 | NMT | Yes | Transformer-small trained on the full BSD corpus (80k) with one previous context sentence |
| 9 | ut-mrt | BSDen-ja | 2020/09/18 18:49:05 | 3942 | 0.000000 | NMT | Yes | Transformer-base ensemble of the 2 best models trained on large batches of the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0, WMT 2020, and news corpora; tuned on the full BSD corpus. |
| 10 | DEEPNLP | BSDen-ja | 2020/09/19 03:22:48 | 4026 | 0.000000 | NMT | No | Transformer model trained on the BSD corpus (20k) |
| 11 | DEEPNLP | BSDen-ja | 2020/09/19 03:23:53 | 4027 | 0.000000 | NMT | Yes | Training corpus is a mix of several publicly available JA-EN datasets such as JESC, KFTT, MTNT, etc. Model: base Transformer trained and then fine-tuned on BSD. |
| 12 | DEEPNLP | BSDen-ja | 2020/09/19 15:01:28 | 4049 | 0.000000 | NMT | No | An ensemble of two Transformer models trained on the BSD corpus (20k) |
| 13 | DEEPNLP | BSDen-ja | 2020/09/19 15:03:40 | 4050 | 0.000000 | NMT | Yes | An ensemble of Transformer models trained on several publicly available JA-EN datasets such as JESC, KFTT, MTNT, etc., then fine-tuned on filtered back-translated data and further fine-tuned on BSD. |
| 14 | ut-mrt | BSDen-ja | 2020/09/19 18:55:29 | 4060 | 0.000000 | NMT | No | Transformer-small (4-layer) trained on the BSD corpus from GitHub (20k) with one previous context sentence. Average of the 4 best models. |
| 15 | ut-mrt | BSDen-ja | 2020/09/19 20:21:04 | 4068 | 0.000000 | NMT | Yes | Transformer-base trained on data from WMT 2020 without any BSD |
| 16 | ut-mrt | BSDen-ja | 2020/09/19 20:25:59 | 4070 | 0.000000 | NMT | Yes | Transformer-base trained on data from WMT 2020 + the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 |
| 17 | ut-mrt | BSDen-ja | 2020/09/19 20:44:13 | 4071 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0, with domain tags to separate each corpus and one previous context sentence. Average of the 4 best models. |
| 18 | ut-mrt | BSDen-ja | 2020/09/19 20:49:11 | 4074 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0, with domain tags to separate each corpus. Average of the 4 best models. |
| 19 | ut-mrt | BSDen-ja | 2020/09/19 20:55:04 | 4076 | 0.000000 | NMT | Yes | Transformer-base trained on document-aligned news data, the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0, with one previous context sentence. |