| # | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description |
|---|------|------|-----------|--------|------|--------|-----------------|--------------------|
| 1 | goku20 | BSDja-en | 2020/09/15 19:41:51 | 3747 | 0.000000 | NMT | Yes | mBART pre-training, doc-level ensembled model, JESC parallel corpus |
| 2 | goku20 | BSDja-en | 2020/09/15 19:43:02 | 3748 | 0.000000 | NMT | No | mBART pre-training, sentence-level single model |
| 3 | ut-mrt | BSDja-en | 2020/09/17 13:49:46 | 3802 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 |
| 4 | ut-mrt | BSDja-en | 2020/09/17 13:55:01 | 3804 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k) |
| 5 | DEEPNLP | BSDja-en | 2020/09/17 15:58:22 | 3806 | 0.000000 | NMT | No | Transformer-base model trained on the BSD corpus (20k). |
| 6 | DEEPNLP | BSDja-en | 2020/09/17 16:21:53 | 3808 | 0.000000 | NMT | No | Transformer-base model trained on the BSD corpus (20k). |
| 7 | adapt-dcu | BSDja-en | 2020/09/17 23:50:55 | 3836 | 0.000000 | NMT | Yes | Training corpus is a mix of OpenSubtitles, JESC, and BSD. The Marian-NMT toolkit was used to train a transformer, fine-tuned on the same corpus oversampled with BSD. |
| 8 | ut-mrt | BSDja-en | 2020/09/18 17:19:26 | 3919 | 0.000000 | SMT | No | SMT baseline trained on the BSD corpus from GitHub (20k) |
| 9 | ut-mrt | BSDja-en | 2020/09/18 17:57:02 | 3936 | 0.000000 | NMT | Yes | Transformer-small trained on the full BSD corpus (80k) with one previous context sentence |
| 10 | adapt-dcu | BSDja-en | 2020/09/18 18:35:31 | 3940 | 0.000000 | NMT | Yes | Training corpus is a mix of OpenSubtitles, JESC, and BSD. The Marian-NMT toolkit was used to train a transformer (baseline model). |
| 11 | adapt-dcu | BSDja-en | 2020/09/18 19:26:01 | 3963 | 0.000000 | NMT | Yes | Training corpus is a mix of OpenSubtitles, JESC, and BSD. The Marian-NMT toolkit was used to train a transformer, fine-tuned on a source-original synthetic corpus. |
| 12 | ut-mrt | BSDja-en | 2020/09/18 19:35:10 | 3966 | 0.000000 | NMT | Yes | Transformer-base ensemble of 2 best models trained on large batches of the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 and WMT 2020 corpora; tuned on the full BSD corpus. |
| 13 | adapt-dcu | BSDja-en | 2020/09/18 20:07:07 | 3970 | 0.000000 | NMT | Yes | The Marian-NMT toolkit was used to train a transformer, fine-tuned on a source-original synthetic corpus as well as the BSD training corpus. |
| 14 | DEEPNLP | BSDja-en | 2020/09/19 14:57:57 | 4047 | 0.000000 | NMT | No | An ensemble of two transformer models trained on the BSD corpus (20k) |
| 15 | DEEPNLP | BSDja-en | 2020/09/19 15:00:04 | 4048 | 0.000000 | NMT | Yes | An ensemble of transformer models trained on several publicly available JA-EN datasets (e.g., JESC, KFTT, MTNT), then fine-tuned on filtered back-translated data, followed by fine-tuning on BSD. |
| 16 | ut-mrt | BSDja-en | 2020/09/19 18:55:02 | 4059 | 0.000000 | NMT | No | Transformer-small (4-layer) trained on the BSD corpus from GitHub (20k) with one previous context sentence. Average of the four best models. |
| 17 | ut-mrt | BSDja-en | 2020/09/19 19:06:00 | 4065 | 0.000000 | NMT | No | Transformer-small (4-layer) trained on the BSD corpus from GitHub (20k). Average of the four best models. |
| 18 | ut-mrt | BSDja-en | 2020/09/19 20:20:28 | 4067 | 0.000000 | NMT | Yes | Transformer-base trained on data from WMT 2020 + the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 |
| 19 | ut-mrt | BSDja-en | 2020/09/19 20:25:23 | 4069 | 0.000000 | NMT | Yes | Transformer-base trained on data from WMT 2020 without any BSD |
| 20 | ut-mrt | BSDja-en | 2020/09/19 20:46:38 | 4072 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 with domain tags to separate each corpus and one previous context sentence. Average of the 4 best models. |
| 21 | ut-mrt | BSDja-en | 2020/09/19 20:47:47 | 4073 | 0.000000 | NMT | Yes | Transformer-base trained on the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 with domain tags to separate each corpus. Average of the 4 best models. |
| 22 | ut-mrt | BSDja-en | 2020/09/19 20:54:35 | 4075 | 0.000000 | NMT | Yes | Transformer-base trained on document-aligned news data, the full BSD corpus (80k), AMI meeting corpus, Ontonotes 5.0 with one previous context sentence. |
| 23 | DEEPNLP | BSDja-en | 2020/10/16 18:03:12 | 4162 | 0.000000 | NMT | No | |
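
Several of the systems above (notably the goku20 entries) start from mBART pre-training and fine-tune on BSD and related corpora. As a rough illustration of that setup, here is a minimal sketch assuming the Hugging Face `transformers` library and the public `facebook/mbart-large-cc25` checkpoint; the actual submissions may have used a different toolkit (e.g., fairseq), and the raw pre-trained checkpoint would need fine-tuning on BSD/JESC before producing usable business-dialogue translations.

```python
# Minimal sketch of ja->en translation with a pre-trained mBART checkpoint.
# Assumptions (not from the submissions themselves): the Hugging Face
# `transformers` library and the `facebook/mbart-large-cc25` checkpoint.
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "facebook/mbart-large-cc25"
tokenizer = MBartTokenizer.from_pretrained(
    model_name, src_lang="ja_XX", tgt_lang="en_XX"
)
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Encode a Japanese business-dialogue sentence and generate English,
# forcing the English language token at the start of decoding.
inputs = tokenizer("お世話になっております。", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],
    num_beams=5,
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])

# For fine-tuning on BSD sentence pairs, the target side is tokenized the
# same way and passed as `labels` (requires a recent transformers version
# for the `text_target` argument):
batch = tokenizer(
    "お世話になっております。",
    text_target="Thank you for your continued support.",
    return_tensors="pt",
)
loss = model(**batch).loss  # standard cross-entropy training loss
```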