| # | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description |
|---|------|------|-----------|--------|------|--------|-----------------|--------------------|
| 1 | ORGANIZER | INDIC20en-te | 2020/09/02 16:42:39 | 3630 | 0.000000 | NMT | No | Baseline MLNMT En-to-XX model using PIB and filtered PMI data. Transformer big model, default settings. |
| 2 | ODIANLP | INDIC20en-te | 2020/09/17 01:51:37 | 3781 | 0.000000 | NMT | No | Transformer Base with relative position representations + en-xx model + PMI data |
| 3 | cvit | INDIC20en-te | 2020/09/18 02:24:55 | 3859 | 0.000000 | NMT | Yes | |
| 4 | Deterministic Algorithms Lab | INDIC20en-te | 2020/09/18 18:04:54 | 3939 | 0.000000 | NMT | No | XLM model with DAE loss, MT loss, MLM loss, TLM loss, and back-translation loss. |
| 5 | NICT-5 | INDIC20en-te | 2020/09/18 21:04:54 | 3996 | 0.000000 | NMT | No | XX-to-XX transformer model trained on the officially provided PMI and PIB data. Corpora were size-balanced. |
| 6 | NICT-5 | INDIC20en-te | 2020/09/18 21:27:49 | 4009 | 0.000000 | NMT | No | XX-to-XX transformer model trained on the officially provided PMI and PIB data. Corpora were size-unbalanced. |
| 7 | HW-TSC | INDIC20en-te | 2020/09/19 17:28:48 | 4054 | 0.000000 | NMT | No | Deep transformer (pre-LN), en2xx, PMI data and filtered PIB data (fastText domain adaptation) + SentencePiece + 4-model ensemble + adapter |
| 8 | cvit | INDIC20en-te | 2020/09/19 18:29:16 | 4056 | 0.000000 | NMT | No | Multilingual Transformer model; encoder pre-trained on a Telugu monolingual corpus, then fine-tuned for English–Telugu translation |
| 9 | cvit | INDIC20en-te | 2020/09/19 18:45:54 | 4058 | 0.000000 | NMT | No | Multilingual Transformer model; encoder pre-trained on a Telugu monolingual corpus, then fine-tuned for English–Telugu translation |
| 10 | cvit | INDIC20en-te | 2020/09/19 18:57:57 | 4061 | 0.000000 | NMT | No | Multilingual Transformer baseline model; encoder pre-training, then fine-tuned on English–Telugu parallel corpora |
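Several of the systems above are "En-to-XX" multilingual models, i.e. a single model translating English into every Indic target language. A common way to implement this (a minimal illustrative sketch; the function name and token format are assumptions, not taken from any submission) is to prepend a target-language token to each source sentence so the shared model knows which language to produce:

```python
def tag_source(src_sentence: str, tgt_lang: str) -> str:
    """Prepend a target-language token (e.g. '<2te>' for Telugu) to the
    English source, the usual preprocessing step for one-to-many NMT."""
    return f"<2{tgt_lang}> {src_sentence}"


# The same English sentence can then be routed to different target languages
# by the tag alone, using one shared multilingual model.
pairs = [
    ("this is a test .", "te"),  # English -> Telugu
    ("this is a test .", "hi"),  # English -> Hindi
]
tagged = [tag_source(src, lang) for src, lang in pairs]
print(tagged[0])  # <2te> this is a test .
```

At training time every parallel sentence pair is tagged this way, which is what lets a single Transformer serve all en-xx directions in the task.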