| # | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description |
|---|------|------|-----------|--------|------|--------|-----------------|--------------------|
| 1 | NLPHut | INDIC21pa-en | 2021/03/20 00:15:19 | 4615 | 0.717322 | NMT | No | Transformer trained on PMI data for all languages, then fine-tuned on all pa-en data. |
| 2 | ORGANIZER | INDIC21pa-en | 2021/04/08 17:25:18 | 4803 | 0.701483 | NMT | No | Bilingual baseline trained on PMI data. Transformer base. LR=10^-3. |
| 3 | NICT-5 | INDIC21pa-en | 2021/04/21 15:45:23 | 5288 | 0.792541 | NMT | No | Pretrain mBART on IndicCorp and fine-tune on bilingual PMI data. Beam search. Model is bilingual. |
| 4 | SRPOL | INDIC21pa-en | 2021/04/21 19:33:28 | 5332 | 0.815069 | NMT | No | Base Transformer on all WAT21 data. |
| 5 | NICT-5 | INDIC21pa-en | 2021/04/22 11:53:47 | 5363 | 0.800753 | NMT | No | mBART + MNMT. Beam 4. |
| 6 | gaurvar | INDIC21pa-en | 2021/04/25 18:11:23 | 5534 | 0.686625 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages. |
| 7 | gaurvar | INDIC21pa-en | 2021/04/25 18:35:36 | 5551 | 0.693631 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages. |
| 8 | gaurvar | INDIC21pa-en | 2021/04/25 18:49:16 | 5562 | 0.692458 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages. |
| 9 | gaurvar | INDIC21pa-en | 2021/04/25 19:02:55 | 5572 | 0.694658 | NMT | No | Multi-task multilingual T5 trained for multiple Indic languages. |
| 10 | sakura | INDIC21pa-en | 2021/04/30 22:59:33 | 5877 | 0.823464 | NMT | No | Fine-tuning of a multilingual mBART many-to-many model on the training corpus. |
| 11 | IIIT-H | INDIC21pa-en | 2021/05/03 18:15:36 | 6023 | 0.811169 | NMT | No | MNMT system (XX-En) trained by exploiting lexical similarity on the PMI+CVIT parallel corpus, then improved using back-translation on PMI monolingual data, followed by fine-tuning. |
| 12 | CFILT | INDIC21pa-en | 2021/05/04 01:17:27 | 6059 | 0.804561 | NMT | No | Multilingual (many-to-one, XX-En) NMT model based on the Transformer with a shared encoder and decoder. |
| 13 | coastal | INDIC21pa-en | 2021/05/04 01:46:35 | 6108 | 0.779252 | NMT | No | seq2seq model trained on all WAT2021 data. |
| 14 | CFILT-IITB | INDIC21pa-en | 2021/05/04 01:56:11 | 6123 | 0.772655 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model with a shared encoder-decoder and shared BPE vocabulary, trained on all Indic-language data converted to the same script. |
| 15 | CFILT-IITB | INDIC21pa-en | 2021/05/04 01:59:52 | 6129 | 0.782112 | NMT | No | Multilingual NMT (many-to-one): Transformer-based model with a shared encoder-decoder and shared BPE vocabulary, trained on all Indo-Aryan-language data converted to the same script. |
| 16 | coastal | INDIC21pa-en | 2021/05/04 05:43:28 | 6168 | 0.814440 | NMT | No | mT5 trained only on PMI. |
| 17 | sakura | INDIC21pa-en | 2021/05/04 13:20:33 | 6209 | 0.823371 | NMT | No | Pre-training of a multilingual mBART many-to-many model on the training corpus, followed by fine-tuning on the PMI parallel data. |
| 18 | SRPOL | INDIC21pa-en | 2021/05/04 15:28:09 | 6249 | 0.841641 | NMT | No | Ensemble of many-to-one models on all data. Pre-trained on back-translated data, fine-tuned on PMI. |
| 19 | SRPOL | INDIC21pa-en | 2021/05/04 16:33:03 | 6275 | 0.836440 | NMT | No | Many-to-one model on all data. Pre-trained on back-translated data, fine-tuned on PMI. |
| 20 | IITP-MT | INDIC21pa-en | 2021/05/04 18:12:25 | 6301 | 0.815989 | NMT | No | Many-to-one model trained on all training data with the base Transformer. All Indic-language data is romanized. Model fine-tuned on the back-translated PMI monolingual corpus. |
| 21 | mcairt | INDIC21pa-en | 2021/05/04 19:28:11 | 6342 | 0.818332 | NMT | No | Multilingual model (many-to-one) trained on all WAT 2021 data using the base Transformer. |
| 22 | NICT-5 | INDIC21pa-en | 2021/06/21 12:05:01 | 6479 | 0.000000 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs. |
| 23 | NICT-5 | INDIC21pa-en | 2021/06/25 11:50:20 | 6500 | 0.000000 | NMT | No | Using PMI and PIB data for fine-tuning an mBART model trained for over 5 epochs. MNMT model. |
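
Several of the entries above (the NICT-5 "mBART + MNMT. Beam 4." submission, the sakura runs) follow the same recipe: fine-tune a multilingual mBART checkpoint, then decode pa-en with beam search. A minimal decoding sketch is below, using the Hugging Face `transformers` API with the public `facebook/mbart-large-50-many-to-many-mmt` checkpoint as a stand-in; this is not any team's actual model, and whether a Punjabi (`pa_IN`) language code is available depends on the checkpoint.

```python
# Decoding sketch only, NOT a team's actual system: a many-to-many mBART
# checkpoint decoded with beam size 4 (as in the NICT-5 "Beam 4" entry).
# The checkpoint name and the "pa_IN" language code are assumptions; the
# WAT21 teams fine-tuned their own mBART/MNMT models on PMI (and PIB) data.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_name = "facebook/mbart-large-50-many-to-many-mmt"  # stand-in checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "pa_IN"  # assumption: Punjabi code present in the tokenizer
src_sentence = "ਇਹ ਇੱਕ ਉਦਾਹਰਨ ਵਾਕ ਹੈ।"  # "This is an example sentence."
inputs = tokenizer(src_sentence, return_tensors="pt")

# Force the decoder to start with the English language token, then beam-search.
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    num_beams=4,
    max_length=128,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```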
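
The CFILT-IITB and IITP-MT entries instead lean on preprocessing: mapping all Indic training data into one script (or romanizing it) so related languages share BPE subwords. A sketch of that step, assuming the `indic_transliteration` package as the tool; the system descriptions specify the idea, not the library, and the teams may have used other tools such as the Indic NLP Library.

```python
# Shared-script preprocessing sketch for the CFILT-IITB / IITP-MT recipe.
# The indic_transliteration package is an assumed stand-in for whatever
# transliteration tool the teams actually used.
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

gurmukhi_line = "ਸਤਿ ਸ੍ਰੀ ਅਕਾਲ"  # Punjabi text in Gurmukhi script

# Map Gurmukhi into Devanagari so all Indo-Aryan corpora share one script
# (CFILT-IITB), or into a Latin scheme to "romanize" the data (IITP-MT).
devanagari_line = transliterate(gurmukhi_line, sanscript.GURMUKHI, sanscript.DEVANAGARI)
roman_line = transliterate(gurmukhi_line, sanscript.GURMUKHI, sanscript.IAST)
print(devanagari_line)
print(roman_line)
```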