| # | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description |
|---|------|------|-----------|--------|------|--------|-----------------|--------------------|
| 1 | ORGANIZER | MMEVTEXT24en-hi | 2020/08/27 21:02:12 | 3586 | 0.796570 | NMT | No | Transformer base model |
| 2 | ODIANLP | MMEVTEXT24en-hi | 2020/09/14 18:52:37 | 3711 | 0.808340 | NMT | Yes | Transformer model (used IITB as an additional resource for training) |
| 3 | CNLP-NITS | MMEVTEXT24en-hi | 2020/09/18 16:08:14 | 3897 | 0.804250 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data using a BRNN model. |
| 4 | | MMEVTEXT24en-hi | 2020/09/19 04:50:41 | 4030 | 0.767900 | NMT | No | For the text-only English-Hindi translation task, we adapt the NMT-Keras code to our problem, focusing on long-term translation as well as active-learning strategies. The t… |
| 5 | CNLP-NITS-PP | MMEVTEXT24en-hi | 2021/04/28 00:20:19 | 5733 | 0.642785 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from WAT21 train data + IITB train data) using a BRNN model. |
| 6 | Volta | MMEVTEXT24en-hi | 2021/05/25 13:55:19 | 6427 | 0.838180 | NMT | Yes | Fine-tuned mBART (used the IITB corpus for data augmentation) |
| 7 | nlp_novices | MMEVTEXT24en-hi | 2022/07/10 23:57:47 | 6733 | 0.000000 | NMT | Yes | Fine-tuned Transformers, additionally using the OPUS corpora |
| 8 | CNLP-NITS-PP | MMEVTEXT24en-hi | 2022/07/11 12:24:45 | 6739 | 0.000000 | NMT | No | Transliteration-based phrase-pair augmentation in training, using BRNN-based NMT |
| 9 | SILO_NLP | MMEVTEXT24en-hi | 2022/07/12 04:13:50 | 6836 | 0.000000 | NMT | No | Fine-tuning with the pre-trained mBART-50 model |
| 10 | ODIAGEN | MMEVTEXT24en-hi | 2023/07/03 02:52:22 | 7087 | 0.000000 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 11 | ODIAGEN | MMEVTEXT24en-hi | 2023/07/06 12:27:29 | 7109 | 0.000000 | NMT | No | Fine-tuning a Transformer using NLLB-200 from Facebook |
| 12 | 00-7 | MMEVTEXT24en-hi | 2024/08/11 13:13:28 | 7322 | 0.000000 | NMT | Yes | |
| 13 | ODIAGEN | MMEVTEXT24en-hi | 2024/08/11 18:14:56 | 7335 | 0.000000 | Other | No | LLM-based (Mistral-7B fine-tuning) |
| 14 | DCU_NMT | MMEVTEXT24en-hi | 2024/08/11 22:54:11 | 7347 | 0.000000 | NMT | No | Baseline model trained in the English-Hindi direction using Fairseq NMT, text only. |
| 15 | DCU_NMT | MMEVTEXT24en-hi | 2024/08/11 22:55:41 | 7348 | 0.000000 | NMT | Yes | En-Hi system trained using an additional 8k data pairs extracted from Flickr. |
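Several of the listed systems (e.g., Volta and SILO_NLP) fine-tune a pretrained multilingual sequence-to-sequence model such as mBART/mBART-50 on English-Hindi parallel data. The sketch below shows that general recipe with the Hugging Face `transformers` API; the checkpoint name, the toy sentence pair, and the hyperparameters are illustrative assumptions, not any team's actual configuration.

```python
# Minimal sketch: fine-tune mBART-50 for English -> Hindi translation.
# Assumptions: recent `transformers` (>= 4.22) and `torch`; checkpoint and
# training data are placeholders, not the submitted systems' settings.
import torch
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

checkpoint = "facebook/mbart-large-50"  # assumed checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(
    checkpoint, src_lang="en_XX", tgt_lang="hi_IN"
)
model = MBartForConditionalGeneration.from_pretrained(checkpoint)

# Toy parallel pair standing in for WAT-style En-Hi training data.
src = ["A man is riding a bicycle."]
tgt = ["एक आदमी साइकिल चला रहा है।"]

# Tokenize source and target; `text_target` produces the `labels` field.
batch = tokenizer(src, text_target=tgt, return_tensors="pt", padding=True)

# One illustrative optimization step of standard cross-entropy fine-tuning.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
loss = model(**batch).loss
loss.backward()
optimizer.step()

# Inference: force the Hindi language token as the first generated token.
model.eval()
with torch.no_grad():
    generated = model.generate(
        **tokenizer(src, return_tensors="pt"),
        forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"],
        max_new_tokens=40,
    )
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

In practice the submissions train over the full WAT parallel corpus (plus any permitted additional resources such as IITB) for multiple epochs rather than a single step; the single forward/backward pass above only illustrates the data flow.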