# | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description
--- | --- | --- | --- | --- | --- | --- | --- | ---
1 | ORGANIZER | MMCHTEXT24en-hi | 2020/08/27 21:08:36 | 3587 | 0.727010 | NMT | No | Transformer base model
2 | ODIANLP | MMCHTEXT24en-hi | 2020/09/14 19:07:05 | 3713 | 0.804880 | NMT | Yes | Transformer model (used IITB as an additional resource for training)
3 | CNLP-NITS | MMCHTEXT24en-hi | 2020/09/18 16:12:32 | 3898 | 0.750320 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data using a BRNN model.
4 | | MMCHTEXT24en-hi | 2020/09/19 06:17:42 | 4031 | 0.698600 | NMT | No | For the text-only En-Hi translation task, we use an adaptation of the NMT-Keras code to suit our problem. Here, the focus is on long-term translation as well as active learning strategies. […]
5 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:12:48 | 4188 | 0.773230 | NMT | No |
6 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:15:08 | 4189 | 0.773230 | NMT | No |
7 | ORGANIZER | MMCHTEXT24en-hi | 2020/11/07 02:18:17 | 4190 | 0.773230 | NMT | No |
8 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2021/04/28 00:15:03 | 5732 | 0.797409 | NMT | Yes | Pretrained on monolingual data (IITB) and fine-tuned with parallel data (WAT21 train data + phrase pairs extracted from the WAT21 train data + IITB train data) using a BRNN model.
9 | Volta | MMCHTEXT24en-hi | 2021/05/25 13:55:36 | 6429 | 0.876300 | NMT | Yes | Fine-tuned mBART (used the IITB corpus for data augmentation)
10 | nlp_novices | MMCHTEXT24en-hi | 2022/07/10 22:51:34 | 6725 | 0.000000 | NMT | Yes | Fine-tuned Transformers, additionally using the OPUS corpora
11 | CNLP-NITS-PP | MMCHTEXT24en-hi | 2022/07/11 12:43:47 | 6742 | 0.000000 | NMT | No | Transliteration-based phrase-pair augmentation in training using BRNN-based NMT
12 | SILO_NLP | MMCHTEXT24en-hi | 2022/07/12 04:15:44 | 6838 | 0.000000 | NMT | No | Fine-tuning with the pre-trained mBART-50 model
13 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/03 02:56:51 | 7088 | 0.000000 | NMT | No | Fine-tuning Transformer using NLLB-200 from Facebook
14 | ODIAGEN | MMCHTEXT24en-hi | 2023/07/06 12:31:36 | 7110 | 0.000000 | NMT | No | Fine-tuning Transformer using NLLB-200 from Facebook
15 | 00-7 | MMCHTEXT24en-hi | 2024/08/09 18:03:29 | 7313 | 0.000000 | NMT | No |
16 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 22:57:15 | 7349 | 0.000000 | NMT | No | En-Hi system trained using back-translated data from Flickr.
17 | DCU_NMT | MMCHTEXT24en-hi | 2024/08/11 23:00:10 | 7350 | 0.000000 | NMT | No | The baseline text-only model trained only on the provided data.
18 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/12 23:38:25 | 7358 | 0.000000 | Other | No | LLM-based (fine-tuned Mistral-7B)
19 | ODIAGEN | MMCHTEXT24en-hi | 2024/08/15 17:16:06 | 7402 | 0.000000 | Other | No | LLM-based (fine-tuned multimodal LLaVA model for a region-specific instruction set in Hindi)
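Several of the stronger entries above (Volta, SILO_NLP) fine-tune a pretrained multilingual model such as mBART/mBART-50 for En-Hi translation. The snippet below is a minimal sketch of that general approach, assuming the Hugging Face `transformers` library and the public `facebook/mbart-large-50-many-to-many-mmt` checkpoint; the actual submissions additionally fine-tuned on task and IITB data, which is omitted here.

```python
# Sketch: En->Hi generation with a pretrained mBART-50 checkpoint.
# Assumes the public facebook/mbart-large-50-many-to-many-mmt model;
# not the exact pipeline of any submission above.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("A man is riding a horse.", return_tensors="pt")

# Force Hindi as the target by setting its language code as the BOS token.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"],
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```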