# | Team | Task | Date/Time | DataID | AMFM | Method | Other Resources | System Description
--- | --- | --- | --- | --- | --- | --- | --- | ---
1 | ORGANIZER | HINDENhi-en | 2016/07/26 10:04:53 | 1031 | 0.621100 | Other | Yes | Online A (2016)
2 | ORGANIZER | HINDENhi-en | 2016/07/26 13:25:18 | 1048 | 0.590520 | Other | Yes | Online B (2016)
3 | ORGANIZER | HINDENhi-en | 2016/07/26 15:44:20 | 1054 | 0.574850 | SMT | No | Phrase-based SMT
4 | IITP-MT | HINDENhi-en | 2016/08/29 15:10:41 | 1289 | 0.567370 | SMT | No | Hierarchical SMT
5 | XMUNLP | HINDENhi-en | 2017/07/24 08:47:29 | 1427 | 0.568010 | NMT | No | Single NMT model
6 | XMUNLP | HINDENhi-en | 2017/07/26 22:54:46 | 1488 | 0.627190 | NMT | No | Single NMT model + monolingual data
7 | XMUNLP | HINDENhi-en | 2017/07/27 23:00:46 | 1511 | 0.629530 | NMT | No | Ensemble of 4 NMT models + monolingual data
8 | IITB-MTG | HINDENhi-en | 2017/08/01 15:10:09 | 1726 | 0.557040 | NMT | No | NMT with ensemble (last 3 + best validation)
9 | cvit | HINDENhi-en | 2018/09/14 13:21:46 | 2331 | 0.623240 | NMT | Yes | ConvS2S model; uses external data
10 | CUNI | HINDENhi-en | 2018/09/15 03:10:30 | 2381 | 0.611090 | NMT | No | Transformer big, transfer learning from CS-EN 1M steps, only original HI-EN, beam=8, alpha=0.8, averaging of last 8 models after 230k steps
11 | cvit | HINDENhi-en | 2018/11/06 15:51:54 | 2563 | 0.628600 | NMT | Yes |
12 | ORGANIZER | HINDENhi-en | 2018/11/13 14:57:12 | 2567 | 0.586360 | NMT | No | NMT with attention
13 | cvit | HINDENhi-en | 2019/03/15 01:23:17 | 2643 | 0.637230 | NMT | Yes | massive-multi
14 | cvit | HINDENhi-en | 2019/03/15 01:33:22 | 2645 | 0.631250 | NMT | Yes | massive-multi + ft
15 | cvit | HINDENhi-en | 2019/03/22 05:52:47 | 2658 | 0.554700 | NMT | Yes | Many-to-En (Transformer model) trained on WAT2018 data; detokenized
16 | cvit | HINDENhi-en | 2019/05/27 16:04:36 | 2681 | 0.641730 | NMT | Yes | massive-multi + bt
17 | NICT-5 | HINDENhi-en | 2019/07/23 17:36:36 | 2865 | 0.566490 | NMT | No | Hi-En and Ta-En mixed-training NMT model; Transformer on t2t (Hi-En is external data)
18 | LTRC-MT | HINDENhi-en | 2019/07/27 04:04:09 | 3117 | 0.563590 | NMT | No | Transformer baseline, only IIT-B data
19 | LTRC-MT | HINDENhi-en | 2019/07/27 04:49:36 | 3119 | 0.594770 | NMT | No | Transformer model with back-translation
20 | LTRC-MT | HINDENhi-en | 2019/07/27 05:34:14 | 3121 | 0.594550 | NMT | No | LSTM with attention, back-translation, reinforcement learning for 1 epoch
21 | LTRC-MT | HINDENhi-en | 2019/07/27 05:58:33 | 3124 | 0.587060 | NMT | No | LSTM with global attention and back-translation
22 | cvit | HINDENhi-en | 2020/06/10 15:37:12 | 3418 | 0.601810 | NMT | Yes | XX-to-En model, uses PIB-v0 dataset
23 | cvit | HINDENhi-en | 2020/07/06 02:29:11 | 3419 | 0.609500 | NMT | Yes | XX-to-En model, uses PIB-v1 data
24 | cvit | HINDENhi-en | 2020/07/06 06:22:02 | 3422 | 0.596650 | NMT | Yes | Multilingual model, mm-all-iter0
25 | cvit | HINDENhi-en | 2020/07/06 06:38:14 | 3423 | 0.596890 | NMT | Yes | Multilingual model, uses PIB-v0 data (mm-all-iter1)
26 | cvit | HINDENhi-en | 2020/07/10 04:28:17 | 3434 | 0.610650 | NMT | Yes | XX-to-En model, uses PIB-v2 data
27 | cvit | HINDENhi-en | 2020/07/20 20:38:09 | 3441 | 0.610910 | NMT | Yes | XX-to-En model, uses PIB-v2 data
28 | cvit | HINDENhi-en | 2020/08/18 05:27:08 | 3446 | 0.614060 | NMT | Yes |
29 | WT | HINDENhi-en | 2020/09/03 18:12:32 | 3638 | 0.637410 | NMT | No | Used 5M back-translated News Crawl sentences for training. Method: Transformer NMT. Preprocessing: (1) removed mixed-language sentences; (2) Moses tokeniser for English, IndicNLP normaliser and tokeniser for Hindi
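The leaderboard above can be processed programmatically. The sketch below (an illustration, not part of the original export) parses a few pipe-separated rows, assuming the nine-column layout of the table (#, Team, Task, Date/Time, DataID, AMFM, Method, Other Resources, System Description), and finds the highest-scoring submission by AMFM; the sample rows are copied from the table.

```python
# Minimal sketch: parse pipe-delimited leaderboard rows and rank by AMFM.
# Column names are assumptions taken from the table header above.
import csv
from io import StringIO

RAW = """\
1|ORGANIZER|HINDENhi-en|2016/07/26 10:04:53|1031|0.621100|Other|Yes|Online A (2016)
16|cvit|HINDENhi-en|2019/05/27 16:04:36|2681|0.641730|NMT|Yes|massive-multi + bt
29|WT|HINDENhi-en|2020/09/03 18:12:32|3638|0.637410|NMT|No|Transformer NMT with back-translation
"""

FIELDS = ["rank", "team", "task", "datetime", "data_id",
          "amfm", "method", "other_resources", "description"]

def load_rows(raw: str) -> list[dict]:
    """Read pipe-separated rows into dicts and convert AMFM to float."""
    reader = csv.DictReader(StringIO(raw), fieldnames=FIELDS, delimiter="|")
    rows = list(reader)
    for row in rows:
        row["amfm"] = float(row["amfm"])
    return rows

rows = load_rows(RAW)
best = max(rows, key=lambda r: r["amfm"])
print(best["team"], best["amfm"])  # prints: cvit 0.64173
```

Real exports also carry the unused metric columns ("-" or "0.000000"); those would need to be skipped or given dummy field names before applying the same mapping.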