
WMT16

Paper

Title: Findings of the 2016 Conference on Machine Translation

Abstract: http://www.aclweb.org/anthology/W/W16/W16-2301

Homepage: https://huggingface.co/datasets/wmt16

Citation

@InProceedings{bojar-EtAl:2016:WMT1,
  author    = {Bojar, Ond{\v{r}}ej  and  Chatterjee, Rajen  and  Federmann, Christian  and  Graham, Yvette  and  Haddow, Barry  and  Huck, Matthias  and  Jimeno Yepes, Antonio  and  Koehn, Philipp  and  Logacheva, Varvara  and  Monz, Christof  and  Negri, Matteo  and  Neveol, Aurelie  and  Neves, Mariana  and  Popel, Martin  and  Post, Matt  and  Rubino, Raphael  and  Scarton, Carolina  and  Specia, Lucia  and  Turchi, Marco  and  Verspoor, Karin  and  Zampieri, Marcos},
  title     = {Findings of the 2016 Conference on Machine Translation},
  booktitle = {Proceedings of the First Conference on Machine Translation},
  month     = {August},
  year      = {2016},
  address   = {Berlin, Germany},
  publisher = {Association for Computational Linguistics},
  pages     = {131--198},
  url       = {http://www.aclweb.org/anthology/W/W16/W16-2301}
}

Groups and Tasks

Groups

  • wmt-t5-prompt: Group for all WMT tasks using the prompt templates from T5 (Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer)

Tasks

With specific prompt styles

  • wmt-ro-en-t5-prompt: WMT16 Romanian–English with the prompt template used for T5
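
The T5 paper casts translation as text-to-text by prefixing each source sentence with a plain-language instruction. A minimal sketch of what such a prompt template looks like for the Romanian–English direction; the exact prefix string used by this task's config is an assumption here, following the pattern described in the T5 paper:

```python
def t5_translation_prompt(source_sentence: str) -> str:
    """Build a T5-style text-to-text prompt for WMT16 ro-en.

    T5 prefixes the source text with a natural-language task
    instruction; this prefix follows the pattern from the T5
    paper and is assumed, not read from the task config.
    """
    return f"translate Romanian to English: {source_sentence}"


# The model is then expected to generate the English translation
# as its continuation of this prompt.
prompt = t5_translation_prompt("Casa este albastră.")
print(prompt)  # translate Romanian to English: Casa este albastră.
```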

Checklist

For adding novel benchmarks/datasets to the library:

  • Is the task an existing benchmark in the literature?
    • Have you referenced the original paper that introduced the task?
    • If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

  • Is the "Main" variant of this task clearly denoted?
  • Have you provided a short sentence in a README on what each new variant adds / evaluates?
  • Have you noted which, if any, published evaluation setups are matched by this variant?