
XStoryCloze

Paper

Title: Few-shot Learning with Multilingual Language Models

Abstract: https://arxiv.org/abs/2112.10668

XStoryCloze consists of professionally translated versions of the English StoryCloze dataset (Spring 2016 version) in 10 non-English languages. The dataset was released by Meta AI.

Homepage: facebookresearch/fairseq#4820
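The underlying task is the Story Cloze Test: given a four-sentence story context, the model must choose the more plausible of two candidate endings. A minimal Python sketch of that selection step is below; the `overlap_score` heuristic and the example story are illustrative stand-ins, not the harness's actual per-token log-likelihood scoring or real dataset items.

```python
from typing import Callable, Sequence

def pick_ending(context: str,
                endings: Sequence[str],
                score: Callable[[str, str], float]) -> int:
    """Return the index of the candidate ending with the highest score.

    `score(context, ending)` stands in for a real language-model scorer
    (e.g. summed log-likelihood of the ending given the context).
    """
    scores = [score(context, ending) for ending in endings]
    return max(range(len(endings)), key=scores.__getitem__)

def overlap_score(context: str, ending: str) -> float:
    # Toy scorer for illustration only: count ending words that also
    # appear in the context (punctuation-stripped, lowercased).
    context_words = {w.strip(".,!?") for w in context.lower().split()}
    return sum(w.strip(".,!?") in context_words
               for w in ending.lower().split())

# Hypothetical example in the StoryCloze format (not from the dataset).
story = ("Anna baked a cake for the party. She frosted it carefully. "
         "Her friends arrived at noon. Everyone gathered around the table.")
endings = ["Everyone shared the cake at the table.",
           "Anna went to the garage."]
print(pick_ending(story, endings, overlap_score))  # → 0
```

In the harness, the same selection is made per language by scoring each candidate ending under the model and picking the likelier one; accuracy is the fraction of stories where the correct ending wins.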

Citation

@article{DBLP:journals/corr/abs-2112-10668,
  author    = {Xi Victoria Lin and
               Todor Mihaylov and
               Mikel Artetxe and
               Tianlu Wang and
               Shuohui Chen and
               Daniel Simig and
               Myle Ott and
               Naman Goyal and
               Shruti Bhosale and
               Jingfei Du and
               Ramakanth Pasunuru and
               Sam Shleifer and
               Punit Singh Koura and
               Vishrav Chaudhary and
               Brian O'Horo and
               Jeff Wang and
               Luke Zettlemoyer and
               Zornitsa Kozareva and
               Mona T. Diab and
               Veselin Stoyanov and
               Xian Li},
  title     = {Few-shot Learning with Multilingual Language Models},
  journal   = {CoRR},
  volume    = {abs/2112.10668},
  year      = {2021},
  url       = {https://arxiv.org/abs/2112.10668},
  eprinttype = {arXiv},
  eprint    = {2112.10668},
  timestamp = {Tue, 04 Jan 2022 15:59:27 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Groups and Tasks

Groups

  • xstorycloze

Tasks

  • xstorycloze_ar: Arabic
  • xstorycloze_en: English
  • xstorycloze_es: Spanish
  • xstorycloze_eu: Basque
  • xstorycloze_hi: Hindi
  • xstorycloze_id: Indonesian
  • xstorycloze_my: Burmese
  • xstorycloze_ru: Russian
  • xstorycloze_sw: Swahili
  • xstorycloze_te: Telugu
  • xstorycloze_zh: Chinese

Checklist

For adding novel benchmarks/datasets to the library:

  • Is the task an existing benchmark in the literature?
    • Have you referenced the original paper that introduced the task?
    • If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:

  • Is the "Main" variant of this task clearly denoted?
  • Have you provided a short sentence in a README on what each new variant adds / evaluates?
  • Have you noted which, if any, published evaluation setups are matched by this variant?