
ZeroSCROLLS: Zero-Shot CompaRison Over Long Language Sequences

What is ZeroSCROLLS?

ZeroSCROLLS is a suite of datasets that require synthesizing information over long texts, evaluated in a zero-shot setting. The benchmark includes ten natural language tasks across multiple domains, including summarization, question answering, aggregated sentiment classification, and information reordering.
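As an illustration of how a benchmark like this is typically consumed, the sketch below loads a single task with the Hugging Face `datasets` library. The hub id `tau/zero_scrolls`, the config names, the grouping into categories, and the example field names are assumptions based on the public release, not stated on this page.

```python
# Minimal sketch of accessing ZeroSCROLLS via Hugging Face `datasets`.
# Hub id and config names below are assumptions, not taken from this page.

# The ten tasks, grouped roughly by the categories described above.
ZERO_SCROLLS_TASKS = {
    "summarization": ["gov_report", "summ_screen_fd", "qmsum", "squality"],
    "question_answering": ["qasper", "narrative_qa", "quality", "musique"],
    "aggregated_sentiment": ["space_digest"],
    "information_reordering": ["book_sum_sort"],
}

def load_task(task_name: str):
    """Fetch the test split of one ZeroSCROLLS task (requires network access)."""
    # Imported lazily so the module works without `datasets` installed.
    from datasets import load_dataset
    return load_dataset("tau/zero_scrolls", task_name, split="test")

# Example (network access required; field names assumed):
# data = load_task("gov_report")
# print(data[0]["input"])  # full zero-shot prompt, instructions included
```

Because the benchmark is zero-shot, each example's input is a complete prompt; a model's raw generation is compared against the reference output, with no in-context demonstrations.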


Citing ZeroSCROLLS

Please use the following BibTeX entry to cite ZeroSCROLLS:

@inproceedings{shaham-etal-2023-zeroscrolls,
    title = "{Z}ero{SCROLLS}: A Zero-Shot Benchmark for Long Text Understanding",
    author = "Shaham, Uri  and
      Ivgi, Maor  and
      Efrat, Avia  and
      Berant, Jonathan  and
      Levy, Omer",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-emnlp.536",
    doi = "10.18653/v1/2023.findings-emnlp.536",
    pages = "7977--7989"
}

When citing ZeroSCROLLS, please make sure to cite all of the original dataset papers. [bibtex]


Contact Us

scrolls-benchmark-contact@googlegroups.com

Tel-Aviv University NLP lab