---
license: cc-by-nc-sa-4.0
pipeline_tag: fill-mask
language: en
datasets:
- OpenSubtitles
---
## Model description
This model is based on the paper [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529) by Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott (2022, arXiv:2210.05529, preprint).