---
language:
- en
datasets:
- c4
tags:
- deep-narrow
inference: false
license: apache-2.0
---

# T5-Efficient-BASE-DL8 (Deep-Narrow version)

T5-Efficient-BASE-DL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint and was released with the
paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.

To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.

## Details of the model architecture

This model checkpoint - **t5-efficient-base-dl8** - is of model type **Base** with the following variations:
- **dl** is **8**

It has **185.17** million parameters and thus requires *ca.* **740.67 MB** of memory in full precision (*fp32*)
or **370.34 MB** of memory in half precision (*fp16* or *bf16*).
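
The depth variation and the parameter count above can be checked programmatically. The following is a minimal sketch, assuming the checkpoint is hosted on the Hugging Face Hub under the id `google/t5-efficient-base-dl8` and that the Transformers port of the checkpoint is used:

```python
# Minimal sketch: inspect the architecture and parameter count.
# Assumes the checkpoint is available on the Hub as `google/t5-efficient-base-dl8`.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("google/t5-efficient-base-dl8")
print(config.num_layers)          # encoder depth (el) -> 12 for a Base encoder
print(config.num_decoder_layers)  # decoder depth (dl) -> 8 for this checkpoint

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-dl8")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")          # ca. 185M
print(f"{n_params * 4 / 1e6:.2f} MB in fp32")       # ca. 740 MB (4 bytes/param)
print(f"{n_params * 2 / 1e6:.2f} MB in fp16/bf16")  # ca. 370 MB (2 bytes/param)
```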

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|

whereas the following abbreviations are used:

| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformer block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-value projection matrices are tied |

If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.
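
For readers working with the Transformers library, the abbreviations above map roughly onto `T5Config` attribute names. The mapping below is a best-effort aid and not part of the original table:

```python
# Best-effort mapping from the abbreviations above to T5Config attributes.
ABBREVIATION_TO_T5CONFIG = {
    "nl": ("num_layers", "num_decoder_layers"),  # depth of encoder / decoder
    "dm": "d_model",             # embedding / hidden dimension
    "kv": "d_kv",                # key/value projection dimension per head
    "nh": "num_heads",           # number of attention heads
    "ff": "d_ff",                # feed-forward (intermediate) dimension
    "el": "num_layers",          # encoder depth
    "dl": "num_decoder_layers",  # decoder depth
    # "sh" and "skv" describe weight sharing and have no direct T5Config field.
}
```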

## Pre-Training

The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using
the span-based masked language modeling (MLM) objective.
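
For illustration, the snippet below sketches what a span-corrupted training pair looks like: masked spans in the input are replaced by sentinel tokens, and the target reconstructs exactly those spans. The sentence is a hand-made example, and the snippet assumes the checkpoint's repository ships the standard T5 tokenizer:

```python
# Hand-made illustration of the span-based MLM (span corruption) objective.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-base-dl8")

# Original text: "Thank you for inviting me to your party last week."
inputs  = "Thank you <extra_id_0> me to your party <extra_id_1> week."
targets = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"

input_ids = tokenizer(inputs, return_tensors="pt").input_ids
labels    = tokenizer(targets, return_tensors="pt").input_ids
```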

## Fine-Tuning

**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage.
The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:

*PyTorch*:

- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.

*TensorFlow*:

- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.

*JAX/Flax*:

- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.
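
As a compact alternative to the full example scripts listed above, the following is a minimal PyTorch fine-tuning sketch using `Seq2SeqTrainer`. The model id `google/t5-efficient-base-dl8`, the toy data and all hyperparameters are placeholder assumptions to be adapted to your task:

```python
# Minimal PyTorch fine-tuning sketch with Seq2SeqTrainer (placeholder setup).
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
)

model_id = "google/t5-efficient-base-dl8"  # assumed Hub id of this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Toy summarization data; in practice, load a real dataset instead.
raw = Dataset.from_dict({
    "document": ["The quick brown fox jumps over the lazy dog near the river."],
    "summary": ["A fox jumps over a dog."],
})

def preprocess(batch):
    model_inputs = tokenizer(
        ["summarize: " + doc for doc in batch["document"]],
        max_length=512, truncation=True,
    )
    labels = tokenizer(text_target=batch["summary"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-efficient-base-dl8-summarization",
        per_device_train_batch_size=8,
        learning_rate=1e-4,
        num_train_epochs=1,
    ),
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```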

## Downstream Performance

TODO: Add table if available

## Computational Complexity

TODO: Add table if available

## More information

We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv*
model architecture variations have *not* been ported to Transformers, as they are probably of limited practical use and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.