Attempts to fill out the 1B3 model details that diverge from the main one.
#1
by meg (HF Staff) - opened
README.md
CHANGED
@@ -120,11 +120,11 @@ Please see [the BLOOM training README](https://github.com/bigscience-workshop/bi

 * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions

-*
+* 1.3 billion parameters:

-*
+* 24 layers, 16 attention heads

-* Hidden layers are
+* Hidden layers are 2048-dimensional

 * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))

@@ -132,17 +132,19 @@ Please see [the BLOOM training README](https://github.com/bigscience-workshop/bi

 **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).

-* Hardware:
-
-* Additional 32 A100 80GB GPUs (4 nodes) in reserve
-
-* 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links
+* Hardware: 64 V100 16/32GB GPUs (16 nodes):

+* 4 GPUs per node
+
+* 40 CPUs per task
+
+* 1 task per node
+
 * CPU: AMD

-* CPU memory:
+* CPU memory: 160GB per node

-* GPU memory:
+* GPU memory: 64GB or 128GB (depending on node availability during training) per node

 * Inter-node connect: Omni-Path Architecture (OPA)

@@ -163,28 +165,21 @@ Please see [the BLOOM training README](https://github.com/bigscience-workshop/bi

 #### **Training**

-
-_In progress._
-
-Current training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/)
-
 - Checkpoint size:

--
+- Fp16 weights: 2.6GB (# params * 2)

-- Full checkpoint with optimizer states:
+- Full checkpoint with optimizer states: --

-- Training throughput:
+- Training throughput: --

-- Number of epochs: 1
+- Number of epochs: 1

 - Dates:

--
-
-- Estimated end: 5th July, 2022
+- Start: 11th March, 2022 11:42am PST

--
+- End: 20 May, 2022

 - Server training location: Île-de-France, France

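A quick back-of-envelope check of the architecture numbers added in the first hunk (24 layers, 16 attention heads, 2048-dimensional hidden states). This is only a sketch: it assumes a standard decoder-only transformer block (roughly 4h² attention weights plus 8h² MLP weights per layer) and ignores embeddings, biases, and layer norms, none of which are spelled out in the card.

```python
# Back-of-envelope check of the architecture figures in the diff above.
# Assumes a standard decoder-only transformer block (~4*h^2 attention weights
# plus ~8*h^2 MLP weights per layer); embeddings, biases and layer norms
# are ignored.
n_layer = 24
n_head = 16
hidden = 2048

head_dim = hidden // n_head                 # 2048 / 16 = 128 per attention head
block_params = 12 * n_layer * hidden ** 2   # ~1.21e9 weights in the transformer blocks

print(f"head dim: {head_dim}")
print(f"approx. block parameters: {block_params / 1e9:.2f}B")
```

The ~1.2B parameters in the blocks alone are on the order of the quoted 1.3 billion; embedding parameters (set by the BLOOM tokenizer's vocabulary size, not restated here) come on top of that.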
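The new "Fp16 weights: 2.6GB (# params * 2)" checkpoint line is plain bytes-per-parameter arithmetic; a minimal illustration, assuming 2 bytes per fp16 parameter and the quoted 1.3B parameter count:

```python
# Checkpoint-size arithmetic behind "Fp16 weights: 2.6GB (# params * 2)".
n_params = 1.3e9             # "1.3 billion parameters" from the card
bytes_per_param = 2          # fp16 / half precision

fp16_checkpoint_gb = n_params * bytes_per_param / 1e9
print(f"fp16 weights: ~{fp16_checkpoint_gb:.1f} GB")   # ~2.6 GB
```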
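The hardware bullets in the second hunk can be cross-checked against each other. A small sketch under the stated layout (16 nodes, 4 V100s per node, 1 task per node, 40 CPUs per task, 16GB or 32GB cards); the totals it prints match the 64 GPUs and the 64GB/128GB per-node GPU memory quoted in the diff:

```python
# Cross-check of the hardware figures in the second hunk above.
# Assumes the stated layout: 16 nodes, 4 V100s (16GB or 32GB) per node,
# 1 task per node, 40 CPUs per task.
nodes = 16
gpus_per_node = 4
tasks_per_node = 1
cpus_per_task = 40
v100_memory_gb = (16, 32)    # mixed 16GB / 32GB V100 nodes

total_gpus = nodes * gpus_per_node                                    # 64 GPUs
total_cpus = nodes * tasks_per_node * cpus_per_task                   # 640 CPU cores reserved
gpu_mem_per_node = {gb: gpus_per_node * gb for gb in v100_memory_gb}  # {16: 64, 32: 128}

print(f"total GPUs: {total_gpus}, CPU cores reserved: {total_cpus}")
print(f"GPU memory per node (GB): {gpu_mem_per_node}")
```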