Update README.md
README.md
CHANGED
````diff
@@ -41,68 +41,62 @@ It achieves the following results on the evaluation set:
 ```
 conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
 pip install optimum[openvino,nncf]==1.7.0
-
-pip install -r requirements.txt
+pip install datasets sentencepiece scipy scikit-learn protobuf evaluate
 pip install wandb # optional
 ```
 
-##
+## Training script
 
-See
+See https://gist.github.com/yujiepan-work/5d7e513a47b353db89f6e1b512d7c080
 
 
 ## Run
 
 We use one card for training.
 
-```
-NNCFCFG=/path/to/
+```bash
+NNCFCFG=/path/to/nncf_config.json
 python run_glue.py \
 --lr_scheduler_type cosine_with_restarts \
---
---
---
---
+--cosine_lr_scheduler_cycles 11 6 \
+--record_best_model_after_epoch 9 \
+--load_best_model_at_end True \
+--metric_for_best_model accuracy \
 --model_name_or_path textattack/bert-base-uncased-SST-2 \
 --teacher_model_or_path yoshitomo-matsubara/bert-large-uncased-sst2 \
 --distillation_temperature 2 \
 --task_name sst2 \
 --nncf_compression_config $NNCFCFG \
 --distillation_weight 0.95 \
---output_dir /tmp/bert-base-uncased-sst2-int8-unstructured80
+--output_dir /tmp/bert-base-uncased-sst2-int8-unstructured80 \
+--overwrite_output_dir \
+--run_name bert-base-uncased-sst2-int8-unstructured80 \
 --do_train \
 --do_eval \
 --max_seq_length 128 \
 --per_device_train_batch_size 32 \
 --per_device_eval_batch_size 32 \
 --learning_rate 5e-05 \
 --optim adamw_torch \
 --num_train_epochs 17 \
 --logging_steps 1 \
 --evaluation_strategy steps \
 --eval_steps 250 \
 --save_strategy steps \
 --save_steps 250 \
 --save_total_limit 1 \
 --fp16 \
 --seed 1
 ```
 
-The best model checkpoint is stored in the `best_model` folder. Here we only upload that checkpoint folder together with some config files.
-
-
-## inference
-
-https://gist.github.com/yujiepan-work/c38dc4e56c7a9d803c42988f7b7d260a
-
-
 ### Framework versions
 
 - Transformers 4.26.0
 - Pytorch 1.13.1+cu116
 - Datasets 2.8.0
 - Tokenizers 0.13.2
+- Optimum 1.6.3
+- Optimum-intel 1.7.0
+- NNCF 2.4.0
 
 For a full description of the environment, please refer to `pip-requirements.txt` and `conda-requirements.txt`.
````
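The `--nncf_compression_config` flag takes an NNCF JSON config; the `NNCFCFG` value above is a placeholder path, and the actual config file is not part of this diff. As a rough sketch only, judging from the model name (int8, 80% unstructured sparsity), such a config would combine quantization and magnitude sparsity; every value below is an assumption, not the repo's real settings:

```json
{
  "input_info": [
    { "sample_size": [1, 128], "type": "long", "keyword": "input_ids" },
    { "sample_size": [1, 128], "type": "long", "keyword": "token_type_ids" },
    { "sample_size": [1, 128], "type": "long", "keyword": "attention_mask" }
  ],
  "compression": [
    {
      "algorithm": "magnitude_sparsity",
      "sparsity_init": 0.0,
      "params": {
        "schedule": "polynomial",
        "sparsity_target": 0.8,
        "sparsity_target_epoch": 9,
        "sparsity_freeze_epoch": 13
      }
    },
    { "algorithm": "quantization", "preset": "mixed" }
  ]
}
```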
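`--teacher_model_or_path`, `--distillation_temperature 2`, and `--distillation_weight 0.95` configure knowledge distillation from the BERT-large SST-2 teacher into the BERT-base student. The linked training-script gist is authoritative for the exact loss; a minimal sketch of the conventional temperature-scaled formulation these flags suggest:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      weight: float = 0.95) -> torch.Tensor:
    """Hinton-style soft-target loss mixed with hard-label cross-entropy."""
    # Soften both distributions with the temperature; the T^2 factor keeps
    # soft-target gradients on a comparable scale across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    # With distillation_weight 0.95, most of the signal comes from the teacher.
    return weight * soft + (1.0 - weight) * hard
```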
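`--lr_scheduler_type cosine_with_restarts` is a stock `transformers` schedule, while `--cosine_lr_scheduler_cycles 11 6` and `--record_best_model_after_epoch 9` appear to be custom flags defined by the training script; how the two cycle counts are applied is only visible in the gist. For reference, the underlying schedule in plain `transformers`:

```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

model = torch.nn.Linear(4, 2)  # stand-in model, for illustration only
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
total_steps = 1000             # in practice: steps_per_epoch * num_train_epochs

# Cosine annealing that restarts from the peak learning rate num_cycles times.
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=total_steps, num_cycles=11
)
for _ in range(total_steps):
    optimizer.step()
    scheduler.step()
```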
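The previous revision linked an inference example (https://gist.github.com/yujiepan-work/c38dc4e56c7a9d803c42988f7b7d260a), which this update removes along with the note that the best checkpoint lives in the `best_model` folder. For models trained through `optimum[openvino,nncf]`, inference typically runs on the OpenVINO runtime; a minimal sketch, assuming the checkpoint folder contains an exported OpenVINO IR that `optimum-intel` can load:

```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# The path is an assumption; point it at the uploaded checkpoint folder.
model_id = "/tmp/bert-base-uncased-sst2-int8-unstructured80/best_model"
model = OVModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("a charming and affecting movie"))
```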