Latest commit: Update README.md (ef49e22, verified)
Folders (all from the commit "Latest training run, just 4 epochs, optimizations all pulled except for FP16, save and eval at epochs to avoid over-fitting"):

    1_Pooling/
    checkpoint-1386/
    checkpoint-2079/
    checkpoint-2772/
    checkpoint-693/
    eval/
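The checkpoint folders sit at multiples of 693 steps (693, 1386, 2079, 2772), which lines up with the commit message: four epochs of roughly 693 optimizer steps each, with a checkpoint saved and an evaluation run at every epoch boundary. The exact hyperparameters live in training_args.bin below; as a rough sketch only, a configuration matching that description with sentence_transformers.training_args.SentenceTransformerTrainingArguments (the class the pickle scan further down confirms was used) might look like the following, where the output path and batch size are assumptions rather than values read from this repo:

```python
# Hypothetical sketch of the setup described by the commit message:
# 4 epochs, FP16 kept as the only optimization, save/eval once per epoch.
from sentence_transformers.training_args import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir=".",                  # assumption; checkpoint-693 ... checkpoint-2772 sit at the repo root
    num_train_epochs=4,              # "just 4 epochs"
    fp16=True,                       # the one optimization that was not pulled (assumes a CUDA GPU)
    eval_strategy="epoch",           # evaluate at each epoch boundary (older transformers: evaluation_strategy)
    save_strategy="epoch",           # and save a checkpoint there, to catch over-fitting early
    per_device_train_batch_size=16,  # assumption; not recoverable from this listing
)
```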
Files (size, last commit):

    2.47 kB      Track all the big files I'm about to upload
    20.6 kB      Update README.md
    619 Bytes    Latest training run (same commit as the folders above)
    205 Bytes    Upload folder using huggingface_hub
    471 MB       Latest training run (same commit as the folders above)
    229 Bytes    Latest training run (same commit as the folders above)
    53 Bytes     Latest training run (same commit as the folders above)
    964 Bytes    Latest training run (same commit as the folders above)
    17.1 MB      Latest training run (same commit as the folders above)
    1.45 kB      Latest training run (same commit as the folders above)
    6.1 kB       training_args.bin    Latest training run (same commit as the folders above)

        Detected pickle imports (12):
            sentence_transformers.training_args.BatchSamplers
            transformers.trainer_pt_utils.AcceleratorConfig
            transformers.trainer_utils.SchedulerType
            transformers.trainer_utils.SaveStrategy
            torch.device
            transformers.training_args.OptimizerNames
            accelerate.state.PartialState
            transformers.trainer_utils.IntervalStrategy
            sentence_transformers.training_args.MultiDatasetBatchSamplers
            sentence_transformers.training_args.SentenceTransformerTrainingArguments
            transformers.trainer_utils.HubStrategy
            accelerate.utils.dataclasses.DistributedType
    14.8 MB      Latest training run (same commit as the folders above)
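training_args.bin is not plain tensor data; it is a pickled SentenceTransformerTrainingArguments object, which is why the Hub's scanner reports the class imports above. If, and only if, you trust this repository, the exact hyperparameters can be read back with torch.load. The snippet below is a sketch: weights_only=False is needed on PyTorch 2.6+ precisely because the file contains arbitrary pickled classes, and the packages named in the scan (sentence_transformers, transformers, accelerate, torch) must be installed for unpickling to succeed.

```python
# Sketch: inspect the pickled training arguments (trusted repositories only).
# torch.load defaults to weights_only=True on PyTorch >= 2.6, which would
# reject the non-tensor classes reported by the pickle scan above.
import torch

args = torch.load("training_args.bin", weights_only=False)

print(type(args).__name__)               # SentenceTransformerTrainingArguments
print(args.num_train_epochs, args.fp16)  # expected 4 and True, per the commit message
print(args.save_strategy)                # expected: an epoch-based save strategy
```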