mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 for multi-label classification (the model name indicates MIMIC-IV data, though the card does not name the dataset). It achieves the following results on the evaluation set:

  • F1 Micro: 0.0062
  • F1 Macro: 0.0059
  • Precision At 5: 0.0131
  • Recall At 5: 0.0040
  • Precision At 8: 0.0108
  • Recall At 8: 0.0056
  • Precision At 15: 0.0124
  • Recall At 15: 0.0101
  • Rare F1 Micro: 0.0040
  • Rare F1 Macro: 0.0040
  • Rare Precision: 0.0020
  • Rare Recall: 0.9992
  • Rare Precision At 5: 0.0055
  • Rare Recall At 5: 0.0025
  • Rare Precision At 8: 0.0041
  • Rare Recall At 8: 0.0029
  • Rare Precision At 15: 0.0032
  • Rare Recall At 15: 0.0044
  • Not Rare F1 Micro: 0.1354
  • Not Rare F1 Macro: 0.1308
  • Not Rare Precision: 0.0726
  • Not Rare Recall: 0.9998
  • Not Rare Precision At 5: 0.1391
  • Not Rare Recall At 5: 0.0842
  • Not Rare Precision At 8: 0.1066
  • Not Rare Recall At 8: 0.1005
  • Not Rare Precision At 15: 0.0989
  • Not Rare Recall At 15: 0.1650
  • Loss: -2.3104
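
The near-perfect recall values (Rare Recall 0.9992, Not Rare Recall 0.9998) paired with very low precision suggest the decision threshold marks almost every label positive. The @k entries are the usual multi-label ranking metrics: precision@k is the fraction of the top-k scored labels that are correct, and recall@k is the fraction of an example's true labels recovered in its top k. A minimal sketch of how such values can be computed (scikit-learn for the F1 scores; function and variable names are illustrative, not taken from the training code):

```python
import numpy as np
from sklearn.metrics import f1_score

def precision_recall_at_k(y_true, y_score, k):
    """Mean precision@k and recall@k over examples for multi-label outputs.

    y_true:  (n_samples, n_labels) binary ground-truth matrix
    y_score: (n_samples, n_labels) real-valued label scores
    """
    # Column indices of the k highest-scoring labels per example
    topk = np.argsort(y_score, axis=1)[:, -k:]
    # Count how many of those top-k labels are actually positive
    hits = np.take_along_axis(y_true, topk, axis=1).sum(axis=1)
    precision_at_k = (hits / k).mean()
    # Avoid division by zero for examples with no positive labels
    n_positive = np.maximum(y_true.sum(axis=1), 1)
    recall_at_k = (hits / n_positive).mean()
    return precision_at_k, recall_at_k

# Toy example; in practice y_score would be sigmoid(logits) and
# y_pred a thresholded version of it
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=(100, 50))
y_score = rng.random((100, 50))
y_pred = (y_score > 0.5).astype(int)

print("F1 micro:", f1_score(y_true, y_pred, average="micro"))
print("F1 macro:", f1_score(y_true, y_pred, average="macro"))
print("P@5, R@5:", precision_recall_at_k(y_true, y_score, k=5))
```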

Model description

More information needed

Intended uses & limitations

More information needed
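
No usage example is provided. As a rough sketch only: the checkpoint name suggests a multi-label classification head on top of the instruct model, but the head architecture, label set, and whether the weights are a full model or an adapter are not documented here, so everything below (including problem_type="multi_label_classification") is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the repo exposes a sequence-classification head with a
# multi-label problem type; repo id as published under the deb101 namespace.
repo = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(
    repo,
    problem_type="multi_label_classification",
    torch_dtype=torch.bfloat16,
)
model.eval()

inputs = tokenizer("Example clinical note text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Multi-label decoding: independent sigmoid per label, then a threshold
probs = torch.sigmoid(logits)
predicted_label_ids = (probs[0] > 0.5).nonzero(as_tuple=True)[0].tolist()
```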

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 5
  • mixed_precision_training: Native AMP
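
For reference, these settings map one-to-one onto a Hugging Face TrainingArguments configuration; the effective batch size is train_batch_size × gradient_accumulation_steps = 8 × 4 = 32. A minimal sketch (output_dir is a placeholder; fp16 stands in for "Native AMP"):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # placeholder, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 * 4 = total train batch size 32
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,                      # Native AMP mixed precision
)
```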

Training results

| Training Loss | Epoch | Step | F1 Micro | F1 Macro | Precision At 5 | Recall At 5 | Precision At 8 | Recall At 8 | Precision At 15 | Recall At 15 | Rare F1 Micro | Rare F1 Macro | Rare Precision | Rare Recall | Rare Precision At 5 | Rare Recall At 5 | Rare Precision At 8 | Rare Recall At 8 | Rare Precision At 15 | Rare Recall At 15 | Not Rare F1 Micro | Not Rare F1 Macro | Not Rare Precision | Not Rare Recall | Not Rare Precision At 5 | Not Rare Recall At 5 | Not Rare Precision At 8 | Not Rare Recall At 8 | Not Rare Precision At 15 | Not Rare Recall At 15 | Validation Loss |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| -2.5733 | 0.9981 | 262 | 0.0086 | 0.0060 | 0.2032 | 0.0452 | 0.1975 | 0.0694 | 0.1826 | 0.1185 | 0.0051 | 0.0040 | 0.0026 | 0.7894 | 0.0369 | 0.0112 | 0.0329 | 0.0162 | 0.0290 | 0.0270 | 0.1354 | 0.1308 | 0.0726 | 1.0 | 0.2012 | 0.1187 | 0.1963 | 0.1842 | 0.1802 | 0.3115 | -2.1808 |
| -2.8745 | 1.9981 | 524 | 0.0070 | 0.0062 | 0.1153 | 0.0311 | 0.1079 | 0.0456 | 0.0933 | 0.0723 | 0.0044 | 0.0041 | 0.0022 | 0.8685 | 0.0391 | 0.0155 | 0.0333 | 0.0210 | 0.0281 | 0.0323 | 0.1399 | 0.1337 | 0.0754 | 0.9720 | 0.1735 | 0.1110 | 0.1544 | 0.1553 | 0.1400 | 0.2550 | -2.2971 |
| -3.0665 | 2.9981 | 786 | 0.0064 | 0.0060 | 0.0525 | 0.0148 | 0.0450 | 0.0203 | 0.0392 | 0.0309 | 0.0041 | 0.0040 | 0.0020 | 0.9688 | 0.0150 | 0.0061 | 0.0134 | 0.0086 | 0.0107 | 0.0129 | 0.1376 | 0.1323 | 0.0739 | 0.9840 | 0.1498 | 0.0950 | 0.1236 | 0.1245 | 0.1147 | 0.2041 | -2.3224 |
| -3.5627 | 3.9981 | 1048 | 0.0062 | 0.0060 | 0.0182 | 0.0059 | 0.0152 | 0.0075 | 0.0163 | 0.0135 | 0.0040 | 0.0040 | 0.0020 | 0.9920 | 0.0069 | 0.0031 | 0.0052 | 0.0039 | 0.0044 | 0.0062 | 0.1361 | 0.1313 | 0.0730 | 0.9973 | 0.1394 | 0.0855 | 0.1093 | 0.1055 | 0.1022 | 0.1756 | -2.3239 |
| -4.0526 | 4.9981 | 1310 | 0.0062 | 0.0059 | 0.0131 | 0.0040 | 0.0108 | 0.0056 | 0.0124 | 0.0101 | 0.0040 | 0.0040 | 0.0020 | 0.9992 | 0.0055 | 0.0025 | 0.0041 | 0.0029 | 0.0032 | 0.0044 | 0.1354 | 0.1308 | 0.0726 | 0.9998 | 0.1391 | 0.0842 | 0.1066 | 0.1005 | 0.0989 | 0.1650 | -2.3104 |

Framework versions

  • Transformers 4.49.0
  • PyTorch 2.6.0
  • Datasets 3.6.0
  • Tokenizers 0.21.1