2025-06-05 15:02:25,280 - INFO - ================================================================================ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:25,281 - INFO - = 📌 INITIALIZING TRAINING ENVIRONMENT = - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:25,281 - INFO - ================================================================================ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:25,281 - INFO - 🚀 Setting up data paths and environment variables... - [multilabel_classify.py:3560:main] | |
2025-06-05 15:02:25,281 - INFO - 📂 Using output directory: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3566:main] | |
2025-06-05 15:02:25,281 - INFO - 🛠️ Command-line Arguments: - [multilabel_classify.py:368:print_args] | |
2025-06-05 15:02:25,281 - INFO - | |
🔹 output_dir: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b | |
🔹 source_url: XURLs.MIMIC4_DEMO | |
🔹 data: mimic4_icd10_full | |
🔹 logfile: classification_log | |
🔹 base_dir: ../tmp/MIMIC4_DEMO | |
🔹 hub_model_id: deb101/mistral-7b-instruct-v0.3-mimic4-adapt | |
🔹 model_name: mistralai/Mistral-7B-Instruct-v0.3 | |
🔹 max_length: 512 | |
🔹 do_fresh_training: True | |
🔹 load_from_checkpoint: False | |
🔹 task: multilabel-classify | |
🔹 num_train_epochs: 4 | |
🔹 per_device_train_batch_size: 8 | |
🔹 per_device_eval_batch_size: 8 | |
🔹 metric_for_best_model: precision_at_15 | |
🔹 learning_rate: 0.0001 | |
🔹 final_lr_scheduling: 1e-06 | |
🔹 warmup_steps: 500 | |
🔹 logfile_path: ../tmp/logs/classification_log_2025-06-05_15-02-25.log | |
🔹 source: /home/ubuntu/.xcube/data/mimic4_demo - [multilabel_classify.py:369:print_args] | |
2025-06-05 15:02:25,281 - INFO - ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖ - [multilabel_classify.py:370:print_args] | |
2025-06-05 15:02:25,281 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:25,281 - INFO - + ✨ LOADING DATASETS + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:25,281 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:25,282 - INFO - 📊 Loading main datasets.... - [multilabel_classify.py:3572:main] | |
2025-06-05 15:02:33,869 - INFO - 🔍 Total unique labels in dataset: 7942 - [multilabel_classify.py:3353:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,882 - INFO - 🧪 Attempt 1: Sampled 122 rows covering 863 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,892 - INFO - 🧪 Attempt 2: Sampled 122 rows covering 816 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,901 - INFO - 🧪 Attempt 3: Sampled 122 rows covering 885 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,910 - INFO - 🧪 Attempt 4: Sampled 122 rows covering 828 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,920 - INFO - 🧪 Attempt 5: Sampled 122 rows covering 879 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,928 - INFO - 🧪 Attempt 6: Sampled 122 rows covering 852 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,937 - INFO - 🧪 Attempt 7: Sampled 122 rows covering 838 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,946 - INFO - 🧪 Attempt 8: Sampled 122 rows covering 851 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,955 - INFO - 🧪 Attempt 9: Sampled 122 rows covering 825 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,964 - INFO - 🧪 Attempt 10: Sampled 122 rows covering 833 labels. - [multilabel_classify.py:3367:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,964 - INFO - ⚠️ Skipping label coverage fix. 7109 labels are missing. - [multilabel_classify.py:3385:sample_df_with_full_label_coverage] | |
2025-06-05 15:02:33,964 - INFO - ✅ Final row count: 122 (Valid: 20, Not-valid: 102) - [multilabel_classify.py:3390:sample_df_with_full_label_coverage] | |
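The ten attempts above suggest a retry loop that re-samples a fixed number of rows looking for good label coverage; with only 122 rows against 7942 unique codes full coverage is unreachable, so the coverage fix is skipped. A minimal sketch of one plausible reading (keep the best-covering draw), assuming a pandas DataFrame with a labels column of ICD-10 code lists; the actual sample_df_with_full_label_coverage implementation is not shown in this log:

import pandas as pd

def sample_with_best_label_coverage(df: pd.DataFrame, n_rows: int = 122,
                                    attempts: int = 10, seed: int = 0) -> pd.DataFrame:
    # Repeatedly sample n_rows rows and keep the draw that covers the most labels.
    all_labels = {code for codes in df["labels"] for code in codes}
    best_sample, best_covered = None, -1
    for attempt in range(1, attempts + 1):
        sample = df.sample(n=n_rows, random_state=seed + attempt)
        covered = {code for codes in sample["labels"] for code in codes}
        print(f"Attempt {attempt}: Sampled {len(sample)} rows covering {len(covered)} labels.")
        if len(covered) > best_covered:
            best_sample, best_covered = sample, len(covered)
    missing = len(all_labels) - best_covered
    if missing:
        print(f"Skipping label coverage fix. {missing} labels are missing.")
    return best_sample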
2025-06-05 15:02:33,983 - INFO - ******************************************************************************** - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:33,983 - INFO - * 🌟 STARTING MULTI_LABEL CLASSIFICATION MODEL TRAINING * - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:33,983 - INFO - ******************************************************************************** - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:33,983 - INFO - 🔐 Loaded authentication token from environment - [multilabel_classify.py:3599:main] | |
2025-06-05 15:02:33,983 - INFO - 🏷️ Hub Model ID for this Classification task: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify - [multilabel_classify.py:3603:main] | |
2025-06-05 15:02:33,984 - INFO - -------------------------------------------------------------------------------- - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:33,984 - INFO - - 📋 MODEL EXISTENCE CHECK - - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:33,984 - INFO - -------------------------------------------------------------------------------- - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:33,984 - INFO - 🔍 Checking model existence locally and on Hugging Face Hub... - [multilabel_classify.py:3468:check_model_existence] | |
2025-06-05 15:02:33,984 - INFO - ✅ Model exists locally at: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3473:check_model_existence] | |
2025-06-05 15:02:34,066 - INFO - ✅ Model exists on Hugging Face Hub with ID: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify - [multilabel_classify.py:3487:check_model_existence] | |
2025-06-05 15:02:34,066 - INFO - 📁 Model exists either locally or on Hub - [multilabel_classify.py:3513:check_model_existence] | |
2025-06-05 15:02:34,067 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:34,067 - INFO - + ✨ STARTING FRESH TRAINING + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:34,067 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:34,067 - INFO - 🔄 Starting fresh training (either forced or model not found)... - [multilabel_classify.py:3616:main] | |
2025-06-05 15:02:34,081 - WARNING - Note: Environment variable `HF_TOKEN` is set and is the current active token independently from the token you've just configured. - [_login.py:415:_login] | |
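For reference, the authentication flow implied by the token message above and this warning boils down to something like the following sketch (the script's actual call site is not shown here):

import os
from huggingface_hub import login

# Register the token read from the environment with huggingface_hub; the warning above
# is emitted because HF_TOKEN stays the active token regardless of the configured one.
login(token=os.environ["HF_TOKEN"])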
2025-06-05 15:02:34,081 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:34,081 - INFO - + ✨ LOADING BASE MODEL + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:34,081 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:34,081 - INFO - 📥 Loading pretrained model and tokenizer... - [multilabel_classify.py:3646:main] | |
2025-06-05 15:02:34,081 - INFO - 🚀 Starting model and tokenizer loading process... - [multilabel_classify.py:1243:load_base_model_and_tokenizer] | |
2025-06-05 15:02:34,082 - INFO - 📊 Quantization config: BitsAndBytesConfig { | |
"_load_in_4bit": true, | |
"_load_in_8bit": false, | |
"bnb_4bit_compute_dtype": "bfloat16", | |
"bnb_4bit_quant_storage": "uint8", | |
"bnb_4bit_quant_type": "nf4", | |
"bnb_4bit_use_double_quant": true, | |
"llm_int8_enable_fp32_cpu_offload": false, | |
"llm_int8_has_fp16_weight": false, | |
"llm_int8_skip_modules": null, | |
"llm_int8_threshold": 6.0, | |
"load_in_4bit": true, | |
"load_in_8bit": false, | |
"quant_method": "bitsandbytes" | |
} | |
- [multilabel_classify.py:1252:load_base_model_and_tokenizer] | |
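The quantization config printed above describes 4-bit NF4 weights with double quantization and bfloat16 compute. A short sketch of how such a config is typically built (the script's exact construction is not shown in this log):

import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization and bfloat16 compute,
# matching the fields logged above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)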
2025-06-05 15:02:34,082 - INFO - 🔤 Loading tokenizer for model: deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1256:load_base_model_and_tokenizer] | |
2025-06-05 15:02:34,439 - INFO - 🔍 Checking if deb101/mistral-7b-instruct-v0.3-mimic4-adapt is a PEFT model... - [multilabel_classify.py:1268:load_base_model_and_tokenizer] | |
2025-06-05 15:02:34,464 - INFO - ✅ Detected PEFT model. Base model: mistralai/Mistral-7B-Instruct-v0.3 - [multilabel_classify.py:1272:load_base_model_and_tokenizer] | |
2025-06-05 15:02:34,464 - INFO - 🔍 Loading model configuration for mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1282:load_base_model_and_tokenizer] | |
2025-06-05 15:02:34,496 - INFO - Model type: mistral, Architectures: ['MistralForCausalLM'] - [multilabel_classify.py:1288:load_base_model_and_tokenizer] | |
2025-06-05 15:02:34,496 - INFO - 🧠 Loading base model: mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1351:load_base_model_and_tokenizer] | |
2025-06-05 15:02:35,026 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk). - [modeling.py:991:get_balanced_memory] | |
2025-06-05 15:02:40,290 - INFO - 🧩 Loading PEFT adapters for deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1366:load_base_model_and_tokenizer] | |
2025-06-05 15:02:40,733 - INFO - 🔧 Before enabling PEFT adapters for training - [multilabel_classify.py:1368:load_base_model_and_tokenizer] | |
2025-06-05 15:02:40,735 - INFO - 📊 trainable params: 0 || all params: 7,254,839,296 || trainable%: 0.0000 - [multilabel_classify.py:159:log_print_output] | |
2025-06-05 15:02:40,738 - INFO - 🔧 After Enabling PEFT adapters for training - [multilabel_classify.py:1375:load_base_model_and_tokenizer] | |
2025-06-05 15:02:40,740 - INFO - 📊 trainable params: 6,815,744 || all params: 7,254,839,296 || trainable%: 0.0939 - [multilabel_classify.py:159:log_print_output] | |
2025-06-05 15:02:40,741 - INFO - ✅ Model and tokenizer successfully loaded! - [multilabel_classify.py:1416:load_base_model_and_tokenizer] | |
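The sequence above detects that the hub id is a PEFT adapter, loads its base model in 4-bit, and re-enables the adapters for training, which is why the trainable parameter count jumps from 0 to 6,815,744. A rough sketch of that flow, reusing the bnb_config from the earlier sketch; the exact kwargs are assumptions rather than the script's code:

from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt"
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Raises if adapter_id is not a PEFT repo; otherwise exposes the base model id.
peft_config = PeftConfig.from_pretrained(adapter_id)
base_id = peft_config.base_model_name_or_path  # mistralai/Mistral-7B-Instruct-v0.3

base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
# is_trainable=True re-enables the LoRA adapters, giving the non-zero trainable count above.
model = PeftModel.from_pretrained(base_model, adapter_id, is_trainable=True)
model.print_trainable_parameters()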
2025-06-05 15:02:40,741 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:40,741 - INFO - + ✨ DATA PREPROCESSING + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:40,741 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:40,742 - INFO - 🔄 Loading and preprocessing training data... - [multilabel_classify.py:3654:main] | |
2025-06-05 15:02:40,746 - INFO - Total number of labels: 833 - [multilabel_classify.py:1044:preprocess_data] | |
2025-06-05 15:02:40,746 - INFO - Rare labels (freq < 50): 832 - [multilabel_classify.py:1045:preprocess_data] | |
2025-06-05 15:02:40,746 - INFO - Not rare labels (freq >= 50): 1 - [multilabel_classify.py:1046:preprocess_data] | |
2025-06-05 15:02:40,746 - INFO - Label partitions and classes saved to ../tmp/MIMIC4_DEMO/labels_partition.json - [multilabel_classify.py:1047:preprocess_data] | |
2025-06-05 15:02:42,069 - INFO - The size of training set: 567 - [multilabel_classify.py:1143:preprocess_data] | |
2025-06-05 15:02:42,069 - INFO - The size of Evaluation set: 136 - [multilabel_classify.py:1144:preprocess_data] | |
2025-06-05 15:02:42,074 - INFO - Number of unique ICD-10 codes: 833 - [multilabel_classify.py:3660:main] | |
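The preprocessing step splits the 833 codes by training-set frequency: codes seen fewer than 50 times are treated as rare (832 of them), the rest as not-rare (1), and the partition is written to labels_partition.json. A minimal sketch of that logic; the input format is an assumption:

import json
from collections import Counter

def partition_labels(label_lists, threshold=50,
                     out_path="../tmp/MIMIC4_DEMO/labels_partition.json"):
    # Count how often each ICD-10 code appears across the training documents,
    # then bucket codes into rare (< threshold) and not-rare (>= threshold).
    freq = Counter(code for codes in label_lists for code in codes)
    rare = sorted(c for c, n in freq.items() if n < threshold)
    not_rare = sorted(c for c, n in freq.items() if n >= threshold)
    with open(out_path, "w") as f:
        json.dump({"rare": rare, "not_rare": not_rare, "classes": sorted(freq)}, f, indent=2)
    return rare, not_rare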
2025-06-05 15:02:42,074 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:42,074 - INFO - + ✨ MODEL INITIALIZATION + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:42,074 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:42,075 - INFO - 🧠 Initializing custom L2R model to output per-token relevance scores for ICD-10 codes. - [multilabel_classify.py:3663:main] | |
2025-06-05 15:02:42,075 - INFO - Will now start to create Multilabel-Classification Model from the base model - [multilabel_classify.py:560:__init__] | |
2025-06-05 15:02:42,078 - INFO - 📊 trainable params: 6,815,744 || all params: 3,765,178,368 || trainable%: 0.1810 - [multilabel_classify.py:614:compute_trainable_params] | |
2025-06-05 15:02:42,416 - INFO - Creating the Multi-Label Classification Model from base model mistralai/Mistral-7B-Instruct-v0.3 completed!!! - [multilabel_classify.py:602:__init__] | |
2025-06-05 15:02:42,419 - INFO - 📊 trainable params: 84,177,025 || all params: 3,842,539,649 || trainable%: 2.1907 - [multilabel_classify.py:614:compute_trainable_params] | |
2025-06-05 15:02:42,419 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:42,419 - INFO - + ✨ TRAINING PREPARATION + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:42,419 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:42,419 - INFO - ⚙️ Preparing training components and optimizers... - [multilabel_classify.py:3670:main] | |
2025-06-05 15:02:42,476 - INFO - 🖥️ Device: NVIDIA GH200 480GB - [multilabel_classify.py:891:log_training_configuration] | |
2025-06-05 15:02:42,476 - INFO - 🔋 CUDA Available: True - [multilabel_classify.py:894:log_training_configuration] | |
2025-06-05 15:02:42,476 - INFO - 💾 CUDA Device Count: 1 - [multilabel_classify.py:895:log_training_configuration] | |
2025-06-05 15:02:42,478 - INFO - | |
📋 Training Configuration 📋 | |
+----------+-----------------------------+------------------------------------------------------------------+ | |
| 🌟 Emoji | 🏷️ Parameter | 📊 Value | | |
+----------+-----------------------------+------------------------------------------------------------------+ | |
| 📁 | Output Directory | ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b | | |
| 🔁 | Training Epochs | 4 | | |
| 🏋️ | Train Batch Size | 8 | | |
| 🔍 | Eval Batch Size | 8 | | |
| 📊 | Gradient Accumulation Steps | 4 | | |
| 🚀 | Learning Rate | 0.0001 | | |
| 🌅 | Warmup Steps | 500 | | |
| 💾 | Save Strategy | epoch | | |
| 💾 | Save Total Limit | 10 | | |
| 📊 | Evaluation Strategy | epoch | | |
| 🎯 | Best Model Metric | precision_at_15 | | |
| 📝 | Logging Strategy | steps (every 10 steps) | | |
| 🌐 | Push to Hub | True | | |
| 🌐 | Hub Model ID | deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify | | |
| 🔢 | Steps per Epoch | 17 | | |
| 🔢 | Total Training Steps | 68 | | |
| 🔢 | Evaluation Steps | 17 | | |
| 📊 | Training Dataset Size | 567 samples 🏋️ | | |
| 📊 | Evaluation Dataset Size | 136 samples 🔍 | | |
+----------+-----------------------------+------------------------------------------------------------------+ - [multilabel_classify.py:883:log_training_args] | |
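The step counts in the table follow from the data and batch settings: 567 training samples at batch size 8 give 71 batches per epoch, and with 4 gradient-accumulation steps that is 17 optimizer steps per epoch, or 68 over 4 epochs. A sketch of TrainingArguments matching the table; argument names vary slightly across transformers releases (eval_strategy vs. evaluation_strategy), so treat this as an approximation rather than the script's exact code:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b",
    num_train_epochs=4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    warmup_steps=500,
    save_strategy="epoch",
    save_total_limit=10,
    eval_strategy="epoch",               # evaluation_strategy in older transformers versions
    logging_strategy="steps",
    logging_steps=10,
    load_best_model_at_end=True,         # implied by the "Loading best model" step later in this log
    metric_for_best_model="precision_at_15",
    greater_is_better=True,
    push_to_hub=True,
    hub_model_id="deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify",
)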
2025-06-05 15:02:42,478 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:02:42,478 - INFO - + ✨ MODEL TRAINING + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:02:42,478 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:02:42,479 - INFO - 🏋️ Starting model training process... - [multilabel_classify.py:3692:main] | |
2025-06-05 15:02:42,520 - INFO - We are registering the tokenizer deb101/mistral-7b-instruct-v0.3-mimic4-adapt in Custom Trainer - [multilabel_classify.py:1988:__init__] | |
2025-06-05 15:02:42,770 - INFO - 🚀 Starting Training... - [multilabel_classify.py:1642:on_train_begin] | |
2025-06-05 15:03:03,122 - INFO - | |
🚂 Training Metrics (Step 10) 🚂
+---------------+---------+ | |
| Metric | Value | | |
+===============+=========+ | |
| loss | 0.6384 | | |
+---------------+---------+ | |
| grad_norm | 6.84453 | | |
+---------------+---------+ | |
| learning_rate | 2e-06 | | |
+---------------+---------+ | |
| epoch | 0.56338 | | |
+---------------+---------+ - [multilabel_classify.py:1836:on_log] | |
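The metric tables in this log are emitted from an on_log callback. A minimal sketch of how such a callback can render trainer logs as a grid table (the real callback in multilabel_classify.py is not shown here):

from tabulate import tabulate
from transformers import TrainerCallback

class TableLoggingCallback(TrainerCallback):
    # Render each logging event as a grid table keyed by metric name.
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs:
            print(f"🚂 Training Metrics (Step {state.global_step}) 🚂")
            print(tabulate(list(logs.items()), headers=["Metric", "Value"], tablefmt="grid"))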
2025-06-05 15:03:17,100 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:2000:evaluate] | |
2025-06-05 15:03:52,220 - INFO - | |
🔍 Evaluation Metrics 🔍
+-------------------------------+----------+ | |
| Metric | Value | | |
+===============================+==========+ | |
| eval_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_precision_at_5 | 0.036765 | | |
+-------------------------------+----------+ | |
| eval_recall_at_5 | 0.008744 | | |
+-------------------------------+----------+ | |
| eval_precision_at_8 | 0.037684 | | |
+-------------------------------+----------+ | |
| eval_recall_at_8 | 0.017401 | | |
+-------------------------------+----------+ | |
| eval_precision_at_15 | 0.037745 | | |
+-------------------------------+----------+ | |
| eval_recall_at_15 | 0.032384 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_recall | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_5 | 0.029412 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_5 | 0.00805 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_8 | 0.026654 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_8 | 0.010721 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_15 | 0.027451 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_15 | 0.021147 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_micro | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_macro | 0.373272 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_5 | 0.080882 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_5 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_8 | 0.050551 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_8 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_15 | 0.026961 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_15 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_loss | 0.229106 | | |
+-------------------------------+----------+ - [multilabel_classify.py:1855:on_evaluate] | |
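The precision_at_k and recall_at_k rows above are the usual top-k metrics for multilabel ICD coding: for each note, the k highest-scoring codes are compared against the gold set. A minimal sketch of that computation (not the script's actual implementation):

import numpy as np

def precision_recall_at_k(scores: np.ndarray, labels: np.ndarray, k: int):
    # scores: (n_examples, n_labels) model scores; labels: same shape, 0/1 gold matrix.
    topk = np.argsort(-scores, axis=1)[:, :k]                    # indices of the k highest scores per row
    hits = np.take_along_axis(labels, topk, axis=1).sum(axis=1)  # gold codes found in the top k
    precision = (hits / k).mean()
    recall = (hits / np.maximum(labels.sum(axis=1), 1)).mean()   # guard against rows without gold codes
    return precision, recall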
2025-06-05 15:03:55,736 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18 - [multilabel_classify.py:2093:_save] | |
2025-06-05 15:03:55,737 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18 - [multilabel_classify.py:2098:_save] | |
2025-06-05 15:03:55,738 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18: | |
+---------+--------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+====================+============+ | |
| 1 | training_args.bin | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 2 | optimizer.pt | 642.30 MB | | |
+---------+--------------------+------------+ | |
| 3 | model.safetensors | 4267.74 MB | | |
+---------+--------------------+------------+ | |
| 4 | scaler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 5 | config.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 6 | scheduler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 7 | trainer_state.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 8 | rng_state.pth | 0.01 MB | | |
+---------+--------------------+------------+ - [multilabel_classify.py:2115:_save] | |
2025-06-05 15:04:01,845 - INFO - | |
🚂 Training Metrics (Step 20) 🚂
+---------------+---------+ | |
| Metric | Value | | |
+===============+=========+ | |
| loss | 0.345 | | |
+---------------+---------+ | |
| grad_norm | 2.53065 | | |
+---------------+---------+ | |
| learning_rate | 4e-06 | | |
+---------------+---------+ | |
| epoch | 1.11268 | | |
+---------------+---------+ - [multilabel_classify.py:1836:on_log] | |
2025-06-05 15:04:19,966 - INFO - | |
🚂 Training Metrics (Step 30) 🚂
+---------------+----------+ | |
| Metric | Value | | |
+===============+==========+ | |
| loss | 0.1265 | | |
+---------------+----------+ | |
| grad_norm | 0.237818 | | |
+---------------+----------+ | |
| learning_rate | 6e-06 | | |
+---------------+----------+ | |
| epoch | 1.67606 | | |
+---------------+----------+ - [multilabel_classify.py:1836:on_log] | |
2025-06-05 15:04:30,351 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:2000:evaluate] | |
2025-06-05 15:05:05,550 - INFO - | |
🔍 Evaluation Metrics 🔍
+-------------------------------+----------+ | |
| Metric | Value | | |
+===============================+==========+ | |
| eval_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_precision_at_5 | 0.041176 | | |
+-------------------------------+----------+ | |
| eval_recall_at_5 | 0.009645 | | |
+-------------------------------+----------+ | |
| eval_precision_at_8 | 0.039522 | | |
+-------------------------------+----------+ | |
| eval_recall_at_8 | 0.016571 | | |
+-------------------------------+----------+ | |
| eval_precision_at_15 | 0.037255 | | |
+-------------------------------+----------+ | |
| eval_recall_at_15 | 0.030333 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_recall | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_5 | 0.025 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_5 | 0.004753 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_8 | 0.027574 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_8 | 0.010902 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_15 | 0.028431 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_15 | 0.02441 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_micro | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_macro | 0.373272 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_5 | 0.080882 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_5 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_8 | 0.050551 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_8 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_15 | 0.026961 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_15 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_loss | 0.12156 | | |
+-------------------------------+----------+ - [multilabel_classify.py:1855:on_evaluate] | |
2025-06-05 15:05:09,029 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36 - [multilabel_classify.py:2093:_save] | |
2025-06-05 15:05:09,030 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36 - [multilabel_classify.py:2098:_save] | |
2025-06-05 15:05:09,031 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36: | |
+---------+--------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+====================+============+ | |
| 1 | training_args.bin | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 2 | optimizer.pt | 642.30 MB | | |
+---------+--------------------+------------+ | |
| 3 | model.safetensors | 4267.74 MB | | |
+---------+--------------------+------------+ | |
| 4 | scaler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 5 | config.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 6 | scheduler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 7 | trainer_state.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 8 | rng_state.pth | 0.01 MB | | |
+---------+--------------------+------------+ - [multilabel_classify.py:2115:_save] | |
2025-06-05 15:05:19,082 - INFO - | |
🚂 Training Metrics (Step 40) 🚂
+---------------+----------+ | |
| Metric | Value | | |
+===============+==========+ | |
| loss | 0.1096 | | |
+---------------+----------+ | |
| grad_norm | 0.465592 | | |
+---------------+----------+ | |
| learning_rate | 8e-06 | | |
+---------------+----------+ | |
| epoch | 2.22535 | | |
+---------------+----------+ - [multilabel_classify.py:1836:on_log] | |
2025-06-05 15:05:37,209 - INFO - | |
🚂 Training Metrics (Step 50) 🚂
+---------------+----------+ | |
| Metric | Value | | |
+===============+==========+ | |
| loss | 0.1092 | | |
+---------------+----------+ | |
| grad_norm | 0.099644 | | |
+---------------+----------+ | |
| learning_rate | 1e-05 | | |
+---------------+----------+ | |
| epoch | 2.78873 | | |
+---------------+----------+ - [multilabel_classify.py:1836:on_log] | |
2025-06-05 15:05:43,993 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:2000:evaluate] | |
2025-06-05 15:06:19,081 - INFO - | |
🔍 Evaluation Metrics 🔍
+-------------------------------+----------+ | |
| Metric | Value | | |
+===============================+==========+ | |
| eval_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_precision_at_5 | 0.116176 | | |
+-------------------------------+----------+ | |
| eval_recall_at_5 | 0.039057 | | |
+-------------------------------+----------+ | |
| eval_precision_at_8 | 0.100184 | | |
+-------------------------------+----------+ | |
| eval_recall_at_8 | 0.059546 | | |
+-------------------------------+----------+ | |
| eval_precision_at_15 | 0.081373 | | |
+-------------------------------+----------+ | |
| eval_recall_at_15 | 0.089004 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_recall | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_5 | 0.060294 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_5 | 0.020816 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_8 | 0.057904 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_8 | 0.033152 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_15 | 0.056373 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_15 | 0.059526 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_micro | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_macro | 0.373272 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_5 | 0.080882 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_5 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_8 | 0.050551 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_8 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_15 | 0.026961 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_15 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_loss | 0.106887 | | |
+-------------------------------+----------+ - [multilabel_classify.py:1855:on_evaluate] | |
2025-06-05 15:06:22,738 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54 - [multilabel_classify.py:2093:_save] | |
2025-06-05 15:06:22,739 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54 - [multilabel_classify.py:2098:_save] | |
2025-06-05 15:06:22,740 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54: | |
+---------+--------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+====================+============+ | |
| 1 | training_args.bin | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 2 | optimizer.pt | 642.30 MB | | |
+---------+--------------------+------------+ | |
| 3 | model.safetensors | 4267.74 MB | | |
+---------+--------------------+------------+ | |
| 4 | scaler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 5 | config.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 6 | scheduler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 7 | trainer_state.json | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 8 | rng_state.pth | 0.01 MB | | |
+---------+--------------------+------------+ - [multilabel_classify.py:2115:_save] | |
2025-06-05 15:06:36,356 - INFO - | |
🚂 Training Metrics (Step 60) 🚂
+---------------+----------+ | |
| Metric | Value | | |
+===============+==========+ | |
| loss | 0.1033 | | |
+---------------+----------+ | |
| grad_norm | 0.216208 | | |
+---------------+----------+ | |
| learning_rate | 1.2e-05 | | |
+---------------+----------+ | |
| epoch | 3.33803 | | |
+---------------+----------+ - [multilabel_classify.py:1836:on_log] | |
2025-06-05 15:06:54,505 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2093:_save] | |
2025-06-05 15:06:54,507 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2098:_save] | |
2025-06-05 15:06:54,508 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68: | |
+---------+--------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+====================+============+ | |
| 1 | training_args.bin | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 2 | optimizer.pt | 642.30 MB | | |
+---------+--------------------+------------+ | |
| 3 | model.safetensors | 4267.74 MB | | |
+---------+--------------------+------------+ | |
| 4 | scaler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 5 | config.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 6 | scheduler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 7 | trainer_state.json | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 8 | rng_state.pth | 0.01 MB | | |
+---------+--------------------+------------+ - [multilabel_classify.py:2115:_save] | |
2025-06-05 15:06:54,897 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:2000:evaluate] | |
2025-06-05 15:07:30,167 - INFO - | |
🔍 Evaluation Metrics 🔍
+-------------------------------+----------+ | |
| Metric | Value | | |
+===============================+==========+ | |
| eval_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_precision_at_5 | 0.227941 | | |
+-------------------------------+----------+ | |
| eval_recall_at_5 | 0.094901 | | |
+-------------------------------+----------+ | |
| eval_precision_at_8 | 0.16636 | | |
+-------------------------------+----------+ | |
| eval_recall_at_8 | 0.103783 | | |
+-------------------------------+----------+ | |
| eval_precision_at_15 | 0.113725 | | |
+-------------------------------+----------+ | |
| eval_recall_at_15 | 0.128501 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_recall | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_5 | 0.15 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_5 | 0.06448 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_8 | 0.120404 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_8 | 0.078831 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_15 | 0.087255 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_15 | 0.099736 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_micro | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_macro | 0.373272 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall | 0.595588 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_5 | 0.080882 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_5 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_8 | 0.050551 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_8 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_15 | 0.026961 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_15 | 0.404412 | | |
+-------------------------------+----------+ | |
| eval_loss | 0.104832 | | |
+-------------------------------+----------+ - [multilabel_classify.py:1855:on_evaluate] | |
2025-06-05 15:07:33,859 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2093:_save] | |
2025-06-05 15:07:33,860 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2098:_save] | |
2025-06-05 15:07:33,862 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68: | |
+---------+--------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+====================+============+ | |
| 1 | training_args.bin | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 2 | optimizer.pt | 642.30 MB | | |
+---------+--------------------+------------+ | |
| 3 | model.safetensors | 4267.74 MB | | |
+---------+--------------------+------------+ | |
| 4 | scaler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 5 | config.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 6 | scheduler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 7 | trainer_state.json | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 8 | rng_state.pth | 0.01 MB | | |
+---------+--------------------+------------+ - [multilabel_classify.py:2115:_save] | |
2025-06-05 15:07:34,327 - INFO - 📂 Loading best model from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2167:_load_best_model] | |
2025-06-05 15:07:34,327 - INFO - 🖥️ Model is on device: cuda:0 - [multilabel_classify.py:2177:_load_best_model] | |
2025-06-05 15:07:34,399 - INFO - 🔑 Key order comparison: | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| Index | Saved state_dict Keys | Model state_dict Keys | | |
+=========+============================================+======================================================================================+ | |
| 1 | attention.in_proj_bias | boost_mul | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 2 | attention.in_proj_weight | boost_add | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 3 | attention.out_proj.bias | base_model.base_model.model.model.embed_tokens.weight | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 4 | attention.out_proj.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 5 | base_model.base_model.model.lm_head.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.absmax | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ - [multilabel_classify.py:2201:_load_best_model] | |
2025-06-05 15:07:35,427 - INFO - ✅ Loaded best model weights from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68/model.safetensors - [multilabel_classify.py:2218:_load_best_model] | |
2025-06-05 15:07:35,452 - INFO - ✔️ Weight for boost_mul matches between saved and loaded state_dict - [multilabel_classify.py:2230:_load_best_model] | |
2025-06-05 15:07:35,475 - INFO - ✔️ Weight for boost_add matches between saved and loaded state_dict - [multilabel_classify.py:2230:_load_best_model] | |
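The _load_best_model messages above reload the checkpoint-68 weights from safetensors into the in-memory model and spot-check a few named tensors. A rough sketch of that step; strict=False and the checked names are assumptions based on the lines above:

import torch
from safetensors.torch import load_file

def reload_best_checkpoint(model, ckpt_file, device="cuda:0"):
    # Load the saved state_dict, copy matching keys into the model, then verify
    # a couple of named weights survived the round trip.
    saved_state = load_file(ckpt_file, device=device)
    model.load_state_dict(saved_state, strict=False)
    current = model.state_dict()
    for name in ("boost_mul", "boost_add"):
        if name in saved_state and torch.equal(saved_state[name], current[name].to(device)):
            print(f"Weight for {name} matches between saved and loaded state_dict")
    return model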
2025-06-05 15:07:35,490 - INFO - | |
🚂 Training Metrics (Step 68) 🚂
+--------------------------+----------+ | |
| Metric | Value | | |
+==========================+==========+ | |
| train_runtime | 292.721 | | |
+--------------------------+----------+ | |
| train_samples_per_second | 7.748 | | |
+--------------------------+----------+ | |
| train_steps_per_second | 0.232 | | |
+--------------------------+----------+ | |
| total_flos | 0 | | |
+--------------------------+----------+ | |
| train_loss | 0.222837 | | |
+--------------------------+----------+ | |
| epoch | 3.78873 | | |
+--------------------------+----------+ - [multilabel_classify.py:1836:on_log] | |
2025-06-05 15:07:35,490 - INFO - ✨ Training Completed! ✨ - [multilabel_classify.py:1709:on_train_end] | |
2025-06-05 15:07:35,559 - INFO - 📊 Training loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/train_loss_plot.png' - [multilabel_classify.py:1905:on_train_end] | |
2025-06-05 15:07:35,619 - INFO - 📊 Evaluation loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_loss_plot.png' - [multilabel_classify.py:1919:on_train_end] | |
2025-06-05 15:07:35,680 - INFO - 📊 Evaluation metric plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_precision_at_15_plot.png' - [multilabel_classify.py:1940:on_train_end] | |
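The three plots above are generated from the trainer's accumulated log history. A minimal sketch of how one such curve can be written out (the script's plotting code is not shown in this log):

import matplotlib.pyplot as plt

def plot_metric(log_history, key, out_path):
    # Pull (step, value) pairs for one metric out of trainer.state.log_history and save a line plot.
    points = [(entry["step"], entry[key]) for entry in log_history if key in entry]
    steps, values = zip(*points)
    plt.figure()
    plt.plot(steps, values, marker="o")
    plt.xlabel("step")
    plt.ylabel(key)
    plt.savefig(out_path)
    plt.close()

# e.g. plot_metric(trainer.state.log_history, "eval_precision_at_15",
#                  "../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_precision_at_15_plot.png")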
2025-06-05 15:07:35,680 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section] | |
2025-06-05 15:07:35,680 - INFO - + ✨ MODEL SAVING + - [multilabel_classify.py:101:log_section] | |
2025-06-05 15:07:35,680 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section] | |
2025-06-05 15:07:35,680 - INFO - 💾 Saving trained model and pushing to Hugging Face Hub... - [multilabel_classify.py:3706:main] | |
2025-06-05 15:07:35,680 - INFO - 📁 Creating/using output directory: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2693:save_and_push] | |
2025-06-05 15:07:39,183 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2093:_save] | |
2025-06-05 15:07:39,184 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2098:_save] | |
2025-06-05 15:07:39,186 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b: | |
+---------+--------------------------------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+============================================+============+ | |
| 1 | eval_loss_plot.png | 0.03 MB | | |
+---------+--------------------------------------------+------------+ | |
| 2 | training_args.bin | 0.01 MB | | |
+---------+--------------------------------------------+------------+ | |
| 3 | tokenizer.model | 0.56 MB | | |
+---------+--------------------------------------------+------------+ | |
| 4 | tokenizer.json | 3.50 MB | | |
+---------+--------------------------------------------+------------+ | |
| 5 | model.safetensors | 4267.74 MB | | |
+---------+--------------------------------------------+------------+ | |
| 6 | config.json | 0.00 MB | | |
+---------+--------------------------------------------+------------+ | |
| 7 | special_tokens_map.json | 0.00 MB | | |
+---------+--------------------------------------------+------------+ | |
| 8 | tokenizer_config.json | 0.13 MB | | |
+---------+--------------------------------------------+------------+ | |
| 9 | train_loss_plot.png | 0.02 MB | | |
+---------+--------------------------------------------+------------+ | |
| 10 | eval_precision_at_15_plot.png | 0.03 MB | | |
+---------+--------------------------------------------+------------+ | |
| 11 | README.md | 0.01 MB | | |
+---------+--------------------------------------------+------------+ | |
| 12 | classification_log_2025-06-05_14-33-04.log | 0.04 MB | | |
+---------+--------------------------------------------+------------+ - [multilabel_classify.py:2115:_save] | |
2025-06-05 15:07:42,412 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2093:_save] | |
2025-06-05 15:07:42,413 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2098:_save] | |
2025-06-05 15:07:42,415 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b: | |
+---------+--------------------------------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+============================================+============+ | |
| 1 | eval_loss_plot.png | 0.03 MB | | |
+---------+--------------------------------------------+------------+ | |
| 2 | training_args.bin | 0.01 MB | | |
+---------+--------------------------------------------+------------+ | |
| 3 | tokenizer.model | 0.56 MB | | |
+---------+--------------------------------------------+------------+ | |
| 4 | tokenizer.json | 3.50 MB | | |
+---------+--------------------------------------------+------------+ | |
| 5 | model.safetensors | 4267.74 MB | | |
+---------+--------------------------------------------+------------+ | |
| 6 | config.json | 0.00 MB | | |
+---------+--------------------------------------------+------------+ | |
| 7 | special_tokens_map.json | 0.00 MB | | |
+---------+--------------------------------------------+------------+ | |
| 8 | tokenizer_config.json | 0.13 MB | | |
+---------+--------------------------------------------+------------+ | |
| 9 | train_loss_plot.png | 0.02 MB | | |
+---------+--------------------------------------------+------------+ | |
| 10 | eval_precision_at_15_plot.png | 0.03 MB | | |
+---------+--------------------------------------------+------------+ | |
| 11 | README.md | 0.01 MB | | |
+---------+--------------------------------------------+------------+ | |
| 12 | classification_log_2025-06-05_14-33-04.log | 0.04 MB | | |
+---------+--------------------------------------------+------------+ - [multilabel_classify.py:2115:_save] | |
2025-06-05 15:09:03,581 - INFO - 💾 Model saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2697:save_and_push] | |
2025-06-05 15:09:03,611 - INFO - 🖌️ Tokenizer saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2701:save_and_push] | |
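The final save_and_push step writes the model and tokenizer into the output directory (the file tables above) before uploading to the Hub repo configured in the training arguments. A sketch of the typical call sequence; the exact implementation of save_and_push is an assumption:

from transformers import PreTrainedTokenizerBase, Trainer

def save_and_push(trainer: Trainer, tokenizer: PreTrainedTokenizerBase, out_dir: str) -> None:
    # Save model weights + config into out_dir, add the tokenizer files, then upload
    # everything to the repo set via TrainingArguments.hub_model_id.
    trainer.save_model(out_dir)
    tokenizer.save_pretrained(out_dir)
    trainer.push_to_hub(commit_message="End of training")

# e.g. save_and_push(trainer, tokenizer, "../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b")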