mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify / classification_log_2025-06-13_18-45-53.log
2025-06-13 18:45:53,711 - INFO - ================================================================================ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:45:53,711 - INFO - = 📌 INITIALIZING TRAINING ENVIRONMENT = - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:45:53,711 - INFO - ================================================================================ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:45:53,711 - INFO - 🚀 Setting up data paths and environment variables... - [multilabel_classify.py:3916:main] | |
2025-06-13 18:45:53,712 - INFO - 📂 Using output directory: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3922:main] | |
2025-06-13 18:45:53,712 - INFO - 🛠️ Command-line Arguments: - [multilabel_classify.py:369:print_args] | |
2025-06-13 18:45:53,712 - INFO - | |
🔹 output_dir: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b | |
🔹 source_url: XURLs.MIMIC4_DEMO | |
🔹 data: mimic4_icd10_full | |
🔹 logfile: classification_log | |
🔹 base_dir: ../tmp/MIMIC4_DEMO | |
🔹 hub_model_id: deb101/mistral-7b-instruct-v0.3-mimic4-adapt | |
🔹 model_name: mistralai/Mistral-7B-Instruct-v0.3 | |
🔹 max_length: 512 | |
🔹 do_fresh_training: True | |
🔹 load_from_checkpoint: False | |
🔹 task: multilabel-classify | |
🔹 num_train_epochs: 1 | |
🔹 per_device_train_batch_size: 8 | |
🔹 per_device_eval_batch_size: 8 | |
🔹 metric_for_best_model: precision_at_15 | |
🔹 learning_rate: 0.0001 | |
🔹 final_lr_scheduling: 1e-06 | |
🔹 warmup_steps: 500 | |
🔹 logfile_path: ../tmp/logs/classification_log_2025-06-13_18-45-53.log | |
🔹 source: /home/ubuntu/.xcube/data/mimic4_demo - [multilabel_classify.py:370:print_args] | |
2025-06-13 18:45:53,712 - INFO - ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖ - [multilabel_classify.py:371:print_args] | |
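For reference, the flags printed above map onto a plain argparse parser. A minimal sketch with defaults copied from the printed values; the script's actual parser surely declares more options (source_url, do_fresh_training, etc.) and may use different types:

```python
import argparse

parser = argparse.ArgumentParser(description="Multilabel ICD-10 classification (sketch of the logged flags)")
parser.add_argument("--output_dir", default="../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b")
parser.add_argument("--model_name", default="mistralai/Mistral-7B-Instruct-v0.3")
parser.add_argument("--hub_model_id", default="deb101/mistral-7b-instruct-v0.3-mimic4-adapt")
parser.add_argument("--task", default="multilabel-classify")
parser.add_argument("--max_length", type=int, default=512)
parser.add_argument("--num_train_epochs", type=int, default=1)
parser.add_argument("--per_device_train_batch_size", type=int, default=8)
parser.add_argument("--per_device_eval_batch_size", type=int, default=8)
parser.add_argument("--learning_rate", type=float, default=1e-4)
parser.add_argument("--warmup_steps", type=int, default=500)
parser.add_argument("--metric_for_best_model", default="precision_at_15")
args = parser.parse_args()
```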
2025-06-13 18:45:53,722 - INFO - | |
🚀 Quick Git Info: 📁 xcube | 🌿 plant | 🔍 0bd4309 | 👤 Debjyoti Saha Roy | ⚡ MIXED (1 staged, 2 unstaged) | 🔬 git show 0bd4309 - [multilabel_classify.py:3928:main] | |
2025-06-13 18:45:53,722 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:45:53,723 - INFO - + ✨ LOADING DATASETS + - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:45:53,723 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:45:53,723 - INFO - 📊 Loading main datasets.... - [multilabel_classify.py:3931:main] | |
2025-06-13 18:46:02,259 - INFO - 🔍 Total unique labels in dataset: 7942 - [multilabel_classify.py:3707:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,272 - INFO - 🧪 Attempt 1: Sampled 122 rows covering 863 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,282 - INFO - 🧪 Attempt 2: Sampled 122 rows covering 816 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,291 - INFO - 🧪 Attempt 3: Sampled 122 rows covering 885 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,300 - INFO - 🧪 Attempt 4: Sampled 122 rows covering 828 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,309 - INFO - 🧪 Attempt 5: Sampled 122 rows covering 879 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,317 - INFO - 🧪 Attempt 6: Sampled 122 rows covering 852 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,326 - INFO - 🧪 Attempt 7: Sampled 122 rows covering 838 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,335 - INFO - 🧪 Attempt 8: Sampled 122 rows covering 851 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,343 - INFO - 🧪 Attempt 9: Sampled 122 rows covering 825 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,351 - INFO - 🧪 Attempt 10: Sampled 122 rows covering 833 labels. - [multilabel_classify.py:3721:sample_df_with_full_label_coverage] | |
2025-06-13 18:46:02,356 - INFO - 🛠️ Fixing missing labels: 7109 remaining... - [multilabel_classify.py:3754:sample_df_with_full_label_coverage] | |
2025-06-13 18:49:30,886 - INFO - ✅ Added 1648 rows to achieve full label coverage. - [multilabel_classify.py:3786:sample_df_with_full_label_coverage] | |
2025-06-13 18:49:30,889 - INFO - 📊 Final total labels: 7942 - [multilabel_classify.py:3789:sample_df_with_full_label_coverage] | |
2025-06-13 18:49:30,889 - INFO - ✅ Final row count: 1770 (Valid: 420, Not-valid: 1350) - [multilabel_classify.py:3797:sample_df_with_full_label_coverage] | |
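From the attempts above, sample_df_with_full_label_coverage appears to draw a fixed-size random sample several times, keep the best-covering one, and then append extra rows until all 7942 labels are represented. A minimal sketch of that idea, assuming a pandas DataFrame with a `labels` column holding lists of ICD-10 codes (the column name and helper are assumptions, not the script's actual code):

```python
import pandas as pd

def sample_with_full_label_coverage(df: pd.DataFrame, n_rows: int = 122,
                                    attempts: int = 10, seed: int = 0) -> pd.DataFrame:
    """Draw a random sample, then top it up with rows until every label is covered."""
    all_labels = {code for codes in df["labels"] for code in codes}
    best_sample, best_covered = None, set()
    # Try several random samples and keep the one covering the most labels.
    for i in range(attempts):
        cand = df.sample(n=n_rows, random_state=seed + i)
        covered = {code for codes in cand["labels"] for code in codes}
        if len(covered) > len(best_covered):
            best_sample, best_covered = cand, covered
    missing = all_labels - best_covered
    # Greedily add rows that contribute at least one still-missing label.
    extra_idx = []
    for idx, codes in df["labels"].items():
        if missing & set(codes):
            extra_idx.append(idx)
            missing -= set(codes)
        if not missing:
            break
    out = pd.concat([best_sample, df.loc[extra_idx]])
    return out[~out.index.duplicated(keep="first")]
```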
2025-06-13 18:49:31,659 - INFO - ******************************************************************************** - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:49:31,659 - INFO - * 🌟 STARTING MULTI_LABEL CLASSIFICATION MODEL TRAINING * - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:49:31,659 - INFO - ******************************************************************************** - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:49:31,659 - INFO - 🔐 Loaded authentication token from environment - [multilabel_classify.py:3958:main] | |
2025-06-13 18:49:31,659 - INFO - 🏷️ Hub Model ID for this Classification task: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify - [multilabel_classify.py:3962:main] | |
2025-06-13 18:49:31,659 - INFO - -------------------------------------------------------------------------------- - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:49:31,659 - INFO - - 📋 MODEL EXISTENCE CHECK - - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:49:31,659 - INFO - -------------------------------------------------------------------------------- - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:49:31,659 - INFO - 🔍 Checking model existence locally and on Hugging Face Hub... - [multilabel_classify.py:3822:check_model_existence] | |
2025-06-13 18:49:31,659 - INFO - ❌ Model not found locally at: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3829:check_model_existence] | |
2025-06-13 18:49:31,726 - INFO - ✅ Model exists on Hugging Face Hub with ID: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify - [multilabel_classify.py:3841:check_model_existence] | |
2025-06-13 18:49:31,726 - INFO - 📁 Model exists either locally or on Hub - [multilabel_classify.py:3867:check_model_existence] | |
2025-06-13 18:49:31,726 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:49:31,726 - INFO - + ✨ STARTING FRESH TRAINING + - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:49:31,727 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:49:31,727 - INFO - 🔄 Starting fresh training (either forced or model not found)... - [multilabel_classify.py:3975:main] | |
2025-06-13 18:49:31,738 - WARNING - Note: Environment variable `HF_TOKEN` is set and is the current active token, independently from the token you've just configured. - [_login.py:415:_login] | |
2025-06-13 18:49:31,738 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:49:31,738 - INFO - + ✨ LOADING BASE MODEL + - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:49:31,738 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:49:31,738 - INFO - 📥 Loading pretrained model and tokenizer... - [multilabel_classify.py:4007:main] | |
2025-06-13 18:49:31,738 - INFO - 🚀 Starting model and tokenizer loading process... - [multilabel_classify.py:1579:load_base_model_and_tokenizer] | |
2025-06-13 18:49:31,739 - INFO - 📊 Quantization config: 4-bit, nf4, double_quant, bfloat16 - [multilabel_classify.py:1588:load_base_model_and_tokenizer] | |
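The logged quantization settings (4-bit, nf4, double quantization, bfloat16 compute) correspond to a bitsandbytes setup along these lines; a sketch, not necessarily the script's exact call:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 compute
)

model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
```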
2025-06-13 18:49:31,739 - INFO - 🔤 Loading tokenizer for model: deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1592:load_base_model_and_tokenizer] | |
2025-06-13 18:49:32,680 - INFO - 🔍 Checking if deb101/mistral-7b-instruct-v0.3-mimic4-adapt is a PEFT model... - [multilabel_classify.py:1603:load_base_model_and_tokenizer] | |
2025-06-13 18:49:32,735 - INFO - ✅ Detected PEFT model. Base model: mistralai/Mistral-7B-Instruct-v0.3 - [multilabel_classify.py:1607:load_base_model_and_tokenizer] | |
2025-06-13 18:49:32,735 - INFO - 🔍 Loading model configuration for mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1615:load_base_model_and_tokenizer] | |
2025-06-13 18:49:32,810 - INFO - Model type: mistral, Architectures: ['MistralForCausalLM'] - [multilabel_classify.py:1630:load_base_model_and_tokenizer] | |
2025-06-13 18:49:32,810 - INFO - 🧠 Loading base model: mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1698:load_base_model_and_tokenizer] | |
2025-06-13 18:49:33,322 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` to a higher value to use more memory (at your own risk). - [modeling.py:991:get_balanced_memory] | |
2025-06-13 18:49:38,581 - INFO - 🧩 Loading PEFT adapters for deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1718:load_base_model_and_tokenizer] | |
2025-06-13 18:49:39,365 - INFO - 🔧 Before enabling PEFT adapters - [multilabel_classify.py:1720:load_base_model_and_tokenizer] | |
2025-06-13 18:49:39,367 - INFO - 📊 trainable params: 0 || all params: 7,254,839,296 || trainable%: 0.0000 - [multilabel_classify.py:160:log_print_output] | |
2025-06-13 18:49:39,370 - INFO - Enabled gradients for parameters: ['base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.0.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.0.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.1.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.1.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.1.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.1.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.2.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.2.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.2.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.2.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.3.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.3.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.3.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.3.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.4.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.4.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.4.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.4.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.5.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.5.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.5.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.5.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.6.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.6.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.6.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.6.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.7.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.7.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.7.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.7.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.8.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.8.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.8.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.8.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.9.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.9.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.9.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.9.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.10.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.10.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.10.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.10.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.11.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.11.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.11.self_attn.v_proj.lora_A.default.weight', 
'base_model.model.model.layers.11.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.12.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.12.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.12.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.12.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.13.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.13.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.13.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.13.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.14.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.14.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.14.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.14.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.15.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.15.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.15.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.15.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.16.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.16.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.16.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.16.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.17.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.17.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.17.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.17.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.18.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.18.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.18.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.18.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.19.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.19.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.19.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.19.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.20.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.20.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.20.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.20.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.21.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.21.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.21.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.21.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.22.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.22.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.22.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.22.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.23.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.23.self_attn.q_proj.lora_B.default.weight', 
'base_model.model.model.layers.23.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.23.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.24.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.24.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.24.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.24.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.25.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.25.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.25.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.25.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.26.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.26.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.26.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.26.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.27.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.27.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.27.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.27.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.28.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.28.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.28.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.28.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.29.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.29.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.29.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.29.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.30.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.30.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.30.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.30.self_attn.v_proj.lora_B.default.weight', 'base_model.model.model.layers.31.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.layers.31.self_attn.q_proj.lora_B.default.weight', 'base_model.model.model.layers.31.self_attn.v_proj.lora_A.default.weight', 'base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight'] - [multilabel_classify.py:1730:load_base_model_and_tokenizer] | |
2025-06-13 18:49:39,370 - INFO - 🔧 After enabling PEFT adapters - [multilabel_classify.py:1731:load_base_model_and_tokenizer] | |
2025-06-13 18:49:39,372 - INFO - 📊 trainable params: 6,815,744 || all params: 7,254,839,296 || trainable%: 0.0939 - [multilabel_classify.py:160:log_print_output] | |
2025-06-13 18:49:39,374 - INFO - ✅ Model and tokenizer successfully loaded! - [multilabel_classify.py:1769:load_base_model_and_tokenizer] | |
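The PEFT handling above can be reproduced roughly as follows: read the adapter's PeftConfig to discover the base model, attach the LoRA adapters to the quantized base, then re-enable gradients on the LoRA weights, which come back frozen after loading (hence the 0 trainable parameters before the fix). A sketch reusing the `model` and `tokenizer` from the previous snippet:

```python
from peft import PeftConfig, PeftModel

adapter_id = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt"

# The adapter repo's config names the base model it was trained on.
peft_config = PeftConfig.from_pretrained(adapter_id)
assert peft_config.base_model_name_or_path == "mistralai/Mistral-7B-Instruct-v0.3"

# Attach the LoRA adapters to the (quantized) base model.
model = PeftModel.from_pretrained(model, adapter_id)

# Loaded adapters are frozen by default; turn the LoRA weights back on.
enabled = []
for name, param in model.named_parameters():
    if "lora_A" in name or "lora_B" in name:
        param.requires_grad = True
        enabled.append(name)

model.print_trainable_parameters()  # e.g. trainable params: 6,815,744 || ...
```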
2025-06-13 18:49:39,374 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:49:39,374 - INFO - + ✨ DATA PREPROCESSING + - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:49:39,374 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:49:39,374 - INFO - 🔄 Loading and preprocessing training data... - [multilabel_classify.py:4017:main] | |
2025-06-13 18:49:39,553 - INFO - Total number of labels: 7942 - [multilabel_classify.py:1172:preprocess_data] | |
2025-06-13 18:49:39,553 - INFO - Rare labels (freq < 50): 7817 - [multilabel_classify.py:1173:preprocess_data] | |
2025-06-13 18:49:39,553 - INFO - Not rare labels (freq >= 50): 125 - [multilabel_classify.py:1174:preprocess_data] | |
2025-06-13 18:49:39,553 - INFO - Label partitions and classes saved to ../tmp/MIMIC4_DEMO/labels_partition.json - [multilabel_classify.py:1175:preprocess_data] | |
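The rare/not-rare split simply partitions ICD-10 codes by training-set frequency at a threshold of 50 occurrences. A minimal sketch (the JSON layout is an assumption):

```python
import json
from collections import Counter

def partition_labels(label_lists, threshold=50, out_path="labels_partition.json"):
    """Split labels into rare (< threshold occurrences) and not-rare (>= threshold)."""
    freq = Counter(code for codes in label_lists for code in codes)
    rare = sorted(code for code, n in freq.items() if n < threshold)
    not_rare = sorted(code for code, n in freq.items() if n >= threshold)
    with open(out_path, "w") as f:
        json.dump({"rare": rare, "not_rare": not_rare}, f)
    return rare, not_rare
```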
2025-06-13 18:50:36,704 - INFO - Size of the training set: 8393 - [multilabel_classify.py:1271:preprocess_data] | |
2025-06-13 18:50:36,704 - INFO - Size of the evaluation set: 2528 - [multilabel_classify.py:1272:preprocess_data] | |
2025-06-13 18:50:37,110 - INFO - Number of unique ICD-10 codes: 7942 - [multilabel_classify.py:4023:main] | |
2025-06-13 18:50:37,112 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:50:37,112 - INFO - + ✨ MODEL INITIALIZATION + - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:50:37,112 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:50:37,112 - INFO - 🧠 Initializing custom L2R model for outputting per-token relevance scores per ICD-10 code. - [multilabel_classify.py:4026:main] | |
2025-06-13 18:50:37,113 - INFO - 🏥📊 Creating MultilabelICDClassifier - Standard multilabel medical classifier! 🔬💫 - [multilabel_classify.py:860:define_model] | |
2025-06-13 18:50:37,113 - INFO - Starting to create the multilabel classification model from the base model - [multilabel_classify.py:565:__init__] | |
2025-06-13 18:50:37,117 - INFO - 📊 trainable params: 6,815,744 || all params: 3,765,178,368 || trainable%: 0.1810 - [multilabel_classify.py:619:compute_trainable_params] | |
2025-06-13 18:50:38,856 - INFO - Finished creating the multi-label classification model from base model mistralai/Mistral-7B-Instruct-v0.3! - [multilabel_classify.py:607:__init__] | |
2025-06-13 18:50:38,860 - INFO - 📊 trainable params: 171,532,417 || all params: 3,929,895,041 || trainable%: 4.3648 - [multilabel_classify.py:619:compute_trainable_params] | |
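The jump in trainable parameters (6,815,744 LoRA weights to 171,532,417) comes from the new classification layers stacked on top of the adapted base model. A rough sketch of such a wrapper, pooling the final hidden states and scoring all 7942 labels with a BCE-with-logits objective; the pooling, layer sizes, and the boost_mul/boost_add parameters that show up later in the log are details of the script's MultilabelICDClassifier that this sketch does not reproduce:

```python
import torch
import torch.nn as nn

class MultilabelClassifier(nn.Module):
    def __init__(self, base_model, hidden_size: int, num_labels: int = 7942):
        super().__init__()
        self.base_model = base_model          # quantized Mistral + LoRA adapters
        self.classifier = nn.Linear(hidden_size, num_labels)
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, input_ids, attention_mask, labels=None):
        out = self.base_model(input_ids=input_ids, attention_mask=attention_mask,
                              output_hidden_states=True)
        hidden = out.hidden_states[-1]                      # (batch, seq, hidden)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden * mask).sum(1) / mask.sum(1)       # mean-pool over real tokens
        logits = self.classifier(pooled)                    # (batch, num_labels)
        loss = self.loss_fn(logits, labels.float()) if labels is not None else None
        return {"loss": loss, "logits": logits}
```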
2025-06-13 18:50:38,860 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:50:38,860 - INFO - + ✨ TRAINING PREPARATION + - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:50:38,860 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:50:38,861 - INFO - ⚙️ Preparing training components and optimizers... - [multilabel_classify.py:4033:main] | |
2025-06-13 18:50:38,945 - INFO - 🖥️ Device: NVIDIA GH200 480GB - [multilabel_classify.py:1019:log_training_configuration] | |
2025-06-13 18:50:38,945 - INFO - 🔋 CUDA Available: True - [multilabel_classify.py:1022:log_training_configuration] | |
2025-06-13 18:50:38,945 - INFO - 💾 CUDA Device Count: 1 - [multilabel_classify.py:1023:log_training_configuration] | |
2025-06-13 18:50:38,947 - INFO - | |
📋 Training Configuration 📋 | |
+----------+-----------------------------+------------------------------------------------------------------+ | |
| 🌟 Emoji | 🏷️ Parameter | 📊 Value | | |
+----------+-----------------------------+------------------------------------------------------------------+ | |
| 📁 | Output Directory | ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b | | |
| 🔁 | Training Epochs | 1 | | |
| 🏋️ | Train Batch Size | 8 | | |
| 🔍 | Eval Batch Size | 8 | | |
| 📊 | Gradient Accumulation Steps | 4 | | |
| 🚀 | Learning Rate | 0.0001 | | |
| 🌅 | Warmup Steps | 500 | | |
| 💾 | Save Strategy | epoch | | |
| 💾 | Save Total Limit | 10 | | |
| 📊 | Evaluation Strategy | epoch | | |
| 🎯 | Best Model Metric | precision_at_15 | | |
| 📝 | Logging Strategy | steps (every 10 steps) | | |
| 🌐 | Push to Hub | True | | |
| 🌐 | Hub Model ID | deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify | | |
| 🔢 | Steps per Epoch | 262 | | |
| 🔢 | Total Training Steps | 262 | | |
| 🔢 | Evaluation Steps | 316 | | |
| 📊 | Training Dataset Size | 8393 samples 🏋️ | | |
| 📊 | Evaluation Dataset Size | 2528 samples 🔍 | | |
+----------+-----------------------------+------------------------------------------------------------------+ - [multilabel_classify.py:1011:log_training_args] | |
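The configuration table above maps onto a standard transformers TrainingArguments object along these lines. A sketch: load_best_model_at_end and greater_is_better are inferred from the best-model reload later in the log rather than shown in the table, and the script may set further options:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    warmup_steps=500,
    save_strategy="epoch",
    save_total_limit=10,
    eval_strategy="epoch",
    logging_strategy="steps",
    logging_steps=10,
    metric_for_best_model="precision_at_15",
    greater_is_better=True,            # assumption: higher precision@15 is better
    load_best_model_at_end=True,       # assumption, consistent with the later reload step
    push_to_hub=True,
    hub_model_id="deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify",
)
```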
2025-06-13 18:50:38,947 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 18:50:38,948 - INFO - + ✨ MODEL TRAINING + - [multilabel_classify.py:102:log_section] | |
2025-06-13 18:50:38,948 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 18:50:38,948 - INFO - 🏋️ Starting model training process... - [multilabel_classify.py:4055:main] | |
2025-06-13 18:50:38,998 - INFO - Registering tokenizer deb101/mistral-7b-instruct-v0.3-mimic4-adapt with the custom Trainer - [multilabel_classify.py:2340:__init__] | |
2025-06-13 18:50:39,246 - INFO - 🚀 Starting Training... - [multilabel_classify.py:1994:on_train_begin] | |
2025-06-13 18:51:01,764 - INFO - 
🚂 Training Metrics (steps 10-260, logged every 10 steps) 🚂
+--------+----------+--------+-----------+---------------+----------+
| Step   | Time     | Loss   | Grad Norm | Learning Rate | Epoch    |
+========+==========+========+===========+===============+==========+
|     10 | 18:51:01 | 0.6752 | 7.27432   | 2e-06         | 0.038095 |
|     20 | 18:51:21 | 0.3475 | 2.94909   | 4e-06         | 0.07619  |
|     30 | 18:51:40 | 0.0737 | 0.276477  | 6e-06         | 0.114286 |
|     40 | 18:52:00 | 0.0233 | 0.104187  | 8e-06         | 0.152381 |
|     50 | 18:52:20 | 0.0282 | 0.154837  | 1e-05         | 0.190476 |
|     60 | 18:52:39 | 0.027  | 0.136466  | 1.2e-05       | 0.228571 |
|     70 | 18:52:59 | 0.0244 | 0.029749  | 1.4e-05       | 0.266667 |
|     80 | 18:53:19 | 0.0234 | 0.042736  | 1.6e-05       | 0.304762 |
|     90 | 18:53:38 | 0.0235 | 0.035706  | 1.8e-05       | 0.342857 |
|    100 | 18:53:58 | 0.0243 | 0.230328  | 2e-05         | 0.380952 |
|    110 | 18:54:18 | 0.023  | 0.011574  | 2.2e-05       | 0.419048 |
|    120 | 18:54:38 | 0.0223 | 0.01187   | 2.4e-05       | 0.457143 |
|    130 | 18:54:57 | 0.0234 | 0.008039  | 2.6e-05       | 0.495238 |
|    140 | 18:55:17 | 0.0233 | 0.007428  | 2.8e-05       | 0.533333 |
|    150 | 18:55:37 | 0.0227 | 0.025854  | 3e-05         | 0.571429 |
|    160 | 18:55:56 | 0.0218 | 0.015084  | 3.2e-05       | 0.609524 |
|    170 | 18:56:16 | 0.0216 | 0.011318  | 3.4e-05       | 0.647619 |
|    180 | 18:56:36 | 0.0231 | 0.021338  | 3.6e-05       | 0.685714 |
|    190 | 18:56:56 | 0.0236 | 0.004745  | 3.8e-05       | 0.72381  |
|    200 | 18:57:15 | 0.0231 | 0.025924  | 4e-05         | 0.761905 |
|    210 | 18:57:35 | 0.0222 | 0.007095  | 4.2e-05       | 0.8      |
|    220 | 18:57:55 | 0.0225 | 0.012384  | 4.4e-05       | 0.838095 |
|    230 | 18:58:15 | 0.0238 | 0.00828   | 4.6e-05       | 0.87619  |
|    240 | 18:58:34 | 0.0222 | 0.006233  | 4.8e-05       | 0.914286 |
|    250 | 18:58:54 | 0.0232 | 0.011117  | 5e-05         | 0.952381 |
|    260 | 18:59:14 | 0.023  | 0.008961  | 5.2e-05       | 0.990476 |
+--------+----------+--------+-----------+---------------+----------+ - [multilabel_classify.py:2188:on_log]
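The logged learning rate rises by 2e-06 every 10 steps, which is consistent with plain linear warmup toward the configured peak of 1e-04 over 500 warmup steps: lr(step) ≈ (step / 500) × 1e-04, e.g. (260 / 500) × 1e-04 = 5.2e-05 at step 260. Since the run stops after 262 optimizer steps, the whole epoch stays inside the warmup phase and the peak learning rate is never reached.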
2025-06-13 18:59:19,754 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262 - [multilabel_classify.py:2445:_save] | |
2025-06-13 18:59:19,756 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262 - [multilabel_classify.py:2450:_save] | |
2025-06-13 18:59:19,757 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262: | |
+---------+-------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+===================+============+ | |
| 1 | training_args.bin | 0.01 MB | | |
+---------+-------------------+------------+ | |
| 2 | model.safetensors | 4600.97 MB | | |
+---------+-------------------+------------+ | |
| 3 | config.json | 0.00 MB | | |
+---------+-------------------+------------+ - [multilabel_classify.py:2467:_save] | |
2025-06-13 18:59:20,381 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:2352:evaluate] | |
2025-06-13 19:12:32,601 - INFO - | |
🔍 Evaluation Metrics 🔍 | |
+-------------------------------+----------+ | |
| Metric | Value | | |
+===============================+==========+ | |
| eval_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_precision_at_5 | 0.274921 | | |
+-------------------------------+----------+ | |
| eval_recall_at_5 | 0.063731 | | |
+-------------------------------+----------+ | |
| eval_precision_at_8 | 0.253956 | | |
+-------------------------------+----------+ | |
| eval_recall_at_8 | 0.090858 | | |
+-------------------------------+----------+ | |
| eval_precision_at_15 | 0.190533 | | |
+-------------------------------+----------+ | |
| eval_recall_at_15 | 0.122413 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_recall | 0 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_5 | 0.003718 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_5 | 0.001292 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_8 | 0.004302 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_8 | 0.002289 | | |
+-------------------------------+----------+ | |
| eval_rare_precision_at_15 | 0.004905 | | |
+-------------------------------+----------+ | |
| eval_rare_recall_at_15 | 0.00478 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_micro | 0 | | |
+-------------------------------+----------+ | |
| eval_not_rare_f1_macro | 0 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision | 0 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall | 0 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_5 | 0.274209 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_5 | 0.168014 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_8 | 0.254005 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_8 | 0.239598 | | |
+-------------------------------+----------+ | |
| eval_not_rare_precision_at_15 | 0.190585 | | |
+-------------------------------+----------+ | |
| eval_not_rare_recall_at_15 | 0.324765 | | |
+-------------------------------+----------+ | |
| eval_loss | 0.020932 | | |
+-------------------------------+----------+ - [multilabel_classify.py:2207:on_evaluate] | |
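precision@k and recall@k above are the usual multilabel ranking metrics: rank all 7942 labels per document by predicted score, keep the top k, and compare against the gold code set. A minimal NumPy sketch; the script's implementation, including how the rare/not-rare variants restrict the label space, may differ:

```python
import numpy as np

def precision_recall_at_k(scores: np.ndarray, labels: np.ndarray, k: int):
    """scores, labels: (num_docs, num_labels); labels is a 0/1 matrix."""
    # Indices of the k highest-scoring labels for each document.
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = np.take_along_axis(labels, topk, axis=1).sum(axis=1)   # gold labels found in top k
    precision = (hits / k).mean()
    recall = (hits / np.maximum(labels.sum(axis=1), 1)).mean()    # guard against empty gold sets
    return precision, recall

# Example: p_at_15, r_at_15 = precision_recall_at_k(logits, y_true, k=15)
```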
2025-06-13 19:12:36,537 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262 - [multilabel_classify.py:2445:_save] | |
2025-06-13 19:12:36,538 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262 - [multilabel_classify.py:2450:_save] | |
2025-06-13 19:12:36,540 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262: | |
+---------+--------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+====================+============+ | |
| 1 | training_args.bin | 0.01 MB | | |
+---------+--------------------+------------+ | |
| 2 | optimizer.pt | 1308.77 MB | | |
+---------+--------------------+------------+ | |
| 3 | model.safetensors | 4600.97 MB | | |
+---------+--------------------+------------+ | |
| 4 | scaler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 5 | config.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 6 | scheduler.pt | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 7 | trainer_state.json | 0.00 MB | | |
+---------+--------------------+------------+ | |
| 8 | rng_state.pth | 0.01 MB | | |
+---------+--------------------+------------+ - [multilabel_classify.py:2467:_save] | |
2025-06-13 19:12:37,957 - INFO - 📂 Loading best model from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262 - [multilabel_classify.py:2519:_load_best_model] | |
2025-06-13 19:12:37,957 - INFO - 🖥️ Model is on device: cuda:0 - [multilabel_classify.py:2529:_load_best_model] | |
2025-06-13 19:12:38,014 - INFO - 🔑 Key order comparison: | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| Index | Saved state_dict Keys | Model state_dict Keys | | |
+=========+============================================+======================================================================================+ | |
| 1 | attention.in_proj_bias | boost_mul | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 2 | attention.in_proj_weight | boost_add | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 3 | attention.out_proj.bias | base_model.base_model.model.model.embed_tokens.weight | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 4 | attention.out_proj.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ | |
| 5 | base_model.base_model.model.lm_head.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.absmax | | |
+---------+--------------------------------------------+--------------------------------------------------------------------------------------+ - [multilabel_classify.py:2553:_load_best_model] | |
2025-06-13 19:12:39,020 - INFO - ✅ Loaded best model weights from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262/model.safetensors - [multilabel_classify.py:2570:_load_best_model] | |
2025-06-13 19:12:39,059 - INFO - ✔️ Weight for boost_mul matches between saved and loaded state_dict - [multilabel_classify.py:2582:_load_best_model] | |
2025-06-13 19:12:39,091 - INFO - ✔️ Weight for boost_add matches between saved and loaded state_dict - [multilabel_classify.py:2582:_load_best_model] | |
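The best-model reload reads the checkpoint's model.safetensors, loads it into the in-memory model with non-strict key matching (hence the key-order comparison above), and spot-checks a few tensors afterwards. A sketch using safetensors, assuming `model` is the in-memory classifier:

```python
import torch
from safetensors.torch import load_file

ckpt = "../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-262/model.safetensors"
saved_state = load_file(ckpt)  # dict[str, torch.Tensor]

# Eyeball key naming/order differences between the saved file and the live model.
print(list(saved_state)[:5])
print(list(model.state_dict())[:5])

# Non-strict load: mismatched keys are reported instead of raising.
missing, unexpected = model.load_state_dict(saved_state, strict=False)

# Spot-check a few tensors after loading.
for name in ("boost_mul", "boost_add"):
    if name in saved_state:
        same = torch.equal(model.state_dict()[name].cpu(), saved_state[name])
        print(f"{name}: {'matches' if same else 'MISMATCH'}")
```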
2025-06-13 19:12:39,108 - INFO - | |
🚂 Training Metrics (Step 262) 🚂 | |
+--------------------------+----------+ | |
| Metric | Value | | |
+==========================+==========+ | |
| train_runtime | 1319.86 | | |
+--------------------------+----------+ | |
| train_samples_per_second | 6.359 | | |
+--------------------------+----------+ | |
| train_steps_per_second | 0.199 | | |
+--------------------------+----------+ | |
| total_flos | 0 | | |
+--------------------------+----------+ | |
| train_loss | 0.062579 | | |
+--------------------------+----------+ | |
| epoch | 0.998095 | | |
+--------------------------+----------+ - [multilabel_classify.py:2188:on_log] | |
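The throughput figures follow directly from the runtime: 8393 training samples / 1319.86 s ≈ 6.36 samples per second, and 262 optimizer steps / 1319.86 s ≈ 0.20 steps per second, matching the train_samples_per_second and train_steps_per_second values above.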
2025-06-13 19:12:39,108 - INFO - ✨ Training Completed! ✨ - [multilabel_classify.py:2061:on_train_end] | |
2025-06-13 19:12:39,183 - INFO - 📊 Training loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/train_loss_plot.png' - [multilabel_classify.py:2257:on_train_end] | |
2025-06-13 19:12:39,237 - INFO - 📊 Evaluation loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_loss_plot.png' - [multilabel_classify.py:2271:on_train_end] | |
2025-06-13 19:12:39,297 - INFO - 📊 Evaluation metric plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_precision_at_15_plot.png' - [multilabel_classify.py:2292:on_train_end] | |
2025-06-13 19:12:39,298 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:101:log_section] | |
2025-06-13 19:12:39,298 - INFO - + ✨ MODEL SAVING + - [multilabel_classify.py:102:log_section] | |
2025-06-13 19:12:39,298 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:105:log_section] | |
2025-06-13 19:12:39,298 - INFO - 💾 Saving trained model and pushing to Hugging Face Hub... - [multilabel_classify.py:4069:main] | |
2025-06-13 19:12:39,298 - INFO - 📁 Creating/using output directory: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3045:save_and_push] | |
2025-06-13 19:12:40,623 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2445:_save] | |
2025-06-13 19:12:40,625 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2450:_save] | |
2025-06-13 19:12:40,626 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b: | |
+---------+-------------------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+===============================+============+ | |
| 1 | eval_loss_plot.png | 0.02 MB | | |
+---------+-------------------------------+------------+ | |
| 2 | training_args.bin | 0.01 MB | | |
+---------+-------------------------------+------------+ | |
| 3 | model.safetensors | 4600.97 MB | | |
+---------+-------------------------------+------------+ | |
| 4 | config.json | 0.00 MB | | |
+---------+-------------------------------+------------+ | |
| 5 | train_loss_plot.png | 0.02 MB | | |
+---------+-------------------------------+------------+ | |
| 6 | eval_precision_at_15_plot.png | 0.03 MB | | |
+---------+-------------------------------+------------+ - [multilabel_classify.py:2467:_save] | |
2025-06-13 19:12:44,632 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2445:_save] | |
2025-06-13 19:12:44,634 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2450:_save] | |
2025-06-13 19:12:44,635 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b: | |
+---------+-------------------------------+------------+ | |
| Index | Saved File | Size | | |
+=========+===============================+============+ | |
| 1 | eval_loss_plot.png | 0.02 MB | | |
+---------+-------------------------------+------------+ | |
| 2 | training_args.bin | 0.01 MB | | |
+---------+-------------------------------+------------+ | |
| 3 | model.safetensors | 4600.97 MB | | |
+---------+-------------------------------+------------+ | |
| 4 | config.json | 0.00 MB | | |
+---------+-------------------------------+------------+ | |
| 5 | train_loss_plot.png | 0.02 MB | | |
+---------+-------------------------------+------------+ | |
| 6 | eval_precision_at_15_plot.png | 0.03 MB | | |
+---------+-------------------------------+------------+ - [multilabel_classify.py:2467:_save] | |
2025-06-13 19:14:09,684 - INFO - 💾 Model saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3049:save_and_push] | |
2025-06-13 19:14:09,714 - INFO - 🖌️ Tokenizer saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3053:save_and_push] | |
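The final save-and-push step usually amounts to writing the model weights, config, and tokenizer into the output directory and uploading that folder to the Hub. A sketch with the standard transformers/huggingface_hub calls, assuming `trainer` and `tokenizer` from the run above; the script wraps this in its own save_and_push:

```python
from huggingface_hub import HfApi

output_dir = "../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b"
repo_id = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify"

trainer.save_model(output_dir)         # writes model.safetensors + config.json
tokenizer.save_pretrained(output_dir)  # writes tokenizer files next to the weights

api = HfApi()
api.create_repo(repo_id, exist_ok=True)
api.upload_folder(folder_path=output_dir, repo_id=repo_id, repo_type="model")
```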