deb101 committed on
Commit 7cdd071 · verified · 1 Parent(s): e569498

Trained classifier model on MIMIC-IV

classification_log_2025-06-05_14-33-04.log ADDED
@@ -0,0 +1,650 @@
1
+ 2025-06-05 14:33:04,831 - INFO - ================================================================================ - [multilabel_classify.py:100:log_section]
2
+ 2025-06-05 14:33:04,831 - INFO - = 📌 INITIALIZING TRAINING ENVIRONMENT = - [multilabel_classify.py:101:log_section]
3
+ 2025-06-05 14:33:04,831 - INFO - ================================================================================ - [multilabel_classify.py:104:log_section]
4
+ 2025-06-05 14:33:04,831 - INFO - 🚀 Setting up data paths and environment variables... - [multilabel_classify.py:3558:main]
5
+ 2025-06-05 14:33:04,832 - INFO - 📂 Using output directory: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3564:main]
6
+ 2025-06-05 14:33:04,832 - INFO - 🛠️ Command-line Arguments: - [multilabel_classify.py:368:print_args]
7
+ 2025-06-05 14:33:04,832 - INFO -
8
+ 🔹 output_dir: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b
9
+ 🔹 source_url: XURLs.MIMIC4_DEMO
10
+ 🔹 data: mimic4_icd10_full
11
+ 🔹 logfile: classification_log
12
+ 🔹 base_dir: ../tmp/MIMIC4_DEMO
13
+ 🔹 hub_model_id: deb101/mistral-7b-instruct-v0.3-mimic4-adapt
14
+ 🔹 model_name: mistralai/Mistral-7B-Instruct-v0.3
15
+ 🔹 max_length: 512
16
+ 🔹 do_fresh_training: True
17
+ 🔹 load_from_checkpoint: False
18
+ 🔹 task: multilabel-classify
19
+ 🔹 num_train_epochs: 4
20
+ 🔹 per_device_train_batch_size: 8
21
+ 🔹 per_device_eval_batch_size: 8
22
+ 🔹 metric_for_best_model: precision_at_15
23
+ 🔹 learning_rate: 0.0001
24
+ 🔹 final_lr_scheduling: 1e-06
25
+ 🔹 warmup_steps: 500
26
+ 🔹 logfile_path: ../tmp/logs/classification_log_2025-06-05_14-33-04.log
27
+ 🔹 source: /home/ubuntu/.xcube/data/mimic4_demo - [multilabel_classify.py:369:print_args]
28
+ 2025-06-05 14:33:04,832 - INFO - ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖ - [multilabel_classify.py:370:print_args]
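The argument dump above is produced by the script's print_args helper. As a rough sketch, a parser exposing these flags could be declared as below; the flag names mirror the log, but the defaults and the helper name are purely illustrative.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flag names taken from the logged arguments; defaults here are illustrative only.
    p = argparse.ArgumentParser(description="Multi-label ICD-10 classification on MIMIC-IV")
    p.add_argument("--output_dir", default="../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b")
    p.add_argument("--model_name", default="mistralai/Mistral-7B-Instruct-v0.3")
    p.add_argument("--hub_model_id", default="deb101/mistral-7b-instruct-v0.3-mimic4-adapt")
    p.add_argument("--task", default="multilabel-classify")
    p.add_argument("--max_length", type=int, default=512)
    p.add_argument("--num_train_epochs", type=int, default=4)
    p.add_argument("--per_device_train_batch_size", type=int, default=8)
    p.add_argument("--per_device_eval_batch_size", type=int, default=8)
    p.add_argument("--learning_rate", type=float, default=1e-4)
    p.add_argument("--warmup_steps", type=int, default=500)
    p.add_argument("--metric_for_best_model", default="precision_at_15")
    p.add_argument("--do_fresh_training", action="store_true")
    return p
```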
29
+ 2025-06-05 14:33:04,832 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
30
+ 2025-06-05 14:33:04,832 - INFO - + ✨ LOADING DATASETS + - [multilabel_classify.py:101:log_section]
31
+ 2025-06-05 14:33:04,832 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
32
+ 2025-06-05 14:33:04,832 - INFO - 📊 Loading main datasets.... - [multilabel_classify.py:3570:main]
33
+ 2025-06-05 14:33:13,354 - INFO - 🔍 Total unique labels in dataset: 7942 - [multilabel_classify.py:3351:sample_df_with_full_label_coverage]
34
+ 2025-06-05 14:33:13,367 - INFO - 🧪 Attempt 1: Sampled 122 rows covering 863 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
35
+ 2025-06-05 14:33:13,376 - INFO - 🧪 Attempt 2: Sampled 122 rows covering 816 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
36
+ 2025-06-05 14:33:13,385 - INFO - 🧪 Attempt 3: Sampled 122 rows covering 885 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
37
+ 2025-06-05 14:33:13,394 - INFO - 🧪 Attempt 4: Sampled 122 rows covering 828 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
38
+ 2025-06-05 14:33:13,403 - INFO - 🧪 Attempt 5: Sampled 122 rows covering 879 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
39
+ 2025-06-05 14:33:13,411 - INFO - 🧪 Attempt 6: Sampled 122 rows covering 852 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
40
+ 2025-06-05 14:33:13,420 - INFO - 🧪 Attempt 7: Sampled 122 rows covering 838 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
41
+ 2025-06-05 14:33:13,428 - INFO - 🧪 Attempt 8: Sampled 122 rows covering 851 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
42
+ 2025-06-05 14:33:13,437 - INFO - 🧪 Attempt 9: Sampled 122 rows covering 825 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
43
+ 2025-06-05 14:33:13,445 - INFO - 🧪 Attempt 10: Sampled 122 rows covering 833 labels. - [multilabel_classify.py:3365:sample_df_with_full_label_coverage]
44
+ 2025-06-05 14:33:13,445 - INFO - ⚠️ Skipping label coverage fix. 7109 labels are missing. - [multilabel_classify.py:3383:sample_df_with_full_label_coverage]
45
+ 2025-06-05 14:33:13,446 - INFO - ✅ Final row count: 122 (Valid: 20, Not-valid: 102) - [multilabel_classify.py:3388:sample_df_with_full_label_coverage]
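The sampler above appears to retry random draws and keep whichever attempt covers the most labels, giving up on full coverage after ten tries. A rough sketch of that idea follows; the function name and signature are assumptions, and the real sample_df_with_full_label_coverage may differ.

```python
import pandas as pd

def sample_with_label_coverage(df: pd.DataFrame, n_rows: int, label_col: str = "labels",
                               max_attempts: int = 10, seed: int = 0) -> pd.DataFrame:
    """Repeatedly sample n_rows rows and keep the draw covering the most unique labels."""
    all_labels = {lab for labs in df[label_col] for lab in labs}
    best, best_cov = None, -1
    for attempt in range(1, max_attempts + 1):
        sample = df.sample(n=n_rows, random_state=seed + attempt)
        covered = {lab for labs in sample[label_col] for lab in labs}
        print(f"Attempt {attempt}: sampled {len(sample)} rows covering {len(covered)} labels.")
        if len(covered) > best_cov:
            best, best_cov = sample, len(covered)
    missing = len(all_labels) - best_cov
    if missing:
        print(f"Skipping label coverage fix. {missing} labels are missing.")
    return best
```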
46
+ 2025-06-05 14:33:13,464 - INFO - ******************************************************************************** - [multilabel_classify.py:100:log_section]
47
+ 2025-06-05 14:33:13,464 - INFO - * 🌟 STARTING MULTI_LABEL CLASSIFICATION MODEL TRAINING * - [multilabel_classify.py:101:log_section]
48
+ 2025-06-05 14:33:13,464 - INFO - ******************************************************************************** - [multilabel_classify.py:104:log_section]
49
+ 2025-06-05 14:33:13,464 - INFO - 🔐 Loaded authentication token from environment - [multilabel_classify.py:3597:main]
50
+ 2025-06-05 14:33:13,465 - INFO - 🏷️ Hub Model ID for this Classification task: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify - [multilabel_classify.py:3601:main]
51
+ 2025-06-05 14:33:13,465 - INFO - -------------------------------------------------------------------------------- - [multilabel_classify.py:100:log_section]
52
+ 2025-06-05 14:33:13,465 - INFO - - 📋 MODEL EXISTENCE CHECK - - [multilabel_classify.py:101:log_section]
53
+ 2025-06-05 14:33:13,465 - INFO - -------------------------------------------------------------------------------- - [multilabel_classify.py:104:log_section]
54
+ 2025-06-05 14:33:13,465 - INFO - 🔍 Checking model existence locally and on Hugging Face Hub... - [multilabel_classify.py:3466:check_model_existence]
55
+ 2025-06-05 14:33:13,465 - INFO - ❌ Model not found locally at: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:3473:check_model_existence]
56
+ 2025-06-05 14:33:13,511 - INFO - ✅ Model exists on Hugging Face Hub with ID: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify - [multilabel_classify.py:3485:check_model_existence]
57
+ 2025-06-05 14:33:13,512 - INFO - 📁 Model exists either locally or on Hub - [multilabel_classify.py:3511:check_model_existence]
58
+ 2025-06-05 14:33:13,512 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
59
+ 2025-06-05 14:33:13,512 - INFO - + ✨ STARTING FRESH TRAINING + - [multilabel_classify.py:101:log_section]
60
+ 2025-06-05 14:33:13,512 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
61
+ 2025-06-05 14:33:13,512 - INFO - 🔄 Starting fresh training (either forced or model not found)... - [multilabel_classify.py:3614:main]
62
+ 2025-06-05 14:33:13,532 - WARNING - Note: Environment variable`HF_TOKEN` is set and is the current active token independently from the token you've just configured. - [_login.py:415:_login]
63
+ 2025-06-05 14:33:13,533 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
64
+ 2025-06-05 14:33:13,533 - INFO - + ✨ LOADING BASE MODEL + - [multilabel_classify.py:101:log_section]
65
+ 2025-06-05 14:33:13,533 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
66
+ 2025-06-05 14:33:13,533 - INFO - 📥 Loading pretrained model and tokenizer... - [multilabel_classify.py:3644:main]
67
+ 2025-06-05 14:33:13,533 - INFO - 🚀 Starting model and tokenizer loading process... - [multilabel_classify.py:1241:load_base_model_and_tokenizer]
68
+ 2025-06-05 14:33:13,534 - INFO - 📊 Quantization config: BitsAndBytesConfig {
69
+ "_load_in_4bit": true,
70
+ "_load_in_8bit": false,
71
+ "bnb_4bit_compute_dtype": "bfloat16",
72
+ "bnb_4bit_quant_storage": "uint8",
73
+ "bnb_4bit_quant_type": "nf4",
74
+ "bnb_4bit_use_double_quant": true,
75
+ "llm_int8_enable_fp32_cpu_offload": false,
76
+ "llm_int8_has_fp16_weight": false,
77
+ "llm_int8_skip_modules": null,
78
+ "llm_int8_threshold": 6.0,
79
+ "load_in_4bit": true,
80
+ "load_in_8bit": false,
81
+ "quant_method": "bitsandbytes"
82
+ }
83
+ - [multilabel_classify.py:1250:load_base_model_and_tokenizer]
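The quantization settings dumped above correspond to a standard 4-bit NF4 setup; a minimal sketch of how such a config is constructed with transformers, using the values from the log:

```python
import torch
from transformers import BitsAndBytesConfig

# Values match the config printed in the log: 4-bit NF4 with double quantization
# and bfloat16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```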
84
+ 2025-06-05 14:33:13,534 - INFO - 🔤 Loading tokenizer for model: deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1254:load_base_model_and_tokenizer]
85
+ 2025-06-05 14:33:13,824 - INFO - 🔍 Checking if deb101/mistral-7b-instruct-v0.3-mimic4-adapt is a PEFT model... - [multilabel_classify.py:1266:load_base_model_and_tokenizer]
86
+ 2025-06-05 14:33:13,840 - INFO - ✅ Detected PEFT model. Base model: mistralai/Mistral-7B-Instruct-v0.3 - [multilabel_classify.py:1270:load_base_model_and_tokenizer]
87
+ 2025-06-05 14:33:13,840 - INFO - 🔍 Loading model configuration for mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1280:load_base_model_and_tokenizer]
88
+ 2025-06-05 14:33:13,865 - INFO - Model type: mistral, Architectures: ['MistralForCausalLM'] - [multilabel_classify.py:1286:load_base_model_and_tokenizer]
89
+ 2025-06-05 14:33:13,865 - INFO - 🧠 Loading base model: mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1349:load_base_model_and_tokenizer]
90
+ 2025-06-05 14:33:14,403 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk). - [modeling.py:991:get_balanced_memory]
91
+ 2025-06-05 14:33:19,721 - INFO - 🧩 Loading PEFT adapters for deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1364:load_base_model_and_tokenizer]
92
+ 2025-06-05 14:33:20,219 - INFO - 🔧 Before enabling PEFT adapters for training - [multilabel_classify.py:1366:load_base_model_and_tokenizer]
93
+ 2025-06-05 14:33:20,221 - INFO - 📊 trainable params: 0 || all params: 7,254,839,296 || trainable%: 0.0000 - [multilabel_classify.py:159:log_print_output]
94
+ 2025-06-05 14:33:20,224 - INFO - 🔧 After Enabling PEFT adapters for training - [multilabel_classify.py:1373:load_base_model_and_tokenizer]
95
+ 2025-06-05 14:33:20,226 - INFO - 📊 trainable params: 6,815,744 || all params: 7,254,839,296 || trainable%: 0.0939 - [multilabel_classify.py:159:log_print_output]
96
+ 2025-06-05 14:33:20,227 - INFO - ✅ Model and tokenizer successfully loaded! - [multilabel_classify.py:1414:load_base_model_and_tokenizer]
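The loader detects that the hub id points at a PEFT adapter, loads the quantized base model, then re-enables the adapter weights for training (hence the jump from 0 to ~6.8M trainable parameters). A hedged reconstruction of those steps, reusing bnb_config from the previous sketch:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt"

# The adapter config records the underlying base model (Mistral-7B-Instruct-v0.3).
peft_config = PeftConfig.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,   # BitsAndBytesConfig from the sketch above
    device_map="auto",
)

# is_trainable=True keeps the LoRA parameters trainable; without it the
# "trainable params: 0" state seen in the log would persist.
model = PeftModel.from_pretrained(base, adapter_id, is_trainable=True)
model.print_trainable_parameters()   # roughly 6.8M of 7.25B parameters trainable
```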
97
+ 2025-06-05 14:33:20,227 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
98
+ 2025-06-05 14:33:20,227 - INFO - + ✨ DATA PREPROCESSING + - [multilabel_classify.py:101:log_section]
99
+ 2025-06-05 14:33:20,227 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
100
+ 2025-06-05 14:33:20,227 - INFO - 🔄 Loading and preprocessing training data... - [multilabel_classify.py:3652:main]
101
+ 2025-06-05 14:33:20,231 - INFO - Total number of labels: 833 - [multilabel_classify.py:1042:preprocess_data]
102
+ 2025-06-05 14:33:20,231 - INFO - Rare labels (freq < 50): 832 - [multilabel_classify.py:1043:preprocess_data]
103
+ 2025-06-05 14:33:20,231 - INFO - Not rare labels (freq >= 50): 1 - [multilabel_classify.py:1044:preprocess_data]
104
+ 2025-06-05 14:33:20,231 - INFO - Label partitions and classes saved to ../tmp/MIMIC4_DEMO/labels_partition.json - [multilabel_classify.py:1045:preprocess_data]
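Preprocessing splits the 833 observed ICD-10 codes into "rare" (fewer than 50 training occurrences) and "not rare" partitions and writes them to labels_partition.json. A sketch of that bookkeeping, with the helper name and JSON layout assumed:

```python
import json
from collections import Counter

def partition_labels(label_lists, threshold: int = 50, out_path: str = "labels_partition.json"):
    """Split labels into rare / not-rare buckets by training-set frequency."""
    freq = Counter(lab for labs in label_lists for lab in labs)
    partitions = {
        "rare": sorted(l for l, c in freq.items() if c < threshold),
        "not_rare": sorted(l for l, c in freq.items() if c >= threshold),
        "classes": sorted(freq),
    }
    with open(out_path, "w") as f:
        json.dump(partitions, f, indent=2)
    return partitions
```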
105
+ 2025-06-05 14:33:21,544 - INFO - The size of training set: 567 - [multilabel_classify.py:1141:preprocess_data]
106
+ 2025-06-05 14:33:21,544 - INFO - The size of Evaluation set: 136 - [multilabel_classify.py:1142:preprocess_data]
107
+ 2025-06-05 14:33:21,550 - INFO - Number of unique ICD-10 codes: 833 - [multilabel_classify.py:3658:main]
108
+ 2025-06-05 14:33:21,550 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
109
+ 2025-06-05 14:33:21,550 - INFO - + ✨ MODEL INITIALIZATION + - [multilabel_classify.py:101:log_section]
110
+ 2025-06-05 14:33:21,550 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
111
+ 2025-06-05 14:33:21,550 - INFO - 🧠 Initializing custom L2R model for outputting per-token relevance scores per ICD-10 codes. - [multilabel_classify.py:3661:main]
112
+ 2025-06-05 14:33:21,550 - INFO - Will now start to create Multilabel-Classification Model from the base model - [multilabel_classify.py:560:__init__]
113
+ 2025-06-05 14:33:21,554 - INFO - Trainable params: 6815744 / 3765178368 (0.18%) - [multilabel_classify.py:612:compute_trainable_params]
114
+ 2025-06-05 14:33:21,892 - INFO - Creating the Multi-Label Classification Model from base model mistralai/Mistral-7B-Instruct-v0.3 completed!!! - [multilabel_classify.py:602:__init__]
115
+ 2025-06-05 14:33:21,895 - INFO - Trainable params: 84177025 / 3842539649 (2.19%) - [multilabel_classify.py:612:compute_trainable_params]
116
+ 2025-06-05 14:33:21,895 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
117
+ 2025-06-05 14:33:21,895 - INFO - + ✨ TRAINING PREPARATION + - [multilabel_classify.py:101:log_section]
118
+ 2025-06-05 14:33:21,895 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
119
+ 2025-06-05 14:33:21,895 - INFO - ⚙️ Preparing training components and optimizers... - [multilabel_classify.py:3668:main]
120
+ 2025-06-05 14:33:21,952 - INFO - 🖥️ Device: NVIDIA GH200 480GB - [multilabel_classify.py:889:log_training_configuration]
121
+ 2025-06-05 14:33:21,952 - INFO - 🔋 CUDA Available: True - [multilabel_classify.py:892:log_training_configuration]
122
+ 2025-06-05 14:33:21,952 - INFO - 💾 CUDA Device Count: 1 - [multilabel_classify.py:893:log_training_configuration]
123
+ 2025-06-05 14:33:21,954 - INFO -
124
+ 📋 Training Configuration 📋
125
+ +----------+-----------------------------+------------------------------------------------------------------+
126
+ | 🌟 Emoji | 🏷️ Parameter | 📊 Value |
127
+ +----------+-----------------------------+------------------------------------------------------------------+
128
+ | 📁 | Output Directory | ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b |
129
+ | 🔁 | Training Epochs | 4 |
130
+ | 🏋️ | Train Batch Size | 8 |
131
+ | 🔍 | Eval Batch Size | 8 |
132
+ | 📊 | Gradient Accumulation Steps | 4 |
133
+ | 🚀 | Learning Rate | 0.0001 |
134
+ | 🌅 | Warmup Steps | 500 |
135
+ | 💾 | Save Strategy | epoch |
136
+ | 💾 | Save Total Limit | 10 |
137
+ | 📊 | Evaluation Strategy | epoch |
138
+ | 🎯 | Best Model Metric | precision_at_15 |
139
+ | 📝 | Logging Strategy | steps (every 10 steps) |
140
+ | 🌐 | Push to Hub | True |
141
+ | 🌐 | Hub Model ID | deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify |
142
+ | 🔢 | Steps per Epoch | 17 |
143
+ | 🔢 | Total Training Steps | 68 |
144
+ | 🔢 | Evaluation Steps | 17 |
145
+ | 📊 | Training Dataset Size | 567 samples 🏋️ |
146
+ | 📊 | Evaluation Dataset Size | 136 samples 🔍 |
147
+ +----------+-----------------------------+------------------------------------------------------------------+ - [multilabel_classify.py:881:log_training_args]
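The configuration table maps almost one-to-one onto Hugging Face TrainingArguments. Note the arithmetic behind the step counts: 567 training samples with an effective batch of 8 × 4 = 32 gives the 17 optimizer steps per epoch and 4 × 17 = 68 total steps reported above. A sketch of the corresponding arguments; names follow recent transformers releases and may vary by version, and greater_is_better / load_best_model_at_end are assumptions consistent with the "best model" behaviour in the log.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b",
    num_train_epochs=4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # effective batch 32 -> 17 optimizer steps per epoch
    learning_rate=1e-4,
    warmup_steps=500,
    save_strategy="epoch",
    save_total_limit=10,
    eval_strategy="epoch",              # "evaluation_strategy" on older transformers versions
    load_best_model_at_end=True,
    metric_for_best_model="precision_at_15",
    greater_is_better=True,
    logging_strategy="steps",
    logging_steps=10,
    push_to_hub=True,
    hub_model_id="deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify",
)
```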
148
+ 2025-06-05 14:33:21,954 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
149
+ 2025-06-05 14:33:21,954 - INFO - + ✨ MODEL TRAINING + - [multilabel_classify.py:101:log_section]
150
+ 2025-06-05 14:33:21,954 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
151
+ 2025-06-05 14:33:21,954 - INFO - 🏋️ Starting model training process... - [multilabel_classify.py:3690:main]
152
+ 2025-06-05 14:33:21,997 - INFO - We are registering the tokenizer deb101/mistral-7b-instruct-v0.3-mimic4-adapt in Custom Trainer - [multilabel_classify.py:1986:__init__]
153
+ 2025-06-05 14:33:22,241 - INFO - 🚀 Starting Training... - [multilabel_classify.py:1640:on_train_begin]
154
+ 2025-06-05 14:33:42,624 - INFO -
155
+ 🚂 Training Metrics (Step 10) 🚂
156
+ +---------------+---------+
157
+ | Metric | Value |
158
+ +===============+=========+
159
+ | loss | 0.5906 |
160
+ +---------------+---------+
161
+ | grad_norm | 6.4284 |
162
+ +---------------+---------+
163
+ | learning_rate | 2e-06 |
164
+ +---------------+---------+
165
+ | epoch | 0.56338 |
166
+ +---------------+---------+ - [multilabel_classify.py:1834:on_log]
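The learning-rate column is consistent with linear warmup over the configured 500 steps: at step s the rate is s / 500 × 1e-4, i.e. 2e-6 at step 10, 4e-6 at step 20, and so on; since the run only lasts 68 steps, warmup never completes. A self-contained sketch of a scheduler with that shape (the script may build its optimizer differently):

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Dummy parameters, used only to illustrate the schedule shape.
dummy = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(dummy.parameters(), lr=1e-4)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=68
)
for step in range(1, 21):
    optimizer.step()
    scheduler.step()
    if step % 10 == 0:
        print(step, scheduler.get_last_lr()[0])   # ~2e-06 at step 10, ~4e-06 at step 20
```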
167
+ 2025-06-05 14:33:56,611 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1998:evaluate]
168
+ 2025-06-05 14:34:31,640 - INFO -
169
+ 🔍 Evaluation Metrics 🔍
170
+ +-------------------------------+----------+
171
+ | Metric | Value |
172
+ +===============================+==========+
173
+ | eval_f1_micro | 0 |
174
+ +-------------------------------+----------+
175
+ | eval_f1_macro | 0 |
176
+ +-------------------------------+----------+
177
+ | eval_precision_at_5 | 0.022059 |
178
+ +-------------------------------+----------+
179
+ | eval_recall_at_5 | 0.004382 |
180
+ +-------------------------------+----------+
181
+ | eval_precision_at_8 | 0.023897 |
182
+ +-------------------------------+----------+
183
+ | eval_recall_at_8 | 0.009611 |
184
+ +-------------------------------+----------+
185
+ | eval_precision_at_15 | 0.028431 |
186
+ +-------------------------------+----------+
187
+ | eval_recall_at_15 | 0.021702 |
188
+ +-------------------------------+----------+
189
+ | eval_rare_f1_micro | 0 |
190
+ +-------------------------------+----------+
191
+ | eval_rare_f1_macro | 0 |
192
+ +-------------------------------+----------+
193
+ | eval_rare_precision | 0 |
194
+ +-------------------------------+----------+
195
+ | eval_rare_recall | 0 |
196
+ +-------------------------------+----------+
197
+ | eval_rare_precision_at_5 | 0.023529 |
198
+ +-------------------------------+----------+
199
+ | eval_rare_recall_at_5 | 0.00463 |
200
+ +-------------------------------+----------+
201
+ | eval_rare_precision_at_8 | 0.027574 |
202
+ +-------------------------------+----------+
203
+ | eval_rare_recall_at_8 | 0.010825 |
204
+ +-------------------------------+----------+
205
+ | eval_rare_precision_at_15 | 0.027941 |
206
+ +-------------------------------+----------+
207
+ | eval_rare_recall_at_15 | 0.021662 |
208
+ +-------------------------------+----------+
209
+ | eval_not_rare_f1_micro | 0.595588 |
210
+ +-------------------------------+----------+
211
+ | eval_not_rare_f1_macro | 0.373272 |
212
+ +-------------------------------+----------+
213
+ | eval_not_rare_precision | 0.595588 |
214
+ +-------------------------------+----------+
215
+ | eval_not_rare_recall | 0.595588 |
216
+ +-------------------------------+----------+
217
+ | eval_not_rare_precision_at_5 | 0.080882 |
218
+ +-------------------------------+----------+
219
+ | eval_not_rare_recall_at_5 | 0.404412 |
220
+ +-------------------------------+----------+
221
+ | eval_not_rare_precision_at_8 | 0.050551 |
222
+ +-------------------------------+----------+
223
+ | eval_not_rare_recall_at_8 | 0.404412 |
224
+ +-------------------------------+----------+
225
+ | eval_not_rare_precision_at_15 | 0.026961 |
226
+ +-------------------------------+----------+
227
+ | eval_not_rare_recall_at_15 | 0.404412 |
228
+ +-------------------------------+----------+
229
+ | eval_loss | 0.207322 |
230
+ +-------------------------------+----------+ - [multilabel_classify.py:1853:on_evaluate]
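The headline metric, precision_at_15, together with the other @k values, is the usual ranking metric for large-label-space ICD coding: take the k highest-scoring codes per note and compare them with the gold set. The script's own implementation isn't shown, so the following is only a sketch of that computation:

```python
import numpy as np

def precision_recall_at_k(logits: np.ndarray, labels: np.ndarray, k: int = 15):
    """logits, labels: (n_samples, n_labels); labels are 0/1 indicators."""
    topk = np.argsort(-logits, axis=1)[:, :k]            # indices of the k highest scores
    hits = np.take_along_axis(labels, topk, axis=1).sum(axis=1)
    precision = (hits / k).mean()
    gold_counts = labels.sum(axis=1)
    # Guard against notes with no gold labels when computing recall.
    recall = np.where(gold_counts > 0, hits / np.maximum(gold_counts, 1), 0.0).mean()
    return precision, recall
```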
231
+ 2025-06-05 14:34:35,042 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18 - [multilabel_classify.py:2091:_save]
232
+ 2025-06-05 14:34:35,043 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18 - [multilabel_classify.py:2096:_save]
233
+ 2025-06-05 14:34:35,045 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18:
234
+ +---------+--------------------+------------+
235
+ | Index | Saved File | Size |
236
+ +=========+====================+============+
237
+ | 1 | training_args.bin | 0.01 MB |
238
+ +---------+--------------------+------------+
239
+ | 2 | optimizer.pt | 642.30 MB |
240
+ +---------+--------------------+------------+
241
+ | 3 | model.safetensors | 4267.74 MB |
242
+ +---------+--------------------+------------+
243
+ | 4 | scaler.pt | 0.00 MB |
244
+ +---------+--------------------+------------+
245
+ | 5 | config.json | 0.00 MB |
246
+ +---------+--------------------+------------+
247
+ | 6 | scheduler.pt | 0.00 MB |
248
+ +---------+--------------------+------------+
249
+ | 7 | trainer_state.json | 0.00 MB |
250
+ +---------+--------------------+------------+
251
+ | 8 | rng_state.pth | 0.01 MB |
252
+ +---------+--------------------+------------+ - [multilabel_classify.py:2113:_save]
253
+ 2025-06-05 14:34:41,566 - INFO -
254
+ 🚂 Training Metrics (Step 20) 🚂
255
+ +---------------+---------+
256
+ | Metric | Value |
257
+ +===============+=========+
258
+ | loss | 0.3142 |
259
+ +---------------+---------+
260
+ | grad_norm | 2.40296 |
261
+ +---------------+---------+
262
+ | learning_rate | 4e-06 |
263
+ +---------------+---------+
264
+ | epoch | 1.11268 |
265
+ +---------------+---------+ - [multilabel_classify.py:1834:on_log]
266
+ 2025-06-05 14:34:59,655 - INFO -
267
+ 🚂 Training Metrics (Step 30) 🚂
268
+ +---------------+----------+
269
+ | Metric | Value |
270
+ +===============+==========+
271
+ | loss | 0.1215 |
272
+ +---------------+----------+
273
+ | grad_norm | 0.298427 |
274
+ +---------------+----------+
275
+ | learning_rate | 6e-06 |
276
+ +---------------+----------+
277
+ | epoch | 1.67606 |
278
+ +---------------+----------+ - [multilabel_classify.py:1834:on_log]
279
+ 2025-06-05 14:35:10,046 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1998:evaluate]
280
+ 2025-06-05 14:35:45,066 - INFO -
281
+ 🔍 Evaluation Metrics 🔍
282
+ +-------------------------------+----------+
283
+ | Metric | Value |
284
+ +===============================+==========+
285
+ | eval_f1_micro | 0 |
286
+ +-------------------------------+----------+
287
+ | eval_f1_macro | 0 |
288
+ +-------------------------------+----------+
289
+ | eval_precision_at_5 | 0.038235 |
290
+ +-------------------------------+----------+
291
+ | eval_recall_at_5 | 0.010113 |
292
+ +-------------------------------+----------+
293
+ | eval_precision_at_8 | 0.035846 |
294
+ +-------------------------------+----------+
295
+ | eval_recall_at_8 | 0.014265 |
296
+ +-------------------------------+----------+
297
+ | eval_precision_at_15 | 0.030392 |
298
+ +-------------------------------+----------+
299
+ | eval_recall_at_15 | 0.025588 |
300
+ +-------------------------------+----------+
301
+ | eval_rare_f1_micro | 0 |
302
+ +-------------------------------+----------+
303
+ | eval_rare_f1_macro | 0 |
304
+ +-------------------------------+----------+
305
+ | eval_rare_precision | 0 |
306
+ +-------------------------------+----------+
307
+ | eval_rare_recall | 0 |
308
+ +-------------------------------+----------+
309
+ | eval_rare_precision_at_5 | 0.038235 |
310
+ +-------------------------------+----------+
311
+ | eval_rare_recall_at_5 | 0.010045 |
312
+ +-------------------------------+----------+
313
+ | eval_rare_precision_at_8 | 0.033088 |
314
+ +-------------------------------+----------+
315
+ | eval_rare_recall_at_8 | 0.013213 |
316
+ +-------------------------------+----------+
317
+ | eval_rare_precision_at_15 | 0.030392 |
318
+ +-------------------------------+----------+
319
+ | eval_rare_recall_at_15 | 0.026888 |
320
+ +-------------------------------+----------+
321
+ | eval_not_rare_f1_micro | 0.595588 |
322
+ +-------------------------------+----------+
323
+ | eval_not_rare_f1_macro | 0.373272 |
324
+ +-------------------------------+----------+
325
+ | eval_not_rare_precision | 0.595588 |
326
+ +-------------------------------+----------+
327
+ | eval_not_rare_recall | 0.595588 |
328
+ +-------------------------------+----------+
329
+ | eval_not_rare_precision_at_5 | 0.080882 |
330
+ +-------------------------------+----------+
331
+ | eval_not_rare_recall_at_5 | 0.404412 |
332
+ +-------------------------------+----------+
333
+ | eval_not_rare_precision_at_8 | 0.050551 |
334
+ +-------------------------------+----------+
335
+ | eval_not_rare_recall_at_8 | 0.404412 |
336
+ +-------------------------------+----------+
337
+ | eval_not_rare_precision_at_15 | 0.026961 |
338
+ +-------------------------------+----------+
339
+ | eval_not_rare_recall_at_15 | 0.404412 |
340
+ +-------------------------------+----------+
341
+ | eval_loss | 0.121781 |
342
+ +-------------------------------+----------+ - [multilabel_classify.py:1853:on_evaluate]
343
+ 2025-06-05 14:35:46,261 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36 - [multilabel_classify.py:2091:_save]
344
+ 2025-06-05 14:35:46,262 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36 - [multilabel_classify.py:2096:_save]
345
+ 2025-06-05 14:35:46,263 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36:
346
+ +---------+-------------------+------------+
347
+ | Index | Saved File | Size |
348
+ +=========+===================+============+
349
+ | 1 | training_args.bin | 0.01 MB |
350
+ +---------+-------------------+------------+
351
+ | 2 | model.safetensors | 4267.74 MB |
352
+ +---------+-------------------+------------+
353
+ | 3 | config.json | 0.00 MB |
354
+ +---------+-------------------+------------+ - [multilabel_classify.py:2113:_save]
355
+ 2025-06-05 14:35:55,917 - INFO -
356
+ 🚂 Training Metrics (Step 40) 🚂
357
+ +---------------+----------+
358
+ | Metric | Value |
359
+ +===============+==========+
360
+ | loss | 0.1101 |
361
+ +---------------+----------+
362
+ | grad_norm | 0.456469 |
363
+ +---------------+----------+
364
+ | learning_rate | 8e-06 |
365
+ +---------------+----------+
366
+ | epoch | 2.22535 |
367
+ +---------------+----------+ - [multilabel_classify.py:1834:on_log]
368
+ 2025-06-05 14:36:14,058 - INFO -
369
+ 🚂 Training Metrics (Step 50) 🚂
370
+ +---------------+----------+
371
+ | Metric | Value |
372
+ +===============+==========+
373
+ | loss | 0.1081 |
374
+ +---------------+----------+
375
+ | grad_norm | 0.047327 |
376
+ +---------------+----------+
377
+ | learning_rate | 1e-05 |
378
+ +---------------+----------+
379
+ | epoch | 2.78873 |
380
+ +---------------+----------+ - [multilabel_classify.py:1834:on_log]
381
+ 2025-06-05 14:36:20,858 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1998:evaluate]
382
+ 2025-06-05 14:36:56,071 - INFO -
383
+ 🔍 Evaluation Metrics 🔍
384
+ +-------------------------------+----------+
385
+ | Metric | Value |
386
+ +===============================+==========+
387
+ | eval_f1_micro | 0 |
388
+ +-------------------------------+----------+
389
+ | eval_f1_macro | 0 |
390
+ +-------------------------------+----------+
391
+ | eval_precision_at_5 | 0.069118 |
392
+ +-------------------------------+----------+
393
+ | eval_recall_at_5 | 0.022519 |
394
+ +-------------------------------+----------+
395
+ | eval_precision_at_8 | 0.056066 |
396
+ +-------------------------------+----------+
397
+ | eval_recall_at_8 | 0.027557 |
398
+ +-------------------------------+----------+
399
+ | eval_precision_at_15 | 0.045588 |
400
+ +-------------------------------+----------+
401
+ | eval_recall_at_15 | 0.039367 |
402
+ +-------------------------------+----------+
403
+ | eval_rare_f1_micro | 0 |
404
+ +-------------------------------+----------+
405
+ | eval_rare_f1_macro | 0 |
406
+ +-------------------------------+----------+
407
+ | eval_rare_precision | 0 |
408
+ +-------------------------------+----------+
409
+ | eval_rare_recall | 0 |
410
+ +-------------------------------+----------+
411
+ | eval_rare_precision_at_5 | 0.070588 |
412
+ +-------------------------------+----------+
413
+ | eval_rare_recall_at_5 | 0.024437 |
414
+ +-------------------------------+----------+
415
+ | eval_rare_precision_at_8 | 0.056066 |
416
+ +-------------------------------+----------+
417
+ | eval_rare_recall_at_8 | 0.029136 |
418
+ +-------------------------------+----------+
419
+ | eval_rare_precision_at_15 | 0.041667 |
420
+ +-------------------------------+----------+
421
+ | eval_rare_recall_at_15 | 0.036998 |
422
+ +-------------------------------+----------+
423
+ | eval_not_rare_f1_micro | 0.595588 |
424
+ +-------------------------------+----------+
425
+ | eval_not_rare_f1_macro | 0.373272 |
426
+ +-------------------------------+----------+
427
+ | eval_not_rare_precision | 0.595588 |
428
+ +-------------------------------+----------+
429
+ | eval_not_rare_recall | 0.595588 |
430
+ +-------------------------------+----------+
431
+ | eval_not_rare_precision_at_5 | 0.080882 |
432
+ +-------------------------------+----------+
433
+ | eval_not_rare_recall_at_5 | 0.404412 |
434
+ +-------------------------------+----------+
435
+ | eval_not_rare_precision_at_8 | 0.050551 |
436
+ +-------------------------------+----------+
437
+ | eval_not_rare_recall_at_8 | 0.404412 |
438
+ +-------------------------------+----------+
439
+ | eval_not_rare_precision_at_15 | 0.026961 |
440
+ +-------------------------------+----------+
441
+ | eval_not_rare_recall_at_15 | 0.404412 |
442
+ +-------------------------------+----------+
443
+ | eval_loss | 0.10727 |
444
+ +-------------------------------+----------+ - [multilabel_classify.py:1853:on_evaluate]
445
+ 2025-06-05 14:36:57,298 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54 - [multilabel_classify.py:2091:_save]
446
+ 2025-06-05 14:36:57,299 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54 - [multilabel_classify.py:2096:_save]
447
+ 2025-06-05 14:36:57,300 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54:
448
+ +---------+-------------------+------------+
449
+ | Index | Saved File | Size |
450
+ +=========+===================+============+
451
+ | 1 | training_args.bin | 0.01 MB |
452
+ +---------+-------------------+------------+
453
+ | 2 | model.safetensors | 4267.74 MB |
454
+ +---------+-------------------+------------+
455
+ | 3 | config.json | 0.00 MB |
456
+ +---------+-------------------+------------+ - [multilabel_classify.py:2113:_save]
457
+ 2025-06-05 14:37:10,608 - INFO -
458
+ 🚂 Training Metrics (Step 60) 🚂
459
+ +---------------+---------+
460
+ | Metric | Value |
461
+ +===============+=========+
462
+ | loss | 0.1034 |
463
+ +---------------+---------+
464
+ | grad_norm | 0.1776 |
465
+ +---------------+---------+
466
+ | learning_rate | 1.2e-05 |
467
+ +---------------+---------+
468
+ | epoch | 3.33803 |
469
+ +---------------+---------+ - [multilabel_classify.py:1834:on_log]
470
+ 2025-06-05 14:37:26,330 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2091:_save]
471
+ 2025-06-05 14:37:26,332 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2096:_save]
472
+ 2025-06-05 14:37:26,333 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68:
473
+ +---------+-------------------+------------+
474
+ | Index | Saved File | Size |
475
+ +=========+===================+============+
476
+ | 1 | training_args.bin | 0.01 MB |
477
+ +---------+-------------------+------------+
478
+ | 2 | model.safetensors | 4267.74 MB |
479
+ +---------+-------------------+------------+
480
+ | 3 | config.json | 0.00 MB |
481
+ +---------+-------------------+------------+ - [multilabel_classify.py:2113:_save]
482
+ 2025-06-05 14:37:26,638 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1998:evaluate]
483
+ 2025-06-05 14:38:01,918 - INFO -
484
+ 🔍 Evaluation Metrics 🔍
485
+ +-------------------------------+----------+
486
+ | Metric | Value |
487
+ +===============================+==========+
488
+ | eval_f1_micro | 0 |
489
+ +-------------------------------+----------+
490
+ | eval_f1_macro | 0 |
491
+ +-------------------------------+----------+
492
+ | eval_precision_at_5 | 0.141176 |
493
+ +-------------------------------+----------+
494
+ | eval_recall_at_5 | 0.049623 |
495
+ +-------------------------------+----------+
496
+ | eval_precision_at_8 | 0.113051 |
497
+ +-------------------------------+----------+
498
+ | eval_recall_at_8 | 0.062767 |
499
+ +-------------------------------+----------+
500
+ | eval_precision_at_15 | 0.078922 |
501
+ +-------------------------------+----------+
502
+ | eval_recall_at_15 | 0.084687 |
503
+ +-------------------------------+----------+
504
+ | eval_rare_f1_micro | 0 |
505
+ +-------------------------------+----------+
506
+ | eval_rare_f1_macro | 0 |
507
+ +-------------------------------+----------+
508
+ | eval_rare_precision | 0 |
509
+ +-------------------------------+----------+
510
+ | eval_rare_recall | 0 |
511
+ +-------------------------------+----------+
512
+ | eval_rare_precision_at_5 | 0.083824 |
513
+ +-------------------------------+----------+
514
+ | eval_rare_recall_at_5 | 0.027629 |
515
+ +-------------------------------+----------+
516
+ | eval_rare_precision_at_8 | 0.067096 |
517
+ +-------------------------------+----------+
518
+ | eval_rare_recall_at_8 | 0.039039 |
519
+ +-------------------------------+----------+
520
+ | eval_rare_precision_at_15 | 0.054902 |
521
+ +-------------------------------+----------+
522
+ | eval_rare_recall_at_15 | 0.056481 |
523
+ +-------------------------------+----------+
524
+ | eval_not_rare_f1_micro | 0.595588 |
525
+ +-------------------------------+----------+
526
+ | eval_not_rare_f1_macro | 0.373272 |
527
+ +-------------------------------+----------+
528
+ | eval_not_rare_precision | 0.595588 |
529
+ +-------------------------------+----------+
530
+ | eval_not_rare_recall | 0.595588 |
531
+ +-------------------------------+----------+
532
+ | eval_not_rare_precision_at_5 | 0.080882 |
533
+ +-------------------------------+----------+
534
+ | eval_not_rare_recall_at_5 | 0.404412 |
535
+ +-------------------------------+----------+
536
+ | eval_not_rare_precision_at_8 | 0.050551 |
537
+ +-------------------------------+----------+
538
+ | eval_not_rare_recall_at_8 | 0.404412 |
539
+ +-------------------------------+----------+
540
+ | eval_not_rare_precision_at_15 | 0.026961 |
541
+ +-------------------------------+----------+
542
+ | eval_not_rare_recall_at_15 | 0.404412 |
543
+ +-------------------------------+----------+
544
+ | eval_loss | 0.104956 |
545
+ +-------------------------------+----------+ - [multilabel_classify.py:1853:on_evaluate]
546
+ 2025-06-05 14:38:05,654 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2091:_save]
547
+ 2025-06-05 14:38:05,655 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2096:_save]
548
+ 2025-06-05 14:38:05,656 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68:
549
+ +---------+--------------------+------------+
550
+ | Index | Saved File | Size |
551
+ +=========+====================+============+
552
+ | 1 | training_args.bin | 0.01 MB |
553
+ +---------+--------------------+------------+
554
+ | 2 | optimizer.pt | 642.30 MB |
555
+ +---------+--------------------+------------+
556
+ | 3 | model.safetensors | 4267.74 MB |
557
+ +---------+--------------------+------------+
558
+ | 4 | scaler.pt | 0.00 MB |
559
+ +---------+--------------------+------------+
560
+ | 5 | config.json | 0.00 MB |
561
+ +---------+--------------------+------------+
562
+ | 6 | scheduler.pt | 0.00 MB |
563
+ +---------+--------------------+------------+
564
+ | 7 | trainer_state.json | 0.01 MB |
565
+ +---------+--------------------+------------+
566
+ | 8 | rng_state.pth | 0.01 MB |
567
+ +---------+--------------------+------------+ - [multilabel_classify.py:2113:_save]
568
+ 2025-06-05 14:38:06,173 - INFO - 📂 Loading best model from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2165:_load_best_model]
569
+ 2025-06-05 14:38:06,173 - INFO - 🖥️ Model is on device: cuda:0 - [multilabel_classify.py:2175:_load_best_model]
570
+ 2025-06-05 14:38:06,227 - INFO - 🔑 Key order comparison:
571
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
572
+ | Index | Saved state_dict Keys | Model state_dict Keys |
573
+ +=========+============================================+======================================================================================+
574
+ | 1 | attention.in_proj_bias | boost_mul |
575
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
576
+ | 2 | attention.in_proj_weight | boost_add |
577
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
578
+ | 3 | attention.out_proj.bias | base_model.base_model.model.model.embed_tokens.weight |
579
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
580
+ | 4 | attention.out_proj.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight |
581
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
582
+ | 5 | base_model.base_model.model.lm_head.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.absmax |
583
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+ - [multilabel_classify.py:2199:_load_best_model]
584
+ 2025-06-05 14:38:07,251 - INFO - ✅ Loaded best model weights from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68/model.safetensors - [multilabel_classify.py:2216:_load_best_model]
585
+ 2025-06-05 14:38:07,277 - INFO - ✔️ Weight for boost_mul matches between saved and loaded state_dict - [multilabel_classify.py:2228:_load_best_model]
586
+ 2025-06-05 14:38:07,301 - INFO - ✔️ Weight for boost_add matches between saved and loaded state_dict - [multilabel_classify.py:2228:_load_best_model]
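The custom _load_best_model step reloads the checkpoint's safetensors file and spot-checks a couple of tensors (boost_mul, boost_add) against the in-memory model. Something along these lines; here `model` stands for the already-built classification model from the run, and the strict=False load is an assumption based on the key-order mismatch shown above:

```python
import torch
from safetensors.torch import load_file

ckpt = "../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68/model.safetensors"
saved_state = load_file(ckpt)                       # dict[str, torch.Tensor]

missing, unexpected = model.load_state_dict(saved_state, strict=False)

# Spot-check a few tensors after loading, as the log does for boost_mul / boost_add.
params = dict(model.named_parameters())
for key in ("boost_mul", "boost_add"):
    if key in saved_state and key in params:
        loaded = params[key].detach().cpu()
        status = "matches" if torch.allclose(loaded, saved_state[key]) else "DIFFERS"
        print(f"Weight for {key} {status} between saved and loaded state_dict")
```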
587
+ 2025-06-05 14:38:07,316 - INFO -
588
+ 🚂 Training Metrics (Step 68) 🚂
589
+ +--------------------------+----------+
590
+ | Metric | Value |
591
+ +==========================+==========+
592
+ | train_runtime | 285.076 |
593
+ +--------------------------+----------+
594
+ | train_samples_per_second | 7.956 |
595
+ +--------------------------+----------+
596
+ | train_steps_per_second | 0.239 |
597
+ +--------------------------+----------+
598
+ | total_flos | 0 |
599
+ +--------------------------+----------+
600
+ | train_loss | 0.210457 |
601
+ +--------------------------+----------+
602
+ | epoch | 3.78873 |
603
+ +--------------------------+----------+ - [multilabel_classify.py:1834:on_log]
604
+ 2025-06-05 14:38:07,317 - INFO - ✨ Training Completed! ✨ - [multilabel_classify.py:1707:on_train_end]
605
+ 2025-06-05 14:38:07,385 - INFO - 📊 Training loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/train_loss_plot.png' - [multilabel_classify.py:1903:on_train_end]
606
+ 2025-06-05 14:38:07,444 - INFO - 📊 Evaluation loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_loss_plot.png' - [multilabel_classify.py:1917:on_train_end]
607
+ 2025-06-05 14:38:07,502 - INFO - 📊 Evaluation metric plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_precision_at_15_plot.png' - [multilabel_classify.py:1938:on_train_end]
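The three PNGs are rendered from the trainer's log history at the end of training. A minimal version of that plotting step might look like the following; the helper name is assumed:

```python
import matplotlib.pyplot as plt

def plot_metric(log_history, key: str, out_png: str):
    """log_history: trainer.state.log_history, a list of dicts logged during training."""
    points = [(e["step"], e[key]) for e in log_history if key in e]
    if not points:
        return
    steps, values = zip(*points)
    plt.figure()
    plt.plot(steps, values, marker="o")
    plt.xlabel("step")
    plt.ylabel(key)
    plt.tight_layout()
    plt.savefig(out_png)
    plt.close()

# e.g. plot_metric(trainer.state.log_history, "loss", "train_loss_plot.png")
#      plot_metric(trainer.state.log_history, "eval_precision_at_15", "eval_precision_at_15_plot.png")
```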
608
+ 2025-06-05 14:38:07,503 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
609
+ 2025-06-05 14:38:07,503 - INFO - + ✨ MODEL SAVING + - [multilabel_classify.py:101:log_section]
610
+ 2025-06-05 14:38:07,503 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:104:log_section]
611
+ 2025-06-05 14:38:07,503 - INFO - 💾 Saving trained model and pushing to Hugging Face Hub... - [multilabel_classify.py:3704:main]
612
+ 2025-06-05 14:38:07,503 - INFO - 📁 Creating/using output directory: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2691:save_and_push]
613
+ 2025-06-05 14:38:08,708 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2091:_save]
614
+ 2025-06-05 14:38:08,709 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2096:_save]
615
+ 2025-06-05 14:38:08,710 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b:
616
+ +---------+-------------------------------+------------+
617
+ | Index | Saved File | Size |
618
+ +=========+===============================+============+
619
+ | 1 | eval_loss_plot.png | 0.03 MB |
620
+ +---------+-------------------------------+------------+
621
+ | 2 | training_args.bin | 0.01 MB |
622
+ +---------+-------------------------------+------------+
623
+ | 3 | model.safetensors | 4267.74 MB |
624
+ +---------+-------------------------------+------------+
625
+ | 4 | config.json | 0.00 MB |
626
+ +---------+-------------------------------+------------+
627
+ | 5 | train_loss_plot.png | 0.02 MB |
628
+ +---------+-------------------------------+------------+
629
+ | 6 | eval_precision_at_15_plot.png | 0.03 MB |
630
+ +---------+-------------------------------+------------+ - [multilabel_classify.py:2113:_save]
631
+ 2025-06-05 14:38:12,338 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2091:_save]
632
+ 2025-06-05 14:38:12,339 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2096:_save]
633
+ 2025-06-05 14:38:12,340 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b:
634
+ +---------+-------------------------------+------------+
635
+ | Index | Saved File | Size |
636
+ +=========+===============================+============+
637
+ | 1 | eval_loss_plot.png | 0.03 MB |
638
+ +---------+-------------------------------+------------+
639
+ | 2 | training_args.bin | 0.01 MB |
640
+ +---------+-------------------------------+------------+
641
+ | 3 | model.safetensors | 4267.74 MB |
642
+ +---------+-------------------------------+------------+
643
+ | 4 | config.json | 0.00 MB |
644
+ +---------+-------------------------------+------------+
645
+ | 5 | train_loss_plot.png | 0.02 MB |
646
+ +---------+-------------------------------+------------+
647
+ | 6 | eval_precision_at_15_plot.png | 0.03 MB |
648
+ +---------+-------------------------------+------------+ - [multilabel_classify.py:2113:_save]
649
+ 2025-06-05 14:39:35,835 - INFO - 💾 Model saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2695:save_and_push]
650
+ 2025-06-05 14:39:35,865 - INFO - 🖌️ Tokenizer saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2699:save_and_push]
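Finally the model and tokenizer are written to the output directory and pushed to the Hub under the classification-specific model id. A hedged sketch of that save-and-push step; `trainer` and `tokenizer` come from the run above, and uploading via HfApi.upload_folder is an assumption (the script could equally use trainer.push_to_hub):

```python
from huggingface_hub import HfApi

output_dir = "../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b"
repo_id = "deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify"

trainer.save_model(output_dir)           # writes model.safetensors + config.json
tokenizer.save_pretrained(output_dir)

api = HfApi()
api.create_repo(repo_id, exist_ok=True)
api.upload_folder(folder_path=output_dir, repo_id=repo_id)
```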