deb101 committed
Commit b7943cb · verified · 1 Parent(s): 61a83a3

Trained classifier model on MIMIC-IV

classification_log_2025-06-03_20-22-05.log ADDED
@@ -0,0 +1,646 @@
1
+ 2025-06-03 20:22:05,457 - INFO - 🛠️ Command-line Arguments: - [multilabel_classify.py:323:print_args]
2
+ 2025-06-03 20:22:05,457 - INFO -
3
+ 🔹 output_dir: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b
4
+ 🔹 source_url: XURLs.MIMIC4_DEMO
5
+ 🔹 data: mimic4_icd10_full
6
+ 🔹 logfile: classification_log
7
+ 🔹 base_dir: ../tmp/MIMIC4_DEMO
8
+ 🔹 hub_model_id: deb101/mistral-7b-instruct-v0.3-mimic4-adapt
9
+ 🔹 model_name: mistralai/Mistral-7B-Instruct-v0.3
10
+ 🔹 max_length: 512
11
+ 🔹 do_fresh_training: True
12
+ 🔹 load_from_checkpoint: False
13
+ 🔹 task: multilabel-classify
14
+ 🔹 num_train_epochs: 4
15
+ 🔹 metric_for_best_model: precision_at_15
16
+ 🔹 learning_rate: 0.0001
17
+ 🔹 final_lr_scheduling: 1e-06
18
+ 🔹 warmup_steps: 500
19
+ 🔹 source: /home/ubuntu/.xcube/data/mimic4_demo
20
+ 🔹 logfile_path: ../tmp/MIMIC4_DEMO/logs/classification_log_2025-06-03_20-22-05.log - [multilabel_classify.py:324:print_args]
21
+ 2025-06-03 20:22:05,457 - INFO - ➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖ - [multilabel_classify.py:325:print_args]
22
+ 2025-06-03 20:22:05,458 - INFO - +----------------------------------------------------+ - [multilabel_classify.py:133:log_with_box]
23
+ 2025-06-03 20:22:05,458 - INFO - | 🤖🏷️ ****Multi-Label Classification Started**** 🏷️🤖 | - [multilabel_classify.py:133:log_with_box]
24
+ 2025-06-03 20:22:05,458 - INFO - +----------------------------------------------------+ - [multilabel_classify.py:133:log_with_box]
25
+ 2025-06-03 20:22:05,458 - INFO - ================================================================================ - [multilabel_classify.py:96:log_section]
26
+ 2025-06-03 20:22:05,458 - INFO - = 📌 INITIALIZING TRAINING ENVIRONMENT = - [multilabel_classify.py:97:log_section]
27
+ 2025-06-03 20:22:05,458 - INFO - ================================================================================ - [multilabel_classify.py:100:log_section]
28
+ 2025-06-03 20:22:05,458 - INFO - Hub Model ID for this multi-label classification task: deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify - [multilabel_classify.py:3227:main]
29
+ 2025-06-03 20:22:05,504 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
30
+ 2025-06-03 20:22:05,505 - INFO - + ✨ LOADING DATASETS + - [multilabel_classify.py:97:log_section]
31
+ 2025-06-03 20:22:05,505 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
32
+ 2025-06-03 20:22:05,505 - INFO - 📊 Loading main datasets.... - [multilabel_classify.py:3245:main]
33
+ 2025-06-03 20:22:13,776 - INFO - 🔍 Total unique labels in dataset: 7942 - [multilabel_classify.py:3086:sample_df_with_full_label_coverage]
34
+ 2025-06-03 20:22:13,789 - INFO - 🧪 Attempt 1: Sampled 122 rows covering 863 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
35
+ 2025-06-03 20:22:13,798 - INFO - 🧪 Attempt 2: Sampled 122 rows covering 816 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
36
+ 2025-06-03 20:22:13,807 - INFO - 🧪 Attempt 3: Sampled 122 rows covering 885 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
37
+ 2025-06-03 20:22:13,816 - INFO - 🧪 Attempt 4: Sampled 122 rows covering 828 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
38
+ 2025-06-03 20:22:13,825 - INFO - 🧪 Attempt 5: Sampled 122 rows covering 879 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
39
+ 2025-06-03 20:22:13,834 - INFO - 🧪 Attempt 6: Sampled 122 rows covering 852 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
40
+ 2025-06-03 20:22:13,842 - INFO - 🧪 Attempt 7: Sampled 122 rows covering 838 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
41
+ 2025-06-03 20:22:13,850 - INFO - 🧪 Attempt 8: Sampled 122 rows covering 851 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
42
+ 2025-06-03 20:22:13,859 - INFO - 🧪 Attempt 9: Sampled 122 rows covering 825 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
43
+ 2025-06-03 20:22:13,867 - INFO - 🧪 Attempt 10: Sampled 122 rows covering 833 labels. - [multilabel_classify.py:3100:sample_df_with_full_label_coverage]
44
+ 2025-06-03 20:22:13,868 - INFO - ⚠️ Skipping label coverage fix. 7109 labels are missing. - [multilabel_classify.py:3118:sample_df_with_full_label_coverage]
45
+ 2025-06-03 20:22:13,868 - INFO - ✅ Final row count: 122 (Valid: 20, Not-valid: 102) - [multilabel_classify.py:3121:sample_df_with_full_label_coverage]
46
+ 2025-06-03 20:22:13,886 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
47
+ 2025-06-03 20:22:13,887 - INFO - + ✨ STARTING FRESH TRAINING + - [multilabel_classify.py:97:log_section]
48
+ 2025-06-03 20:22:13,887 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
49
+ 2025-06-03 20:22:13,887 - INFO - 🔄 Starting fresh training (either forced or model not found)... - [multilabel_classify.py:3266:main]
50
+ 2025-06-03 20:22:13,902 - WARNING - Note: Environment variable`HF_TOKEN` is set and is the current active token independently from the token you've just configured. - [_login.py:415:_login]
51
+ 2025-06-03 20:22:13,902 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
52
+ 2025-06-03 20:22:13,902 - INFO - + ✨ LOADING BASE MODEL + - [multilabel_classify.py:97:log_section]
53
+ 2025-06-03 20:22:13,902 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
54
+ 2025-06-03 20:22:13,902 - INFO - 📥 Loading pretrained model and tokenizer... - [multilabel_classify.py:3296:main]
55
+ 2025-06-03 20:22:13,902 - INFO - 🚀 Starting model and tokenizer loading process... - [multilabel_classify.py:1198:load_base_model_and_tokenizer]
56
+ 2025-06-03 20:22:13,903 - INFO - 📊 Quantization config: BitsAndBytesConfig {
57
+ "_load_in_4bit": true,
58
+ "_load_in_8bit": false,
59
+ "bnb_4bit_compute_dtype": "bfloat16",
60
+ "bnb_4bit_quant_storage": "uint8",
61
+ "bnb_4bit_quant_type": "nf4",
62
+ "bnb_4bit_use_double_quant": true,
63
+ "llm_int8_enable_fp32_cpu_offload": false,
64
+ "llm_int8_has_fp16_weight": false,
65
+ "llm_int8_skip_modules": null,
66
+ "llm_int8_threshold": 6.0,
67
+ "load_in_4bit": true,
68
+ "load_in_8bit": false,
69
+ "quant_method": "bitsandbytes"
70
+ }
71
+ - [multilabel_classify.py:1207:load_base_model_and_tokenizer]
72
+ 2025-06-03 20:22:13,904 - INFO - 🔤 Loading tokenizer for model: deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1211:load_base_model_and_tokenizer]
73
+ 2025-06-03 20:22:14,413 - INFO - 🔍 Checking if deb101/mistral-7b-instruct-v0.3-mimic4-adapt is a PEFT model... - [multilabel_classify.py:1223:load_base_model_and_tokenizer]
74
+ 2025-06-03 20:22:14,438 - INFO - ✅ Detected PEFT model. Base model: mistralai/Mistral-7B-Instruct-v0.3 - [multilabel_classify.py:1227:load_base_model_and_tokenizer]
75
+ 2025-06-03 20:22:14,438 - INFO - 🔍 Loading model configuration for mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1237:load_base_model_and_tokenizer]
76
+ 2025-06-03 20:22:14,461 - INFO - Model type: mistral, Architectures: ['MistralForCausalLM'] - [multilabel_classify.py:1243:load_base_model_and_tokenizer]
77
+ 2025-06-03 20:22:14,461 - INFO - 🧠 Loading base model: mistralai/Mistral-7B-Instruct-v0.3... - [multilabel_classify.py:1306:load_base_model_and_tokenizer]
78
+ 2025-06-03 20:22:15,021 - INFO - We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set `max_memory` in to a higher value to use more memory (at your own risk). - [modeling.py:991:get_balanced_memory]
79
+ 2025-06-03 20:22:20,344 - INFO - 🧩 Loading PEFT adapters for deb101/mistral-7b-instruct-v0.3-mimic4-adapt... - [multilabel_classify.py:1321:load_base_model_and_tokenizer]
80
+ 2025-06-03 20:22:20,612 - INFO - 🔧 Before enabling PEFT adapters for training - [multilabel_classify.py:1323:load_base_model_and_tokenizer]
81
+ 2025-06-03 20:22:20,617 - INFO - 🔧 After Enabling PEFT adapters for training - [multilabel_classify.py:1330:load_base_model_and_tokenizer]
82
+ 2025-06-03 20:22:20,620 - INFO - ✅ Model and tokenizer successfully loaded! - [multilabel_classify.py:1371:load_base_model_and_tokenizer]
83
+ 2025-06-03 20:22:20,620 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
84
+ 2025-06-03 20:22:20,620 - INFO - + ✨ DATA PREPROCESSING + - [multilabel_classify.py:97:log_section]
85
+ 2025-06-03 20:22:20,620 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
86
+ 2025-06-03 20:22:20,620 - INFO - 🔄 Loading and preprocessing training data... - [multilabel_classify.py:3304:main]
87
+ 2025-06-03 20:22:20,624 - INFO - Total number of labels: 833 - [multilabel_classify.py:999:preprocess_data]
88
+ 2025-06-03 20:22:20,624 - INFO - Rare labels (freq < 50): 832 - [multilabel_classify.py:1000:preprocess_data]
89
+ 2025-06-03 20:22:20,624 - INFO - Not rare labels (freq >= 50): 1 - [multilabel_classify.py:1001:preprocess_data]
90
+ 2025-06-03 20:22:20,624 - INFO - Label partitions and classes saved to ../tmp/MIMIC4_DEMO/labels_partition.json - [multilabel_classify.py:1002:preprocess_data]
91
+ 2025-06-03 20:22:21,956 - INFO - The size of training set: 567 - [multilabel_classify.py:1098:preprocess_data]
92
+ 2025-06-03 20:22:21,956 - INFO - The size of Evaluation set: 136 - [multilabel_classify.py:1099:preprocess_data]
93
+ 2025-06-03 20:22:21,961 - INFO - Number of unique ICD-10 codes: 833 - [multilabel_classify.py:3310:main]
94
+ 2025-06-03 20:22:21,961 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
95
+ 2025-06-03 20:22:21,961 - INFO - + ✨ MODEL INITIALIZATION + - [multilabel_classify.py:97:log_section]
96
+ 2025-06-03 20:22:21,961 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
97
+ 2025-06-03 20:22:21,961 - INFO - 🧠 Initializing custom L2R model for outputting per-token relevance scores per ICD-10 codes. - [multilabel_classify.py:3313:main]
98
+ 2025-06-03 20:22:21,962 - INFO - Will now start to create Multilabel-Classification Model from the base model - [multilabel_classify.py:516:__init__]
99
+ 2025-06-03 20:22:21,965 - INFO - Trainable params: 6815744 / 3765178368 (0.18%) - [multilabel_classify.py:568:compute_trainable_params]
100
+ 2025-06-03 20:22:22,304 - INFO - Creating the Multi-Label Classification Model from base model mistralai/Mistral-7B-Instruct-v0.3 completed!!! - [multilabel_classify.py:558:__init__]
101
+ 2025-06-03 20:22:22,307 - INFO - Trainable params: 84177025 / 3842539649 (2.19%) - [multilabel_classify.py:568:compute_trainable_params]
102
+ 2025-06-03 20:22:22,307 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
103
+ 2025-06-03 20:22:22,307 - INFO - + ✨ TRAINING PREPARATION + - [multilabel_classify.py:97:log_section]
104
+ 2025-06-03 20:22:22,307 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
105
+ 2025-06-03 20:22:22,307 - INFO - ⚙️ Preparing training components and optimizers... - [multilabel_classify.py:3320:main]
106
+ 2025-06-03 20:22:22,367 - INFO - 🖥️ Device: NVIDIA GH200 480GB - [multilabel_classify.py:845:log_training_configuration]
107
+ 2025-06-03 20:22:22,367 - INFO - 🔋 CUDA Available: True - [multilabel_classify.py:848:log_training_configuration]
108
+ 2025-06-03 20:22:22,367 - INFO - 💾 CUDA Device Count: 1 - [multilabel_classify.py:849:log_training_configuration]
109
+ 2025-06-03 20:22:22,368 - INFO -
110
+ 📋 Training Configuration 📋
111
+ +----------+-----------------------------+------------------------------------------------------------------+
112
+ | 🌟 Emoji | 🏷️ Parameter | 📊 Value |
113
+ +----------+-----------------------------+------------------------------------------------------------------+
114
+ | 📁 | Output Directory | ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b |
115
+ | 🔁 | Training Epochs | 4 |
116
+ | 🏋️ | Train Batch Size | 8 |
117
+ | 🔍 | Eval Batch Size | 8 |
118
+ | 📊 | Gradient Accumulation Steps | 4 |
119
+ | 🚀 | Learning Rate | 0.0001 |
120
+ | 🌅 | Warmup Steps | 500 |
121
+ | 💾 | Save Strategy | epoch |
122
+ | 💾 | Save Total Limit | 10 |
123
+ | 📊 | Evaluation Strategy | epoch |
124
+ | 🎯 | Best Model Metric | precision_at_15 |
125
+ | 📝 | Logging Strategy | steps (every 10 steps) |
126
+ | 🌐 | Push to Hub | True |
127
+ | 🌐 | Hub Model ID | deb101/mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify |
128
+ | 🔢 | Steps per Epoch | 17 |
129
+ | 🔢 | Total Training Steps | 68 |
130
+ | 🔢 | Evaluation Steps | 17 |
131
+ | 📊 | Training Dataset Size | 567 samples 🏋️ |
132
+ | 📊 | Evaluation Dataset Size | 136 samples 🔍 |
133
+ +----------+-----------------------------+------------------------------------------------------------------+ - [multilabel_classify.py:837:log_training_args]
134
+ 2025-06-03 20:22:22,368 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
135
+ 2025-06-03 20:22:22,368 - INFO - + ✨ MODEL TRAINING + - [multilabel_classify.py:97:log_section]
136
+ 2025-06-03 20:22:22,368 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
137
+ 2025-06-03 20:22:22,368 - INFO - 🏋️ Starting model training process... - [multilabel_classify.py:3340:main]
138
+ 2025-06-03 20:22:22,411 - INFO - We are registering the tokenizer deb101/mistral-7b-instruct-v0.3-mimic4-adapt in Custom Trainer - [multilabel_classify.py:1941:__init__]
139
+ 2025-06-03 20:22:22,667 - INFO - 🚀 Starting Training... - [multilabel_classify.py:1595:on_train_begin]
140
+ 2025-06-03 20:22:43,213 - INFO -
141
+ 🚂 Training Metrics (Step 10) 🚂
142
+ +---------------+---------+
143
+ | Metric | Value |
144
+ +===============+=========+
145
+ | loss | 0.624 |
146
+ +---------------+---------+
147
+ | grad_norm | 6.71916 |
148
+ +---------------+---------+
149
+ | learning_rate | 2e-06 |
150
+ +---------------+---------+
151
+ | epoch | 0.56338 |
152
+ +---------------+---------+ - [multilabel_classify.py:1789:on_log]
153
+ 2025-06-03 20:22:57,293 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1953:evaluate]
154
+ 2025-06-03 20:23:32,597 - INFO -
155
+ 🔍 Evaluation Metrics 🔍
156
+ +-------------------------------+----------+
157
+ | Metric | Value |
158
+ +===============================+==========+
159
+ | eval_f1_micro | 0 |
160
+ +-------------------------------+----------+
161
+ | eval_f1_macro | 0 |
162
+ +-------------------------------+----------+
163
+ | eval_precision_at_5 | 0.005882 |
164
+ +-------------------------------+----------+
165
+ | eval_recall_at_5 | 0.001306 |
166
+ +-------------------------------+----------+
167
+ | eval_precision_at_8 | 0.011029 |
168
+ +-------------------------------+----------+
169
+ | eval_recall_at_8 | 0.003704 |
170
+ +-------------------------------+----------+
171
+ | eval_precision_at_15 | 0.014706 |
172
+ +-------------------------------+----------+
173
+ | eval_recall_at_15 | 0.011093 |
174
+ +-------------------------------+----------+
175
+ | eval_rare_f1_micro | 0 |
176
+ +-------------------------------+----------+
177
+ | eval_rare_f1_macro | 0 |
178
+ +-------------------------------+----------+
179
+ | eval_rare_precision | 0 |
180
+ +-------------------------------+----------+
181
+ | eval_rare_recall | 0 |
182
+ +-------------------------------+----------+
183
+ | eval_rare_precision_at_5 | 0.010294 |
184
+ +-------------------------------+----------+
185
+ | eval_rare_recall_at_5 | 0.002439 |
186
+ +-------------------------------+----------+
187
+ | eval_rare_precision_at_8 | 0.012868 |
188
+ +-------------------------------+----------+
189
+ | eval_rare_recall_at_8 | 0.00592 |
190
+ +-------------------------------+----------+
191
+ | eval_rare_precision_at_15 | 0.014216 |
192
+ +-------------------------------+----------+
193
+ | eval_rare_recall_at_15 | 0.011398 |
194
+ +-------------------------------+----------+
195
+ | eval_not_rare_f1_micro | 0.595588 |
196
+ +-------------------------------+----------+
197
+ | eval_not_rare_f1_macro | 0.373272 |
198
+ +-------------------------------+----------+
199
+ | eval_not_rare_precision | 0.595588 |
200
+ +-------------------------------+----------+
201
+ | eval_not_rare_recall | 0.595588 |
202
+ +-------------------------------+----------+
203
+ | eval_not_rare_precision_at_5 | 0.080882 |
204
+ +-------------------------------+----------+
205
+ | eval_not_rare_recall_at_5 | 0.404412 |
206
+ +-------------------------------+----------+
207
+ | eval_not_rare_precision_at_8 | 0.050551 |
208
+ +-------------------------------+----------+
209
+ | eval_not_rare_recall_at_8 | 0.404412 |
210
+ +-------------------------------+----------+
211
+ | eval_not_rare_precision_at_15 | 0.026961 |
212
+ +-------------------------------+----------+
213
+ | eval_not_rare_recall_at_15 | 0.404412 |
214
+ +-------------------------------+----------+
215
+ | eval_loss | 0.221656 |
216
+ +-------------------------------+----------+ - [multilabel_classify.py:1808:on_evaluate]
217
+ 2025-06-03 20:23:36,490 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18 - [multilabel_classify.py:2046:_save]
218
+ 2025-06-03 20:23:36,493 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18 - [multilabel_classify.py:2051:_save]
219
+ 2025-06-03 20:23:36,494 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-18:
220
+ +---------+--------------------+------------+
221
+ | Index | Saved File | Size |
222
+ +=========+====================+============+
223
+ | 1 | training_args.bin | 0.01 MB |
224
+ +---------+--------------------+------------+
225
+ | 2 | optimizer.pt | 642.30 MB |
226
+ +---------+--------------------+------------+
227
+ | 3 | model.safetensors | 4267.74 MB |
228
+ +---------+--------------------+------------+
229
+ | 4 | scaler.pt | 0.00 MB |
230
+ +---------+--------------------+------------+
231
+ | 5 | config.json | 0.04 MB |
232
+ +---------+--------------------+------------+
233
+ | 6 | scheduler.pt | 0.00 MB |
234
+ +---------+--------------------+------------+
235
+ | 7 | trainer_state.json | 0.00 MB |
236
+ +---------+--------------------+------------+
237
+ | 8 | rng_state.pth | 0.01 MB |
238
+ +---------+--------------------+------------+ - [multilabel_classify.py:2068:_save]
239
+ 2025-06-03 20:23:42,667 - INFO -
240
+ 🚂 Training Metrics (Step 20) 🚂
241
+ +---------------+---------+
242
+ | Metric | Value |
243
+ +===============+=========+
244
+ | loss | 0.3386 |
245
+ +---------------+---------+
246
+ | grad_norm | 2.49967 |
247
+ +---------------+---------+
248
+ | learning_rate | 4e-06 |
249
+ +---------------+---------+
250
+ | epoch | 1.11268 |
251
+ +---------------+---------+ - [multilabel_classify.py:1789:on_log]
252
+ 2025-06-03 20:24:00,820 - INFO -
253
+ 🚂 Training Metrics (Step 30) 🚂
254
+ +---------------+----------+
255
+ | Metric | Value |
256
+ +===============+==========+
257
+ | loss | 0.1249 |
258
+ +---------------+----------+
259
+ | grad_norm | 0.258035 |
260
+ +---------------+----------+
261
+ | learning_rate | 6e-06 |
262
+ +---------------+----------+
263
+ | epoch | 1.67606 |
264
+ +---------------+----------+ - [multilabel_classify.py:1789:on_log]
265
+ 2025-06-03 20:24:11,212 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1953:evaluate]
266
+ 2025-06-03 20:24:46,604 - INFO -
267
+ 🔍 Evaluation Metrics 🔍
268
+ +-------------------------------+----------+
269
+ | Metric | Value |
270
+ +===============================+==========+
271
+ | eval_f1_micro | 0 |
272
+ +-------------------------------+----------+
273
+ | eval_f1_macro | 0 |
274
+ +-------------------------------+----------+
275
+ | eval_precision_at_5 | 0.008824 |
276
+ +-------------------------------+----------+
277
+ | eval_recall_at_5 | 0.001777 |
278
+ +-------------------------------+----------+
279
+ | eval_precision_at_8 | 0.011029 |
280
+ +-------------------------------+----------+
281
+ | eval_recall_at_8 | 0.003763 |
282
+ +-------------------------------+----------+
283
+ | eval_precision_at_15 | 0.015196 |
284
+ +-------------------------------+----------+
285
+ | eval_recall_at_15 | 0.010958 |
286
+ +-------------------------------+----------+
287
+ | eval_rare_f1_micro | 0 |
288
+ +-------------------------------+----------+
289
+ | eval_rare_f1_macro | 0 |
290
+ +-------------------------------+----------+
291
+ | eval_rare_precision | 0 |
292
+ +-------------------------------+----------+
293
+ | eval_rare_recall | 0 |
294
+ +-------------------------------+----------+
295
+ | eval_rare_precision_at_5 | 0.008824 |
296
+ +-------------------------------+----------+
297
+ | eval_rare_recall_at_5 | 0.001785 |
298
+ +-------------------------------+----------+
299
+ | eval_rare_precision_at_8 | 0.011029 |
300
+ +-------------------------------+----------+
301
+ | eval_rare_recall_at_8 | 0.004688 |
302
+ +-------------------------------+----------+
303
+ | eval_rare_precision_at_15 | 0.014216 |
304
+ +-------------------------------+----------+
305
+ | eval_rare_recall_at_15 | 0.012365 |
306
+ +-------------------------------+----------+
307
+ | eval_not_rare_f1_micro | 0.595588 |
308
+ +-------------------------------+----------+
309
+ | eval_not_rare_f1_macro | 0.373272 |
310
+ +-------------------------------+----------+
311
+ | eval_not_rare_precision | 0.595588 |
312
+ +-------------------------------+----------+
313
+ | eval_not_rare_recall | 0.595588 |
314
+ +-------------------------------+----------+
315
+ | eval_not_rare_precision_at_5 | 0.080882 |
316
+ +-------------------------------+----------+
317
+ | eval_not_rare_recall_at_5 | 0.404412 |
318
+ +-------------------------------+----------+
319
+ | eval_not_rare_precision_at_8 | 0.050551 |
320
+ +-------------------------------+----------+
321
+ | eval_not_rare_recall_at_8 | 0.404412 |
322
+ +-------------------------------+----------+
323
+ | eval_not_rare_precision_at_15 | 0.026961 |
324
+ +-------------------------------+----------+
325
+ | eval_not_rare_recall_at_15 | 0.404412 |
326
+ +-------------------------------+----------+
327
+ | eval_loss | 0.121922 |
328
+ +-------------------------------+----------+ - [multilabel_classify.py:1808:on_evaluate]
329
+ 2025-06-03 20:24:50,330 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36 - [multilabel_classify.py:2046:_save]
330
+ 2025-06-03 20:24:50,333 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36 - [multilabel_classify.py:2051:_save]
331
+ 2025-06-03 20:24:50,334 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-36:
332
+ +---------+--------------------+------------+
333
+ | Index | Saved File | Size |
334
+ +=========+====================+============+
335
+ | 1 | training_args.bin | 0.01 MB |
336
+ +---------+--------------------+------------+
337
+ | 2 | optimizer.pt | 642.30 MB |
338
+ +---------+--------------------+------------+
339
+ | 3 | model.safetensors | 4267.74 MB |
340
+ +---------+--------------------+------------+
341
+ | 4 | scaler.pt | 0.00 MB |
342
+ +---------+--------------------+------------+
343
+ | 5 | config.json | 0.04 MB |
344
+ +---------+--------------------+------------+
345
+ | 6 | scheduler.pt | 0.00 MB |
346
+ +---------+--------------------+------------+
347
+ | 7 | trainer_state.json | 0.00 MB |
348
+ +---------+--------------------+------------+
349
+ | 8 | rng_state.pth | 0.01 MB |
350
+ +---------+--------------------+------------+ - [multilabel_classify.py:2068:_save]
351
+ 2025-06-03 20:25:00,280 - INFO -
352
+ 🚂 Training Metrics (Step 40) 🚂
353
+ +---------------+----------+
354
+ | Metric | Value |
355
+ +===============+==========+
356
+ | loss | 0.1098 |
357
+ +---------------+----------+
358
+ | grad_norm | 0.470693 |
359
+ +---------------+----------+
360
+ | learning_rate | 8e-06 |
361
+ +---------------+----------+
362
+ | epoch | 2.22535 |
363
+ +---------------+----------+ - [multilabel_classify.py:1789:on_log]
364
+ 2025-06-03 20:25:18,391 - INFO -
365
+ 🚂 Training Metrics (Step 50) 🚂
366
+ +---------------+----------+
367
+ | Metric | Value |
368
+ +===============+==========+
369
+ | loss | 0.1086 |
370
+ +---------------+----------+
371
+ | grad_norm | 0.067761 |
372
+ +---------------+----------+
373
+ | learning_rate | 1e-05 |
374
+ +---------------+----------+
375
+ | epoch | 2.78873 |
376
+ +---------------+----------+ - [multilabel_classify.py:1789:on_log]
377
+ 2025-06-03 20:25:25,137 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1953:evaluate]
378
+ 2025-06-03 20:26:00,429 - INFO -
379
+ 🔍 Evaluation Metrics 🔍
380
+ +-------------------------------+----------+
381
+ | Metric | Value |
382
+ +===============================+==========+
383
+ | eval_f1_micro | 0 |
384
+ +-------------------------------+----------+
385
+ | eval_f1_macro | 0 |
386
+ +-------------------------------+----------+
387
+ | eval_precision_at_5 | 0.047059 |
388
+ +-------------------------------+----------+
389
+ | eval_recall_at_5 | 0.012123 |
390
+ +-------------------------------+----------+
391
+ | eval_precision_at_8 | 0.050551 |
392
+ +-------------------------------+----------+
393
+ | eval_recall_at_8 | 0.02375 |
394
+ +-------------------------------+----------+
395
+ | eval_precision_at_15 | 0.045588 |
396
+ +-------------------------------+----------+
397
+ | eval_recall_at_15 | 0.045718 |
398
+ +-------------------------------+----------+
399
+ | eval_rare_f1_micro | 0 |
400
+ +-------------------------------+----------+
401
+ | eval_rare_f1_macro | 0 |
402
+ +-------------------------------+----------+
403
+ | eval_rare_precision | 0 |
404
+ +-------------------------------+----------+
405
+ | eval_rare_recall | 0 |
406
+ +-------------------------------+----------+
407
+ | eval_rare_precision_at_5 | 0.036765 |
408
+ +-------------------------------+----------+
409
+ | eval_rare_recall_at_5 | 0.010554 |
410
+ +-------------------------------+----------+
411
+ | eval_rare_precision_at_8 | 0.04136 |
412
+ +-------------------------------+----------+
413
+ | eval_rare_recall_at_8 | 0.019845 |
414
+ +-------------------------------+----------+
415
+ | eval_rare_precision_at_15 | 0.036765 |
416
+ +-------------------------------+----------+
417
+ | eval_rare_recall_at_15 | 0.037016 |
418
+ +-------------------------------+----------+
419
+ | eval_not_rare_f1_micro | 0.595588 |
420
+ +-------------------------------+----------+
421
+ | eval_not_rare_f1_macro | 0.373272 |
422
+ +-------------------------------+----------+
423
+ | eval_not_rare_precision | 0.595588 |
424
+ +-------------------------------+----------+
425
+ | eval_not_rare_recall | 0.595588 |
426
+ +-------------------------------+----------+
427
+ | eval_not_rare_precision_at_5 | 0.080882 |
428
+ +-------------------------------+----------+
429
+ | eval_not_rare_recall_at_5 | 0.404412 |
430
+ +-------------------------------+----------+
431
+ | eval_not_rare_precision_at_8 | 0.050551 |
432
+ +-------------------------------+----------+
433
+ | eval_not_rare_recall_at_8 | 0.404412 |
434
+ +-------------------------------+----------+
435
+ | eval_not_rare_precision_at_15 | 0.026961 |
436
+ +-------------------------------+----------+
437
+ | eval_not_rare_recall_at_15 | 0.404412 |
438
+ +-------------------------------+----------+
439
+ | eval_loss | 0.107352 |
440
+ +-------------------------------+----------+ - [multilabel_classify.py:1808:on_evaluate]
441
+ 2025-06-03 20:26:01,710 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54 - [multilabel_classify.py:2046:_save]
442
+ 2025-06-03 20:26:01,713 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54 - [multilabel_classify.py:2051:_save]
443
+ 2025-06-03 20:26:01,714 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-54:
444
+ +---------+-------------------+------------+
445
+ | Index | Saved File | Size |
446
+ +=========+===================+============+
447
+ | 1 | training_args.bin | 0.01 MB |
448
+ +---------+-------------------+------------+
449
+ | 2 | model.safetensors | 4267.74 MB |
450
+ +---------+-------------------+------------+
451
+ | 3 | config.json | 0.04 MB |
452
+ +---------+-------------------+------------+ - [multilabel_classify.py:2068:_save]
453
+ 2025-06-03 20:26:15,035 - INFO -
454
+ 🚂 Training Metrics (Step 60) 🚂
455
+ +---------------+----------+
456
+ | Metric | Value |
457
+ +===============+==========+
458
+ | loss | 0.1033 |
459
+ +---------------+----------+
460
+ | grad_norm | 0.206313 |
461
+ +---------------+----------+
462
+ | learning_rate | 1.2e-05 |
463
+ +---------------+----------+
464
+ | epoch | 3.33803 |
465
+ +---------------+----------+ - [multilabel_classify.py:1789:on_log]
466
+ 2025-06-03 20:26:30,807 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2046:_save]
467
+ 2025-06-03 20:26:30,810 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2051:_save]
468
+ 2025-06-03 20:26:30,811 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68:
469
+ +---------+-------------------+------------+
470
+ | Index | Saved File | Size |
471
+ +=========+===================+============+
472
+ | 1 | training_args.bin | 0.01 MB |
473
+ +---------+-------------------+------------+
474
+ | 2 | model.safetensors | 4267.74 MB |
475
+ +---------+-------------------+------------+
476
+ | 3 | config.json | 0.04 MB |
477
+ +---------+-------------------+------------+ - [multilabel_classify.py:2068:_save]
478
+ 2025-06-03 20:26:31,136 - INFO - Removing 'token_type_ids' from eval_dataset as they are not needed. - [multilabel_classify.py:1953:evaluate]
479
+ 2025-06-03 20:27:06,536 - INFO -
480
+ 🔍 Evaluation Metrics 🔍
481
+ +-------------------------------+----------+
482
+ | Metric | Value |
483
+ +===============================+==========+
484
+ | eval_f1_micro | 0 |
485
+ +-------------------------------+----------+
486
+ | eval_f1_macro | 0 |
487
+ +-------------------------------+----------+
488
+ | eval_precision_at_5 | 0.167647 |
489
+ +-------------------------------+----------+
490
+ | eval_recall_at_5 | 0.059694 |
491
+ +-------------------------------+----------+
492
+ | eval_precision_at_8 | 0.137868 |
493
+ +-------------------------------+----------+
494
+ | eval_recall_at_8 | 0.077015 |
495
+ +-------------------------------+----------+
496
+ | eval_precision_at_15 | 0.092647 |
497
+ +-------------------------------+----------+
498
+ | eval_recall_at_15 | 0.099223 |
499
+ +-------------------------------+----------+
500
+ | eval_rare_f1_micro | 0 |
501
+ +-------------------------------+----------+
502
+ | eval_rare_f1_macro | 0 |
503
+ +-------------------------------+----------+
504
+ | eval_rare_precision | 0 |
505
+ +-------------------------------+----------+
506
+ | eval_rare_recall | 0 |
507
+ +-------------------------------+----------+
508
+ | eval_rare_precision_at_5 | 0.127941 |
509
+ +-------------------------------+----------+
510
+ | eval_rare_recall_at_5 | 0.046166 |
511
+ +-------------------------------+----------+
512
+ | eval_rare_precision_at_8 | 0.09375 |
513
+ +-------------------------------+----------+
514
+ | eval_rare_recall_at_8 | 0.052229 |
515
+ +-------------------------------+----------+
516
+ | eval_rare_precision_at_15 | 0.064216 |
517
+ +-------------------------------+----------+
518
+ | eval_rare_recall_at_15 | 0.069951 |
519
+ +-------------------------------+----------+
520
+ | eval_not_rare_f1_micro | 0.595588 |
521
+ +-------------------------------+----------+
522
+ | eval_not_rare_f1_macro | 0.373272 |
523
+ +-------------------------------+----------+
524
+ | eval_not_rare_precision | 0.595588 |
525
+ +-------------------------------+----------+
526
+ | eval_not_rare_recall | 0.595588 |
527
+ +-------------------------------+----------+
528
+ | eval_not_rare_precision_at_5 | 0.080882 |
529
+ +-------------------------------+----------+
530
+ | eval_not_rare_recall_at_5 | 0.404412 |
531
+ +-------------------------------+----------+
532
+ | eval_not_rare_precision_at_8 | 0.050551 |
533
+ +-------------------------------+----------+
534
+ | eval_not_rare_recall_at_8 | 0.404412 |
535
+ +-------------------------------+----------+
536
+ | eval_not_rare_precision_at_15 | 0.026961 |
537
+ +-------------------------------+----------+
538
+ | eval_not_rare_recall_at_15 | 0.404412 |
539
+ +-------------------------------+----------+
540
+ | eval_loss | 0.10503 |
541
+ +-------------------------------+----------+ - [multilabel_classify.py:1808:on_evaluate]
542
+ 2025-06-03 20:27:10,158 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2046:_save]
543
+ 2025-06-03 20:27:10,161 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2051:_save]
544
+ 2025-06-03 20:27:10,163 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68:
545
+ +---------+--------------------+------------+
546
+ | Index | Saved File | Size |
547
+ +=========+====================+============+
548
+ | 1 | training_args.bin | 0.01 MB |
549
+ +---------+--------------------+------------+
550
+ | 2 | optimizer.pt | 642.30 MB |
551
+ +---------+--------------------+------------+
552
+ | 3 | model.safetensors | 4267.74 MB |
553
+ +---------+--------------------+------------+
554
+ | 4 | scaler.pt | 0.00 MB |
555
+ +---------+--------------------+------------+
556
+ | 5 | config.json | 0.04 MB |
557
+ +---------+--------------------+------------+
558
+ | 6 | scheduler.pt | 0.00 MB |
559
+ +---------+--------------------+------------+
560
+ | 7 | trainer_state.json | 0.01 MB |
561
+ +---------+--------------------+------------+
562
+ | 8 | rng_state.pth | 0.01 MB |
563
+ +---------+--------------------+------------+ - [multilabel_classify.py:2068:_save]
564
+ 2025-06-03 20:27:10,994 - INFO - 📂 Loading best model from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68 - [multilabel_classify.py:2120:_load_best_model]
565
+ 2025-06-03 20:27:10,994 - INFO - 🖥️ Model is on device: cuda:0 - [multilabel_classify.py:2130:_load_best_model]
566
+ 2025-06-03 20:27:11,049 - INFO - 🔑 Key order comparison:
567
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
568
+ | Index | Saved state_dict Keys | Model state_dict Keys |
569
+ +=========+============================================+======================================================================================+
570
+ | 1 | attention.in_proj_bias | boost_mul |
571
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
572
+ | 2 | attention.in_proj_weight | boost_add |
573
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
574
+ | 3 | attention.out_proj.bias | base_model.base_model.model.model.embed_tokens.weight |
575
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
576
+ | 4 | attention.out_proj.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight |
577
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+
578
+ | 5 | base_model.base_model.model.lm_head.weight | base_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight.absmax |
579
+ +---------+--------------------------------------------+--------------------------------------------------------------------------------------+ - [multilabel_classify.py:2154:_load_best_model]
580
+ 2025-06-03 20:27:12,079 - INFO - ✅ Loaded best model weights from ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/checkpoint-68/model.safetensors - [multilabel_classify.py:2171:_load_best_model]
581
+ 2025-06-03 20:27:12,105 - INFO - ✔️ Weight for boost_mul matches between saved and loaded state_dict - [multilabel_classify.py:2183:_load_best_model]
582
+ 2025-06-03 20:27:12,129 - INFO - ✔️ Weight for boost_add matches between saved and loaded state_dict - [multilabel_classify.py:2183:_load_best_model]
583
+ 2025-06-03 20:27:12,152 - INFO -
584
+ 🚂 Training Metrics (Step 68) 🚂
585
+ +--------------------------+---------+
586
+ | Metric | Value |
587
+ +==========================+=========+
588
+ | train_runtime | 289.486 |
589
+ +--------------------------+---------+
590
+ | train_samples_per_second | 7.835 |
591
+ +--------------------------+---------+
592
+ | train_steps_per_second | 0.235 |
593
+ +--------------------------+---------+
594
+ | total_flos | 0 |
595
+ +--------------------------+---------+
596
+ | train_loss | 0.21949 |
597
+ +--------------------------+---------+
598
+ | epoch | 3.78873 |
599
+ +--------------------------+---------+ - [multilabel_classify.py:1789:on_log]
600
+ 2025-06-03 20:27:12,152 - INFO - ✨ Training Completed! ✨ - [multilabel_classify.py:1662:on_train_end]
601
+ 2025-06-03 20:27:12,224 - INFO - 📊 Training loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/train_loss_plot.png' - [multilabel_classify.py:1858:on_train_end]
602
+ 2025-06-03 20:27:12,286 - INFO - 📊 Evaluation loss plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_loss_plot.png' - [multilabel_classify.py:1872:on_train_end]
603
+ 2025-06-03 20:27:12,348 - INFO - 📊 Evaluation metric plot saved as '../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b/eval_precision_at_15_plot.png' - [multilabel_classify.py:1893:on_train_end]
604
+ 2025-06-03 20:27:12,348 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:96:log_section]
605
+ 2025-06-03 20:27:12,348 - INFO - + ✨ MODEL SAVING + - [multilabel_classify.py:97:log_section]
606
+ 2025-06-03 20:27:12,348 - INFO - ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - [multilabel_classify.py:100:log_section]
607
+ 2025-06-03 20:27:12,349 - INFO - 💾 Saving trained model and pushing to Hugging Face Hub... - [multilabel_classify.py:3354:main]
608
+ 2025-06-03 20:27:12,349 - INFO - 📁 Creating/using output directory: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2646:save_and_push]
609
+ 2025-06-03 20:27:13,614 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2046:_save]
610
+ 2025-06-03 20:27:13,617 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2051:_save]
611
+ 2025-06-03 20:27:13,618 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b:
612
+ +---------+-------------------------------+------------+
613
+ | Index | Saved File | Size |
614
+ +=========+===============================+============+
615
+ | 1 | eval_loss_plot.png | 0.03 MB |
616
+ +---------+-------------------------------+------------+
617
+ | 2 | training_args.bin | 0.01 MB |
618
+ +---------+-------------------------------+------------+
619
+ | 3 | model.safetensors | 4267.74 MB |
620
+ +---------+-------------------------------+------------+
621
+ | 4 | config.json | 0.04 MB |
622
+ +---------+-------------------------------+------------+
623
+ | 5 | train_loss_plot.png | 0.02 MB |
624
+ +---------+-------------------------------+------------+
625
+ | 6 | eval_precision_at_15_plot.png | 0.03 MB |
626
+ +---------+-------------------------------+------------+ - [multilabel_classify.py:2068:_save]
627
+ 2025-06-03 20:27:17,349 - INFO - 💾 Model weights saved in safetensors format: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2046:_save]
628
+ 2025-06-03 20:27:17,352 - INFO - ⚙️ Config saved in checkpoint: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2051:_save]
629
+ 2025-06-03 20:27:17,354 - INFO - 📋 Saved files in ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b:
630
+ +---------+-------------------------------+------------+
631
+ | Index | Saved File | Size |
632
+ +=========+===============================+============+
633
+ | 1 | eval_loss_plot.png | 0.03 MB |
634
+ +---------+-------------------------------+------------+
635
+ | 2 | training_args.bin | 0.01 MB |
636
+ +---------+-------------------------------+------------+
637
+ | 3 | model.safetensors | 4267.74 MB |
638
+ +---------+-------------------------------+------------+
639
+ | 4 | config.json | 0.04 MB |
640
+ +---------+-------------------------------+------------+
641
+ | 5 | train_loss_plot.png | 0.02 MB |
642
+ +---------+-------------------------------+------------+
643
+ | 6 | eval_precision_at_15_plot.png | 0.03 MB |
644
+ +---------+-------------------------------+------------+ - [multilabel_classify.py:2068:_save]
645
+ 2025-06-03 20:28:44,815 - INFO - 💾 Model saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2650:save_and_push]
646
+ 2025-06-03 20:28:44,845 - INFO - 🖌️ Tokenizer saved to: ../tmp/MIMIC4_DEMO/mimic4_classify_mistral7b - [multilabel_classify.py:2654:save_and_push]
tokenizer.json CHANGED
@@ -1,19 +1,7 @@
  {
    "version": "1.0",
-   "truncation": {
-     "direction": "Right",
-     "max_length": 512,
-     "strategy": "LongestFirst",
-     "stride": 0
-   },
-   "padding": {
-     "strategy": "BatchLongest",
-     "direction": "Left",
-     "pad_to_multiple_of": null,
-     "pad_id": 2,
-     "pad_type_id": 0,
-     "pad_token": "</s>"
-   },
+   "truncation": null,
+   "padding": null,
    "added_tokens": [
      {
        "id": 0,