finalform committed on
Commit 43898da · verified · 1 Parent(s): e63e356

Delete checkpoint-750
checkpoint-750/README.md DELETED
@@ -1,202 +0,0 @@
- ---
- base_model: Qwen/Qwen2.5-Coder-7B-Instruct
- library_name: peft
- ---
-
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
- ### Framework versions
-
- - PEFT 0.15.2
 
checkpoint-750/adapter_config.json DELETED
@@ -1,39 +0,0 @@
- {
- "alpha_pattern": {},
- "auto_mapping": null,
- "base_model_name_or_path": "Qwen/Qwen2.5-Coder-7B-Instruct",
- "bias": "none",
- "corda_config": null,
- "eva_config": null,
- "exclude_modules": null,
- "fan_in_fan_out": false,
- "inference_mode": true,
- "init_lora_weights": true,
- "layer_replication": null,
- "layers_pattern": null,
- "layers_to_transform": null,
- "loftq_config": {},
- "lora_alpha": 16,
- "lora_bias": false,
- "lora_dropout": 0.1,
- "megatron_config": null,
- "megatron_core": "megatron.core",
- "modules_to_save": null,
- "peft_type": "LORA",
- "r": 32,
- "rank_pattern": {},
- "revision": null,
- "target_modules": [
- "down_proj",
- "q_proj",
- "k_proj",
- "gate_proj",
- "o_proj",
- "v_proj",
- "up_proj"
- ],
- "task_type": "CAUSAL_LM",
- "trainable_token_indices": null,
- "use_dora": false,
- "use_rslora": false
- }
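The deleted adapter_config.json describes a LoRA adapter (rank r=32, lora_alpha=16, dropout 0.1) over all seven attention and MLP projections of Qwen/Qwen2.5-Coder-7B-Instruct. As a minimal sketch (not part of this repo), reconstructing the key fields in plain Python makes the effective LoRA scaling factor alpha/r explicit:

```python
import json

# Key fields copied from the deleted checkpoint-750/adapter_config.json above
adapter_config = json.loads("""
{
  "base_model_name_or_path": "Qwen/Qwen2.5-Coder-7B-Instruct",
  "peft_type": "LORA",
  "r": 32,
  "lora_alpha": 16,
  "lora_dropout": 0.1,
  "target_modules": ["down_proj", "q_proj", "k_proj", "gate_proj",
                     "o_proj", "v_proj", "up_proj"],
  "task_type": "CAUSAL_LM"
}
""")

# PEFT scales the low-rank update BA by lora_alpha / r
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
print(scaling)                                 # 0.5
print(len(adapter_config["target_modules"]))   # 7 projections targeted
```

With alpha below r, the update is down-weighted relative to the common alpha = 2r convention.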
 
checkpoint-750/adapter_model.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a4a77617550e1e326304ab8eeabdd44835dc0758327eba5fd15508c15e55c354
- size 323014168
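The adapter weights live in Git LFS, so the repository itself only stores a three-line pointer (version, oid, size) like the one above. A small stdlib-only sketch of how such a pointer can be parsed:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the deleted adapter_model.safetensors above
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a4a77617550e1e326304ab8eeabdd44835dc0758327eba5fd15508c15e55c354
size 323014168
"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e6)  # the adapter tensors are roughly 323 MB
```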
 
checkpoint-750/added_tokens.json DELETED
@@ -1,24 +0,0 @@
- {
- "</tool_call>": 151658,
- "<tool_call>": 151657,
- "<|box_end|>": 151649,
- "<|box_start|>": 151648,
- "<|endoftext|>": 151643,
- "<|file_sep|>": 151664,
- "<|fim_middle|>": 151660,
- "<|fim_pad|>": 151662,
- "<|fim_prefix|>": 151659,
- "<|fim_suffix|>": 151661,
- "<|im_end|>": 151645,
- "<|im_start|>": 151644,
- "<|image_pad|>": 151655,
- "<|object_ref_end|>": 151647,
- "<|object_ref_start|>": 151646,
- "<|quad_end|>": 151651,
- "<|quad_start|>": 151650,
- "<|repo_name|>": 151663,
- "<|video_pad|>": 151656,
- "<|vision_end|>": 151653,
- "<|vision_pad|>": 151654,
- "<|vision_start|>": 151652
- }
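The deleted added_tokens.json maps Qwen's extra control tokens into the id range 151643–151664, directly after the base vocabulary. A quick sketch (ids copied from the file above) confirms the 22 ids form one contiguous block:

```python
# Token-to-id map copied from the deleted checkpoint-750/added_tokens.json
added_tokens = {
    "</tool_call>": 151658, "<tool_call>": 151657, "<|box_end|>": 151649,
    "<|box_start|>": 151648, "<|endoftext|>": 151643, "<|file_sep|>": 151664,
    "<|fim_middle|>": 151660, "<|fim_pad|>": 151662, "<|fim_prefix|>": 151659,
    "<|fim_suffix|>": 151661, "<|im_end|>": 151645, "<|im_start|>": 151644,
    "<|image_pad|>": 151655, "<|object_ref_end|>": 151647,
    "<|object_ref_start|>": 151646, "<|quad_end|>": 151651,
    "<|quad_start|>": 151650, "<|repo_name|>": 151663,
    "<|video_pad|>": 151656, "<|vision_end|>": 151653,
    "<|vision_pad|>": 151654, "<|vision_start|>": 151652,
}

ids = sorted(added_tokens.values())
# 22 contiguous ids with no gaps, starting at <|endoftext|>
assert ids == list(range(151643, 151665))
print(len(ids))  # 22
```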
 
checkpoint-750/chat_template.jinja DELETED
@@ -1,54 +0,0 @@
- {%- if tools %}
- {{- '<|im_start|>system\n' }}
- {%- if messages[0]['role'] == 'system' %}
- {{- messages[0]['content'] }}
- {%- else %}
- {{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
- {%- endif %}
- {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
- {%- for tool in tools %}
- {{- "\n" }}
- {{- tool | tojson }}
- {%- endfor %}
- {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
- {%- else %}
- {%- if messages[0]['role'] == 'system' %}
- {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
- {%- else %}
- {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
- {%- endif %}
- {%- endif %}
- {%- for message in messages %}
- {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
- {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
- {%- elif message.role == "assistant" %}
- {{- '<|im_start|>' + message.role }}
- {%- if message.content %}
- {{- '\n' + message.content }}
- {%- endif %}
- {%- for tool_call in message.tool_calls %}
- {%- if tool_call.function is defined %}
- {%- set tool_call = tool_call.function %}
- {%- endif %}
- {{- '\n<tool_call>\n{"name": "' }}
- {{- tool_call.name }}
- {{- '", "arguments": ' }}
- {{- tool_call.arguments | tojson }}
- {{- '}\n</tool_call>' }}
- {%- endfor %}
- {{- '<|im_end|>\n' }}
- {%- elif message.role == "tool" %}
- {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
- {{- '<|im_start|>user' }}
- {%- endif %}
- {{- '\n<tool_response>\n' }}
- {{- message.content }}
- {{- '\n</tool_response>' }}
- {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
- {{- '<|im_end|>\n' }}
- {%- endif %}
- {%- endif %}
- {%- endfor %}
- {%- if add_generation_prompt %}
- {{- '<|im_start|>assistant\n' }}
- {%- endif %}
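The deleted chat_template.jinja is Qwen's ChatML template: each turn is wrapped in `<|im_start|>role ... <|im_end|>`, a default system prompt is injected when none is supplied, and an assistant header is appended when generation is requested. A simplified stdlib sketch of the no-tools branch only (the real template additionally handles tool calls and tool responses):

```python
DEFAULT_SYSTEM = ("You are Qwen, created by Alibaba Cloud. "
                  "You are a helpful assistant.")

def apply_chat_template(messages, add_generation_prompt=True):
    """Simplified ChatML rendering mirroring the template's no-tools branch."""
    if not messages or messages[0]["role"] != "system":
        messages = [{"role": "system", "content": DEFAULT_SYSTEM}] + messages
    out = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

text = apply_chat_template([{"role": "user", "content": "hello"}])
print(text)
```

Note that `<|im_end|>` also serves as both eos and pad token in this checkpoint's tokenizer files.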
 
checkpoint-750/merges.txt DELETED
The diff for this file is too large to render. See raw diff
 
checkpoint-750/optimizer.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ebda1bce5fc59da7794c977c54f2700d89174dbd9f958164c87db00a6057de5b
- size 646164683
 
checkpoint-750/rng_state.pth DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c851de74280efaa2b302269ad9d3377a1bf73f8d2e95813e6a60e5f2944be538
- size 14645
 
checkpoint-750/scheduler.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f219c746dd03301eb7ed441bdaf27ac17211dfd9896a46898ad0379f606de1bf
- size 1465
 
checkpoint-750/special_tokens_map.json DELETED
@@ -1,25 +0,0 @@
- {
- "additional_special_tokens": [
- "<|im_start|>",
- "<|im_end|>",
- "<|object_ref_start|>",
- "<|object_ref_end|>",
- "<|box_start|>",
- "<|box_end|>",
- "<|quad_start|>",
- "<|quad_end|>",
- "<|vision_start|>",
- "<|vision_end|>",
- "<|vision_pad|>",
- "<|image_pad|>",
- "<|video_pad|>"
- ],
- "eos_token": {
- "content": "<|im_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false
- },
- "pad_token": "<|im_end|>"
- }
 
checkpoint-750/tokenizer.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c3f9d93e80cff961819dcba7d892cf9656e086a0cf83cdbef23f10c1a493faa2
- size 11422061
 
checkpoint-750/tokenizer_config.json DELETED
@@ -1,207 +0,0 @@
- {
- "add_bos_token": false,
- "add_prefix_space": false,
- "added_tokens_decoder": {
- "151643": {
- "content": "<|endoftext|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151644": {
- "content": "<|im_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151645": {
- "content": "<|im_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151646": {
- "content": "<|object_ref_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151647": {
- "content": "<|object_ref_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151648": {
- "content": "<|box_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151649": {
- "content": "<|box_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151650": {
- "content": "<|quad_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151651": {
- "content": "<|quad_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151652": {
- "content": "<|vision_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151653": {
- "content": "<|vision_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151654": {
- "content": "<|vision_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151655": {
- "content": "<|image_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151656": {
- "content": "<|video_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151657": {
- "content": "<tool_call>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151658": {
- "content": "</tool_call>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151659": {
- "content": "<|fim_prefix|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151660": {
- "content": "<|fim_middle|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151661": {
- "content": "<|fim_suffix|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151662": {
- "content": "<|fim_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151663": {
- "content": "<|repo_name|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151664": {
- "content": "<|file_sep|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- }
- },
- "additional_special_tokens": [
- "<|im_start|>",
- "<|im_end|>",
- "<|object_ref_start|>",
- "<|object_ref_end|>",
- "<|box_start|>",
- "<|box_end|>",
- "<|quad_start|>",
- "<|quad_end|>",
- "<|vision_start|>",
- "<|vision_end|>",
- "<|vision_pad|>",
- "<|image_pad|>",
- "<|video_pad|>"
- ],
- "bos_token": null,
- "clean_up_tokenization_spaces": false,
- "eos_token": "<|im_end|>",
- "errors": "replace",
- "extra_special_tokens": {},
- "model_max_length": 32768,
- "pad_token": "<|im_end|>",
- "split_special_tokens": false,
- "tokenizer_class": "Qwen2Tokenizer",
- "unk_token": null
- }
 
checkpoint-750/trainer_state.json DELETED
@@ -1,304 +0,0 @@
- {
- "best_global_step": null,
- "best_metric": null,
- "best_model_checkpoint": null,
- "epoch": 1.8086904043452021,
- "eval_steps": 500,
- "global_step": 750,
- "is_hyper_param_search": false,
- "is_local_process_zero": true,
- "is_world_process_zero": true,
- "log_history": [
- {
- "epoch": 0.060350030175015085,
- "grad_norm": 0.33087673783302307,
- "learning_rate": 0.0001894736842105263,
- "loss": 1.6792,
- "mean_token_accuracy": 0.6403850489854812,
- "num_tokens": 155302.0,
- "step": 25
- },
- {
- "epoch": 0.12070006035003017,
- "grad_norm": 0.3621300756931305,
- "learning_rate": 0.00029993852448555923,
- "loss": 0.7776,
- "mean_token_accuracy": 0.7993393504619598,
- "num_tokens": 282669.0,
- "step": 50
- },
- {
- "epoch": 0.18105009052504525,
- "grad_norm": 0.2122274786233902,
- "learning_rate": 0.00029934198818572623,
- "loss": 0.5734,
- "mean_token_accuracy": 0.8396402627229691,
- "num_tokens": 442128.0,
- "step": 75
- },
- {
- "epoch": 0.24140012070006034,
- "grad_norm": 0.31355953216552734,
- "learning_rate": 0.0002981133400718627,
- "loss": 0.4664,
- "mean_token_accuracy": 0.8685537815093994,
- "num_tokens": 569759.0,
- "step": 100
- },
- {
- "epoch": 0.30175015087507545,
- "grad_norm": 0.23219779133796692,
- "learning_rate": 0.0002962577805768642,
- "loss": 0.3425,
- "mean_token_accuracy": 0.9009260469675064,
- "num_tokens": 726392.0,
- "step": 125
- },
- {
- "epoch": 0.3621001810500905,
- "grad_norm": 0.3648935854434967,
- "learning_rate": 0.00029378316362776546,
- "loss": 0.3287,
- "mean_token_accuracy": 0.9059167605638504,
- "num_tokens": 852613.0,
- "step": 150
- },
- {
- "epoch": 0.4224502112251056,
- "grad_norm": 0.21752676367759705,
- "learning_rate": 0.0002906999634028451,
- "loss": 0.2293,
- "mean_token_accuracy": 0.9322728031873703,
- "num_tokens": 1009739.0,
- "step": 175
- },
- {
- "epoch": 0.4828002414001207,
- "grad_norm": 0.3420015871524811,
- "learning_rate": 0.0002870212299981334,
- "loss": 0.2291,
- "mean_token_accuracy": 0.9348296165466309,
- "num_tokens": 1135843.0,
- "step": 200
- },
- {
- "epoch": 0.5431502715751357,
- "grad_norm": 0.25294411182403564,
- "learning_rate": 0.00028276253419097193,
- "loss": 0.1738,
- "mean_token_accuracy": 0.9496343213319779,
- "num_tokens": 1292939.0,
- "step": 225
- },
- {
- "epoch": 0.6035003017501509,
- "grad_norm": 0.4094804525375366,
- "learning_rate": 0.00027794190153442033,
- "loss": 0.1446,
- "mean_token_accuracy": 0.9596614474058152,
- "num_tokens": 1419489.0,
- "step": 250
- },
- {
- "epoch": 0.663850331925166,
- "grad_norm": 0.19021408259868622,
- "learning_rate": 0.00027257973606146575,
- "loss": 0.1281,
- "mean_token_accuracy": 0.9641434782743454,
- "num_tokens": 1576570.0,
- "step": 275
- },
- {
- "epoch": 0.724200362100181,
- "grad_norm": 0.3236108422279358,
- "learning_rate": 0.0002666987339219681,
- "loss": 0.1214,
- "mean_token_accuracy": 0.9668925404548645,
- "num_tokens": 1701672.0,
- "step": 300
- },
- {
- "epoch": 0.7845503922751962,
- "grad_norm": 0.1582169234752655,
- "learning_rate": 0.0002603237873178853,
- "loss": 0.0963,
- "mean_token_accuracy": 0.9728625810146332,
- "num_tokens": 1859803.0,
- "step": 325
- },
- {
- "epoch": 0.8449004224502112,
- "grad_norm": 0.4106225371360779,
- "learning_rate": 0.0002534818791433866,
- "loss": 0.0846,
- "mean_token_accuracy": 0.9761718648672104,
- "num_tokens": 1986604.0,
- "step": 350
- },
- {
- "epoch": 0.9052504526252263,
- "grad_norm": 0.15011939406394958,
- "learning_rate": 0.00024620196877580576,
- "loss": 0.0887,
- "mean_token_accuracy": 0.9751533496379853,
- "num_tokens": 2146742.0,
- "step": 375
- },
- {
- "epoch": 0.9656004828002414,
- "grad_norm": 0.3166883885860443,
- "learning_rate": 0.00023851486950083892,
- "loss": 0.0763,
- "mean_token_accuracy": 0.9789857685565948,
- "num_tokens": 2274751.0,
- "step": 400
- },
- {
- "epoch": 1.024140012070006,
- "grad_norm": 0.17489100992679596,
- "learning_rate": 0.00023045311809080567,
- "loss": 0.0877,
- "mean_token_accuracy": 0.9752338018613992,
- "num_tokens": 2424064.0,
- "step": 425
- },
- {
- "epoch": 1.0844900422450212,
- "grad_norm": 0.09131183475255966,
- "learning_rate": 0.00022205083708799942,
- "loss": 0.055,
- "mean_token_accuracy": 0.9844016236066818,
- "num_tokens": 2568194.0,
- "step": 450
- },
- {
- "epoch": 1.1448400724200363,
- "grad_norm": 0.10630343854427338,
- "learning_rate": 0.0002133435903760353,
- "loss": 0.0693,
- "mean_token_accuracy": 0.9806501251459122,
- "num_tokens": 2710763.0,
- "step": 475
- },
- {
- "epoch": 1.2051901025950513,
- "grad_norm": 0.08131673187017441,
- "learning_rate": 0.0002043682326505094,
- "loss": 0.0498,
- "mean_token_accuracy": 0.9855681067705154,
- "num_tokens": 2853852.0,
- "step": 500
- },
- {
- "epoch": 1.2655401327700664,
- "grad_norm": 0.09712078422307968,
- "learning_rate": 0.000195162753426108,
- "loss": 0.0596,
- "mean_token_accuracy": 0.9835783392190933,
- "num_tokens": 2995994.0,
- "step": 525
- },
- {
- "epoch": 1.3258901629450814,
- "grad_norm": 0.08598675578832626,
- "learning_rate": 0.00018576611624042852,
- "loss": 0.0428,
- "mean_token_accuracy": 0.9874356603622436,
- "num_tokens": 3138861.0,
- "step": 550
- },
- {
- "epoch": 1.3862401931200965,
- "grad_norm": 0.07483246177434921,
- "learning_rate": 0.00017621809373510641,
- "loss": 0.0548,
- "mean_token_accuracy": 0.984878249168396,
- "num_tokens": 3277890.0,
- "step": 575
- },
- {
- "epoch": 1.4465902232951118,
- "grad_norm": 0.0830983892083168,
- "learning_rate": 0.00016655909931229048,
- "loss": 0.0422,
- "mean_token_accuracy": 0.9876492458581925,
- "num_tokens": 3419564.0,
- "step": 600
- },
- {
- "epoch": 1.5069402534701268,
- "grad_norm": 0.08557037264108658,
- "learning_rate": 0.00015683001607900553,
- "loss": 0.0542,
- "mean_token_accuracy": 0.9849796974658966,
- "num_tokens": 3559982.0,
- "step": 625
- },
- {
- "epoch": 1.567290283645142,
- "grad_norm": 0.07204229384660721,
- "learning_rate": 0.00014707202380342108,
- "loss": 0.0388,
- "mean_token_accuracy": 0.9885922729969024,
- "num_tokens": 3702832.0,
- "step": 650
- },
- {
- "epoch": 1.627640313820157,
- "grad_norm": 0.08426109701395035,
- "learning_rate": 0.00013732642461545747,
- "loss": 0.0525,
- "mean_token_accuracy": 0.9846392464637757,
- "num_tokens": 3844937.0,
- "step": 675
- },
- {
- "epoch": 1.687990343995172,
- "grad_norm": 0.05054735392332077,
- "learning_rate": 0.00012763446818947865,
- "loss": 0.0404,
- "mean_token_accuracy": 0.9881810343265534,
- "num_tokens": 3985760.0,
- "step": 700
- },
- {
- "epoch": 1.748340374170187,
- "grad_norm": 0.10268588364124298,
- "learning_rate": 0.00011803717714901029,
- "loss": 0.0484,
- "mean_token_accuracy": 0.9864322727918625,
- "num_tokens": 4127561.0,
- "step": 725
- },
- {
- "epoch": 1.8086904043452021,
- "grad_norm": 0.07311568409204483,
- "learning_rate": 0.00010857517343248423,
- "loss": 0.0357,
- "mean_token_accuracy": 0.9894903290271759,
- "num_tokens": 4267752.0,
- "step": 750
- }
- ],
- "logging_steps": 25,
- "max_steps": 1245,
- "num_input_tokens_seen": 0,
- "num_train_epochs": 3,
- "save_steps": 750,
- "stateful_callbacks": {
- "TrainerControl": {
- "args": {
- "should_epoch_stop": false,
- "should_evaluate": false,
- "should_log": false,
- "should_save": true,
- "should_training_stop": false
- },
- "attributes": {}
- }
- },
- "total_flos": 1.8334295026816205e+17,
- "train_batch_size": 2,
- "trial_name": null,
- "trial_params": null
- }
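The deleted trainer_state.json captures the run at global_step 750 out of max_steps 1245, saved every 750 steps and logged every 25, over 3 planned epochs. A small sanity-check sketch of what those numbers imply (values copied from the file above):

```python
# Figures copied from the deleted checkpoint-750/trainer_state.json
global_step = 750
max_steps = 1245
epoch = 1.8086904043452021
num_train_epochs = 3

steps_per_epoch = max_steps / num_train_epochs  # 415 optimizer steps per epoch
progress = global_step / max_steps              # fraction of training completed

# The recorded epoch counter agrees with step / steps-per-epoch
assert abs(global_step / steps_per_epoch - epoch) < 0.01

print(round(steps_per_epoch), f"{progress:.0%}")  # 415 60%
```

So this was the first and only saved checkpoint, taken roughly 60% of the way through training, with train loss already down from 1.68 to 0.036.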
 
checkpoint-750/training_args.bin DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:675df782282515e145bba3e95b55d5ae20ae54e1bade13b651c2399bbb998490
- size 6033
 
checkpoint-750/vocab.json DELETED
The diff for this file is too large to render. See raw diff