chriswritescode committed
Commit 1a6748c · verified · 1 Parent(s): 701b6a7

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
.msc ADDED
Binary file (3.03 kB)
.mv ADDED
@@ -0,0 +1 @@
Revision:master,CreatedAt:1753286944
README.md ADDED
@@ -0,0 +1,309 @@
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B-Instruct-2507/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-235B-A22B-Instruct-2507
---

# Qwen3-235B-A22B-Instruct-2507-AWQ

## Intro

This AWQ version is quantized using [ms-swift](https://github.com/modelscope/ms-swift). The AWQ models for Qwen3-235B-A22B-Instruct-2507 have been verified to work with both Transformers and vLLM.

## Inference

Use `transformers`:

```python
from modelscope import AutoModelForCausalLM, AutoTokenizer

model_name = "swift/Qwen3-235B-A22B-Instruct-2507-AWQ"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

Use vLLM:

```shell
VLLM_USE_MODELSCOPE=true vllm serve \
    swift/Qwen3-235B-A22B-Instruct-2507-AWQ \
    --tensor-parallel-size 4 \
    --max-model-len 262144
```
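
Once the server is up, it exposes an OpenAI-compatible API. A minimal query sketch, assuming the default port 8000 and no API key configured on the server:

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint; a placeholder key suffices locally
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="swift/Qwen3-235B-A22B-Instruct-2507-AWQ",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(response.choices[0].message.content)
```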

Use ms-swift:

```shell
# pip install git+https://github.com/modelscope/ms-swift.git
swift infer \
    --model swift/Qwen3-235B-A22B-Instruct-2507-AWQ \
    --infer_backend vllm \
    --vllm_tensor_parallel_size 4 \
    --vllm_max_model_len 262144
```

## Quantization

The model has undergone AWQ int4 quantization using the [ms-swift](https://github.com/modelscope/ms-swift) framework.

Quantization command:

```shell
swift export \
    --model Qwen/Qwen3-235B-A22B-Instruct-2507 \
    --dataset 'swift/Chinese-Qwen3-235B-2507-Distill-data-110k-SFT' \
    --device_map auto \
    --quant_n_samples 256 \
    --quant_batch_size -1 \
    --max_length 12000 \
    --quant_method awq \
    --quant_bits 4 \
    --output_dir Qwen3-235B-A22B-Instruct-2507-AWQ
```
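
As a quick sanity check after export, you can confirm the AWQ settings landed in the output directory's `config.json` (a sketch; the path matches `--output_dir` above):

```python
import json

# the exported config should carry an AWQ quantization_config block
with open("Qwen3-235B-A22B-Instruct-2507-AWQ/config.json") as f:
    cfg = json.load(f)

print(cfg["quantization_config"])  # expect quant_method "awq" and bits 4
```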

If you have fine-tuned the model and wish to quantize the fine-tuned version, you can refer to the following quantization scripts:

- Dense Model Quantization Script: [View Here](https://github.com/modelscope/ms-swift/blob/main/examples/export/quantize/awq.sh)
- MoE Model Quantization Script: [View Here](https://github.com/modelscope/ms-swift/blob/main/examples/export/quantize/moe/awq.sh)

With these scripts, you can easily complete the quantization process for the model.

# Qwen3-235B-A22B-Instruct-2507
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Highlights

We introduce the updated version of the **Qwen3-235B-A22B non-thinking mode**, named **Qwen3-235B-A22B-Instruct-2507**, featuring the following key enhancements:

- **Significant improvements** in general capabilities, including **instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage**.
- **Substantial gains** in long-tail knowledge coverage across **multiple languages**.
- **Markedly better alignment** with user preferences in **subjective and open-ended tasks**, enabling more helpful responses and higher-quality text generation.
- **Enhanced capabilities** in **256K long-context understanding**.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/62430a8522549d0917bfeb5a/0d7zztq4GB7G2ZYowO-dQ.jpeg)

## Model Overview

**Qwen3-235B-A22B-Instruct-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 235B in total and 22B activated
- Number of Parameters (Non-Embedding): 234B
- Number of Layers: 94
- Number of Attention Heads (GQA): 64 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively**

**NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.**

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Performance

| | Deepseek-V3-0324 | GPT-4o-0327 | Claude Opus 4 Non-thinking | Kimi K2 | Qwen3-235B-A22B Non-thinking | Qwen3-235B-A22B-Instruct-2507 |
|--- | --- | --- | --- | --- | --- | ---|
| **Knowledge** | | | | | | |
| MMLU-Pro | 81.2 | 79.8 | **86.6** | 81.1 | 75.2 | 83.0 |
| MMLU-Redux | 90.4 | 91.3 | **94.2** | 92.7 | 89.2 | 93.1 |
| GPQA | 68.4 | 66.9 | 74.9 | 75.1 | 62.9 | **77.5** |
| SuperGPQA | 57.3 | 51.0 | 56.5 | 57.2 | 48.2 | **62.6** |
| SimpleQA | 27.2 | 40.3 | 22.8 | 31.0 | 12.2 | **54.3** |
| CSimpleQA | 71.1 | 60.2 | 68.0 | 74.5 | 60.8 | **84.3** |
| **Reasoning** | | | | | | |
| AIME25 | 46.6 | 26.7 | 33.9 | 49.5 | 24.7 | **70.3** |
| HMMT25 | 27.5 | 7.9 | 15.9 | 38.8 | 10.0 | **55.4** |
| ARC-AGI | 9.0 | 8.8 | 30.3 | 13.3 | 4.3 | **41.8** |
| ZebraLogic | 83.4 | 52.6 | - | 89.0 | 37.7 | **95.0** |
| LiveBench 20241125 | 66.9 | 63.7 | 74.6 | **76.4** | 62.5 | 75.4 |
| **Coding** | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 45.2 | 35.8 | 44.6 | 48.9 | 32.9 | **51.8** |
| MultiPL-E | 82.2 | 82.7 | **88.5** | 85.7 | 79.3 | 87.9 |
| Aider-Polyglot | 55.1 | 45.3 | **70.7** | 59.0 | 59.6 | 57.3 |
| **Alignment** | | | | | | |
| IFEval | 82.3 | 83.9 | 87.4 | **89.8** | 83.2 | 88.7 |
| Arena-Hard v2* | 45.6 | 61.9 | 51.5 | 66.1 | 52.0 | **79.2** |
| Creative Writing v3 | 81.6 | 84.9 | 83.8 | **88.1** | 80.4 | 87.5 |
| WritingBench | 74.5 | 75.5 | 79.2 | **86.2** | 77.0 | 85.2 |
| **Agent** | | | | | | |
| BFCL-v3 | 64.7 | 66.5 | 60.1 | 65.2 | 68.0 | **70.9** |
| TAU-Retail | 49.6 | 60.3# | **81.4** | 70.7 | 65.2 | 71.3 |
| TAU-Airline | 32.0 | 42.8# | **59.6** | 53.5 | 32.0 | 44.0 |
| **Multilingualism** | | | | | | |
| MultiIF | 66.5 | 70.4 | - | 76.2 | 70.2 | **77.5** |
| MMLU-ProX | 75.8 | 76.2 | - | 74.5 | 73.2 | **79.4** |
| INCLUDE | 80.1 | **82.1** | - | 76.9 | 75.6 | 79.5 |
| PolyMATH | 32.2 | 25.5 | 30.0 | 44.8 | 27.0 | **50.2** |

*: For reproducibility, we report the win rates evaluated by GPT-4.1.

\#: Results were generated using GPT-4o-20241120, as access to the native function calling API of GPT-4o-0327 was unavailable.

## Quickstart

The code for Qwen3-MoE is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
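
Upgrading the package resolves this, for example:

```shell
pip install --upgrade transformers
```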

The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B-Instruct-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Instruct-2507 --tp 8 --context-length 262144
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-235B-A22B-Instruct-2507 --tensor-parallel-size 8 --max-model-len 262144
```

**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.**
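
For example, the same vLLM launch with a reduced context window:

```shell
vllm serve Qwen/Qwen3-235B-A22B-Instruct-2507 --tensor-parallel-size 8 --max-model-len 32768
```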

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Agentic Use

Qwen3 excels at tool calling. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of Qwen3's agentic abilities. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-235B-A22B-Instruct-2507',

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
        'time': {
            'command': 'uvx',
            'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
        },
        "fetch": {
            "command": "uvx",
            "args": ["mcp-server-fetch"]
        }
    }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Best Practices

To achieve optimal performance, we recommend the following settings (a request sketch applying them follows the list):

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 16,384 tokens for most queries, which is adequate for instruct models.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
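
A minimal request sketch applying these recommendations through the OpenAI-compatible endpoint started above; the `extra_body` keys for `top_k` and `min_p` assume vLLM's sampling extensions:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[{
        "role": "user",
        # standardized math prompt per the best practices above
        "content": "What is 17 * 24? Please reason step by step, "
                   "and put your final answer within \\boxed{}.",
    }],
    # recommended sampling parameters
    temperature=0.7,
    top_p=0.8,
    presence_penalty=1.0,  # optional, 0-2, to reduce repetition
    max_tokens=16384,      # adequate output length for instruct models
    extra_body={"top_k": 20, "min_p": 0.0},  # vLLM-specific extensions
)
print(response.choices[0].message.content)
```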

### Citation

If you find our work helpful, feel free to cite it.

```
@misc{qwen3technicalreport,
    title={Qwen3 Technical Report},
    author={Qwen Team},
    year={2025},
    eprint={2505.09388},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2505.09388},
}
```
added_tokens.json ADDED
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
chat_template.jinja ADDED
@@ -0,0 +1,86 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if message.content is string %}
{%- set content = message.content %}
{%- else %}
{%- set content = '' %}
{%- endif %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is string %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
config.json ADDED
@@ -0,0 +1,50 @@
{
  "architectures": [
    "Qwen3MoeForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "decoder_sparse_step": 1,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 12288,
  "max_position_embeddings": 262144,
  "max_window_layers": 94,
  "mlp_only_layers": [],
  "model_type": "qwen3_moe",
  "moe_intermediate_size": 1536,
  "norm_topk_prob": true,
  "num_attention_heads": 64,
  "num_experts": 128,
  "num_experts_per_tok": 8,
  "num_hidden_layers": 94,
  "num_key_value_heads": 4,
  "output_router_logits": false,
  "pad_token_id": 151643,
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "modules_to_not_convert": [
      "mlp.gate",
      "lm_head"
    ],
    "quant_method": "awq",
    "version": "gemm",
    "zero_point": true
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 5000000,
  "router_aux_loss_coef": 0.001,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.52.4",
  "use_cache": false,
  "use_sliding_window": false,
  "vocab_size": 151936
}
configuration.json ADDED
@@ -0,0 +1 @@
{"framework":"Pytorch","task":"text-generation"}
generation_config.json ADDED
@@ -0,0 +1,13 @@
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
  "transformers_version": "4.52.4"
}
merges.txt ADDED
The diff for this file is too large to render.
model-00001-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1b1469502c87761cd5c362e9f9583327fed67f1abd2ef4e34c54fedca4de0092
size 4997344024
model-00002-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f595e26670c23b06d087713bd326032bbe532f98b7c60ff91a9136f1e18359f5
size 5000332592
model-00003-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a7f376fc41f7ee85e7382e9436faf2fa85995fc34dd60cf5198811b58c7fa24
size 5000333480
model-00004-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c0bf0110b2c7d2ba560afea994777703270229d5d643fddba1ec5fbeb2b1167
size 5000337248
model-00005-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3407def38b22259d5395d81e8f79480913a12cfe0e6a5ad9c93b84729e4a60a
size 5000337248
model-00006-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85f9587c1093d0f4815a2d3cb7621404459f082a1ee7a472e53b41619f3f1f26
size 5000337240
model-00007-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b915e43570d38f5d059e52f9c3076d4f2e4319209c770e820fab9e8a653bfc76
size 5000337256
model-00008-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a559e533d95cae5d85d6aa1fa7edf621c727e4abc26a85d339f5aad13f897746
size 4998184376
model-00009-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85038f68462642b0e73c26f45cd8249c546e4f74563bd08ac01e44b9b054e90c
size 5000337088
model-00010-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:321d539b865e79c1b0a3be48ad00c1b566d6804d2d00d2f5e7e98e8615f00fd6
size 5000337192
model-00011-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a57f29b82a3b6cbc13599920aa52b647f086df488553c59ddb90b9e4b711d20b
size 5000337248
model-00012-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b82ada413be41187a9d72d86cb96e5b98b08b1ed2ab8991d9fd1a7bb8911687d
size 5000337248
model-00013-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:275fa52b6e38eb57c51f1451ece311330f59dabe6010551431dd1a2111f5c874
size 5000337240
model-00014-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67f6a92c2bff1e2a4912220b332ae2efc12477010477a0ffda075bac8e9d9cea
size 5000337248
model-00015-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e7a89c99b8580892fc0a6c5c65640e31de11e441cc41885af22dfe6260668f30
size 4988392944
model-00016-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c30ba9f48a39bcd32b42f0319caa6b40b15cf8e5949823b455d06a842d4bfb3
size 5000321664
model-00017-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a2fa060e41c177846453a88c897c1c03dd43b8ed7537b1256515f07760c0e63
size 5000337128
model-00018-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d971a24666ea0da8d4e1061f4b36684f7d40bd311f08fafa42f57cc8ea9a22ec
size 5000337248
model-00019-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ac4b7a5ce93e2ecc8ab3f7d95b8921379e822571ebeab1b624e6a358d4325bb
size 5000337248
model-00020-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e99ec0adf0760d4ccb3ee521c05b6c0463ce254819bc890efb1e1b8f7613fd8
size 5000337240
model-00021-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31cd6e6d4b67765e46b4360d0cd252a33973ed731bd637fa4d9608ca9fd291e7
size 5000337248
model-00022-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a57f833f310c997204e483163b07512e07d24ea1c6bded58fa3dcfd463810e8
size 5000337280
model-00023-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cab872c0c6e1676ded2c598555385c2196966a2f4e6467b83d33e78f3e370fb3
size 4998184328
model-00024-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61e422312604e931cf14afea38ce187486771c95b4fc745b91ecf0d5b8f60842
size 5000337080
model-00025-of-00025.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b86af1bdf4003b1d0013bfa7ab7a682a8cbe1a4967c22334047a3ae47c48441f
size 4079922912
model.safetensors.index.json ADDED
The diff for this file is too large to render.
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,239 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151644": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151645": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151646": {
      "content": "<|object_ref_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151647": {
      "content": "<|object_ref_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151648": {
      "content": "<|box_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151649": {
      "content": "<|box_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151650": {
      "content": "<|quad_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151651": {
      "content": "<|quad_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151652": {
      "content": "<|vision_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151653": {
      "content": "<|vision_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151654": {
      "content": "<|vision_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151655": {
      "content": "<|image_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151656": {
      "content": "<|video_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "151657": {
      "content": "<tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151658": {
      "content": "</tool_call>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151659": {
      "content": "<|fim_prefix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151660": {
      "content": "<|fim_middle|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151661": {
      "content": "<|fim_suffix|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151662": {
      "content": "<|fim_pad|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151663": {
      "content": "<|repo_name|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151664": {
      "content": "<|file_sep|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151665": {
      "content": "<tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151666": {
      "content": "</tool_response>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151667": {
      "content": "<think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    },
    "151668": {
      "content": "</think>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": false
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "model_max_length": 262144,
  "pad_token": "<|endoftext|>",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
  "unk_token": null
}
vocab.json ADDED
The diff for this file is too large to render.