createthis committed
Commit b8dd86d · verified · 1 Parent(s): 193ec37

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. bf16/README.md +219 -0
  2. bf16/config.json +67 -0
  3. bf16/configuration_deepseek.py +199 -0
  4. bf16/generation_config.json +9 -0
  5. bf16/model-00001-of-000163.safetensors +3 -0
  6. bf16/model-00002-of-000163.safetensors +3 -0
  7. bf16/model-00003-of-000163.safetensors +3 -0
  8. bf16/model-00004-of-000163.safetensors +3 -0
  9. bf16/model-00005-of-000163.safetensors +3 -0
  10. bf16/model-00006-of-000163.safetensors +3 -0
  11. bf16/model-00007-of-000163.safetensors +3 -0
  12. bf16/model-00008-of-000163.safetensors +3 -0
  13. bf16/model-00009-of-000163.safetensors +3 -0
  14. bf16/model-00010-of-000163.safetensors +3 -0
  15. bf16/model-00011-of-000163.safetensors +3 -0
  16. bf16/model-00012-of-000163.safetensors +3 -0
  17. bf16/model-00013-of-000163.safetensors +3 -0
  18. bf16/model-00014-of-000163.safetensors +3 -0
  19. bf16/model-00015-of-000163.safetensors +3 -0
  20. bf16/model-00016-of-000163.safetensors +3 -0
  21. bf16/model-00017-of-000163.safetensors +3 -0
  22. bf16/model-00018-of-000163.safetensors +3 -0
  23. bf16/model-00019-of-000163.safetensors +3 -0
  24. bf16/model-00020-of-000163.safetensors +3 -0
  25. bf16/model-00021-of-000163.safetensors +3 -0
  26. bf16/model-00022-of-000163.safetensors +3 -0
  27. bf16/model-00023-of-000163.safetensors +3 -0
  28. bf16/model-00024-of-000163.safetensors +3 -0
  29. bf16/model-00025-of-000163.safetensors +3 -0
  30. bf16/model-00026-of-000163.safetensors +3 -0
  31. bf16/model-00027-of-000163.safetensors +3 -0
  32. bf16/model-00028-of-000163.safetensors +3 -0
  33. bf16/model-00029-of-000163.safetensors +3 -0
  34. bf16/model-00030-of-000163.safetensors +3 -0
  35. bf16/model-00031-of-000163.safetensors +3 -0
  36. bf16/model-00032-of-000163.safetensors +3 -0
  37. bf16/model-00033-of-000163.safetensors +3 -0
  38. bf16/model-00034-of-000163.safetensors +3 -0
  39. bf16/model-00035-of-000163.safetensors +3 -0
  40. bf16/model-00036-of-000163.safetensors +3 -0
  41. bf16/model-00037-of-000163.safetensors +3 -0
  42. bf16/model-00038-of-000163.safetensors +3 -0
  43. bf16/model-00039-of-000163.safetensors +3 -0
  44. bf16/model-00040-of-000163.safetensors +3 -0
  45. bf16/model-00041-of-000163.safetensors +3 -0
  46. bf16/model-00042-of-000163.safetensors +3 -0
  47. bf16/model-00043-of-000163.safetensors +3 -0
  48. bf16/model-00044-of-000163.safetensors +3 -0
  49. bf16/model-00045-of-000163.safetensors +3 -0
  50. bf16/model-00046-of-000163.safetensors +3 -0
bf16/README.md ADDED
@@ -0,0 +1,219 @@
---
license: mit
library_name: transformers
---
# DeepSeek-V3.1

<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>

<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>

<div align="center" style="line-height: 1;">
<a href="LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>

## Introduction

DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode. Compared to the previous version, this upgrade brings improvements in multiple aspects:

- **Hybrid thinking mode**: One model supports both thinking mode and non-thinking mode by changing the chat template.

- **Smarter tool calling**: Through post-training optimization, the model's performance in tool usage and agent tasks has significantly improved.

- **Higher thinking efficiency**: DeepSeek-V3.1-Think achieves comparable answer quality to DeepSeek-R1-0528, while responding more quickly.

DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended by 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.

## Model Downloads

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-V3.1-Base | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1-Base) |
| DeepSeek-V3.1 | 671B | 37B | 128K | [HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-V3.1) \| [ModelScope](https://modelscope.cn/models/deepseek-ai/DeepSeek-V3.1) |

</div>

## Chat Template

The details of our chat template are described in `tokenizer_config.json` and `assets/chat_template.jinja`. Here is a brief description.

### Non-Thinking

#### First-Turn

Prefix:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>`

With the given prefix, DeepSeek-V3.1 generates responses to queries in non-thinking mode. Unlike DeepSeek-V3, it introduces an additional token `</think>`.

#### Multi-Turn
Context:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`

Prefix:
`<|User|>{query}<|Assistant|></think>`

By concatenating the context and the prefix, we obtain the correct prompt for the query.

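For illustration, here is a minimal sketch of assembling that non-thinking multi-turn prompt by hand. The helper function and the example conversation are hypothetical; in practice, prefer the bundled chat template via `tokenizer.apply_chat_template` (see the Usage Example below).

```python
# Minimal sketch: manually concatenating context + prefix for non-thinking mode.
# Illustrative only; prefer tokenizer.apply_chat_template in real code.

BOS = "<|begin▁of▁sentence|>"
EOS = "<|end▁of▁sentence|>"


def build_non_thinking_prompt(system_prompt, history, new_query):
    """history is a list of (query, response) pairs from earlier turns."""
    context = BOS + system_prompt
    for query, response in history:
        context += f"<|User|>{query}<|Assistant|></think>{response}{EOS}"
    # Prefix for the new turn; the model continues from here in non-thinking mode.
    return context + f"<|User|>{new_query}<|Assistant|></think>"


prompt = build_non_thinking_prompt(
    "You are a helpful assistant",
    [("Who are you?", "I am DeepSeek")],
    "1+1=?",
)
print(prompt)
```
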
### Thinking

#### First-Turn
Prefix:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|><think>`

The prefix of thinking mode is similar to DeepSeek-R1.

#### Multi-Turn
Context:
`<|begin▁of▁sentence|>{system prompt}<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>...<|User|>{query}<|Assistant|></think>{response}<|end▁of▁sentence|>`

Prefix:
`<|User|>{query}<|Assistant|><think>`

The multi-turn context is the same as the non-thinking multi-turn chat template: the thinking content of earlier turns is dropped, while the `</think>` token is retained in every turn of the context.

### ToolCall
Tool calling is supported in non-thinking mode. The format is:

`<|begin▁of▁sentence|>{system prompt}{tool_description}<|User|>{query}<|Assistant|></think>`, where `tool_description` is

```
## Tools
You have access to the following tools:

### {tool_name1}
Description: {description}

Parameters: {json.dumps(parameters)}

IMPORTANT: ALWAYS adhere to this exact format for tool use:
<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>tool_call_arguments<|tool▁call▁end|>{{additional_tool_calls}}<|tool▁calls▁end|>

Where:
- `tool_call_name` must be an exact match to one of the available tools
- `tool_call_arguments` must be valid JSON that strictly follows the tool's Parameters Schema
- For multiple tool calls, chain them directly without separators or spaces
```

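As a rough sketch of how a client might render `tool_description` in the layout above, consider the snippet below. The `get_weather` tool and its JSON schema are purely hypothetical, and the trailing "Where:" bullets from the template are omitted for brevity.

```python
import json

# Hypothetical tool specs; only the rendered layout follows the format documented above.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]


def render_tool_description(tools):
    lines = ["## Tools", "You have access to the following tools:", ""]
    for tool in tools:
        lines += [
            f"### {tool['name']}",
            f"Description: {tool['description']}",
            "",
            f"Parameters: {json.dumps(tool['parameters'])}",
            "",
        ]
    lines += [
        "IMPORTANT: ALWAYS adhere to this exact format for tool use:",
        "<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>"
        "tool_call_arguments<|tool▁call▁end|>{additional_tool_calls}<|tool▁calls▁end|>",
        # The "Where:" clarification bullets from the template above are omitted here.
    ]
    return "\n".join(lines)


print(render_tool_description(tools))
```
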
### Code-Agent
We support various code agent frameworks. Please refer to the above tool-call format to create your own code agents. An example is shown in `assets/code_agent_trajectory.html`.

### Search-Agent
We designed a specific format for search tool calls in thinking mode to support search agents.

For complex questions that require accessing external or up-to-date information, DeepSeek-V3.1 can leverage a user-provided search tool through a multi-turn tool-calling process.

Please refer to `assets/search_tool_trajectory.html` and `assets/search_python_tool_trajectory.html` for the detailed template.

## Evaluation

| Category | Benchmark (Metric) | DeepSeek V3.1-NonThinking | DeepSeek V3 0324 | DeepSeek V3.1-Thinking | DeepSeek R1 0528 |
|----------|----------------------------------|-----------------|---|---|---|
| General | | | | | |
| | MMLU-Redux (EM) | 91.8 | 90.5 | 93.7 | 93.4 |
| | MMLU-Pro (EM) | 83.7 | 81.2 | 84.8 | 85.0 |
| | GPQA-Diamond (Pass@1) | 74.9 | 68.4 | 80.1 | 81.0 |
| | Humanity's Last Exam (Pass@1) | - | - | 15.9 | 17.7 |
| Search Agent | | | | | |
| | BrowseComp | - | - | 30.0 | 8.9 |
| | BrowseComp_zh | - | - | 49.2 | 35.7 |
| | Humanity's Last Exam (Python + Search) | - | - | 29.8 | 24.8 |
| | SimpleQA | - | - | 93.4 | 92.3 |
| Code | | | | | |
| | LiveCodeBench (2408-2505) (Pass@1) | 56.4 | 43.0 | 74.8 | 73.3 |
| | Codeforces-Div1 (Rating) | - | - | 2091 | 1930 |
| | Aider-Polyglot (Acc.) | 68.4 | 55.1 | 76.3 | 71.6 |
| Code Agent | | | | | |
| | SWE Verified (Agent mode) | 66.0 | 45.4 | - | 44.6 |
| | SWE-bench Multilingual (Agent mode) | 54.5 | 29.3 | - | 30.5 |
| | Terminal-bench (Terminus 1 framework) | 31.3 | 13.3 | - | 5.7 |
| Math | | | | | |
| | AIME 2024 (Pass@1) | 66.3 | 59.4 | 93.1 | 91.4 |
| | AIME 2025 (Pass@1) | 49.8 | 51.3 | 88.4 | 87.5 |
| | HMMT 2025 (Pass@1) | 33.5 | 29.2 | 84.2 | 79.4 |

Note:
- Search agents are evaluated with our internal search framework, which uses a commercial search API + webpage filter + 128K context window. Search-agent results of R1-0528 are evaluated with a pre-defined workflow.

- SWE-bench is evaluated with our internal code agent framework.

- HLE is evaluated with the text-only subset.

### Usage Example

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "<think>Hmm</think>I am DeepSeek"},
    {"role": "user", "content": "1+1=?"}
]

tokenizer.apply_chat_template(messages, tokenize=False, thinking=True, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|><think>'

tokenizer.apply_chat_template(messages, tokenize=False, thinking=False, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|></think>'
```

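To go from a rendered prompt to an actual completion, one option (not prescribed by this repository) is a plain text-completions request against an OpenAI-compatible endpoint exposed by whatever engine is serving the weights. The sketch below assumes such a server; the base URL, API key, and served model name are placeholders, and the sampling values mirror `generation_config.json` (temperature 0.6, top_p 0.95).

```python
import transformers
from openai import OpenAI

tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "1+1=?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, thinking=False, add_generation_prompt=True
)

# Placeholder endpoint: assumes an OpenAI-compatible server is already hosting the model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="deepseek-ai/DeepSeek-V3.1",
    prompt=prompt,
    max_tokens=256,
    temperature=0.6,  # matches generation_config.json
    top_p=0.95,       # matches generation_config.json
    stop=["<|end▁of▁sentence|>"],
)
print(completion.choices[0].text)
```
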
## How to Run Locally

The model structure of DeepSeek-V3.1 is the same as DeepSeek-V3. Please visit the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running this model locally.

## License

This repository and the model weights are licensed under the [MIT License](LICENSE).

## Citation

```
@misc{deepseekai2024deepseekv3technicalreport,
      title={DeepSeek-V3 Technical Report},
      author={DeepSeek-AI},
      year={2024},
      eprint={2412.19437},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.19437},
}
```

## Contact

If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
bf16/config.json ADDED
@@ -0,0 +1,67 @@
{
  "architectures": [
    "DeepseekV3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_deepseek.DeepseekV3Config",
    "AutoModel": "modeling_deepseek.DeepseekV3Model",
    "AutoModelForCausalLM": "modeling_deepseek.DeepseekV3ForCausalLM"
  },
  "bos_token_id": 0,
  "eos_token_id": 1,
  "ep_size": 1,
  "first_k_dense_replace": 3,
  "hidden_act": "silu",
  "hidden_size": 7168,
  "initializer_range": 0.02,
  "intermediate_size": 18432,
  "kv_lora_rank": 512,
  "max_position_embeddings": 163840,
  "model_type": "deepseek_v3",
  "moe_intermediate_size": 2048,
  "moe_layer_freq": 1,
  "n_group": 8,
  "n_routed_experts": 256,
  "n_shared_experts": 1,
  "norm_topk_prob": true,
  "num_attention_heads": 128,
  "num_experts_per_tok": 8,
  "num_hidden_layers": 61,
  "num_key_value_heads": 128,
  "num_nextn_predict_layers": 1,
  "q_lora_rank": 1536,
  "qk_nope_head_dim": 128,
  "qk_rope_head_dim": 64,
  "quantization_config": {
    "activation_scheme": "dynamic",
    "fmt": "e4m3",
    "quant_method": "fp8",
    "weight_block_size": [
      128,
      128
    ]
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "beta_fast": 32,
    "beta_slow": 1,
    "factor": 40,
    "mscale": 1.0,
    "mscale_all_dim": 1.0,
    "original_max_position_embeddings": 4096,
    "type": "yarn"
  },
  "rope_theta": 10000,
  "routed_scaling_factor": 2.5,
  "scoring_func": "sigmoid",
  "tie_word_embeddings": false,
  "topk_group": 4,
  "topk_method": "noaux_tc",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.44.2",
  "use_cache": true,
  "v_head_dim": 128,
  "vocab_size": 129280
}
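A quick, hedged way to inspect these settings programmatically is to load them with `transformers.AutoConfig`; the local path below is a placeholder for wherever `config.json` and `configuration_deepseek.py` live (for example, this `bf16/` folder), and `trust_remote_code=True` is required because `auto_map` points at the bundled configuration class.

```python
from transformers import AutoConfig

# Placeholder path: the directory containing config.json and configuration_deepseek.py.
config = AutoConfig.from_pretrained("./bf16", trust_remote_code=True)

print(config.model_type)               # deepseek_v3
print(config.num_hidden_layers)        # 61
print(config.n_routed_experts)         # 256
print(config.num_experts_per_tok)      # 8
print(config.max_position_embeddings)  # 163840
```
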
bf16/configuration_deepseek.py ADDED
@@ -0,0 +1,199 @@
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

DEEPSEEK_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class DeepseekV3Config(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`DeepseekV3Model`]. It is used to instantiate a DeepSeek
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a configuration similar to that of DeepSeek-V3.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vocab_size (`int`, *optional*, defaults to 129280):
            Vocabulary size of the DeepSeek model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`DeepseekV3Model`].
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        moe_intermediate_size (`int`, *optional*, defaults to 1407):
            Dimension of the MoE representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_nextn_predict_layers (`int`, *optional*, defaults to 1):
            Number of next-n predict layers in the DeepSeekV3 Model.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        n_shared_experts (`int`, *optional*, defaults to None):
            Number of shared experts; None means dense model.
        n_routed_experts (`int`, *optional*, defaults to None):
            Number of routed experts; None means dense model.
        routed_scaling_factor (`float`, *optional*, defaults to 1.0):
            Scaling factor for routed experts.
        topk_method (`str`, *optional*, defaults to `greedy`):
            Top-k method used in the routed gate.
        n_group (`int`, *optional*, defaults to None):
            Number of groups for routed experts.
        topk_group (`int`, *optional*, defaults to None):
            Number of selected groups for each token (ensuring the selected experts are only within `topk_group` groups).
        num_experts_per_tok (`int`, *optional*, defaults to None):
            Number of selected experts; None means dense model.
        moe_layer_freq (`int`, *optional*, defaults to 1):
            The frequency of the MoE layer: one expert layer for every `moe_layer_freq - 1` dense layers.
        first_k_dense_replace (`int`, *optional*, defaults to 0):
            Number of dense layers in shallow layers (embed->dense->dense->...->dense->moe->moe...->lm_head).
                                                            \--k dense layers--/
        norm_topk_prob (`bool`, *optional*, defaults to False):
            Whether to normalize the weights of the routed experts.
        scoring_func (`str`, *optional*, defaults to 'softmax'):
            Method of computing expert weights.
        aux_loss_alpha (`float`, *optional*, defaults to 0.001):
            Auxiliary loss weight coefficient.
        seq_aux (`bool`, *optional*, defaults to True):
            Whether to compute the auxiliary loss for each individual sample.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
            `num_key_value_heads=1` the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by meanpooling all the original heads within that group. For more details check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-06):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        pad_token_id (`int`, *optional*):
            Padding token id.
        bos_token_id (`int`, *optional*, defaults to 1):
            Beginning of stream token id.
        eos_token_id (`int`, *optional*, defaults to 2):
            End of stream token id.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        rope_scaling (`Dict`, *optional*):
            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
            `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
            `max_position_embeddings` to the expected new maximum.
        attention_bias (`bool`, *optional*, defaults to `False`):
            Whether to use a bias in the query, key, value and output projection layers during self-attention.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.

    ```python
    >>> from transformers import DeepseekV3Model, DeepseekV3Config

    >>> # Initializing a Deepseek-V3 style configuration
    >>> configuration = DeepseekV3Config()

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "deepseek_v3"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=129280,
        hidden_size=7168,
        intermediate_size=18432,
        moe_intermediate_size=2048,
        num_hidden_layers=61,
        num_nextn_predict_layers=1,
        num_attention_heads=128,
        num_key_value_heads=128,
        n_shared_experts=1,
        n_routed_experts=256,
        ep_size=1,
        routed_scaling_factor=2.5,
        kv_lora_rank=512,
        q_lora_rank=1536,
        qk_rope_head_dim=64,
        v_head_dim=128,
        qk_nope_head_dim=128,
        topk_method='noaux_tc',
        n_group=8,
        topk_group=4,
        num_experts_per_tok=8,
        moe_layer_freq=1,
        first_k_dense_replace=3,
        norm_topk_prob=True,
        scoring_func='sigmoid',
        hidden_act="silu",
        max_position_embeddings=4096,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=None,
        bos_token_id=0,
        eos_token_id=1,
        tie_word_embeddings=False,
        rope_theta=10000.0,
        rope_scaling=None,
        attention_bias=False,
        attention_dropout=0.0,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.moe_intermediate_size = moe_intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_nextn_predict_layers = num_nextn_predict_layers
        self.num_attention_heads = num_attention_heads
        self.n_shared_experts = n_shared_experts
        self.n_routed_experts = n_routed_experts
        self.ep_size = ep_size
        self.routed_scaling_factor = routed_scaling_factor
        self.kv_lora_rank = kv_lora_rank
        self.q_lora_rank = q_lora_rank
        self.qk_rope_head_dim = qk_rope_head_dim
        self.v_head_dim = v_head_dim
        self.qk_nope_head_dim = qk_nope_head_dim
        self.topk_method = topk_method
        self.n_group = n_group
        self.topk_group = topk_group
        self.num_experts_per_tok = num_experts_per_tok
        self.moe_layer_freq = moe_layer_freq
        self.first_k_dense_replace = first_k_dense_replace
        self.norm_topk_prob = norm_topk_prob
        self.scoring_func = scoring_func
        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.rope_scaling = rope_scaling
        self.attention_bias = attention_bias
        self.attention_dropout = attention_dropout

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
bf16/generation_config.json ADDED
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 1,
  "do_sample": true,
  "temperature": 0.6,
  "top_p": 0.95,
  "transformers_version": "4.46.3"
}
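These defaults can also be loaded explicitly with `transformers.GenerationConfig` and passed to `generate()`; a minimal sketch, assuming the file sits in a local `bf16/` directory:

```python
from transformers import GenerationConfig

# Placeholder path: the folder containing generation_config.json.
gen_config = GenerationConfig.from_pretrained("./bf16")

print(gen_config.do_sample)    # True
print(gen_config.temperature)  # 0.6
print(gen_config.top_p)        # 0.95

# A loaded model picks these values up automatically from the same folder,
# or they can be passed explicitly:
#   model.generate(**inputs, generation_config=gen_config)
```
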
bf16/model-00001-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b8138bb053dffc54278256836be0700904f02470bbc1e2577c6beaada47ea593
3
+ size 8609454256
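The entries above and below are Git LFS pointer files: each records only the SHA-256 digest (`oid`) and byte size of a weight shard, not the shard itself. As a small sketch (the local path is a placeholder), a downloaded shard can be checked against its pointer like this:

```python
import hashlib


def sha256_of_file(path, chunk_size=1 << 20):
    """Stream the file so multi-gigabyte shards never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


# Compare a downloaded shard against the oid recorded in its pointer.
expected = "b8138bb053dffc54278256836be0700904f02470bbc1e2577c6beaada47ea593"
actual = sha256_of_file("bf16/model-00001-of-000163.safetensors")
print("OK" if actual == expected else "MISMATCH")
```
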
bf16/model-00002-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fa83109c30f5aff29cfa9a0e033705fff88e9ba2da70ab5248f954ed18a631d8
3
+ size 8602553952
bf16/model-00003-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bdcc40c953b4222bddec8c42cd69cd853352688f8cddbdd7d17578cedd012449
3
+ size 8602554152
bf16/model-00004-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7ec54283165534818c2f4213e50254e0efee05a0b0a5c8879be5d68a24539cc7
3
+ size 8598786296
bf16/model-00005-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2672e9f1a287d45e95e1f3ea77d57655c0deed29a24cf3895fa1a5fe74c0ea9a
3
+ size 8602554048
bf16/model-00006-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:957d724b4f83757843dcbfc5709e4d2ae97f6b67c09bd28f6291142765eef80f
3
+ size 8741916520
bf16/model-00007-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4e79c3e9aab392a9bf1bc953726f816334e87156212431847924f2930059f35
3
+ size 8606225096
bf16/model-00008-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:911c61ea03e3bae235324fd951de9dc4439910306019df8773cac120616bdd04
3
+ size 8602554144
bf16/model-00009-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a5ec95bc992f728a66475dd5332f4950d085f73f90842ca27c1ad906dbec0da7
3
+ size 8598786392
bf16/model-00010-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84388b59c6e95df0cf6c8c6a4e883838cfa350c24c779c4e13faca5e79963a13
3
+ size 8602553952
bf16/model-00011-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:92719ee92a870bcebc2f845bdc74d4a712463ae85ee7aa2e01ffefcf06f2b877
3
+ size 8602554152
bf16/model-00012-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9621cae5c61924ff7c527cc4190d52630cdd5b2a1d804f06e4ae0ad4584ec08
3
+ size 2642451624
bf16/model-00013-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:48ee00fb5f6ed74887baa6b6ed4543f0e0bdbe21ca4be8edd49d7f2f73a19415
3
+ size 8598757320
bf16/model-00014-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:afa0b8285e744bc9e7d5979497b14e82fe496ef6e015ee2ff135afa0cef59134
3
+ size 8602554136
bf16/model-00015-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0093228cadc519d4a84a55be1ac137d106e0fba17a42a9a6d3fc2778139ef8af
3
+ size 8598786408
bf16/model-00016-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca5693250fa3c843af0f8ca235bbe39d3cd94c3c81dd8c26b2fc863d94fc583b
3
+ size 8602553936
bf16/model-00017-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f22347b3f4320095f9721b8cef52fe5e754a1be6614cd0918b063294b323d5e
3
+ size 8602554152
bf16/model-00018-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e07d059f64aee59b4f292650274d56a6f9da1db66cb383b2e5e66b9e2bd0a4b3
3
+ size 8598786312
bf16/model-00019-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0ff57ce4bfacc481bad4608aaf7d6b07753b18c00cbf248f2a7f09959d40cf33
3
+ size 8602554032
bf16/model-00020-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ca2665dd60509053efe097782a26e1d6afabf634a1355998ab5d80c11651806
3
+ size 8602554160
bf16/model-00021-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac8e4a14f424b150e38de9a5e918a0265694d7cae6fc4022fdf656316b2a7334
3
+ size 8598786512
bf16/model-00022-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac630af60a0137470f084f6e7663d9cd79a613579bac9f36f986cfd8b8c1a743
3
+ size 8602554416
bf16/model-00023-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c45e06a2c68de14b4606500402eebbed0d8c3ea56a7b6769a703b75724e5d4d
3
+ size 8598786704
bf16/model-00024-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20aa00a938246c1f16b423bb4e7bda3b48f29592374dfd715e52180979ae1163
3
+ size 8602554224
bf16/model-00025-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d08212652d7a2ce2b3bd2ae047e21f4dc1ca6b6992712ca1b2aa7a559d35d6c9
3
+ size 8602554448
bf16/model-00026-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:18267e5af6a284cd6dea1f270e4cd349e725cabd8f61d5704a8af655436ad7f1
3
+ size 8598786616
bf16/model-00027-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:46a123fca970f425c84240b634581a343385c6e1532c5c58fcabde3c356929ad
3
+ size 8602554312
bf16/model-00028-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4550d9673a8cfd9384126162819b25e356307233a2dd2ea0147a457d45dcc6b
3
+ size 8602554448
bf16/model-00029-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d478d92ad4f462e710e89726d94622c2b5ed336e7974dcd3f8bb63c6afc0c29
3
+ size 8598786520
bf16/model-00030-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a381a505dc66464d5eb529b2111ec69ccd44f3fd9dd2b4d295e909316392961
3
+ size 8602554408
bf16/model-00031-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec87d33fadf506fbc2318affca8fb8bc9162c83c70ffd8ea3f0f8cabb5b6bb74
3
+ size 8598786720
bf16/model-00032-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:11204b3279a121f0d2fb637de1fa5908bd0c0a4641953c04f773c94491866bb8
3
+ size 8602554208
bf16/model-00033-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f1d1ca95d59ce27ce710cafc353eca711fee14b988ca0aed8d22aafeff92665f
3
+ size 8602554448
bf16/model-00034-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:75f6fe7468ed7aa03864c10b1233c950fd9ce5c6b6dff180b59d52a99feedf31
3
+ size 3493899088
bf16/model-00035-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c621436d0d0453aa093a327c9eb097430b4d68882eae139f92866ccd9a5d95a
3
+ size 8598757608
bf16/model-00036-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2fe968e7f0a478ac3ebc4d0d88b56e266993a2763126402dbc14592ec19d9dad
3
+ size 8602554424
bf16/model-00037-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:070251c51427da143688da91eca652071f663e43c0e3c94502cc2ec66fc7f182
3
+ size 8598786704
bf16/model-00038-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ef4ebad50764988f94e8102cecf7a7dbbcf72adecae53871577f115d7f46327
3
+ size 8602554224
bf16/model-00039-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77584d7a08ba65ff81ebac8032a3b2bcd41eba1c54e04a46e21bc09f08b837cd
3
+ size 8602554448
bf16/model-00040-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cd4a4b77999df7ee8c2d421a033c3eded421d7c3c6b161bb776651ed6ec828de
3
+ size 8598786608
bf16/model-00041-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b360c2c18d109a69fc502d65639deaa2f0aea8ece17f6ba8f4c550e9ab0921a5
3
+ size 8602554320
bf16/model-00042-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d14e623c5f2ccefbfab81bfebec0e3c8260a47213de4456f1669d18dde4eae6
3
+ size 8602554448
bf16/model-00043-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4abc132638f90f72c6da215fe83e0374b685a0f70f9b60db40f8ed72d3e7f34
3
+ size 8598786504
bf16/model-00044-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:172ac08c6578a9f38fd4d297a335d8e930a65b7ac3ba35c0436456c8df6e4c01
3
+ size 8602554416
bf16/model-00045-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d48cedb731d75f45e17c4d8bece01c055a82324a57eb925d41bf28dc69404a8
3
+ size 8598786704
bf16/model-00046-of-000163.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:622c4d76c837a48045d72bce160319f952c5c504cc285365af3812135fd75152
3
+ size 8602554224