JunHowie committed · verified · Commit f4e66e9 · Parent(s): 5b4daa9

Update README.md

Files changed (1): README.md +30 -30
README.md CHANGED
@@ -8,20 +8,20 @@ tags:
  - 量化修复
  - vLLM
  base_model:
- - ZhipuAI/GLM-4.5
  base_model_relation: quantized
  ---
  # GLM-4.5-GPTQ-Int4-Int8Mix
- Base model: [ZhipuAI/GLM-4.5](https://www.modelscope.cn/models/ZhipuAI/GLM-4.5)


- ### 【vLLM Launch Command (Single Node, 8 GPUs)】
- <i>Note: when launching this model on 8 GPUs, you must pass `--enable-expert-parallel`; otherwise the expert tensors cannot be split evenly across the tensor-parallel ranks. The flag is not needed on 4 GPUs.</i>
  ```
  CONTEXT_LENGTH=32768

  vllm serve \
- tclf90/GLM-4.5-GPTQ-Int4-Int8Mix \
  --served-model-name GLM-4.5-GPTQ-Int4-Int8Mix \
  --enable-expert-parallel \
  --swap-space 16 \
@@ -34,71 +34,71 @@ vllm serve \
  --disable-log-requests \
  --host 0.0.0.0 \
  --port 8000
  ```

- ### 【Dependencies】

  ```
  vllm==0.10.0
  ```

- ### 【Model Update Date】
  ```
  2025-07-30
- 1. Initial commit
  ```

  ### 【Model List】

- | File Size | Last Updated |
  |---------|--------------|
  | `192GB` | `2025-07-30` |



- ### 【Model Download】

  ```python
- from modelscope import snapshot_download
- snapshot_download('tclf90/GLM-4.5-GPTQ-Int4-Int8Mix', cache_dir="your_local_path")
  ```


- ### 【Overview】
  # GLM-4.5

  <div align="center">
  <img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
  </div>
  <p align="center">
- 👋 Join our <a href="https://github.com/zai-org/GLM-4.5/blob/main/resources/WECHAT.md" target="_blank">WeChat group</a>.
  <br>
- 📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>.
  <br>
- 📍 Use the GLM-4.5 API on the <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform</a>.
  <br>
- 👉 Try <a href="https://chat.z.ai">GLM-4.5</a> with one click.
  </p>

- ## Model Introduction
-
- The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and agent capabilities to meet the complex demands of agent applications.

- Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models offering two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses.

- We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT license and can be used commercially and for secondary development.

- In our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves an excellent score of **63.2**, ranking **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers a competitive **59.8** while maintaining superior efficiency.

  ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)

- For more evaluation results, showcases, and technical details, please visit our [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon.

- The model code, tool parser, and reasoning parser can be found in the implementations in [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py), and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).

- ## Quick Start

- Please refer to our [GitHub](https://github.com/zai-org/GLM-4.5) project for details.
  - 量化修复
  - vLLM
  base_model:
+ - zai-org/GLM-4.5
  base_model_relation: quantized
  ---
  # GLM-4.5-GPTQ-Int4-Int8Mix
+ Base model: [zai-org/GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)


+ ### 【vLLM Launch Command (Single Node, 8 GPUs)】
+ <i>Note: when launching this model on 8 GPUs, you must pass `--enable-expert-parallel`; otherwise expert tensor partitioning fails because the expert dimension is not evenly divisible by the tensor-parallel size. The flag is not required on 4 GPUs.</i>
  ```
  CONTEXT_LENGTH=32768

  vllm serve \
+ QuantTrio/GLM-4.5-GPTQ-Int4-Int8Mix \
  --served-model-name GLM-4.5-GPTQ-Int4-Int8Mix \
  --enable-expert-parallel \
  --swap-space 16 \

  --disable-log-requests \
  --host 0.0.0.0 \
  --port 8000
+
  ```
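Once launched, `vllm serve` exposes an OpenAI-compatible HTTP API on the configured host and port. The sketch below builds a chat-completion request against that endpoint; the payload shape follows vLLM's OpenAI-compatible server, the model name must match `--served-model-name` above, and the actual network call is left commented out since it requires a running server:

```python
import json
from urllib import request

# Endpoint exposed by `vllm serve` (matches --host/--port in the launch command).
URL = "http://localhost:8000/v1/chat/completions"

# "model" must match --served-model-name from the launch command.
payload = {
    "model": "GLM-4.5-GPTQ-Int4-Int8Mix",
    "messages": [{"role": "user", "content": "Briefly introduce GLM-4.5."}],
    "max_tokens": 256,
}

req = request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# with request.urlopen(req) as resp:  # uncomment once the server is running
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-style client (e.g. the `openai` Python package pointed at `http://localhost:8000/v1`) works the same way.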

+ ### 【Dependencies】

  ```
  vllm==0.10.0
  ```

+ ### 【Model Update】
  ```
  2025-07-30
+ 1. Initial commit
  ```

+ ### 【Model List】

+ | File Size | Last Updated |
  |---------|--------------|
  | `192GB` | `2025-07-30` |



+ ### 【Model Download】

  ```python
+ from huggingface_hub import snapshot_download
+ snapshot_download('QuantTrio/GLM-4.5-GPTQ-Int4-Int8Mix', cache_dir="your_local_path")
  ```
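`snapshot_download` returns the local snapshot directory, which can be passed to `vllm serve` in place of the repo id. A minimal sketch of assembling that command; the `config.json` check is a convention of Hugging Face model repos, and the helper name is illustrative:

```python
from pathlib import Path

def build_serve_command(model_dir: str,
                        served_name: str = "GLM-4.5-GPTQ-Int4-Int8Mix") -> list[str]:
    """Build a `vllm serve` argv for a locally downloaded snapshot."""
    # A downloaded model directory should contain config.json next to the shards.
    if not (Path(model_dir) / "config.json").is_file():
        raise FileNotFoundError(f"{model_dir} does not look like a model directory")
    return ["vllm", "serve", model_dir, "--served-model-name", served_name]

# Example (after snapshot_download has populated the directory):
# cmd = build_serve_command("/data/models/GLM-4.5-GPTQ-Int4-Int8Mix")
# subprocess.run(cmd) would then start the server; add the flags shown earlier as needed.
```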


+ ### 【Overview】
  # GLM-4.5

  <div align="center">
  <img src=https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg width="15%"/>
  </div>
  <p align="center">
+ 👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
  <br>
+ 📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>.
  <br>
+ 📍 Use the GLM-4.5 API on the <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or the <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>.
  <br>
+ 👉 Try <a href="https://chat.z.ai">GLM-4.5</a> with one click.
  </p>
+
+ ## Model Introduction

+ The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and agent capabilities to meet the complex demands of agent applications.

+ Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models offering two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses.
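With the vLLM deployment above, the reasoning mode can typically be toggled per request via chat-template kwargs, which vLLM forwards to the model's chat template. The exact key (`enable_thinking` below) depends on the chat template shipped with the model, so treat these field names as an assumption:

```python
def chat_payload(prompt: str, thinking: bool = True) -> dict:
    """Build an OpenAI-style chat payload; "chat_template_kwargs" is passed
    through by vLLM to the chat template (key name assumed here)."""
    return {
        "model": "GLM-4.5-GPTQ-Int4-Int8Mix",
        "messages": [{"role": "user", "content": prompt}],
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

# Thinking mode for complex reasoning and tool use:
deliberate = chat_payload("Prove that sqrt(2) is irrational.", thinking=True)
# Non-thinking mode for immediate responses:
fast = chat_payload("What is the capital of France?", thinking=False)
```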
 
+ We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT license and can be used commercially and for secondary development.

+ As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, placing **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers a competitive **59.8** while maintaining superior efficiency.

  ![bench](https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/bench.png)

+ For more evaluation results, showcases, and technical details, please visit our [technical blog](https://z.ai/blog/glm-4.5). The technical report will be released soon.

+ The model code, tool parser, and reasoning parser can be found in the implementations in [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py), and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).

+ ## Quick Start

+ Please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5) for more details.