Upload README_CN.md with huggingface_hub
README_CN.md · CHANGED · +18 −17
```diff
@@ -9,9 +9,9 @@
 
 
 <p align="center">
-🤗 <a href="https://huggingface.co/tencent/"><b>
-
-
+🤗 <a href="https://huggingface.co/tencent/"><b>HuggingFace</b></a> |
+🤖 <a href="https://modelscope.cn/models/Tencent-Hunyuan/"><b>ModelScope</b></a> |
+🪡 <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
 </p>
 
 <p align="center">
```
```diff
@@ -21,14 +21,15 @@
 </p>
 
 <p align="center">
-<a href="https://github.com/Tencent-Hunyuan/
-<a href="https://cnb.cool/tencent/hunyuan/
-<a href="https://github.com/Tencent-Hunyuan/Hunyuan-
+<a href="https://github.com/Tencent-Hunyuan/"><b>GITHUB</b></a> |
+<a href="https://cnb.cool/tencent/hunyuan/"><b>cnb.cool</b></a> |
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-4B/blob/main/LICENSE"><b>LICENSE</b></a> |
+<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
+<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
 </p>
 
 
 
-
 ## Model Introduction
 
 Hunyuan is Tencent's open-source series of efficient large language models, designed for flexible deployment across diverse computing environments. From edge devices to high-concurrency production systems, these models deliver strong performance across a wide range of scenarios, backed by advanced quantization support and ultra-long-context capabilities.
```
```diff
@@ -37,7 +38,7 @@
 
 
 ### Core Features and Advantages
-- **Hybrid reasoning support**: supports both fast-thinking and slow-thinking modes, which users can choose between flexibly
+- **Hybrid reasoning support**: supports both fast-thinking and slow-thinking modes, which users can choose between flexibly
 - **Ultra-long context understanding**: natively supports a 256K context window, maintaining stable performance on long-text tasks
 - **Enhanced agent capabilities**: optimized agent abilities, with leading results on agent benchmarks such as BFCL-v3, τ-Bench, and C3-Bench
 - **Efficient inference**: uses grouped query attention (GQA) and supports multiple quantization formats for efficient inference
```
```diff
@@ -101,12 +102,12 @@ messages = [
     {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
 ]
 tokenized_chat = tokenizer.apply_chat_template(
-    messages,
+    messages,
     tokenize=False
     add_generation_prompt=True,
     enable_thinking=True
 )
-
+
 model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
 model_inputs.pop("token_type_ids", None)
 outputs = model.generate(**model_inputs, max_new_tokens=4096)
```
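Note that even after this change, the snippet in the hunk above does not run as written: `tokenize=False` is missing its trailing comma, and the template output is bound to `tokenized_chat` but later consumed as `text`. A minimal corrected sketch, assuming the standard `transformers` loading pattern; the model path here is an assumption mirroring the deployment examples below:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "tencent/Hunyuan-A13B-Instruct"  # assumption: the model used elsewhere in this README

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
]
# The README assigns this to `tokenized_chat` but then reads `text`; use one name.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,               # the README snippet is missing this comma
    add_generation_prompt=True,
    enable_thinking=True,         # slow-thinking mode, per the hybrid-reasoning feature above
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
model_inputs.pop("token_type_ids", None)  # drop ids that generate() does not accept
outputs = model.generate(**model_inputs, max_new_tokens=4096)
print(tokenizer.decode(outputs[0][model_inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```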
```diff
@@ -140,7 +141,7 @@ print(f"answer_content:{answer_content}\n\n")
 
 
 
-## Training Data Format
+## Training Data Format
 
 To fine-tune our Instruct model, we recommend processing the data into the following formats, for the slow-thinking and fast-thinking scenarios respectively.
 
```
```diff
@@ -288,7 +289,7 @@ AWQ uses a small amount of calibration data (no training required) to compute activation magnitudes
 
 
 
-## Inference and Deployment
+## Inference and Deployment
 
 HunyuanLLM can be deployed with TensorRT-LLM, vLLM, or SGLang. To simplify deployment, HunyuanLLM provides pre-built Docker images; see the sections below.
 
```
```diff
@@ -330,7 +331,7 @@ def setup_llm(args):
         free_gpu_memory_fraction=args.kv_cache_fraction,
     )
     spec_config = None
-
+
     hf_ckpt_path="$your_hunyuan_model_path"
     tokenizer = AutoTokenizer.from_pretrained(hf_ckpt_path, trust_remote_code=True)
     llm = LLM(
```
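The hunk above shows only a fragment of the TensorRT-LLM launcher; the full `setup_llm()` is not part of this diff. As rough orientation, the pieces typically combine as in the following sketch of the TensorRT-LLM LLM API. The checkpoint path, memory fraction, and sampling values are assumptions, not values from the README:

```python
# Sketch of how the fragment above slots into a TensorRT-LLM LLM-API launcher.
# Assumed values: checkpoint path, free_gpu_memory_fraction, max_tokens.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import KvCacheConfig
from transformers import AutoTokenizer

hf_ckpt_path = "tencent/Hunyuan-A13B-Instruct"  # README placeholder: "$your_hunyuan_model_path"
kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.85)  # stands in for args.kv_cache_fraction

tokenizer = AutoTokenizer.from_pretrained(hf_ckpt_path, trust_remote_code=True)
llm = LLM(model=hf_ckpt_path, kv_cache_config=kv_cache_config, trust_remote_code=True)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}], tokenize=False, add_generation_prompt=True
)
outputs = llm.generate([prompt], SamplingParams(max_tokens=256))
print(outputs[0].outputs[0].text)
```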
````diff
@@ -472,13 +473,13 @@ curl -X POST "http://localhost:8000/v1/chat/completions" \
 [hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm](https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags). You only need to download the model files and start the Docker container with the commands below to begin running inference.
 ```shell
 # Download the model:
-# ModelScope:
+# ModelScope:
 modelscope download --model Tencent-Hunyuan/Hunyuan-A13B-Instruct
 # Huggingface: vLLM will download it automatically
 
 # Pull the image
 In China:
-docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-vllm
+docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-vllm
 Outside China:
 docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm
 
````
```diff
@@ -487,7 +488,7 @@ docker run --privileged --user root --net=host --ipc=host \
     -v ~/.cache:/root/.cache/ \
     --gpus=all -it --entrypoint python docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-vllm \
     -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 \
-    --tensor-parallel-size 4 --model tencent/Hunyuan-A13B-Instruct --trust-remote-code
+    --tensor-parallel-size 4 --model tencent/Hunyuan-A13B-Instruct --trust-remote-code
 
 # Serve the model downloaded via ModelScope
 docker run --privileged --user root --net=host --ipc=host \
```
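Once one of the containers above is running, it exposes vLLM's OpenAI-compatible endpoint on port 8000, the same endpoint the `curl` example in the earlier hunk targets. A minimal client sketch, assuming the `openai` Python package; the dummy API key reflects that the server is started without authentication:

```python
# Query the vLLM OpenAI-compatible server started by the docker command above.
# base_url and port mirror the --host/--port flags of the api_server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="tencent/Hunyuan-A13B-Instruct",  # mirrors the --model flag above
    messages=[{"role": "user", "content": "Write a short summary of the benefits of regular exercise"}],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```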
```diff
@@ -740,7 +741,7 @@ print(response)
 #### FP8/Int4 quantized model deployment:
 Support for fp8 and int4 quantized models in sglang is in progress; stay tuned.
 
-## Interactive Web Demo
+## Interactive Web Demo
 A web demo of hunyuan-A13B is now available. Visit https://hunyuan.tencent.com/?model=hunyuan-a13b to try out our model.
 
 
```