Update README.md
README.md
CHANGED
@@ -4,35 +4,31 @@ base_model:
 library_name: transformers
 ---
 
-
-<a href="README_CN.md">中文</a>  | English</a>
-</p>
-<br><br>
+
 
 <p align="center">
 <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
 </p><p></p>
 
 
-<p align="center">
-🤗 <a href="https://huggingface.co/tencent/"><b>Hugging Face</b></a> |
-<img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/> <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a> |
-<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/6594d0c6c5f1cd69a48b261d/04ZNQlAfs08Bfg4B1o3XO.png" width="14"/> <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
-</p>
 
 <p align="center">
-
+🤗 <a href="https://huggingface.co/tencent/Hunyuan-4B-Instruct-FP8"><b>Hugging Face</b></a> |
+🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
 🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
-🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
+🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a> |
+🤖 <a href="https://www.modelscope.cn/models/Tencent-Hunyuan/Hunyuan-4B-Instruct-FP8"><b>ModelScope</b></a>
 </p>
 
+
 <p align="center">
-<a href="https://github.com/Tencent-Hunyuan/Hunyuan-
-<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-
-<a href="https://github.com/Tencent-Hunyuan/Hunyuan-
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-4B"><b>GITHUB</b></a> |
+<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-4B"><b>cnb.cool</b></a> |
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-4B/blob/main/LICENSE.txt"><b>LICENSE</b></a> |
+<a href="https://raw.githubusercontent.com/Tencent-Hunyuan/Hunyuan-A13B/main/assets/1751881231452.jpg"><b>WeChat</b></a> |
+<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
 </p>
 
-
 ## Model Introduction
 
 Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
@@ -47,7 +43,7 @@ We have released a series of Hunyuan dense models, comprising both pre-trained a
 - **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
 
 ## Related News
-* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** ,
+* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Pretrain** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Pretrain** ,**Hunyuan-7B-Instruct** on Hugging Face.
 <br>
 
 
|
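The updated card declares `library_name: transformers` and its badges link the `tencent/Hunyuan-4B-Instruct-FP8` repository, so a minimal loading sketch with the standard `transformers` chat API is shown below. The repo id is taken from the README badges; the `trust_remote_code` flag and generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal loading sketch for the checkpoint linked in the updated README.
# The repo id comes from the README badges; trust_remote_code and the
# generation settings are illustrative assumptions, not taken from this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-4B-Instruct-FP8"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # requires `accelerate`; places layers on available devices
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Summarize the Hunyuan model series in one sentence."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

`device_map="auto"` is one way to match the README's edge-to-production deployment claim, letting `transformers` spread layers across whatever devices are available.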