zRzRzRzRzRzRzR committed on
Commit 02437f7 · 1 Parent(s): 145d799
Files changed (1): README.md +12 -0
README.md CHANGED
@@ -13,6 +13,18 @@ tags:
 
 # GLM-4.1V-9B-Thinking
 
+<div align="center">
+<img src=https://raw.githubusercontent.com/THUDM/GLM-4.1V-Thinking/99c5eb6563236f0ff43605d91d107544da9863b2/resources/logo.svg width="40%"/>
+</div>
+<p align="center">
+📖 View the GLM-4.1V-9B-Thinking <a href="https://arxiv.org/abs/2507.01006" target="_blank">paper</a>.
+<br>
+💡 Try the <a href="https://huggingface.co/spaces/THUDM/GLM-4.1V-9B-Thinking-Demo" target="_blank">Hugging Face</a> or <a href="https://modelscope.cn/studios/ZhipuAI/GLM-4.1V-9B-Thinking-Demo" target="_blank">ModelScope</a> online demo for GLM-4.1V-9B-Thinking.
+<br>
+📍 Use the GLM-4.1V-9B-Thinking API at the <a href="https://www.bigmodel.cn/dev/api/visual-reasoning-model/GLM-4.1V-Thinking">Zhipu Foundation Model Open Platform</a>.
+</p>
+
+
 ## Model Introduction
 
 Vision-Language Models (VLMs) have become foundational components of intelligent systems. As real-world AI tasks grow