jymcc committed · commit 96f2e89 (verified) · 1 parent: 6d887de

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -51,12 +51,12 @@ It not only possesses strong expertise in TCM, but also supports TCM multimodal
 
 
 # <span>Usage</span>
-You can use ShizhenGPT-7B-LLM in the same way as `Qwen2.5-7B-Instruct`. You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
+You can use ShizhenGPT-7B-LLM in the same way as [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-8B",torch_dtype="auto",device_map="auto")
-tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-8B")
+model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/ShizhenGPT-7B-LLM",torch_dtype="auto",device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/ShizhenGPT-7B-LLM")
 
 input_text = "为什么我总是手脚冰凉,是阳虚吗?"
 messages = [{"role": "user", "content": input_text}]
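
The README snippet in this hunk ends right after building `messages`. For reference, a minimal, self-contained completion of the direct-inference path is sketched below; the chat-template call and the generation settings are assumptions based on the usual Qwen2.5-style `transformers` workflow and are not part of this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/ShizhenGPT-7B-LLM", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/ShizhenGPT-7B-LLM")

# "Why are my hands and feet always cold? Is it yang deficiency?"
input_text = "为什么我总是手脚冰凉,是阳虚吗?"
messages = [{"role": "user", "content": input_text}]

# Assumed continuation: render the chat template, generate, and decode only the new tokens.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```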
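
For the vllm deployment path mentioned in the changed line, a hedged offline-inference sketch using vLLM's documented `LLM`/`SamplingParams` Python API could look like the following; the sampling values are illustrative, not taken from this repository.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "FreedomIntelligence/ShizhenGPT-7B-LLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "为什么我总是手脚冰凉,是阳虚吗?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into vLLM and generate a single completion.
llm = LLM(model=model_id)
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=512))
print(outputs[0].outputs[0].text)
```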