Update README.md
README.md
```diff
@@ -51,12 +51,12 @@ It not only possesses strong expertise in TCM, but also supports TCM multimodal
 
 
 # Usage
-You can use ShizhenGPT-7B-LLM in the same way as
+You can use ShizhenGPT-7B-LLM in the same way as [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/
-tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/
+model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/ShizhenGPT-7B-LLM",torch_dtype="auto",device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/ShizhenGPT-7B-LLM")
 
 input_text = "为什么我总是手脚冰凉,是阳虚吗?"
 messages = [{"role": "user", "content": input_text}]
```
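The hunk ends before the generation step. For orientation only, here is a minimal sketch of how the direct-inference example would typically continue with the standard transformers chat workflow used by Qwen2.5-style models; the `apply_chat_template`/`generate` calls and the `max_new_tokens` value are assumptions, not part of the committed README.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/ShizhenGPT-7B-LLM", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/ShizhenGPT-7B-LLM")

# "Why are my hands and feet always cold? Is it yang deficiency?"
input_text = "为什么我总是手脚冰凉,是阳虚吗?"
messages = [{"role": "user", "content": input_text}]

# Render the chat template, tokenize, and move tensors to the model's device.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Generate a reply and decode only the newly generated tokens.
output_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```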
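The new paragraph also points to vllm and Sglang for serving. As a rough illustration of the vLLM route, a sketch assuming a recent vLLM release that provides the offline `LLM.chat` API (model name taken from the diff above; the sampling values are placeholders):

```python
from vllm import LLM, SamplingParams

# Load the model for offline batched inference (vLLM handles device placement).
llm = LLM(model="FreedomIntelligence/ShizhenGPT-7B-LLM")
params = SamplingParams(temperature=0.7, max_tokens=512)

# LLM.chat applies the model's chat template before generating.
messages = [{"role": "user", "content": "为什么我总是手脚冰凉,是阳虚吗?"}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```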