Rico committed
Commit · 6f7bdf5
1 Parent(s): cca32c9
[UPDATE] update readme
README.md CHANGED
@@ -1,7 +1,3 @@
----
-license: apache-2.0
-library_name: transformers
----
 <div align="center">
 <picture>
 <img src="figures/stepfun-logo.png" width="30%" alt="StepFun: Cost-Effective Multimodal Intelligence">
@@ -17,13 +13,13 @@ library_name: transformers
 
 <div align="center" style="line-height: 1;">
 <a href="https://github.com/stepfun-ai/Step3" target="_blank"><img alt="GitHub" src="https://img.shields.io/badge/GitHub-StepFun-white?logo=github&logoColor=white"/></a>
-<a href="https://www.modelscope.cn/models/stepfun-ai/step3" target="_blank"><img alt="ModelScope" src="https://img.shields.io/badge
+<a href="https://www.modelscope.cn/models/stepfun-ai/step3" target="_blank"><img alt="ModelScope" src="https://img.shields.io/badge/ModelScope-StepFun-white?logo=modelscope&logoColor=white"/></a>
 <a href="https://x.com/StepFun_ai" target="_blank"><img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-StepFun-white?logo=x&logoColor=white"/></a>
 </div>
 
 <div align="center" style="line-height: 1;">
 <a href="https://discord.com/invite/XHheP5Fn" target="_blank"><img alt="Discord" src="https://img.shields.io/badge/Discord-StepFun-white?logo=discord&logoColor=white"/></a>
-<a href="https://huggingface.co/stepfun-ai/step3
+<a href="https://huggingface.co/stepfun-ai/step3/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-blue?&color=blue"/></a>
 </div>
 
 <div align="center">
@@ -335,7 +331,7 @@ Note: Parts of the evaluation results are reproduced using the same settings.
 
 ### Inference with Hugging Face Transformers
 
-We introduce how to use our model at inference stage using transformers library. It is recommended to use python=3.10, torch>=2.1.0, and transformers=4.54.0 as the development environment.We currently only support bf16 inference, and multi-patch is supported by default. This behavior is aligned with vllm and sglang.
+We introduce how to use our model at inference stage using transformers library. It is recommended to use python=3.10, torch>=2.1.0, and transformers=4.54.0 as the development environment.We currently only support bf16 inference, and multi-patch for image preprocessing is supported by default. This behavior is aligned with vllm and sglang.
 
 
 ```python
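# The model card's own code block is truncated at this point in the diff. The
# following is a hypothetical minimal sketch (not the README's snippet) of the
# bf16-only inference setup described in the paragraph above, assuming the
# repository loads through the standard transformers Auto* classes with
# trust_remote_code=True; the real model card may instead use an image
# processor and multimodal chat inputs for the multi-patch preprocessing.
# Environment per the README: python=3.10, torch>=2.1.0, transformers=4.54.0,
# e.g. pip install "torch>=2.1.0" "transformers==4.54.0".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stepfun-ai/step3"  # repo id taken from the badge links above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the README states only bf16 inference is supported
    device_map="auto",
    trust_remote_code=True,
)

# Text-only prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Give a one-sentence summary of the Step3 model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```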