Upload folder using huggingface_hub

README.md CHANGED

@@ -7,7 +7,7 @@ pipeline_tag: image-text-to-text
 
 [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)
 
-[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/
+[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) \[🌟 [魔搭社区](https://modelscope.cn/organization/OpenGVLab) | [教程](https://mp.weixin.qq.com/s/OUaVLkxlk1zhFb1cvMCFjg) \]
 
 ## Introduction
 
@@ -416,6 +416,32 @@ sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config
 print(sess.response.text)
 ```
 
+#### Service
+
+For lmdeploy v0.5.0, please configure the chat template first. Create the following JSON file as `chat_template.json`:
+
+```json
+{
+    "model_name": "internlm2",
+    "meta_instruction": "我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
+    "stop_words": ["<|im_start|>", "<|im_end|>"]
+}
+```
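The same template can also be used with lmdeploy's offline `pipeline` API rather than the server. A minimal sketch, assuming this lmdeploy version exposes `ChatTemplateConfig` with fields mirroring the JSON keys above:

```python
from lmdeploy import pipeline, ChatTemplateConfig

# Mirror chat_template.json in Python for offline use; the ChatTemplateConfig
# fields are assumed to correspond to the JSON keys above.
pipe = pipeline(
    'OpenGVLab/InternVL2-2B',
    chat_template_config=ChatTemplateConfig(model_name='internlm2'),
)
response = pipe('Hello, who are you?')
print(response.text)
```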
+
+LMDeploy's `api_server` lets you pack a model into a service with a single command, and the RESTful APIs it provides are compatible with OpenAI's interfaces. Below is an example of starting the service:
+
+```shell
+lmdeploy serve api_server OpenGVLab/InternVL2-2B --backend turbomind --chat-template chat_template.json
+```
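Because the endpoints follow OpenAI's schema, the official `openai` Python client can talk to this server directly. A minimal sketch, assuming the server above is running locally on the default port; the image URL is a placeholder to replace with your own:

```python
from openai import OpenAI

# The local server does not validate the API key, but the client requires one.
client = OpenAI(api_key='none', base_url='http://0.0.0.0:23333/v1')

# Ask the server for the served model id instead of hard-coding it.
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'Describe this image.'},
            {'type': 'image_url',
             'image_url': {'url': 'https://example.com/tiger.jpeg'}},
        ],
    }],
)
print(response.choices[0].message.content)
```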
+
+The default port of `api_server` is `23333`. After the server is launched, you can communicate with it in the terminal through `api_client`:
+
+```shell
+lmdeploy serve api_client http://0.0.0.0:23333
+```
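For scripted use, the same RESTful routes can be called directly instead of the interactive `api_client`. A short sketch with `requests`, assuming the OpenAI-style `/v1/models` and `/v1/chat/completions` routes of the server above:

```python
import requests

base = 'http://0.0.0.0:23333'

# Read the served model id from the OpenAI-style models route.
model = requests.get(f'{base}/v1/models').json()['data'][0]['id']

# Send a plain-text chat request to the completions route.
reply = requests.post(f'{base}/v1/chat/completions', json={
    'model': model,
    'messages': [{'role': 'user', 'content': 'Hello! Who are you?'}],
})
print(reply.json()['choices'][0]['message']['content'])
```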
+
+You can browse and try out the `api_server` APIs online through the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
+
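The Swagger UI renders a machine-readable schema, so the available routes can also be listed programmatically. A sketch, assuming the server exposes its schema at the conventional `/openapi.json` path:

```python
import requests

# Fetch the OpenAPI schema behind the Swagger UI and print each route.
schema = requests.get('http://0.0.0.0:23333/openapi.json').json()
for path in sorted(schema['paths']):
    print(path)
```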
 ## License
 
 This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.

@@ -609,6 +635,32 @@ sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config
 print(sess.response.text)
 ```
 
+#### API Deployment
+
+For lmdeploy v0.5.0, please configure the chat template first. Create the following JSON file as `chat_template.json`:
+
+```json
+{
+    "model_name": "internlm2",
+    "meta_instruction": "我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
+    "stop_words": ["<|im_start|>", "<|im_end|>"]
+}
+```
+
+LMDeploy's `api_server` lets you pack a model into a service with a single command, and the RESTful APIs it provides are compatible with OpenAI's interfaces. Below is an example of starting the service:
+
+```shell
+lmdeploy serve api_server OpenGVLab/InternVL2-2B --backend turbomind --chat-template chat_template.json
+```
+
+The default port of `api_server` is `23333`. After the server is launched, you can communicate with it in the terminal through `api_client`:
+
+```shell
+lmdeploy serve api_client http://0.0.0.0:23333
+```
+
+You can browse and try out the `api_server` APIs online through the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
+
 ## License
 
 This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.