Upload folder using huggingface_hub

README.md (changed)

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
```sh
pip install lmdeploy
```

LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.

#### A 'Hello, world' example

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B'
# System prompt (in Chinese): "I am InternVL (书生·万象), a multimodal foundation model
# jointly developed by Shanghai AI Laboratory and multiple partner institutions. The
# laboratory is committed to original technological innovation and open-source
# collaboration to advance scientific progress and industrial development."
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态基础模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
chat_template_config = ChatTemplateConfig('internlm2-chat')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model, chat_template_config=chat_template_config,
                backend_config=TurbomindEngineConfig(session_len=8192))
response = pipe(('describe this image', image))
print(response)
```

If an `ImportError` occurs while executing this example, install the required dependency packages as prompted.

#### Multi-images inference

When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result the size of the context window typically needs to be increased.

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B'
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态基础模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
chat_template_config = ChatTemplateConfig('internlm2-chat')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model, chat_template_config=chat_template_config,
                backend_config=TurbomindEngineConfig(session_len=8192))

image_urls = [
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]

images = [load_image(img_url) for img_url in image_urls]
response = pipe(('describe these images', images))
print(response)
```
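
Each image is converted into a model-dependent number of visual tokens before reaching the language model (for InternVL2, roughly a few hundred per image tile), which is why these examples raise `session_len` to 8192.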

#### Batch prompts inference

Conducting inference with batch prompts is quite straightforward; just place them within a list structure:

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B'
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态基础模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
chat_template_config = ChatTemplateConfig('internlm2-chat')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model, chat_template_config=chat_template_config,
                backend_config=TurbomindEngineConfig(session_len=8192))

image_urls = [
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
    'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
response = pipe(prompts)
print(response)
```
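
The pipeline returns one response per prompt, in the same order as the input list.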

#### Multi-turn conversation

There are two ways to run multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and use the method introduced above (a sketch of that approach follows the example below); the other is to use the `pipeline.chat` interface:

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig, GenerationConfig
from lmdeploy.vl import load_image

model = 'OpenGVLab/InternVL2-2B'
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态基础模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
chat_template_config = ChatTemplateConfig('internlm2-chat')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model, chat_template_config=chat_template_config,
                backend_config=TurbomindEngineConfig(session_len=8192))

image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```
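
For the message-based approach, here is a minimal sketch. It assumes the pipeline accepts GPT-4V-style message lists; the exact schema may vary across LMDeploy versions, so verify against the LMDeploy VLM documentation:

```python
# Minimal sketch of the OpenAI-message approach (assumption: the pipeline
# accepts GPT-4V-style message lists; verify against your LMDeploy version).
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig

pipe = pipeline('OpenGVLab/InternVL2-2B',
                chat_template_config=ChatTemplateConfig('internlm2-chat'),
                backend_config=TurbomindEngineConfig(session_len=8192))

# First turn: a user message carrying both a text part and an image URL part.
messages = [{
    'role': 'user',
    'content': [
        {'type': 'text', 'text': 'describe this image'},
        {'type': 'image_url',
         'image_url': {'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg'}},
    ],
}]
response = pipe(messages)
print(response.text)

# Follow-up turn: append the assistant reply and the new question, then
# resend the full history so the model sees the whole conversation.
messages.append({'role': 'assistant', 'content': response.text})
messages.append({'role': 'user', 'content': 'What is the woman doing?'})
response = pipe(messages)
print(response.text)
```
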
## License

This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.