---
tags:
  - vllm
  - vision
  - fp8
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
  - en
base_model: Qwen/Qwen2.5-VL-32B-Instruct
library_name: transformers
---

# Qwen2.5-VL-32B-Instruct-FP8-Dynamic

## Model Overview

- **Model Architecture:** Qwen2.5-VL-32B-Instruct
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 5/3/2025
- **Version:** 1.0
- **Model Developers:** BC Card

Quantized version of Qwen/Qwen2.5-VL-32B-Instruct.

## Model Optimizations

This model was obtained by quantizing the weights of Qwen/Qwen2.5-VL-32B-Instruct to the FP8 data type; activations are quantized to FP8 dynamically at inference time. It is ready for inference with vLLM >= 0.5.2.
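
The exact recipe used to produce this checkpoint is not published here, but the sketch below shows how an FP8-Dynamic checkpoint of this kind is typically created with [llm-compressor](https://github.com/vllm-project/llm-compressor). The `ignore` list (keeping `lm_head` and the vision tower in their original precision) and the output directory are assumptions, not a record of the actual process.

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot  # newer releases: from llmcompressor import oneshot

MODEL_ID = "Qwen/Qwen2.5-VL-32B-Instruct"
SAVE_DIR = "Qwen2.5-VL-32B-Instruct-FP8-Dynamic"  # assumed output path

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# FP8_DYNAMIC: static per-channel FP8 weights with dynamic per-token FP8
# activations, so no calibration dataset is required.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["re:.*lm_head", "re:visual.*"],  # assumed: skip lm_head and the vision tower
)

oneshot(model=model, recipe=recipe)

model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)
```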

## Deployment

### Use with vLLM

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# Prepare the model
llm = LLM(
    model="BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# Prepare the inputs using the Qwen2.5-VL chat format
question = "What is the content of this image?"
prompt = (
    "<|im_start|>user\n"
    "<|vision_start|><|image_pad|><|vision_end|>"
    f"{question}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = {
    "prompt": prompt,
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# Generate a response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
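
For example, the model can be served with `vllm serve BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic --max-model-len 4096` and then queried through the OpenAI-compatible Chat Completions API. The sketch below assumes the server is running locally on the default port (8000) and uses a placeholder image URL.

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (default host/port assumed)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="BCCard/Qwen2.5-VL-32B-Instruct-FP8-Dynamic",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
                {"type": "text", "text": "What is the content of this image?"},
            ],
        }
    ],
    temperature=0.2,
    max_tokens=64,
)
print(response.choices[0].message.content)
```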