---
tags:
- vllm
- vision
- w8a8
license: apache-2.0
license_link: >-
  https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: Qwen/Qwen2.5-VL-72B-Instruct
library_name: transformers
---

# Qwen2.5-VL-72B-Instruct-quantized.w8a8

## Model Overview
- **Model Architecture:** Qwen/Qwen2.5-VL-72B-Instruct
  - **Input:** Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT8
  - **Activation quantization:** INT8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct).

### Model Optimizations

This model was obtained by quantizing the weights and activations of [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) to the INT8 data type, ready for inference with vLLM >= 0.5.2.
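
W8A8 means both the weights and the activations of the language model's linear layers are stored or computed as signed 8-bit integers, each tensor carrying a floating-point scale for dequantization (typically per-channel for weights and per-token for activations). The toy sketch below illustrates the basic symmetric INT8 round-trip; it is an illustration only, not the actual llm-compressor kernels, and the helper name is made up:

```python
import torch

def int8_symmetric_quantize(x: torch.Tensor, dim: int = -1):
    # Scale each row so its largest magnitude maps to 127,
    # then round and clamp into the signed 8-bit range.
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

w = torch.randn(4, 8)
q, scale = int8_symmetric_quantize(w)
w_hat = q.to(torch.float32) * scale  # dequantized approximation of w
print(f"max abs quantization error: {(w - w_hat).abs().max().item():.4f}")
```
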
## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm.assets.image import ImageAsset
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs
question = "What is the content of this image?"
inputs = {
    "prompt": f"<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>{question}<|im_end|>\n<|im_start|>assistant\n",
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT  : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
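
As a minimal sketch of OpenAI-compatible usage (assuming the model has been launched with `vllm serve`; the port and image URL below are placeholders):

```python
from openai import OpenAI

# Assumes the model is being served locally, e.g. with:
#   vllm serve neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8 --tensor-parallel-size 2
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
            {"type": "text", "text": "What is the content of this image?"},
        ],
    }],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```
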
## Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below, as part of a multimodal announcement blog.

<details>
<summary>Model Creation Code</summary>

```python
import base64
from io import BytesIO

import torch
from datasets import load_dataset
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import (
    TraceableQwen2_5_VLForConditionalGeneration,
)

# Load model.
model_id = "Qwen/Qwen2.5-VL-72B-Instruct"
model = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Oneshot arguments
DATASET_ID = "lmms-lab/flickr30k"
DATASET_SPLIT = {"calibration": "test[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048

# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)

dampening_frac = 0.01

# Apply chat template and tokenize inputs.
def preprocess_and_tokenize(example):
    # preprocess: encode the image as a base64 data URI for the chat template
    buffered = BytesIO()
    example["image"].save(buffered, format="PNG")
    encoded_image = base64.b64encode(buffered.getvalue())
    encoded_image_text = encoded_image.decode("utf-8")
    base64_qwen = f"data:image;base64,{encoded_image_text}"
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": base64_qwen},
                {"type": "text", "text": "What does the image show?"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)

    # tokenize
    return processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
    )

ds = ds.map(preprocess_and_tokenize, remove_columns=ds["calibration"].column_names)

# Define a oneshot data collator for multimodal inputs.
def data_collator(batch):
    assert len(batch) == 1
    return {key: torch.tensor(value) for key, value in batch[0].items()}

# Recipe: W8A8 GPTQ on the language model's Linear layers;
# the vision tower and lm_head are left unquantized.
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        sequential_targets=["Qwen2_5_VLDecoderLayer"],
        ignore=["lm_head", "re:visual.*"],
        dampening_frac=dampening_frac,
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w8a8"

# Perform oneshot
oneshot(
    model=model,
    tokenizer=model_id,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    trust_remote_code_model=True,
    data_collator=data_collator,
    output_dir=SAVE_DIR,
)
```

</details>
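
Before relying on the saved checkpoint, a quick sanity generation can be run on the freshly quantized model. The snippet below is a sketch reusing `model` and `processor` from the script above; the text-only prompt is illustrative:

```python
# Sketch: confirm the quantized model still generates sensible text.
sample = processor(text="Describe a cherry blossom tree.", return_tensors="pt").to(model.device)
output = model.generate(**sample, max_new_tokens=32)
print(processor.decode(output[0], skip_special_tokens=True))
```
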
## Evaluation

The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/), and [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:

<details>
<summary>Evaluation Commands</summary>

```
```

</details>

### Accuracy

## Inference Performance

This model achieves up to xxx speedup in single-stream deployment and up to xxx speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.7.2 and [GuideLLM](https://github.com/neuralmagic/guidellm), assuming an OpenAI-compatible vLLM server already running at `localhost:8000`.
In the tables below, each use-case profile lists the image size (width x height) and the prompt/generated token counts; QPD denotes queries per dollar.

<details>
<summary>Benchmarking Command</summary>

```
guidellm --model neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8 --target "http://localhost:8000/v1" --data-type emulated --data prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>,images=<num_images>,width=<image_width>,height=<image_height> --max-seconds 120 --backend aiohttp_server
```

</details>

### Single-stream performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2">Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2">Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2">Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
      <th>Latency (s)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody style="text-align: center">
    <tr>
      <td>A100x4</td>
      <td>Qwen/Qwen2.5-VL-72B-Instruct</td>
      <td></td>
      <td>6.4</td>
      <td>78</td>
      <td>4.5</td>
      <td>111</td>
      <td>4.4</td>
      <td>113</td>
    </tr>
    <tr>
      <td>A100x2</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8</td>
      <td>1.85</td>
      <td>7.0</td>
      <td>143</td>
      <td>4.9</td>
      <td>205</td>
      <td>4.8</td>
      <td>211</td>
    </tr>
    <tr>
      <td>A100x1</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16</td>
      <td>3.33</td>
      <td>9.4</td>
      <td>213</td>
      <td>5.1</td>
      <td>396</td>
      <td>4.8</td>
      <td>420</td>
    </tr>
    <tr>
      <td>H100x4</td>
      <td>Qwen/Qwen2.5-VL-72B-Instruct</td>
      <td></td>
      <td>4.3</td>
      <td>68</td>
      <td>3.0</td>
      <td>97</td>
      <td>2.9</td>
      <td>100</td>
    </tr>
    <tr>
      <td>H100x2</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic</td>
      <td>1.79</td>
      <td>4.6</td>
      <td>122</td>
      <td>3.3</td>
      <td>173</td>
      <td>3.2</td>
      <td>177</td>
    </tr>
    <tr>
      <td>H100x1</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16</td>
      <td>5.66</td>
      <td>4.3</td>
      <td>252</td>
      <td>4.3</td>
      <td>252</td>
      <td>1.0</td>
      <td>1065</td>
    </tr>
  </tbody>
</table>

### Multi-stream asynchronous performance (measured with vLLM version 0.7.2)

<table border="1" class="dataframe">
  <thead>
    <tr>
      <th></th>
      <th></th>
      <th></th>
      <th style="text-align: center;" colspan="2">Document Visual Question Answering<br>1680W x 2240H<br>64/128</th>
      <th style="text-align: center;" colspan="2">Visual Reasoning<br>640W x 480H<br>128/128</th>
      <th style="text-align: center;" colspan="2">Image Captioning<br>480W x 360H<br>0/128</th>
    </tr>
    <tr>
      <th>Hardware</th>
      <th>Model</th>
      <th>Average Cost Reduction</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
      <th>Maximum throughput (QPS)</th>
      <th>QPD</th>
    </tr>
  </thead>
  <tbody style="text-align: center">
    <tr>
      <td>A100x4</td>
      <td>Qwen/Qwen2.5-VL-72B-Instruct</td>
      <td></td>
      <td>0.4</td>
      <td>180</td>
      <td>1.1</td>
      <td>539</td>
      <td>1.2</td>
      <td>595</td>
    </tr>
    <tr>
      <td>A100x2</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w8a8</td>
      <td>1.80</td>
      <td>0.6</td>
      <td>289</td>
      <td>2.0</td>
      <td>1020</td>
      <td>2.3</td>
      <td>1133</td>
    </tr>
    <tr>
      <td>A100x1</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16</td>
      <td>2.75</td>
      <td>0.7</td>
      <td>341</td>
      <td>3.2</td>
      <td>1588</td>
      <td>4.1</td>
      <td>2037</td>
    </tr>
    <tr>
      <td>H100x4</td>
      <td>Qwen/Qwen2.5-VL-72B-Instruct</td>
      <td></td>
      <td>0.5</td>
      <td>134</td>
      <td>1.2</td>
      <td>357</td>
      <td>1.3</td>
      <td>379</td>
    </tr>
    <tr>
      <td>H100x2</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-FP8-Dynamic</td>
      <td>1.73</td>
      <td>0.9</td>
      <td>247</td>
      <td>2.2</td>
      <td>621</td>
      <td>2.4</td>
      <td>669</td>
    </tr>
    <tr>
      <td>H100x1</td>
      <td>neuralmagic/Qwen2.5-VL-72B-Instruct-quantized.w4a16</td>
      <td>8.27</td>
      <td>3.3</td>
      <td>913</td>
      <td>3.3</td>
      <td>913</td>
      <td>24.8</td>
      <td>6777</td>
    </tr>
  </tbody>
</table>