Alexandre-Numind committed · verified
Commit 05bfd61 · 1 Parent(s): e5a36cc

Update README.md

Files changed (1): README.md (+572, -195)

README.md
---
library_name: transformers
license: mit
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: image-text-to-text
---

# NuExtract-2.0-8B by NuMind 🔥

NuExtract 2.0 is a family of models trained specifically for structured information extraction tasks. It supports multimodal (text and image) inputs and is multilingual.

We provide several versions of different sizes, all based on pre-trained models from the Qwen-VL family.

| Model Size | Model Name | Base Model | License | Hugging Face Link |
|------------|------------|------------|---------|-------------------|
| 2B | NuExtract-2.0-2B | [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) | MIT | 🤗 [NuExtract-2.0-2B](https://huggingface.co/numind/NuExtract-2.0-2B) |
| 3B | NuExtract-2.0-3B | [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Qwen Research License | 🤗 [NuExtract-2.0-3B](https://huggingface.co/numind/NuExtract-2.0-3B) |
| 8B | NuExtract-2.0-8B | [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | MIT | 🤗 [NuExtract-2.0-8B](https://huggingface.co/numind/NuExtract-2.0-8B) |

❗️Note: `NuExtract-2.0-2B` is based on Qwen2-VL rather than Qwen2.5-VL because the smallest Qwen2.5-VL model (3B) has a more restrictive, non-commercial license. We therefore include `NuExtract-2.0-2B` as a small model option that can be used commercially.

## Overview

To use the model, provide an input text or image and a JSON template describing the information you need to extract. The template should be a JSON object specifying field names and their expected types.

Supported types include:
* `verbatim-string` - instructs the model to extract text that is present verbatim in the input.
* `string` - a generic string field that can incorporate paraphrasing/abstraction.
* `integer` - a whole number.
* `number` - a whole or decimal number.
* `date-time` - an ISO-formatted date.
* Array of any of the above types (e.g. `["string"]`).
* `enum` - a choice from a set of possible answers (represented in the template as an array of options, e.g. `["yes", "no", "maybe"]`).
* `multi-label` - an enum that can have multiple possible answers (represented in the template as a double-wrapped array, e.g. `[["A", "B", "C"]]`).

If the model does not identify relevant information for a field, it will return `null` or `[]` (for arrays and multi-labels).

The following is an example template:
```json
{
    "first_name": "verbatim-string",
    "last_name": "verbatim-string",
    "description": "string",
    "age": "integer",
    "gpa": "number",
    "birth_date": "date-time",
    "nationality": ["France", "England", "Japan", "USA", "China"],
    "languages_spoken": [["English", "French", "Japanese", "Mandarin", "Spanish"]]
}
```
An example output:
```json
{
    "first_name": "Susan",
    "last_name": "Smith",
    "description": "A student studying computer science.",
    "age": 20,
    "gpa": 3.7,
    "birth_date": "2005-03-01",
    "nationality": "England",
    "languages_spoken": ["English", "French"]
}
```
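For illustration, if the document said nothing about GPA, birth date, or spoken languages, a hypothetical output for the same template would fall back to `null` and `[]` for the missing fields:
```json
{
    "first_name": "Susan",
    "last_name": "Smith",
    "description": "A student studying computer science.",
    "age": 20,
    "gpa": null,
    "birth_date": null,
    "nationality": "England",
    "languages_spoken": []
}
```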

⚠️ We recommend using NuExtract with a temperature at or very close to 0. Some inference frameworks, such as Ollama, use a default of 0.7, which is not well suited to many extraction tasks.

## Using NuExtract with 🤗 Transformers

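These snippets rely on the `qwen_vl_utils` helper package (used below to load image inputs) in addition to `transformers` and `torch`. Assuming a fairly standard environment, something like `pip install transformers torch accelerate qwen-vl-utils` (plus `flash-attn` if you keep `attn_implementation="flash_attention_2"`) should cover the imports that follow, but treat the exact package list as an assumption about your setup.
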
```python
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

model_name = "numind/NuExtract-2.0-8B"
# model_name = "numind/NuExtract-2.0-2B"

model = AutoModelForVision2Seq.from_pretrained(model_name,
                                               trust_remote_code=True,
                                               torch_dtype=torch.bfloat16,
                                               attn_implementation="flash_attention_2",
                                               device_map="auto")
processor = AutoProcessor.from_pretrained(model_name,
                                          trust_remote_code=True,
                                          padding_side='left',
                                          use_fast=True)

# You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained(model_name, min_pixels=min_pixels, max_pixels=max_pixels)
```

You will need the following function to handle loading of image input data:
```python
def process_all_vision_info(messages, examples=None):
    """
    Process vision information from both messages and in-context examples, supporting batch processing.

    Args:
        messages: List of message dictionaries (single input) OR list of message lists (batch input)
        examples: Optional list of example dictionaries (single input) OR list of example lists (batch)

    Returns:
        A flat list of all images in the correct order:
        - For single input: example images followed by message images
        - For batch input: interleaved as (item1 examples, item1 input, item2 examples, item2 input, etc.)
        - Returns None if no images were found
    """
    from qwen_vl_utils import process_vision_info, fetch_image

    # Helper function to extract images from examples
    def extract_example_images(example_item):
        if not example_item:
            return []

        # Handle both list of examples and single example
        examples_to_process = example_item if isinstance(example_item, list) else [example_item]
        images = []

        for example in examples_to_process:
            if isinstance(example.get('input'), dict) and example['input'].get('type') == 'image':
                images.append(fetch_image(example['input']))

        return images

    # Normalize inputs to always be batched format
    is_batch = messages and isinstance(messages[0], list)
    messages_batch = messages if is_batch else [messages]
    is_batch_examples = examples and isinstance(examples, list) and (isinstance(examples[0], list) or examples[0] is None)
    examples_batch = examples if is_batch_examples else ([examples] if examples is not None else None)

    # Ensure examples batch matches messages batch if provided
    if examples and len(examples_batch) != len(messages_batch):
        if not is_batch and len(examples_batch) == 1:
            # Single example set for a single input is fine
            pass
        else:
            raise ValueError("Examples batch length must match messages batch length")

    # Process all inputs, maintaining correct order
    all_images = []
    for i, message_group in enumerate(messages_batch):
        # Get example images for this input
        if examples and i < len(examples_batch):
            input_example_images = extract_example_images(examples_batch[i])
            all_images.extend(input_example_images)

        # Get message images for this input
        input_message_images = process_vision_info(message_group)[0] or []
        all_images.extend(input_message_images)

    return all_images if all_images else None
```

For example, to perform a basic extraction of names from a text document:
```python
template = """{"names": ["string"]}"""
document = "John went to the restaurant with Mary. James went to the cinema."

# prepare the user message content
messages = [{"role": "user", "content": document}]
text = processor.tokenizer.apply_chat_template(
    messages,
    template=template,  # template is specified here
    tokenize=False,
    add_generation_prompt=True,
)

print(text)
"""<|im_start|>user
# Template:
{"names": ["string"]}
# Context:
John went to the restaurant with Mary. James went to the cinema.<|im_end|>
<|im_start|>assistant"""

image_inputs = process_all_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# we choose greedy sampling here, which works well for most information extraction tasks
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

# Inference: Generation of the output
generated_ids = model.generate(
    **inputs,
    **generation_config
)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)

print(output_text)
# ['{"names": ["John", "Mary", "James"]}']
```
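The model's reply is a JSON string, so downstream code will typically parse it back into a Python object. A minimal sketch, assuming the output is valid JSON (which greedy decoding with a well-formed template generally produces):
```python
import json

# output_text holds one JSON string per input in the batch
prediction = json.loads(output_text[0])
print(prediction["names"])  # ['John', 'Mary', 'James']
```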

<details>
<summary>In-Context Examples</summary>
Sometimes the model might not perform as well as we want because our task is challenging or involves some degree of ambiguity. Alternatively, we may want the model to follow some specific formatting, or just give it a bit more help. In cases like this, it can be valuable to provide "in-context examples" to help NuExtract better understand the task.

To do so, we can provide a list of examples (dictionaries of input/output pairs). In the example below, we show the model that we want the extracted names to be in capital letters with `-` on either side (purely for the sake of illustration). Providing multiple examples usually leads to better results.
```python
template = """{"names": ["string"]}"""
document = "John went to the restaurant with Mary. James went to the cinema."
examples = [
    {
        "input": "Stephen is the manager at Susan's store.",
        "output": """{"names": ["-STEPHEN-", "-SUSAN-"]}"""
    }
]

messages = [{"role": "user", "content": document}]
text = processor.tokenizer.apply_chat_template(
    messages,
    template=template,
    examples=examples,  # examples provided here
    tokenize=False,
    add_generation_prompt=True,
)

image_inputs = process_all_vision_info(messages, examples)
inputs = processor(
    text=[text],
    images=image_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

# we choose greedy sampling here, which works well for most information extraction tasks
generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

# Inference: Generation of the output
generated_ids = model.generate(
    **inputs,
    **generation_config
)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
# ['{"names": ["-JOHN-", "-MARY-", "-JAMES-"]}']
```
</details>

<details>
<summary>Image Inputs</summary>
To give NuExtract image inputs instead of text, we simply provide a dictionary specifying the desired image file as the message content, rather than a string (e.g. `{"type": "image", "image": "file://image.jpg"}`).

You can also specify an image URL (e.g. `{"type": "image", "image": "http://path/to/your/image.jpg"}`) or a base64 encoding (e.g. `{"type": "image", "image": "data:image;base64,/9j/..."}`).
```python
template = """{"store": "verbatim-string"}"""
document = {"type": "image", "image": "file://1.jpg"}

messages = [{"role": "user", "content": [document]}]
text = processor.tokenizer.apply_chat_template(
    messages,
    template=template,
    tokenize=False,
    add_generation_prompt=True,
)

image_inputs = process_all_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

# Inference: Generation of the output
generated_ids = model.generate(
    **inputs,
    **generation_config
)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
# ['{"store": "Trader Joe\'s"}']
```
</details>

<details>
<summary>Batch Inference</summary>

```python
inputs = [
    # image input with no ICL examples
    {
        "document": {"type": "image", "image": "file://0.jpg"},
        "template": """{"store_name": "verbatim-string"}""",
    },
    # image input with 1 ICL example
    {
        "document": {"type": "image", "image": "file://0.jpg"},
        "template": """{"store_name": "verbatim-string"}""",
        "examples": [
            {
                "input": {"type": "image", "image": "file://1.jpg"},
                "output": """{"store_name": "Trader Joe's"}""",
            }
        ],
    },
    # text input with no ICL examples
    {
        "document": {"type": "text", "text": "John went to the restaurant with Mary. James went to the cinema."},
        "template": """{"names": ["string"]}""",
    },
    # text input with ICL example
    {
        "document": {"type": "text", "text": "John went to the restaurant with Mary. James went to the cinema."},
        "template": """{"names": ["string"]}""",
        "examples": [
            {
                "input": "Stephen is the manager at Susan's store.",
                "output": """{"names": ["STEPHEN", "SUSAN"]}"""
            }
        ],
    },
]

# messages should be a list of lists for batch processing
messages = [
    [
        {
            "role": "user",
            "content": [x['document']],
        }
    ]
    for x in inputs
]

# apply chat template to each example individually
texts = [
    processor.tokenizer.apply_chat_template(
        messages[i],  # now this is a list containing one message
        template=x['template'],
        examples=x.get('examples', None),
        tokenize=False,
        add_generation_prompt=True)
    for i, x in enumerate(inputs)
]

image_inputs = process_all_vision_info(messages, [x.get('examples') for x in inputs])
inputs = processor(
    text=texts,
    images=image_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

# Batch Inference
generated_ids = model.generate(**inputs, **generation_config)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
for y in output_texts:
    print(y)
# {"store_name": "WAL-MART"}
# {"store_name": "Walmart"}
# {"names": ["John", "Mary", "James"]}
# {"names": ["JOHN", "MARY", "JAMES"]}
```
</details>

<details>
<summary>Template Generation</summary>
If you have existing schema files in other formats (e.g. XML, YAML, etc.) or want to start from an example, NuExtract 2.0 models can automatically generate a NuExtract template for you.

For example, to convert XML into a NuExtract template:
```python
xml_template = """<SportResult>
<Date></Date>
<Sport></Sport>
<Venue></Venue>
<HomeTeam></HomeTeam>
<AwayTeam></AwayTeam>
<HomeScore></HomeScore>
<AwayScore></AwayScore>
<TopScorer></TopScorer>
</SportResult>"""

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": xml_template}],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
)

image_inputs = process_all_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

generated_ids = model.generate(
    **inputs,
    **generation_config
)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)

print(output_text[0])
# {
#     "Date": "date-time",
#     "Sport": "verbatim-string",
#     "Venue": "verbatim-string",
#     "HomeTeam": "verbatim-string",
#     "AwayTeam": "verbatim-string",
#     "HomeScore": "integer",
#     "AwayScore": "integer",
#     "TopScorer": "verbatim-string"
# }
```

Or to generate a template from a natural language description:
```python
description = "I would like to extract important details from the contract."

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": description}],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
)

image_inputs = process_all_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    padding=True,
    return_tensors="pt",
).to("cuda")

generation_config = {"do_sample": False, "num_beams": 1, "max_new_tokens": 2048}

generated_ids = model.generate(
    **inputs,
    **generation_config
)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)

print(output_text[0])
# {
#     "Contract": {
#         "Title": "verbatim-string",
#         "Description": "verbatim-string",
#         "Terms": [
#             {
#                 "Term": "verbatim-string",
#                 "Description": "verbatim-string"
#             }
#         ],
#         "Date": "date-time",
#         "Signatory": "verbatim-string"
#     }
# }
```
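Since the generated template is itself just a JSON string, it can be passed straight back as the `template` argument for a subsequent extraction. A minimal sketch, where the contract text is a made-up placeholder:
```python
# Hypothetical follow-up: reuse the generated template to extract from an actual document
generated_template = output_text[0]
contract_text = "This Service Agreement is signed on 2025-01-15 by Jane Doe on behalf of Acme Corp. ..."

messages = [{"role": "user", "content": contract_text}]
text = processor.tokenizer.apply_chat_template(
    messages,
    template=generated_template,
    tokenize=False,
    add_generation_prompt=True,
)
# ...then tokenize, generate, and decode exactly as in the extraction examples above
```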
</details>

## Fine-Tuning
You can find a fine-tuning tutorial notebook in the [cookbooks](https://github.com/numindai/nuextract/tree/main/cookbooks) folder of the [GitHub repo](https://github.com/numindai/nuextract/tree/main).

## vLLM Deployment
Run the command below to serve an OpenAI-compatible API:
```bash
vllm serve numind/NuExtract-2.0-8B --trust_remote_code --limit-mm-per-prompt image=6 --chat-template-content-format openai
```
If you encounter memory issues, reduce `--max-model-len` accordingly.

Send requests to the model as follows:
```python
import json
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="numind/NuExtract-2.0-8B",
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": [{"type": "text", "text": "Yesterday I went shopping at Bunnings"}],
        },
    ],
    extra_body={
        "chat_template_kwargs": {
            "template": json.dumps(json.loads("""{\"store\": \"verbatim-string\"}"""), indent=4)
        },
    }
)
print("Chat response:", chat_response)
```
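The extraction itself comes back as the assistant message content, again as a JSON string. A minimal sketch of reading it out, assuming the model returns valid JSON:
```python
# Parse the structured output from the OpenAI-compatible response
extraction = json.loads(chat_response.choices[0].message.content)
print(extraction)  # expected to be something like {"store": "Bunnings"} for the request above
```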
For image inputs, structure requests as shown below. Make sure to order the images in `"content"` as they appear in the prompt (i.e. any in-context examples before the main input).
```python
import base64

def encode_image(image_path):
    """
    Encode the image file to a base64 string
    """
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

base64_image = encode_image("0.jpg")
base64_image2 = encode_image("1.jpg")

chat_response = client.chat.completions.create(
    model="numind/NuExtract-2.0-8B",
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}},  # first ICL example image
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image2}"}},  # real input image
            ],
        },
    ],
    extra_body={
        "chat_template_kwargs": {
            "template": json.dumps(json.loads("""{\"store\": \"verbatim-string\"}"""), indent=4),
            "examples": [
                {
                    "input": "<image>",
                    "output": """{\"store\": \"Walmart\"}"""
                }
            ]
        },
    }
)
print("Chat response:", chat_response)
```