amalad committed on
Commit 1e8e8e6 · 1 Parent(s): 6a5079e

Initial commit
.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
33
  *.zip filter=lfs diff=lfs merge=lfs -text
34
  *.zst filter=lfs diff=lfs merge=lfs -text
35
  *tfevents* filter=lfs diff=lfs merge=lfs -text
36
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
37
+ *.json filter=lfs diff=lfs merge=lfs -text
38
+ images/table.png filter=lfs diff=lfs merge=lfs -text
39
+ images/tech.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,6 +1,171 @@
1
- ---
2
- license: other
3
- license_name: nvidia-open-model-license
4
- license_link: >-
5
- https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
6
- ---
1
+ ---
2
+ license: other
3
+ license_name: nvidia-open-model-license
4
+ license_link: >-
5
+ https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
6
+ ---
7
+
8
+ # Llama-Nemotron-Nano-VL-8B-V1
9
+
10
+ ## Model Overview
11
+
12
+ ### Description
13
+
14
+ Llama-Nemotron-Nano-VL-8B-V1 is a leading document-intelligence vision language model (VLM) that can query and summarize images and video from the physical or virtual world. Llama-Nemotron-Nano-VL-8B-V1 is deployable in the data center, in the cloud, and at the edge, including on Jetson Orin and laptops, via AWQ 4-bit quantization through the TinyChat framework. We find that: (1) image-text pairs are not enough; interleaved image-text data is essential; (2) unfreezing the LLM during interleaved image-text pre-training enables in-context learning; and (3) re-blending text-only instruction data is crucial to boost both VLM and text-only performance.
15
+
16
+ This model was trained on commercial images and videos for all three stages of training and supports single image and video inference.
17
+
18
+ ### License/Terms of Use
19
+ **Governing Terms:**
20
+
21
+ Your use of the model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).
22
+
23
+ **Additional Information:**
24
+
25
+ [Llama 3.1 Community Model License](https://www.llama.com/llama3_1/license/); Built with Llama.
26
+
27
+
28
+ ### Deployment Geography:
29
+
30
+ Global
31
+
32
+ ### Use Case:
33
+
34
+ Customers: AI foundry enterprise customers
35
+
36
+ Use Cases: Image summarization, text-image analysis, Optical Character Recognition, interactive Q&A on images, comparison and contrast of multiple images, and text Chain-of-Thought reasoning.
37
+
38
+
39
+ ## Release Date:
40
+
41
+ - Build.Nvidia.com [June 3rd, 2025] via [nvidia/llama-3_1-nemotron-nano-vl-8b-v1](https://build.nvidia.com/nvidia/llama-3_1-nemotron-nano-vl-8b-v1)
42
+ - Hugging Face [June 3rd, 2025]
43
+
44
+ ## Model Architecture:
45
+
46
+ **Network Type:** Transformer
47
+
48
+ **Network Architecture:**
49
+
50
+ Vision Encoder: CRadioV2-H
51
+
52
+ Language Encoder: Llama-3.1-8B-Instruct
53
+
54
+ ### Input
55
+
56
+ Input Type(s): Image, Video, Text
57
+ - Input Images Supported: Multiple images within 16K input + output tokens
58
+ - Language Supported: English only
59
+
60
+ Input Format(s): Image (Red, Green, Blue (RGB)), Video (.mp4), and Text (String)
61
+
62
+ Input Parameters: Image (2D), Video (3D), Text (1D)
63
+
64
+ Other Properties Related to Input:
65
+
66
+ - Input + Output Token: 16K
67
+ - Maximum Resolution: Determined by a 12-tile layout constraint, with each tile being 512 × 512 pixels (see the sketch after this list). This supports aspect ratios such as:
68
+ - 4 × 3 layout: up to 2048 × 1536 pixels
69
+ - 3 × 4 layout: up to 1536 × 2048 pixels
70
+ - 2 × 6 layout: up to 1024 × 3072 pixels
71
+ - 6 × 2 layout: up to 3072 × 1024 pixels
72
+ - Other configurations allowed, provided total tiles ≤ 12
73
+ - Channel Count: 3 channels (RGB)
74
+ - Alpha Channel: Not supported (no transparency)
75
+
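To make the layout constraint concrete, here is a minimal sketch that enumerates the layouts allowed by the 12-tile budget and the maximum resolution each one covers, assuming the 512 × 512 tile size stated above (the constants are illustrative, not a published API):

```python
# Enumerate (columns x rows) tile layouts permitted by the 12-tile budget and
# print the maximum input resolution each layout covers at 512 x 512 per tile.
TILE_SIZE = 512
MAX_TILES = 12

layouts = sorted(
    {(cols, rows)
     for cols in range(1, MAX_TILES + 1)
     for rows in range(1, MAX_TILES + 1)
     if cols * rows <= MAX_TILES},
    key=lambda layout: (layout[0] * layout[1], layout),
)
for cols, rows in layouts:
    print(f"{cols} x {rows} layout: up to {cols * TILE_SIZE} x {rows * TILE_SIZE} pixels")
```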
76
+ ### Output
77
+ Output Type(s): Text
78
+
79
+ Output Formats: String
80
+
81
+ Output Parameters: 1D
82
+
83
+ Other Properties Related to Output: Input + Output Token: 16K
84
+
85
+
86
+
87
+ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
88
+
89
+ ### Software Integration
90
+ Runtime Engine(s): TensorRT-LLM<br>
91
+ Supported Hardware Microarchitecture Compatibility: H100 SXM 80GB<br>
92
+ Supported Operating System(s): Linux<br>
93
+
94
+ ### Model Versions:
95
+ Llama-3.1-Nemotron-Nano-VL-8B-V1
96
+
97
+ ## Usage
98
+
99
+ ```python
100
+ from PIL import Image
101
+ from transformers import AutoImageProcessor, AutoModel, AutoTokenizer
102
+
103
+ path = "."
104
+ model = AutoModel.from_pretrained(path, trust_remote_code=True, device_map="cuda").eval()
105
+ tokenizer = AutoTokenizer.from_pretrained(path)
106
+ image_processor = AutoImageProcessor.from_pretrained(path, trust_remote_code=True, device="cuda")
107
+
108
+ image1 = Image.open("images/example1a.jpeg")
109
+ image2 = Image.open("images/example1b.jpeg")
110
+ image_features = image_processor([image1, image2])
111
+
112
+ generation_config = dict(max_new_tokens=1024, do_sample=False, eos_token_id=tokenizer.eos_token_id)
113
+
114
+ question = 'Describe the two images.'
115
+ response = model.chat(
116
+ tokenizer=tokenizer, question=question, generation_config=generation_config,
117
+ **image_features)
118
+
119
+ print(f'User: {question}\nAssistant: {response}')
120
+ ```
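The model card states that video inference is supported, but the bundled examples cover images only. The snippet below is a hedged sketch that assumes uniformly sampled video frames can be passed to the same `image_processor` / `model.chat` interface used above; the frame-sampling helper, the `videos/example.mp4` path, and the 16-frame count (mirroring the VideoMME setting) are illustrative assumptions, not part of this repository.

```python
# Hypothetical sketch: sample frames from a video and reuse the image pipeline above
# (requires the `model`, `tokenizer`, `image_processor`, and `generation_config`
# objects created in the previous snippet, plus opencv-python for frame decoding).
import cv2
from PIL import Image

def sample_frames(video_path, num_frames=16):
    """Uniformly sample `num_frames` RGB frames from a video file."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / num_frames))
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames

frames = sample_frames("videos/example.mp4")  # illustrative path
video_features = image_processor(frames)
question = 'Describe what happens in this video.'
response = model.chat(
    tokenizer=tokenizer, question=question, generation_config=generation_config,
    **video_features)
print(f'User: {question}\nAssistant: {response}')
```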
121
+
122
+
123
+ ## Training/Evaluation Dataset:
124
+ The NV-Pretraining and NV-CosmosNemotron-SFT datasets were used for training and evaluation.
125
+
126
+ Data Collection Method by dataset (Training and Evaluation): <br>
127
+ * Hybrid: Human, Synthetic <br>
128
+
129
+ Labeling Method by dataset (Training and Evaluation): <br>
130
+ * Hybrid: Human, Synthetic <br>
131
+
132
+
133
+ Additionally, the dataset collection consists of a mix of internal and public datasets designed for training and evaluation across various tasks. It includes: <br>
134
+ • Internal datasets built with public commercial images and internal labels, supporting tasks like conversation modeling and document analysis.<br>
135
+ • Public datasets sourced from publicly available images and annotations, adapted for tasks such as image captioning and visual question answering.<br>
136
+ • Synthetic datasets generated programmatically for specific tasks like tabular data understanding.<br>
137
+ • Specialized datasets for safety alignment, function calling, and domain-specific tasks (e.g., science diagrams, financial question answering).<br>
138
+
139
+
140
+
141
+ ## Evaluation Benchmarks:
142
+
143
+ | Benchmark | Score |
144
+ | --- | --- |
145
+ | MMMU Val with ChatGPT as a judge | 48.2% |
146
+ | AI2D | 85.0% |
147
+ | ChartQA | 86.3% |
148
+ | InfoVQA Val | 77.4% |
149
+ | OCRBench | 839 |
150
+ | OCRBenchV2 English | 60.1% |
151
+ | OCRBenchV2 Chinese | 37.9% |
152
+ | DocVQA val | 91.2% |
153
+ | VideoMME | 54.7% |
154
+
155
+
156
+
157
+ ## Inference:
158
+ **Engine:** TensorRT-LLM <br>
159
+ **Test Hardware:** <br>
160
+ * 1x NVIDIA H100 SXM 80GB
161
+
162
+
163
+ ## Ethical Considerations:
164
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
165
+
166
+ Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
167
+
168
+ Outputs generated by these models may contain political content or other potentially misleading information, issues with content security and safety, or unwanted bias that is independent of our oversight.
169
+
170
+
171
+
bias.md ADDED
@@ -0,0 +1,4 @@
1
+ Field | Response
2
+ :---------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3
+ Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | We actively considered participation from adversely impacted groups and protected classes during model design and testing by engaging diverse stakeholders, reviewing data for representation, and evaluating outputs for bias. Feedback channels were provided throughout development.
4
+ Measures taken to mitigate against unwanted bias: | We took several steps to reduce unwanted bias, including:<br>- **Evaluating** the model’s answers with regard to fairness for different groups<br>- Using tools to **identify** and measure unfairness.
config.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d28e10441f08143e1bad3121dbdb702df97d5ca448758d1982920e05dba0c62d
3
+ size 7662
configuration.py ADDED
@@ -0,0 +1,56 @@
1
+ # --------------------------------------------------------
2
+ # Adapted from https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B under MIT License
3
+ # LICENSE is in incl_licenses directory.
4
+ # --------------------------------------------------------
5
+
6
+ from transformers import AutoConfig, LlamaConfig
7
+ from transformers.configuration_utils import PretrainedConfig
8
+ from transformers.utils import logging
9
+ from transformers.dynamic_module_utils import get_class_from_dynamic_module
10
+
11
+ logger = logging.get_logger(__name__)
12
+
13
+ class Llama_Nemotron_Nano_VL_Config(PretrainedConfig):
14
+ model_type = 'Llama_Nemotron_Nano_VL'
15
+ is_composition = True
16
+
17
+ def __init__(
18
+ self,
19
+ vision_config=None,
20
+ llm_config=None,
21
+ force_image_size=None,
22
+ downsample_ratio=0.5,
23
+ template=None,
24
+ ps_version='v1',
25
+ image_tag_type="internvl",
26
+ projector_hidden_size=4096,
27
+ vit_hidden_size=1280,
28
+ attn_implementation="flash_attention_2",
29
+ **kwargs
30
+ ):
31
+ super().__init__(**kwargs)
32
+
33
+ if vision_config is not None:
34
+ assert "auto_map" in vision_config and "AutoConfig" in vision_config["auto_map"]
35
+ vision_auto_config = get_class_from_dynamic_module(*vision_config["auto_map"]["AutoConfig"].split("--")[::-1])
36
+ self.vision_config = vision_auto_config(**vision_config)
37
+ else:
38
+ self.vision_config = PretrainedConfig()
39
+
40
+ if llm_config is None:
41
+ self.llm_config = LlamaConfig()
42
+ else:
43
+ self.llm_config = LlamaConfig(**llm_config)
44
+
45
+ # Assign configuration values
46
+ self.force_image_size = force_image_size
47
+ self.downsample_ratio = downsample_ratio
48
+ self.template = template # TODO move out of here and into the tokenizer
49
+ self.ps_version = ps_version # Pixel shuffle version
50
+ self.image_tag_type = image_tag_type # TODO: into the tokenizer too?
51
+ self.projector_hidden_size = projector_hidden_size
52
+ self.vit_hidden_size = vit_hidden_size
53
+
54
+ self._attn_implementation = attn_implementation
55
+ self.vision_config.use_flash_attn = "flash_attention" in self._attn_implementation
56
+ self.llm_config._attn_implementation = self._attn_implementation
examples.py ADDED
@@ -0,0 +1,56 @@
1
+ import torch
2
+ from transformers import AutoTokenizer, AutoModel, AutoImageProcessor
3
+ from PIL import Image
4
+
5
+ path = "."
6
+ model = AutoModel.from_pretrained(
7
+ path,
8
+ torch_dtype=torch.bfloat16,
9
+ low_cpu_mem_usage=True,
10
+ trust_remote_code=True,
11
+ device_map="cuda",).eval()
12
+
13
+ tokenizer = AutoTokenizer.from_pretrained(path)
14
+ image_processor = AutoImageProcessor.from_pretrained(path, device="cuda", trust_remote_code=True)
15
+
16
+ generation_config = dict(max_new_tokens=1024, do_sample=False, eos_token_id=tokenizer.eos_token_id)
17
+
18
+ # pure-text conversation
19
+ question = 'What happened in 1986?'
20
+ response, history = model.chat(
21
+ tokenizer, None, question, generation_config, history=None, return_history=True
22
+ )
23
+ print(f'User: {question}\nAssistant: {response}')
24
+
25
+ # single-image single-round conversation
26
+ image_path = 'images/table.png'
27
+ image_features = image_processor(Image.open(image_path))
28
+ question = '<image>\nExtract the table in this image as HTML.'
29
+ response = model.chat(
30
+ tokenizer=tokenizer, question=question, generation_config=generation_config,
31
+ **image_features
32
+ )
33
+ print(f'User: {question}\nAssistant: {response}')
34
+
35
+ # single-image single-round conversation
36
+ image_path = 'images/tech.png'
37
+ image_features = image_processor(Image.open(image_path))
38
+ question = '<image>\nList in bullet point the most important Technological breakthrough of Nvidia Hopper.'
39
+ response = model.chat(
40
+ tokenizer=tokenizer, question=question, generation_config=generation_config,
41
+ **image_features
42
+ )
43
+ print(f'User: {question}\nAssistant: {response}')
44
+
45
+ # two image single-round conversation
46
+ image_features = image_processor([
47
+ Image.open('images/example1a.jpeg'),
48
+ Image.open('images/example1b.jpeg')
49
+ ])
50
+
51
+ question = '<image-1>: <image>\n<image-2>: <image>\nBriefly describe the two images.'
52
+ response = model.chat(
53
+ tokenizer=tokenizer, question=question, generation_config=generation_config,
54
+ **image_features
55
+ )
56
+ print(f'User: {question}\nAssistant: {response}')
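A natural follow-up to the pure-text example above is a second turn that reuses the returned `history`; the sketch below assumes the same `model.chat` signature shown in this file, and the follow-up question is illustrative:

```python
# multi-turn text conversation, continuing from the history returned by the pure-text example
follow_up = 'Tell me more about that event.'
response, history = model.chat(
    tokenizer, None, follow_up, generation_config, history=history, return_history=True
)
print(f'User: {follow_up}\nAssistant: {response}')
```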
explainability.md ADDED
@@ -0,0 +1,13 @@
1
+ Field | Response
2
+ :------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
3
+ Intended Application & Domain: | Visual Question Answering
4
+ Model Type: | Transformer
5
+ Intended Users: | Generative AI creators working with conversational AI models and image content.
6
+ Output: | Text (Responds to posed question, stateful - remembers previous answers)
7
+ Describe how the model works: | Chat based on image/video content
8
+ Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
9
+ Technical Limitations: | Max Number of images supported: 4.<br><br>**Context Length:** Supports up to 16,000 tokens total (input + output). If exceeded, input is truncated from the start, and generation ends with an EOS token. Longer prompts may risk performance loss.<br><br>If the model fails (e.g., generates incorrect responses, repeats, or gives poor responses), issues are diagnosed via benchmarks, human review, and internal debugging tools.<br><br>Only use NVIDIA-provided models that use the safetensors format. Do not expose the vLLM host to a network where any untrusted connections may reach the host.
10
+ Verified to have met prescribed NVIDIA quality standards: | Yes
11
+ Performance Metrics: | MMMU Val with ChatGPT as a judge, AI2D, ChartQA Test, InfoVQA Val, OCRBench, OCRBenchV2 English, OCRBenchV2 Chinese, DocVQA val, VideoMME (16 frames), SlideQA (F1)
12
+ Potential Known Risks: | The Model may produce outputs that are biased, toxic, or incorrect. It may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The Model may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable text even if the prompt itself does not include anything explicitly offensive.<br>While we have taken safety and security into account and are continuously improving, outputs may still contain political content, misleading information, or unwanted bias beyond our control.
13
+ Licensing: | **Governing Terms:**<br>Your use of the software container and model is governed by the [NVIDIA Software and Model Evaluation License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-and-model-evaluation-license/).<br><br>**Additional Information:**<br>[Llama 3.1 Community Model License](https://www.llama.com/llama3_1/license/); Built with Llama.
image_processing.py ADDED
@@ -0,0 +1,112 @@
1
+ from typing import List, Optional, Union
2
+
3
+ from PIL import Image
4
+ import torch
5
+ from transformers.image_processing_base import BatchFeature
6
+ from transformers.image_processing_utils_fast import (BaseImageProcessorFast,
7
+ divide_to_patches)
8
+ from transformers.image_utils import (ChannelDimension, SizeDict,
9
+ get_image_size, make_list_of_images,
10
+ get_image_type, ImageInput, ImageType)
11
+ from transformers.utils import TensorType
12
+
13
+
14
+ def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
15
+ best_factor = float('-inf')
16
+ best_ratio = (1, 1)
17
+ area = width * height
18
+ for ratio in target_ratios:
19
+ target_aspect_ratio = ratio[0] / ratio[1]
20
+ factor_based_on_area_n_ratio = min(
21
+ (ratio[0]*ratio[1]*image_size*image_size)/ area, 0.6
22
+ )* min(
23
+ target_aspect_ratio/aspect_ratio, aspect_ratio/target_aspect_ratio)
24
+ if factor_based_on_area_n_ratio > best_factor:
25
+ best_factor = factor_based_on_area_n_ratio
26
+ best_ratio = ratio
27
+ return best_ratio
28
+
29
+
30
+ class LlamaNemotronNanoVLImageProcessor(BaseImageProcessorFast):
31
+ model_input_names = ["pixel_values"]
32
+
33
+ def __init__(self, image_size=512, max_num_tiles=12, use_thumbnail=True, **kwargs):
34
+ super().__init__(**kwargs)
35
+ self.image_size = image_size
36
+ self.max_num_tiles = max_num_tiles
37
+ self.use_thumbnail = use_thumbnail
38
+
39
+ # Based on https://github.com/OpenGVLab/InternVL/blob/c62fa4f7c850165d7386bdc48ac6bc5a6fab0864/internvl_chat/internvl/train/dataset.py#L702
40
+ def dynamic_preprocess(self, image, image_size=448, max_num_tiles=12, use_thumbnail=False):
41
+ orig_height, orig_width = get_image_size(image, channel_dim=ChannelDimension.FIRST)
42
+ aspect_ratio = orig_width / orig_height
43
+
44
+ # calculate the existing image aspect ratio
45
+ target_ratios = set(
46
+ (i, j) for n in range(1, max_num_tiles + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
47
+ i * j <= max_num_tiles and i * j >= 1)
48
+ target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
49
+
50
+ # find the closest aspect ratio to the target
51
+ target_aspect_ratio = find_closest_aspect_ratio(
52
+ aspect_ratio, target_ratios, orig_width, orig_height, image_size)
53
+
54
+ # calculate the target width and height
55
+ target_width = image_size * target_aspect_ratio[0]
56
+ target_height = image_size * target_aspect_ratio[1]
57
+
58
+ resized_img = self.resize(image, SizeDict(height=target_height, width=target_width))
59
+ patches = divide_to_patches(resized_img, image_size)
60
+ if use_thumbnail and len(patches) != 1:
61
+ patches.append(self.resize(image, SizeDict(height=image_size, width=image_size)))
62
+
63
+ return patches
64
+
65
+ def _process_image(
66
+ self,
67
+ image: ImageInput,
68
+ **kwargs,
69
+ ) -> torch.Tensor:
70
+ image_type = get_image_type(image)
71
+ if image_type not in [ImageType.PIL]:
72
+ raise ValueError(f"Unsupported input image type {image_type}. Only PIL images supported")
73
+ image = image.resize((image.width * 2, image.height * 2), Image.BILINEAR)
74
+ return super()._process_image(image, **kwargs)
75
+
76
+ def _preprocess(
77
+ self,
78
+ images: List[torch.Tensor],
79
+ image_size: int = None,
80
+ max_num_tiles: int = None,
81
+ use_thumbnail: bool = None,
82
+ do_rescale: bool = None,
83
+ return_tensors: Optional[Union[str, TensorType]] = None,
84
+ **kwargs,
85
+ ) -> List[torch.Tensor]:
86
+ image_size = image_size if image_size is not None else self.image_size
87
+ max_num_tiles = max_num_tiles if max_num_tiles is not None else self.max_num_tiles
88
+ use_thumbnail = use_thumbnail if use_thumbnail is not None else self.use_thumbnail
89
+ do_rescale = do_rescale if do_rescale is not None else self.do_rescale
90
+
91
+ images = make_list_of_images(images)
92
+
93
+ all_patches = []
94
+ num_patches = []
95
+ for image in images:
96
+ patches = self.dynamic_preprocess(
97
+ image, image_size, max_num_tiles, use_thumbnail
98
+ )
99
+ all_patches.extend(patches)
100
+ num_patches.append(len(patches))
101
+
102
+ pixel_values = torch.stack(all_patches, dim=0)
103
+ pixel_values = self.rescale_and_normalize(
104
+ pixel_values,
105
+ do_rescale,
106
+ self.rescale_factor,
107
+ do_normalize=self.do_normalize,
108
+ image_mean=self.image_mean,
109
+ image_std=self.image_std
110
+ )
111
+
112
+ return BatchFeature(data={"pixel_values": pixel_values, "num_patches": num_patches}, tensor_type=return_tensors)
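To see the tiling behavior concretely, here is a small sketch (assuming the repository is loaded locally with `trust_remote_code=True`, as in the README) that prints the shapes this processor produces; the example image path is one shipped in `images/`:

```python
from PIL import Image
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained(".", trust_remote_code=True)
features = processor(Image.open("images/table.png"))

# pixel_values stacks every 512 x 512 tile (plus the optional thumbnail), channel-first.
print(features["pixel_values"].shape)  # (total_tiles, 3, 512, 512); tile count depends on aspect ratio
# num_patches records how many tiles belong to each input image.
print(features["num_patches"])
```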
images/example1a.jpeg ADDED
images/example1b.jpeg ADDED
images/table.png ADDED

Git LFS Details

  • SHA256: 001461d8dd271602ce849013c9a226113279e0ae6156a27a12332bace6225e33
  • Pointer size: 131 Bytes
  • Size of remote file: 131 kB
images/tech.png ADDED

Git LFS Details

  • SHA256: 4ae75f51f941a0d05b9c7c9a5025f962930e3e7526a68a627602c47846278109
  • Pointer size: 131 Bytes
  • Size of remote file: 222 kB
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:415e7a900327cc5a057b5e57641ba689a834e5103adc2eece0512f8793415f34
3
+ size 17443626956
modeling.py ADDED
@@ -0,0 +1,323 @@
1
+ # --------------------------------------------------------
2
+ # Adapted from https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B under MIT License
3
+ # LICENSE is in incl_licenses directory.
4
+ # --------------------------------------------------------
5
+
6
+
7
+ import warnings
8
+ from typing import List, Optional, Tuple, Union
9
+
10
+ import torch.utils.checkpoint
11
+ import transformers
12
+ from torch import nn
13
+ from torch.nn import CrossEntropyLoss
14
+ from transformers import AutoModel, AutoModelForCausalLM, GenerationConfig
15
+ from transformers.modeling_outputs import CausalLMOutputWithPast
16
+ from transformers.modeling_utils import PreTrainedModel
17
+ from transformers.utils import logging
18
+
19
+ from .configuration import Llama_Nemotron_Nano_VL_Config
20
+
21
+ logger = logging.get_logger(__name__)
22
+
23
+
24
+ """
25
+ The following code is adapted from the
26
+ https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B/blob/main/modeling_internvl_chat.py repository
27
+
28
+ The chat function is adapted to handle NVLM 1-D tile-tagging design for dynamic high-resolution images.
29
+ """
30
+ def version_cmp(v1, v2, op='eq'):
31
+ import operator
32
+
33
+ from packaging import version
34
+ op_func = getattr(operator, op)
35
+ return op_func(version.parse(v1), version.parse(v2))
36
+
37
+
38
+ class Llama_Nemotron_Nano_VL_Model(PreTrainedModel):
39
+ config_class = Llama_Nemotron_Nano_VL_Config
40
+ main_input_name = 'pixel_values'
41
+ _supports_flash_attn_2 = True
42
+ _no_split_modules = ['InternVisionModel', 'SiglipVisionModel', 'Qwen2DecoderLayer']
43
+
44
+ def __init__(self, config: Llama_Nemotron_Nano_VL_Config):
45
+ super().__init__(config)
46
+
47
+ assert version_cmp(transformers.__version__, '4.36.2', 'ge')
48
+ image_size = config.force_image_size
49
+ patch_size = config.patch_size
50
+ self.patch_size = patch_size
51
+ self.template = config.template
52
+ self.num_image_token = int((image_size // patch_size) ** 2 * (config.downsample_ratio ** 2))
53
+ self.downsample_ratio = config.downsample_ratio
54
+ self.ps_version = config.ps_version
55
+ self.image_tag_type = config.image_tag_type
56
+
57
+ logger.info(f'num_image_token: {self.num_image_token}')
58
+ logger.info(f'ps_version: {self.ps_version}')
59
+
60
+ self.language_model = AutoModelForCausalLM.from_config(config.llm_config, torch_dtype=torch.bfloat16)
61
+ self.vision_model = AutoModel.from_config(config.vision_config, trust_remote_code=True)
62
+ self.vision_model.model._initialize_weights = self.vision_model.model._init_weights # WAR for transformers issue 38358
63
+
64
+ self.drop_vision_class_token = True
65
+
66
+ # Construct the vision projection.
67
+ # Default
68
+ vit_hidden_size = config.vit_hidden_size
69
+ vision_projection_hidden_size = config.projector_hidden_size
70
+ llm_hidden_size = config.llm_config.hidden_size
71
+
72
+ self.mlp1 = nn.Sequential(
73
+ nn.LayerNorm(vit_hidden_size * int(1 / self.downsample_ratio) ** 2, bias=True),
74
+ nn.Linear(vit_hidden_size * int(1 / self.downsample_ratio) ** 2, vision_projection_hidden_size, bias=True),
75
+ nn.GELU(),
76
+ nn.Linear(vision_projection_hidden_size, llm_hidden_size, bias=True)
77
+ )
78
+ self.mlp1 = self.mlp1.to(self.language_model.config.torch_dtype)
79
+
80
+ self.img_context_token_id = None
81
+
82
+ def forward(
83
+ self,
84
+ pixel_values: torch.FloatTensor,
85
+ input_ids: torch.LongTensor = None,
86
+ attention_mask: Optional[torch.Tensor] = None,
87
+ position_ids: Optional[torch.LongTensor] = None,
88
+ image_flags: Optional[torch.LongTensor] = None,
89
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
90
+ labels: Optional[torch.LongTensor] = None,
91
+ use_cache: Optional[bool] = None,
92
+ output_attentions: Optional[bool] = None,
93
+ output_hidden_states: Optional[bool] = None,
94
+ return_dict: Optional[bool] = None,
95
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
96
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
97
+
98
+ image_flags = image_flags.squeeze(-1)
99
+ input_embeds = self.language_model.get_input_embeddings()(input_ids)
100
+
101
+ vit_embeds = self.extract_feature(pixel_values)
102
+ vit_embeds = vit_embeds[image_flags == 1]
103
+ vit_batch_size = pixel_values.shape[0]
104
+
105
+ B, N, C = input_embeds.shape
106
+ input_embeds = input_embeds.reshape(B * N, C)
107
+
108
+ if torch.distributed.get_rank() == 0:
109
+ print(f'dynamic ViT batch size: {vit_batch_size}, images per sample: {vit_batch_size / B}, dynamic token length: {N}')
110
+
111
+ input_ids = input_ids.reshape(B * N)
112
+ selected = (input_ids == self.img_context_token_id)
113
+ try:
114
+ input_embeds[selected] = input_embeds[selected] * 0.0 + vit_embeds.reshape(-1, C)
115
+ except Exception as e:
116
+ vit_embeds = vit_embeds.reshape(-1, C)
117
+ print(f'warning: {e}, input_embeds[selected].shape={input_embeds[selected].shape}, '
118
+ f'vit_embeds.shape={vit_embeds.shape}')
119
+ n_token = selected.sum()
120
+ input_embeds[selected] = input_embeds[selected] * 0.0 + vit_embeds[:n_token]
121
+
122
+ input_embeds = input_embeds.reshape(B, N, C)
123
+
124
+ outputs = self.language_model(
125
+ inputs_embeds=input_embeds,
126
+ attention_mask=attention_mask,
127
+ position_ids=position_ids,
128
+ past_key_values=past_key_values,
129
+ use_cache=use_cache,
130
+ output_attentions=output_attentions,
131
+ output_hidden_states=output_hidden_states,
132
+ return_dict=return_dict,
133
+ )
134
+ logits = outputs.logits
135
+
136
+ loss = None
137
+ if labels is not None:
138
+ # Shift so that tokens < n predict n
139
+ shift_logits = logits[..., :-1, :].contiguous()
140
+ shift_labels = labels[..., 1:].contiguous()
141
+ # Flatten the tokens
142
+ loss_fct = CrossEntropyLoss()
143
+ shift_logits = shift_logits.view(-1, self.language_model.config.vocab_size)
144
+ shift_labels = shift_labels.view(-1)
145
+ # Enable model parallelism
146
+ shift_labels = shift_labels.to(shift_logits.device)
147
+ loss = loss_fct(shift_logits, shift_labels)
148
+
149
+ if not return_dict:
150
+ output = (logits,) + outputs[1:]
151
+ return (loss,) + output if loss is not None else output
152
+
153
+ return CausalLMOutputWithPast(
154
+ loss=loss,
155
+ logits=logits,
156
+ past_key_values=outputs.past_key_values,
157
+ hidden_states=outputs.hidden_states,
158
+ attentions=outputs.attentions,
159
+ )
160
+
161
+ def pixel_shuffle(self, x, scale_factor=0.5):
162
+ n, w, h, c = x.size()
163
+ # N, W, H, C --> N, W, H * scale, C // scale
164
+ x = x.view(n, w, int(h * scale_factor), int(c / scale_factor))
165
+ # N, W, H * scale, C // scale --> N, H * scale, W, C // scale
166
+ x = x.permute(0, 2, 1, 3).contiguous()
167
+ # N, H * scale, W, C // scale --> N, H * scale, W * scale, C // (scale ** 2)
168
+ x = x.view(n, int(h * scale_factor), int(w * scale_factor),
169
+ int(c / (scale_factor * scale_factor)))
170
+ if self.ps_version == 'v1':
171
+ warnings.warn("In ps_version 'v1', the height and width have not been swapped back, "
172
+ 'which results in a transposed image.')
173
+ else:
174
+ x = x.permute(0, 2, 1, 3).contiguous()
175
+ return x
176
+
177
+ def extract_feature(self, pixel_values):
178
+ vit_embeds = self.vision_model(pixel_values).features
179
+ vit_embeds = vit_embeds.to(dtype=torch.bfloat16)
180
+ h = w = int(vit_embeds.shape[1] ** 0.5)
181
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], h, w, -1)
182
+ vit_embeds = self.pixel_shuffle(vit_embeds, scale_factor=self.downsample_ratio)
183
+ vit_embeds = vit_embeds.reshape(vit_embeds.shape[0], -1, vit_embeds.shape[-1])
184
+ vit_embeds = self.mlp1(vit_embeds)
185
+ return vit_embeds
186
+
187
+ def _format_image_token(self, query, num_patches_list, IMG_CONTEXT_TOKEN):
188
+ # Split by '<image>' and rejoin with appropriate tokens
189
+ parts = query.split('<image>')
190
+ if len(parts) - 1 != len(num_patches_list):
191
+ raise ValueError(f"Number of <image> tokens ({len(parts) - 1}) doesn't match num_patches_list length ({len(num_patches_list)})")
192
+
193
+ result = parts[0]
194
+ for num_patches, part in zip(num_patches_list, parts[1:]):
195
+ if self.image_tag_type == "nvlm":
196
+ tile_pos_identifiers = [f"<tile_{j}>" for j in range(1, num_patches)] + ["<tile_global_thumbnail>"]
197
+ image_tokens = ''
198
+ for tile_pos_identifier in tile_pos_identifiers:
199
+ image_tokens += tile_pos_identifier + IMG_CONTEXT_TOKEN * self.num_image_token
200
+ image_tokens = '<Image>' + image_tokens + '</Image>'
201
+ elif self.image_tag_type == "internvl":
202
+ image_tokens = IMG_CONTEXT_TOKEN * self.num_image_token * num_patches
203
+ image_tokens = '<img>' + image_tokens + '</img>'
204
+ else:
205
+ raise ValueError(f"Unknown image tag type {self.image_tag_type}")
206
+
207
+ result += image_tokens + part
208
+
209
+ return result
210
+
211
+ """
212
+ Adapts the chat function to handle NVLM 1-D tile-tagging design for dynamic high-resolution images.
213
+ Additionally, it supports the following:
214
+ - Chat without a system prompt.
215
+ - Chat without an image prompt.
216
+ """
217
+ def chat(self, tokenizer, pixel_values, question, generation_config, history=None, return_history=False,
218
+ num_patches=None, IMG_START_TOKEN='<img>', IMG_END_TOKEN='</img>',
219
+ IMG_CONTEXT_TOKEN='<image>', verbose=False, visual_features=None, system_prompt=None):
220
+
221
+ if num_patches is None:
222
+ num_patches_list = [pixel_values.shape[0]] if pixel_values is not None else []
223
+ elif isinstance(num_patches, torch.Tensor):
224
+ num_patches_list = num_patches.tolist()
225
+ else:
226
+ num_patches_list = num_patches
227
+
228
+ if history is None and pixel_values is not None and '<image>' not in question:
229
+ question = '<image>\n' * len(num_patches_list) + question
230
+
231
+ assert pixel_values is None or len(pixel_values) == sum(num_patches_list)
232
+
233
+ img_context_token_id = tokenizer.convert_tokens_to_ids(IMG_CONTEXT_TOKEN)
234
+ self.img_context_token_id = img_context_token_id
235
+
236
+ eos_token_id = tokenizer.eos_token_id
237
+
238
+ messages = []
239
+ if system_prompt is not None:
240
+ messages.append({"role": "system", "content": system_prompt})
241
+
242
+ history = [] if history is None else history
243
+ for (old_question, old_answer) in history:
244
+ messages.append({"role": "user", "content": old_question})
245
+ messages.append({"role": "assistant", "content": old_answer})
246
+
247
+ messages.append({"role": "user", "content": question})
248
+ query = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
249
+
250
+ if verbose and pixel_values is not None:
251
+ image_bs = pixel_values.shape[0]
252
+ print(f'dynamic ViT batch size: {image_bs}')
253
+
254
+ query = self._format_image_token(query, num_patches_list, IMG_CONTEXT_TOKEN)
255
+
256
+ model_inputs = tokenizer(query, return_tensors='pt', add_special_tokens=False)
257
+ input_ids = model_inputs['input_ids'].cuda()
258
+ attention_mask = model_inputs['attention_mask'].cuda()
259
+ generation_config['eos_token_id'] = eos_token_id
260
+ generation_output = self.generate(
261
+ pixel_values=pixel_values,
262
+ visual_features=visual_features,
263
+ input_ids=input_ids,
264
+ attention_mask=attention_mask,
265
+ **generation_config
266
+ )
267
+
268
+ response = tokenizer.batch_decode(generation_output)[0]
269
+ response = response.split(tokenizer.eos_token)[0].strip()
270
+ history.append((question, response))
271
+ if return_history:
272
+ return response, history
273
+ else:
274
+ query_to_print = query.replace(IMG_CONTEXT_TOKEN, '')
275
+ query_to_print = query_to_print.replace(f'{IMG_START_TOKEN}{IMG_END_TOKEN}', '<image>')
276
+ if verbose:
277
+ print(query_to_print, response)
278
+ return response
279
+
280
+ @torch.no_grad()
281
+ def generate(
282
+ self,
283
+ pixel_values: Optional[torch.FloatTensor] = None,
284
+ input_ids: Optional[torch.FloatTensor] = None,
285
+ attention_mask: Optional[torch.LongTensor] = None,
286
+ visual_features: Optional[torch.FloatTensor] = None,
287
+ generation_config: Optional[GenerationConfig] = None,
288
+ output_hidden_states: Optional[bool] = None,
289
+ return_dict: Optional[bool] = None,
290
+ **generate_kwargs,
291
+ ) -> torch.LongTensor:
292
+
293
+ assert self.img_context_token_id is not None
294
+ if pixel_values is not None:
295
+ if visual_features is not None:
296
+ vit_embeds = visual_features.cuda()
297
+ vit_embeds = self.mlp1(vit_embeds)
298
+ else:
299
+ vit_embeds = self.extract_feature(pixel_values)
300
+
301
+ input_embeds = self.language_model.get_input_embeddings()(input_ids)
302
+ B, N, C = input_embeds.shape
303
+ input_embeds = input_embeds.reshape(B * N, C)
304
+
305
+ input_ids = input_ids.reshape(B * N)
306
+ selected = (input_ids == self.img_context_token_id)
307
+ assert selected.sum() != 0
308
+ input_embeds[selected] = vit_embeds.reshape(-1, C).to(input_embeds.device)
309
+
310
+ input_embeds = input_embeds.reshape(B, N, C)
311
+ else:
312
+ input_embeds = self.language_model.get_input_embeddings()(input_ids)
313
+
314
+ outputs = self.language_model.generate(
315
+ inputs_embeds=input_embeds,
316
+ attention_mask=attention_mask,
317
+ generation_config=generation_config,
318
+ output_hidden_states=output_hidden_states,
319
+ use_cache=True,
320
+ **generate_kwargs,
321
+ )
322
+
323
+ return outputs
preprocessor_config.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63ed8d1a7d866d2322cac5c73d8c3fa033a79ada289df02530ca0d16688fde8f
3
+ size 283
privacy.md ADDED
@@ -0,0 +1,8 @@
1
+ Field | Response
2
+ :----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
3
+ Generatable or reverse engineerable personal data? | None
4
+ Personal data used to create this model? | None
5
+ How often is dataset reviewed? | Before Every Release
6
+ Does data labeling (annotation, metadata) comply with privacy laws? | Yes
7
+ Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. Applicable Privacy Policy: https://www.nvidia.com/en-us/about-nvidia/privacy-policy/
8
+
safety.md ADDED
@@ -0,0 +1,7 @@
1
+ Field | Response
2
+ :---------------------------------------------------|:----------------------------------
3
+ Model Application(s): | - Extracting and understanding information from text and images in documents (OCR, tables, charts, diagrams, math expressions)<br>- Recognizing objects, attributes, and semantic relationships in images<br>- Interactive Q&A based on images and text<br>- Analyzing and summarizing similarities and differences between images
4
+ Describe the life critical impact (if present). | Not Applicable
5
+ Use Case Restrictions: | **Governing Terms:** Your use of the model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/).<br>**Additional Information:** [Llama 3.1 Community Model License](https://www.llama.com/llama3_1/license/); Built with Llama.
6
+ Model and dataset restrictions: | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Access to datasets is restricted during training, and dataset license constraints are adhered to.
7
+
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f38c73729248f6c127296386e3cdde96e254636cc58b4169d3fd32328d9a8ec
3
+ size 296
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f5725802f93aea2c6126c605128904b2feaad45cdc0b16240fb153481a229948
3
+ size 17211566
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fce27bafb0f4fd5fcbf3fc9a7152fb659fad40f7371142d2cbdf20a3ad8dae59
3
+ size 52949