diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..77e030eff1b869e8814e82abcb860eb0b9497bdc
--- /dev/null
+++ b/README.md
@@ -0,0 +1,411 @@
+---
+base_model:
+- Qwen/Qwen2.5-VL-32B-Instruct
+datasets:
+- xlangai/AgentNet
+- xlangai/aguvis-stage1
+- xlangai/aguvis-stage2
+- osunlp/UGround-V1-Data
+language:
+- en
+license: mit
+metrics:
+- code_eval
+- accuracy
+pipeline_tag: image-text-to-text
+tags:
+- VLM
+- Computer-Use-Agent
+- OS-Agent
+- GUI
+- Grounding
+library_name: transformers
+---
+
+
+
+# Introduction
+
+
+OpenCUA models (OpenCUA-7B and OpenCUA-32B) are end-to-end computer-use foundation models that can produce executable actions in computer environments. They are initialized from the weights of Qwen2.5-VL-7B-Instruct and Qwen2.5-VL-32B-Instruct.
+They demonstrate superior performance across CUA benchmarks. In particular, OpenCUA-32B achieves an average success rate of **34.8%** on [OSWorld-Verified](https://os-world.github.io/),
+establishing a new state-of-the-art (SOTA) among open-source models and surpassing OpenAI CUA (GPT-4o). Both models also show strong grounding performance: OpenCUA-32B achieves 59.6% on [OSWorld-G](https://osworld-grounding.github.io/) and 55.3% on [ScreenSpot-Pro](https://arxiv.org/abs/2504.07981).
+
+
+### Key Features
+
+- **Superior Computer-Use Capability**: Able to execute multi-step computer-use actions with effective planning and reasoning
+- **Multi-OS Support**: Trained on demonstrations across Ubuntu, Windows, and macOS
+- **Visual Grounding**: Strong GUI element recognition and spatial reasoning capabilities
+- **Multi-Image Context**: Processes up to 3 recent screenshots for better context understanding
+- **Reflective Reasoning**: Enhanced with reflective long Chain-of-Thought that identifies errors and provides corrective reasoning
+
+
+# Performance
+
+### Online Agent Evaluation
+OpenCUA models achieve strong performance on **[OSWorld-Verified](https://os-world.github.io/)**.
+OpenCUA-32B achieves the best performance among all open-source models with an average success rate of 34.8%, outperforming prior baselines by large margins.
+It also narrows the gap to proprietary Claude models.
+
+
+| **Model** | **15 Steps** | **50 Steps** | **100 Steps** |
+|-------------------------------|:--------:|:--------:|:---------:|
+| **Proprietary** | | | |
+| OpenAI CUA | 26.0 | 31.3 | 31.4 |
+| Seed 1.5-VL | 27.9 | — | 34.1 |
+| Claude 3.7 Sonnet | 27.1 | 35.8 | 35.9 |
+| Claude 4 Sonnet | 31.2 | 43.9 | 41.5 |
+| **Open-Source** | | | |
+| Qwen 2.5-VL-32B-Instruct | 3.0 | — | 3.9 |
+| Qwen 2.5-VL-72B-Instruct | 4.4 | — | 5.0 |
+| Kimi-VL-A3B | 9.7 | — | 10.3 |
+| UI-TARS-72B-DPO | 24.0 | 25.8 | 27.1 |
+| UI-TARS-1.5-7B | 24.5 | 27.3 | 27.4 |
+| OpenCUA-7B *(Ours)* | 24.3 | 27.9 | 26.6 |
+| **OpenCUA-32B *(Ours)*** | **29.7** | **34.1** | **34.8** |
+
+
+*OpenCUA scores are the mean of 3 independent runs.*
+
+### GUI Grounding Performance
+
+
+| **Model** | **OSWorld-G** | **ScreenSpot-V2** | **ScreenSpot-Pro** |
+|-------|-----------|---------------|----------------|
+| Qwen2.5-VL-7B | 31.4 | 88.8 | 27.6 |
+| Qwen2.5-VL-32B | 46.5 | 87.0 | 39.4 |
+| UI-TARS-72B | 57.1 | 90.3 | 38.1 |
+| **OpenCUA-A3B** | 48.6 | 91.4 | 28.5 |
+| **OpenCUA-Qwen2-7B** | 45.7 | 88.5 | 23.7 |
+| **OpenCUA-7B** | 55.3 | 92.3 | 50.0 |
+| **OpenCUA-32B** | **59.6** | **93.4** | **55.3** |
+
+
+
+### AgentNetBench (Offline Evaluation)
+
+
+| **Model** | **Coordinate Actions** | **Content Actions** | **Function Actions** | **Average** |
+|-------|-------------------|-----------------|------------------|---------|
+| Qwen2.5-VL-7B | 50.7 | 40.8 | 3.1 | 48.0 |
+| Qwen2.5-VL-32B | 66.6 | 47.2 | 41.5 | 64.8 |
+| Qwen2.5-VL-72B | 67.2 | 52.6 | 50.5 | 67.0 |
+| OpenAI CUA | 71.7 | 57.3 | **80.0** | 73.1 |
+| **OpenCUA-7B** | 79.0 | 62.0 | 44.3 | 75.2 |
+| **OpenCUA-32B** | **81.9** | 66.1 | 55.7 | **79.1** |
+
+
+# 🚀 Quick Start
+
+
+⚠️ **Important for Qwen-based Models (OpenCUA-7B, OpenCUA-32B):**
+
+To align with our training infrastructure, we have modified the model in two places:
+
+1. Multimodal Rotary Position Embedding (M-RoPE) has been replaced with 1D RoPE.
+2. The model uses the same tokenizer and chat template as Kimi-VL.
+
+Do not use the default transformers or vLLM classes to load the model. The tokenizer and chat template must stay aligned if you train the models.
+
+
+
+
+## Installation & Download
+
+First, install the required dependencies:
+
+```bash
+conda create -n opencua python=3.10
+conda activate opencua
+pip install -r requirement.txt
+```
+
+Download the model weights from Hugging Face:
+```python
+from huggingface_hub import snapshot_download
+snapshot_download(
+ repo_id="xlangai/OpenCUA-32B",
+ local_dir="OpenCUA-32B",
+ local_dir_use_symlinks=False
+)
+```
+
+## 🎯 GUI Grounding
+
+The following code demonstrates how to use OpenCUA models for GUI grounding tasks:
+
+```python
+import base64
+import torch
+from transformers import AutoTokenizer, AutoModel, AutoImageProcessor
+from PIL import Image
+
+def encode_image(image_path: str) -> str:
+ """Encode image to base64 string for model input."""
+ with open(image_path, "rb") as f:
+ return base64.b64encode(f.read()).decode()
+
+def load_opencua_model(model_path: str):
+ """Load OpenCUA model, tokenizer, and image processor."""
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+ model = AutoModel.from_pretrained(
+ model_path,
+ torch_dtype="auto",
+ device_map="auto",
+ trust_remote_code=True
+ )
+ image_processor = AutoImageProcessor.from_pretrained(model_path, trust_remote_code=True)
+
+ return model, tokenizer, image_processor
+
+def create_grounding_messages(image_path: str, instruction: str):
+ """Create chat messages for GUI grounding task."""
+ system_prompt = (
+ "You are a GUI agent. You are given a task and a screenshot of the screen. "
+ "You need to perform a series of pyautogui actions to complete the task."
+ )
+
+ messages = [
+ {"role": "system", "content": system_prompt},
+ {
+ "role": "user",
+ "content": [
+ {"type": "image", "image": f"data:image/png;base64,{encode_image(image_path)}"},
+ {"type": "text", "text": instruction},
+ ],
+ },
+ ]
+ return messages
+
+def run_inference(model, tokenizer, image_processor, messages, image_path):
+ """Run inference on the model."""
+ # Prepare text input
+ input_ids = tokenizer.apply_chat_template(
+ messages, tokenize=True, add_generation_prompt=True
+ )
+ input_ids = torch.tensor([input_ids]).to(model.device)
+
+ # Prepare image input
+ image = Image.open(image_path).convert('RGB')
+ image_info = image_processor.preprocess(images=[image])
+ pixel_values = torch.tensor(image_info['pixel_values']).to(
+ dtype=torch.bfloat16, device=model.device
+ )
+ grid_thws = torch.tensor(image_info['image_grid_thw'])
+
+ # Generate response
+ with torch.no_grad():
+ generated_ids = model.generate(
+ input_ids,
+ pixel_values=pixel_values,
+ grid_thws=grid_thws,
+ max_new_tokens=512,
+ temperature=0
+ )
+
+ # Decode output
+ prompt_len = input_ids.shape[1]
+ generated_ids = generated_ids[:, prompt_len:]
+ output_text = tokenizer.batch_decode(
+ generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
+ )[0]
+
+ return output_text
+
+# Example usage
+model_path = "xlangai/OpenCUA-32B" # or other model variants
+image_path = "screenshot.png"
+instruction = "Click on the submit button"
+
+# Load model
+model, tokenizer, image_processor = load_opencua_model(model_path)
+
+# Create messages and run inference
+messages = create_grounding_messages(image_path, instruction)
+result = run_inference(model, tokenizer, image_processor, messages, image_path)
+
+print("Model output:", result)
+```
+
+
+Expected result:
+```python
+pyautogui.click(x=1432, y=344)
+```
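+
+If you want to act on a prediction like this, a minimal sketch is shown below. It assumes the model emits a single `pyautogui.click(x=..., y=...)` call and that the coordinates have already been mapped back to the original screen resolution (see the coordinate-system notes below); the parsing helper is illustrative, not part of OpenCUA.
+
+```python
+import re
+
+import pyautogui
+
+def execute_click(model_output: str) -> None:
+    """Parse a single pyautogui.click(...) prediction and execute it."""
+    match = re.search(r"pyautogui\.click\(x=(\d+),\s*y=(\d+)\)", model_output)
+    if match is None:
+        raise ValueError(f"Unrecognized action: {model_output}")
+    x, y = int(match.group(1)), int(match.group(2))
+    pyautogui.click(x=x, y=y)  # performs the click on the local machine
+
+execute_click("pyautogui.click(x=1432, y=344)")
+```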
+
+
+## 🖥️ Computer Use Agent
+**[OpenCUAAgent](https://github.com/xlang-ai/OSWorld/blob/main/mm_agents/opencua_agent.py)** is developed in the [OSWorld](https://github.com/xlang-ai/OSWorld) environment based on OpenCUA models. It iteratively perceives the environment via screenshots, produces reflective long CoT as an inner monologue, and predicts the next action to execute. By default, OpenCUAAgent uses 3 screenshots of history and the L2 CoT format. A conceptual sketch of this loop is shown below.
+
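+As a conceptual sketch (not the actual OpenCUAAgent implementation; the environment and agent interfaces below are hypothetical), the agent loop looks roughly like this:
+
+```python
+from collections import deque
+
+def run_agent(env, agent, instruction, max_steps=100, history_size=3):
+    """Hypothetical perceive-reason-act loop keeping the last few screenshots."""
+    screenshots = deque(maxlen=history_size)   # up to 3 recent screenshots by default
+    for _ in range(max_steps):
+        screenshots.append(env.screenshot())   # perceive the current screen
+        # Reflective long CoT (inner monologue) plus the next executable action
+        thought, action = agent.predict(instruction, list(screenshots))
+        if action == "DONE":                   # task judged complete
+            break
+        env.execute(action)                    # e.g. run the predicted pyautogui command
+```
+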
+Command for running OpenCUA-7B and OpenCUA-32B in OSWorld:
+```bash
+ python run_multienv_opencua.py \
+ --headless \
+ --observation_type screenshot \
+ --model OpenCUA-32B \
+ --result_dir ./results --test_all_meta_path evaluation_examples/test_all_no_gdrive.json \
+ --max_steps 100 \
+ --num_envs 30 \
+ --coordinate_type qwen25
+```
+
+Currently we only support Hugging Face (transformers) inference. We are implementing vLLM support for OpenCUA models; please stay tuned.
+
+
+## Important Notes on Coordinate Systems
+
+
+| **Model** | **Coordinate System** |
+|-----------|-----------------------|
+| xlangai/OpenCUA-A3B | Relative coordinates (not supported in this code) |
+| xlangai/OpenCUA-Qwen2-7B | Relative coordinates |
+| xlangai/OpenCUA-7B | Absolute coordinates |
+| xlangai/OpenCUA-32B | Absolute coordinates |
+
+
+
+**OpenCUA models use different coordinate systems depending on the base model:**
+
+- **OpenCUA-Qwen2-7B**: Outputs **relative coordinates** (0.0 to 1.0 range)
+ ```python
+ # Example output: pyautogui.click(x=0.5, y=0.3)
+ # x=0.5 means 50% from left edge, y=0.3 means 30% from top edge
+
+ # Convert to absolute coordinates:
+ def qwen2_relative_to_absolute(rel_x, rel_y, original_width, original_height):
+ abs_x = int(rel_x * original_width)
+ abs_y = int(rel_y * original_height)
+ return abs_x, abs_y
+ ```
+
+- **OpenCUA-7B and OpenCUA-32B** (Qwen2.5-based): Output **absolute coordinates** after smart resize
+ ```python
+ # Example output: pyautogui.click(x=960, y=324)
+ # These are coordinates on the smart-resized image, not the original image
+
+ # Convert to original image coordinates:
+ # Please refer to the smart_resize function in: https://github.com/huggingface/transformers/blob/67ddc82fbc7e52c6f42a395b4a6d278c55b77a39/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L55
+ def qwen25_smart_resize_to_absolute(model_x, model_y, original_width, original_height):
+ # First, calculate the smart-resized dimensions
+ resized_height, resized_width = smart_resize(original_height, original_width, factor = 28, min_pixels = 3136, max_pixels = 12845056)
+
+ # Convert model output to relative coordinates on original image
+ rel_x = model_x / resized_width
+ rel_y = model_y / resized_height
+
+ # Then convert to absolute coordinates on original image
+ abs_x = int(rel_x * original_width)
+ abs_y = int(rel_y * original_height)
+ return abs_x, abs_y
+ ```
+
+
+
+**Understanding Smart Resize for Qwen2.5-based Models:**
+
+The Qwen2.5-VL models use a “smart resize” preprocessing step that maintains aspect ratio while fitting within pixel constraints.
+For coordinate conversion, you need the `smart_resize` function from the official Qwen2.5-VL implementation.
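+
+For example, a short usage sketch, assuming the `qwen25_smart_resize_to_absolute` helper defined above and an illustrative prediction of `x=960, y=324`:
+
+```python
+from PIL import Image
+# The helper above requires smart_resize; it ships with the Qwen2-VL image
+# processor in recent transformers versions.
+from transformers.models.qwen2_vl.image_processing_qwen2_vl import smart_resize
+
+image = Image.open("screenshot.png")
+orig_w, orig_h = image.size
+
+model_x, model_y = 960, 324  # coordinates predicted on the smart-resized image
+abs_x, abs_y = qwen25_smart_resize_to_absolute(model_x, model_y, orig_w, orig_h)
+print(abs_x, abs_y)  # coordinates on the original screenshot
+```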
+
+
+
+
+# TODO
+## vLLM Support
+We are actively working with the vLLM team to add support for OpenCUA models.
+
+**Workaround:** For now, please use the standard transformers library as shown in the examples above. We will update this section once vLLM support becomes available.
+
+## Training Code
+OpenCUA models are developed based on the training infrastructure of the Kimi Team. We are also developing a training pipeline based on open-source infrastructure.
+
+## License
+
+This project is licensed under the MIT License - see the LICENSE file in the root folder for details.
+
+## Research Use and Disclaimer
+
+OpenCUA models are intended for **research and educational purposes only**.
+
+### Prohibited Uses
+- The model may **not** be used for any purpose or activity that violates applicable laws or regulations in any jurisdiction
+- Use for illegal, unethical, or harmful activities is strictly prohibited
+
+### Disclaimer
+- The authors, contributors, and copyright holders are **not responsible** for any illegal, unethical, or harmful use of the Software, nor for any direct or indirect damages resulting from such use
+- Use of the "OpenCUA" name, logo, or trademarks does **not** imply any endorsement or affiliation unless separate written permission is obtained
+- Users are solely responsible for ensuring their use complies with applicable laws and regulations
+
+## Citation
+
+If you use OpenCUA models in your research, please cite our work:
+
+```bibtex
+@misc{wang2025opencuaopenfoundationscomputeruse,
+ title={OpenCUA: Open Foundations for Computer-Use Agents},
+ author={Xinyuan Wang and Bowen Wang and Dunjie Lu and Junlin Yang and Tianbao Xie and Junli Wang and Jiaqi Deng and Xiaole Guo and Yiheng Xu and Chen Henry Wu and Zhennan Shen and Zhuokai Li and Ryan Li and Xiaochuan Li and Junda Chen and Boyuan Zheng and Peihang Li and Fangyu Lei and Ruisheng Cao and Yeqiao Fu and Dongchan Shin and Martin Shin and Jiarui Hu and Yuyan Wang and Jixuan Chen and Yuxiao Ye and Danyang Zhang and Dikang Du and Hao Hu and Huarong Chen and Zaida Zhou and Haotian Yao and Ziwei Chen and Qizheng Gu and Yipu Wang and Heng Wang and Diyi Yang and Victor Zhong and Flood Sung and Y. Charles and Zhilin Yang and Tao Yu},
+ year={2025},
+ eprint={2508.09123},
+ archivePrefix={arXiv},
+ primaryClass={cs.AI},
+ url={https://arxiv.org/abs/2508.09123},
+}
+```
+
+
\ No newline at end of file
diff --git a/config.bak.json b/config.bak.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf01cf5a51bc1749fcf5729f5284cededce5cc53
--- /dev/null
+++ b/config.bak.json
@@ -0,0 +1,69 @@
+{
+ "architectures": [
+ "OpenCUAForConditionalGeneration"
+ ],
+ "auto_map": {
+ "AutoConfig": "configuration_opencua.OpenCUAConfig",
+ "AutoModel": "modeling_opencua.OpenCUAForConditionalGeneration",
+ "AutoModelForCausalLM": "modeling_opencua.OpenCUAForConditionalGeneration"
+ },
+ "ignore_index": -100,
+ "media_placeholder_token_id": 151664,
+ "model_type": "opencua",
+ "pad_token_id": 0,
+ "text_config": {
+ "bos_token_id": 151643,
+ "eos_token_id": 151644,
+ "head_dim": 128,
+ "hidden_act": "silu",
+ "hidden_size": 5120,
+ "initializer_range": 0.02,
+ "intermediate_size": 27648,
+ "k_proj_bias": true,
+ "max_length": 20,
+ "min_length": 0,
+ "model_type": "qwen2",
+ "num_attention_heads": 40,
+ "num_beam_groups": 1,
+ "num_beams": 1,
+ "num_hidden_layers": 64,
+ "num_key_value_heads": 8,
+ "pad_token_id": 152063,
+ "pretraining_sequence_length": 131072,
+ "q_proj_bias": true,
+ "rms_norm_eps": 1e-05,
+ "rope_theta": 1000000.0,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "use_bfloat16": false,
+ "use_cache": true,
+ "v_proj_bias": true,
+ "vocab_size": 152064
+ },
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.48.3",
+ "vision_config": {
+ "depth": 32,
+ "fullatt_block_indexes": [
+ 7,
+ 15,
+ 23,
+ 31
+ ],
+ "hidden_act": "silu",
+ "hidden_size": 1280,
+ "num_heads": 16,
+ "in_chans": 3,
+ "intermediate_size": 3456,
+ "patch_size": 14,
+ "spatial_merge_size": 2,
+ "spatial_patch_size": 14,
+ "temporal_patch_size": 2,
+ "out_hidden_size": 5120,
+ "tokens_per_second": 2,
+ "window_size": 112
+ },
+ "vocab_size": 152064
+}
\ No newline at end of file
diff --git a/config.json b/config.json
new file mode 100644
index 0000000000000000000000000000000000000000..f1afbeb046767034fb33d0fda38a70c7cab72433
--- /dev/null
+++ b/config.json
@@ -0,0 +1,51 @@
+{
+ "architectures": [
+ "Qwen2_5_VLForConditionalGeneration"
+ ],
+ "attention_dropout": 0.0,
+ "eos_token_id": 151645,
+ "hidden_act": "silu",
+ "hidden_size": 5120,
+ "image_token_id": 151655,
+ "initializer_range": 0.02,
+ "intermediate_size": 27648,
+ "max_position_embeddings": 128000,
+ "max_window_layers": 64,
+ "model_type": "qwen2_5_vl",
+ "num_attention_heads": 40,
+ "num_hidden_layers": 64,
+ "num_key_value_heads": 8,
+ "pad_token_id": 151643,
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": {
+ "mrope_section": [
+ 16,
+ 24,
+ 24
+ ],
+ "rope_type": "default",
+ "type": "default"
+ },
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.49.0",
+ "use_cache": true,
+ "use_sliding_window": false,
+ "video_token_id": 151656,
+ "vision_config": {
+ "hidden_size": 1280,
+ "in_chans": 3,
+ "intermediate_size": 3456,
+ "model_type": "qwen2_5_vl",
+ "out_hidden_size": 5120,
+ "spatial_patch_size": 14,
+ "tokens_per_second": 2,
+ "torch_dtype": "bfloat16"
+ },
+ "vision_end_token_id": 151653,
+ "vision_start_token_id": 151652,
+ "vision_token_id": 151654,
+ "vocab_size": 152064
+}
diff --git a/configuration_opencua.py b/configuration_opencua.py
new file mode 100644
index 0000000000000000000000000000000000000000..095c0fc9710f91f6fd5514b1173693d9d351a61f
--- /dev/null
+++ b/configuration_opencua.py
@@ -0,0 +1,37 @@
+from transformers.configuration_utils import PretrainedConfig
+from transformers.models.qwen2_5_vl.configuration_qwen2_5_vl import Qwen2_5_VLVisionConfig
+from transformers.models.qwen2.configuration_qwen2 import Qwen2Config
+
+
+class OpenCUAConfig(PretrainedConfig):
+    """OpenCUA-2.5-32B model configuration.
+
+    Args:
+        vision_config: Configuration for the vision model (Qwen2_5_VLVisionConfig).
+        text_config: Configuration for the text model (Qwen2Config).
+        ignore_index: Target index ignored in the loss computation.
+        media_placeholder_token_id: Token ID used as the image placeholder.
+        pad_token_id: The token ID to use for padding.
+    """
+
+ model_type = "opencua"
+
+ def __init__(
+ self,
+ vision_config: dict | Qwen2_5_VLVisionConfig | None = None,
+ text_config: dict | Qwen2Config | None = None,
+ ignore_index: int = -100,
+ media_placeholder_token_id: int = 151664,
+ pad_token_id: int = 0,
+ **kwargs
+ ):
+ if isinstance(vision_config, dict):
+ vision_config = Qwen2_5_VLVisionConfig(**vision_config)
+ self.vision_config = vision_config
+
+ if isinstance(text_config, dict):
+ text_config = Qwen2Config(**text_config)
+ self.text_config = text_config
+
+ self.ignore_index = ignore_index
+ self.media_placeholder_token_id = media_placeholder_token_id
+
+ super().__init__(pad_token_id=pad_token_id, **kwargs)
diff --git a/generation_config.json b/generation_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..d55e80e3462d79036361bd847152bfc854b1a5b5
--- /dev/null
+++ b/generation_config.json
@@ -0,0 +1,4 @@
+{
+ "max_length": 32768,
+ "eos_token_id": 151644
+}
\ No newline at end of file
diff --git a/model-1-of-64.safetensors b/model-1-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..8d7deffa38fe38f26a1e8b07f7f2aeb942281014
--- /dev/null
+++ b/model-1-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f93f8fdb8948cb1533461a48dbcc53ff3f49334fb4a9f39fba89b030b2671f2
+size 3910073936
diff --git a/model-10-of-64.safetensors b/model-10-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..c88c4571f257227e3ea02d653b42331a4822faff
--- /dev/null
+++ b/model-10-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0dea6a376c59a96a85dbf170bee7709de7591fff8fe3dab573a189f003b8efbf
+size 975212080
diff --git a/model-11-of-64.safetensors b/model-11-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..315f48ec253126a4b205e752f0617c32bd4be501
--- /dev/null
+++ b/model-11-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7305f64fbbd5fed1d919db106f720ea7ad4fdc0a3fbcedd53641bb3aa5cc0f31
+size 975212096
diff --git a/model-12-of-64.safetensors b/model-12-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..101f7bbf31d66bf1171fc244324be25cf24011e6
--- /dev/null
+++ b/model-12-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa8097a66699924bbeeaef401c4494bc589b2f03656b631cd294722e1be0e56c
+size 975212096
diff --git a/model-13-of-64.safetensors b/model-13-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..938ac96718a02bf99fb80b6daa05fe91532f163d
--- /dev/null
+++ b/model-13-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f62d3f60e4e9f5ced3999c7b59791d938f0624c6439d053f1010b3758833d692
+size 975212096
diff --git a/model-14-of-64.safetensors b/model-14-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..dafb8f24788d7763f751fee9d46cec64cc589805
--- /dev/null
+++ b/model-14-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:414abeb9ebc8b8d1556cc30654cdd89c29244fa598667d60b4c60acdc05febf1
+size 975212096
diff --git a/model-15-of-64.safetensors b/model-15-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..c40b2e788aedc92f726b96212f962362770a265f
--- /dev/null
+++ b/model-15-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07e791bc6e5def800381c0f875e64fe44701ad1d223871929293cb402745abf2
+size 975212096
diff --git a/model-16-of-64.safetensors b/model-16-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..9ed06a658f07756debe86c86f72faf568250af6d
--- /dev/null
+++ b/model-16-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dcfeec3a6e3d3761344d25bc7e64b470497f39b142abe76a27f0be328fa5a51b
+size 975212096
diff --git a/model-17-of-64.safetensors b/model-17-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..69b3edbfaf414c4983383666a495551f9d7ba51f
--- /dev/null
+++ b/model-17-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d862948c63a9f309203fc804a6d57249dbf937cfd23cd470f4787ab1b6f408e
+size 975212096
diff --git a/model-18-of-64.safetensors b/model-18-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..dc86658ace20b1b047cf6747b0f87c4e100ef99d
--- /dev/null
+++ b/model-18-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e593c5f9733724fe9e4e97737477425c7ebc4fe4ffc91848b28d2bb1372e725
+size 975212096
diff --git a/model-19-of-64.safetensors b/model-19-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e237a9e385d282d7056a94666316b18f0a4fd47c
--- /dev/null
+++ b/model-19-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fd8ae7e12435ab7a71a933e7d7b4cb542f1d87d9dbba786ea5995b43164fa7c
+size 975212096
diff --git a/model-2-of-64.safetensors b/model-2-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..9686951f6af6b6de3f63857479c3c88ddf2e6a16
--- /dev/null
+++ b/model-2-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba11af2777844ed3e665df561e3505e8a3ed16a0c0f7c5167d30f9b08d0524ff
+size 975212080
diff --git a/model-20-of-64.safetensors b/model-20-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..b2390af5fa222fe2b86bac09dcca3e238941aea0
--- /dev/null
+++ b/model-20-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:20582ca438c35b1818cd372f6602b03bc98d606efb987022939982f9c21d8950
+size 975212096
diff --git a/model-21-of-64.safetensors b/model-21-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..014cb26490f3f2ab13283795ab43edbcc2d09682
--- /dev/null
+++ b/model-21-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a66d3be5340ce35964b0217a11ae461332bd1b61461714d7844a695e294bc79
+size 975212096
diff --git a/model-22-of-64.safetensors b/model-22-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..95fd3a7d5f3becf3ebbb0c67fb32c73fcb562044
--- /dev/null
+++ b/model-22-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05a397d86b3124248deb7da026325e94dfe5f9fc2c1afc393aec4101f404ae27
+size 975212096
diff --git a/model-23-of-64.safetensors b/model-23-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..57496f81eb965c0077ea565668487dcf949aa090
--- /dev/null
+++ b/model-23-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a5a54d6f0094dff6c34de92caf1e962e36176585769148026efc51305b0060f
+size 975212096
diff --git a/model-24-of-64.safetensors b/model-24-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..7d001982c805fdc203fff7d1e032469099cf90ab
--- /dev/null
+++ b/model-24-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3adb06cbc432f29a0144373f44bada870e4b55a9e06d47593cdd69ad427bd7cc
+size 975212096
diff --git a/model-25-of-64.safetensors b/model-25-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..31a0a9b46a4b4d96e132e9ff40e44987c4b39457
--- /dev/null
+++ b/model-25-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a9084a804f924021d66a35dc2c3fc35109d2ad7e99fbfdc0106ebe85602eac7
+size 975212096
diff --git a/model-26-of-64.safetensors b/model-26-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..606f0a165c706642170a7911920b363ed09a9509
--- /dev/null
+++ b/model-26-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3563166a42ae3c8332ea79ef95aa17597d82bcb52a54695b3281e7e000b80af6
+size 975212096
diff --git a/model-27-of-64.safetensors b/model-27-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..baa67c6ca76002651d0202aeba3138d3b2912ca6
--- /dev/null
+++ b/model-27-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b29bfc99c4cad7ca5e9b976734051b5b30181371e1a5e007e9dbde27f19e858
+size 975212096
diff --git a/model-28-of-64.safetensors b/model-28-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..8d9447c6b0b2fab9fc74c82b3f523347d4022d3a
--- /dev/null
+++ b/model-28-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be52480d723a9b68bf1942cb074a5855456b8059ec5d22f99d12d0effc5b4547
+size 975212096
diff --git a/model-29-of-64.safetensors b/model-29-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..1b334d3fa2c1c3e65e96d7a5fd8b221212984c8f
--- /dev/null
+++ b/model-29-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fb36bcb0a2acf028f3e83149d8954ec5f4b5c011b1ca538689e6dda034c298b
+size 975212096
diff --git a/model-3-of-64.safetensors b/model-3-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a078432c6b513974bb1808dfabefc9f96b6773e6
--- /dev/null
+++ b/model-3-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4a858641c4db4486f6142aaf5f7e6e2dba1fd14a7640c31ce9f86289f6cc9ec
+size 975212080
diff --git a/model-30-of-64.safetensors b/model-30-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..2c9de6de5a00475272163402ff31abafe1152d58
--- /dev/null
+++ b/model-30-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39cf268f7c72fc00bfa8b2a2ea6ab72243e46c88597e3c4aedb768288b534b05
+size 975212096
diff --git a/model-31-of-64.safetensors b/model-31-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..54df6cbbd0e2bec71bf32946396993d1e81ee938
--- /dev/null
+++ b/model-31-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f54de1fae911dc809f8ca311ff948facdc4c613fc659468aadb6af99b58b125
+size 975212096
diff --git a/model-32-of-64.safetensors b/model-32-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a94e6885365655d9564716baa02d6f0bf4de7817
--- /dev/null
+++ b/model-32-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:937716b987d5994fb46faecc35622ceb0fa18283aac2cbb506c4b9d3f1e3fcae
+size 975212096
diff --git a/model-33-of-64.safetensors b/model-33-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..fde0bc27e02b31f83ec0f2d3a867d5e990cdcd26
--- /dev/null
+++ b/model-33-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08d4297513cb07588a018c0df47fd2849e30e64ebe61675b576dfef2a0a5c831
+size 975212096
diff --git a/model-34-of-64.safetensors b/model-34-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e3f22e8660e47e88470439e63250b8bb9153443b
--- /dev/null
+++ b/model-34-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6651b5a783a8de35e606fe884b29c5a13ba237123972acbdae99fbe44614af57
+size 975212096
diff --git a/model-35-of-64.safetensors b/model-35-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..8a3c54658b3737a0ce438fb58dc46c6d81d53816
--- /dev/null
+++ b/model-35-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfeaf19d699c85bff1aa97b7a9106d2cecc4a6ecb0d6958250ee4b389c3b87f1
+size 975212096
diff --git a/model-36-of-64.safetensors b/model-36-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..9c1c2390cd042c6562fef7bf7c597ca1386934f3
--- /dev/null
+++ b/model-36-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:439ea1bf04cc07f00d1e0a8ededd645e89e008fc8eec89c8213a82df7bc93442
+size 975212096
diff --git a/model-37-of-64.safetensors b/model-37-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..9146a9b7bfdcb53bede9510b19649be251fd2e45
--- /dev/null
+++ b/model-37-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:893ef546ef828989c6839b6b0ab02e6e6257bb4eef4ee9fd1deaa46202d62aa4
+size 975212096
diff --git a/model-38-of-64.safetensors b/model-38-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..6c68d20c3b5e5768e4ff87e6fd143609efe580de
--- /dev/null
+++ b/model-38-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:447ecde3b04b45b54f6e24cfcd4dfa74cf68023c0f5a22eaaa230380d844dfdf
+size 975212096
diff --git a/model-39-of-64.safetensors b/model-39-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..3593d2f35465b262cc7f30cc65743a87e1664042
--- /dev/null
+++ b/model-39-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51186e7ff2c99616c75a85a28fbb37351e26c5dc2949d23d4373a3467910e935
+size 975212096
diff --git a/model-4-of-64.safetensors b/model-4-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e1681d9e7a2841d5763a388d512c25392378fc23
--- /dev/null
+++ b/model-4-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d13cc0d8d8f37544d3bf0dbf039f0f94ef1114dbc53a2bb6679cf8c180b77bc
+size 975212080
diff --git a/model-40-of-64.safetensors b/model-40-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..f5aab405ea109d4f22a9697880943f36e2748ccc
--- /dev/null
+++ b/model-40-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d1cbb025e2acb4527d4cae146bf3fc53200b5f714828ed452a7085cd40e5064
+size 975212096
diff --git a/model-41-of-64.safetensors b/model-41-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..151e6b07e03b94aeffbd36a2ae16bec9a9584749
--- /dev/null
+++ b/model-41-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51d77fa16c2beed95ba7f5ad37a42e1258acda22640631f0ca0978db1b222672
+size 975212096
diff --git a/model-42-of-64.safetensors b/model-42-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..2fd4c3fcf57f7d0c47767e2fcfd3c5a59503dbde
--- /dev/null
+++ b/model-42-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e4f70cf44323490fc83788b6ebe670e247851d9c17a86dd1845e7377bf9691b
+size 975212096
diff --git a/model-43-of-64.safetensors b/model-43-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..cffc6f9beead5ac006ae1bf878bd4158a599d3d1
--- /dev/null
+++ b/model-43-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dcc0ef3ad628eb5c377f26caa4d1340c4b5e941cac9fdfad0e8826b8205a7520
+size 975212096
diff --git a/model-44-of-64.safetensors b/model-44-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..1523768ea374bced222c7ce1fe3b9d0fc73de377
--- /dev/null
+++ b/model-44-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a832b22641ce8b082cbf7f2abb3a6d12cee21211465ec1f6a6945e1b01ca3a08
+size 975212096
diff --git a/model-45-of-64.safetensors b/model-45-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..86fdbfd68a1937e2aeb80f5c7bbd685cef5ea46a
--- /dev/null
+++ b/model-45-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f87e97668db5ff3af25b363da785365c5e8c061b0a69a899e148be7951200e3
+size 975212096
diff --git a/model-46-of-64.safetensors b/model-46-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..761c064530fe137052217a967d993668d37a8039
--- /dev/null
+++ b/model-46-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:942391c6100fbfdad57e548f7d4e9789486a04ee016d6f3edea08d08cb0fa7c2
+size 975212096
diff --git a/model-47-of-64.safetensors b/model-47-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..a8b908709e995c517f7811d05331ec3d56a1593c
--- /dev/null
+++ b/model-47-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a04f304e9310861fa3bf4410475f60d8a02fac1ee27bfa4ffd93459f0340860d
+size 975212096
diff --git a/model-48-of-64.safetensors b/model-48-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..f9a7eb4cfd5b79688e059e410ab461ca4f456bb2
--- /dev/null
+++ b/model-48-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fac24ae7395d74a349439772072bbc0bd900d68a8c68f42ee4fa7e5745731a7c
+size 975212096
diff --git a/model-49-of-64.safetensors b/model-49-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..db436b382ac8fd613b31568846c1a7cd8876d810
--- /dev/null
+++ b/model-49-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b944ee340bf8ad7916ba6428ec6479c9d0bda3060fcb2b6a0fdacb6058c7023
+size 975212096
diff --git a/model-5-of-64.safetensors b/model-5-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..469d10297619cf1f00c58c49eb94865de39efa70
--- /dev/null
+++ b/model-5-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d380966b60109b38e6a060423f347d9c90c84c1be995c1dfa534f412cdee9a65
+size 975212080
diff --git a/model-50-of-64.safetensors b/model-50-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..98ceec28d810cb7c2c06b6a9885aedad64eef6d2
--- /dev/null
+++ b/model-50-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a737d4e3a39128d70d78029d164bf566e79f8fe7cd3201e46286e3509f94c895
+size 975212096
diff --git a/model-51-of-64.safetensors b/model-51-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..442bd3520826cdcf6cda80a02b00b9f2819bbfad
--- /dev/null
+++ b/model-51-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd53a71efaf790af3f07ac97e2cb3acbe5b5d0fbe49b2c80a94027f45dc3113f
+size 975212096
diff --git a/model-52-of-64.safetensors b/model-52-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..c2ad1b6746d3a016a17d05549e9612e89a9c40d0
--- /dev/null
+++ b/model-52-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10d4f9f1dc9edb126511bf8a873cd2e11cb4b95e351cf1aff59e97cd3d5bbcaf
+size 975212096
diff --git a/model-53-of-64.safetensors b/model-53-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e820a1c556850d8ee54dab588c6f346945b55636
--- /dev/null
+++ b/model-53-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2f98b53c96079ea4755c1760b4019578788a5506e9985407e6e3916593dd01f
+size 975212096
diff --git a/model-54-of-64.safetensors b/model-54-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..c8493b3185c64b6d5ce2b91bbe060ed26a536019
--- /dev/null
+++ b/model-54-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14ac00ad2727d4a69ed6ef71bc1ecf9fb22f3b0eb6caa1575d0e6681eb7bf6ca
+size 975212096
diff --git a/model-55-of-64.safetensors b/model-55-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..0759a21e83c53769817e9c11add4ec6baf01d319
--- /dev/null
+++ b/model-55-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c46929585fcf07579ec33faff61418569b0940a58f838a935b4b999456c4129f
+size 975212096
diff --git a/model-56-of-64.safetensors b/model-56-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..9792d5464a3f07bd8785f4b08ac80ff1881c19bd
--- /dev/null
+++ b/model-56-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e02822e615d4ee09f39e8686e44929008d00173b1fb1a2cd7f025387483dcd1
+size 975212096
diff --git a/model-57-of-64.safetensors b/model-57-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..3d7f4145f4fd92d47e85f74878ba083b4cb584d5
--- /dev/null
+++ b/model-57-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ea161d4781cd5fb585fd6ab0ec598b771ba0aeb5ee3579809e426145f817a5b
+size 975212096
diff --git a/model-58-of-64.safetensors b/model-58-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..b88391a2e7e1af769be0190c6b7752695e69f8a3
--- /dev/null
+++ b/model-58-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81abb2778229983ebc37cb6e8fcca044d4e4d17c883ab9fbe5b119d4cc1d8b5f
+size 975212096
diff --git a/model-59-of-64.safetensors b/model-59-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..bf45f029bb1c646d8e0fce0c8aed560f6ebd83c7
--- /dev/null
+++ b/model-59-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:888ee8fd8301c2f74a6595c1d00ef6e58107699943bd0a6c7240a70cb3b0c568
+size 975212096
diff --git a/model-6-of-64.safetensors b/model-6-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..77d2856ca67a7e44d8f3a4010d3738a09513790e
--- /dev/null
+++ b/model-6-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f64651f9d273cbc22a39bcb2ee88c2ee8f567e738f8b273397c764b26261237c
+size 975212080
diff --git a/model-60-of-64.safetensors b/model-60-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..92b8a80ab42731ca3e3a742cac2a9b318983bd50
--- /dev/null
+++ b/model-60-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d0de63c0f62e38fd5411c7328a3a75b40ced7efc0a5327180c3f403ab3478f4
+size 975212096
diff --git a/model-61-of-64.safetensors b/model-61-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..5f28575b7c8c93918032ef633cd2e9c625bce2f8
--- /dev/null
+++ b/model-61-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:921e6a70fc625e1814a22fbf1142381cd212f5a65424b737295990fc9fc04992
+size 975212096
diff --git a/model-62-of-64.safetensors b/model-62-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..39a98ad05f849a2a92827d4032115fd82d685265
--- /dev/null
+++ b/model-62-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32add8b3d4d5107009fd4a40169ebe102de3fcc4e72379b7d0323cdbbd5f124a
+size 975212096
diff --git a/model-63-of-64.safetensors b/model-63-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..e2c5967a05fb0492be8a0b4bbb53c30fd041af03
--- /dev/null
+++ b/model-63-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05c69b2b42a2c07d6015de54265b62b88385d5a89857efd6f471c51992bdfe0b
+size 975212096
diff --git a/model-64-of-64.safetensors b/model-64-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..1e0ea0dbeafa2930977f474960505efdc994bb2e
--- /dev/null
+++ b/model-64-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80fc8baf9fe9e1dd8cfbb446036f10c7976bf047de8dcc5dd19faed7913f973f
+size 2532357912
diff --git a/model-7-of-64.safetensors b/model-7-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..d8886ab7548b4f2ee5411ae6e15dd883cd3a7dbb
--- /dev/null
+++ b/model-7-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5532cc9af2c089cb0d75569c20ec07572e23aeff87305a2204921f092517e272
+size 975212080
diff --git a/model-8-of-64.safetensors b/model-8-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..18d6e33f349877e036f983972f9d20ad406a53b7
--- /dev/null
+++ b/model-8-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22b175cd82fac8a7836431afb9db1b348e0ec923fc552c64b737dda21d1b012f
+size 975212080
diff --git a/model-9-of-64.safetensors b/model-9-of-64.safetensors
new file mode 100644
index 0000000000000000000000000000000000000000..bb6d683fb4c21ecaa62b268bd5c27f39d3e92205
--- /dev/null
+++ b/model-9-of-64.safetensors
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:346ec594b88f4ce7c5f7135303cb4c21b184ffde910305f7092d95cb833d08ec
+size 975212080
diff --git a/model.args.pt b/model.args.pt
new file mode 100644
index 0000000000000000000000000000000000000000..730b46a26b482754fe7aab9667050641a991677e
--- /dev/null
+++ b/model.args.pt
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f63a4fc32414b15ac0c96d0d6b4889f0bd29bb7c16bde542c978c0e01dd49beb
+size 25196
diff --git a/model.safetensors.index.json b/model.safetensors.index.json
new file mode 100644
index 0000000000000000000000000000000000000000..cec1df74c13cbf4db716dfc37b8dc4895775955f
--- /dev/null
+++ b/model.safetensors.index.json
@@ -0,0 +1,1232 @@
+{
+ "metadata": {
+ "total_size": 66905444864
+ },
+ "weight_map": {
+ "model.layers.63.self_attn.q_proj.weight": "model-64-of-64.safetensors",
+ "model.layers.63.self_attn.q_proj.bias": "model-64-of-64.safetensors",
+ "model.layers.63.self_attn.k_proj.weight": "model-64-of-64.safetensors",
+ "model.layers.63.self_attn.k_proj.bias": "model-64-of-64.safetensors",
+ "model.layers.63.self_attn.v_proj.weight": "model-64-of-64.safetensors",
+ "model.layers.63.self_attn.v_proj.bias": "model-64-of-64.safetensors",
+ "model.layers.63.self_attn.o_proj.weight": "model-64-of-64.safetensors",
+ "model.layers.63.mlp.gate_proj.weight": "model-64-of-64.safetensors",
+ "model.layers.63.mlp.down_proj.weight": "model-64-of-64.safetensors",
+ "model.layers.63.mlp.up_proj.weight": "model-64-of-64.safetensors",
+ "model.layers.63.input_layernorm.weight": "model-64-of-64.safetensors",
+ "model.layers.63.post_attention_layernorm.weight": "model-64-of-64.safetensors",
+ "model.norm.weight": "model-64-of-64.safetensors",
+ "lm_head.weight": "model-64-of-64.safetensors",
+ "model.layers.63.self_attn.rotary_emb.inv_freq": "model-64-of-64.safetensors",
+ "model.layers.60.self_attn.q_proj.weight": "model-61-of-64.safetensors",
+ "model.layers.60.self_attn.q_proj.bias": "model-61-of-64.safetensors",
+ "model.layers.60.self_attn.k_proj.weight": "model-61-of-64.safetensors",
+ "model.layers.60.self_attn.k_proj.bias": "model-61-of-64.safetensors",
+ "model.layers.60.self_attn.v_proj.weight": "model-61-of-64.safetensors",
+ "model.layers.60.self_attn.v_proj.bias": "model-61-of-64.safetensors",
+ "model.layers.60.self_attn.o_proj.weight": "model-61-of-64.safetensors",
+ "model.layers.60.mlp.gate_proj.weight": "model-61-of-64.safetensors",
+ "model.layers.60.mlp.down_proj.weight": "model-61-of-64.safetensors",
+ "model.layers.60.mlp.up_proj.weight": "model-61-of-64.safetensors",
+ "model.layers.60.input_layernorm.weight": "model-61-of-64.safetensors",
+ "model.layers.60.post_attention_layernorm.weight": "model-61-of-64.safetensors",
+ "model.layers.60.self_attn.rotary_emb.inv_freq": "model-61-of-64.safetensors",
+ "model.layers.58.self_attn.q_proj.weight": "model-59-of-64.safetensors",
+ "model.layers.58.self_attn.q_proj.bias": "model-59-of-64.safetensors",
+ "model.layers.58.self_attn.k_proj.weight": "model-59-of-64.safetensors",
+ "model.layers.58.self_attn.k_proj.bias": "model-59-of-64.safetensors",
+ "model.layers.58.self_attn.v_proj.weight": "model-59-of-64.safetensors",
+ "model.layers.58.self_attn.v_proj.bias": "model-59-of-64.safetensors",
+ "model.layers.58.self_attn.o_proj.weight": "model-59-of-64.safetensors",
+ "model.layers.58.mlp.gate_proj.weight": "model-59-of-64.safetensors",
+ "model.layers.58.mlp.down_proj.weight": "model-59-of-64.safetensors",
+ "model.layers.58.mlp.up_proj.weight": "model-59-of-64.safetensors",
+ "model.layers.58.input_layernorm.weight": "model-59-of-64.safetensors",
+ "model.layers.58.post_attention_layernorm.weight": "model-59-of-64.safetensors",
+ "model.layers.58.self_attn.rotary_emb.inv_freq": "model-59-of-64.safetensors",
+ "model.layers.59.self_attn.q_proj.weight": "model-60-of-64.safetensors",
+ "model.layers.59.self_attn.q_proj.bias": "model-60-of-64.safetensors",
+ "model.layers.59.self_attn.k_proj.weight": "model-60-of-64.safetensors",
+ "model.layers.59.self_attn.k_proj.bias": "model-60-of-64.safetensors",
+ "model.layers.59.self_attn.v_proj.weight": "model-60-of-64.safetensors",
+ "model.layers.59.self_attn.v_proj.bias": "model-60-of-64.safetensors",
+ "model.layers.59.self_attn.o_proj.weight": "model-60-of-64.safetensors",
+ "model.layers.59.mlp.gate_proj.weight": "model-60-of-64.safetensors",
+ "model.layers.59.mlp.down_proj.weight": "model-60-of-64.safetensors",
+ "model.layers.59.mlp.up_proj.weight": "model-60-of-64.safetensors",
+ "model.layers.59.input_layernorm.weight": "model-60-of-64.safetensors",
+ "model.layers.59.post_attention_layernorm.weight": "model-60-of-64.safetensors",
+ "model.layers.59.self_attn.rotary_emb.inv_freq": "model-60-of-64.safetensors",
+ "model.layers.61.self_attn.q_proj.weight": "model-62-of-64.safetensors",
+ "model.layers.61.self_attn.q_proj.bias": "model-62-of-64.safetensors",
+ "model.layers.61.self_attn.k_proj.weight": "model-62-of-64.safetensors",
+ "model.layers.61.self_attn.k_proj.bias": "model-62-of-64.safetensors",
+ "model.layers.61.self_attn.v_proj.weight": "model-62-of-64.safetensors",
+ "model.layers.61.self_attn.v_proj.bias": "model-62-of-64.safetensors",
+ "model.layers.61.self_attn.o_proj.weight": "model-62-of-64.safetensors",
+ "model.layers.61.mlp.gate_proj.weight": "model-62-of-64.safetensors",
+ "model.layers.61.mlp.down_proj.weight": "model-62-of-64.safetensors",
+ "model.layers.61.mlp.up_proj.weight": "model-62-of-64.safetensors",
+ "model.layers.61.input_layernorm.weight": "model-62-of-64.safetensors",
+ "model.layers.61.post_attention_layernorm.weight": "model-62-of-64.safetensors",
+ "model.layers.61.self_attn.rotary_emb.inv_freq": "model-62-of-64.safetensors",
+ "model.layers.56.self_attn.q_proj.weight": "model-57-of-64.safetensors",
+ "model.layers.56.self_attn.q_proj.bias": "model-57-of-64.safetensors",
+ "model.layers.56.self_attn.k_proj.weight": "model-57-of-64.safetensors",
+ "model.layers.56.self_attn.k_proj.bias": "model-57-of-64.safetensors",
+ "model.layers.56.self_attn.v_proj.weight": "model-57-of-64.safetensors",
+ "model.layers.56.self_attn.v_proj.bias": "model-57-of-64.safetensors",
+ "model.layers.56.self_attn.o_proj.weight": "model-57-of-64.safetensors",
+ "model.layers.56.mlp.gate_proj.weight": "model-57-of-64.safetensors",
+ "model.layers.56.mlp.down_proj.weight": "model-57-of-64.safetensors",
+ "model.layers.56.mlp.up_proj.weight": "model-57-of-64.safetensors",
+ "model.layers.56.input_layernorm.weight": "model-57-of-64.safetensors",
+ "model.layers.56.post_attention_layernorm.weight": "model-57-of-64.safetensors",
+ "model.layers.56.self_attn.rotary_emb.inv_freq": "model-57-of-64.safetensors",
+ "model.layers.54.self_attn.q_proj.weight": "model-55-of-64.safetensors",
+ "model.layers.54.self_attn.q_proj.bias": "model-55-of-64.safetensors",
+ "model.layers.54.self_attn.k_proj.weight": "model-55-of-64.safetensors",
+ "model.layers.54.self_attn.k_proj.bias": "model-55-of-64.safetensors",
+ "model.layers.54.self_attn.v_proj.weight": "model-55-of-64.safetensors",
+ "model.layers.54.self_attn.v_proj.bias": "model-55-of-64.safetensors",
+ "model.layers.54.self_attn.o_proj.weight": "model-55-of-64.safetensors",
+ "model.layers.54.mlp.gate_proj.weight": "model-55-of-64.safetensors",
+ "model.layers.54.mlp.down_proj.weight": "model-55-of-64.safetensors",
+ "model.layers.54.mlp.up_proj.weight": "model-55-of-64.safetensors",
+ "model.layers.54.input_layernorm.weight": "model-55-of-64.safetensors",
+ "model.layers.54.post_attention_layernorm.weight": "model-55-of-64.safetensors",
+ "model.layers.54.self_attn.rotary_emb.inv_freq": "model-55-of-64.safetensors",
+ "model.layers.62.self_attn.q_proj.weight": "model-63-of-64.safetensors",
+ "model.layers.62.self_attn.q_proj.bias": "model-63-of-64.safetensors",
+ "model.layers.62.self_attn.k_proj.weight": "model-63-of-64.safetensors",
+ "model.layers.62.self_attn.k_proj.bias": "model-63-of-64.safetensors",
+ "model.layers.62.self_attn.v_proj.weight": "model-63-of-64.safetensors",
+ "model.layers.62.self_attn.v_proj.bias": "model-63-of-64.safetensors",
+ "model.layers.62.self_attn.o_proj.weight": "model-63-of-64.safetensors",
+ "model.layers.62.mlp.gate_proj.weight": "model-63-of-64.safetensors",
+ "model.layers.62.mlp.down_proj.weight": "model-63-of-64.safetensors",
+ "model.layers.62.mlp.up_proj.weight": "model-63-of-64.safetensors",
+ "model.layers.62.input_layernorm.weight": "model-63-of-64.safetensors",
+ "model.layers.62.post_attention_layernorm.weight": "model-63-of-64.safetensors",
+ "model.layers.62.self_attn.rotary_emb.inv_freq": "model-63-of-64.safetensors",
+ "model.layers.57.self_attn.q_proj.weight": "model-58-of-64.safetensors",
+ "model.layers.57.self_attn.q_proj.bias": "model-58-of-64.safetensors",
+ "model.layers.57.self_attn.k_proj.weight": "model-58-of-64.safetensors",
+ "model.layers.57.self_attn.k_proj.bias": "model-58-of-64.safetensors",
+ "model.layers.57.self_attn.v_proj.weight": "model-58-of-64.safetensors",
+ "model.layers.57.self_attn.v_proj.bias": "model-58-of-64.safetensors",
+ "model.layers.57.self_attn.o_proj.weight": "model-58-of-64.safetensors",
+ "model.layers.57.mlp.gate_proj.weight": "model-58-of-64.safetensors",
+ "model.layers.57.mlp.down_proj.weight": "model-58-of-64.safetensors",
+ "model.layers.57.mlp.up_proj.weight": "model-58-of-64.safetensors",
+ "model.layers.57.input_layernorm.weight": "model-58-of-64.safetensors",
+ "model.layers.57.post_attention_layernorm.weight": "model-58-of-64.safetensors",
+ "model.layers.57.self_attn.rotary_emb.inv_freq": "model-58-of-64.safetensors",
+ "model.layers.53.self_attn.q_proj.weight": "model-54-of-64.safetensors",
+ "model.layers.53.self_attn.q_proj.bias": "model-54-of-64.safetensors",
+ "model.layers.53.self_attn.k_proj.weight": "model-54-of-64.safetensors",
+ "model.layers.53.self_attn.k_proj.bias": "model-54-of-64.safetensors",
+ "model.layers.53.self_attn.v_proj.weight": "model-54-of-64.safetensors",
+ "model.layers.53.self_attn.v_proj.bias": "model-54-of-64.safetensors",
+ "model.layers.53.self_attn.o_proj.weight": "model-54-of-64.safetensors",
+ "model.layers.53.mlp.gate_proj.weight": "model-54-of-64.safetensors",
+ "model.layers.53.mlp.down_proj.weight": "model-54-of-64.safetensors",
+ "model.layers.53.mlp.up_proj.weight": "model-54-of-64.safetensors",
+ "model.layers.53.input_layernorm.weight": "model-54-of-64.safetensors",
+ "model.layers.53.post_attention_layernorm.weight": "model-54-of-64.safetensors",
+ "model.layers.53.self_attn.rotary_emb.inv_freq": "model-54-of-64.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.self_attn.q_proj.bias": "model-1-of-64.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.self_attn.k_proj.bias": "model-1-of-64.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.self_attn.v_proj.bias": "model-1-of-64.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-1-of-64.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-1-of-64.safetensors",
+ "model.embed_tokens.weight": "model-1-of-64.safetensors",
+ "visual.blocks.0.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.0.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.0.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.0.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.0.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.0.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.0.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.0.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.0.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.0.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.0.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.0.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.1.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.1.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.1.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.1.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.1.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.1.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.1.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.1.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.1.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.1.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.1.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.1.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.10.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.10.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.10.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.10.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.10.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.10.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.10.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.10.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.10.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.10.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.10.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.10.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.11.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.11.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.11.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.11.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.11.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.11.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.11.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.11.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.11.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.11.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.11.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.11.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.12.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.12.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.12.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.12.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.12.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.12.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.12.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.12.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.12.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.12.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.12.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.12.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.13.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.13.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.13.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.13.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.13.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.13.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.13.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.13.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.13.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.13.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.13.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.13.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.14.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.14.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.14.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.14.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.14.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.14.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.14.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.14.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.14.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.14.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.14.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.14.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.15.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.15.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.15.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.15.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.15.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.15.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.15.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.15.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.15.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.15.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.15.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.15.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.16.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.16.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.16.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.16.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.16.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.16.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.16.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.16.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.16.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.16.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.16.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.16.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.17.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.17.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.17.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.17.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.17.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.17.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.17.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.17.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.17.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.17.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.17.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.17.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.18.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.18.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.18.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.18.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.18.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.18.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.18.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.18.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.18.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.18.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.18.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.18.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.19.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.19.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.19.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.19.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.19.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.19.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.19.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.19.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.19.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.19.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.19.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.19.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.2.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.2.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.2.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.2.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.2.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.2.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.2.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.2.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.2.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.2.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.2.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.2.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.20.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.20.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.20.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.20.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.20.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.20.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.20.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.20.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.20.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.20.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.20.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.20.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.21.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.21.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.21.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.21.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.21.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.21.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.21.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.21.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.21.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.21.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.21.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.21.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.22.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.22.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.22.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.22.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.22.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.22.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.22.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.22.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.22.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.22.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.22.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.22.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.23.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.23.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.23.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.23.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.23.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.23.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.23.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.23.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.23.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.23.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.23.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.23.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.24.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.24.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.24.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.24.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.24.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.24.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.24.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.24.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.24.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.24.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.24.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.24.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.25.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.25.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.25.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.25.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.25.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.25.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.25.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.25.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.25.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.25.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.25.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.25.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.26.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.26.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.26.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.26.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.26.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.26.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.26.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.26.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.26.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.26.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.26.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.26.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.27.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.27.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.27.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.27.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.27.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.27.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.27.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.27.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.27.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.27.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.27.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.27.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.28.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.28.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.28.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.28.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.28.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.28.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.28.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.28.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.28.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.28.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.28.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.28.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.29.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.29.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.29.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.29.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.29.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.29.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.29.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.29.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.29.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.29.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.29.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.29.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.3.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.3.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.3.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.3.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.3.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.3.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.3.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.3.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.3.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.3.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.3.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.3.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.30.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.30.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.30.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.30.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.30.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.30.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.30.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.30.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.30.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.30.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.30.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.30.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.31.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.31.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.31.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.31.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.31.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.31.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.31.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.31.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.31.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.31.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.31.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.31.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.4.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.4.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.4.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.4.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.4.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.4.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.4.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.4.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.4.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.4.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.4.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.4.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.5.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.5.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.5.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.5.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.5.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.5.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.5.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.5.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.5.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.5.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.5.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.5.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.6.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.6.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.6.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.6.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.6.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.6.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.6.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.6.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.6.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.6.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.6.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.6.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.7.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.7.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.7.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.7.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.7.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.7.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.7.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.7.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.7.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.7.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.7.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.7.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.8.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.8.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.8.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.8.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.8.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.8.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.8.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.8.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.8.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.8.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.8.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.8.norm2.weight": "model-1-of-64.safetensors",
+ "visual.blocks.9.attn.proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.9.attn.proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.9.attn.qkv.bias": "model-1-of-64.safetensors",
+ "visual.blocks.9.attn.qkv.weight": "model-1-of-64.safetensors",
+ "visual.blocks.9.mlp.down_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.9.mlp.down_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.9.mlp.gate_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.9.mlp.gate_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.9.mlp.up_proj.bias": "model-1-of-64.safetensors",
+ "visual.blocks.9.mlp.up_proj.weight": "model-1-of-64.safetensors",
+ "visual.blocks.9.norm1.weight": "model-1-of-64.safetensors",
+ "visual.blocks.9.norm2.weight": "model-1-of-64.safetensors",
+ "visual.merger.ln_q.weight": "model-1-of-64.safetensors",
+ "visual.merger.mlp.0.bias": "model-1-of-64.safetensors",
+ "visual.merger.mlp.0.weight": "model-1-of-64.safetensors",
+ "visual.merger.mlp.2.bias": "model-1-of-64.safetensors",
+ "visual.merger.mlp.2.weight": "model-1-of-64.safetensors",
+ "visual.patch_embed.proj.weight": "model-1-of-64.safetensors",
+ "model.layers.0.self_attn.rotary_emb.inv_freq": "model-1-of-64.safetensors",
+ "model.layers.55.self_attn.q_proj.weight": "model-56-of-64.safetensors",
+ "model.layers.55.self_attn.q_proj.bias": "model-56-of-64.safetensors",
+ "model.layers.55.self_attn.k_proj.weight": "model-56-of-64.safetensors",
+ "model.layers.55.self_attn.k_proj.bias": "model-56-of-64.safetensors",
+ "model.layers.55.self_attn.v_proj.weight": "model-56-of-64.safetensors",
+ "model.layers.55.self_attn.v_proj.bias": "model-56-of-64.safetensors",
+ "model.layers.55.self_attn.o_proj.weight": "model-56-of-64.safetensors",
+ "model.layers.55.mlp.gate_proj.weight": "model-56-of-64.safetensors",
+ "model.layers.55.mlp.down_proj.weight": "model-56-of-64.safetensors",
+ "model.layers.55.mlp.up_proj.weight": "model-56-of-64.safetensors",
+ "model.layers.55.input_layernorm.weight": "model-56-of-64.safetensors",
+ "model.layers.55.post_attention_layernorm.weight": "model-56-of-64.safetensors",
+ "model.layers.55.self_attn.rotary_emb.inv_freq": "model-56-of-64.safetensors",
+ "model.layers.52.self_attn.q_proj.weight": "model-53-of-64.safetensors",
+ "model.layers.52.self_attn.q_proj.bias": "model-53-of-64.safetensors",
+ "model.layers.52.self_attn.k_proj.weight": "model-53-of-64.safetensors",
+ "model.layers.52.self_attn.k_proj.bias": "model-53-of-64.safetensors",
+ "model.layers.52.self_attn.v_proj.weight": "model-53-of-64.safetensors",
+ "model.layers.52.self_attn.v_proj.bias": "model-53-of-64.safetensors",
+ "model.layers.52.self_attn.o_proj.weight": "model-53-of-64.safetensors",
+ "model.layers.52.mlp.gate_proj.weight": "model-53-of-64.safetensors",
+ "model.layers.52.mlp.down_proj.weight": "model-53-of-64.safetensors",
+ "model.layers.52.mlp.up_proj.weight": "model-53-of-64.safetensors",
+ "model.layers.52.input_layernorm.weight": "model-53-of-64.safetensors",
+ "model.layers.52.post_attention_layernorm.weight": "model-53-of-64.safetensors",
+ "model.layers.52.self_attn.rotary_emb.inv_freq": "model-53-of-64.safetensors",
+ "model.layers.51.self_attn.q_proj.weight": "model-52-of-64.safetensors",
+ "model.layers.51.self_attn.q_proj.bias": "model-52-of-64.safetensors",
+ "model.layers.51.self_attn.k_proj.weight": "model-52-of-64.safetensors",
+ "model.layers.51.self_attn.k_proj.bias": "model-52-of-64.safetensors",
+ "model.layers.51.self_attn.v_proj.weight": "model-52-of-64.safetensors",
+ "model.layers.51.self_attn.v_proj.bias": "model-52-of-64.safetensors",
+ "model.layers.51.self_attn.o_proj.weight": "model-52-of-64.safetensors",
+ "model.layers.51.mlp.gate_proj.weight": "model-52-of-64.safetensors",
+ "model.layers.51.mlp.down_proj.weight": "model-52-of-64.safetensors",
+ "model.layers.51.mlp.up_proj.weight": "model-52-of-64.safetensors",
+ "model.layers.51.input_layernorm.weight": "model-52-of-64.safetensors",
+ "model.layers.51.post_attention_layernorm.weight": "model-52-of-64.safetensors",
+ "model.layers.51.self_attn.rotary_emb.inv_freq": "model-52-of-64.safetensors",
+ "model.layers.50.self_attn.q_proj.weight": "model-51-of-64.safetensors",
+ "model.layers.50.self_attn.q_proj.bias": "model-51-of-64.safetensors",
+ "model.layers.50.self_attn.k_proj.weight": "model-51-of-64.safetensors",
+ "model.layers.50.self_attn.k_proj.bias": "model-51-of-64.safetensors",
+ "model.layers.50.self_attn.v_proj.weight": "model-51-of-64.safetensors",
+ "model.layers.50.self_attn.v_proj.bias": "model-51-of-64.safetensors",
+ "model.layers.50.self_attn.o_proj.weight": "model-51-of-64.safetensors",
+ "model.layers.50.mlp.gate_proj.weight": "model-51-of-64.safetensors",
+ "model.layers.50.mlp.down_proj.weight": "model-51-of-64.safetensors",
+ "model.layers.50.mlp.up_proj.weight": "model-51-of-64.safetensors",
+ "model.layers.50.input_layernorm.weight": "model-51-of-64.safetensors",
+ "model.layers.50.post_attention_layernorm.weight": "model-51-of-64.safetensors",
+ "model.layers.50.self_attn.rotary_emb.inv_freq": "model-51-of-64.safetensors",
+ "model.layers.49.self_attn.q_proj.weight": "model-50-of-64.safetensors",
+ "model.layers.49.self_attn.q_proj.bias": "model-50-of-64.safetensors",
+ "model.layers.49.self_attn.k_proj.weight": "model-50-of-64.safetensors",
+ "model.layers.49.self_attn.k_proj.bias": "model-50-of-64.safetensors",
+ "model.layers.49.self_attn.v_proj.weight": "model-50-of-64.safetensors",
+ "model.layers.49.self_attn.v_proj.bias": "model-50-of-64.safetensors",
+ "model.layers.49.self_attn.o_proj.weight": "model-50-of-64.safetensors",
+ "model.layers.49.mlp.gate_proj.weight": "model-50-of-64.safetensors",
+ "model.layers.49.mlp.down_proj.weight": "model-50-of-64.safetensors",
+ "model.layers.49.mlp.up_proj.weight": "model-50-of-64.safetensors",
+ "model.layers.49.input_layernorm.weight": "model-50-of-64.safetensors",
+ "model.layers.49.post_attention_layernorm.weight": "model-50-of-64.safetensors",
+ "model.layers.49.self_attn.rotary_emb.inv_freq": "model-50-of-64.safetensors",
+ "model.layers.48.self_attn.q_proj.weight": "model-49-of-64.safetensors",
+ "model.layers.48.self_attn.q_proj.bias": "model-49-of-64.safetensors",
+ "model.layers.48.self_attn.k_proj.weight": "model-49-of-64.safetensors",
+ "model.layers.48.self_attn.k_proj.bias": "model-49-of-64.safetensors",
+ "model.layers.48.self_attn.v_proj.weight": "model-49-of-64.safetensors",
+ "model.layers.48.self_attn.v_proj.bias": "model-49-of-64.safetensors",
+ "model.layers.48.self_attn.o_proj.weight": "model-49-of-64.safetensors",
+ "model.layers.48.mlp.gate_proj.weight": "model-49-of-64.safetensors",
+ "model.layers.48.mlp.down_proj.weight": "model-49-of-64.safetensors",
+ "model.layers.48.mlp.up_proj.weight": "model-49-of-64.safetensors",
+ "model.layers.48.input_layernorm.weight": "model-49-of-64.safetensors",
+ "model.layers.48.post_attention_layernorm.weight": "model-49-of-64.safetensors",
+ "model.layers.48.self_attn.rotary_emb.inv_freq": "model-49-of-64.safetensors",
+ "model.layers.47.self_attn.q_proj.weight": "model-48-of-64.safetensors",
+ "model.layers.47.self_attn.q_proj.bias": "model-48-of-64.safetensors",
+ "model.layers.47.self_attn.k_proj.weight": "model-48-of-64.safetensors",
+ "model.layers.47.self_attn.k_proj.bias": "model-48-of-64.safetensors",
+ "model.layers.47.self_attn.v_proj.weight": "model-48-of-64.safetensors",
+ "model.layers.47.self_attn.v_proj.bias": "model-48-of-64.safetensors",
+ "model.layers.47.self_attn.o_proj.weight": "model-48-of-64.safetensors",
+ "model.layers.47.mlp.gate_proj.weight": "model-48-of-64.safetensors",
+ "model.layers.47.mlp.down_proj.weight": "model-48-of-64.safetensors",
+ "model.layers.47.mlp.up_proj.weight": "model-48-of-64.safetensors",
+ "model.layers.47.input_layernorm.weight": "model-48-of-64.safetensors",
+ "model.layers.47.post_attention_layernorm.weight": "model-48-of-64.safetensors",
+ "model.layers.47.self_attn.rotary_emb.inv_freq": "model-48-of-64.safetensors",
+ "model.layers.46.self_attn.q_proj.weight": "model-47-of-64.safetensors",
+ "model.layers.46.self_attn.q_proj.bias": "model-47-of-64.safetensors",
+ "model.layers.46.self_attn.k_proj.weight": "model-47-of-64.safetensors",
+ "model.layers.46.self_attn.k_proj.bias": "model-47-of-64.safetensors",
+ "model.layers.46.self_attn.v_proj.weight": "model-47-of-64.safetensors",
+ "model.layers.46.self_attn.v_proj.bias": "model-47-of-64.safetensors",
+ "model.layers.46.self_attn.o_proj.weight": "model-47-of-64.safetensors",
+ "model.layers.46.mlp.gate_proj.weight": "model-47-of-64.safetensors",
+ "model.layers.46.mlp.down_proj.weight": "model-47-of-64.safetensors",
+ "model.layers.46.mlp.up_proj.weight": "model-47-of-64.safetensors",
+ "model.layers.46.input_layernorm.weight": "model-47-of-64.safetensors",
+ "model.layers.46.post_attention_layernorm.weight": "model-47-of-64.safetensors",
+ "model.layers.46.self_attn.rotary_emb.inv_freq": "model-47-of-64.safetensors",
+ "model.layers.44.self_attn.q_proj.weight": "model-45-of-64.safetensors",
+ "model.layers.44.self_attn.q_proj.bias": "model-45-of-64.safetensors",
+ "model.layers.44.self_attn.k_proj.weight": "model-45-of-64.safetensors",
+ "model.layers.44.self_attn.k_proj.bias": "model-45-of-64.safetensors",
+ "model.layers.44.self_attn.v_proj.weight": "model-45-of-64.safetensors",
+ "model.layers.44.self_attn.v_proj.bias": "model-45-of-64.safetensors",
+ "model.layers.44.self_attn.o_proj.weight": "model-45-of-64.safetensors",
+ "model.layers.44.mlp.gate_proj.weight": "model-45-of-64.safetensors",
+ "model.layers.44.mlp.down_proj.weight": "model-45-of-64.safetensors",
+ "model.layers.44.mlp.up_proj.weight": "model-45-of-64.safetensors",
+ "model.layers.44.input_layernorm.weight": "model-45-of-64.safetensors",
+ "model.layers.44.post_attention_layernorm.weight": "model-45-of-64.safetensors",
+ "model.layers.44.self_attn.rotary_emb.inv_freq": "model-45-of-64.safetensors",
+ "model.layers.45.self_attn.q_proj.weight": "model-46-of-64.safetensors",
+ "model.layers.45.self_attn.q_proj.bias": "model-46-of-64.safetensors",
+ "model.layers.45.self_attn.k_proj.weight": "model-46-of-64.safetensors",
+ "model.layers.45.self_attn.k_proj.bias": "model-46-of-64.safetensors",
+ "model.layers.45.self_attn.v_proj.weight": "model-46-of-64.safetensors",
+ "model.layers.45.self_attn.v_proj.bias": "model-46-of-64.safetensors",
+ "model.layers.45.self_attn.o_proj.weight": "model-46-of-64.safetensors",
+ "model.layers.45.mlp.gate_proj.weight": "model-46-of-64.safetensors",
+ "model.layers.45.mlp.down_proj.weight": "model-46-of-64.safetensors",
+ "model.layers.45.mlp.up_proj.weight": "model-46-of-64.safetensors",
+ "model.layers.45.input_layernorm.weight": "model-46-of-64.safetensors",
+ "model.layers.45.post_attention_layernorm.weight": "model-46-of-64.safetensors",
+ "model.layers.45.self_attn.rotary_emb.inv_freq": "model-46-of-64.safetensors",
+ "model.layers.43.self_attn.q_proj.weight": "model-44-of-64.safetensors",
+ "model.layers.43.self_attn.q_proj.bias": "model-44-of-64.safetensors",
+ "model.layers.43.self_attn.k_proj.weight": "model-44-of-64.safetensors",
+ "model.layers.43.self_attn.k_proj.bias": "model-44-of-64.safetensors",
+ "model.layers.43.self_attn.v_proj.weight": "model-44-of-64.safetensors",
+ "model.layers.43.self_attn.v_proj.bias": "model-44-of-64.safetensors",
+ "model.layers.43.self_attn.o_proj.weight": "model-44-of-64.safetensors",
+ "model.layers.43.mlp.gate_proj.weight": "model-44-of-64.safetensors",
+ "model.layers.43.mlp.down_proj.weight": "model-44-of-64.safetensors",
+ "model.layers.43.mlp.up_proj.weight": "model-44-of-64.safetensors",
+ "model.layers.43.input_layernorm.weight": "model-44-of-64.safetensors",
+ "model.layers.43.post_attention_layernorm.weight": "model-44-of-64.safetensors",
+ "model.layers.43.self_attn.rotary_emb.inv_freq": "model-44-of-64.safetensors",
+ "model.layers.42.self_attn.q_proj.weight": "model-43-of-64.safetensors",
+ "model.layers.42.self_attn.q_proj.bias": "model-43-of-64.safetensors",
+ "model.layers.42.self_attn.k_proj.weight": "model-43-of-64.safetensors",
+ "model.layers.42.self_attn.k_proj.bias": "model-43-of-64.safetensors",
+ "model.layers.42.self_attn.v_proj.weight": "model-43-of-64.safetensors",
+ "model.layers.42.self_attn.v_proj.bias": "model-43-of-64.safetensors",
+ "model.layers.42.self_attn.o_proj.weight": "model-43-of-64.safetensors",
+ "model.layers.42.mlp.gate_proj.weight": "model-43-of-64.safetensors",
+ "model.layers.42.mlp.down_proj.weight": "model-43-of-64.safetensors",
+ "model.layers.42.mlp.up_proj.weight": "model-43-of-64.safetensors",
+ "model.layers.42.input_layernorm.weight": "model-43-of-64.safetensors",
+ "model.layers.42.post_attention_layernorm.weight": "model-43-of-64.safetensors",
+ "model.layers.42.self_attn.rotary_emb.inv_freq": "model-43-of-64.safetensors",
+ "model.layers.41.self_attn.q_proj.weight": "model-42-of-64.safetensors",
+ "model.layers.41.self_attn.q_proj.bias": "model-42-of-64.safetensors",
+ "model.layers.41.self_attn.k_proj.weight": "model-42-of-64.safetensors",
+ "model.layers.41.self_attn.k_proj.bias": "model-42-of-64.safetensors",
+ "model.layers.41.self_attn.v_proj.weight": "model-42-of-64.safetensors",
+ "model.layers.41.self_attn.v_proj.bias": "model-42-of-64.safetensors",
+ "model.layers.41.self_attn.o_proj.weight": "model-42-of-64.safetensors",
+ "model.layers.41.mlp.gate_proj.weight": "model-42-of-64.safetensors",
+ "model.layers.41.mlp.down_proj.weight": "model-42-of-64.safetensors",
+ "model.layers.41.mlp.up_proj.weight": "model-42-of-64.safetensors",
+ "model.layers.41.input_layernorm.weight": "model-42-of-64.safetensors",
+ "model.layers.41.post_attention_layernorm.weight": "model-42-of-64.safetensors",
+ "model.layers.41.self_attn.rotary_emb.inv_freq": "model-42-of-64.safetensors",
+ "model.layers.39.self_attn.q_proj.weight": "model-40-of-64.safetensors",
+ "model.layers.39.self_attn.q_proj.bias": "model-40-of-64.safetensors",
+ "model.layers.39.self_attn.k_proj.weight": "model-40-of-64.safetensors",
+ "model.layers.39.self_attn.k_proj.bias": "model-40-of-64.safetensors",
+ "model.layers.39.self_attn.v_proj.weight": "model-40-of-64.safetensors",
+ "model.layers.39.self_attn.v_proj.bias": "model-40-of-64.safetensors",
+ "model.layers.39.self_attn.o_proj.weight": "model-40-of-64.safetensors",
+ "model.layers.39.mlp.gate_proj.weight": "model-40-of-64.safetensors",
+ "model.layers.39.mlp.down_proj.weight": "model-40-of-64.safetensors",
+ "model.layers.39.mlp.up_proj.weight": "model-40-of-64.safetensors",
+ "model.layers.39.input_layernorm.weight": "model-40-of-64.safetensors",
+ "model.layers.39.post_attention_layernorm.weight": "model-40-of-64.safetensors",
+ "model.layers.39.self_attn.rotary_emb.inv_freq": "model-40-of-64.safetensors",
+ "model.layers.40.self_attn.q_proj.weight": "model-41-of-64.safetensors",
+ "model.layers.40.self_attn.q_proj.bias": "model-41-of-64.safetensors",
+ "model.layers.40.self_attn.k_proj.weight": "model-41-of-64.safetensors",
+ "model.layers.40.self_attn.k_proj.bias": "model-41-of-64.safetensors",
+ "model.layers.40.self_attn.v_proj.weight": "model-41-of-64.safetensors",
+ "model.layers.40.self_attn.v_proj.bias": "model-41-of-64.safetensors",
+ "model.layers.40.self_attn.o_proj.weight": "model-41-of-64.safetensors",
+ "model.layers.40.mlp.gate_proj.weight": "model-41-of-64.safetensors",
+ "model.layers.40.mlp.down_proj.weight": "model-41-of-64.safetensors",
+ "model.layers.40.mlp.up_proj.weight": "model-41-of-64.safetensors",
+ "model.layers.40.input_layernorm.weight": "model-41-of-64.safetensors",
+ "model.layers.40.post_attention_layernorm.weight": "model-41-of-64.safetensors",
+ "model.layers.40.self_attn.rotary_emb.inv_freq": "model-41-of-64.safetensors",
+ "model.layers.38.self_attn.q_proj.weight": "model-39-of-64.safetensors",
+ "model.layers.38.self_attn.q_proj.bias": "model-39-of-64.safetensors",
+ "model.layers.38.self_attn.k_proj.weight": "model-39-of-64.safetensors",
+ "model.layers.38.self_attn.k_proj.bias": "model-39-of-64.safetensors",
+ "model.layers.38.self_attn.v_proj.weight": "model-39-of-64.safetensors",
+ "model.layers.38.self_attn.v_proj.bias": "model-39-of-64.safetensors",
+ "model.layers.38.self_attn.o_proj.weight": "model-39-of-64.safetensors",
+ "model.layers.38.mlp.gate_proj.weight": "model-39-of-64.safetensors",
+ "model.layers.38.mlp.down_proj.weight": "model-39-of-64.safetensors",
+ "model.layers.38.mlp.up_proj.weight": "model-39-of-64.safetensors",
+ "model.layers.38.input_layernorm.weight": "model-39-of-64.safetensors",
+ "model.layers.38.post_attention_layernorm.weight": "model-39-of-64.safetensors",
+ "model.layers.38.self_attn.rotary_emb.inv_freq": "model-39-of-64.safetensors",
+ "model.layers.37.self_attn.q_proj.weight": "model-38-of-64.safetensors",
+ "model.layers.37.self_attn.q_proj.bias": "model-38-of-64.safetensors",
+ "model.layers.37.self_attn.k_proj.weight": "model-38-of-64.safetensors",
+ "model.layers.37.self_attn.k_proj.bias": "model-38-of-64.safetensors",
+ "model.layers.37.self_attn.v_proj.weight": "model-38-of-64.safetensors",
+ "model.layers.37.self_attn.v_proj.bias": "model-38-of-64.safetensors",
+ "model.layers.37.self_attn.o_proj.weight": "model-38-of-64.safetensors",
+ "model.layers.37.mlp.gate_proj.weight": "model-38-of-64.safetensors",
+ "model.layers.37.mlp.down_proj.weight": "model-38-of-64.safetensors",
+ "model.layers.37.mlp.up_proj.weight": "model-38-of-64.safetensors",
+ "model.layers.37.input_layernorm.weight": "model-38-of-64.safetensors",
+ "model.layers.37.post_attention_layernorm.weight": "model-38-of-64.safetensors",
+ "model.layers.37.self_attn.rotary_emb.inv_freq": "model-38-of-64.safetensors",
+ "model.layers.35.self_attn.q_proj.weight": "model-36-of-64.safetensors",
+ "model.layers.35.self_attn.q_proj.bias": "model-36-of-64.safetensors",
+ "model.layers.35.self_attn.k_proj.weight": "model-36-of-64.safetensors",
+ "model.layers.35.self_attn.k_proj.bias": "model-36-of-64.safetensors",
+ "model.layers.35.self_attn.v_proj.weight": "model-36-of-64.safetensors",
+ "model.layers.35.self_attn.v_proj.bias": "model-36-of-64.safetensors",
+ "model.layers.35.self_attn.o_proj.weight": "model-36-of-64.safetensors",
+ "model.layers.35.mlp.gate_proj.weight": "model-36-of-64.safetensors",
+ "model.layers.35.mlp.down_proj.weight": "model-36-of-64.safetensors",
+ "model.layers.35.mlp.up_proj.weight": "model-36-of-64.safetensors",
+ "model.layers.35.input_layernorm.weight": "model-36-of-64.safetensors",
+ "model.layers.35.post_attention_layernorm.weight": "model-36-of-64.safetensors",
+ "model.layers.35.self_attn.rotary_emb.inv_freq": "model-36-of-64.safetensors",
+ "model.layers.36.self_attn.q_proj.weight": "model-37-of-64.safetensors",
+ "model.layers.36.self_attn.q_proj.bias": "model-37-of-64.safetensors",
+ "model.layers.36.self_attn.k_proj.weight": "model-37-of-64.safetensors",
+ "model.layers.36.self_attn.k_proj.bias": "model-37-of-64.safetensors",
+ "model.layers.36.self_attn.v_proj.weight": "model-37-of-64.safetensors",
+ "model.layers.36.self_attn.v_proj.bias": "model-37-of-64.safetensors",
+ "model.layers.36.self_attn.o_proj.weight": "model-37-of-64.safetensors",
+ "model.layers.36.mlp.gate_proj.weight": "model-37-of-64.safetensors",
+ "model.layers.36.mlp.down_proj.weight": "model-37-of-64.safetensors",
+ "model.layers.36.mlp.up_proj.weight": "model-37-of-64.safetensors",
+ "model.layers.36.input_layernorm.weight": "model-37-of-64.safetensors",
+ "model.layers.36.post_attention_layernorm.weight": "model-37-of-64.safetensors",
+ "model.layers.36.self_attn.rotary_emb.inv_freq": "model-37-of-64.safetensors",
+ "model.layers.34.self_attn.q_proj.weight": "model-35-of-64.safetensors",
+ "model.layers.34.self_attn.q_proj.bias": "model-35-of-64.safetensors",
+ "model.layers.34.self_attn.k_proj.weight": "model-35-of-64.safetensors",
+ "model.layers.34.self_attn.k_proj.bias": "model-35-of-64.safetensors",
+ "model.layers.34.self_attn.v_proj.weight": "model-35-of-64.safetensors",
+ "model.layers.34.self_attn.v_proj.bias": "model-35-of-64.safetensors",
+ "model.layers.34.self_attn.o_proj.weight": "model-35-of-64.safetensors",
+ "model.layers.34.mlp.gate_proj.weight": "model-35-of-64.safetensors",
+ "model.layers.34.mlp.down_proj.weight": "model-35-of-64.safetensors",
+ "model.layers.34.mlp.up_proj.weight": "model-35-of-64.safetensors",
+ "model.layers.34.input_layernorm.weight": "model-35-of-64.safetensors",
+ "model.layers.34.post_attention_layernorm.weight": "model-35-of-64.safetensors",
+ "model.layers.34.self_attn.rotary_emb.inv_freq": "model-35-of-64.safetensors",
+ "model.layers.29.self_attn.q_proj.weight": "model-30-of-64.safetensors",
+ "model.layers.29.self_attn.q_proj.bias": "model-30-of-64.safetensors",
+ "model.layers.29.self_attn.k_proj.weight": "model-30-of-64.safetensors",
+ "model.layers.29.self_attn.k_proj.bias": "model-30-of-64.safetensors",
+ "model.layers.29.self_attn.v_proj.weight": "model-30-of-64.safetensors",
+ "model.layers.29.self_attn.v_proj.bias": "model-30-of-64.safetensors",
+ "model.layers.29.self_attn.o_proj.weight": "model-30-of-64.safetensors",
+ "model.layers.29.mlp.gate_proj.weight": "model-30-of-64.safetensors",
+ "model.layers.29.mlp.down_proj.weight": "model-30-of-64.safetensors",
+ "model.layers.29.mlp.up_proj.weight": "model-30-of-64.safetensors",
+ "model.layers.29.input_layernorm.weight": "model-30-of-64.safetensors",
+ "model.layers.29.post_attention_layernorm.weight": "model-30-of-64.safetensors",
+ "model.layers.29.self_attn.rotary_emb.inv_freq": "model-30-of-64.safetensors",
+ "model.layers.31.self_attn.q_proj.weight": "model-32-of-64.safetensors",
+ "model.layers.31.self_attn.q_proj.bias": "model-32-of-64.safetensors",
+ "model.layers.31.self_attn.k_proj.weight": "model-32-of-64.safetensors",
+ "model.layers.31.self_attn.k_proj.bias": "model-32-of-64.safetensors",
+ "model.layers.31.self_attn.v_proj.weight": "model-32-of-64.safetensors",
+ "model.layers.31.self_attn.v_proj.bias": "model-32-of-64.safetensors",
+ "model.layers.31.self_attn.o_proj.weight": "model-32-of-64.safetensors",
+ "model.layers.31.mlp.gate_proj.weight": "model-32-of-64.safetensors",
+ "model.layers.31.mlp.down_proj.weight": "model-32-of-64.safetensors",
+ "model.layers.31.mlp.up_proj.weight": "model-32-of-64.safetensors",
+ "model.layers.31.input_layernorm.weight": "model-32-of-64.safetensors",
+ "model.layers.31.post_attention_layernorm.weight": "model-32-of-64.safetensors",
+ "model.layers.31.self_attn.rotary_emb.inv_freq": "model-32-of-64.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-12-of-64.safetensors",
+ "model.layers.11.self_attn.q_proj.bias": "model-12-of-64.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-12-of-64.safetensors",
+ "model.layers.11.self_attn.k_proj.bias": "model-12-of-64.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-12-of-64.safetensors",
+ "model.layers.11.self_attn.v_proj.bias": "model-12-of-64.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-12-of-64.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-12-of-64.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-12-of-64.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-12-of-64.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-12-of-64.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-12-of-64.safetensors",
+ "model.layers.11.self_attn.rotary_emb.inv_freq": "model-12-of-64.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-18-of-64.safetensors",
+ "model.layers.17.self_attn.q_proj.bias": "model-18-of-64.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-18-of-64.safetensors",
+ "model.layers.17.self_attn.k_proj.bias": "model-18-of-64.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-18-of-64.safetensors",
+ "model.layers.17.self_attn.v_proj.bias": "model-18-of-64.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-18-of-64.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-18-of-64.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-18-of-64.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-18-of-64.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-18-of-64.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-18-of-64.safetensors",
+ "model.layers.17.self_attn.rotary_emb.inv_freq": "model-18-of-64.safetensors",
+ "model.layers.33.self_attn.q_proj.weight": "model-34-of-64.safetensors",
+ "model.layers.33.self_attn.q_proj.bias": "model-34-of-64.safetensors",
+ "model.layers.33.self_attn.k_proj.weight": "model-34-of-64.safetensors",
+ "model.layers.33.self_attn.k_proj.bias": "model-34-of-64.safetensors",
+ "model.layers.33.self_attn.v_proj.weight": "model-34-of-64.safetensors",
+ "model.layers.33.self_attn.v_proj.bias": "model-34-of-64.safetensors",
+ "model.layers.33.self_attn.o_proj.weight": "model-34-of-64.safetensors",
+ "model.layers.33.mlp.gate_proj.weight": "model-34-of-64.safetensors",
+ "model.layers.33.mlp.down_proj.weight": "model-34-of-64.safetensors",
+ "model.layers.33.mlp.up_proj.weight": "model-34-of-64.safetensors",
+ "model.layers.33.input_layernorm.weight": "model-34-of-64.safetensors",
+ "model.layers.33.post_attention_layernorm.weight": "model-34-of-64.safetensors",
+ "model.layers.33.self_attn.rotary_emb.inv_freq": "model-34-of-64.safetensors",
+ "model.layers.7.self_attn.q_proj.weight": "model-8-of-64.safetensors",
+ "model.layers.7.self_attn.q_proj.bias": "model-8-of-64.safetensors",
+ "model.layers.7.self_attn.k_proj.weight": "model-8-of-64.safetensors",
+ "model.layers.7.self_attn.k_proj.bias": "model-8-of-64.safetensors",
+ "model.layers.7.self_attn.v_proj.weight": "model-8-of-64.safetensors",
+ "model.layers.7.self_attn.v_proj.bias": "model-8-of-64.safetensors",
+ "model.layers.7.self_attn.o_proj.weight": "model-8-of-64.safetensors",
+ "model.layers.7.mlp.gate_proj.weight": "model-8-of-64.safetensors",
+ "model.layers.7.mlp.down_proj.weight": "model-8-of-64.safetensors",
+ "model.layers.7.mlp.up_proj.weight": "model-8-of-64.safetensors",
+ "model.layers.7.input_layernorm.weight": "model-8-of-64.safetensors",
+ "model.layers.7.post_attention_layernorm.weight": "model-8-of-64.safetensors",
+ "model.layers.7.self_attn.rotary_emb.inv_freq": "model-8-of-64.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model-26-of-64.safetensors",
+ "model.layers.25.self_attn.q_proj.bias": "model-26-of-64.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model-26-of-64.safetensors",
+ "model.layers.25.self_attn.k_proj.bias": "model-26-of-64.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model-26-of-64.safetensors",
+ "model.layers.25.self_attn.v_proj.bias": "model-26-of-64.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model-26-of-64.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model-26-of-64.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model-26-of-64.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model-26-of-64.safetensors",
+ "model.layers.25.input_layernorm.weight": "model-26-of-64.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model-26-of-64.safetensors",
+ "model.layers.25.self_attn.rotary_emb.inv_freq": "model-26-of-64.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-17-of-64.safetensors",
+ "model.layers.16.self_attn.q_proj.bias": "model-17-of-64.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-17-of-64.safetensors",
+ "model.layers.16.self_attn.k_proj.bias": "model-17-of-64.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-17-of-64.safetensors",
+ "model.layers.16.self_attn.v_proj.bias": "model-17-of-64.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-17-of-64.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-17-of-64.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-17-of-64.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-17-of-64.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-17-of-64.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-17-of-64.safetensors",
+ "model.layers.16.self_attn.rotary_emb.inv_freq": "model-17-of-64.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-16-of-64.safetensors",
+ "model.layers.15.self_attn.q_proj.bias": "model-16-of-64.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-16-of-64.safetensors",
+ "model.layers.15.self_attn.k_proj.bias": "model-16-of-64.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-16-of-64.safetensors",
+ "model.layers.15.self_attn.v_proj.bias": "model-16-of-64.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-16-of-64.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-16-of-64.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-16-of-64.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-16-of-64.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-16-of-64.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-16-of-64.safetensors",
+ "model.layers.15.self_attn.rotary_emb.inv_freq": "model-16-of-64.safetensors",
+ "model.layers.30.self_attn.q_proj.weight": "model-31-of-64.safetensors",
+ "model.layers.30.self_attn.q_proj.bias": "model-31-of-64.safetensors",
+ "model.layers.30.self_attn.k_proj.weight": "model-31-of-64.safetensors",
+ "model.layers.30.self_attn.k_proj.bias": "model-31-of-64.safetensors",
+ "model.layers.30.self_attn.v_proj.weight": "model-31-of-64.safetensors",
+ "model.layers.30.self_attn.v_proj.bias": "model-31-of-64.safetensors",
+ "model.layers.30.self_attn.o_proj.weight": "model-31-of-64.safetensors",
+ "model.layers.30.mlp.gate_proj.weight": "model-31-of-64.safetensors",
+ "model.layers.30.mlp.down_proj.weight": "model-31-of-64.safetensors",
+ "model.layers.30.mlp.up_proj.weight": "model-31-of-64.safetensors",
+ "model.layers.30.input_layernorm.weight": "model-31-of-64.safetensors",
+ "model.layers.30.post_attention_layernorm.weight": "model-31-of-64.safetensors",
+ "model.layers.30.self_attn.rotary_emb.inv_freq": "model-31-of-64.safetensors",
+ "model.layers.28.self_attn.q_proj.weight": "model-29-of-64.safetensors",
+ "model.layers.28.self_attn.q_proj.bias": "model-29-of-64.safetensors",
+ "model.layers.28.self_attn.k_proj.weight": "model-29-of-64.safetensors",
+ "model.layers.28.self_attn.k_proj.bias": "model-29-of-64.safetensors",
+ "model.layers.28.self_attn.v_proj.weight": "model-29-of-64.safetensors",
+ "model.layers.28.self_attn.v_proj.bias": "model-29-of-64.safetensors",
+ "model.layers.28.self_attn.o_proj.weight": "model-29-of-64.safetensors",
+ "model.layers.28.mlp.gate_proj.weight": "model-29-of-64.safetensors",
+ "model.layers.28.mlp.down_proj.weight": "model-29-of-64.safetensors",
+ "model.layers.28.mlp.up_proj.weight": "model-29-of-64.safetensors",
+ "model.layers.28.input_layernorm.weight": "model-29-of-64.safetensors",
+ "model.layers.28.post_attention_layernorm.weight": "model-29-of-64.safetensors",
+ "model.layers.28.self_attn.rotary_emb.inv_freq": "model-29-of-64.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-15-of-64.safetensors",
+ "model.layers.14.self_attn.q_proj.bias": "model-15-of-64.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-15-of-64.safetensors",
+ "model.layers.14.self_attn.k_proj.bias": "model-15-of-64.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-15-of-64.safetensors",
+ "model.layers.14.self_attn.v_proj.bias": "model-15-of-64.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-15-of-64.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-15-of-64.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-15-of-64.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-15-of-64.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-15-of-64.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-15-of-64.safetensors",
+ "model.layers.14.self_attn.rotary_emb.inv_freq": "model-15-of-64.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-13-of-64.safetensors",
+ "model.layers.12.self_attn.q_proj.bias": "model-13-of-64.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-13-of-64.safetensors",
+ "model.layers.12.self_attn.k_proj.bias": "model-13-of-64.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-13-of-64.safetensors",
+ "model.layers.12.self_attn.v_proj.bias": "model-13-of-64.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-13-of-64.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-13-of-64.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-13-of-64.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-13-of-64.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-13-of-64.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-13-of-64.safetensors",
+ "model.layers.12.self_attn.rotary_emb.inv_freq": "model-13-of-64.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-20-of-64.safetensors",
+ "model.layers.19.self_attn.q_proj.bias": "model-20-of-64.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-20-of-64.safetensors",
+ "model.layers.19.self_attn.k_proj.bias": "model-20-of-64.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-20-of-64.safetensors",
+ "model.layers.19.self_attn.v_proj.bias": "model-20-of-64.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-20-of-64.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-20-of-64.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-20-of-64.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-20-of-64.safetensors",
+ "model.layers.19.input_layernorm.weight": "model-20-of-64.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-20-of-64.safetensors",
+ "model.layers.19.self_attn.rotary_emb.inv_freq": "model-20-of-64.safetensors",
+ "model.layers.27.self_attn.q_proj.weight": "model-28-of-64.safetensors",
+ "model.layers.27.self_attn.q_proj.bias": "model-28-of-64.safetensors",
+ "model.layers.27.self_attn.k_proj.weight": "model-28-of-64.safetensors",
+ "model.layers.27.self_attn.k_proj.bias": "model-28-of-64.safetensors",
+ "model.layers.27.self_attn.v_proj.weight": "model-28-of-64.safetensors",
+ "model.layers.27.self_attn.v_proj.bias": "model-28-of-64.safetensors",
+ "model.layers.27.self_attn.o_proj.weight": "model-28-of-64.safetensors",
+ "model.layers.27.mlp.gate_proj.weight": "model-28-of-64.safetensors",
+ "model.layers.27.mlp.down_proj.weight": "model-28-of-64.safetensors",
+ "model.layers.27.mlp.up_proj.weight": "model-28-of-64.safetensors",
+ "model.layers.27.input_layernorm.weight": "model-28-of-64.safetensors",
+ "model.layers.27.post_attention_layernorm.weight": "model-28-of-64.safetensors",
+ "model.layers.27.self_attn.rotary_emb.inv_freq": "model-28-of-64.safetensors",
+ "model.layers.26.self_attn.q_proj.weight": "model-27-of-64.safetensors",
+ "model.layers.26.self_attn.q_proj.bias": "model-27-of-64.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model-27-of-64.safetensors",
+ "model.layers.26.self_attn.k_proj.bias": "model-27-of-64.safetensors",
+ "model.layers.26.self_attn.v_proj.weight": "model-27-of-64.safetensors",
+ "model.layers.26.self_attn.v_proj.bias": "model-27-of-64.safetensors",
+ "model.layers.26.self_attn.o_proj.weight": "model-27-of-64.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model-27-of-64.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model-27-of-64.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model-27-of-64.safetensors",
+ "model.layers.26.input_layernorm.weight": "model-27-of-64.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model-27-of-64.safetensors",
+ "model.layers.26.self_attn.rotary_emb.inv_freq": "model-27-of-64.safetensors",
+ "model.layers.32.self_attn.q_proj.weight": "model-33-of-64.safetensors",
+ "model.layers.32.self_attn.q_proj.bias": "model-33-of-64.safetensors",
+ "model.layers.32.self_attn.k_proj.weight": "model-33-of-64.safetensors",
+ "model.layers.32.self_attn.k_proj.bias": "model-33-of-64.safetensors",
+ "model.layers.32.self_attn.v_proj.weight": "model-33-of-64.safetensors",
+ "model.layers.32.self_attn.v_proj.bias": "model-33-of-64.safetensors",
+ "model.layers.32.self_attn.o_proj.weight": "model-33-of-64.safetensors",
+ "model.layers.32.mlp.gate_proj.weight": "model-33-of-64.safetensors",
+ "model.layers.32.mlp.down_proj.weight": "model-33-of-64.safetensors",
+ "model.layers.32.mlp.up_proj.weight": "model-33-of-64.safetensors",
+ "model.layers.32.input_layernorm.weight": "model-33-of-64.safetensors",
+ "model.layers.32.post_attention_layernorm.weight": "model-33-of-64.safetensors",
+ "model.layers.32.self_attn.rotary_emb.inv_freq": "model-33-of-64.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-23-of-64.safetensors",
+ "model.layers.22.self_attn.q_proj.bias": "model-23-of-64.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-23-of-64.safetensors",
+ "model.layers.22.self_attn.k_proj.bias": "model-23-of-64.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-23-of-64.safetensors",
+ "model.layers.22.self_attn.v_proj.bias": "model-23-of-64.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-23-of-64.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-23-of-64.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-23-of-64.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-23-of-64.safetensors",
+ "model.layers.22.input_layernorm.weight": "model-23-of-64.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-23-of-64.safetensors",
+ "model.layers.22.self_attn.rotary_emb.inv_freq": "model-23-of-64.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-21-of-64.safetensors",
+ "model.layers.20.self_attn.q_proj.bias": "model-21-of-64.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-21-of-64.safetensors",
+ "model.layers.20.self_attn.k_proj.bias": "model-21-of-64.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-21-of-64.safetensors",
+ "model.layers.20.self_attn.v_proj.bias": "model-21-of-64.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-21-of-64.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-21-of-64.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-21-of-64.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-21-of-64.safetensors",
+ "model.layers.20.input_layernorm.weight": "model-21-of-64.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-21-of-64.safetensors",
+ "model.layers.20.self_attn.rotary_emb.inv_freq": "model-21-of-64.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-11-of-64.safetensors",
+ "model.layers.10.self_attn.q_proj.bias": "model-11-of-64.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-11-of-64.safetensors",
+ "model.layers.10.self_attn.k_proj.bias": "model-11-of-64.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-11-of-64.safetensors",
+ "model.layers.10.self_attn.v_proj.bias": "model-11-of-64.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-11-of-64.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-11-of-64.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-11-of-64.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-11-of-64.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-11-of-64.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-11-of-64.safetensors",
+ "model.layers.10.self_attn.rotary_emb.inv_freq": "model-11-of-64.safetensors",
+ "model.layers.5.self_attn.q_proj.weight": "model-6-of-64.safetensors",
+ "model.layers.5.self_attn.q_proj.bias": "model-6-of-64.safetensors",
+ "model.layers.5.self_attn.k_proj.weight": "model-6-of-64.safetensors",
+ "model.layers.5.self_attn.k_proj.bias": "model-6-of-64.safetensors",
+ "model.layers.5.self_attn.v_proj.weight": "model-6-of-64.safetensors",
+ "model.layers.5.self_attn.v_proj.bias": "model-6-of-64.safetensors",
+ "model.layers.5.self_attn.o_proj.weight": "model-6-of-64.safetensors",
+ "model.layers.5.mlp.gate_proj.weight": "model-6-of-64.safetensors",
+ "model.layers.5.mlp.down_proj.weight": "model-6-of-64.safetensors",
+ "model.layers.5.mlp.up_proj.weight": "model-6-of-64.safetensors",
+ "model.layers.5.input_layernorm.weight": "model-6-of-64.safetensors",
+ "model.layers.5.post_attention_layernorm.weight": "model-6-of-64.safetensors",
+ "model.layers.5.self_attn.rotary_emb.inv_freq": "model-6-of-64.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-14-of-64.safetensors",
+ "model.layers.13.self_attn.q_proj.bias": "model-14-of-64.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-14-of-64.safetensors",
+ "model.layers.13.self_attn.k_proj.bias": "model-14-of-64.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-14-of-64.safetensors",
+ "model.layers.13.self_attn.v_proj.bias": "model-14-of-64.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-14-of-64.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-14-of-64.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-14-of-64.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-14-of-64.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-14-of-64.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-14-of-64.safetensors",
+ "model.layers.13.self_attn.rotary_emb.inv_freq": "model-14-of-64.safetensors",
+ "model.layers.6.self_attn.q_proj.weight": "model-7-of-64.safetensors",
+ "model.layers.6.self_attn.q_proj.bias": "model-7-of-64.safetensors",
+ "model.layers.6.self_attn.k_proj.weight": "model-7-of-64.safetensors",
+ "model.layers.6.self_attn.k_proj.bias": "model-7-of-64.safetensors",
+ "model.layers.6.self_attn.v_proj.weight": "model-7-of-64.safetensors",
+ "model.layers.6.self_attn.v_proj.bias": "model-7-of-64.safetensors",
+ "model.layers.6.self_attn.o_proj.weight": "model-7-of-64.safetensors",
+ "model.layers.6.mlp.gate_proj.weight": "model-7-of-64.safetensors",
+ "model.layers.6.mlp.down_proj.weight": "model-7-of-64.safetensors",
+ "model.layers.6.mlp.up_proj.weight": "model-7-of-64.safetensors",
+ "model.layers.6.input_layernorm.weight": "model-7-of-64.safetensors",
+ "model.layers.6.post_attention_layernorm.weight": "model-7-of-64.safetensors",
+ "model.layers.6.self_attn.rotary_emb.inv_freq": "model-7-of-64.safetensors",
+ "model.layers.3.self_attn.q_proj.weight": "model-4-of-64.safetensors",
+ "model.layers.3.self_attn.q_proj.bias": "model-4-of-64.safetensors",
+ "model.layers.3.self_attn.k_proj.weight": "model-4-of-64.safetensors",
+ "model.layers.3.self_attn.k_proj.bias": "model-4-of-64.safetensors",
+ "model.layers.3.self_attn.v_proj.weight": "model-4-of-64.safetensors",
+ "model.layers.3.self_attn.v_proj.bias": "model-4-of-64.safetensors",
+ "model.layers.3.self_attn.o_proj.weight": "model-4-of-64.safetensors",
+ "model.layers.3.mlp.gate_proj.weight": "model-4-of-64.safetensors",
+ "model.layers.3.mlp.down_proj.weight": "model-4-of-64.safetensors",
+ "model.layers.3.mlp.up_proj.weight": "model-4-of-64.safetensors",
+ "model.layers.3.input_layernorm.weight": "model-4-of-64.safetensors",
+ "model.layers.3.post_attention_layernorm.weight": "model-4-of-64.safetensors",
+ "model.layers.3.self_attn.rotary_emb.inv_freq": "model-4-of-64.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-19-of-64.safetensors",
+ "model.layers.18.self_attn.q_proj.bias": "model-19-of-64.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-19-of-64.safetensors",
+ "model.layers.18.self_attn.k_proj.bias": "model-19-of-64.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-19-of-64.safetensors",
+ "model.layers.18.self_attn.v_proj.bias": "model-19-of-64.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-19-of-64.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-19-of-64.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-19-of-64.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-19-of-64.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-19-of-64.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-19-of-64.safetensors",
+ "model.layers.18.self_attn.rotary_emb.inv_freq": "model-19-of-64.safetensors",
+ "model.layers.8.self_attn.q_proj.weight": "model-9-of-64.safetensors",
+ "model.layers.8.self_attn.q_proj.bias": "model-9-of-64.safetensors",
+ "model.layers.8.self_attn.k_proj.weight": "model-9-of-64.safetensors",
+ "model.layers.8.self_attn.k_proj.bias": "model-9-of-64.safetensors",
+ "model.layers.8.self_attn.v_proj.weight": "model-9-of-64.safetensors",
+ "model.layers.8.self_attn.v_proj.bias": "model-9-of-64.safetensors",
+ "model.layers.8.self_attn.o_proj.weight": "model-9-of-64.safetensors",
+ "model.layers.8.mlp.gate_proj.weight": "model-9-of-64.safetensors",
+ "model.layers.8.mlp.down_proj.weight": "model-9-of-64.safetensors",
+ "model.layers.8.mlp.up_proj.weight": "model-9-of-64.safetensors",
+ "model.layers.8.input_layernorm.weight": "model-9-of-64.safetensors",
+ "model.layers.8.post_attention_layernorm.weight": "model-9-of-64.safetensors",
+ "model.layers.8.self_attn.rotary_emb.inv_freq": "model-9-of-64.safetensors",
+ "model.layers.9.self_attn.q_proj.weight": "model-10-of-64.safetensors",
+ "model.layers.9.self_attn.q_proj.bias": "model-10-of-64.safetensors",
+ "model.layers.9.self_attn.k_proj.weight": "model-10-of-64.safetensors",
+ "model.layers.9.self_attn.k_proj.bias": "model-10-of-64.safetensors",
+ "model.layers.9.self_attn.v_proj.weight": "model-10-of-64.safetensors",
+ "model.layers.9.self_attn.v_proj.bias": "model-10-of-64.safetensors",
+ "model.layers.9.self_attn.o_proj.weight": "model-10-of-64.safetensors",
+ "model.layers.9.mlp.gate_proj.weight": "model-10-of-64.safetensors",
+ "model.layers.9.mlp.down_proj.weight": "model-10-of-64.safetensors",
+ "model.layers.9.mlp.up_proj.weight": "model-10-of-64.safetensors",
+ "model.layers.9.input_layernorm.weight": "model-10-of-64.safetensors",
+ "model.layers.9.post_attention_layernorm.weight": "model-10-of-64.safetensors",
+ "model.layers.9.self_attn.rotary_emb.inv_freq": "model-10-of-64.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-24-of-64.safetensors",
+ "model.layers.23.self_attn.q_proj.bias": "model-24-of-64.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-24-of-64.safetensors",
+ "model.layers.23.self_attn.k_proj.bias": "model-24-of-64.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-24-of-64.safetensors",
+ "model.layers.23.self_attn.v_proj.bias": "model-24-of-64.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-24-of-64.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-24-of-64.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-24-of-64.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-24-of-64.safetensors",
+ "model.layers.23.input_layernorm.weight": "model-24-of-64.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-24-of-64.safetensors",
+ "model.layers.23.self_attn.rotary_emb.inv_freq": "model-24-of-64.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-22-of-64.safetensors",
+ "model.layers.21.self_attn.q_proj.bias": "model-22-of-64.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-22-of-64.safetensors",
+ "model.layers.21.self_attn.k_proj.bias": "model-22-of-64.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-22-of-64.safetensors",
+ "model.layers.21.self_attn.v_proj.bias": "model-22-of-64.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-22-of-64.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-22-of-64.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-22-of-64.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-22-of-64.safetensors",
+ "model.layers.21.input_layernorm.weight": "model-22-of-64.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-22-of-64.safetensors",
+ "model.layers.21.self_attn.rotary_emb.inv_freq": "model-22-of-64.safetensors",
+ "model.layers.4.self_attn.q_proj.weight": "model-5-of-64.safetensors",
+ "model.layers.4.self_attn.q_proj.bias": "model-5-of-64.safetensors",
+ "model.layers.4.self_attn.k_proj.weight": "model-5-of-64.safetensors",
+ "model.layers.4.self_attn.k_proj.bias": "model-5-of-64.safetensors",
+ "model.layers.4.self_attn.v_proj.weight": "model-5-of-64.safetensors",
+ "model.layers.4.self_attn.v_proj.bias": "model-5-of-64.safetensors",
+ "model.layers.4.self_attn.o_proj.weight": "model-5-of-64.safetensors",
+ "model.layers.4.mlp.gate_proj.weight": "model-5-of-64.safetensors",
+ "model.layers.4.mlp.down_proj.weight": "model-5-of-64.safetensors",
+ "model.layers.4.mlp.up_proj.weight": "model-5-of-64.safetensors",
+ "model.layers.4.input_layernorm.weight": "model-5-of-64.safetensors",
+ "model.layers.4.post_attention_layernorm.weight": "model-5-of-64.safetensors",
+ "model.layers.4.self_attn.rotary_emb.inv_freq": "model-5-of-64.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model-25-of-64.safetensors",
+ "model.layers.24.self_attn.q_proj.bias": "model-25-of-64.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model-25-of-64.safetensors",
+ "model.layers.24.self_attn.k_proj.bias": "model-25-of-64.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model-25-of-64.safetensors",
+ "model.layers.24.self_attn.v_proj.bias": "model-25-of-64.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model-25-of-64.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model-25-of-64.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model-25-of-64.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model-25-of-64.safetensors",
+ "model.layers.24.input_layernorm.weight": "model-25-of-64.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model-25-of-64.safetensors",
+ "model.layers.24.self_attn.rotary_emb.inv_freq": "model-25-of-64.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-2-of-64.safetensors",
+ "model.layers.1.self_attn.q_proj.bias": "model-2-of-64.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-2-of-64.safetensors",
+ "model.layers.1.self_attn.k_proj.bias": "model-2-of-64.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-2-of-64.safetensors",
+ "model.layers.1.self_attn.v_proj.bias": "model-2-of-64.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-2-of-64.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-2-of-64.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-2-of-64.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-2-of-64.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-2-of-64.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-2-of-64.safetensors",
+ "model.layers.1.self_attn.rotary_emb.inv_freq": "model-2-of-64.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-3-of-64.safetensors",
+ "model.layers.2.self_attn.q_proj.bias": "model-3-of-64.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-3-of-64.safetensors",
+ "model.layers.2.self_attn.k_proj.bias": "model-3-of-64.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-3-of-64.safetensors",
+ "model.layers.2.self_attn.v_proj.bias": "model-3-of-64.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-3-of-64.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-3-of-64.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-3-of-64.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-3-of-64.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-3-of-64.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-3-of-64.safetensors",
+ "model.layers.2.self_attn.rotary_emb.inv_freq": "model-3-of-64.safetensors"
+ }
+}
\ No newline at end of file
diff --git a/modeling_opencua.py b/modeling_opencua.py
new file mode 100644
index 0000000000000000000000000000000000000000..6aa35ac9bff7442857b7db4fafc10baee92b5187
--- /dev/null
+++ b/modeling_opencua.py
@@ -0,0 +1,449 @@
+# ------------------------------------------------------------------------------
+# OpenCUA‑32B Model
+#
+# This implementation is adapted from the Qwen2.5‑VL reference code in
+# Hugging Face Transformers v4.53.0:
+# https://github.com/huggingface/transformers/tree/v4.53.0/src/transformers/models/qwen2_5_vl
+#
+# Checkpoint used for weight initialisation:
+#   "Qwen/Qwen2.5-VL-32B-Instruct" – https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct
+#
+# Key modifications
+# -----------------
+# • Replaced Multimodal Rotary Position Embedding (M‑RoPE) with 1‑D RoPE for
+# compatibility with OpenCUA training settings.
+# • Wrapped vision encoder and language model into a single
+# `OpenCUAForConditionalGeneration` class.
+# • Simplified weight initialisation — this file targets inference / fine‑tuning,
+# not training from scratch.
+#
+# Copyright (c) 2025 XLANG Lab, The University of Hong Kong
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the “Software”), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+#
+# ------------------------------------------------------------------------------
+# Prohibited Uses & Additional Disclaimer
+# ---------------------------------------
+# • The Software may **not** be used for any purpose or activity that violates
+# applicable laws or regulations in any jurisdiction.
+# • The authors, contributors, and copyright holders are **not responsible**
+# for any illegal, unethical, or harmful use of the Software, nor for any
+# direct or indirect damages resulting from such use.
+# • Use of the “OpenCUA” name, logo, or trademarks does **not** imply any
+# endorsement or affiliation unless a separate written permission is obtained.
+
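+# ------------------------------------------------------------------------------
+# Usage sketch (illustrative only, not executed). Assuming the checkpoint's
+# config.json maps `AutoModel` to `OpenCUAForConditionalGeneration` via `auto_map`,
+# and with "<repo-or-path>" as a placeholder for the actual model directory,
+# loading for inference might look like:
+#
+#     from transformers import AutoModel, AutoTokenizer
+#     model = AutoModel.from_pretrained("<repo-or-path>", trust_remote_code=True,
+#                                       torch_dtype="auto", device_map="auto")
+#     tokenizer = AutoTokenizer.from_pretrained("<repo-or-path>", trust_remote_code=True)
+#
+# The forward pass below expects `input_ids`, optional `pixel_values`, and
+# `grid_thws` as produced by the accompanying image processor and tokenizer.
+# ------------------------------------------------------------------------------
+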
+import torch
+import torch.nn as nn
+from transformers.cache_utils import Cache
+from transformers.modeling_utils import PreTrainedModel
+from transformers.models.llava.modeling_llava import LlavaCausalLMOutputWithPast
+
+from .configuration_opencua import OpenCUAConfig
+from transformers.models.qwen2_5_vl.modeling_qwen2_5_vl import Qwen2_5_VisionTransformerPretrainedModel
+from transformers.models.qwen2.modeling_qwen2 import Qwen2ForCausalLM
+
+
+class OpenCUAPreTrainedModel(PreTrainedModel):
+ config_class = OpenCUAConfig
+ base_model_prefix = "model"
+ _no_split_modules = ["Qwen2_5_VisionTransformerPretrainedModel"]
+ _skip_keys_device_placement = "past_key_values"
+ _supports_flash_attn_2 = True
+
+ def _init_weights(self, module):
+        # Important: this class (ported from the LLaVA reference implementation in Transformers) is not
+        # meant for training from scratch, only for inference and fine-tuning, so the full weight
+        # initialisation code has been removed. For training from scratch, the original codebase
+        # https://github.com/haotian-liu/LLaVA/tree/main/llava should serve that purpose.
+ std = (
+ self.config.initializer_range
+ if hasattr(self.config, "initializer_range")
+ else self.config.text_config.initializer_range
+ )
+
+ if hasattr(module, "class_embedding"):
+ module.class_embedding.data.normal_(mean=0.0, std=std)
+
+ if isinstance(module, (nn.Linear, nn.Conv2d)):
+ module.weight.data.normal_(mean=0.0, std=std)
+ if module.bias is not None:
+ module.bias.data.zero_()
+ elif isinstance(module, nn.Embedding):
+ module.weight.data.normal_(mean=0.0, std=std)
+ if module.padding_idx is not None:
+ module.weight.data[module.padding_idx].zero_()
+
+ @property
+ def _supports_sdpa(self):
+ """
+ Retrieve language_model's attribute to check whether the model supports
+ SDPA or not.
+ """
+ return self.language_model._supports_sdpa
+
+
+class OpenCUAForConditionalGeneration(OpenCUAPreTrainedModel):
+
+ def __init__(self, config: OpenCUAConfig):
+ super().__init__(config)
+ self.vision_tower = Qwen2_5_VisionTransformerPretrainedModel(config.vision_config)
+ self.language_model = Qwen2ForCausalLM(config.text_config)
+ self.post_init()
+
+ def get_input_embeddings(self):
+ return self.language_model.get_input_embeddings()
+
+ def set_input_embeddings(self, value):
+ self.language_model.set_input_embeddings(value)
+
+ def get_output_embeddings(self):
+ return self.language_model.get_output_embeddings()
+
+ def set_output_embeddings(self, new_embeddings):
+ self.language_model.set_output_embeddings(new_embeddings)
+
+ def set_decoder(self, decoder):
+ self.language_model.set_decoder(decoder)
+
+ def get_decoder(self):
+ return self.language_model.get_decoder()
+
+ def tie_weights(self):
+ return self.language_model.tie_weights()
+
+ def resize_token_embeddings(self, new_num_tokens: int | None = None, pad_to_multiple_of=None) -> nn.Embedding:
+ model_embeds = self.language_model.resize_token_embeddings(
+ new_num_tokens, pad_to_multiple_of)
+ # update vocab size
+ self.config.text_config.vocab_size = model_embeds.num_embeddings
+ self.vocab_size = model_embeds.num_embeddings
+ return model_embeds
+
+ def _merge_input_ids_with_image_features(
+ self,
+ image_features: torch.Tensor,
+ feature_lengths: list[int],
+ inputs_embeds: torch.Tensor,
+ input_ids: torch.Tensor,
+ attention_mask: torch.Tensor,
+ labels: torch.Tensor | None = None):
+ """
+ Args:
+ image_features (:obj:`torch.Tensor` of shape :obj:`(num_image_tokens, embed_dim)`):
+ The image features to merge with the input embeddings.
+            feature_lengths (:obj:`list[int]`): the number of feature tokens contributed by each image.
+ inputs_embeds (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length, embed_dim)`):
+ The input embeddings.
+ input_ids (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`):
+ The input ids.
+ attention_mask (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`):
+ The attention mask.
+ labels (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`, *optional*):
+ The labels.
+ """
+
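+        # Illustrative example (not executed), assuming a single image whose features span 4 tokens:
+        #   input_ids            = [bos, <image>, t1, t2]  ->  _token_occupation_table = [1, 4, 1, 1]
+        #   cumulative positions = [0, 4, 5, 6], so max_embed_dim = 7; the text embeddings land at
+        #   positions 0, 5 and 6 while the 4 image features fill positions 1-4.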
+ image_token_index: int = self.config.media_placeholder_token_id
+ pad_token_id: int = self.config.pad_token_id
+ ignore_index: int = self.config.ignore_index
+
+ _, embed_dim = image_features.shape
+
+ batch_size, sequence_length = input_ids.shape
+ left_padding = not torch.sum(
+ input_ids[:, -1] == torch.tensor(pad_token_id))
+
+ # 1. Create a mask to know where special image tokens are
+ _token_occupation_table = torch.ones_like(input_ids.flatten())
+ _token_occupation_table[input_ids.flatten() == image_token_index] = \
+ torch.tensor(feature_lengths,
+ dtype=torch.long, device=input_ids.device)
+ _token_occupation_table = _token_occupation_table.reshape(
+ input_ids.shape)
+
+ max_embed_dim = _token_occupation_table.sum(-1).max().item()
+ assert max_embed_dim >= sequence_length, (
+ f"The maximum embedding dimension ({max_embed_dim}) is less than the sequence length ({sequence_length})"
+ )
+ batch_indices, non_image_indices = torch.where(input_ids != image_token_index)
+
+ # 2. Compute the positions where text should be written
+ # Calculate new positions for text tokens in merged image-text sequence.
+ new_token_positions = torch.cumsum(_token_occupation_table, -1) - 1
+ nb_image_pad = max_embed_dim - 1 - new_token_positions[:, -1]
+ if left_padding:
+ new_token_positions += nb_image_pad[:, None] # offset for left padding
+ text_to_overwrite = new_token_positions[batch_indices, non_image_indices]
+
+ # 3. Create the full embedding, already padded to the maximum position
+ final_embedding = torch.zeros(
+ batch_size, max_embed_dim, embed_dim, dtype=inputs_embeds.dtype, device=inputs_embeds.device
+ )
+ final_attention_mask = torch.zeros(
+ batch_size, max_embed_dim, dtype=attention_mask.dtype, device=inputs_embeds.device
+ )
+ if labels is not None:
+ final_labels = torch.full(
+ (batch_size, max_embed_dim), ignore_index, dtype=input_ids.dtype, device=input_ids.device
+ )
+ # In case the Vision model or the Language model has been offloaded to CPU, we need to manually
+ # set the corresponding tensors into their correct target device.
+ target_device = inputs_embeds.device
+ batch_indices, non_image_indices, text_to_overwrite = (
+ batch_indices.to(target_device),
+ non_image_indices.to(target_device),
+ text_to_overwrite.to(target_device),
+ )
+ attention_mask = attention_mask.to(target_device)
+
+ # 4. Fill the embeddings based on the mask.
+ final_embedding[batch_indices, text_to_overwrite] = inputs_embeds[batch_indices, non_image_indices]
+ final_attention_mask[batch_indices, text_to_overwrite] = attention_mask[batch_indices, non_image_indices]
+ if labels is not None:
+ final_labels[batch_indices, text_to_overwrite] = labels[batch_indices, non_image_indices]
+
+ # 5. Fill the embeddings corresponding to the images. Anything that is not `text_positions` needs filling (#29835)
+ image_to_overwrite = torch.full(
+ (batch_size, max_embed_dim), True, dtype=torch.bool, device=inputs_embeds.device
+ )
+ image_to_overwrite[batch_indices, text_to_overwrite] = False
+ image_to_overwrite &= image_to_overwrite.cumsum(-1) - 1 >= nb_image_pad[:, None].to(target_device)
+
+ if image_to_overwrite.sum() != image_features.shape[:-1].numel():
+ raise ValueError(
+ f"The input provided to the model are wrong. The number of image tokens is {image_to_overwrite.sum()} while"
+ f" the number of image features given to the model is {image_features.shape[:-1].numel()}. "
+ "This prevents correct indexing and breaks batch generation."
+ )
+
+ final_embedding[image_to_overwrite] = image_features.contiguous().reshape(-1, embed_dim).to(target_device)
+ final_attention_mask |= image_to_overwrite
+ position_ids = (final_attention_mask.cumsum(-1) - 1).masked_fill_((final_attention_mask == 0), 1)
+
+ # 6. Mask out the embedding at padding positions, as we later use the past_key_value value to determine the non-attended tokens.
+ batch_indices, pad_indices = torch.where(input_ids == pad_token_id)
+ indices_to_mask = new_token_positions[batch_indices, pad_indices]
+
+ final_embedding[batch_indices, indices_to_mask] = 0
+
+ if labels is None:
+ final_labels = None
+
+ return final_embedding, final_attention_mask, final_labels, position_ids
+
+ def _extract_image_features(self,
+ pixel_values: torch.FloatTensor | list[torch.FloatTensor],
+ grid_thws: torch.FloatTensor,
+ ):
+ """
+ Args:
+ pixel_values (:obj:`torch.FloatTensor` of shape :obj:`(sum_num_image_tokens, channels)`):
+ The pixel values of the images processed by image processor.
+            grid_thws (:obj:`torch.Tensor` of shape :obj:`(num_images, 3)`): the temporal, height and width patch grid of each image.
+
+ Returns:
+ selected_image_feature (:obj:`torch.FloatTensor` of shape :obj:`(num_image_tokens, embed_dim)`):
+ The selected image features to use as input to the projector head.
+
+ """
+
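+        # Rough sizing sketch (an assumption based on the `// 4` below and merge_size=2 in
+        # preprocessor_config.json): a grid_thw of (1, 28, 28) corresponds to 1*28*28 = 784 vision
+        # patches, which the 2x2 spatial merge reduces to 784 // 4 = 196 image tokens for the LM.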
+        assert len(grid_thws.shape) == 2 and grid_thws.shape[1] == 3, f"grid_thws must be a 2D tensor with shape (num_images, 3), but got {grid_thws.shape}"
+ if isinstance(pixel_values, list):
+ pixel_values = torch.cat(pixel_values, dim=0)
+ image_features_ = self.vision_tower(pixel_values, grid_thw=grid_thws)
+ image_features_list = []
+ start_idx = 0
+ for i, grid_thw in enumerate(grid_thws):
+ end_idx = start_idx + (grid_thw[0] * grid_thw[1] * grid_thw[2]) // 4
+ image_features_list.append(image_features_[start_idx:end_idx, :])
+ start_idx = end_idx
+
+ selected_image_feature = torch.cat(image_features_list, dim=0)
+ feature_lengths = [x.size(0) for x in image_features_list]
+ return selected_image_feature, feature_lengths
+
+ def forward(
+ self,
+ input_ids: torch.LongTensor | None = None,
+ pixel_values: torch.FloatTensor | list[torch.FloatTensor] | None = None,
+ grid_thws: torch.Tensor = None,
+ attention_mask: torch.Tensor | None = None,
+ position_ids: torch.LongTensor | None = None,
+ past_key_values: list[torch.FloatTensor] | None = None,
+ inputs_embeds: torch.FloatTensor | None = None,
+ labels: torch.LongTensor | None = None,
+ use_cache: bool | None = None,
+ output_attentions: bool | None = None,
+ output_hidden_states: bool | None = None,
+ return_dict: bool | None = None,
+ ) -> tuple | LlavaCausalLMOutputWithPast:
+ r"""
+ Args:
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+
+        """
+
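+        # Illustrative note (an assumption about typical usage, not enforced here): `labels` is usually a
+        # copy of `input_ids` with prompt, padding and image-placeholder positions set to -100, so that the
+        # cross-entropy loss computed at the end of this method only covers the response tokens.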
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+ output_hidden_states = (
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+ )
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+ if inputs_embeds is None:
+            # 1. Extract the input embeddings
+ inputs_embeds = self.get_input_embeddings()(input_ids)
+ # 2. Merge text and images
+ if pixel_values is not None and len(pixel_values) > 0 and input_ids.shape[1] != 1:
+ image_feature, feature_lengths = self._extract_image_features(
+ pixel_values, grid_thws)
+
+ inputs_embeds = inputs_embeds.to(image_feature.dtype) # num_tokens, embed_dim
+ inputs_embeds, attention_mask, labels, position_ids = \
+ self._merge_input_ids_with_image_features(image_feature, feature_lengths, inputs_embeds, input_ids, attention_mask, labels
+ )
+        # In case input_ids.shape[1] == 1 with pixel_values provided and past_key_values != None, we are
+        # generating with the cache and only need to extend the attention mask over the expanded image tokens
+ elif past_key_values is not None and pixel_values is not None and input_ids.shape[1] == 1:
+ # Retrieve the first layer to inspect the logits and mask out the hidden states
+ # that are set to 0
+ first_layer_past_key_value = past_key_values[0][0][:, :, :, 0]
+
+ # Sum all dimensions of head_dim (-2) to avoid random errors such as: https://github.com/huggingface/transformers/pull/28032#issuecomment-1863691941
+ batch_index, non_attended_tokens = torch.where(first_layer_past_key_value.float().sum(-2) == 0)
+
+ # Get the target length
+ target_length = input_ids.shape[1]
+ past_length = first_layer_past_key_value.shape[-1]
+
+ extended_attention_mask = torch.ones(
+ (attention_mask.shape[0], past_length),
+ dtype=attention_mask.dtype,
+ device=attention_mask.device,
+ )
+
+ # Filter out only the tokens that can be un-attended, this can happen
+ # if one uses Llava + Fused modules where the cache on the
+ # first iteration is already big enough, or if one passes custom cache
+ valid_indices = non_attended_tokens < extended_attention_mask.size(-1)
+ new_batch_index = batch_index[valid_indices]
+ new_non_attended_tokens = non_attended_tokens[valid_indices]
+
+ # Zero-out the places where we don't need to attend
+ extended_attention_mask[new_batch_index, new_non_attended_tokens] = 0
+
+ attention_mask = torch.cat((extended_attention_mask, attention_mask[:, -target_length:]), dim=1)
+ position_ids = torch.sum(attention_mask, dim=1).unsqueeze(-1) - 1
+
+ outputs = self.language_model(
+ attention_mask=attention_mask,
+ position_ids=position_ids,
+ past_key_values=past_key_values,
+ inputs_embeds=inputs_embeds,
+ use_cache=use_cache,
+ output_attentions=output_attentions,
+ output_hidden_states=output_hidden_states,
+ return_dict=return_dict,
+ )
+
+ logits = outputs[0]
+
+ loss = None
+ if labels is not None:
+ # Shift so that tokens < n predict n
+ if attention_mask is not None:
+ shift_attention_mask = attention_mask[..., 1:]
+ shift_logits = logits[..., :-1, :][shift_attention_mask.to(logits.device) != 0].contiguous()
+ shift_labels = labels[..., 1:][shift_attention_mask.to(labels.device) != 0].contiguous()
+ else:
+ shift_logits = logits[..., :-1, :].contiguous()
+ shift_labels = labels[..., 1:].contiguous()
+ # Flatten the tokens
+ loss_fct = nn.CrossEntropyLoss()
+ loss = loss_fct(
+ shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1).to(shift_logits.device)
+ )
+
+ if not return_dict:
+ output = (logits,) + outputs[1:]
+ return (loss,) + output if loss is not None else output
+
+ return LlavaCausalLMOutputWithPast(
+ loss=loss,
+ logits=logits,
+ past_key_values=outputs.past_key_values,
+ hidden_states=outputs.hidden_states,
+ attentions=outputs.attentions,
+ )
+
+ def prepare_inputs_for_generation(
+ self, input_ids, past_key_values=None, inputs_embeds=None, pixel_values=None, grid_thws=None, attention_mask=None, **kwargs
+ ):
+ if past_key_values is not None:
+ if isinstance(past_key_values, Cache):
+ cache_length = past_key_values.get_seq_length()
+ past_length = past_key_values.seen_tokens
+ else:
+ cache_length = past_length = past_key_values[0][0].shape[2]
+
+ # Keep only the unprocessed tokens:
+ # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
+ # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
+ # input)
+ if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
+ input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
+ # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
+ # input_ids based on the past_length.
+ elif past_length < input_ids.shape[1]:
+ input_ids = input_ids[:, past_length:]
+ # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
+ elif self.config.media_placeholder_token_id in input_ids:
+ input_ids = input_ids[:, input_ids.shape[1] - 1 :]
+ # If the cache has seen more tokens than it can hold, then the cache has a size limit. Let's discard the
+ # older attention values, as their corresponding values are not part of the input.
+ if cache_length < past_length and attention_mask is not None:
+ attention_mask = attention_mask[:, -(cache_length + input_ids.shape[1]) :]
+
+ position_ids = kwargs.get("position_ids", None)
+ if attention_mask is not None and position_ids is None:
+ # create position_ids on the fly for batch generation
+ position_ids = attention_mask.long().cumsum(-1) - 1
+ position_ids.masked_fill_(attention_mask == 0, 1)
+ if past_key_values:
+ position_ids = position_ids[:, -input_ids.shape[1] :]
+
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+ if inputs_embeds is not None and past_key_values is None:
+ model_inputs = {"inputs_embeds": inputs_embeds}
+ else:
+ model_inputs = {"input_ids": input_ids}
+
+ model_inputs.update(
+ {
+ "position_ids": position_ids,
+ "past_key_values": past_key_values,
+ "use_cache": kwargs.get("use_cache"),
+ "attention_mask": attention_mask,
+ "pixel_values": pixel_values,
+ "grid_thws": grid_thws,
+ }
+ )
+ return model_inputs
+
+ def _reorder_cache(self, *args, **kwargs):
+ return self.language_model._reorder_cache(*args, **kwargs)
diff --git a/preprocessor_config.json b/preprocessor_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..e8670303869490ed63fe72fa31cbd384f3f269dc
--- /dev/null
+++ b/preprocessor_config.json
@@ -0,0 +1,18 @@
+{
+ "min_pixels": 3136,
+ "max_pixels": 12845056,
+ "patch_size": 14,
+ "temporal_patch_size": 2,
+ "merge_size": 2,
+ "image_mean": [
+ 0.48145466,
+ 0.4578275,
+ 0.40821073
+ ],
+ "image_std": [
+ 0.26862954,
+ 0.26130258,
+ 0.27577711
+ ],
+ "image_processor_type": "Qwen2VLImageProcessor"
+}
\ No newline at end of file
diff --git a/tiktoken.model b/tiktoken.model
new file mode 100644
index 0000000000000000000000000000000000000000..b59a29faddb69027255022411d8710e864c5d112
--- /dev/null
+++ b/tiktoken.model
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2b1b8dfb5cc5f024bafc373121c6aba3f66f9a5a0269e243470a1de16a33186
+size 2561218
diff --git a/tokenization_opencua.py b/tokenization_opencua.py
new file mode 100644
index 0000000000000000000000000000000000000000..2e3e2fb2e283945c821201ddfe8d3c2a6a032b7d
--- /dev/null
+++ b/tokenization_opencua.py
@@ -0,0 +1,367 @@
+import os
+import tiktoken
+
+from logging import getLogger
+from pathlib import Path
+from typing import (
+ cast,
+ Tuple,
+ Dict,
+ Iterator,
+ List,
+ Union,
+ Optional,
+)
+from shutil import copyfile
+from tiktoken.load import load_tiktoken_bpe
+from tokenizers import AddedToken
+from transformers.tokenization_utils import PreTrainedTokenizer
+from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode
+
+
+
+logger = getLogger(__name__)
+VOCAB_FILES_NAMES = {"vocab_file": "tiktoken.model"}
+
+class TikTokenTokenizer(PreTrainedTokenizer):
+ """
+ Tokenizing and encoding/decoding text using the Tiktoken tokenizer. See megatron/tokenizer/tiktoken_tokenizer.py.
+
+ This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
+ this superclass for more information regarding those methods.
+
+ Args:
+ vocab_file (`str`):
+ The path to the Tiktoken model file.
+        bos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"[BOS]"`):
+            The beginning-of-sequence token that was used during pretraining. Can be used as a sequence classifier token.
+        eos_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `"[EOS]"`):
+            The end-of-sequence token.
+        unk_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `None`):
+            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to this
+            token instead. When not provided (and `additional_special_tokens` is `None`), it is set to `"[UNK]"` and
+            appended as the second-to-last entry of the special-token list.
+        pad_token (`str` or `tokenizers.AddedToken`, *optional*, defaults to `None`):
+            The token used for padding, for example when batching sequences of different lengths. When not provided
+            (and `additional_special_tokens` is `None`), it is set to `"[PAD]"` and appended as the last entry of the
+            special-token list.
+ additional_special_tokens (list of `str`, *optional*):
+ A tuple or a list of additional tokens, which will be marked as `special`, meaning that they will be
+ skipped when decoding if `skip_special_tokens` is set to `True`.
+ """
+
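+    # Illustrative usage (not executed), assuming the checkpoint ships the tokenizer_config.json from this
+    # repo, whose `auto_map` resolves AutoTokenizer to `TikTokenV3`:
+    #
+    #     from transformers import AutoTokenizer
+    #     tok = AutoTokenizer.from_pretrained("<repo-or-path>", trust_remote_code=True)
+    #     ids = tok.encode("<|im_user|>user<|im_middle|>hello<|im_end|>")  # special tokens allowed by default
+    #     text = tok.decode(ids)
+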
+ vocab_files_names = VOCAB_FILES_NAMES
+
+ model_input_names = ["input_ids", "attention_mask"]
+
+ special_tokens: Dict[str, int]
+
+ num_reserved_special_tokens = 256
+
+ pat_str = "|".join(
+ [
+ r"""[\p{Han}]+""",
+ r"""[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}&&[^\p{Han}]]*[\p{Ll}\p{Lm}\p{Lo}\p{M}&&[^\p{Han}]]+(?i:'s|'t|'re|'ve|'m|'ll|'d)?""",
+ r"""[^\r\n\p{L}\p{N}]?[\p{Lu}\p{Lt}\p{Lm}\p{Lo}\p{M}&&[^\p{Han}]]+[\p{Ll}\p{Lm}\p{Lo}\p{M}&&[^\p{Han}]]*(?i:'s|'t|'re|'ve|'m|'ll|'d)?""",
+ r"""\p{N}{1,3}""",
+ r""" ?[^\s\p{L}\p{N}]+[\r\n]*""",
+ r"""\s*[\r\n]+""",
+ r"""\s+(?!\S)""",
+ r"""\s+""",
+ ]
+ )
+
+ def __init__(
+ self,
+ vocab_file,
+ bos_token: Union[str, AddedToken]="[BOS]",
+ eos_token: Union[str, AddedToken]="[EOS]",
+ unk_token: Union[str, AddedToken, None]=None,
+ pad_token: Union[str, AddedToken, None]=None,
+ additional_special_tokens: List[str]=None,
+ added_tokens_decoder: Optional[dict] = None,
+ **kwargs,
+ ):
+ assert os.path.isfile(vocab_file), vocab_file
+
+ if additional_special_tokens is None:
+ # dumping mode
+ used_special_tokens = [
+ "<|im_end|>",
+ "<|im_user|>",
+ "<|im_assistant|>",
+ "<|reserved_token_0|>",
+ "<|start_header_id|>",
+ "<|end_header_id|>",
+ "<|reserved_token_1|>",
+ "[EOT]",
+ "<|im_system|>",
+ "<|reserved_token_2|>",
+ "<|reserved_token_3|>",
+ "<|reserved_token_4|>",
+ "<|reserved_token_5|>",
+ "<|reserved_token_6|>",
+ "<|reserved_token_7|>",
+ "<|im_middle|>",
+ "<|media_begin|>",
+ "<|media_content|>",
+ "<|media_end|>",
+ "<|media_placeholder|>",
+ ]
+ used_reserved_tokens = 8
+ last_reserved_token_id = self.num_reserved_special_tokens - 4 - len(used_special_tokens) + used_reserved_tokens - 1
+ additional_special_tokens = used_special_tokens + [
+ f"<|reserved_token_{i}|>"
+ for i in range(used_reserved_tokens, last_reserved_token_id + 1)
+ ]
+ # num_reserved_special_tokens = additional_special_tokens + BOS + EOS + unk_token + pad_token
+ assert len(additional_special_tokens) + 4 == self.num_reserved_special_tokens, f"additional_special_tokens num: {len(additional_special_tokens)} is not correct"
+ # we assume that the instance is under initialization and unk_token and pad_token should be automatically inferred
+ if unk_token is not None:
+ raise ValueError("unk_token should not be set in dumping mode when additional_special_tokens is None")
+ if pad_token is not None:
+ raise ValueError("pad_token should not be set in dumping mode when additional_special_tokens is None")
+ # last two reserved tokens
+            unk_token = "[UNK]"
+            pad_token = "[PAD]"
+
+ logger.info(f"adding unk_token: {unk_token} and pad_token: {pad_token}")
+ self.additional_special_tokens = additional_special_tokens
+ special_tokens = [str(bos_token), str(eos_token)] + additional_special_tokens + [str(unk_token), str(pad_token)]
+
+ self.vocab_file = vocab_file
+ mergeable_ranks = load_tiktoken_bpe(vocab_file)
+ num_base_tokens = len(mergeable_ranks)
+ self.special_tokens = {
+ token: num_base_tokens + i for i, token in enumerate(special_tokens)
+ }
+ else:
+ self.additional_special_tokens = additional_special_tokens
+ special_tokens_mapping = {
+ i: added_tokens_decoder[i].content for i in added_tokens_decoder
+ }
+
+ self.vocab_file = vocab_file
+ mergeable_ranks = load_tiktoken_bpe(vocab_file)
+ num_base_tokens = len(mergeable_ranks)
+ self.special_tokens = {
+ special_tokens_mapping.get(i, f"<|reserved_token_{i}|>"): i
+ for i in range(
+ num_base_tokens, num_base_tokens + self.num_reserved_special_tokens + 2
+ )
+ }
+
+
+
+ self.model = tiktoken.Encoding(
+ name=Path(vocab_file).name,
+ pat_str=self.pat_str,
+ mergeable_ranks=mergeable_ranks,
+ special_tokens=self.special_tokens,
+ )
+ logger.info(f"Reloaded tiktoken model from {vocab_file}")
+
+ self.n_words: int = self.model.n_vocab
+ # BOS / EOS token IDs
+ self.bos_id: int = self.special_tokens[str(bos_token)]
+ self.eos_id: int = self.special_tokens[str(eos_token)]
+
+ logger.info(
+ f"#words: {self.n_words} - BOS ID: {self.bos_id} - EOS ID: {self.eos_id}"
+ )
+
+ self.pad_id: int = self.special_tokens[str(pad_token)]
+ self.unk_id: int = self.special_tokens[str(unk_token)]
+ self.byte_encoder = bytes_to_unicode()
+ self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
+
+ self.decoder = {}
+ for i in range(self.n_words):
+ # Taken from https://gist.github.com/xenova/a452a6474428de0182b17605a98631ee
+ decoding = ''.join([
+ self.byte_encoder[ord(char)] for char in
+ self.model.decode_single_token_bytes(i).decode('latin-1')
+ ])
+ self.decoder[i] = decoding
+
+ self.encoder = {}
+ for i in range(self.n_words):
+ if i in self.decoder:
+ self.encoder[self.decoder[i]] = i
+
+ super().__init__(
+ bos_token=bos_token,
+ eos_token=eos_token,
+ unk_token=unk_token,
+ pad_token=pad_token,
+ additional_special_tokens=self.additional_special_tokens,
+ **kwargs,
+ )
+ self.all_special_ids_set = set(self.all_special_ids)
+
+ def encode(
+ self,
+ text: str,
+ allow_special_tokens = True,
+ **kwargs
+ ) -> List[int]:
+ """
+ Encodes a string into a list of token IDs.
+
+ Args:
+ text (str): The input string to be encoded.
+
+ Returns:
+ list[int]: A list of token IDs.
+ """
+        # If any other kwargs are passed, fall back to super().encode, which contains a lot of logic for
+        # handling them; super().encode ultimately calls _tokenize and _convert_token_to_id.
+        # NOTE: our encode method is not fully compatible with super().encode,
+        # e.g. the default handling of special tokens differs (see `allow_special_tokens`).
+ if len(kwargs) > 0:
+ logger.warning( f"Calling super().encode with {kwargs}" )
+ return super().encode(text, **kwargs)
+
+ assert type(text) is str
+
+ # The tiktoken tokenizer can handle <=400k chars without
+ # pyo3_runtime.PanicException.
+ TIKTOKEN_MAX_ENCODE_CHARS = 400_000
+
+ # https://github.com/openai/tiktoken/issues/195
+ # Here we iterate over subsequences and split if we exceed the limit
+ # of max consecutive non-whitespace or whitespace characters.
+ MAX_NO_WHITESPACES_CHARS = 25_000
+
+ texts = self.pre_tokenizer_process(text)
+
+ all_substrs = []
+ for text in texts:
+ substrs = (
+ substr
+ for i in range(0, len(text), TIKTOKEN_MAX_ENCODE_CHARS)
+ for substr in self._split_whitespaces_or_nonwhitespaces(
+ text[i: i + TIKTOKEN_MAX_ENCODE_CHARS], MAX_NO_WHITESPACES_CHARS
+ )
+ )
+ all_substrs.extend(substrs)
+
+ t: List[int] = []
+ for substr in all_substrs:
+ if allow_special_tokens:
+ t.extend(
+ self.model.encode(
+ substr,
+ allowed_special="all",
+ )
+ )
+ else:
+ t.extend(
+ self.model.encode(
+ substr,
+ disallowed_special=(),
+ )
+ )
+
+ return t
+
+ def decode(
+ self,
+ token_ids: Union[int, List[int]],
+ **kwargs
+ ) -> str:
+ """
+ Decodes a list of token IDs into a string.
+
+ Args:
+ token_ids (List[int]): The list of token IDs to be decoded.
+
+ Returns:
+ str: The decoded string.
+ """
+        # If any other kwargs are passed, fall back to super().decode, which contains a lot of logic for
+        # handling them; super().decode ultimately calls convert_tokens_to_string and _convert_id_to_token.
+ if len(kwargs) > 0:
+ return super().decode(token_ids, **kwargs)
+
+ if type(token_ids) is int:
+ token_ids = [token_ids]
+
+ return self.model.decode(cast(List[int], token_ids))
+
+ @staticmethod
+ def _split_whitespaces_or_nonwhitespaces(
+ s: str, max_consecutive_slice_len: int
+ ) -> Iterator[str]:
+ """
+ Splits the string `s` so that each substring contains no more than `max_consecutive_slice_len`
+ consecutive whitespaces or consecutive non-whitespaces.
+ """
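+        # e.g. with max_consecutive_slice_len=3, "aaaaa" yields "aaa" followed by "aa".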
+ current_slice_len = 0
+ current_slice_is_space = s[0].isspace() if len(s) > 0 else False
+ slice_start = 0
+
+ for i in range(len(s)):
+ is_now_space = s[i].isspace()
+
+ if current_slice_is_space ^ is_now_space:
+ current_slice_len = 1
+ current_slice_is_space = is_now_space
+ else:
+ current_slice_len += 1
+ if current_slice_len > max_consecutive_slice_len:
+ yield s[slice_start:i]
+ slice_start = i
+ current_slice_len = 1
+ yield s[slice_start:]
+
+ def pre_tokenizer_process(self, text: str) -> List[str]:
+ """
+        Pre-processes the input text into a list of text chunks before tokenization.
+        The base implementation returns the text unchanged; subclasses may override it to split the input into smaller chunks for internal processing.
+ """
+ return [text]
+
+
+ """ ----- Below are the abstract methods required by PreTrainedTokenizer ----- """
+ @property
+ def vocab_size(self) -> int:
+ return self.n_words
+
+ def get_vocab(self) -> Dict[str, int]:
+ return self.encoder
+
+ def _tokenize(self, text: str, **kwargs) -> List[str]:
+ return [
+ self.decoder[t]
+ for t in self.encode(text)
+ ]
+
+ def _convert_token_to_id(self, token: str) -> int:
+ return self.encoder.get(token, self.unk_id)
+
+ def _convert_id_to_token(self, index: int) -> str:
+ return self.decoder.get(index)
+
+ @staticmethod
+ def clean_up_tokenization(out_string: str) -> str:
+ return out_string
+
+ def convert_tokens_to_string(self, tokens: List[str]) -> str:
+ text = ''.join(tokens)
+ text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', 'replace')
+ return text
+
+ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
+ if not os.path.isdir(save_directory):
+ raise ValueError(f"vocabulary path ({save_directory}) should be a directory")
+ out_vocab_file = os.path.join(
+ save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+ )
+
+ if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
+ copyfile(self.vocab_file, out_vocab_file)
+
+ return (out_vocab_file,)
+
+
+class TikTokenV3(TikTokenTokenizer):
+ num_reserved_special_tokens = 293 + 128
+ pat_str = "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?\\p{L}+|\\p{N}| ?[^\\s\\p{L}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+"
\ No newline at end of file
diff --git a/tokenizer_config.json b/tokenizer_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..f4893bbf1da770d6b3e4c2eef35fe098d25a1351
--- /dev/null
+++ b/tokenizer_config.json
@@ -0,0 +1,234 @@
+{
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "[BOS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "[EOS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|im_user|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|im_assistant|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|reserved_token_0|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|start_header_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|end_header_id|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|reserved_token_1|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "[EOT]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|im_system|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|reserved_token_2|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|reserved_token_3|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|reserved_token_4|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<|reserved_token_5|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151658": {
+ "content": "<|reserved_token_6|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151659": {
+ "content": "<|reserved_token_7|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151660": {
+ "content": "<|im_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151661": {
+ "content": "<|media_begin|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151662": {
+ "content": "<|media_content|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151663": {
+ "content": "<|media_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151664": {
+ "content": "<|media_placeholder|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "152062": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "152063": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+
+ },
+ "additional_special_tokens": [
+ "<|im_end|>",
+ "<|im_user|>",
+ "<|im_assistant|>",
+ "<|reserved_token_0|>",
+ "<|start_header_id|>",
+ "<|end_header_id|>",
+ "<|reserved_token_1|>",
+ "[EOT]",
+ "<|im_system|>",
+ "<|reserved_token_2|>",
+ "<|reserved_token_3|>",
+ "<|reserved_token_4|>",
+ "<|reserved_token_5|>",
+ "<|reserved_token_6|>",
+ "<|reserved_token_7|>",
+ "<|im_middle|>",
+ "<|media_begin|>",
+ "<|media_content|>",
+ "<|media_end|>",
+ "<|media_placeholder|>"
+ ],
+ "bos_token": "[BOS]",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "[EOS]",
+ "extra_special_tokens": {},
+ "chat_template": "{%- for message in messages -%}{%- if loop.first and messages[0]['role'] != 'system' -%}{{'<|im_system|>system<|im_middle|>You are a helpful assistant<|im_end|>'}}{%- endif -%}{%- if message['role'] == 'system' -%}{{'<|im_system|>'}}{%- endif -%}{%- if message['role'] == 'user' -%}{{'<|im_user|>'}}{%- endif -%}{%- if message['role'] == 'assistant' -%}{{'<|im_assistant|>'}}{%- endif -%}{{- message['role'] -}}{{'<|im_middle|>'}}{%- if message['content'] is string -%}{{- message['content'] + '<|im_end|>' -}}{%- else -%}{%- for content in message['content'] -%}{%- if content['type'] == 'image' or 'image' in content or 'image_url' in content -%}{{'<|media_begin|>image<|media_content|><|media_placeholder|><|media_end|>'}}{%- else -%}{{content['text']}}{%- endif -%}{%- endfor -%}{{'<|im_end|>'}}{%- endif -%}{%- endfor -%}{%- if add_generation_prompt -%}{{'<|im_assistant|>assistant<|im_middle|>'}}{%- endif -%}",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": "[PAD]",
+ "tokenizer_class": "TikTokenV3",
+ "unk_token": "[UNK]",
+ "auto_map": {
+ "AutoTokenizer": [
+ "tokenization_opencua.TikTokenV3",
+ null
+ ]
+ }
+}
\ No newline at end of file