# 🐍 Merged Qwen2.5-VL Model (LoRA + Base)

This repository contains the merged weights of the LoRA adapter located at `./final_model` and the base model `unsloth/Qwen2.5-VL-3B-Instruct-bnb-4bit`.

The merge was performed with PEFT's `merge_and_unload()` on 2025-06-04.
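
For reference, a merge of this kind can be sketched with PEFT roughly as follows. This is a minimal sketch rather than the exact script used here: the model class and the output directory `merged_model` are illustrative assumptions.

```python
from peft import PeftModel
from transformers import Qwen2_5_VLForConditionalGeneration

# Load the 4-bit base model (the quantization config stored in the repo is applied automatically).
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "unsloth/Qwen2.5-VL-3B-Instruct-bnb-4bit",
    device_map="auto",
)

# Attach the LoRA adapter saved at ./final_model, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "./final_model")
merged = model.merge_and_unload()

# Save the merged checkpoint (directory name is illustrative).
merged.save_pretrained("merged_model")
```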

## Usage

```python
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained("Litian2002/Qwen2.5-VL-3B-Spatial-bnb-4bit")
model = FastVisionModel.for_inference(model)  # Enable native 2x faster inference

# Your inference code here
```
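
A complete generation call with Unsloth could look like the sketch below; the image file, prompt text, and generation settings are placeholders, not part of this repository:

```python
from unsloth import FastVisionModel
from PIL import Image

model, tokenizer = FastVisionModel.from_pretrained("Litian2002/Qwen2.5-VL-3B-Spatial-bnb-4bit")
FastVisionModel.for_inference(model)

image = Image.open("example.jpg")  # placeholder image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the spatial layout of this scene."},  # placeholder prompt
    ]},
]

# Build the chat prompt, then tokenize the image and text together.
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(image, input_text, add_special_tokens=False, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```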

Or with transformers:

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("Litian2002/Qwen2.5-VL-3B-Spatial-bnb-4bit", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Litian2002/Qwen2.5-VL-3B-Spatial-bnb-4bit", trust_remote_code=True)
```
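
For image inputs with plain `transformers`, the model is usually loaded with a generation head and paired with the processor rather than only the tokenizer. The sketch below assumes a `transformers` version with native Qwen2.5-VL support; the class choice, image file, and prompt are assumptions:

```python
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Litian2002/Qwen2.5-VL-3B-Spatial-bnb-4bit"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the spatial relations between the objects."},  # placeholder prompt
    ]},
]

# Render the chat template to a string, then encode text and image together.
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding the answer.
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```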