# LoRA Fine-Tune of Qwen2.5-VL-3B-Instruct on the ComicsPAP Dataset

[Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) fine-tuned simultaneously on all five tasks of the [ComicsPAP](https://huggingface.co/datasets/VLR-CVC/ComicsPAP) dataset.

Training used the AdamW optimizer with a constant learning rate of 2e-4 for 5k steps at an effective batch size of 128. The LoRA configuration used rank r = 8, α = 16, and a dropout rate of 0.05.
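The LoRA hyperparameters above can be expressed as a PEFT configuration. This is a minimal sketch: the `target_modules` list is an assumption (the attention projections are a common choice for Qwen2.5-VL adapters) and is not stated in this card.

```python
from peft import LoraConfig

# LoRA setup matching the hyperparameters described above.
# target_modules is an assumption, not taken from this model card.
lora_config = LoraConfig(
    r=8,                 # rank r = 8
    lora_alpha=16,       # α = 16
    lora_dropout=0.05,   # dropout rate 0.05
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

The config can then be applied to the base model with `peft.get_peft_model(model, lora_config)` before training.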
## Results