AdaptLLM committed on
Commit 69c0281 · verified · 1 Parent(s): 2508bf9

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -121,7 +121,7 @@ Please refer to the [remote-sensing-VQA-benchmark](https://huggingface.co/datase

 ## 3. To Reproduce this Domain-Adapted MLLM

-Using our training data, [biomed-visual-instructions](https://huggingface.co/datasets/AdaptLLM/remote-sensing-visual-instructions), you can easily reproduce our models based on the [LlamaFactory](https://github.com/hiyouga/LLaMA-Factory) repository.
+Using our training data, [remote-sensing-visual-instructions](https://huggingface.co/datasets/AdaptLLM/remote-sensing-visual-instructions), you can easily reproduce our models based on the [LlamaFactory](https://github.com/hiyouga/LLaMA-Factory) repository.

 For reference, we train from Qwen2.5-VL-3B-Instruct for 1 epoch with a learning rate of 1e-5, and a global batch size of 128.
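
The training settings named in the diff context (Qwen2.5-VL-3B-Instruct, 1 epoch, learning rate 1e-5, global batch size 128) could be expressed as a LlamaFactory SFT config along these lines. This is a hedged sketch, not the authors' actual config: the dataset entry name, template, output path, and the per-device/accumulation split of the global batch size are assumptions.

```yaml
### Hypothetical LLaMA-Factory full-finetuning SFT config.
### Field names follow the repository's example configs;
### values marked below are assumptions, not from this commit.
model_name_or_path: Qwen/Qwen2.5-VL-3B-Instruct
stage: sft
do_train: true
finetuning_type: full
dataset: remote_sensing_visual_instructions  # assumed dataset_info.json entry name
template: qwen2_vl                           # assumed chat template
learning_rate: 1.0e-5                        # from the README
num_train_epochs: 1.0                        # from the README
# Global batch size 128, e.g. 8 GPUs x 2 per device x 8 accumulation (assumed split)
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
bf16: true
output_dir: saves/qwen2_5vl-3b/full/sft      # assumed path
```

Launching with LlamaFactory's CLI (e.g. `llamafactory-cli train <config>.yaml`) should then reproduce a comparable run, provided the dataset has been registered locally.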