Can *FIM* and *instruct* datasets be mixed for LoRA training?

#23
by JacobHsu - opened

I want to build a custom coding-style LoRA based on the Qwen2.5-Coder-7B-Instruct model.
If I train a LoRA on the FIM data or the instruct data separately, each adapter achieves decent results on its own task (but not on the other).
However, I want a model like yours that has both FIM and chat ability.
But if I mix both datasets and do SFT LoRA on them at the same time, the results are terrible.
Is this to be expected?
Or would it be better to continue pre-training with the FIM dataset first and then do SFT with the chat dataset? Concretely, what I mean by "mixing" is something like the sketch below.
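This is only a minimal sketch of how I build the mixed set: the file names and the "text"/"prefix"/"suffix"/"middle"/"messages" fields are placeholders from my own pipeline, and the FIM special tokens are the ones I believe the Qwen2.5-Coder tokenizer uses, so please correct me if they are wrong.

import json
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct")

def render_fim(sample):
    # Render a {"prefix", "suffix", "middle"} record with the FIM special tokens.
    return (
        "<|fim_prefix|>" + sample["prefix"]
        + "<|fim_suffix|>" + sample["suffix"]
        + "<|fim_middle|>" + sample["middle"]
    )

def render_chat(sample):
    # Render a {"messages": [...]} record with the instruct chat template.
    return tokenizer.apply_chat_template(sample["messages"], tokenize=False)

with open("fim_dataset.json", encoding="utf-8") as f:
    fim_texts = [{"text": render_fim(s)} for s in json.load(f)]

with open("chat_dataset.json", encoding="utf-8") as f:
    chat_texts = [{"text": render_chat(s)} for s in json.load(f)]

# Shuffle the two tasks together into one SFT set.
mixed = fim_texts + chat_texts
random.shuffle(mixed)

with open("mixed_sft_dataset.json", "w", encoding="utf-8") as f:
    json.dump(mixed, f, ensure_ascii=False, indent=2)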

You can create a single dataset from your PDFs by merging all of them and then train with LoRA. If you want to merge datasets, here is an example:

import json
import random

# Load the big dataset
with open("datasets/catala_dataset.json", "r", encoding="utf-8") as f:
    big = json.load(f)

# Load the mini dataset used for corrections
with open("datasets/mini_correccio.json", "r", encoding="utf-8") as f:
    mini = json.load(f)

# Combine and shuffle
total = big + mini
random.shuffle(total)

# Save everything
with open("datasets/final_dataset.json", "w", encoding="utf-8") as f:
    json.dump(total, f, ensure_ascii=False, indent=2)

good luck
