---
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---
# FlowerTune LoRA Model
This is a LoRA adapter for mistralai/Mistral-7B-Instruct-v0.3, fine-tuned with the Flower federated learning framework on a general NLP dataset.
## Training Details
- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower
Unlike a centrally fine-tuned adapter, this one was trained on vicgalle/alpaca-gpt4 in a distributed fashion: clients fine-tune the LoRA weights locally and Flower aggregates their updates into the shared adapter.
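The two ideas this card combines can be sketched in plain Python: the LoRA correction `W_eff = W + (alpha / r) * B @ A`, and FedAvg-style averaging of adapter weights across federated clients. This is an illustrative toy sketch, not the actual FlowerTune training code; all dimensions, values, and function names are made up for the example.

```python
# Toy illustration of federated LoRA (NOT the FlowerTune implementation).
# Matrices are lists of rows; dimensions are deliberately tiny.

def matmul(B, A):
    """Multiply two matrices given as lists of rows."""
    return [[sum(b * a for b, a in zip(row, col)) for col in zip(*A)]
            for row in B]

def lora_effective_weight(W, A, B, alpha, r):
    """Apply the LoRA update: W_eff = W + (alpha / r) * (B @ A)."""
    scale = alpha / r
    delta = matmul(B, A)  # low-rank correction, rank r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

def fedavg(client_adapters):
    """Uniformly average one adapter matrix across federated clients."""
    n = len(client_adapters)
    rows, cols = len(client_adapters[0]), len(client_adapters[0][0])
    return [[sum(m[i][j] for m in client_adapters) / n for j in range(cols)]
            for i in range(rows)]

# Example: rank-1 update of a 2x2 identity weight, alpha=2, r=1.
W_eff = lora_effective_weight([[1, 0], [0, 1]],   # frozen base weight W
                              [[1, 2]],           # A: r x in_features
                              [[1], [1]],         # B: out_features x r
                              alpha=2, r=1)
# W_eff == [[3, 4], [2, 5]]

# Example: averaging the A-matrix from two clients.
A_avg = fedavg([[[2, 2]], [[4, 4]]])
# A_avg == [[3.0, 3.0]]
```

In real federated LoRA only the small A and B matrices (not the 7B base weights) are communicated and averaged, which is what makes the setup practical over bandwidth-limited clients.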
## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)