Commit 94d34d9 (parent: 61944bd): Update README.md

README.md CHANGED

@@ -1,5 +1,5 @@
 ---
-license:
+license: mit
 base_model: mistralai/Mistral-7B-v0.1
 tags:
 - generated_from_trainer
@@ -8,27 +8,35 @@ model-index:
 results: []
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
-# supercot-lora
+# mistral-v0.1-supercot-lora

-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [supercot](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset) dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.9790

 ## Model description

-
+SuperCOT is a LoRA trained with the aim of making Mistral follow prompts for Langchain better, by infusing chain-of-thought datasets, code explanations and instructions, snippets, logical deductions and Alpaca GPT-4 prompts. It uses a mixture of the following datasets:
+
+https://huggingface.co/datasets/QingyiSi/Alpaca-CoT
+- Chain of thought QED
+- Chain of thought Aqua
+- CodeAlpaca
+
+https://huggingface.co/datasets/neulab/conala
+- Code snippets
+
+https://huggingface.co/datasets/yahma/alpaca-cleaned
+- Alpaca GPT4

 ## Intended uses & limitations

-
+The model will show biases similar to those exhibited by the base model. It is not intended for supplying factual information or advice in any form.

 ## Training and evaluation data

-
+[kaiokendev/SuperCOT-dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)

 ## Training procedure

@@ -108,3 +116,37 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
 - Tokenizers 0.14.0
+
+### Citations
+
+Alpaca COT datasets
+```
+@misc{alpaca-cot,
+  author = {Qingyi Si and Zheng Lin},
+  school = {Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China},
+  title = {Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface},
+  year = {2023},
+  publisher = {GitHub},
+  journal = {GitHub repository},
+  howpublished = {\url{https://github.com/PhoebusSi/alpaca-CoT}},
+}
+```
+Stanford Alpaca
+```
+@misc{alpaca,
+  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
+  title = {Stanford Alpaca: An Instruction-following LLaMA model},
+  year = {2023},
+  publisher = {GitHub},
+  journal = {GitHub repository},
+  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
+}
+```
+Google FLAN
+```
+@inproceedings{weifinetuned,
+  title={Finetuned Language Models are Zero-Shot Learners},
+  author={Wei, Jason and Bosma, Maarten and Zhao, Vincent and Guu, Kelvin and Yu, Adams Wei and Lester, Brian and Du, Nan and Dai, Andrew M and Le, Quoc V},
+  booktitle={International Conference on Learning Representations}
+}
+```