Rename README (2).md to README .md

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64df8a8b7cca058c3bff2db1/HWL9PPcNZHYHBi_hhUvUC.png)

# Buzz: Advancing Efficiency through Iterative Fine-Tuning

## Introduction

[Alignment Lab AI](https://AlignmentLab.ai) is pleased to introduce our latest research effort:

**Buzz**, a highly curated pretraining-scale assistant dataset unifying RL and SFT, developed in collaboration with [Hive Digital Technologies](https://hivedt.com/).

The Buzz model, dataset, and code are being released to build a toolkit that aims to demonstrate the potential for reuse and optimization of existing pretrained language models, continuously refining the heights of performance that can be achieved with an optimal use of FLOPs. Alongside Buzz-5b-Medium, we release the following checkpoints; a brief loading sketch follows the list:

- [Buzz-2.5b-Small](https://huggingface.co/tempbuzz/buzz-Buzz-2.5b-Small)
- [Buzz-8B-Large](https://huggingface.co/tempbuzz/Lab-AI/Buzz-8B-Large)
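
A minimal sketch of loading one of these checkpoints with Hugging Face Transformers; the `tempbuzz/Buzz-5b-Medium` name matches the 5B release this card describes, while the prompt and generation settings are illustrative assumptions rather than tuned recommendations:

```python
# Minimal loading-and-generation sketch; generation parameters are
# illustrative assumptions, not tuned recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tempbuzz/Buzz-5b-Medium"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain iterative fine-tuning in one paragraph."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```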

## Features

Buzz contains over 500 datasets, deduplicated, with formatting built to maintain and extend compatibility between training types and the current local ecosystem.

The datasets within comprise a variety of high-quality instruction-following, conversational, storytelling, and coding datasets, as well as over 5 million new rows of data and several million re-augmented rows, representing the totality of the techniques learned since our release of [Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca).

## Iterative Fine-Tuning Methodology

Our research builds upon the concepts introduced in several key papers, including […]

By combining high-quality data and iterative fine-tuning with carefully selected "grounding" distributions from previous epochs, we have developed a cost-effective approach that pushes the boundaries of model reuse and optimization.
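
As a rough illustration of what mixing in a "grounding" distribution from a previous epoch might look like (the actual ratios and selection used for Buzz are not specified here, so this is a sketch under assumed values):

```python
import random

def build_epoch_mix(new_rows, prev_epoch_rows, grounding_fraction=0.1, seed=0):
    """Blend a small sample of the previous epoch's rows into the next
    epoch's training set as a grounding distribution. The 10% fraction
    is an assumption for illustration, not the ratio used by the authors."""
    rng = random.Random(seed)
    k = int(len(prev_epoch_rows) * grounding_fraction)
    grounding = rng.sample(prev_epoch_rows, k) if k else []
    mixed = list(new_rows) + grounding
    rng.shuffle(mixed)
    return mixed
```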

## Notably, we observe that training on a single epoch of high-quality, in-domain data can still achieve remarkably low loss values before overfitting.

## Data Structure and Formatting

Buzz should be compatible out of the box with the `sharegpt` type in Axolotl and lmsys' FastChat during training. It contains the following structure:

```
{
  "source": "string containing the source dataset",
  "stack": "chosen/rejected for RL techniques",
  "question_index": optional field (int64), only present in DPO-specific datasets to match DPO pairs,
  "conversations": [
    {
      "from": "system",
      "value": "an initial system prompt or user query, may or may not be present depending on the row"
    },
    {
      "from": "human or system",
      "value": "an initial 'human' query"
    },
    {
      "from": "gpt",
      "value": "a response to the previous turn, may be followed by additional human/gpt alternations"
    }
  ]
}
```
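
A short sketch of consuming rows in this schema, e.g. flattening a conversation into text and pairing chosen/rejected rows for DPO-style training; the helper names are ours, and the grouping logic assumes `question_index` links each pair as described above:

```python
# Sketch for working with Buzz-style rows; field names follow the
# schema shown above, helper names are illustrative.
from collections import defaultdict

def render_conversation(row):
    """Flatten the 'conversations' turns into a simple tagged transcript."""
    return "\n".join(f"{turn['from']}: {turn['value']}" for turn in row["conversations"])

def pair_dpo_rows(rows):
    """Group rows sharing a question_index and split them into
    (chosen, rejected) pairs via the 'stack' field."""
    by_question = defaultdict(dict)
    for row in rows:
        if "question_index" in row:
            by_question[row["question_index"]][row["stack"]] = row
    return [
        (g["chosen"], g["rejected"])
        for g in by_question.values()
        if "chosen" in g and "rejected" in g
    ]
```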

## Conclusion

We intend to focus on *updating* and improving the dataset, the tools used to construct it, and the surrounding open-sourced infrastructure. Our next effort will focus on context length, implementing the research currently being conducted by [Wing-Lian](https://github.com/winglian), the lead developer of the [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) training framework that underpins these experiments. We encourage the community to explore Wing-Lian's work, such as the [Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) and [llama-3-8b-256k-PoSE](https://huggingface.co/winglian/llama-3-8b-256k-PoSE) models, which showcase the potential for further advancements in language modeling.

Buzz hopes to be a proof of concept and a toolkit to demonstrate and enable the community in the pursuit of efficient and effective locally run, personally owned language models. Through collaboration with [Hive Digital Technologies](https://hivedigitaltechnologies.com/), who have enabled us to perform this research, we have demonstrated the immense potential for model reuse and optimization. The Buzz models and dataset are open sourced with [////////].