Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, pandas
Alignment-Lab-AI committed · Commit 280f32b · verified · 1 parent: 07e417c

Rename README (2).md to README .md

Files changed (1): README (2).md → README .md (renamed, +31 −52)
@@ -1,25 +1,12 @@
- ---
- base_model: Alignment-Lab-AI/Neural-network-medium-untuned-theta
- tags:
- - axolotl
- - Alignment-Lab-AI
- model-index:
- - name: Buzz-5B-Medium
-   results: []
- ---
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
-
-
-
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6436279eaaef013d1af225c9/fWaQucBWfabfnMsAFN8hv.png)

- # Buzz-5b-Medium: Advancing Efficiency through Iterative Fine-Tuning

  ## Introduction

  - [Alignment Lab AI](https://AlignmentLab.ai) is pleased to introduce our latest research efforts with:

- **Buzz-5b-Medium**, a state-of-the-art language model developed in collaboration with [Hive Digital Technologies](https://hivedt.com/).

  The Buzz model, Dataset, and Code are to be released to build a toolkit that aims to demonstrate the potential for reuse and optimization of existing pretrained language models to continuously refine the heights of performance that can be achieved with optimal use of FLOPs. Alongside Buzz-5b-Medium, we release

@@ -27,13 +14,11 @@ The Buzz model, Dataset, and Code are to be released to build a toolkit that aim
  - [Buzz-2.5b-Small](https://huggingface.co/tempbuzz/buzz-Buzz-2.5b-Small)
  - [Buzz-8B-Large](https://huggingface.co/tempbuzz/Lab-AI/Buzz-8B-Large)

- the **Buzz dataset** and two additional models: **Buzz-2.5B-Small** (2.5B parameters) and **Buzz-8B-Large** (8B parameters), the codebase to refine, filter and augment the data, as well as prune and train your own variants, will additionally be released in the coming days.

- ## Performance

- Buzz-5b-Medium achieves remarkably low train and validation loss, with unseen data loss reaching around **0.5** by the end of training. This performance showcases the effectiveness of our novel iterative fine-tuning approach, which maximizes the reuse of pretrained weights. Even the smallest variant, Buzz-Small, maintains a steady train loss of approximately **0.4-0.6**, on entirely new data and hold out sets.
-
- [ benchmark scores table here]

  ## Iterative Fine-Tuning Methodology

@@ -47,46 +32,40 @@ Our research builds upon the concepts introduced in several key papers, includin

  By combining high quality data, iterative fine-tuning with carefully selected "grounding" distributions from previous epochs, we have developed a cost-effective approach that pushes the boundaries of model reuse and optimization.

- ## notably, we observe that the models have not yet appeared to plateu with the application of these techniques
-
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6436279eaaef013d1af225c9/wyHyDIJnNmbomonZKQAD0.png)
- https://wandb.ai/llm_surgery/llama-3-8b-vs-5b
- https://wandb.ai/autometa/neural-network-1
- https://wandb.ai/autometa/buzz-baby?nw=nwuserautometa
- https://wandb.ai/autometa/buzz-brother?nw=nwuserautometa
- https://wandb.ai/autometa/buzz-big?nw=nwuserautometa

- ## Chat Template and Inference

- To use the Buzz-5b-Medium model for chat-based tasks, you can utilize the provided chat template. Here's an example of how to format the chat template and perform inference using the Hugging Face Transformers library:
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- model_name = "tempbuzz/Buzz-5b-Medium"
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name)
-
- chat_template = """{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"""
-
- messages = [
-     {"role": "user", "content": "Hello, how are you?"},
-     {"role": "assistant", "content": "I'm doing well, thank you for asking! How can I assist you today?"},
-     {"role": "user", "content": "Can you tell me a joke?"}
- ]
-
- input_text = chat_template.format(messages=messages, add_generation_prompt=True)
- input_ids = tokenizer.encode(input_text, return_tensors="pt")
-
- output = model.generate(input_ids, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.eos_token_id)
- generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
-
- print(generated_text)
- ```
  ## Conclusion

- We intend to focus on *updating* and improving the performance of these models, and surrounding open sourced infrastructure. Our next effort will focus on context and implementing the research currently being conducted by [Wing-Lian](https://github.com/winglian), the lead developer of the [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) training framework that underpins these experiments. We encourage the community to explore Wing-Lian's work, such as the [Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) and [llama-3-8b-256k-PoSE](https://huggingface.co/winglian/llama-3-8b-256k-PoSE) models, which showcase the potential for further advancements in language modeling.
-
  Buzz hopes to be a proof of concept, and a toolkit to demonstrate and enable the community in the pursuit of efficient and effective locally run, personally owned language models. Through collaboration with [Hive Digital Technologies](https://hivedigitaltechnologies.com/), who have enabled us to perform this research, we have demonstrated the immense potential for model reuse and optimization. The Buzz models and dataset are open sourced with [////////].

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6436279eaaef013d1af225c9/fWaQucBWfabfnMsAFN8hv.png)

+ # Buzz: Advancing Efficiency through Iterative Fine-Tuning

  ## Introduction

  - [Alignment Lab AI](https://AlignmentLab.ai) is pleased to introduce our latest research efforts with:

+ **Buzz**, a highly curated pretraining-scale assistant dataset unifying RL and SFT, developed in collaboration with [Hive Digital Technologies](https://hivedt.com/).

  The Buzz model, Dataset, and Code are to be released to build a toolkit that aims to demonstrate the potential for reuse and optimization of existing pretrained language models to continuously refine the heights of performance that can be achieved with optimal use of FLOPs. Alongside Buzz-5b-Medium, we release
 
  - [Buzz-2.5b-Small](https://huggingface.co/tempbuzz/buzz-Buzz-2.5b-Small)
  - [Buzz-8B-Large](https://huggingface.co/tempbuzz/Lab-AI/Buzz-8B-Large)

+ ## Features

+ Buzz contains over 500 datasets, deduplicated, with formatting built to maintain and extend compatibility between training types and the current local ecosystem.

+ The datasets within comprise a variety of high-quality instruction-following, conversational, storytelling, and coding datasets, as well as over 5 million new rows of data, in addition to several million re-augmented rows, reflecting the totality of the techniques learned since our release of [Open-Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca).

  ## Iterative Fine-Tuning Methodology

  By combining high quality data, iterative fine-tuning with carefully selected "grounding" distributions from previous epochs, we have developed a cost-effective approach that pushes the boundaries of model reuse and optimization.

+ Notably, we observe that training on a single epoch of high-quality, in-domain data can still achieve remarkably low loss values before overfitting.

+ ## Data Structure and Formatting

+ Buzz should be compatible out of the box with the sharegpt type in Axolotl and lmsys' FastChat during training. It contains the following structure:

+ ```
+ {
+   "source": "string containing the source dataset",
+   "stack": "chosen/rejected for RL techniques",
+   "question_index": "optional field (int64), only present in the DPO-specific dataset to match DPO pairs",
+   "conversations": [
+     {
+       "from": "system",
+       "value": "an initial system prompt or user query, may or may not be present depending on the row"
+     },
+     {
+       "from": "human or system",
+       "value": "an initial 'human' query"
+     },
+     {
+       "from": "gpt",
+       "value": "a response to the previous turn, may be followed by additional human/gpt alternations"
+     }
+   ]
+ }
+ ```
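To make the schema concrete, here is a minimal sketch of a validator for a Buzz-style row. This is illustrative only: the `validate_row` helper and the hand-built example row are hypothetical, not part of the released codebase; the field names simply follow the structure shown above.

```python
def validate_row(row: dict) -> bool:
    """Check that a row matches the sharegpt-style structure described above."""
    if not isinstance(row.get("source"), str):
        return False
    # "question_index" is optional and only present in DPO-specific rows.
    if "question_index" in row and not isinstance(row["question_index"], int):
        return False
    turns = row.get("conversations")
    if not isinstance(turns, list) or not turns:
        return False
    # Each turn is a {"from": ..., "value": ...} dict, sharegpt-style.
    return all(
        isinstance(t, dict)
        and t.get("from") in {"system", "human", "gpt"}
        and isinstance(t.get("value"), str)
        for t in turns
    )

# Hypothetical example row, shaped like the schema above.
example = {
    "source": "example-source-dataset",
    "stack": "chosen",
    "conversations": [
        {"from": "human", "value": "What is the Buzz dataset?"},
        {"from": "gpt", "value": "A curated assistant dataset unifying RL and SFT."},
    ],
}

print(validate_row(example))  # True
```

A check like this can be run over rows loaded from the parquet files before training to catch malformed conversations early.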
 
  ## Conclusion

+ We intend to focus on *updating* and improving the dataset, the tools to construct it, and other surrounding open-sourced infrastructure. Our next effort will focus on context and implementing the research currently being conducted by [Wing-Lian](https://github.com/winglian), the lead developer of the [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) training framework that underpins these experiments. We encourage the community to explore Wing-Lian's work, such as the [Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) and [llama-3-8b-256k-PoSE](https://huggingface.co/winglian/llama-3-8b-256k-PoSE) models, which showcase the potential for further advancements in language modeling.

  Buzz hopes to be a proof of concept, and a toolkit to demonstrate and enable the community in the pursuit of efficient and effective locally run, personally owned language models. Through collaboration with [Hive Digital Technologies](https://hivedigitaltechnologies.com/), who have enabled us to perform this research, we have demonstrated the immense potential for model reuse and optimization. The Buzz models and dataset are open sourced with [////////].