Update README.md
README.md CHANGED
@@ -13,11 +13,11 @@ tags:
 - PEFT
 - Quantization
 ---
-# NeuroBit LLM
+# NeuroBit-1.0-Exp LLM
 
 ## Overview
 
-The **Neurobit
+The **Neurobit-1.0-Exp** is a state-of-the-art fine-tuned model derived from **Meta-Llama-3.1-8B-bnb-4bit**, purpose-built to deliver high-quality educational content. Designed to meet the needs of students and educators, this model leverages advanced techniques, including **LoRA**, **PEFT**, and **RSLoRA**, to generate accurate, contextually relevant, and engaging outputs. We're naming it **NeuroBit-1.0-Exp** to signify its status as an experimental prototype, pushing the boundaries of innovation.
 
 This model supports a wide range of educational applications, from summarization to personalized study guide generation, and has been optimized for efficiency with 4-bit quantization.
 
@@ -137,8 +137,8 @@ To generate high-quality educational content tailored for diverse academic needs
 If you use this model in your work, please cite it as follows:
 
 ```bibtex
-@misc{
-title={169Pi/
+@misc{169Pi_neuroBit-1.0-exp_llm,
+title={169Pi/neuroBit-1.0-exp_llm: Fine-Tuned Educational Model},
 author={169Pi},
 year={2024},
 publisher={Hugging Face},
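
Since the updated overview describes a 4-bit quantized fine-tune of Meta-Llama-3.1-8B-bnb-4bit, a short loading sketch may help readers try it. This is a minimal example, not taken from the model card itself: it assumes the repository id is `169Pi/neuroBit-1.0-exp_llm` (inferred from the citation above), that the model loads through the standard `transformers` + `bitsandbytes` 4-bit path, and that `bitsandbytes` and `accelerate` are installed on a GPU machine.

```python
# Minimal loading sketch (assumptions: repo id taken from the citation above;
# standard transformers + bitsandbytes 4-bit loading path; GPU available).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "169Pi/neuroBit-1.0-exp_llm"  # assumed repository id

# 4-bit NF4 quantization config, mirroring the bnb-4bit base model the README describes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Example educational prompt, in line with the use cases listed in the overview
prompt = "Summarize the key ideas of photosynthesis for a middle-school student."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```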