<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# FalCoder 🦅👩‍💻

**Falcon-7b** fine-tuned on the **CodeAlpaca 20k instructions dataset** with **QLoRA**, using the [PEFT](https://github.com/huggingface/peft) library.
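
For orientation, here is a minimal sketch of what a QLoRA setup with PEFT typically looks like for this base model. The quantization settings, LoRA rank/alpha, and target modules below are illustrative assumptions, not FalCoder's actual training configuration (see the hyperparameters section below):

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "tiiuae/falcon-7b"

# QLoRA: load the frozen base model quantized to 4-bit (NF4)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters. "query_key_value" is Falcon's fused
# attention projection; the r/alpha/dropout values here are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From there, fine-tuning typically proceeds with a standard `transformers` `Trainer` loop over the instruction dataset.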
## Model description 🧠

The base model is [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b), a 7B-parameter causal decoder-only model from TII.
## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): 20K instruction-following examples, originally used to fine-tune the Code Alpaca model.
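
To inspect the data, a quick sketch using the `datasets` library (the `train` split name is an assumption about the dataset's layout):

```py
from datasets import load_dataset

# Load CodeAlpaca_20K and peek at one instruction-following example.
dataset = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")
print(dataset)     # row count and column names
print(dataset[0])  # a single instruction/output pair
```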
### Training hyperparameters ⚙
TBA
### Training results 🗒️
| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
### Example of usage 👩‍💻

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# ... (the original snippet loads the model/tokenizer and defines a generate() helper here) ...

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```
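
The snippet above omits the model loading and the `generate` definition, so here is a self-contained sketch of a typical `generate` helper for this model. The Alpaca-style prompt template and the sampling settings are assumptions for illustration, not necessarily what the original snippet uses:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/falcoder-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Alpaca-style prompt (assumed; CodeAlpaca fine-tunes usually follow it)
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.2,
            top_p=0.95,
            pad_token_id=tokenizer.eos_token_id,
        )
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    # Keep only the model's answer after the response marker
    return text.split("### Response:")[-1].strip()

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```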

### Citation

```
@misc{manuel_romero_2023,
  author    = { {Manuel Romero} },
  title     = { falcoder-7b (Revision e061237) },
  year      = 2023,
  url       = { https://huggingface.co/mrm8488/falcoder-7b },
  doi       = { 10.57967/hf/0789 },
  publisher = { Hugging Face }
}
```