Commit d65fec6 by dfurman · 1 parent: 45375bd

Update README.md

Files changed (1): README.md (+9 −9)
README.md CHANGED
@@ -10,7 +10,7 @@ pipeline_tag: text-generation
  base_model: meta-llama/Llama-2-70b-hf
  ---
 
- # llama-2-70b-dolphin 🦙🐬
+ # llama-2-70b-instruct-v0.1 🦙🐬
 
  This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) on the first 25k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single H100 (80 GB PCIe) for roughly 17 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
 
@@ -30,13 +30,13 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/E
 
  * Model license: Llama 2 Community License Agreement
  * Basic usage: [notebook](assets/basic_inference_llama_2_dolphin.ipynb)
- * Finetuning code: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-70b-dolphin-peft.py)
- * Loss curves: [plot](https://huggingface.co/dfurman/llama-2-70b-dolphin-peft#finetuning-description)
- * Runtime stats: [table](https://huggingface.co/dfurman/llama-2-70b-dolphin-peft#runtime-tests)
+ * Finetuning code: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-70b-instruct-v0.1-peft.py)
+ * Loss curves: [plot](https://huggingface.co/dfurman/llama-2-70b-instruct-v0.1-peft#finetuning-description)
+ * Runtime stats: [table](https://huggingface.co/dfurman/llama-2-70b-instruct-v0.1-peft#runtime-tests)
 
  ## Loss curve
 
- ![loss curves](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/jul_24_23_1_14_00_log_loss_curves_llama-2-70b-dolphin.png)
+ ![loss curves](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/jul_24_23_1_14_00_log_loss_curves_llama-2-70b-instruct-v0.1.png)
 
  The above loss curve was generated from the run's private wandb.ai log.
 
@@ -47,7 +47,7 @@ Example 1:
  **User**:
  > You are a helpful assistant. Write me a numbered list of things to do in New York City.\n
 
- **llama-2-70b-dolphin-peft**:
+ **llama-2-70b-instruct-v0.1-peft**:
  > 1. Visit the Statue of Liberty: Take a ferry to Liberty Island and see the iconic symbol of freedom and democracy.
  > 2. Explore Central Park: Enjoy the lush greenery, take a stroll, or rent a bike to explore the park's many attractions, such as the Central Park Zoo and Strawberry Fields.
  > 3. Visit the Empire State Building: Take an elevator to the observation deck for breathtaking views of the city.
@@ -65,7 +65,7 @@ Example 2:
 
  > You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n
 
- **llama-2-70b-dolphin-peft**:
+ **llama-2-70b-instruct-v0.1-peft**:
 
  > Subject: Dinner Party Invitation - Friday, 7 PM
  >
@@ -88,7 +88,7 @@ Example 3:
 
  > You are a helpful assistant. Tell me a recipe for vegan banana bread.\n
 
- **llama-2-70b-dolphin-peft**:
+ **llama-2-70b-instruct-v0.1-peft**:
 
  > Sure! Here's a delicious and easy vegan banana bread recipe:
  >
@@ -227,7 +227,7 @@ print(tokenizer.decode(output["sequences"][0], skip_special_tokens=True))
  |:-----------------------------:|:----------------------:|:---------------------:|:-------------:|:-----------------------:|
  | 4.50 | 1x H100 (80 GB PCIe) | torch | nf4 | 39 |
 
- The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-llama-2-70b-dolphin-peft.ipynb).
+ The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-llama-2-70b-instruct-v0.1-peft.ipynb).
 
  ## Acknowledgements
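
The README above notes that the 70B base model was finetuned and served on a single H100 (80 GB) using nf4 4-bit quantization. As a back-of-envelope sanity check of why that fits, here is a minimal sketch; it counts only the quantized weights, and ignores activations, the LoRA adapter parameters, and per-block quantization constants, so real memory use is somewhat higher:

```python
# Rough memory estimate for nf4-quantized weights of a 70B-parameter model.
# Assumption: 70e9 parameters at 4 bits each (overhead terms ignored).
params = 70e9
bytes_per_param = 4 / 8  # 4-bit weights -> 0.5 bytes per parameter
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # ~35 GB, well under the H100's 80 GB
```

This is only an order-of-magnitude check; the remaining headroom is what the KV cache, activations, and LoRA optimizer state consume during finetuning and inference.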