Update README.md
README.md (changed)

datasets:
- sahil2801/CodeAlpaca-20k
library_name: peft
tags:
- codellama7b
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- codellama7b
- gpt2
---

We finetuned CodeLlama7B on the Code-Alpaca-Instruct dataset (sahil2801/CodeAlpaca-20k) for 5 epochs (~25,000 steps) using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

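Since the result is a PEFT adapter (`library_name: peft`), it can be loaded on top of a CodeLlama-7B base model for inference. The snippet below is only a minimal sketch of that flow, not MonsterAPI's own code: the base checkpoint `codellama/CodeLlama-7b-hf` and the adapter id `<this-adapter-repo>` are stand-ins to replace with the actual paths.

```python
# Minimal sketch: run the LoRA adapter on top of a CodeLlama-7B base with peft.
# Both model ids below are assumptions/placeholders, not taken from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "codellama/CodeLlama-7b-hf"   # assumed base checkpoint
adapter_repo = "<this-adapter-repo>"       # placeholder: id of this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_repo)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```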
This dataset is HuggingFaceH4/CodeAlpaca_20K unfiltered, removing 36 instances of blatant alignment.

The finetuning session completed in 4 hours and cost us only `$16` for the entire run!

#### Hyperparameters & Run details:
- Model Path: meta-llama/CodeLlama7B
- Dataset: sahil2801/CodeAlpaca-20k
- Learning rate: 0.0003
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
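For reference, the run settings above map onto a fairly standard PEFT/Transformers setup. The sketch below only mirrors the values listed in this card; the LoRA rank/alpha/dropout, the split seed, and the output directory are assumptions, and this is not the code MonsterAPI actually executes.

```python
# Sketch of an equivalent open-source configuration for the run above.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments

# Dataset and 90% / 10% train/validation split as listed above (seed is an assumption).
dataset = load_dataset("sahil2801/CodeAlpaca-20k", split="train")
splits = dataset.train_test_split(test_size=0.1, seed=42)

# LoRA settings are assumptions; the card does not state rank/alpha/dropout.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Values taken directly from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="codellama7b-codealpaca-lora",  # assumed output directory
    learning_rate=3e-4,
    num_train_epochs=5,
    gradient_accumulation_steps=1,
)
```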
---
license: apache-2.0
---
