Model save

Files changed:
- README.md (+78, -0)
- all_results.json (+9, -0)
- train_results.json (+9, -0)
- trainer_state.json (+0, -0)
README.md (ADDED)
@@ -0,0 +1,78 @@
---
base_model: meta-llama/Meta-Llama-3.1-8B
datasets:
- generator
library_name: peft
license: llama3.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama3.1-8b-summarize-gpt4o-128k
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llama3.1-8b-summarize-gpt4o-128k

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0859
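
Since this is a PEFT adapter rather than a full checkpoint, a minimal loading sketch may help; it is not part of the original commit, and the adapter repo id below is a placeholder for wherever this card is published.

```python
# Minimal, illustrative sketch: load the base model and attach this PEFT adapter.
# The adapter repo id is a placeholder; substitute the actual published id.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B"
adapter_id = "your-username/llama3.1-8b-summarize-gpt4o-128k"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

prompt = "Summarize the following article:\n\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```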

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
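
For orientation, here is a hedged sketch of how these hyperparameters could map onto a TRL `SFTTrainer` run. The actual training script is not part of this commit; the LoRA settings and dataset wiring below are assumptions, not the original configuration.

```python
# Hypothetical reconstruction of the training setup implied by the
# hyperparameters above; the LoRA values and dataset field are assumptions.
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

train_dataset = Dataset.from_dict({"text": ["example document ... summary ..."]})  # placeholder

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed LoRA settings

args = SFTConfig(
    output_dir="llama3.1-8b-summarize-gpt4o-128k",
    dataset_text_field="text",       # assumed field name
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # x 4 GPUs x 2 accumulation steps = 32 effective
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10,
    seed=42,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3.1-8B",  # base model; the adapter is created via peft_config
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```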

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0008        | 0.9990 | 519  | 2.1032          |
| 0.9747        | 2.0    | 1039 | 2.1444          |
| 0.9289        | 2.9990 | 1558 | 2.2517          |
| 0.8818        | 4.0    | 2078 | 2.4632          |
| 0.8109        | 4.9990 | 2597 | 2.7084          |
| 0.7513        | 6.0    | 3117 | 2.9358          |
| 0.7004        | 6.9990 | 3636 | 3.2769          |
| 0.6466        | 8.0    | 4156 | 3.6948          |
| 0.6132        | 8.9990 | 4675 | 3.9708          |
| 0.5965        | 9.9904 | 5190 | 4.0859          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
all_results.json (ADDED)
@@ -0,0 +1,9 @@
{
    "epoch": 9.990375360923965,
    "total_flos": 7.743588771836199e+18,
    "train_loss": 0.8018066772835792,
    "train_runtime": 21791.6644,
    "train_samples": 129221,
    "train_samples_per_second": 7.627,
    "train_steps_per_second": 0.238
}
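
As an illustrative cross-check (not in the original commit), the throughput figures above are mutually consistent with the 5190 steps shown in the training-results table:

```python
# Cross-check of the reported metrics; values copied from all_results.json
# and the final step count from the training-results table above.
train_runtime_s = 21791.6644
total_steps = 5190
samples_per_s = 7.627

print(f"runtime: {train_runtime_s / 3600:.2f} h")        # ~6.05 h
print(f"steps/s: {total_steps / train_runtime_s:.3f}")   # ~0.238, matching train_steps_per_second
print(f"samples processed: {samples_per_s * train_runtime_s:,.0f}")  # ~166,206 ≈ 32 effective batch x 5190 steps
```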
train_results.json (ADDED)
@@ -0,0 +1,9 @@
{
    "epoch": 9.990375360923965,
    "total_flos": 7.743588771836199e+18,
    "train_loss": 0.8018066772835792,
    "train_runtime": 21791.6644,
    "train_samples": 129221,
    "train_samples_per_second": 7.627,
    "train_steps_per_second": 0.238
}
trainer_state.json (ADDED)
The diff for this file is too large to render. See raw diff.