Dataset schema (one row per model):

| column | type | values |
|:--|:--|:--|
| modelId | string | lengths 4-81 |
| tags | list | |
| pipeline_tag | string | 17 classes |
| config | dict | |
| downloads | int64 | 0-59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | lengths 51-438k |
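The rows below follow this schema, one model per row. As a minimal sketch of how such a dump can be loaded and filtered with the `datasets` library (the repository id used here is a placeholder, not the actual source of these rows):

```python
from datasets import load_dataset

# Placeholder dataset id -- substitute the repository this dump actually comes from.
ds = load_dataset("some-org/model-card-dump", split="train")

# Keep only text-classification models with at least 10 downloads.
subset = ds.filter(
    lambda row: row["pipeline_tag"] == "text-classification" and row["downloads"] >= 10
)

for row in subset.select(range(min(3, len(subset)))):
    print(row["modelId"], row["downloads"], row["tags"])
```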
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="gaokaobishuati/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bangla-para-v2-90000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla-para-v2-90000 This model is a fine-tuned version of [mHossain/bangla-para-v2-60000](https://huggingface.co/mHossain/bangla-para-v2-60000) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9199 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 17.5047 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.1574 | 1.0 | 3375 | 0.9199 | 0.0 | 0.0 | 0.0 | 0.0 | 17.5047 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
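The card above lists ROUGE scores and hyperparameters but no inference snippet. A minimal sketch of running this kind of Bengali paraphrase seq2seq checkpoint follows; the repository id is taken from a link that appears in a later card in this dump (mHossain/bangla-para-v2-90000), and the generation settings are illustrative rather than from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mHossain/bangla-para-v2-90000"  # linked from a later card in this dump
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "বাংলা একটি সমৃদ্ধ ভাষা।"  # any Bengali sentence to paraphrase
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```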
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.48 +/- 2.61 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="gaokaobishuati/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - klue metrics: - f1 model-index: - name: kogpt2-base-v2-2-finetuned-klue-ner results: - task: name: Token Classification type: token-classification dataset: name: klue type: klue config: ner split: validation args: ner metrics: - name: F1 type: f1 value: 0.44605997663248087 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kogpt2-base-v2-2-finetuned-klue-ner This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.4259 - F1: 0.4461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6118 | 1.0 | 876 | 0.5517 | 0.3337 | | 0.366 | 2.0 | 1752 | 0.4259 | 0.4461 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
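The card above reports an entity-level F1 of 0.4461 but gives no usage example. A hedged sketch of running such a KLUE-NER checkpoint through the token-classification pipeline; the hub id is a placeholder, since the card only gives the model name:

```python
from transformers import pipeline

# Placeholder hub id -- the card only states the model name, not the full repository path.
ner = pipeline(
    "token-classification",
    model="<user>/kogpt2-base-v2-2-finetuned-klue-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("이순신은 조선 중기의 무신이다."))
```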
AnonymousSub/specter-bert-model_copy_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- tags: - generated_from_trainer model-index: - name: codebert-python-custom-functions-dataset-python results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codebert-python-custom-functions-dataset-python This model is a fine-tuned version of [neulab/codebert-python](https://huggingface.co/neulab/codebert-python) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 17.6391 | 0.02 | 1 | 18.8468 | | 17.4624 | 0.05 | 2 | 16.8714 | | 16.0325 | 0.07 | 3 | 14.7237 | | 13.9159 | 0.09 | 4 | 12.3306 | | 10.0481 | 0.12 | 5 | 9.8566 | | 9.809 | 0.14 | 6 | 7.6512 | | 7.2165 | 0.16 | 7 | 6.0915 | | 5.712 | 0.19 | 8 | 5.3033 | | 5.4211 | 0.21 | 9 | 5.0586 | | 4.8932 | 0.23 | 10 | 5.0131 | | 5.066 | 0.26 | 11 | 4.9536 | | 4.9451 | 0.28 | 12 | 4.8411 | | 5.0093 | 0.3 | 13 | 4.6795 | | 4.6068 | 0.33 | 14 | 4.4849 | | 4.3224 | 0.35 | 15 | 4.2770 | | 3.9837 | 0.37 | 16 | 4.0797 | | 4.1976 | 0.4 | 17 | 3.9111 | | 4.128 | 0.42 | 18 | 3.7831 | | 3.9352 | 0.44 | 19 | 3.6859 | | 4.0264 | 0.47 | 20 | 3.6056 | | 3.8478 | 0.49 | 21 | 3.5358 | | 3.6362 | 0.51 | 22 | 3.4453 | | 3.3028 | 0.53 | 23 | 3.3264 | | 3.1371 | 0.56 | 24 | 3.2070 | | 3.3148 | 0.58 | 25 | 3.0813 | | 3.2673 | 0.6 | 26 | 2.9608 | | 3.0928 | 0.63 | 27 | 2.8464 | | 2.8426 | 0.65 | 28 | 2.7360 | | 2.719 | 0.67 | 29 | 2.6269 | | 2.9311 | 0.7 | 30 | 2.5197 | | 2.7809 | 0.72 | 31 | 2.4133 | | 2.5529 | 0.74 | 32 | 2.3118 | | 2.3033 | 0.77 | 33 | 2.2164 | | 2.3882 | 0.79 | 34 | 2.1272 | | 2.4683 | 0.81 | 35 | 2.0446 | | 2.4469 | 0.84 | 36 | 1.9679 | | 2.2185 | 0.86 | 37 | 1.8938 | | 2.2096 | 0.88 | 38 | 1.8196 | | 2.2544 | 0.91 | 39 | 1.7513 | | 1.9036 | 0.93 | 40 | 1.6817 | | 1.983 | 0.95 | 41 | 1.6117 | | 1.6973 | 0.98 | 42 | 1.5473 | | 1.7622 | 1.0 | 43 | 1.4910 | | 1.8068 | 1.02 | 44 | 1.4393 | | 1.6332 | 1.05 | 45 | 1.3937 | | 1.5962 | 1.07 | 46 | 1.3504 | | 1.5566 | 1.09 | 47 | 1.3093 | | 1.467 | 1.12 | 48 | 1.2705 | | 1.5833 | 1.14 | 49 | 1.2302 | | 1.462 | 1.16 | 50 | 1.1902 | | 1.6844 | 1.19 | 51 | 1.1489 | | 1.4811 | 1.21 | 52 | 1.1072 | | 1.132 | 1.23 | 53 | 1.0670 | | 1.3929 | 1.26 | 54 | 1.0320 | | 1.3733 | 1.28 | 55 | 1.0005 | | 1.3173 | 1.3 | 56 | 0.9679 | | 0.9966 | 1.33 | 57 | 0.9359 | | 1.3424 | 1.35 | 58 | 0.9031 | | 1.0773 | 1.37 | 59 | 0.8694 | | 1.0527 | 1.4 | 60 | 0.8364 | | 1.139 | 1.42 | 61 | 0.8032 | | 1.1167 | 1.44 | 62 | 0.7734 | | 0.7685 | 1.47 | 63 | 0.7460 | | 1.011 | 1.49 | 64 | 0.7211 | | 0.9925 | 1.51 | 65 | 0.6949 | | 1.1036 | 1.53 | 66 | 0.6706 | | 0.9265 | 1.56 | 67 | 0.6462 | | 0.8027 | 1.58 | 68 | 0.6212 | | 0.8847 | 1.6 | 69 | 0.5986 | | 0.8378 | 1.63 | 70 | 0.5767 | | 0.9169 | 1.65 | 71 | 0.5578 | | 0.937 | 1.67 | 72 | 0.5385 | | 0.9755 | 1.7 | 73 | 0.5213 | | 0.8666 | 1.72 | 74 | 0.5065 | | 0.7972 | 1.74 | 75 | 0.4924 | | 0.8839 | 1.77 | 76 | 0.4777 | | 0.6774 | 1.79 
| 77 | 0.4632 | | 0.7032 | 1.81 | 78 | 0.4484 | | 0.595 | 1.84 | 79 | 0.4343 | | 0.733 | 1.86 | 80 | 0.4218 | | 0.7852 | 1.88 | 81 | 0.4105 | | 0.649 | 1.91 | 82 | 0.3998 | | 0.729 | 1.93 | 83 | 0.3904 | | 0.7798 | 1.95 | 84 | 0.3828 | | 0.612 | 1.98 | 85 | 0.3756 | | 0.7662 | 2.0 | 86 | 0.3690 | | 0.5521 | 2.02 | 87 | 0.3612 | | 0.5335 | 2.05 | 88 | 0.3528 | | 0.603 | 2.07 | 89 | 0.3428 | | 0.5893 | 2.09 | 90 | 0.3328 | | 0.6001 | 2.12 | 91 | 0.3214 | | 0.4813 | 2.14 | 92 | 0.3091 | | 0.5835 | 2.16 | 93 | 0.2986 | | 0.4238 | 2.19 | 94 | 0.2899 | | 0.517 | 2.21 | 95 | 0.2826 | | 0.3389 | 2.23 | 96 | 0.2771 | | 0.5621 | 2.26 | 97 | 0.2719 | | 0.5298 | 2.28 | 98 | 0.2676 | | 0.5001 | 2.3 | 99 | 0.2636 | | 0.6028 | 2.33 | 100 | 0.2598 | | 0.4834 | 2.35 | 101 | 0.2560 | | 0.3828 | 2.37 | 102 | 0.2513 | | 0.3283 | 2.4 | 103 | 0.2453 | | 0.362 | 2.42 | 104 | 0.2379 | | 0.3924 | 2.44 | 105 | 0.2312 | | 0.5881 | 2.47 | 106 | 0.2254 | | 0.4036 | 2.49 | 107 | 0.2190 | | 0.3797 | 2.51 | 108 | 0.2131 | | 0.4023 | 2.53 | 109 | 0.2076 | | 0.3779 | 2.56 | 110 | 0.2024 | | 0.5212 | 2.58 | 111 | 0.1968 | | 0.3527 | 2.6 | 112 | 0.1913 | | 0.4868 | 2.63 | 113 | 0.1858 | | 0.3335 | 2.65 | 114 | 0.1804 | | 0.3045 | 2.67 | 115 | 0.1759 | | 0.3618 | 2.7 | 116 | 0.1718 | | 0.304 | 2.72 | 117 | 0.1682 | | 0.3166 | 2.74 | 118 | 0.1647 | | 0.3969 | 2.77 | 119 | 0.1615 | | 0.2572 | 2.79 | 120 | 0.1594 | | 0.3467 | 2.81 | 121 | 0.1569 | | 0.3321 | 2.84 | 122 | 0.1546 | | 0.33 | 2.86 | 123 | 0.1512 | | 0.2282 | 2.88 | 124 | 0.1477 | | 0.2623 | 2.91 | 125 | 0.1437 | | 0.4284 | 2.93 | 126 | 0.1400 | | 0.3089 | 2.95 | 127 | 0.1367 | | 0.3058 | 2.98 | 128 | 0.1333 | | 0.1779 | 3.0 | 129 | 0.1302 | | 0.2915 | 3.02 | 130 | 0.1271 | | 0.2477 | 3.05 | 131 | 0.1240 | | 0.2636 | 3.07 | 132 | 0.1209 | | 0.2607 | 3.09 | 133 | 0.1181 | | 0.2009 | 3.12 | 134 | 0.1158 | | 0.3641 | 3.14 | 135 | 0.1140 | | 0.2024 | 3.16 | 136 | 0.1127 | | 0.232 | 3.19 | 137 | 0.1112 | | 0.3783 | 3.21 | 138 | 0.1096 | | 0.2223 | 3.23 | 139 | 0.1083 | | 0.1471 | 3.26 | 140 | 0.1071 | | 0.2082 | 3.28 | 141 | 0.1057 | | 0.1142 | 3.3 | 142 | 0.1047 | | 0.1785 | 3.33 | 143 | 0.1032 | | 0.1935 | 3.35 | 144 | 0.1013 | | 0.2114 | 3.37 | 145 | 0.0994 | | 0.1827 | 3.4 | 146 | 0.0977 | | 0.2312 | 3.42 | 147 | 0.0960 | | 0.2209 | 3.44 | 148 | 0.0943 | | 0.4012 | 3.47 | 149 | 0.0930 | | 0.2126 | 3.49 | 150 | 0.0914 | | 0.1883 | 3.51 | 151 | 0.0897 | | 0.2101 | 3.53 | 152 | 0.0883 | | 0.2442 | 3.56 | 153 | 0.0868 | | 0.1923 | 3.58 | 154 | 0.0852 | | 0.2541 | 3.6 | 155 | 0.0838 | | 0.2062 | 3.63 | 156 | 0.0823 | | 0.2114 | 3.65 | 157 | 0.0807 | | 0.1809 | 3.67 | 158 | 0.0792 | | 0.2083 | 3.7 | 159 | 0.0777 | | 0.2163 | 3.72 | 160 | 0.0764 | | 0.1913 | 3.74 | 161 | 0.0753 | | 0.2633 | 3.77 | 162 | 0.0744 | | 0.3123 | 3.79 | 163 | 0.0736 | | 0.1271 | 3.81 | 164 | 0.0729 | | 0.2404 | 3.84 | 165 | 0.0722 | | 0.1316 | 3.86 | 166 | 0.0714 | | 0.2107 | 3.88 | 167 | 0.0710 | | 0.2088 | 3.91 | 168 | 0.0703 | | 0.1813 | 3.93 | 169 | 0.0697 | | 0.2049 | 3.95 | 170 | 0.0690 | | 0.2178 | 3.98 | 171 | 0.0685 | | 0.3137 | 4.0 | 172 | 0.0677 | | 0.1661 | 4.02 | 173 | 0.0668 | | 0.0944 | 4.05 | 174 | 0.0658 | | 0.2131 | 4.07 | 175 | 0.0649 | | 0.1349 | 4.09 | 176 | 0.0638 | | 0.2231 | 4.12 | 177 | 0.0627 | | 0.2046 | 4.14 | 178 | 0.0618 | | 0.1806 | 4.16 | 179 | 0.0609 | | 0.1463 | 4.19 | 180 | 0.0601 | | 0.1696 | 4.21 | 181 | 0.0593 | | 0.1271 | 4.23 | 182 | 0.0584 | | 0.132 | 4.26 | 183 | 0.0575 | | 0.1295 | 4.28 | 184 | 0.0566 | | 0.182 | 4.3 | 185 | 0.0557 | | 0.1896 | 4.33 | 186 
| 0.0548 | | 0.1694 | 4.35 | 187 | 0.0540 | | 0.2121 | 4.37 | 188 | 0.0534 | | 0.1299 | 4.4 | 189 | 0.0530 | | 0.1471 | 4.42 | 190 | 0.0523 | | 0.0996 | 4.44 | 191 | 0.0518 | | 0.2355 | 4.47 | 192 | 0.0514 | | 0.1295 | 4.49 | 193 | 0.0511 | | 0.1528 | 4.51 | 194 | 0.0508 | | 0.1407 | 4.53 | 195 | 0.0506 | | 0.1246 | 4.56 | 196 | 0.0502 | | 0.1635 | 4.58 | 197 | 0.0497 | | 0.1935 | 4.6 | 198 | 0.0492 | | 0.1354 | 4.63 | 199 | 0.0485 | | 0.1051 | 4.65 | 200 | 0.0477 | | 0.0837 | 4.67 | 201 | 0.0468 | | 0.1822 | 4.7 | 202 | 0.0459 | | 0.127 | 4.72 | 203 | 0.0450 | | 0.1133 | 4.74 | 204 | 0.0443 | | 0.1641 | 4.77 | 205 | 0.0435 | | 0.1318 | 4.79 | 206 | 0.0428 | | 0.1544 | 4.81 | 207 | 0.0421 | | 0.1003 | 4.84 | 208 | 0.0413 | | 0.1205 | 4.86 | 209 | 0.0406 | | 0.1348 | 4.88 | 210 | 0.0399 | | 0.0895 | 4.91 | 211 | 0.0393 | | 0.095 | 4.93 | 212 | 0.0388 | | 0.138 | 4.95 | 213 | 0.0383 | | 0.117 | 4.98 | 214 | 0.0378 | | 0.0997 | 5.0 | 215 | 0.0374 | | 0.0989 | 5.02 | 216 | 0.0372 | | 0.0973 | 5.05 | 217 | 0.0370 | | 0.0848 | 5.07 | 218 | 0.0370 | | 0.1232 | 5.09 | 219 | 0.0370 | | 0.1271 | 5.12 | 220 | 0.0370 | | 0.1636 | 5.14 | 221 | 0.0369 | | 0.0893 | 5.16 | 222 | 0.0367 | | 0.125 | 5.19 | 223 | 0.0364 | | 0.1305 | 5.21 | 224 | 0.0360 | | 0.091 | 5.23 | 225 | 0.0356 | | 0.1198 | 5.26 | 226 | 0.0352 | | 0.074 | 5.28 | 227 | 0.0346 | | 0.1208 | 5.3 | 228 | 0.0340 | | 0.1737 | 5.33 | 229 | 0.0334 | | 0.1277 | 5.35 | 230 | 0.0326 | | 0.0811 | 5.37 | 231 | 0.0317 | | 0.0726 | 5.4 | 232 | 0.0310 | | 0.0803 | 5.42 | 233 | 0.0304 | | 0.0793 | 5.44 | 234 | 0.0299 | | 0.0595 | 5.47 | 235 | 0.0296 | | 0.0915 | 5.49 | 236 | 0.0293 | | 0.1049 | 5.51 | 237 | 0.0291 | | 0.1088 | 5.53 | 238 | 0.0289 | | 0.1269 | 5.56 | 239 | 0.0287 | | 0.0589 | 5.58 | 240 | 0.0286 | | 0.1118 | 5.6 | 241 | 0.0285 | | 0.0834 | 5.63 | 242 | 0.0283 | | 0.1003 | 5.65 | 243 | 0.0280 | | 0.0704 | 5.67 | 244 | 0.0278 | | 0.0892 | 5.7 | 245 | 0.0275 | | 0.1004 | 5.72 | 246 | 0.0273 | | 0.119 | 5.74 | 247 | 0.0272 | | 0.1096 | 5.77 | 248 | 0.0270 | | 0.1554 | 5.79 | 249 | 0.0269 | | 0.0903 | 5.81 | 250 | 0.0267 | | 0.0829 | 5.84 | 251 | 0.0266 | | 0.1561 | 5.86 | 252 | 0.0265 | | 0.0707 | 5.88 | 253 | 0.0264 | | 0.1329 | 5.91 | 254 | 0.0262 | | 0.104 | 5.93 | 255 | 0.0256 | | 0.0873 | 5.95 | 256 | 0.0250 | | 0.0954 | 5.98 | 257 | 0.0245 | | 0.1387 | 6.0 | 258 | 0.0239 | | 0.094 | 6.02 | 259 | 0.0234 | | 0.0989 | 6.05 | 260 | 0.0231 | | 0.1187 | 6.07 | 261 | 0.0228 | | 0.1052 | 6.09 | 262 | 0.0226 | | 0.0981 | 6.12 | 263 | 0.0225 | | 0.0529 | 6.14 | 264 | 0.0225 | | 0.0757 | 6.16 | 265 | 0.0225 | | 0.0742 | 6.19 | 266 | 0.0224 | | 0.0532 | 6.21 | 267 | 0.0223 | | 0.0487 | 6.23 | 268 | 0.0223 | | 0.0899 | 6.26 | 269 | 0.0222 | | 0.0479 | 6.28 | 270 | 0.0222 | | 0.1019 | 6.3 | 271 | 0.0221 | | 0.0422 | 6.33 | 272 | 0.0218 | | 0.0581 | 6.35 | 273 | 0.0216 | | 0.0725 | 6.37 | 274 | 0.0214 | | 0.0703 | 6.4 | 275 | 0.0211 | | 0.0943 | 6.42 | 276 | 0.0209 | | 0.0732 | 6.44 | 277 | 0.0207 | | 0.072 | 6.47 | 278 | 0.0204 | | 0.064 | 6.49 | 279 | 0.0201 | | 0.0433 | 6.51 | 280 | 0.0199 | | 0.0705 | 6.53 | 281 | 0.0197 | | 0.0928 | 6.56 | 282 | 0.0195 | | 0.067 | 6.58 | 283 | 0.0194 | | 0.0527 | 6.6 | 284 | 0.0192 | | 0.0771 | 6.63 | 285 | 0.0190 | | 0.0934 | 6.65 | 286 | 0.0189 | | 0.0687 | 6.67 | 287 | 0.0188 | | 0.0915 | 6.7 | 288 | 0.0186 | | 0.0729 | 6.72 | 289 | 0.0186 | | 0.0966 | 6.74 | 290 | 0.0185 | | 0.1624 | 6.77 | 291 | 0.0184 | | 0.0735 | 6.79 | 292 | 0.0183 | | 0.0664 | 6.81 | 293 | 0.0183 | | 0.0718 | 6.84 | 294 | 0.0181 | | 
0.0455 | 6.86 | 295 | 0.0179 | | 0.1095 | 6.88 | 296 | 0.0177 | | 0.062 | 6.91 | 297 | 0.0176 | | 0.0753 | 6.93 | 298 | 0.0174 | | 0.0967 | 6.95 | 299 | 0.0172 | | 0.0988 | 6.98 | 300 | 0.0171 | | 0.0919 | 7.0 | 301 | 0.0169 | | 0.0554 | 7.02 | 302 | 0.0167 | | 0.0858 | 7.05 | 303 | 0.0166 | | 0.092 | 7.07 | 304 | 0.0165 | | 0.0299 | 7.09 | 305 | 0.0164 | | 0.0933 | 7.12 | 306 | 0.0163 | | 0.0764 | 7.14 | 307 | 0.0162 | | 0.0657 | 7.16 | 308 | 0.0161 | | 0.0563 | 7.19 | 309 | 0.0159 | | 0.0893 | 7.21 | 310 | 0.0158 | | 0.0988 | 7.23 | 311 | 0.0157 | | 0.0574 | 7.26 | 312 | 0.0156 | | 0.1329 | 7.28 | 313 | 0.0155 | | 0.0853 | 7.3 | 314 | 0.0153 | | 0.0585 | 7.33 | 315 | 0.0152 | | 0.0747 | 7.35 | 316 | 0.0151 | | 0.0494 | 7.37 | 317 | 0.0150 | | 0.0368 | 7.4 | 318 | 0.0149 | | 0.0907 | 7.42 | 319 | 0.0149 | | 0.0698 | 7.44 | 320 | 0.0148 | | 0.0571 | 7.47 | 321 | 0.0148 | | 0.0456 | 7.49 | 322 | 0.0147 | | 0.0492 | 7.51 | 323 | 0.0147 | | 0.0686 | 7.53 | 324 | 0.0146 | | 0.067 | 7.56 | 325 | 0.0145 | | 0.0548 | 7.58 | 326 | 0.0144 | | 0.0415 | 7.6 | 327 | 0.0143 | | 0.0816 | 7.63 | 328 | 0.0143 | | 0.0529 | 7.65 | 329 | 0.0142 | | 0.0818 | 7.67 | 330 | 0.0142 | | 0.0594 | 7.7 | 331 | 0.0142 | | 0.0387 | 7.72 | 332 | 0.0141 | | 0.0561 | 7.74 | 333 | 0.0141 | | 0.0875 | 7.77 | 334 | 0.0140 | | 0.0534 | 7.79 | 335 | 0.0140 | | 0.052 | 7.81 | 336 | 0.0140 | | 0.0894 | 7.84 | 337 | 0.0140 | | 0.0384 | 7.86 | 338 | 0.0139 | | 0.0353 | 7.88 | 339 | 0.0139 | | 0.0843 | 7.91 | 340 | 0.0138 | | 0.059 | 7.93 | 341 | 0.0138 | | 0.0538 | 7.95 | 342 | 0.0138 | | 0.0637 | 7.98 | 343 | 0.0137 | | 0.0492 | 8.0 | 344 | 0.0137 | | 0.0424 | 8.02 | 345 | 0.0137 | | 0.0416 | 8.05 | 346 | 0.0136 | | 0.0608 | 8.07 | 347 | 0.0135 | | 0.0158 | 8.09 | 348 | 0.0135 | | 0.0551 | 8.12 | 349 | 0.0134 | | 0.0954 | 8.14 | 350 | 0.0134 | | 0.0327 | 8.16 | 351 | 0.0133 | | 0.0329 | 8.19 | 352 | 0.0133 | | 0.0604 | 8.21 | 353 | 0.0133 | | 0.0565 | 8.23 | 354 | 0.0133 | | 0.0677 | 8.26 | 355 | 0.0133 | | 0.0483 | 8.28 | 356 | 0.0133 | | 0.0576 | 8.3 | 357 | 0.0133 | | 0.0515 | 8.33 | 358 | 0.0132 | | 0.0381 | 8.35 | 359 | 0.0132 | | 0.0436 | 8.37 | 360 | 0.0132 | | 0.0695 | 8.4 | 361 | 0.0132 | | 0.0564 | 8.42 | 362 | 0.0132 | | 0.0784 | 8.44 | 363 | 0.0132 | | 0.0582 | 8.47 | 364 | 0.0132 | | 0.0476 | 8.49 | 365 | 0.0132 | | 0.1157 | 8.51 | 366 | 0.0131 | | 0.0603 | 8.53 | 367 | 0.0131 | | 0.05 | 8.56 | 368 | 0.0131 | | 0.0428 | 8.58 | 369 | 0.0130 | | 0.0283 | 8.6 | 370 | 0.0130 | | 0.0726 | 8.63 | 371 | 0.0129 | | 0.0681 | 8.65 | 372 | 0.0129 | | 0.0753 | 8.67 | 373 | 0.0128 | | 0.0813 | 8.7 | 374 | 0.0127 | | 0.0592 | 8.72 | 375 | 0.0127 | | 0.0602 | 8.74 | 376 | 0.0127 | | 0.0552 | 8.77 | 377 | 0.0127 | | 0.0547 | 8.79 | 378 | 0.0126 | | 0.0777 | 8.81 | 379 | 0.0126 | | 0.0719 | 8.84 | 380 | 0.0126 | | 0.0641 | 8.86 | 381 | 0.0125 | | 0.0326 | 8.88 | 382 | 0.0125 | | 0.0431 | 8.91 | 383 | 0.0125 | | 0.0813 | 8.93 | 384 | 0.0125 | | 0.077 | 8.95 | 385 | 0.0125 | | 0.1018 | 8.98 | 386 | 0.0125 | | 0.0619 | 9.0 | 387 | 0.0125 | | 0.0525 | 9.02 | 388 | 0.0125 | | 0.0501 | 9.05 | 389 | 0.0125 | | 0.057 | 9.07 | 390 | 0.0125 | | 0.069 | 9.09 | 391 | 0.0125 | | 0.0258 | 9.12 | 392 | 0.0125 | | 0.0639 | 9.14 | 393 | 0.0125 | | 0.0696 | 9.16 | 394 | 0.0125 | | 0.0407 | 9.19 | 395 | 0.0125 | | 0.0465 | 9.21 | 396 | 0.0124 | | 0.084 | 9.23 | 397 | 0.0124 | | 0.0483 | 9.26 | 398 | 0.0124 | | 0.1181 | 9.28 | 399 | 0.0124 | | 0.0383 | 9.3 | 400 | 0.0124 | | 0.0574 | 9.33 | 401 | 0.0124 | | 0.0475 | 9.35 | 402 | 0.0124 | | 0.0549 | 9.37 
| 403 | 0.0124 | | 0.0594 | 9.4 | 404 | 0.0124 | | 0.0428 | 9.42 | 405 | 0.0124 | | 0.0825 | 9.44 | 406 | 0.0124 | | 0.0617 | 9.47 | 407 | 0.0124 | | 0.0584 | 9.49 | 408 | 0.0124 | | 0.0579 | 9.51 | 409 | 0.0124 | | 0.0521 | 9.53 | 410 | 0.0124 | | 0.0703 | 9.56 | 411 | 0.0124 | | 0.0279 | 9.58 | 412 | 0.0123 | | 0.0469 | 9.6 | 413 | 0.0123 | | 0.0492 | 9.63 | 414 | 0.0123 | | 0.0308 | 9.65 | 415 | 0.0123 | | 0.0342 | 9.67 | 416 | 0.0123 | | 0.0486 | 9.7 | 417 | 0.0123 | | 0.0284 | 9.72 | 418 | 0.0123 | | 0.0325 | 9.74 | 419 | 0.0123 | | 0.043 | 9.77 | 420 | 0.0123 | | 0.0602 | 9.79 | 421 | 0.0123 | | 0.0349 | 9.81 | 422 | 0.0122 | | 0.0569 | 9.84 | 423 | 0.0122 | | 0.0513 | 9.86 | 424 | 0.0122 | | 0.0455 | 9.88 | 425 | 0.0122 | | 0.0936 | 9.91 | 426 | 0.0122 | | 0.0484 | 9.93 | 427 | 0.0122 | | 0.0556 | 9.95 | 428 | 0.0122 | | 0.0352 | 9.98 | 429 | 0.0122 | | 0.0594 | 10.0 | 430 | 0.0122 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Anorak/nirvana
[ "pytorch", "pegasus", "text2text-generation", "unk", "dataset:Anorak/autonlp-data-Niravana-test2", "transformers", "autonlp", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "PegasusForConditionalGeneration" ], "model_type": "pegasus", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-05-06T10:14:51Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Marc-Elie/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AnthonyNelson/DialoGPT-small-ricksanchez
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Marc-Elie/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Anthos23/distilbert-base-uncased-finetuned-sst2
[ "tf", "tensorboard", "distilbert", "text-classification", "transformers", "generated_from_keras_callback", "license:apache-2.0" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: mit tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: roberta-base-squadv2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-squadv2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.6731 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1898 | 1.0 | 1221 | 1.5332 | | 0.7719 | 2.0 | 2443 | 1.5191 | | 0.5484 | 3.0 | 3663 | 1.6731 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
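The card above evaluates on squad_v2 but includes no inference snippet. A minimal question-answering sketch, with a placeholder hub id since the card only gives the model name:

```python
from transformers import pipeline

# Placeholder hub id -- the card only states the model name "roberta-base-squadv2".
qa = pipeline("question-answering", model="<user>/roberta-base-squadv2")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="roberta-base-squadv2 is a fine-tuned version of roberta-base on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```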
Anthos23/test_trainer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Eleonora Dreambooth model trained by onezell with TheLastBen's fast-DreamBooth notebook
AntonClaesson/finetuning_test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - vi metrics: - f1 pipeline_tag: fill-mask license: mit datasets: - VietAI/vi_pubmed tags: - transformer - vietnamese - nlp - bert - deberta - deberta-v2 --- # ViPubMedDeBERTa: A Vietnamese pretrained biomedical language representation model ## Model description ## Model variations ## How to use You can use this model directly with a pipeline for masked language modeling:<br> **_NOTE:_** The input text should be already word-segmented, you can use [Pyvi](https://github.com/trungtv/pyvi) (Python Vietnamese Core NLP Toolkit) to segment word before passing to the model. ```python >>> from transformers import pipeline >>> model = pipeline('fill-mask', model='manhtt-079/vipubmed-deberta-base') >>> text_with_mask = """Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ) . FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm . Phẫu_thuật được coi là phương_thức điều_trị tốt nhất , tiếp_theo là hóa_trị . Trong trường_hợp của chúng_tôi , [MASK] cắt bỏ không_thể thực_hiện được , do đó bệnh_nhân được hóa_trị hai dòng , sau đó là cấy_ghép tủy xương , sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên""" >>> model(text_with_mask) [{'score': 0.7657408118247986, 'token': 1621, 'token_str': 'phẫu_thuật', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, phẫu_thuật cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.20739449560642242, 'token': 83, 'token_str': 'việc', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, việc cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.008736048825085163, 'token': 589, 'token_str': 'phương_pháp', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, phương_pháp cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.0018989236559718847, 'token': 485, 'token_str': 'quá_trình', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. 
Trong trường_hợp của chúng_tôi, quá_trình cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}, {'score': 0.0015389806358143687, 'token': 3766, 'token_str': 'chẩn_đoán', 'sequence': 'Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS ). FDCS là bệnh rất hiếm ảnh_hưởng đến tế_bào trình_diện kháng_nguyên đuôi gai và thường bị chẩn_đoán nhầm. Phẫu_thuật được coi là phương_thức điều_trị tốt nhất, tiếp_theo là hóa_trị. Trong trường_hợp của chúng_tôi, chẩn_đoán cắt bỏ không_thể thực_hiện được, do đó bệnh_nhân được hóa_trị hai dòng, sau đó là cấy_ghép tủy xương, sau đó là hóa_trị ba với đáp_ứng trao_đổi chất hoàn_toàn được thấy trên'}] ``` #### Get features: - With PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('manhtt-079/vipubmed-deberta-base') model = AutoModel.from_pretrained("manhtt-079/vipubmed-deberta-base") text = "Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS )." model_inputs = tokenizer(text, return_tensors='pt') outputs = model(**model_inputs) ``` - With TensorFlow ```python from transformers import AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained('manhtt-079/vipubmed-deberta-base') model = TFAutoModel.from_pretrained("manhtt-079/vipubmed-deberta-base") text = "Chúng_tôi mô_tả một trường_hợp bệnh_nhân nữ 44 tuổi được chẩn_đoán sarcoma tế_bào tua nang ( FDCS )." model_inputs = tokenizer(text, return_tensors='tf') outputs = model(**model_inputs) ``` ## Pre-training data The ViPubMedDeBERTa model was pre-trained on [ViPubmed](https://github.com/vietai/ViPubmed), a dataset consisting of 20M Vietnamese Biomedical abstracts generated by large scale translation. ## Training procedure ### Data deduplication A fuzzy deduplication, targeting documents with high overlap, was conducted at the document level to enhance quality and address overfitting. Employing Locality Sensitive Hashing (LSH) with a threshold of 0.9 ensured the removal of documents with overlap exceeding 90%. This process resulted in an average reduction of the dataset's size by 3%. ### Pretraining We employ our model based on the [ViDeBERTa](https://github.com/HySonLab/ViDeBERTa) architecture and leverage its pre-trained checkpoint to continue pre-training. Our model was trained on a single A100 GPU (40GB) for 270 thousand steps, with a batch size of 16 and gradient accumulation steps set to 4 (resulting in a total of 64). The sequence length was limited to 512 tokens and the model peak learning rate of 1e-4. ## Evaluation results
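The deduplication step described above (fuzzy, document-level, Locality Sensitive Hashing at a 0.9 overlap threshold) can be sketched with MinHash signatures; the word-level shingling and `num_perm` value below are assumptions, not the authors' exact configuration:

```python
from datasketch import MinHash, MinHashLSH

def minhash(doc: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in doc.split():  # word shingles; the exact shingling used is not specified
        m.update(token.encode("utf8"))
    return m

docs = {"d1": "a b c d e", "d2": "a b c d f", "d3": "x y z"}
lsh = MinHashLSH(threshold=0.9, num_perm=128)  # drop documents whose estimated overlap exceeds 90%
kept = []
for key, text in docs.items():
    sig = minhash(text)
    if lsh.query(sig):  # a near-duplicate is already kept, so skip this document
        continue
    lsh.insert(key, sig)
    kept.append(key)
print(kept)
```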
AntonClaesson/movie-plot-generator
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola_sepehr_sepehr_sepehr_saturday_new results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5126228485857701 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola_sepehr_sepehr_sepehr_saturday_new This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4760 - Matthews Correlation: 0.5126 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4957 | 1.0 | 535 | 0.4760 | 0.5126 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
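Matthews correlation, the metric reported above, can be computed directly with scikit-learn; the labels below are toy values rather than the CoLA validation split:

```python
from sklearn.metrics import matthews_corrcoef

# Toy labels and predictions; the card's 0.5126 comes from the GLUE CoLA validation set.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # -1 = inverse, 0 = chance, 1 = perfect
```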
Antony/mint_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # HasinMDG/mpnet-base-v2-IPTC-L1 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("HasinMDG/mpnet-base-v2-IPTC-L1") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Anubhav23/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - klue metrics: - f1 model-index: - name: kogpt2-base-v2-5-finetuned-klue-ner results: - task: name: Token Classification type: token-classification dataset: name: klue type: klue config: ner split: validation args: ner metrics: - name: F1 type: f1 value: 0.5144974226804124 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kogpt2-base-v2-5-finetuned-klue-ner This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.4425 - F1: 0.5145 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6215 | 1.0 | 876 | 0.5607 | 0.3318 | | 0.4067 | 2.0 | 1752 | 0.5554 | 0.3609 | | 0.3128 | 3.0 | 2628 | 0.4259 | 0.4569 | | 0.2409 | 4.0 | 3504 | 0.4314 | 0.4894 | | 0.1874 | 5.0 | 4380 | 0.4425 | 0.5145 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Anupam/QuestionClassifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - zh - en tags: - glm - chatglm - thudm --- # ChatGLM-6B ## 介绍 ChatGLM-6B 是一个开源的、支持中英双语问答的对话语言模型,基于 [General Language Model (GLM)](https://github.com/THUDM/GLM) 架构,具有 62 亿参数。结合模型量化技术,用户可以在消费级的显卡上进行本地部署(INT4 量化级别下最低只需 6GB 显存)。ChatGLM-6B 使用了和 [ChatGLM](https://chatglm.cn) 相同的技术,针对中文问答和对话进行了优化。经过约 1T 标识符的中英双语训练,辅以监督微调、反馈自助、人类反馈强化学习等技术的加持,62 亿参数的 ChatGLM-6B 已经能生成相当符合人类偏好的回答。 ChatGLM-6B is an open bilingual language model based on [General Language Model (GLM)](https://github.com/THUDM/GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese QA and dialogue. The model is trained for about 1T tokens of Chinese and English corpus, supplemented by supervised fine-tuning, feedback bootstrap, and reinforcement learning wit human feedback. With only about 6.2 billion parameters, the model is able to generate answers that are in line with human preference. ## 软件依赖 ```shell pip install protobuf==3.20.0 transformers==4.26.1 icetk cpm_kernels ``` ## 代码调用 可以通过如下代码调用 ChatGLM-6B 模型来生成对话: ```ipython >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) >>> model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda() >>> response, history = model.chat(tokenizer, "你好", history=[]) >>> print(response) 你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。 >>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history) >>> print(response) 晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法: 1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。 2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。 3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。 4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。 5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。 6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。 如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。 ``` 关于更多的使用说明,包括如何运行命令行和网页版本的 DEMO,以及使用模型量化以节省显存,请参考我们的 [Github Repo](https://github.com/THUDM/ChatGLM-6B)。 For more instructions, including how to run CLI and web demos, and model quantization, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM-6B). ## 协议 本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源,ChatGLM-6B 模型的权重的使用则需要遵循 [Model License](MODEL_LICENSE)。 ## 引用 如果你觉得我们的工作有帮助的话,请考虑引用下列论文: ``` @inproceedings{ zeng2023glm-130b, title={{GLM}-130B: An Open Bilingual Pre-trained Model}, author={Aohan Zeng and Xiao Liu and Zhengxiao Du and Zihan Wang and Hanyu Lai and Ming Ding and Zhuoyi Yang and Yifan Xu and Wendi Zheng and Xiao Xia and Weng Lam Tam and Zixuan Ma and Yufei Xue and Jidong Zhai and Wenguang Chen and Zhiyuan Liu and Peng Zhang and Yuxiao Dong and Jie Tang}, booktitle={The Eleventh International Conference on Learning Representations (ICLR)}, year={2023}, url={https://openreview.net/forum?id=-Aw0rrrPUF} } ``` ``` @inproceedings{du2022glm, title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling}, author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}, pages={320--335}, year={2022} } ```
gaurishhs/API
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bangla-para-v2-120000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla-para-v2-120000 This model is a fine-tuned version of [mHossain/bangla-para-v2-90000](https://huggingface.co/mHossain/bangla-para-v2-90000) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9277 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 17.575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.1615 | 1.0 | 3375 | 0.9277 | 0.0 | 0.0 | 0.0 | 0.0 | 17.575 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Apoorva/k2t-test
[ "pytorch", "t5", "text2text-generation", "en", "transformers", "keytotext", "k2t", "Keywords to Sentences", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
7
2023-05-06T10:49:42Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: validation args: conll2003 metrics: - name: Precision type: precision value: 0.7227722772277227 - name: Recall type: recall value: 0.7227722772277227 - name: F1 type: f1 value: 0.7176271186440678 - name: Accuracy type: accuracy value: 0.9290916805147968 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.1989 - Precision: 0.7228 - Recall: 0.7228 - F1: 0.7176 - Accuracy: 0.9291 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3391 | 1.0 | 1756 | 0.2745 | 0.6765 | 0.6765 | 0.6035 | 0.8948 | | 0.1825 | 2.0 | 3512 | 0.2068 | 0.6986 | 0.6986 | 0.6914 | 0.9215 | | 0.1382 | 3.0 | 5268 | 0.1989 | 0.7228 | 0.7228 | 0.7176 | 0.9291 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
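The precision, recall, and F1 figures above are entity-level scores over BIO tags; a small `seqeval` sketch (toy tags, not CoNLL-2003 data) shows how such numbers are computed:

```python
from seqeval.metrics import precision_score, recall_score, f1_score, accuracy_score

# Entity-level scoring over BIO-tagged sequences (toy example).
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-PER", "O"]]
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred),
      f1_score(y_true, y_pred), accuracy_score(y_true, y_pred))
```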
Appolo/TestModel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 245.02 +/- 46.22 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
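The usage section above is left as a TODO. A minimal sketch of filling it in with `huggingface_sb3` and stable-baselines3 follows; the repository id and filename are placeholders, since the card states neither:

```python
import gymnasium as gym  # or `import gym`, depending on the stable-baselines3 version installed
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repository id and filename -- the card does not state either.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _states = model.predict(obs, deterministic=True)
```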
ArBert/albert-base-v2-finetuned-ner-agglo-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 284.45 +/- 20.08 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
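This card's usage section is the same TODO; a hedged sketch of checking the reported mean reward with `evaluate_policy` (placeholder repository id and filename again, and an arbitrary episode count):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repository id and filename; 10 evaluation episodes is an arbitrary choice.
model = PPO.load(load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip"))
mean_reward, std_reward = evaluate_policy(model, gym.make("LunarLander-v2"), n_eval_episodes=10)
print(f"{mean_reward:.2f} +/- {std_reward:.2f}")
```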
ArBert/albert-base-v2-finetuned-ner-gmm-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-05-06T10:57:38Z
--- license: mit tags: - generated_from_trainer model-index: - name: gptneo-txt2ARXMLv1.3.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gptneo-txt2ARXMLv1.3.0 This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.5205 | 0.98 | 45 | 1.4385 | | 0.7083 | 1.98 | 91 | 0.7334 | | 0.5779 | 2.99 | 137 | 0.5942 | | 0.531 | 3.99 | 183 | 0.4915 | | 0.3721 | 4.9 | 225 | 0.4324 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
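A hedged sketch of prompting a causal-LM checkpoint like this one through the text-generation pipeline; the repo path is a placeholder, and the prompt format is an assumption since the card does not document how the text/ARXML pairs were formatted:

```python
from transformers import pipeline

# Placeholder repo id -- the real checkpoint lives under its owner's namespace.
generator = pipeline("text-generation", model="your-username/gptneo-txt2ARXMLv1.3.0")

# The prompt below is a made-up example; the expected input format is not specified in the card.
prompt = "Describe the signal mapping:"
print(generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])
```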
ArBert/albert-base-v2-finetuned-ner-kmeans-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: ja license: apache-2.0 tags: - SudachiTra - Sudachi - SudachiPy - bert - Japanese - NWJC datasets: - NWJC --- # bert-base-sudachitra-v11 This model is a variant of SudachiTra. The differences between the original `chiTra v1.1` and `bert-base-sudachitra-v11` are: - `word_form_type` was changed from `normalized_nouns` to `surface` - Two consecutive empty lines in `vocab.txt` were replaced with a dummy entry and a single empty line Also read the original `README.md` descriptions below. *(See [GitHub - WorksApplications/SudachiTra](https://github.com/WorksApplications/SudachiTra) for the latest README)* # Sudachi Transformers (chiTra) chiTra provides pre-trained language models and a Japanese tokenizer for [Transformers](https://github.com/huggingface/transformers). ## chiTra pretrained language model We used the [NINJAL Web Japanese Corpus (NWJC)](https://pj.ninjal.ac.jp/corpus_center/nwjc/) from the National Institute for Japanese Language and Linguistics, which contains around 100 million web pages of text. NWJC was cleaned to remove unnecessary sentences before training. The model was pre-trained with the BERT pre-training scripts implemented by [NVIDIA](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow2/LanguageModeling/BERT). ## License Copyright (c) 2022 National Institute for Japanese Language and Linguistics and Works Applications Co., Ltd. All rights reserved. "chiTra" is distributed by the [National Institute for Japanese Language and Linguistics](https://www.ninjal.ac.jp/) and [Works Applications Co., Ltd.](https://www.worksap.co.jp/) under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ## Citation ``` @INPROCEEDINGS{katsuta2022chitra, author = {勝田哲弘, 林政義, 山村崇, Tolmachev Arseny, 高岡一馬, 内田佳孝, 浅原正幸}, title = {単語正規化による表記ゆれに頑健な BERT モデルの構築}, booktitle = "言語処理学会第28回年次大会(NLP2022)", year = "2022", pages = "", publisher = "言語処理学会", } ```
ArBert/albert-base-v2-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.46698933079472565 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5629 - Matthews Correlation: 0.4670 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.866149341238024e-06 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5043 | 1.0 | 2138 | 0.5637 | 0.3863 | | 0.4399 | 2.0 | 4276 | 0.5629 | 0.4670 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
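A minimal sketch of running this kind of CoLA (linguistic acceptability) classifier; the repository path is a placeholder, and the label names depend on the exported config:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual hub path of the fine-tuned checkpoint.
clf = pipeline("text-classification", model="your-username/bert-base-uncased-finetuned-cola")

# CoLA is a binary acceptability task; label names (e.g. LABEL_0 / LABEL_1) depend on the saved config.
print(clf("The book was written by the author."))
print(clf("The book was written the author by."))
```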
ArBert/bert-base-uncased-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
**Segformer model** trained on the **sidewalk-semantic** dataset for image segmentation
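A hedged sketch of running a SegFormer checkpoint fine-tuned on sidewalk-semantic; the repo path and image file are placeholders:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Placeholder repo id -- substitute the actual fine-tuned checkpoint path.
repo = "your-username/segformer-finetuned-sidewalk-semantic"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sidewalk.jpg")  # placeholder local image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]           # per-pixel class ids at reduced resolution
```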
ArBert/bert-base-uncased-finetuned-ner-kmeans-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### merdo Dreambooth model trained by kursatmert with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
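If the repository is exported in diffusers format (typical for fast-DreamBooth runs), a minimal generation sketch might look like the following; the repo path, instance token, and dtype are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- the actual path is the owner's namespace plus the repo name.
pipe = StableDiffusionPipeline.from_pretrained("your-username/merdo", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "merdo" is assumed to be the instance token the concept was trained on.
image = pipe("a photo of merdo, studio lighting, highly detailed").images[0]
image.save("merdo.png")
```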
ArBert/bert-base-uncased-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-05-06T11:12:18Z
--- license: cc-by-4.0 --- You must use the following in your prompt: photo of smw person with extremely detailed face, perfectly curled moustache, add_your_own_ideas_here
ArBert/roberta-base-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "roberta", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
<h1 style="text-align: left;">BioRestore Complete</h1> <p><strong><span style="color: #ff00fe;">✔For Order Official Website -</span> <a href="https://sale365day.com/get-biorestore-complete">https://sale365day.com/get-biorestore-complete</a></strong></p> <p><strong><span style="color: #800180;">✔Product Name -</span> <a href="https://www.sympla.com.br/produtor/biorestorecompleteget">BioRestore Complete</a><br /></strong></p> <p><strong><span style="color: #2b00fe;">✔Side Effect -</span> <span style="color: #800180;">No Side Effects<br /></span></strong></p> <p><strong><span style="color: #274e13;">✔Availability - </span><a href="https://sale365day.com/get-biorestore-complete">Online</a><br /></strong></p> <p><strong><span style="color: #274e13;">✔</span></strong><strong><span style="color: #274e13;">Rating -</span>⭐⭐⭐⭐⭐</strong></p> <h2 style="text-align: left;"><a href="https://sale365day.com/buy-nuvei-skin-tag-remover"><strong><span style="color: #274e13;">✔</span></strong></a><u><strong><a href="https://sale365day.com/get-biorestore-complete"><span style="color: red;">Hurry Up - Limited Time Offer - Order Now</span></a></strong></u><a href="https://sale365day.com/buy-nuvei-skin-tag-remover"><strong><span style="color: #274e13;">✔</span></strong></a></h2> <h2 style="text-align: left;"><a href="https://sale365day.com/buy-nuvei-skin-tag-remover"><strong><span style="color: #274e13;">✔</span></strong></a><u><strong><a href="https://sale365day.com/get-biorestore-complete"><span style="color: red;">Hurry Up - Limited Time Offer - Order Now</span></a></strong></u><a href="https://sale365day.com/buy-nuvei-skin-tag-remover"><strong><span style="color: #274e13;">✔</span></strong></a></h2> <h2 style="text-align: left;"><a href="https://sale365day.com/buy-nuvei-skin-tag-remover"><strong><span style="color: #274e13;">✔</span></strong></a><u><strong><a href="https://sale365day.com/get-biorestore-complete"><span style="color: red;">Hurry Up - Limited Time Offer - Order Now</span></a></strong></u><a href="https://sale365day.com/buy-nuvei-skin-tag-remover"><strong><span style="color: #274e13;">✔</span></strong></a></h2> <p class="story-summary"><strong>BioRestore Complete</strong> is a unique anti-aging skin serum that clears dark spots, removes signs of damage and aging, and protects you from harmful radiation.</p> <p>Healthy skin is essential as it improves our physical appearance and overall well-being. If you suffer from dark spots or acne or want to prevent or reverse aging, there is a solution.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-biorestore-complete"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg381pnxx3VurEgK-CPxJMrbRP7qe64VNInV-q7RR08ymlwJTmPWPLF3M6HQBmA5Pn90HUe8UI_OG6wKRIlkbr8dPaq-9LLyNXKfJCq0t2mSAbY2f3oQv59C3G3m5eqr7oBLP3Pj-Q97AWKw07JPzQFF8E6dYOZwGyNzaqel6x5grg9234nONiqEr_ntg/w640-h346/gtghhjyjj.JPG" alt="" width="640" height="346" border="0" data-original-height="552" data-original-width="1022" /></a></div> <p><a href="https://groups.google.com/g/biorestore-complete-offer/c/4CuB1eT4Xps">BioRestore Complete</a> is an anti-aging serum that helps reverse aging and prevents the oxidation that causes dark spots. 
In the following BioRestore Complete review, we will reveal all you need to know about the serum.</p> <p><span style="font-size: large;"><a class="customlinkcss" title="Special Price for Sale: Nuvei Skin Tag Remover from the Official Website Online" href="https://sale365day.com/get-biorestore-complete" target="_blank" rel="sposored, index, nofollow"><strong>Special Price for Sale: BioRestore Complete from the Official Website Online</strong></a></span></p> <h2>What is BioRestore Complete?</h2> <p><a href="https://www.youtube.com/watch?v=1eqExg8Gw3Q">BioRestore Complete</a> is a unique anti-aging skin serum that clears dark spots, removes signs of damage and aging, and protects you from harmful radiation.</p> <p>BioRestore Complete gives you glowing skin and promotes skin health. It is designed to address the underlying cause of dark spots giving you long-lasting results. BioRestore Complete works on all skin types, medical conditions, and all ages.</p> <p>The ingredients in <a href="https://www.provenexpert.com/biorestore-complete/">BioRestore Complete</a> promise 100% effectiveness and are backed by scientific research. The anti-aging ingredients are plant-based, pure, and derived from the most potent sources.</p> <p>The serum can help nourish and hydrate your skin, thus preventing irritation and possible damage. BioRestore Complete improves collagen production, reduces redness, shrinks pores, and eliminates fine lines and wrinkles.</p> <p>Regular use of the skin serum improves the skin's appearance and texture, making your skin look brighter, smoother, and youthful. BioRestore Complete can restore your skin's natural radiance, whether you are struggling with acne scars, sun damage, wrinkles, or fine lines.</p> <p>BioRestore Complete is a safe formula that has zero side effects. The manufacturer ensures quality products free from GMOs, gluten, chemicals, and stimulants. The advanced anti-aging serum is manufactured in the United States in an FDA-approved and GMP-certified facility adhering to strict and sterile standards.</p> <p>The website has real testimonials, thus adding to the product's credibility. Each BioRestore Complete order comes with a 60-day satisfaction guarantee.</p> <h2>The Working Mechanism of BioRestore Complete</h2> <p>According to recent research, the leading cause of dark spots is exposure to modern blue radiation. Anyone can develop dark spots, whether young or old. Electronic devices like smartphones and laptops emit blue light, which penetrates deep into the skin, causing the formation of fine lines and wrinkles.</p> <p>Your skin has three protective layers, which are:</p> <p><strong>Epidermis-</strong> the peel layer that protects you from many threats, including oxidation. It is responsible for making new skin and creating color on your skin.</p> <p><strong>Dermis-</strong> the skin layer has collagen and elasticin and comprises 90% of your skin's thickness. The Dermis layer is responsible for secreting sweat and gives you the sensation of touch because of nerve endings. The elasticin contributes to the skin's resilience, strength, and flexibility.</p> <p><strong>Hypodermis-</strong> the layer contains primarily fat and connective tissue. The bottom layer of the skin protects your bones and muscles and controls body temperature. 
Additionally, the hypodermis helps the nerves and blood vessels and prevents the body from too cold or too hot temperatures.</p> <p>According to studies, modern blue radiation damages the skin's protective layer, leaving the sensitive layers open. Extended exposure to toxins on the exposed layer causes it to oxidize, eventually causing dark spots.</p> <p><a href="https://biorestore-complete-usa.company.site/">BioRestore Complete</a> has superior ingredients that remove the oxidation from a protective layer, create a protective layer around the skin, and ensure the protective layer is fully renewed.</p> <p><span style="font-size: large;"><a class="customlinkcss" title="MUST SEE: Click Here to Order Nuvei Skin Tag Remover For The Best Price Available!" href="https://sale365day.com/get-biorestore-complete" target="_blank" rel="sposored, index, nofollow"><strong>MUST SEE: Click Here to Order BioRestore Complete For The Best Price Available!</strong></a></span></p> <h2>The 4-Step Process (Pure Program) of BioRestore Complete</h2> <p><a href="https://www.sympla.com.br/produtor/biorestorecompletealert">BioRestore Complete</a> uses the following 4-step process to improve your skin's health:</p> <p><strong>Step 1:</strong> Preparing the skin- <a href="https://biorestore-complete-report.clubeo.com/page/biorestore-complete-exposed-scam-2023-customer-fraud-alert-warning-what-do-customers-say-about-biorestore-skin-sreum.html">BioRestore Complete</a> prepares the skin to take up nutrients and starts working on the effects of oxidation. The serum contains graveolens that help calm, defend, and give your skin a smooth feel. The hyaluronic acid in BioRestore Complete helps prepare the skin by providing hydration.</p> <p><strong>Step 2:</strong> unclogging your skin- BioRestore Complete clears your skin to slow down oxidation. It has antioxidants that help remove the first layer of oxidation and offer powerful rejuvenating properties.</p> <p><strong>Step 3:</strong> rehydrate your skin- the ingredients in BioRestore Complete have high hydrating power similar to the natural sebum in your skin. The serum gives your skin all the moisture needed to avoid oxidation and dark spots.</p> <p><strong>Step 4:</strong> erase traces of skin oxidation- BioRestore Complete removes all the traces of oxidation with its string cleansing properties. The antioxidants in the anti-aging serum penetrate deep inside your skin and cleanse away oxidation, thus balancing your skin tone. It gives you clear skin without dark spots, acne, scars, wrinkles, and fine lines.</p> <p><a href="https://groups.google.com/g/biorestore-complete-offer/c/5UDPEK-XO3k">BioRestore Complete</a> contains vitamins, minerals, and nutrients that support epidermis health. 
The powerful ingredients contribute to keeping your skin healthy and looking youthful.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-biorestore-complete"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj49LEeoTiRh7-XKcl5fX8K637bMafxfdRX9HENAoQc9fia_s6DGmQ7_hFcHLYPVLyy2mcFf3a4Bp4Kvog10PFTr31zOG_u_3QwcGqyH1f6fl2Ge9zDsqC47NjAI2E5ihOo5RfbCb_Y-RYHd_eOg8snk01xNxOhZuSMG_Mqzzppde2vli-wGlS9Wp7c4A/w640-h446/grtghthh.JPG" alt="" width="640" height="446" border="0" data-original-height="638" data-original-width="917" /></a></div> <p><span style="font-size: large;"><a class="customlinkcss" title="(OFFICIAL WEBSITE) Click Here to Buy Nuvei Skin Tag Remover From The Official Website" href="https://sale365day.com/get-biorestore-complete" target="_blank" rel="sposored, index, nofollow"><strong>(OFFICIAL WEBSITE) Click Here to Buy BioRestore Complete From The Official Website</strong></a></span></p> <h2>The Ingredients in BioRestore Complete</h2> <p>BioRestore Complete contains a blend of plant-based ingredients in the right proportions to deliver the best results for people of all ages, skin types, and conditions. Here are the 16 science-backed components in <a href="https://colab.research.google.com/drive/1_pdzFIhCTKq4nm4xQEy6DALLd2Q22QkQ?usp=sharing">BioRestore Complete</a>:</p> <p><strong>Hyaluronic Acid</strong></p> <p>Hyaluronic acid holds more than 1,000 times its weight in moisture enabling your skin to retain moisture, which facilitates skin repair and helps you look younger. The component also softens your skin and eliminates dryness and peeling.</p> <p><strong>Graveolens</strong></p> <p>Graveolens are a crucial ingredient in <a href="https://vocal.media/stories/bio-restore-complete-is-it-cheap-scam-product-or-real-skin-correcter-serum">BioRestore Complete</a> that penetrates deep into the epidermis to repair your skin. Studies revealed that graveolens are rich in flavonoids, phenolic acids, and natural steroids that support skin health and prevent aging.</p> <p><strong>Aloe Barbadensis</strong></p> <p>Aloe Barbadensis is another name for Aloe Vera extract. The ingredient is commonly used in beauty and cosmetics due to its immense benefits. Aloe Vera has moisturizing and hydrating effects on the skin. In recent studies, the ingredient was found to heal wounds, fight skin damage, and have anti-aging products.</p> <p><strong>Sencha</strong></p> <p>Sencha or green tea extract is rich in antioxidants known as catechins, which reduce inflammation, especially around your skin. It is a skin rejuvenator and helps improve elasticity.</p> <p><strong>Witch Hazel and Horsetail</strong></p> <p>Witch hazel has a compound called tannins that create a barrier, shielding the skin from modern blue radiation. Horsetail prevents oxidation by unclogging your skin.</p> <p><strong>Jojoba Oil</strong></p> <p>Jojoba oil is a natural moisturizer that prevents the skin from drying, cracking, and irritation. According to studies, the fantastic moisturizer has anti-inflammatory effects and forms a skin barrier when applied topically. The iodine found in jojoba seed oil acts as an antibacterial that prevents the growth of bacteria, which causes skin breakouts.</p> <p><strong>Gotu Kola</strong></p> <p>Gotu Kola is an Ayurvedic ingredient in some Himalayan Mountains. According to studies, Gotu Kola can prevent infections by increasing the body's ability to fight external damage. 
The markers of BioRestore Complete claim that it improves your skin's natural barrier, containing external toxins and harmful substances. Gotu kola has antioxidants that protect your skin from external harm.</p> <p><strong>Sage</strong></p> <p>Sage is used in traditional medicine and aromatherapy. In <a href="https://www.hoggit.com/biorestorecompletetry">BioRestore Complete</a>, it provides antioxidants that help improve your external skin's appearance.</p> <p><span style="font-size: large;"><a class="customlinkcss" title="Place your order today before stock runs out!" href="https://sale365day.com/get-biorestore-complete" target="_blank" rel="sposored, index, nofollow"><strong>Place your order today before stock runs out!</strong></a></span></p> <p><strong>Vitamin C</strong></p> <p>Vitamin C is an antioxidant that enhances tissue growth and repair. It helps hold moisture, which seals and strengthens the skin. BioRestore Complete serum contains ascorbic acid, a form of Vitamin C that improves collagen production and glucoside. Ascorbic acid then breaks down into pure Vitamin C, strengthening the immune system and preventing diseases and infections.</p> <p><strong>Vitamin E</strong></p> <p>Vitamins C and E have antioxidant properties that are helpful in your body. Vitamins are mainly found in fruits, vegetables, herbs, and plants. The vitamin supports the body's healthy inflammatory response. It helps keep the skin firm and smooth and prevents oxidation. Vitamin E supports collagen synthesis and reduces skin redness.</p> <p><strong>Hops</strong></p> <p>The particular type of hops in <a href="https://health-expert-4u.blogspot.com/2023/05/biorestore-complete.html">BioRestore Complete</a> helps the body to remove oxidation. It has natural antioxidants that strengthen the immune system and prevent inflammation.</p> <p><strong>Rosemary Oil</strong></p> <p>Rosemary is an essential herb known for its antifungal and antibacterial properties. The oil helps brighten the skin tone, has moisturizing benefits, and removes fine lines. Rosemary oil improves blood flow to the fingers and toes by expanding the blood vessels. The oil prevents the skin from showing signs of aging and creates a barrier between your skin and modern blue radiation.</p> <p><strong>Lemon Peel Extract</strong></p> <p>Lemon peel extract offers additional Vitamin C, therefore, serving as an antioxidant that protects your skin against oxidation. It has essential minerals, including calcium and magnesium, which benefit the body. 
Lemon peel has citric acid that reduces skin hyperpigmentation.</p> <p><strong>Scots Pine</strong></p> <p>Scots pine prevents aging by removing fine lines and tightening the skin giving you long-term anti-aging effects.&nbsp;</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-biorestore-complete"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpg6niGTV0SNDrVHpM20HwwpwsHtCkX8V-tqZWkQ_b58gNy8UZqy41hXDDe_tqXkzFoc1W_8rNsuXtzIw_gT8Jce8DMaQ6-E2mwuBkMGjJeCFv18u0iPUjdycHgscrGbS5yk2ELU_Y7piK9qw8uuk4RsvrGLc3Iw51ARsd5jmy5CldfYGiDq9srusC3A/w640-h412/grghhth.JPG" alt="" width="640" height="412" border="0" data-original-height="377" data-original-width="587" /></a></div> <h2>The Benefits of BioRestore Complete</h2> <ul> <li>The serum is a safe remedy for acne and dark spots</li> <li>BioRestore Complete enhances your skin's elasticity</li> <li><a href="https://go.ivoox.com/sq/1944071">BioRestore Complete</a> improves the production of collagen and sebum on the skin</li> <li>The serum shields the skin from harmful radiation</li> <li>BioRestore Complete has nutrients that help improve skin's health</li> <li>The anti-aging serum enhances blood circulation to the toes</li> <li>The advanced skin serum reduces hyperpigmentation</li> </ul> <p><span style="font-size: large;"><a class="customlinkcss" title="Order your supply of BioRestore Complete now by clicking here to start enjoying its benefits!" href="https://sale365day.com/get-biorestore-complete" target="_blank" rel="sposored, index, nofollow"><strong>Order your supply of BioRestore Complete now by clicking here to start enjoying its benefits!</strong></a></span></p> <h2>How to Use BioRestore Complete</h2> <p><a href="https://biorestore-complete-official-sale.jimdosite.com/">BioRestore Complete</a> is in the form of a serum, making it easy to use on the skin. The manufacturer recommends applying the serum on your clean face morning and before going to bed. You can use the advanced serum on your neck, cleavage area, and hands.</p> <p>For best results, use <a href="https://soundcloud.com/biorestore-complete-526934537/biorestore-complete">BioRestore Complete</a> for 3-6 months. 
According to clinical trials, there is no evidence of any side effects from BioRestore Complete users.</p> <p>Consult your doctor before using BioRestore Complete serum if you have a pre-existing medical condition or using prescription drugs.</p> <h2>Pros</h2> <ul> <li><a href="https://infogram.com/biorestore-complete-reviews-1h7g6k0d5g89o2o?live">BioRestore Complete</a> has 100% plant-based ingredients</li> <li>The anti-aging skin serum is manufactured in an FDA-inspected and GMP-certified facility in the United States, adhering to precise standards and strict conditions</li> <li><a href="https://biorestorecompletetry.contently.com/">BioRestore Complete</a> is free from harmful side effects</li> <li>The serum is suitable for all ages, skin types, and medical conditions</li> <li>The ingredients in BioRestore Complete are clinically proven and tested</li> <li><a href="https://hashnode.com/@biorestorecomplete">BioRestore Complete</a> is 100% pure and free from GMOs, contaminants, and chemicals</li> <li>It is easy to use the anti-aging skin serum</li> <li>You get two free bonuses when you purchase 3-6 bottles of BioRestore Complete serum.</li> <li>A 60-day satisfaction guarantee covers all BioRestore Complete orders</li> <li>You get free shipping on all <a href="https://bborestore-complete.hashnode.dev/biorestore-complete-1-usa-report-fake-customer-complaints-best-mole-skin-removal">BioRestore Complete</a> purchases if you are in the United States</li> </ul> <h2>Cons</h2> <ul> <li>BioRestore Complete is only available on the <a class="customlinkcss" title="official website," href="https://sale365day.com/get-biorestore-complete" target="_blank" rel="sposored, index, nofollow">official website,</a> and there is no offline availability.</li> <li>Results may vary from person to person.</li> </ul> <h2>Pricing and Money-Back Guarantee</h2> <p>You can only purchase <a href="https://sale365day.com/get-biorestore-complete">BioRestore Complete online from the official website</a>. There are incredible discounts on multiple bottles. Here are the BioRestore Complete current price details as per the website:</p> <ul> <li>One bottle of BioRestore Complete (30-day supply) at $69 per bottle + free US shipping</li> <li>Three bottles of BioRestore Complete (90-day supply) at $59 per bottle + 2 free bonuses and + free US shipping</li> <li>Six bottles of BioRestore Complete (180-day supply) at $49 per bottle + 2 free bonuses and + free US shipping.</li> </ul> <p><span style="font-size: large;"><a class="customlinkcss" title="Visit Here Know More: Click Here To Go to Official Website Now Nuvei Skin Tag Remover" href="https://sale365day.com/get-biorestore-complete" target="_blank" rel="sposored, index, nofollow"><strong>Visit Here Know More: Click Here To Go to Official Website Now BioRestore Complete</strong></a></span></p> <p>An ironclad 60-day money-back guarantee covers each <a href="https://www.flowcode.com/page/biorestorecomplete">BioRestore Complete</a> order. If you are unhappy with the results within two months, you can return the full or empty bottles and get a 100% refund.</p> <p>Send a refund request to customer service at [email protected]. You can return the products to the following address; 19655 E 35th Dr. 
#100, Aurora, CO 800, United States.</p> <h2>Bonuses</h2> <p>As part of the discount, you get the following two bonuses when you purchase three or six bottles of <a href="https://biorestorecomplete.bandcamp.com/track/biorestore-complete-all-you-need-to-know-about-biorestore-complete-reviews-offer">BioRestore Complete</a>:</p> <p><strong>Bonus 1: Asia's Best Kept Skincare-Secrets-</strong> the special eBook contains the best Asian methods for skin health and glowing complexion. You will find three massage techniques used by K-pop stars to stay young. Additionally, the guide reveals a unique protein you can use to tighten your skin.</p> <p><strong>Bonus 2: Get a Hollywood-Ready Body in 21 Days-</strong> the guide has nutritional tips from Hollywood experts. You can use the recommendations for the red carpet even if you are not a celebrity. The author refers to the eBook as the Holy Grail of weight loss to help you attain the desired body. The book provides additional tips to improve your appearance by recommending three pieces of clothing to appear slimmer. You will also discover one spice to help your weight loss journey.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-biorestore-complete"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWKYzHIyNCD5TQewJkBH8_cqM0kXmouYhr-5LcRh7oMyZWPkxO-VIlpfYoc2UGbdrLGr0bN3nSOF6fj6qonLS6aH1TrfeqTkR4Q5Ec75WlqMkEpmVl5anixSWKPqL22Mt910VdDZLEOJDMXo5m2oVFo8gLCHCl_A2mVbZGFCdXmFqYFNN8UHMAU6P81g/w640-h590/ggrgftg.JPG" alt="" width="640" height="590" border="0" data-original-height="545" data-original-width="592" /></a></div> <h2>Conclusion</h2> <p>BioRestore Complete is a natural formula that helps clear dark spots, protect the skin from harmful blue radiation, and remove all signs of aging from your face. The ingredients in the serum form a protective barrier that protects your skin from damage.</p> <p>When using <a href="https://www.sympla.com.br/produtor/biorestorecompletescam">BioRestore Complete</a>, you will have clear and rejuvenated skin free from dark spots, wrinkles, and fine lines. The serum enhances the skin's elasticity, smoothens, and moisturizes the skin.</p> <p>The moisturizing effect in BioRestore Complete prevents skin cracking, drying, and irritation.
ArBert/roberta-base-finetuned-ner
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FFurkanGozukara%2FStable-Diffusion&count_bg=%2379C83D&title_bg=%239E0F0F&icon=apachespark.svg&icon_color=%23E7E7E7&title=views&edge_flat=false)](https://hits.seeyoufarm.com) [![Twitter Follow](https://img.shields.io/twitter/follow/GozukaraFurkan?label=Follow&style=social)](https://twitter.com/GozukaraFurkan) [![YouTube Channel](https://img.shields.io/badge/YouTube-Channel-red?style=for-the-badge&logo=youtube)](https://www.youtube.com/SECourses) # Expert-Level Tutorials on Stable Diffusion: Master Advanced Techniques and Strategies Greetings everyone. I am Dr. Furkan Gözükara. I am an Assistant Professor in Software Engineering department of a private university (have PhD in Computer Engineering). My professional programming skill is unfortunately C# not Python :) My linkedin : [**https://www.linkedin.com/in/furkangozukara**](https://www.linkedin.com/in/furkangozukara/) ### Our channel address if you like to subscribe : [**https://www.youtube.com/@SECourses**](https://www.youtube.com/@SECourses) ### Our discord to get more help : [**https://discord.com/servers/software-engineering-courses-secourses-772774097734074388**](https://discord.com/servers/software-engineering-courses-secourses-772774097734074388) I am keeping this list up-to-date. I got upcoming new awesome video ideas. Trying to find time to do that. ### I am open to any criticism you have. I am constantly trying to improve the quality of my tutorial guide videos. Please leave comments with both your suggestions and what you would like to see in future videos. ### All videos have manually fixed subtitles and properly prepared video chapters. You can watch with these perfect subtitles or look for the chapters you are interested in. Since my profession is teaching, I usually do not skip any of the important parts. Therefore, you may find my videos a little bit longer. Playlist link on YouTube: [**Stable Diffusion Tutorials, Automatic1111 Web UI & Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Video to Anime**](https://www.youtube.com/watch?v=mnCY8uM7E50&list=PL_pbwdIyffsmclLl0O144nQRnezKlNdx3) 1.) Automatic1111 Web UI - PC - Free [**How To Install Python, Setup Virtual Environment VENV, Set Default Python System Path & Install Git**](https://youtu.be/B5U7LJOvH6g) [![image](https://user-images.githubusercontent.com/19240467/235155922-c9ebf609-e1d3-4bbf-8f00-bc181fa4a10b.png)](https://youtu.be/B5U7LJOvH6g) 2.) Automatic1111 Web UI - PC - Free [**Easiest Way to Install & Run Stable Diffusion Web UI on PC by Using Open Source Automatic Installer**](https://www.youtube.com/watch?v=AZg6vzWHOTA) [![image](https://user-images.githubusercontent.com/19240467/218344261-aa236e18-152f-4287-b4fd-fa09c8f57a3f.png)](https://www.youtube.com/watch?v=AZg6vzWHOTA) 3.) Automatic1111 Web UI - PC - Free [**How to use Stable Diffusion V2.1 and Different Models in the Web UI - SD 1.5 vs 2.1 vs Anything V3**](https://www.youtube.com/watch?v=aAyvsX-EpG4) [![image](https://user-images.githubusercontent.com/19240467/218344276-af8f2fa2-4fdb-4454-9c8f-bd3ef4d2c92a.png)](https://www.youtube.com/watch?v=aAyvsX-EpG4) 4.) 
Automatic1111 Web UI - PC - Free [**Zero To Hero Stable Diffusion DreamBooth Tutorial By Using Automatic1111 Web UI - Ultra Detailed**](https://www.youtube.com/watch?v=Bdl-jWR3Ukc) [![image](https://user-images.githubusercontent.com/19240467/218344301-04f91cf4-fa35-4975-8c3d-9951c765839a.png)](https://www.youtube.com/watch?v=Bdl-jWR3Ukc) 5.) Automatic1111 Web UI - PC - Free [**DreamBooth Got Buffed - 22 January Update - Much Better Success Train Stable Diffusion Models Web UI**](https://www.youtube.com/watch?v=KwxNcGhHuLY) [![image](https://user-images.githubusercontent.com/19240467/218344369-97b68dd7-732d-4ca3-9acc-a87984ebe0f0.png)](https://www.youtube.com/watch?v=KwxNcGhHuLY) 6.) Automatic1111 Web UI - PC - Free [**How to Inject Your Trained Subject e.g. Your Face Into Any Custom Stable Diffusion Model By Web UI**](https://www.youtube.com/watch?v=s25hcW4zq4M) [![image](https://user-images.githubusercontent.com/19240467/218344509-01d70965-aeea-4096-bc29-7a005b4d47a6.png)](https://www.youtube.com/watch?v=s25hcW4zq4M) 7.) Automatic1111 Web UI - PC - Free [**How To Do Stable Diffusion LORA Training By Using Web UI On Different Models - Tested SD 1.5, SD 2.1**](https://www.youtube.com/watch?v=mfaqqL5yOO4) [![image](https://user-images.githubusercontent.com/19240467/218344459-bd4554b0-b57b-4079-aaea-ed93d8be95ed.png)](https://www.youtube.com/watch?v=mfaqqL5yOO4) 8.) Automatic1111 Web UI - PC - Free [**8 GB LoRA Training - Fix CUDA & xformers For DreamBooth and Textual Inversion in Automatic1111 SD UI**](https://www.youtube.com/watch?v=O01BrQwOd-Q) [![image](https://user-images.githubusercontent.com/19240467/218344491-52ac51d8-6556-4abc-b2fb-d640a46c48a2.png)](https://www.youtube.com/watch?v=O01BrQwOd-Q) 9.) Automatic1111 Web UI - PC - Free [**How To Do Stable Diffusion Textual Inversion (TI) / Text Embeddings By Automatic1111 Web UI Tutorial**](https://www.youtube.com/watch?v=dNOpWt-epdQ) [![image](https://user-images.githubusercontent.com/19240467/218344538-d5f0329d-b0e9-44ed-aaf0-5e4bb134afb7.png)](https://www.youtube.com/watch?v=dNOpWt-epdQ) 10.) Automatic1111 Web UI - PC - Free [**How To Generate Stunning Epic Text By Stable Diffusion AI - No Photoshop - For Free - Depth-To-Image**](https://www.youtube.com/watch?v=TBq1bhY8BOc) [![image](https://user-images.githubusercontent.com/19240467/218344579-fda1e9b8-a810-48af-9dcb-f47e87afee9e.png)](https://www.youtube.com/watch?v=TBq1bhY8BOc) 11.) Python Code - Hugging Face Diffusers Script - PC - Free [**How to Run and Convert Stable Diffusion Diffusers (.bin Weights) & Dreambooth Models to CKPT File**](https://www.youtube.com/watch?v=-6CA18MS0pY) [![image](https://user-images.githubusercontent.com/19240467/218344677-3f812cf3-db37-4ccb-8f81-99b8a1d5ef00.png)](https://www.youtube.com/watch?v=-6CA18MS0pY) 12.) NMKD Stable Diffusion GUI - Open Source - PC - Free [**Forget Photoshop - How To Transform Images With Text Prompts using InstructPix2Pix Model in NMKD GUI**](https://www.youtube.com/watch?v=EPRa8EZl9Os) [![image](https://user-images.githubusercontent.com/19240467/218344868-3232f875-b2c5-4caa-b59b-9d0fd683c06b.png)](https://www.youtube.com/watch?v=EPRa8EZl9Os) 13.) Google Colab Free - Cloud - No PC Is Required [**Transform Your Selfie into a Stunning AI Avatar with Stable Diffusion - Better than Lensa for Free**](https://www.youtube.com/watch?v=mnCY8uM7E50) [![image](https://user-images.githubusercontent.com/19240467/218344900-286cded5-0171-4b9e-9354-7adf4bada612.png)](https://www.youtube.com/watch?v=mnCY8uM7E50) 14.) 
Google Colab Free - Cloud - No PC Is Required [**Stable Diffusion Google Colab, Continue, Directory, Transfer, Clone, Custom Models, CKPT SafeTensors**](https://www.youtube.com/watch?v=kIyqAdd_i10) [![image](https://user-images.githubusercontent.com/19240467/218344930-95956805-6a6e-46ee-8885-64043246d79b.png)](https://www.youtube.com/watch?v=kIyqAdd_i10) 15.) Automatic1111 Web UI - PC - Free [**Become A Stable Diffusion Prompt Master By Using DAAM - Attention Heatmap For Each Used Token - Word**](https://www.youtube.com/watch?v=XiKyEKJrTLQ) [![image](https://user-images.githubusercontent.com/19240467/218345146-54076e5d-230a-4774-8d6a-8358cbd15f78.png)](https://www.youtube.com/watch?v=XiKyEKJrTLQ) 16.) Python Script - Gradio Based - ControlNet - PC - Free [**Transform Your Sketches into Masterpieces with Stable Diffusion ControlNet AI - How To Use Tutorial**](https://www.youtube.com/watch?v=YJebdQ30UZQ) [![image](https://user-images.githubusercontent.com/19240467/218345328-ada437bf-5eb4-478e-a951-84486a42995d.png)](https://www.youtube.com/watch?v=YJebdQ30UZQ) 17.) Automatic1111 Web UI - PC - Free [**Sketches into Epic Art with 1 Click: A Guide to Stable Diffusion ControlNet in Automatic1111 Web UI**](https://www.youtube.com/watch?v=vhqqmkTBMlU) [![image](https://user-images.githubusercontent.com/19240467/218806127-c84d1ff8-d5bb-41b0-bdef-6922568792b9.png)](https://www.youtube.com/watch?v=vhqqmkTBMlU) 18.) RunPod - Automatic1111 Web UI - Cloud - Paid - No PC Is Required [**Ultimate RunPod Tutorial For Stable Diffusion - Automatic1111 - Data Transfers, Extensions, CivitAI**](https://www.youtube.com/watch?v=QN1vdGhjcRc) [![image](https://user-images.githubusercontent.com/19240467/219958249-82ecb925-901b-4f87-b776-f592b0f5eaad.png)](https://www.youtube.com/watch?v=QN1vdGhjcRc) 19.) RunPod - Automatic1111 Web UI - Cloud - Paid - No PC Is Required [**RunPod Fix For DreamBooth & xFormers - How To Use Automatic1111 Web UI Stable Diffusion on RunPod**](https://www.youtube.com/watch?v=zA4LksIVas8) [![image](https://user-images.githubusercontent.com/19240467/228829128-e32d2900-0162-4de7-ba78-887b9083b090.png)](https://www.youtube.com/watch?v=zA4LksIVas8) 20.) Automatic1111 Web UI - PC - Free [**Fantastic New ControlNet OpenPose Editor Extension & Image Mixing - Stable Diffusion Web UI Tutorial**](https://youtu.be/iFRdrRyAQdQ) [![image](https://user-images.githubusercontent.com/19240467/220776337-3abce5a3-bb17-4240-8400-4e633562ecc8.png)](https://youtu.be/iFRdrRyAQdQ) 21.) Automatic1111 Web UI - PC - Free [**Automatic1111 Stable Diffusion DreamBooth Guide: Optimal Classification Images Count Comparison Test**](https://youtu.be/Tb4IYIYm4os) [![image](https://user-images.githubusercontent.com/19240467/221384116-e42d6f37-a068-4a2a-9bda-11ac47f33faa.png)](https://youtu.be/Tb4IYIYm4os) 22.) Automatic1111 Web UI - PC - Free [**Epic Web UI DreamBooth Update - New Best Settings - 10 Stable Diffusion Training Compared on RunPods**](https://youtu.be/sRdtVanSRl4) [![image](https://user-images.githubusercontent.com/19240467/222991604-ceed12bc-0bc9-4f16-82fe-e6779132e00c.png)](https://youtu.be/sRdtVanSRl4) 23.) Automatic1111 Web UI - PC - Free [**New Style Transfer Extension, ControlNet of Automatic1111 Stable Diffusion T2I-Adapter Color Control**](https://youtu.be/tXaQAkOgezQ) [![image](https://user-images.githubusercontent.com/19240467/223283277-eaaf6e53-df43-40ac-8096-c08f9a14cc8d.png)](https://youtu.be/tXaQAkOgezQ) 24.) 
Automatic1111 Web UI - PC - Free [**Generate Text Arts & Fantastic Logos By Using ControlNet Stable Diffusion Web UI For Free Tutorial**](https://youtu.be/C_mJI4U23nQ) [![image](https://user-images.githubusercontent.com/19240467/224442765-ba241f71-b412-4f5b-bf39-506e9682e336.png)](https://youtu.be/C_mJI4U23nQ) 25.) Automatic1111 Web UI - PC - Free [**How To Install New DREAMBOOTH & Torch 2 On Automatic1111 Web UI PC For Epic Performance Gains Guide**](https://youtu.be/pom3nQejaTs) [![image](https://user-images.githubusercontent.com/19240467/226115542-72db7e7e-cee0-4e3a-82c4-12348e2b237e.png)](https://youtu.be/pom3nQejaTs) 26.) Automatic1111 Web UI - PC - Free [**Training Midjourney Level Style And Yourself Into The SD 1.5 Model via DreamBooth Stable Diffusion**](https://youtu.be/m-UVVY_syP0) [![image](https://user-images.githubusercontent.com/19240467/226378438-fe70f09e-94a8-4d1d-9468-e44dca99aac7.png)](https://youtu.be/m-UVVY_syP0) 27.) Automatic1111 Web UI - PC - Free [**Video To Anime - Generate An EPIC Animation From Your Phone Recording By Using Stable Diffusion AI**](https://youtu.be/kmT-z2lqEPQ) [![image](https://user-images.githubusercontent.com/19240467/228096548-5f6add70-ca04-4bec-8c33-24d243227532.png)](https://youtu.be/kmT-z2lqEPQ) 28.) Python Script - Jupyter Based - PC - Free [**Midjourney Level NEW Open Source Kandinsky 2.1 Beats Stable Diffusion - Installation And Usage Guide**](https://youtu.be/dYt9xJ7dnpU) [![image](https://user-images.githubusercontent.com/19240467/230183162-8a6f7e84-dcd9-45b5-a94c-b93a10778f42.png)](https://youtu.be/dYt9xJ7dnpU) 29.) Automatic1111 Web UI - PC - Free [**RTX 3090 vs RTX 3060 Ultimate Showdown for Stable Diffusion, ML, AI & Video Rendering Performance**](https://youtu.be/lgP1LNnaUaQ) [![image](https://user-images.githubusercontent.com/19240467/231303430-63d801cf-3c5a-4c20-b445-bb682febfa4e.png)](https://youtu.be/lgP1LNnaUaQ) 30.) Kohya Web GU - Automatic1111 Web UI - PC - Free [**Generate Studio Quality Realistic Photos By Kohya LoRA Stable Diffusion Training - Full Tutorial**](https://youtu.be/TpuDOsuKIBo) [![image](https://user-images.githubusercontent.com/19240467/235155355-83ff14e5-a3c8-4ae8-83a5-6d2573189a22.png)](https://youtu.be/TpuDOsuKIBo) 31.) Kaggle NoteBook - Free [**DeepFloyd IF By Stability AI - Is It Stable Diffusion XL or Version 3? We Review and Show How To Use**](https://youtu.be/R2fEocf-MU8) [![image](https://user-images.githubusercontent.com/19240467/235505544-2ba77ef2-3928-4c44-aba8-2536aebbfb60.png)](https://youtu.be/R2fEocf-MU8) 32.) Python Script - Automatic1111 Web UI - PC - Free [**How To Find Best Stable Diffusion Generated Images By Using DeepFace AI - DreamBooth / LoRA Training**](https://youtu.be/343I11mhnXs) [![image](https://user-images.githubusercontent.com/19240467/236293388-6254ff84-0866-4bd4-a5d4-2db3c42be3f0.png)](https://youtu.be/343I11mhnXs)
ArJakusz/DialoGPT-small-stark
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
The powerful DataVare Outlook PST Merge programme allows you to combine numerous PST files into one without compromising any data. Use this trustworthy and inexpensive tool to combine Outlook emails, contacts, calendars, and other mail items into a single file. It is compatible with all versions of Windows (10, 8, 7, XP, Vista) and MS Outlook (2003, 2007, 2010, 2013, 2016, 2019, and 2021), and it has all the latest innovations for superior outcomes. To find out more about the app's features and capabilities, use the free sample version. It can merge large PST files quickly without sacrificing data quality, combines Outlook PST files in a few simple steps, and features an intuitive UI. The merged PST file it produces is of very high quality, with no data loss or corruption, and it can make it simpler to manage your emails and keep track of important information. Both ANSI and Unicode PST versions are supported, and the utility's platform is 100 percent safe and secure. Before purchasing, you can download a free trial version of this programme to examine its features and usability. Read more: https://www.datavare.com/software/outlook-pst-merge-expert.html
Aries/T5_question_generation
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
13
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2077 - Accuracy: 0.9169 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 75 | 0.3656 | 0.7874 | | No log | 2.0 | 150 | 0.2077 | 0.9169 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
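The card above stops at the training results and gives no inference example. Below is a minimal sketch using the `transformers` pipeline; the repository id `your-username/my_awesome_model` and the example labels are placeholder assumptions, since the card does not name the actual repo or label set.

```python
# Hedged sketch: running the fine-tuned DistilBERT classifier with transformers.
# "your-username/my_awesome_model" is a placeholder, not a confirmed repo id.
from transformers import pipeline

classifier = pipeline("text-classification", model="your-username/my_awesome_model")

# The card does not document the label names, so the output below is illustrative.
print(classifier("This was a surprisingly enjoyable movie."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```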
ArpanZS/debug_squad
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: mit --- This repository contains the result of an experimental merging run that combined the weights of two distinct language models using the add-difference technique. Weight merging integrates knowledge from multiple models into a single, more capable language model without further gradient training. Proto-Synthia was produced in roughly 10 minutes of merging, which in many cases removes the need for a conventional, time-intensive training run.
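For readers unfamiliar with add-difference merging, a minimal sketch of the arithmetic is shown below. The file names, the `alpha` multiplier, and the function name are illustrative assumptions; the card does not publish the exact recipe used for Proto-Synthia.

```python
# Hedged sketch of add-difference weight merging: merged = A + alpha * (B - C).
# All paths and the alpha value are illustrative; this is not the repository's exact script.
import torch

def add_difference_merge(path_a, path_b, path_c, alpha=1.0, out_path="merged.pt"):
    a = torch.load(path_a, map_location="cpu")
    b = torch.load(path_b, map_location="cpu")
    c = torch.load(path_c, map_location="cpu")
    merged = {}
    for name, wa in a.items():
        if name in b and name in c and wa.shape == b[name].shape == c[name].shape:
            # Add the difference between model B and base model C onto model A.
            merged[name] = wa + alpha * (b[name] - c[name])
        else:
            # Fall back to A's weights when a tensor is missing or shaped differently.
            merged[name] = wa
    torch.save(merged, out_path)

# Hypothetical usage:
# add_difference_merge("model_a.pt", "model_b.pt", "base_model.pt", alpha=1.0)
```

Because merging is pure tensor arithmetic with no gradient steps, a run like this finishes in minutes on CPU, which is consistent with the 10-minute figure quoted above.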
AshLukass/AshLukass
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - zh - en - fr - de - ja - ko - it - ru pipeline_tag: text-generation inference: false library_name: transformers --- # OpenBuddy - Open Multilingual Chatbot based on LLaMA GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) ## Installation Due to licensing restrictions from LLAMA, you need to have the original LLAMA-7B model to decrypt the model weights. To decrypt the model weights, please follow the guide in our GitHub: https://github.com/OpenBuddy/OpenBuddy#installation ## Disclaimer OpenBuddy is provided as-is without any warranty of any kind, either express or implied. The authors and contributors shall not be held liable for any damages resulting from the use or inability to use this software. By using OpenBuddy, you agree to these terms and conditions. ## License Restrictions OpenBuddy is intended for non-commercial research purposes only, following the same restrictions as the LLAMA model. Any use outside of this scope is strictly prohibited. For more information, please refer to the LLAMA license.
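Once the weights have been decrypted as described in the GitHub guide, loading them should follow the usual `transformers` causal-LM path. The local folder name, prompt format, and generation settings in the sketch below are assumptions for illustration, not values taken from the OpenBuddy documentation.

```python
# Hedged sketch: loading locally decrypted OpenBuddy weights with transformers.
# "./openbuddy-7b-decrypted" and the prompt format are assumptions, not official values.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./openbuddy-7b-decrypted"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, device_map="auto")

inputs = tokenizer("User: Hello, who are you?\nAssistant:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```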
Ashagi/Ashvx
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: polyglot-5.8b-chatdoctor-v1.1b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # polyglot-5.8b-chatdoctor-v1.1b This model is a fine-tuned version of [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
Ashim/dga-transformer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bangla-para-v2-150000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla-para-v2-150000 This model is a fine-tuned version of [mHossain/bangla-para-v2-120000](https://huggingface.co/mHossain/bangla-para-v2-120000) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8941 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 17.458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.1437 | 1.0 | 3375 | 0.8941 | 0.0 | 0.0 | 0.0 | 0.0 | 17.458 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
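No usage example is included in the card; a minimal sketch follows. It assumes the checkpoint is published under a repo id matching the card title and the parent model's namespace (`mHossain/bangla-para-v2-150000`) and that it accepts plain Bangla input like its parent paraphraser.

```python
# Hedged sketch: Bangla paraphrase generation with the fine-tuned seq2seq model.
# The repo id is inferred from the card title and parent namespace, not confirmed by the card.
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="mHossain/bangla-para-v2-150000")

text = "আবহাওয়া আজ খুব সুন্দর।"  # "The weather is very nice today."
print(paraphraser(text, max_length=64))
```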
AshtonBenson/DialoGPT-small-quentin
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola_sepehr_sepehr_sepehr_saturday_from_server results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.4863017578040948 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola_sepehr_sepehr_sepehr_saturday_from_server This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4730 - Matthews Correlation: 0.4863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 268 | 0.4730 | 0.4863 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.2
Atarax/rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - translation language: - unk - unk datasets: - alvations/autotrain-data-aymara-t5-small-expensive co2_eq_emissions: emissions: 19.989441023741563 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 55961130121 - CO2 Emissions (in grams): 19.9894 ## Validation Metrics - Loss: 2.564 - SacreBLEU: 2.106 - Gen len: 16.875
Ateeb/EmotionDetector
[ "pytorch", "funnel", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "FunnelForSequenceClassification" ], "model_type": "funnel", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: jikkyjohn/roberta-base-finetuned-NQ results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jikkyjohn/roberta-base-finetuned-NQ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6847 - Train End Logits Accuracy: 0.8001 - Train Start Logits Accuracy: 0.7764 - Validation Loss: 0.6973 - Validation End Logits Accuracy: 0.8017 - Validation Start Logits Accuracy: 0.7821 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18550, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.0229 | 0.7188 | 0.6971 | 0.7360 | 0.7886 | 0.7681 | 0 | | 0.6847 | 0.8001 | 0.7764 | 0.6973 | 0.8017 | 0.7821 | 1 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
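The card stops at the training results; a minimal question-answering sketch is given below. The repo id is taken from the card's model-index name, and it is assumed that the TensorFlow weights load through the standard pipeline.

```python
# Hedged sketch: extractive QA with the fine-tuned RoBERTa checkpoint.
# Repo id taken from the card's model-index entry; framework handling is an assumption.
from transformers import pipeline

qa = pipeline("question-answering", model="jikkyjohn/roberta-base-finetuned-NQ")

result = qa(
    question="What base model was fine-tuned?",
    context="This model is a fine-tuned version of roberta-base on a Natural Questions style dataset.",
)
print(result["answer"], result["score"])
```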
Atiqah/Atiqah
[ "license:artistic-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: kobart-trans-en-ko-v2 results: [] license: openrail datasets: - fka/awesome-chatgpt-prompts language: - ko - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobart-trans-en-ko-v2 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.7926 - Bleu: 5.3159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
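A short inference sketch, not part of the original card: the repo id below is a placeholder (the card title gives no namespace), and KoBART-based translators are typically driven through the text2text pipeline.

```python
# Hedged sketch: English-to-Korean translation with the KoBART-based model.
# "your-username/kobart-trans-en-ko-v2" is a placeholder repo id, not confirmed by the card.
from transformers import pipeline

translator = pipeline("text2text-generation", model="your-username/kobart-trans-en-ko-v2")

print(translator("The weather is nice today.", max_length=64))
```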
Augustvember/your-model-name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: unispeech-sat-base-digit-mask-ft results: [] datasets: - mazkooleg/digit_mask_augmented_raw --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unispeech-sat-base-digit-mask-ft This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0053 - Accuracy: 0.9991 - F1: 0.9991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | |:-------------:|:-----:|:-----:|:--------:|:------:|:---------------:| | 0.0079 | 1.0 | 14264 | 0.9991 | 0.9991 | 0.0053 | | 0.0019 | 2.0 | 28528 | 0.9987 | 0.9987 | 0.0078 | | 0.0009 | 3.0 | 42792 | 0.9989 | 0.9989 | 0.0069 | ### Framework versions - Transformers 4.28.1 - Pytorch 1.13.0+cpu - Datasets 2.12.0 - Tokenizers 0.13.2
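A minimal sketch of running the classifier on a short audio clip follows; the placeholder repo id and the 16 kHz mono input assumption are not stated in the card.

```python
# Hedged sketch: spoken-digit / mask classification with the fine-tuned UniSpeech-SAT model.
# "your-username/unispeech-sat-base-digit-mask-ft" is a placeholder repo id.
from transformers import pipeline

clf = pipeline("audio-classification", model="your-username/unispeech-sat-base-digit-mask-ft")

# Expects a path to an audio file (typically 16 kHz, mono) containing a single spoken digit.
print(clf("digit_sample.wav", top_k=3))
```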
Awsaf/large-eren
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: SaiKiran97/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Axon/resnet34-v1
[ "dataset:ImageNet", "arxiv:1512.03385", "Axon", "Elixir", "license:apache-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: VinayakMane47/bert-base-cased-finetuned-on-duplicate-Q-A results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # VinayakMane47/bert-base-cased-finetuned-on-duplicate-Q-A This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1196 - Validation Loss: 0.2625 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30324, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3153 | 0.2493 | 0 | | 0.1929 | 0.2385 | 1 | | 0.1196 | 0.2625 | 2 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
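The card gives no inference example; the sketch below shows the usual sentence-pair setup for duplicate-question detection with a Keras/TF checkpoint. The repo id is inferred from the card's model-index name, and the label meaning (0 = not duplicate, 1 = duplicate) is an assumption.

```python
# Hedged sketch: scoring a question pair with the fine-tuned TF BERT model.
# Repo id inferred from the card title; label order is assumed, not documented.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "VinayakMane47/bert-base-cased-finetuned-on-duplicate-Q-A"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="tf",
)
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```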
Aybars/ModelOnWhole
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- # Model Card for Molly (Cat) ![IMAGE](https://huggingface.co/badmonk/mxlly/resolve/main/mxlly.png) ## Model Description - **Developed by:** BADMONK - **Model type:** Dreambooth Model + Extracted LoRA - **Language(s) (NLP):** EN - **License:** Creativeml-Openrail-M - **Parent Model:** ChilloutMix # How to Get Started with the Model Use the code below to get started with the model. ### MXLLY ###
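The "How to Get Started" section announces code, but none is included in the card. Below is a minimal diffusers sketch; it assumes the repository ships full Stable Diffusion pipeline weights (rather than only the extracted LoRA file) and that `mxlly` is the trigger token.

```python
# Hedged sketch: text-to-image generation with the badmonk/mxlly checkpoint.
# Assumes the repo holds a complete diffusers pipeline; "mxlly" as trigger token is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("badmonk/mxlly", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("photo of mxlly cat sitting on a windowsill, soft light").images[0]
image.save("mxlly.png")
```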
Ayham/albert_gpt2_Full_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: unknown duplicated_from: fulouma/MyLoRAs --- Trigger word for the LoRAs in the `concept` folder: cic. Trigger word for everything else: sls. Notes: - an unsuffixed LoRA is usually trained for 10 epochs - some of these need the LoCon extension to work.
Ayham/albert_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
# Oops! This should have been a space, not a model!
Ayham/bert_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Question_Answering_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Question_Answering_model This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5418 - Validation Loss: 1.1876 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'Custom>Adam', 'config': {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.5418 | 1.1876 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-05-06T15:08:36Z
--- tags: - generated_from_keras_callback model-index: - name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2153 - Train Accuracy: 0.9144 - Validation Loss: 0.1911 - Validation Accuracy: 0.9243 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2153 | 0.9144 | 0.1911 | 0.9243 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.7.0 - Datasets 2.10.1 - Tokenizers 0.12.1
Ayham/xlnet_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 9.93 +/- 3.52 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r EExe/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Ayou/chinese_mobile_bert
[ "pytorch", "mobilebert", "fill-mask", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "MobileBertForMaskedLM" ], "model_type": "mobilebert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- license: cc-by-nc-sa-4.0 inference: false --- **NOTE: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.** **NOTE: Unlike other LLaVA models, this model can (should) be used directly without delta weights conversion!** <br> <br> # LLaVA Model Card ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna/MPT on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. **Model date:** LLaVA-Lightning-MPT was trained in May 2023. **Paper or resources for more information:** https://llava-vl.github.io/ **License:** CC-BY-NC-SA 4.0 **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. 80K GPT-generated multimodal instruction-following data. ## Evaluation dataset A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs. We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset. See https://llava-vl.github.io/ for more details.
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2161 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Ayran/DialoGPT-small-gandalf
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - klue metrics: - f1 model-index: - name: kogpt2-base-v2-finetuned-klue-ner results: - task: name: Token Classification type: token-classification dataset: name: klue type: klue config: ner split: validation args: ner metrics: - name: F1 type: f1 value: 0.4045776387287996 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kogpt2-base-v2-finetuned-klue-ner This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.4255 - F1: 0.4046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6124 | 1.0 | 876 | 0.5478 | 0.2024 | | 0.4086 | 2.0 | 1752 | 0.4947 | 0.2814 | | 0.3159 | 3.0 | 2628 | 0.4443 | 0.3303 | | 0.2498 | 4.0 | 3504 | 0.4168 | 0.3791 | | 0.1998 | 5.0 | 4380 | 0.4255 | 0.4046 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
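Editor's note: the NER card above lists metrics but no usage; a short token-classification sketch follows. The card gives only the model name without a namespace, so the repo id below is a placeholder, and the aggregation setting is likewise an assumption.

```python
from transformers import pipeline

# Placeholder repo id -- the card does not state the model's Hub namespace.
ner = pipeline(
    "token-classification",
    model="your-username/kogpt2-base-v2-finetuned-klue-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

for entity in ner("이순신은 조선 중기의 무신이다."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```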
Azaghast/DistilBART-SCP-ParaSummarization
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-05-06T16:15:06Z
--- tags: - generated_from_trainer datasets: - pile-instruct/ metrics: - accuracy model-index: - name: layer_4,5,6,7,8 results: - task: type: text-generation name: Causal Language Modeling dataset: name: pile-instruct/ type: pile-instruct/ split: None metrics: - type: accuracy value: 0.3842425129408517 name: Accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layer_4,5,6,7,8 This model is a fine-tuned version of [P1ayer-1/pythia-deduped-1b-chat-base](https://huggingface.co/P1ayer-1/pythia-deduped-1b-chat-base) on the pile-instruct/ dataset. It achieves the following results on the evaluation set: - Loss: 4.9648 - Accuracy: 0.3842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 96 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 6000 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 7.4574 | 0.1 | 200 | 0.1688 | 7.4961 | | 7.0445 | 0.2 | 400 | 0.1997 | 7.0547 | | 6.7483 | 0.3 | 600 | 0.2190 | 6.7930 | | 6.4568 | 0.4 | 800 | 0.2376 | 6.5703 | | 6.2865 | 0.5 | 1000 | 0.2552 | 6.375 | | 6.1028 | 0.6 | 1200 | 0.2793 | 6.1484 | | 5.8888 | 0.7 | 1400 | 0.2982 | 5.9570 | | 5.7362 | 0.8 | 1600 | 0.3121 | 5.8008 | | 5.6507 | 0.9 | 1800 | 0.3238 | 5.6797 | | 5.565 | 1.0 | 2000 | 0.3318 | 5.5781 | | 5.4688 | 1.1 | 2200 | 0.3392 | 5.4961 | | 5.4044 | 1.2 | 2400 | 0.3456 | 5.4219 | | 5.3323 | 1.3 | 2600 | 0.3516 | 5.3594 | | 5.2598 | 1.4 | 2800 | 0.3562 | 5.3047 | | 5.2159 | 1.5 | 3000 | 0.3596 | 5.2578 | | 5.1992 | 1.6 | 3200 | 0.3638 | 5.2148 | | 5.1429 | 1.69 | 3400 | 0.3672 | 5.1797 | | 5.095 | 1.79 | 3600 | 0.3696 | 5.1445 | | 5.0646 | 1.89 | 3800 | 0.3715 | 5.1172 | | 5.059 | 1.99 | 4000 | 0.3742 | 5.0859 | | 5.0152 | 2.09 | 4200 | 0.3756 | 5.0664 | | 5.0251 | 2.19 | 4400 | 0.3769 | 5.0469 | | 5.022 | 2.29 | 4600 | 0.3790 | 5.0273 | | 4.9939 | 2.39 | 4800 | 0.3798 | 5.0156 | | 4.924 | 2.49 | 5000 | 0.3811 | 5.0 | | 4.9335 | 2.59 | 5200 | 0.3821 | 4.9883 | | 4.9231 | 2.69 | 5400 | 0.3829 | 4.9805 | | 4.8886 | 2.79 | 5600 | 0.3835 | 4.9727 | | 4.9419 | 2.89 | 5800 | 0.3843 | 4.9648 | | 4.9227 | 2.99 | 6000 | 0.3842 | 4.9648 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Wandb Report https://wandb.ai/ontocord/pythia-1b-deduped-layer-test-min-pile-instruct/runs/kqlipkt3
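Editor's note: the card documents the training run in detail but gives no loading example; a minimal causal-LM generation sketch is shown below. The card does not state the model's full Hub path, so the repo id is a placeholder, and the generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/layer_4,5,6,7,8"  # placeholder; the card does not give the Hub namespace
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

prompt = "Explain what instruction tuning is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```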
Azizun/Geotrend-10-epochs
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola-batch-16 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5992215466535732 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola-batch-16 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4502 - Matthews Correlation: 0.5992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4987 | 1.0 | 535 | 0.5145 | 0.4872 | | 0.3065 | 2.0 | 1070 | 0.4502 | 0.5992 | | 0.2059 | 3.0 | 1605 | 0.7547 | 0.5208 | | 0.1467 | 4.0 | 2140 | 0.8557 | 0.5390 | | 0.1006 | 5.0 | 2675 | 0.9277 | 0.5550 | | 0.0796 | 6.0 | 3210 | 1.0832 | 0.5765 | | 0.0532 | 7.0 | 3745 | 1.0337 | 0.5687 | | 0.0367 | 8.0 | 4280 | 1.1539 | 0.5779 | | 0.0276 | 9.0 | 4815 | 1.3224 | 0.5755 | | 0.0192 | 10.0 | 5350 | 1.3055 | 0.5810 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
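Editor's note: the CoLA classifier cards in this dump (this one and the batch-32/batch-64 variants below) list hyperparameters but no inference code; one hedged sketch covering the common pattern is given here. The repo id is a placeholder, and the label mapping (0 = unacceptable, 1 = acceptable) follows the usual GLUE CoLA convention rather than anything the cards state.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "your-username/bert-base-uncased-finetuned-cola-batch-16"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("The book was read by the whole class.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print("acceptable" if pred == 1 else "unacceptable")  # assumed GLUE CoLA label order
```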
BAHIJA/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
2023-05-06T16:34:40Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola-batch-32 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5927736326773501 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola-batch-32 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8835 - Matthews Correlation: 0.5928 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 268 | 0.5093 | 0.5049 | | 0.4202 | 2.0 | 536 | 0.4633 | 0.5600 | | 0.4202 | 3.0 | 804 | 0.5369 | 0.5393 | | 0.1814 | 4.0 | 1072 | 0.6271 | 0.5605 | | 0.1814 | 5.0 | 1340 | 0.7427 | 0.5662 | | 0.0947 | 6.0 | 1608 | 0.7794 | 0.5697 | | 0.0947 | 7.0 | 1876 | 0.8835 | 0.5928 | | 0.0566 | 8.0 | 2144 | 1.0182 | 0.5751 | | 0.0566 | 9.0 | 2412 | 1.1300 | 0.5549 | | 0.0296 | 10.0 | 2680 | 1.1266 | 0.5704 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BJTK2/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T16:35:55Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
BME-TMIT/foszt2oszt
[ "pytorch", "encoder-decoder", "text2text-generation", "hu", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- license: creativeml-openrail-m --- # Flowers Diffusion A simple U-Net diffusion model that generates flower images at 64x64 resolution.
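Editor's note: the flowers card gives no loading instructions; if the checkpoint was saved as a standard unconditional `diffusers` pipeline, sampling would look roughly like the sketch below. Both the pipeline class and the repo id are assumptions, neither is confirmed by the card.

```python
from diffusers import DDPMPipeline

# Assumes the repo hosts a standard unconditional diffusers pipeline -- the card does not say.
pipe = DDPMPipeline.from_pretrained("your-username/flowers-diffusion")
image = pipe(num_inference_steps=1000).images[0]  # one 64x64 flower sample, per the card
image.save("flower.png")
```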
BSC-LT/roberta-base-biomedical-es
[ "pytorch", "roberta", "fill-mask", "es", "arxiv:2109.03570", "arxiv:2109.07765", "transformers", "biomedical", "spanish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
161
null
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
BSen/wav2vec2-base-timit-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-05-06T16:51:34Z
## Pasta with a small twist! Untested but fresh as of 5/6/2023, taste and hopefully enjoy! ^~^ ## Model Info: ChanSung's [AlpacaGPT4-LoRA-13B-elina](https://huggingface.co/LLMs/AlpacaGPT4-LoRA-13B-elina) merged with [dvruette's llama-13b sft do2 finetune](https://huggingface.co/dvruette/llama-13b-pretrained-sft-do2)
Bagus/SER-LSSED
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T17:00:19Z
--- duplicated_from: aurenigma/aurenigma-loras ---
Bagus/ser-japanese
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola-batch-64 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5835943612387946 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola-batch-64 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7651 - Matthews Correlation: 0.5836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 134 | 0.4344 | 0.5367 | | No log | 2.0 | 268 | 0.4313 | 0.5650 | | No log | 3.0 | 402 | 0.5034 | 0.5495 | | 0.3177 | 4.0 | 536 | 0.5733 | 0.5293 | | 0.3177 | 5.0 | 670 | 0.6364 | 0.5498 | | 0.3177 | 6.0 | 804 | 0.7316 | 0.5600 | | 0.3177 | 7.0 | 938 | 0.7651 | 0.5836 | | 0.0846 | 8.0 | 1072 | 0.8575 | 0.5625 | | 0.0846 | 9.0 | 1206 | 0.8820 | 0.5573 | | 0.0846 | 10.0 | 1340 | 0.8854 | 0.5704 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition
[ "pytorch", "tensorboard", "wav2vec2", "el", "dataset:aesdd", "transformers", "audio", "audio-classification", "speech", "license:apache-2.0" ]
audio-classification
{ "architectures": [ "Wav2Vec2ForSpeechClassification" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: other language: - en library_name: transformers inference: false thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico tags: - gpt - llm - large language model - LLaMa datasets: - h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2 --- # h2ogpt-oasst1-512-30B-GGML These files are GGML format model files of [H2O.ai's h2ogpt-research-oig-oasst1-512-30b](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b). GGML files are for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp). ## Repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/h2ogpt-oasst1-512-30B-GPTQ). * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/h2ogpt-oasst1-512-30B-GGML). * [float16 HF format unquantised model for GPU inference and further conversions](https://huggingface.co/TheBloke/h2ogpt-oasst1-512-30B-HF) ## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)! llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508 I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them. For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`. ## Provided files | Name | Quant method | Bits | Size | RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | `h2ogptq-oasst1-512-30B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 20.3GB | 25GB | 4-bit. | `h2ogptq-oasst1-512-30B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 24.4GB | 26GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | `h2ogptq-oasst1-512-30B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage and slower inference. | `h2ogptq-oasst1-512-30B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 24.4GB | 26GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference.| `h2ogptq-oasst1-512-30B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 36.6GB | 39GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. | ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 8 -m h2ogptq-oasst1-512-30B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Write a story about llamas ### Response:" ``` Change `-t 12` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual. Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files. 
# Original h2oGPT Model Card ## Summary H2O.ai's `h2oai/h2ogpt-research-oig-oasst1-512-30b` is a 30 billion parameter instruction-following large language model for research use only. Due to the license attached to LLaMA models by Meta AI it is not possible to directly distribute LLaMA-based models. Instead we provide LORA weights. - Base model: [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf) - Fine-tuning dataset: [h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2](https://huggingface.co/datasets/h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2) - Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt) - Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b/blob/main/llama-30b-hf.h2oaih2ogpt-oig-oasst1-instruct-cleaned-v2.2.0_epochs.131f6d098b43236b5f91e76fc074ad089d6df368.llama30b_17.zip) The model was trained using h2oGPT code as: ```python torchrun --nproc_per_node=8 finetune.py --base_model=decapoda-research/llama-30b-hf --micro_batch_size=1 --batch_size=8 --cutoff_len=512 --num_epochs=2.0 --val_set_size=0 --eval_steps=100000 --save_steps=17000 --save_total_limit=20 --prompt_type=plain --save_code=True --train_8bit=False --run_id=llama30b_17 --llama_flash_attn=True --lora_r=64 --lora_target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'] --learning_rate=2e-4 --lora_alpha=32 --drop_truncations=True --data_path=h2oai/h2ogpt-oig-oasst1-instruct-cleaned-v2 --data_mix_in_path=h2oai/openassistant_oasst1_h2ogpt --data_mix_in_factor=1.0 --data_mix_in_prompt_type=plain --data_mix_in_col_dict={'input': 'input'} ``` On h2oGPT Hash: 131f6d098b43236b5f91e76fc074ad089d6df368 Only the last checkpoint at epoch 2.0 and step 137,846 is provided in this model repository because the LORA state is large enough and there are enough checkpoints to make total run 19GB. Feel free to request additional checkpoints and we can consider adding more. ## Chatbot - Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt) [![H2O.ai GitHub](https://user-images.githubusercontent.com/6147661/232930822-e7170e4d-8aa1-4f7a-ad70-ece9cdd8b0cb.png)](https://github.com/h2oai/h2ogpt) ## Usage: ### Usage as LORA: ### Build HF model: Use: https://github.com/h2oai/h2ogpt/blob/main/export_hf_checkpoint.py and change: ```python BASE_MODEL = 'decapoda-research/llama-30b-hf' LORA_WEIGHTS = '<lora_weights_path>' OUTPUT_NAME = "local_h2ogpt-research-oasst1-512-30b" ``` where `<lora_weights_path>` is a directory of some name that contains the files in this HF model repository: * adapter_config.json * adapter_model.bin * special_tokens_map.json * tokenizer.model * tokenizer_config.json Once the HF model is built, to use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. 
```bash pip install transformers==4.28.1 pip install accelerate==0.18.0 ``` ```python import torch from transformers import pipeline generate_text = pipeline(model="local_h2ogpt-research-oasst1-512-30b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") res = generate_text("Why is drinking water so healthy?", max_new_tokens=100) print(res[0]["generated_text"]) ``` Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b/blob/main/h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python import torch from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("local_h2ogpt-research-oasst1-512-30b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("local_h2ogpt-research-oasst1-512-30b", torch_dtype=torch.bfloat16, device_map="auto") generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) res = generate_text("Why is drinking water so healthy?", max_new_tokens=100) print(res[0]["generated_text"]) ``` ## Model Architecture with LORA and flash attention ``` PeftModelForCausalLM( (base_model): LoraModel( (model): LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 6656, padding_idx=31999) (layers): ModuleList( (0-59): 60 x LlamaDecoderLayer( (self_attn): LlamaAttention( (q_proj): Linear( in_features=6656, out_features=6656, bias=False (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=6656, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=6656, bias=False) ) ) (k_proj): Linear( in_features=6656, out_features=6656, bias=False (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=6656, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=6656, bias=False) ) ) (v_proj): Linear( in_features=6656, out_features=6656, bias=False (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=6656, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=6656, bias=False) ) ) (o_proj): Linear( in_features=6656, out_features=6656, bias=False (lora_dropout): ModuleDict( (default): Dropout(p=0.05, inplace=False) ) (lora_A): ModuleDict( (default): Linear(in_features=6656, out_features=64, bias=False) ) (lora_B): ModuleDict( (default): Linear(in_features=64, out_features=6656, bias=False) ) ) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=6656, out_features=17920, bias=False) (down_proj): Linear(in_features=17920, out_features=6656, bias=False) (up_proj): Linear(in_features=6656, out_features=17920, bias=False) (act_fn): SiLUActivation() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=6656, out_features=32000, bias=False) ) ) ) trainable params: 204472320 || all params: 32733415936 || trainable%: 0.6246592790675496 ``` ## Model Configuration ```json { "base_model_name_or_path": "decapoda-research/llama-30b-hf", "bias": "none", "fan_in_fan_out": false, "inference_mode": true, "init_lora_weights": 
true, "lora_alpha": 32, "lora_dropout": 0.05, "modules_to_save": null, "peft_type": "LORA", "r": 64, "target_modules": [ "q_proj", "k_proj", "v_proj", "o_proj" ], "task_type": "CAUSAL_LM" ``` ## Model Validation Classical benchmarks align with base LLaMa 30B model, but are not useful for conversational purposes. One could use GPT3.5 or GPT4 to evaluate responses, while here we use a [RLHF based reward model](OpenAssistant/reward-model-deberta-v3-large-v2). This is run using h2oGPT: ```python python generate.py --base_model=decapoda-research/llama-30b-hf --gradio=False --infer_devices=False --eval_sharegpt_prompts_only=100 --eval_sharegpt_as_output=False --lora_weights=llama-30b-hf.h2oaih2ogpt-oig-oasst1-instruct-cleaned-v2.2.0_epochs.131f6d098b43236b5f91e76fc074ad089d6df368.llama30b_17 ``` So the model gets a reward model score mean of 0.55 and median of 0.58. This compares to our [20B model](https://huggingface.co/h2oai/h2ogpt-oasst1-512-20b) that gets 0.49 mean and 0.48 median or [Dollyv2](https://huggingface.co/databricks/dolly-v2-12b) that gets 0.37 mean and 0.27 median. [Logs](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b/blob/main/score_llama30b_jon17d.log) and [prompt-response pairs](https://huggingface.co/h2oai/h2ogpt-research-oig-oasst1-512-30b/blob/main/df_scores_100_100_1234_False_llama-30b-hf_llama-30b-hf.h2oaih2ogpt-oig-oasst1-instruct-cleaned-v2.2.0_epochs.131f6d098b43236b5f91e76fc074ad089d6df368.llama30b_17.parquet) The full distribution of scores is shown here: ![image info](df_scores_100_100_1234_False_llama-30b-hf_llama-30b-hf.h2oaih2ogpt-oig-oasst1-instruct-cleaned-v2.2.0_epochs.131f6d098b43236b5f91e76fc074ad089d6df368.llama30b_17.png) Same plot for our h2oGPT 20B: ![image info](df_scores_100_100_1234_False_h2ogpt-oasst1-512-20b_.png) Same plot for DB Dollyv2: ![image info](df_scores_100_100_1234_False_dolly-v2-12b_.png) ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - The LORA contained in this repository is only for research (non-commercial) purposes. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. 
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
Bala/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T17:02:10Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 580.00 +/- 254.98 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaokaobishuati -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gaokaobishuati -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gaokaobishuati ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
BatuhanYilmaz/bert-finetuned-nerxD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T17:28:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 252.56 +/- 20.25 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
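Editor's note: the usage section of this card is left as a TODO with only an import stub; a hedged completion is sketched below. The repo id and filename are placeholders, since the card does not state where the trained agent was pushed, and the environment extra (`gymnasium[box2d]`) is an assumption about the local setup.

```python
import gymnasium as gym  # LunarLander-v2 also needs: pip install "gymnasium[box2d]"
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename -- adjust to the actual Hub location of this agent.
checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint, print_system_info=True)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```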
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
--- language: - en pipeline_tag: summarization metrics: - bleu tags: - code ---
BatuhanYilmaz/dummy-model
[ "tf", "camembert", "fill-mask", "transformers", "generated_from_keras_callback", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "CamembertForMaskedLM" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2023-05-06T17:36:26Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# Assumes `load_from_hub` is the Deep RL course helper that downloads and unpickles the saved Q-table dict.
model = load_from_hub(repo_id="BartekSadlej/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
BatuhanYilmaz/marian-finetuned-kde4-en-to-fr
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola_HW2_sepehr_bakhshi_dropout_00 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5909585115904812 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola_HW2_sepehr_bakhshi_dropout_00 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.3418 - Matthews Correlation: 0.5910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.9628623388222396e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.484 | 1.0 | 535 | 0.4557 | 0.5053 | | 0.3013 | 2.0 | 1070 | 0.4224 | 0.5711 | | 0.1949 | 3.0 | 1605 | 0.8633 | 0.5523 | | 0.1399 | 4.0 | 2140 | 0.7826 | 0.5858 | | 0.0933 | 5.0 | 2675 | 0.9575 | 0.5846 | | 0.0607 | 6.0 | 3210 | 1.0032 | 0.5694 | | 0.0554 | 7.0 | 3745 | 1.2276 | 0.5702 | | 0.0368 | 8.0 | 4280 | 1.2437 | 0.5761 | | 0.0303 | 9.0 | 4815 | 1.2978 | 0.5889 | | 0.0146 | 10.0 | 5350 | 1.3418 | 0.5910 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BatuhanYilmaz/mlm-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit language: - en library_name: tensorflowtts tags: - biology - medical --- # Brain Tumor Classification (MRI) | AI Model This is a deep learning model that can classify MRI images of the brain into four categories: glioma tumor, meningioma tumor, no tumor, and pituitary tumor. The model was trained on the Images Dataset "Brain Tumor Classification (MRI)" From Kaggle by SARTAJ under the CC0: Public Domain License. Source Files: https://github.com/ShabGaming/Brain-Tumor-Classification-AI-Model ## Model The model is a convolutional neural network (CNN) with the following architecture: ``` Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 1248, 1248, 32) 896 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 624, 624, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 622, 622, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 311, 311, 64) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 309, 309, 128) 73856 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 154, 154, 128) 0 _________________________________________________________________ flatten (Flatten) (None, 307328) 0 _________________________________________________________________ dense (Dense) (None, 128) 39338112 _________________________________________________________________ dropout (Dropout) (None, 128) 0 _________________________________________________________________ dense_1 (Dense) (None, 4) 516 ================================================================= Total params: 39,436,876 Trainable params: 39,436,876 Non-trainable params: 0 ``` The model was trained using TensorFlow and achieved an accuracy of over 95% on the validation set. ## GUI In addition to the model, we have also provided a graphical user interface (GUI) that allows users to upload an MRI image and get a prediction from the model. The GUI was built using the Tkinter library in Python. To use the GUI, simply run the gui.py file and a window will appear. Click the "Choose File" button to select an MRI image from your computer, and then click the "Predict" button to get the model's prediction. The GUI will display the selected image as well as the predicted class. ## Usage To use the model and GUI, follow these steps: - Clone or download the GitHub repository containing the model and GUI files. - Install the necessary Python libraries. - Train the model by running 'BrainTumorMRIDetection.ipynb'. This will save the trained model as a .h5 file in the repository directory (You can also just download the model, more information down below). - Run the GUI by running gui.py. This will open a window where you can upload an MRI image and get a prediction from the model. ## Credits Muhammad Shahab Hasan (Shab) - https://www.fiverr.com/best_output - https://www.youtube.com/Shabpassiongamer - https://medium.com/@ShahabH
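For scripted use without the Tkinter GUI, here is a minimal sketch of loading the trained network and classifying a single scan. The filename `model.h5`, the [0, 1] rescaling, and the alphabetical class order are assumptions, not taken from the repository; the 1250×1250 RGB input size is inferred from the summary above (the first Conv2D reports a 1248×1248 output and 896 parameters, which implies a 3-channel 1250×1250 input under 3×3 valid convolutions).

```python
import numpy as np
import tensorflow as tf

# Assumed label order (alphabetical, matching the four categories listed in the card).
CLASS_NAMES = ["glioma_tumor", "meningioma_tumor", "no_tumor", "pituitary_tumor"]

# Load the network saved by the training notebook (the filename is an assumption).
model = tf.keras.models.load_model("model.h5")

def classify_scan(image_path):
    # Resize to the input size implied by the model summary.
    img = tf.keras.utils.load_img(image_path, target_size=(1250, 1250))
    x = tf.keras.utils.img_to_array(img) / 255.0  # rescaling to [0, 1] is an assumption
    x = np.expand_dims(x, axis=0)
    probs = model.predict(x)[0]
    return CLASS_NAMES[int(np.argmax(probs))], float(np.max(probs))

print(classify_scan("example_mri.jpg"))
```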
BatuhanYilmaz/mt5-small-finetuned-amazonbooks-en-es
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T17:42:14Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.74
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# Assumes `load_from_hub` is the Deep RL course helper that downloads and unpickles the saved Q-table dict.
model = load_from_hub(repo_id="BartekSadlej/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Beatriz/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T17:56:45Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# Assumes `load_from_hub` is the Deep RL course helper that downloads and unpickles the saved Q-table dict.
model = load_from_hub(repo_id="ngkuissi/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Beelow/model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language:
- en
metrics:
- rouge
---

# Personalised opener
This model creates an opener based on a provided interest.

### Model input
> [INTEREST]

### Example
> dancing

### Output
> What's your favorite dance move to make people laugh or cry?

### How to use in code
```python
import nltk
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("njvdnbus/personalised_opener-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("njvdnbus/personalised_opener-t5-large")

def use_model(text):
    inputs = ["" + text]
    inputs = tokenizer(inputs, truncation=True, return_tensors="pt")
    output = model.generate(**inputs,
                            num_beams=1,
                            do_sample=True,
                            min_length=10,
                            max_length=256)
    decoded_output = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
    predicted_interests = nltk.sent_tokenize(decoded_output.strip())[0]
    return predicted_interests

text = "tennis"
print(use_model(text))
```
> Do you think tennis is the most exciting sport out there?

## Smaller model
Fine-tuned T5-large version can be found [here](https://huggingface.co/njvdnbus/personalised_opener-t5-large).
BenGeorge/MyModel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T18:08:27Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 9.000000136438757e-05 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
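If the optimizer ever needs to be rebuilt outside the saved Keras model, a small sketch constructing Adam from the values in the table above; only the core arguments are shown, and this is purely illustrative since the card does not describe the model or data it was used with:

```python
import tensorflow as tf

# Adam configured with the values listed in the hyperparameter table above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=9.000000136438757e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```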
BenWitter/DialoGPT-small-Tyrion
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 257.07 +/- 28.98
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
Benicio/t5-small-finetuned-en-to-ru
[ "pytorch", "tensorboard", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
50
null
---
tags:
- autotrain
- text-classification
language:
- es
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Venkatakrishnan-Ramesh/autotrain-data-hate-speech
co2_eq_emissions:
  emissions: 0.3353050644031622
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 56013130185
- CO2 Emissions (in grams): 0.3353

## Validation Metrics

- Loss: 1.123
- Accuracy: 0.501
- Macro F1: 0.370
- Micro F1: 0.501
- Weighted F1: 0.458
- Macro Precision: 0.564
- Micro Precision: 0.501
- Weighted Precision: 0.548
- Macro Recall: 0.381
- Micro Recall: 0.501
- Weighted Recall: 0.501

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Venkatakrishnan-Ramesh/autotrain-hate-speech-56013130185
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Venkatakrishnan-Ramesh/autotrain-hate-speech-56013130185", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Venkatakrishnan-Ramesh/autotrain-hate-speech-56013130185", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
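The Python snippet above stops at the raw model outputs. A short, hedged continuation showing how those logits could be turned into a predicted label; the label names come from the checkpoint's config, so they are whatever AutoTrain stored there:

```python
import torch

# Continuing from `outputs = model(**inputs)` above:
probabilities = torch.softmax(outputs.logits, dim=-1)
predicted_class_id = int(probabilities.argmax(dim=-1))
print(model.config.id2label[predicted_class_id], probabilities.max().item())
```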
BigSalmon/DaBlank
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
4
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: aprilzoo/distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aprilzoo/distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.2701 - Validation Loss: 2.6432 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -997, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.2701 | 2.6432 | 0 | ### Framework versions - Transformers 4.29.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
BigSalmon/Flowberta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2023-05-07T15:39:17Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05 This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2708 - Exact Match: 52.7487 - F1: 60.8071 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 | |:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:| | 6.4745 | 0.49 | 36 | 2.5724 | 35.6021 | 37.8405 | | 3.5197 | 0.98 | 72 | 1.9912 | 28.0105 | 35.4278 | | 2.1756 | 1.48 | 108 | 1.6669 | 35.7330 | 43.0612 | | 2.1756 | 1.97 | 144 | 1.5047 | 39.3979 | 46.1664 | | 1.6725 | 2.46 | 180 | 1.3222 | 45.9424 | 52.9355 | | 1.336 | 2.95 | 216 | 1.3205 | 44.1099 | 51.6851 | | 1.176 | 3.45 | 252 | 1.2526 | 47.5131 | 55.3298 | | 1.176 | 3.94 | 288 | 1.2778 | 47.3822 | 54.7110 | | 1.1089 | 4.44 | 324 | 1.2291 | 49.8691 | 57.2303 | | 0.967 | 4.93 | 360 | 1.1944 | 52.4869 | 60.2202 | | 0.967 | 5.42 | 396 | 1.2122 | 53.7958 | 61.3033 | | 0.9202 | 5.91 | 432 | 1.2348 | 54.0576 | 61.6263 | | 0.8719 | 6.41 | 468 | 1.2206 | 55.2356 | 62.9267 | | 0.8205 | 6.9 | 504 | 1.2472 | 53.9267 | 61.6359 | | 0.8205 | 7.4 | 540 | 1.2764 | 52.3560 | 60.2681 | | 0.7907 | 7.89 | 576 | 1.2382 | 55.3665 | 63.0145 | | 0.7533 | 8.38 | 612 | 1.2812 | 52.4869 | 60.4214 | | 0.7533 | 8.87 | 648 | 1.2474 | 53.1414 | 60.6338 | | 0.7345 | 9.37 | 684 | 1.2708 | 52.7487 | 60.8071 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.2.0 - Tokenizers 0.13.2
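The card above lists only metrics. For inference, an extractive-QA checkpoint like this is typically called through the `question-answering` pipeline; a minimal sketch, where the repo id is a placeholder and the Indonesian question/context pair is purely illustrative:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub id this checkpoint was pushed under.
qa = pipeline(
    "question-answering",
    model="<user>/fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-with-freeze-LR-1e-05",
)

result = qa(
    question="Di mana ibu kota Indonesia?",
    context="Jakarta adalah ibu kota Indonesia dan kota terbesar di negara itu.",
)
print(result["answer"], result["score"])
```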
BigSalmon/FormalRobertaaa
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5108235781406687 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4659 - Matthews Correlation: 0.5108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4908 | 1.0 | 535 | 0.4659 | 0.5108 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BigSalmon/FroBurta
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-06T18:58:02Z
--- license: apache-2.0 tags: - osu --- CircleViT v2.0 with 3.4M parameters, trained on osu! gameplay. See [here](https://www.youtube.com/watch?v=mp2yO6Tnhqw) for a demonstration of `run3` gameplay. You will need a modified version of osu!lazer to run CircleViT, please contact for more details. - `circlevit-v2-0-run1.pt` was trained on osu! 800k (w/ emsu-processor 0.1.0) for 4 epochs and achieves an accuracy of 82.1%. - `circlevit-v2-0-run2.pt` was trained on osu! 1M (w/ emsu-processor 0.1.2) for 4 epochs and achieves an accuracy of 92.6%. - `circlevit-v2-0-run3.pt` was trained on osu! 1M (w/ emsu-processor 0.1.2) for 7 epochs and achieves an accuracy of 97.4%.
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bangla-para-v2-270000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla-para-v2-270000 This model is a fine-tuned version of [mHossain/bangla-para-v2-240000](https://huggingface.co/mHossain/bangla-para-v2-240000) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8960 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 17.51 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.1072 | 1.0 | 3375 | 0.8960 | 0.0 | 0.0 | 0.0 | 0.0 | 17.51 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
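The card gives no usage snippet; since this appears to be a Bengali paraphrase (sequence-to-sequence) model, a hedged sketch of generation is shown below. The repo id is a placeholder, the example sentence is illustrative, and any task prefix or special tokenizer settings of the base model are not stated in the card, so they are omitted here:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id -- substitute the Hub id of this checkpoint.
model_id = "<user>/bangla-para-v2-270000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Bengali input sentence; the expected output is a paraphrase of it.
text = "আমি বই পড়তে ভালোবাসি।"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```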
BigSalmon/GPTNeo350MInformalToFormalLincoln2
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5365723103616664 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4582 - Matthews Correlation: 0.5366 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4912 | 1.0 | 535 | 0.4582 | 0.5366 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BigSalmon/GPTNeo350MInformalToFormalLincoln4
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola_HW2_sepehr_bakhshi_dropout_05_16 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5905209134554644 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola_HW2_sepehr_bakhshi_dropout_05_16 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7283 - Matthews Correlation: 0.5905 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.0356344528514278e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5012 | 1.0 | 535 | 0.4807 | 0.4912 | | 0.3376 | 2.0 | 1070 | 0.4363 | 0.5882 | | 0.2395 | 3.0 | 1605 | 0.6192 | 0.5351 | | 0.1814 | 4.0 | 2140 | 0.6754 | 0.5931 | | 0.1554 | 5.0 | 2675 | 0.7283 | 0.5905 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BigSalmon/InfillFormalLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: AzzamRadman/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
BigSalmon/MrLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-05-06T20:11:36Z
--- license: other tags: - generated_from_trainer datasets: - scene_parse_150 model-index: - name: none-segformer-b0-scene-parse-150-cvfinal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # none-segformer-b0-scene-parse-150-cvfinal This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set: - Loss: 2.7252 - Mean Iou: 0.0740 - Mean Accuracy: 0.1399 - Overall Accuracy: 0.5014 - Per Category Iou: [0.48516085209240617, 0.48972620283996443, 0.8461720523595614, 0.3492916550456616, 0.57616479445388, 0.0, 0.1380369639332496, 0.0, 0.0, 0.06175407695344529, 0.05268220495745468, 0.0, 0.46499631540162123, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.014604701379005741, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] - Per Category Accuracy: [0.9196892474715829, 0.9582061399112456, 0.933910864697729, 0.8767355657473141, 0.698410787382615, nan, 0.2478126973082325, 0.0, 0.0, 0.3181569271688962, 0.11338181432135463, 0.0, 0.792386293263607, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.018925518925518924, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | 
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 4.7335 | 1.0 | 20 | 4.8796 | 0.0116 | 0.0631 | 0.2288 | [0.26602580332229103, 0.13503996080472794, 0.5126324717493553, 0.03538599823621193, 0.0, 0.0, 0.23201003311621884, 0.0, 0.0, 0.0007549500703476202, 0.0007177646757241733, 0.0, 0.1337408194640391, 0.0, 0.0, 0.0006260434056761269, 0.0, 0.0, 0.003776113039770997, 0.0018461084034854527, 0.0, 0.0, 0.0, 0.0, 0.004682746892141129, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.037279151943462895, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0069502929938564375, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0004982250731768076, 0.015501624105421608, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0] | [0.4298642826228788, 0.1607421109703757, 0.637978462522657, 0.03745321713531803, 0.0, nan, 0.5172729264112773, 0.0, 0.0, 0.0008605178753031369, 0.0007431392324433356, 0.0, 0.6180416982040873, 0.0, 0.0, 0.004047976011994003, 0.0, 0.0, 0.00394896074393325, 0.004025764895330112, 0.0, nan, 0.0, nan, 0.004973036223036223, nan, nan, nan, nan, 
0.0, 0.0, nan, 0.0, nan, 0.12507409602845287, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.007277621777169246, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0006330115524608325, 0.09684870483418578, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 4.6529 | 2.0 | 40 | 4.5475 | 0.0247 | 0.1009 | 0.3676 | [0.3492776793903353, 0.3337007290250834, 0.7135686182394738, 0.30712523110007506, 0.17802442220240258, 0.0, 0.19822838291071956, 0.0, 0.0, 0.006058044519582981, 0.0, 0.0, 0.1319319517090062, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0008831521739130435, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0017985144961997735, 0.0, nan, nan, 0.0, 0.0, 0.000757346258709482, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0028642717677982914, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0] | [0.6289293550463911, 0.6152304061380888, 0.8097451753918328, 0.35633958301546415, 0.194654466650614, nan, 0.7448168335330576, 0.0, 0.0, 0.006727685206915434, 0.0, 0.0, 0.8730131425032684, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.000992063492063492, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0019371181642080166, nan, nan, nan, nan, 0.0, 0.0007711289327575571, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0029890232299087977, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 4.2773 | 3.0 | 60 | 4.0639 | 0.0440 | 0.1247 | 0.4361 | [0.4042366302315495, 0.41260752956121216, 0.6956280974529252, 0.4744124360789115, 0.5174210871265778, 0.0, 0.2321725137895724, 0.0, 0.0, 0.001339366515837104, 0.001013299556681444, 0.0, 0.1994971186483083, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02148978246539222, 0.0, 0.0, 0.0, 0.0, 0.029444459507089987, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 8.193214197929848e-05, nan, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 
nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.7164404710381314, 0.8109822217767289, 0.8638927390979848, 0.6677855661960707, 0.5892968938117024, nan, 0.6953238754236087, 0.0, 0.0, 0.0014472346084643667, 0.0010616274749190508, 0.0, 0.8230062616115048, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.026247987117552336, 0.0, nan, 0.0, nan, 0.041437728937728936, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 8.354412754403471e-05, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 4.1933 | 4.0 | 80 | 3.6891 | 0.0627 | 0.1355 | 0.4699 | [0.43517016314839757, 0.40520111317002927, 0.7840096282049769, 0.44564120150115194, 0.5733062934147058, 0.0, 0.21302609539102332, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2286940122998202, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0075297225891677675, 0.0, 0.0, 0.0, 0.0, 0.043723122114094314, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8028039214419675, 0.8179622930354198, 0.8855634929096918, 0.8280993367378993, 0.7283529978328919, nan, 0.6735217184016906, 0.0, 0.0, 0.0, 0.0, 0.0, 0.894928782770247, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.009178743961352657, 0.0, nan, 0.0, nan, 0.05263024013024013, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 3.521 | 5.0 | 100 | 3.5680 | 0.0617 | 0.1323 | 0.4725 | [0.47426346531460467, 0.3756216836005744, 0.7932022625900177, 0.3481105662362344, 0.5801636113930854, 0.0, 0.15677184088230717, 0.0, 0.0, 0.0, 0.008441786844882167, 0.0, 0.25353794767478555, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.031425970259972354, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 1.3880470640491183e-05, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8201394855099144, 0.9462917232392634, 0.8484913103742403, 0.9206329261616062, 0.7018540813869492, nan, 0.36339505456981974, 0.0, 0.0, 0.0, 0.01146557672912575, 0.0, 0.8983692286520333, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.04741554741554742, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 1.392402125733912e-05, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 3.8066 | 6.0 | 120 | 3.3481 | 0.0658 | 0.1298 | 0.4773 | [0.4451362791719356, 0.44091462229009504, 0.8264357123511887, 0.3317407638164916, 0.59153012013538, 0.0, 0.06888716899773342, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2812393731777613, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.042233144193968744, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8072330923881564, 0.9195059275931512, 0.9353795713828766, 0.9653200980084187, 0.7427883457741392, nan, 0.16056541291378354, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8678352714511801, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.05235042735042735, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 3.3965 | 7.0 | 140 | 3.2030 | 0.0667 | 0.1225 | 0.4655 | [0.420408008087244, 0.481311168681635, 0.7398386056180811, 0.31060386991657574, 0.5916529721214714, 0.0, 0.008396429119557989, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3534511051812954, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.03080386434082219, nan, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9285971133949584, 0.9141934190503094, 0.7702793474784092, 0.8965526525996463, 0.7325427401878161, nan, 0.016918576932493202, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8521812426890525, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.03463319088319088, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 3.6942 | 8.0 | 160 | 3.1186 | 0.0664 | 0.1250 | 0.4901 | [0.4677720939201501, 0.4484987119600654, 0.8553566495523993, 0.2849493094206236, 0.5487920811486254, 0.0, 0.0014844575679098999, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3853302757039157, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0625742612828454, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9297426257641235, 0.9558505936705516, 0.9161157905960123, 0.9217637925308969, 0.6741632554779677, nan, 0.0023709201489556355, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7688708456615977, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.08171805046805047, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 3.5189 | 9.0 | 180 | 3.0406 | 0.0646 | 0.1276 | 0.4861 | [0.4621877645769044, 0.47665485057669216, 0.8447637704259798, 0.27353983986308267, 0.5388896405538267, 0.0, 0.01346168308641047, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3231348814229249, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.041031878120599156, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
0.0001856536167645216, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8968694352753214, 0.907942336957017, 0.925319863524896, 0.9410154461985837, 0.7192872622200819, nan, 0.02075843673895396, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9000550471341086, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.048916361416361416, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0001856536167645216, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.9829 | 10.0 | 200 | 3.1090 | 0.0661 | 0.1272 | 0.4812 | [0.5191038808054473, 0.45321063834928527, 0.8387389258379386, 0.2550185661375535, 0.5706876054609603, 0.0, 0.0, 0.0, 0.0, 0.0, 0.011067405870267704, 0.0, 0.34958018471872376, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.041148107944361696, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8900351189976337, 0.9399905231408499, 0.8827327007143618, 0.9621967527979968, 0.7166506140139658, nan, 0.0, 0.0, 0.0, 0.0, 0.028451616327830564, 0.0, 0.8594577857290305, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.062194749694749696, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.8942 | 11.0 | 220 | 2.9804 | 0.0689 | 0.1321 | 0.4936 | [0.5088402771712589, 0.4066365442855555, 0.8731627211650969, 0.30338188655945664, 0.5316464254625052, 0.0, 0.1177612092738229, 0.0, 0.0, 0.0, 0.00910533063867701, 0.0, 0.38959143968871596, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.029298853203515534, nan, 0.0, 0.0, nan, 0.0, 
0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8906789314006042, 0.9500642421702007, 0.9169954152894765, 0.9701935935522666, 0.7573561281001685, nan, 0.2079967013284884, 0.0, 0.0, 0.0, 0.011120547799777059, 0.0, 0.7991811738801349, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.045622201872201874, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.7201 | 12.0 | 240 | 2.8938 | 0.0689 | 0.1276 | 0.4884 | [0.49859850341662776, 0.47117914374551545, 0.8606668137094586, 0.2782023057382922, 0.5850470064324592, 0.0, 0.0003655450370231512, 0.0, 0.0, 0.0, 0.020233693867389855, 0.0, 0.3704016085427476, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.014074614865676608, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9088973151515543, 0.9603976635897249, 0.904904574048406, 0.9583733474542044, 0.7117625812665543, nan, 0.0005025319880938576, 0.0, 0.0, 0.0, 0.05069271192738468, 0.0, 0.8461088557076997, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.019370675620675622, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 3.5131 | 13.0 | 260 | 2.8989 | 0.0687 | 0.1255 | 0.4861 | [0.493514093098358, 0.4578591089951369, 0.8434796025273804, 0.3360078087526457, 0.5465601579679576, 0.0, 0.011196847808241667, 0.0, 0.0, 0.0, 
0.005744272281816978, 0.0, 0.3760110149488592, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.019655201132672606, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9206926474039718, 0.9389927192207106, 0.9034625226569997, 0.8990656889758479, 0.7331447146640983, nan, 0.021969667684616077, 0.0, 0.0, 0.0, 0.009209618344922767, 0.0, 0.8221117456822404, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.024013024013024013, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.7687 | 14.0 | 280 | 2.8508 | 0.0756 | 0.1365 | 0.5063 | [0.48129422405879024, 0.45942192498346157, 0.8903460460852659, 0.40217135424355716, 0.5147232560825036, 0.0, 0.1818769738189422, 0.0, 0.0, 0.0009107783119473362, 0.020630762255025442, 0.0, 0.4228314371743324, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.027973084594153555, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9188925565381411, 0.9239345367729472, 0.9630930802857447, 0.9046482197829814, 0.6976402600529737, nan, 0.4348576803638847, 0.0, 0.0, 0.00309004146131581, 0.04347364509793514, 0.0, 0.8068361659671094, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.03532000407000407, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 3.1372 | 15.0 | 300 | 2.8200 
| 0.0703 | 0.1280 | 0.4870 | [0.49449563777050903, 0.4802672707435518, 0.8720511665164714, 0.2774592800349947, 0.5330857772718238, 0.0, 0.03201641586867305, 0.0, 0.0, 0.0, 0.023100078878318123, 0.0, 0.4300235829877206, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.02029611861287454, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8903021181212736, 0.920562961882979, 0.9285824714788357, 0.9677792836051302, 0.7103660004815796, nan, 0.07539268364967078, 0.0, 0.0, 0.0, 0.038085885662720954, 0.0, 0.8186885020298631, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.024966931216931217, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.486 | 16.0 | 320 | 2.7598 | 0.0789 | 0.1432 | 0.5142 | [0.4991798514691637, 0.43418590272907964, 0.86210232576229, 0.4136572394475814, 0.5251975880164801, 0.0, 0.35446049906302396, 0.0, 0.0, 0.0, 0.03666287722199256, 0.0, 0.4233234225305583, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9154172614933433, 0.9534950474298576, 0.9453619788890073, 0.9158940575664832, 0.6952323621478449, nan, 0.7409383174198204, 0.0, 0.0, 0.0, 0.07532246934550667, 0.0, 0.7715027867611642, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 
nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.7467 | 17.0 | 340 | 2.7457 | 0.0695 | 0.1255 | 0.4912 | [0.4558465734083546, 0.47018152124441115, 0.8197800939182224, 0.3284440295543021, 0.5600445316848798, 0.0, 0.0521584075176449, 0.0, 0.0, 0.00041090773253642135, 0.0, 0.0, 0.44074943276431233, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9343332881157398, 0.9601744106578216, 0.915769271777375, 0.8905034150369326, 0.6904406453166386, nan, 0.09912765601041143, 0.0, 0.0, 0.0012907768129547055, 0.0, 0.0, 0.7785901052776439, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.2817 | 18.0 | 360 | 2.8545 | 0.0718 | 0.1363 | 0.4842 | [0.5250536482085539, 0.45146468669356504, 0.8169652139242147, 0.31774934039309877, 0.5361035118830144, 0.0, 0.023971370437490595, 0.0, 0.0, 0.0, 0.1170712306271169, 0.0, 0.4165009201256576, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.02620465995508268, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8671743956453304, 0.9477360330232092, 0.890249493549419, 0.9522612839820856, 0.7243077293522755, nan, 0.04721223600963831, 0.0, 0.0, 0.0, 0.484367535431817, 0.0, 0.7708835065024428, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.038436100936100934, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.942 | 19.0 | 380 | 2.7746 | 0.0731 | 0.1391 | 0.4996 | [0.47438098986192484, 0.46805161724419664, 0.8401448805276246, 0.3948473962651032, 0.5195866943099121, 0.0, 0.1753525864646666, 0.0, 0.0, 0.004733862429232695, 0.0070794392523364485, 0.0, 0.379712404037255, 0.0, 0.0, 0.0, 0.0, 0.025598219254312743, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.00010164874260505397, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9213902902754182, 0.8874304043156158, 0.9527748160784731, 0.8848849837101392, 0.7452684806164218, nan, 0.36271212648343576, 0.0, 0.0, 0.021160916842681687, 0.00804182812251181, 0.0, 0.8380754145737288, 0.0, 0.0, 0.0, 0.0, 0.2222222222222222, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.00012718762718762718, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.9623 | 20.0 | 400 | 2.7914 | 0.0767 | 0.1374 | 0.5068 | [0.4780680704589745, 0.47719941783806225, 0.8714037920928285, 0.3540430160558594, 0.569819461565992, 0.0, 0.2643097199341021, 0.0, 0.0, 0.004042103722942139, 0.01566662062705945, 0.0, 0.417884019477645, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9222817228333775, 0.9530986595711721, 0.9320183388420941, 0.9406564410019835, 0.6900674211413436, nan, 0.516821936165552, 0.0, 0.0, 0.01728858640381757, 0.02107330537714316, 0.0, 0.7794674189774995, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.3827 | 21.0 | 420 | 2.7929 | 0.0736 | 0.1346 | 0.4898 | [0.4991783960640681, 0.4633859904157219, 0.8432330912135406, 0.28499998671601223, 0.5906476321044318, 0.0, 0.10364698859993662, 0.0, 0.0, 0.0, 0.09607177161416816, 0.0, 0.4256818538666864, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.003464260055703727, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.8856662381761377, 0.9600012757310394, 0.893978569143832, 0.9627801362424722, 0.715795810257645, nan, 0.18123365160359245, 0.0, 0.0, 0.0, 0.2574977440416158, 0.0, 0.7928335512282392, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.004477004477004477, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.1266 | 22.0 | 440 | 2.7964 | 0.0733 | 0.1336 | 0.4945 | [0.479201681139934, 0.46803094759478286, 0.8226404746106327, 0.3305701897826291, 0.5371052834092911, 0.0, 0.17402634630669603, 0.0, 0.0, 0.0, 0.06292154139160569, 0.0, 0.4987350838079176, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9270317233716822, 0.9122980472202732, 0.8955698901801897, 0.9162620378929985, 0.756260534553335, nan, 0.34419575553751597, 0.0, 0.0, 0.0, 0.11661977811985774, 0.0, 0.7426890525012042, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 
nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.9058 | 23.0 | 460 | 2.7332 | 0.0786 | 0.1430 | 0.5117 | [0.46751417639424964, 0.47414191147296786, 0.8678990374310945, 0.4613486768056043, 0.5347419596110695, 0.0, 0.3287688404715714, 0.0, 0.0, 0.002630568035675513, 0.059173180940731256, 0.0, 0.4186963040101983, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9005794311626735, 0.9310512935001504, 0.960627465614671, 0.9196995126504456, 0.6886106429087406, nan, 0.7096911361088561, 0.0, 0.0, 0.012321051396385825, 0.08216996655873454, 0.0, 0.8022947774031515, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.006 | 24.0 | 480 | 2.7379 | 0.0834 | 0.1462 | 0.5161 | [0.5059332832593822, 0.48382921889690655, 0.8841889209221592, 0.4198028943618153, 0.5703533575931118, 0.0, 0.36924429160758854, 0.0, 0.0, 0.004732153653915917, 0.057609931995837464, 0.0, 0.45736494668177025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0018850845021790457, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9079305199377289, 0.9589487976234953, 0.9450234566584924, 0.9385742108617022, 0.6906453166385745, nan, 0.7415954746350201, 0.0, 0.0, 0.022960181491042793, 0.12636021020224003, 0.0, 0.805683616596711, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 
0.0025691900691900693, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.7623 | 25.0 | 500 | 2.7353 | 0.0736 | 0.1312 | 0.4958 | [0.47366980226061, 0.48251607670604874, 0.8670598230796142, 0.30190292486058545, 0.5787168756008273, 0.0, 0.1184128306602791, 0.0, 0.0, 0.0028823771628333013, 0.038949751018560436, 0.0, 0.44208272948575866, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.005849476368766618, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9165024837378155, 0.951412872126188, 0.9248853822369123, 0.928540015616726, 0.7175294967493379, nan, 0.190459623487572, 0.0, 0.0, 0.013416255964953454, 0.057089017463771964, 0.0, 0.8028624509736462, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.006855413105413106, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.7663 | 26.0 | 520 | 2.7150 | 0.0742 | 0.1345 | 0.5018 | [0.48013744627507166, 0.4815673027261687, 0.8818191133871689, 0.3468069385556645, 0.5659960371894528, 0.0, 0.1638641429184871, 0.0, 0.0, 0.0019665355642957136, 0.048896398774949834, 0.0, 0.4406706348086923, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.00192766955460124, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9236748553575312, 0.9431023956406448, 
0.9404574048406014, 0.9015069243127294, 0.7153383096556706, nan, 0.30977875707088276, 0.0, 0.0, 0.009231009935070015, 0.09830670417750412, 0.0, 0.806165279020161, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0022893772893772895, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.9199 | 27.0 | 540 | 2.7413 | 0.0745 | 0.1350 | 0.4968 | [0.49510488371335354, 0.46754934683141397, 0.849462043859321, 0.33122249686501287, 0.5753252737986685, 0.0, 0.1391476574238492, 0.0, 0.0, 0.017918746022708244, 0.055636638828076436, 0.0, 0.41275063903556486, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.009935347189246447, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9162096459893071, 0.9637419013859907, 0.9080952127092441, 0.9032121989965805, 0.7064531663857453, nan, 0.26252786475446804, 0.0, 0.0, 0.08370492059766878, 0.10926800785604332, 0.0, 0.8027592375971926, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.013838013838013839, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.794 | 28.0 | 560 | 2.7492 | 0.0784 | 0.1431 | 0.5087 | [0.49827306897444046, 0.4712267062537749, 0.8527258988446009, 0.38926715642358, 0.5746564006620134, 0.0, 0.3509618812657759, 0.0, 0.0, 0.0041067235859124866, 0.0436436932761334, 0.0, 0.4018405165537335, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.019541860184850424, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9079865036249437, 0.9704394893430897, 0.9114297899562853, 0.9359265475367756, 0.6730315434625572, nan, 0.6342340252812246, 0.0, 0.0, 0.018814049910036768, 0.10746324114868093, 0.0, 0.8232642950526389, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.02681115181115181, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.4253 | 29.0 | 580 | 2.7020 | 0.0760 | 0.1352 | 0.5020 | [0.5031510311231457, 0.4877849933934947, 0.8587502163752813, 0.3660408701006716, 0.5600148244955283, 0.0, 0.1667567620375014, 0.0, 0.0, 0.0016477245326088704, 0.04485047749319211, 0.0, 0.426326759660093, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.005379764495786595, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9178331729185373, 0.948651825662241, 0.9388820769804883, 0.9176531830298243, 0.6913074885624849, nan, 0.3299186929014135, 0.0, 0.0, 0.00801846201987014, 0.09485641488401719, 0.0, 0.8233331039702746, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.006130443630443631, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.3029 | 30.0 | 600 | 2.7116 | 0.0759 | 0.1348 | 0.5018 | [0.49563397328545666, 0.46367195231916253, 0.8560654148152961, 0.36008673749995623, 0.5725128547973218, 0.0, 0.19332219059308975, 0.0, 0.0, 0.0022978129744368305, 0.021612771182971755, 0.0, 0.4503470471906627, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.002150631543323763, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9284851460205288, 0.9605662423342233, 0.913098411344493, 0.9225536039634173, 0.7010835540573079, nan, 0.3815377478835672, 0.0, 0.0, 0.010952045685676289, 0.04204044800679441, 0.0, 0.799146769421317, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0025055962555962557, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.2664 | 31.0 | 620 | 2.7345 | 0.0757 | 0.1423 | 0.5086 | [0.5038067007994251, 0.47491069334439895, 0.8677876544299113, 0.38322480318787056, 0.5636300805984409, 0.0, 0.26721591887967316, 0.0, 0.0, 0.023486980302009432, 0.05034503768261831, 0.0, 0.4216559817450831, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0012768303653475977, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.905785914073653, 0.9594545338569905, 0.9438266339695064, 0.9201123686265359, 0.6676498916445943, nan, 0.5596015823314907, 0.0, 0.0, 0.11589611202378158, 0.1008811508041828, 0.0, 0.8010390146562995, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.001678876678876679, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.8158 | 32.0 | 640 | 2.7678 | 0.0730 | 0.1411 | 0.5014 | [0.49905246546841314, 0.462109144067159, 0.8336621630930006, 0.3613004300113773, 0.5967025403579026, 0.0, 0.20601669089379662, 0.0, 0.0, 0.04176613817881872, 0.06829472018288126, 0.0, 0.4274557934451566, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0072899703036689335, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 
nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.915193326744484, 0.9793832751660728, 0.9032865977183069, 0.8921009881618036, 0.6888875511678305, nan, 0.35976136173283335, 0.0, 0.0, 0.20597668778846906, 0.16492382822867455, 0.0, 0.8088144223491365, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.009678978428978429, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.7327 | 33.0 | 660 | 2.7381 | 0.0804 | 0.1475 | 0.5128 | [0.498424444150801, 0.48720218166484014, 0.8630049768653525, 0.3974616262893528, 0.574922619408873, 0.0, 0.3285919648936975, 0.0, 0.0, 0.038542149097674124, 0.06470739565595561, 0.0, 0.4200233222656424, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.02335147511083947, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9015246941891086, 0.9580512297135984, 0.9466094466361019, 0.915436325940818, 0.68430050565856, nan, 0.6155759145437911, 0.0, 0.0, 0.1920519439881092, 0.14018790806306067, 0.0, 0.811687194660428, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.030614061864061865, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.7564 | 34.0 | 680 | 2.7376 | 0.0788 | 0.1413 | 0.5038 | [0.48152886069362916, 0.48194988038387154, 0.8785999592777584, 0.3634575819920126, 0.5734061468122936, 0.0, 0.2063981324219105, 0.0, 0.0, 
0.03960991466883381, 0.05569229071243309, 0.0, 0.44068977691886013, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.026782822721678864, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9126353028825139, 0.9582608140986505, 0.9316611579059602, 0.8625458853516904, 0.7066578377076812, nan, 0.3805069130361952, 0.0, 0.0, 0.19064382382852227, 0.13392430596103827, 0.0, 0.8159189430950251, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.042073667073667075, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.7323 | 35.0 | 700 | 2.6776 | 0.0782 | 0.1375 | 0.5030 | [0.4757392220257697, 0.5041899980691253, 0.8771835367312755, 0.3387514369271945, 0.5695425546901729, 0.0, 0.20578442209700468, 0.0, 0.0, 0.03306012389907801, 0.03178429534395786, 0.0, 0.46818443502381324, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.016566547356582435, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9229384545487822, 0.9517682543443198, 0.9380317731101396, 0.8701478203897002, 0.6814351071514568, nan, 0.3934052340639375, 0.0, 0.0, 0.1692873347414535, 0.05669090716067732, 0.0, 0.7694213170026836, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.023606023606023607, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | 
| 1.7502 | 36.0 | 720 | 2.6911 | 0.0786 | 0.1456 | 0.5125 | [0.4949791385929085, 0.4935428587482792, 0.8824374046486584, 0.3756147258437639, 0.5756830996096574, 0.0, 0.3131360704412819, 0.0, 0.0, 0.0461541901773269, 0.03627418092362742, 0.0, 0.4571003127016429, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.017569593581587585, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.920632357279279, 0.9604614501416973, 0.9435654120908412, 0.8801909907645913, 0.671177462075608, nan, 0.5911322432254822, 0.0, 0.0, 0.24219666744895565, 0.08609798821593503, 0.0, 0.7920938553636552, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.027574277574277575, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.8968 | 37.0 | 740 | 2.7568 | 0.0790 | 0.1435 | 0.5039 | [0.5159331555248942, 0.47422794208749053, 0.8698409286328461, 0.35261114702415314, 0.5870581913708008, 0.0, 0.19183575662322835, 0.0, 0.0, 0.04157999129214842, 0.06774659603344863, 0.0, 0.42975392505285887, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.02222145810588933, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9017701611253582, 0.9591447134616962, 0.9276015566691544, 0.9154901767203081, 0.7102937635444257, nan, 0.3507286713827361, 0.0, 0.0, 0.2017132128608308, 0.20534529433621743, 0.0, 0.8216644877176082, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.03287800162800163, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.9333 | 38.0 | 760 | 2.7121 | 0.0743 | 0.1358 | 0.5002 | [0.492245796660384, 0.4990214787841022, 0.8774645689187481, 0.314980744912293, 0.5762447101593342, 0.0, 0.12077603743611492, 0.0, 0.0, 0.019930932412432167, 0.06163556675327402, 0.0, 0.43879118066474093, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.01638768993227357, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9194373208791161, 0.9340629299897031, 0.9441491630237765, 0.8875146967752359, 0.7328076089573802, nan, 0.21650108881930755, 0.0, 0.0, 0.09481342407885474, 0.12466160624236955, 0.0, 0.8257414160875249, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.02375864875864876, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.5571 | 39.0 | 780 | 2.7319 | 0.0761 | 0.1369 | 0.4975 | [0.4956006371814093, 0.49073904702941273, 0.8705636277573597, 0.32531316601780225, 0.5835368808282982, 0.0, 0.0970667263770712, 0.0, 0.0, 0.0384180054882865, 0.06949698752977441, 0.0, 0.4282930071311814, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.024329781671985143, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9110785257341938, 0.953417592331034, 0.9363524896044354, 0.8617022231396799, 0.7104382374187335, nan, 0.16757508987591327, 0.0, 0.0, 0.18454196980364546, 0.171134348956951, 0.0, 0.8203227138237116, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.03432794057794058, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 2.3024 | 40.0 | 800 | 2.7409 | 0.0760 | 0.1362 | 0.4977 | [0.49284739640115016, 0.48300367276956246, 0.8672200718415108, 0.30880525610377874, 0.5877539279479448, 0.0, 0.10099623612024188, 0.0, 0.0, 0.03730625591049704, 0.05611049582552394, 0.0, 0.4764464313562406, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.008334900043867895, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9130487208804081, 0.9574908192926983, 0.9292541848811174, 0.9078074655130633, 0.7133999518420419, nan, 0.1801383895782597, 0.0, 0.0, 0.1820777595243683, 0.139842879133712, 0.0, 0.7883265671230991, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.01014957264957265, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.4036 | 41.0 | 820 | 2.7064 | 0.0749 | 0.1394 | 0.5031 | [0.487813536469504, 0.4977033867197311, 0.8680892671981715, 0.36664962378474075, 0.574401991146239, 0.0, 0.16103436579250163, 0.0, 0.0, 0.03611544700517247, 0.0583637967981848, 0.0, 0.44931401338275645, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0198205917325313, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9231537764226855, 0.9389972754029943, 0.9394898176777908, 0.8624471589226254, 0.7279677341680713, nan, 0.31198216655714045, 0.0, 0.0, 0.18462019870140028, 0.13517171824406815, 0.0, 0.8027936420560104, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.028833435083435083, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 
0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.6131 | 42.0 | 840 | 2.7351 | 0.0734 | 0.1381 | 0.4979 | [0.49334335533986007, 0.49400814454667413, 0.8362092623940197, 0.3448456821647818, 0.5752572249962172, 0.0, 0.13234821504699315, 0.0, 0.0, 0.030827873734365695, 0.07199641568825678, 0.0, 0.44997403995923313, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.020278572517646216, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9129647453495858, 0.9462416052341422, 0.9177684188079752, 0.8787639451081054, 0.7323621478449314, nan, 0.26001520481399876, 0.0, 0.0, 0.15184229054212625, 0.1663304846329423, 0.0, 0.8050643363379894, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.028757122507122507, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.7673 | 43.0 | 860 | 2.7602 | 0.0714 | 0.1356 | 0.4952 | [0.48490060195255197, 0.4836128787948758, 0.8416006538413178, 0.31311144666218327, 0.5821942927375631, 0.0, 0.0927357815020289, 0.0, 0.0, 0.04086304030691609, 0.06653045024915845, 0.0, 0.4366934956768567, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.015767551654681505, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9081932126238909, 0.9542695984180936, 0.9277348331378612, 0.8858812231307048, 0.7071634962677582, nan, 
0.16520416972695762, 0.0, 0.0, 0.19831025580849565, 0.12012314878709061, 0.0, 0.8045310672263125, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.022003459503459503, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.8284 | 44.0 | 880 | 2.7654 | 0.0770 | 0.1383 | 0.5009 | [0.4780241229841424, 0.4724772691732011, 0.8381134265670269, 0.3534298932331826, 0.5830118145652999, 0.0, 0.17914432925175744, 0.0, 0.0, 0.041721021986063825, 0.05542343387470998, 0.0, 0.4516751688956294, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.009997096279089061, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.917542488388768, 0.9636097720997622, 0.9276282119628958, 0.8598174458575288, 0.7022393450517698, nan, 0.33198036259615754, 0.0, 0.0, 0.20515528436204333, 0.10143850522851532, 0.0, 0.7889630496112296, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.012260887260887261, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.1011 | 45.0 | 900 | 2.7155 | 0.0732 | 0.1372 | 0.5007 | [0.48112157935578315, 0.495839975851107, 0.8487809920832402, 0.35658660696861805, 0.5784769179607591, 0.0, 0.14736378137855763, 0.0, 0.0, 0.040067667779013716, 0.04977255272641342, 0.0, 0.4330156098908689, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.009026960118346769, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9171592154532202, 0.9579555498856398, 0.9410731421260262, 0.8691964566187096, 0.6911148567300747, nan, 0.2714059298774595, 0.0, 0.0, 0.20288664632715325, 0.08944211476193004, 0.0, 0.8088316245785454, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.011408730158730158, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.5308 | 46.0 | 920 | 2.6937 | 0.0738 | 0.1383 | 0.5013 | [0.4953031030805177, 0.5002398948802333, 0.8605114604295915, 0.35049047282992235, 0.5731179721745759, 0.0, 0.14307669738101392, 0.0, 0.0, 0.034703365901583544, 0.05438552713661884, 0.0, 0.43572962659120446, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.019763843572076327, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9169977240477929, 0.9453258125951103, 0.935766073142126, 0.8914906793275833, 0.7240910185408139, nan, 0.2639066063628281, 0.0, 0.0, 0.1754674176640851, 0.11569085407930357, 0.0, 0.813166586389596, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.026887464387464387, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.5236 | 47.0 | 940 | 2.7221 | 0.0744 | 0.1374 | 0.4998 | [0.4853331033355728, 0.4881399840401541, 0.8612525489635242, 0.34798312549520677, 0.5828029640354316, 0.0, 0.12786962717921832, 0.0, 0.0, 0.04893190972839659, 0.0414194771772992, 0.0, 0.42564837625979846, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.012037018368046454, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9208455259344431, 0.9615367091606601, 0.9310347585030387, 0.8750751667130382, 0.697869010353961, nan, 0.2231370881492649, 0.0, 0.0, 0.24755534694516154, 0.08116142045756145, 0.0, 0.8173295259065575, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.015186202686202686, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.5259 | 48.0 | 960 | 2.7320 | 0.0732 | 0.1398 | 0.5027 | [0.4915774800410124, 0.484385784287873, 0.8490741123114034, 0.3691998017154617, 0.5805358086483146, 0.0, 0.15945474283044536, 0.0, 0.0, 0.047071452843594576, 0.05466317870290361, 0.0, 0.459717537309698, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.018597060525841376, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9187870488199285, 0.9624707265288269, 0.9316798166115791, 0.8823629722040226, 0.7010113171201541, nan, 0.2990451892226217, 0.0, 0.0, 0.23840256590784636, 0.12256489197940443, 0.0, 0.7900811945228101, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.02492877492877493, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.7699 | 49.0 | 980 | 2.7197 | 0.0747 | 0.1422 | 0.5033 | [0.4841676024546001, 0.49158091455389386, 0.8513176846120908, 0.369239589780196, 0.5759422141418112, 0.0, 0.16500976017847183, 0.0, 0.0, 0.06872859974770229, 0.050253203803224476, 0.0, 0.4431469485168769, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.012676627344731092, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.91909495909961, 0.9563791108154654, 0.9327007143618723, 0.8728044588445417, 0.7055742836503732, nan, 0.3049853750306029, 0.0, 0.0, 0.3580145505749824, 0.1032432719358777, 0.0, 0.804909516273309, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.016305453805453805, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | | 1.0884 | 50.0 | 1000 | 2.7252 | 0.0740 | 0.1399 | 0.5014 | [0.48516085209240617, 0.48972620283996443, 0.8461720523595614, 0.3492916550456616, 0.57616479445388, 0.0, 0.1380369639332496, 0.0, 0.0, 0.06175407695344529, 0.05268220495745468, 0.0, 0.46499631540162123, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.014604701379005741, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | [0.9196892474715829, 0.9582061399112456, 0.933910864697729, 0.8767355657473141, 0.698410787382615, nan, 0.2478126973082325, 0.0, 0.0, 0.3181569271688962, 0.11338181432135463, 0.0, 0.792386293263607, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.018925518925518924, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0] | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BigSalmon/MrLincoln125MNeo
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.49430354503894686 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4718 - Matthews Correlation: 0.4943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5023 | 1.0 | 535 | 0.4718 | 0.4943 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BigSalmon/MrLincoln2
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 inference: false tags: - auto-gptq pipeline_tag: text-generation --- # redpajama gptq: RedPajama-INCITE-Chat-3B-v1 <a href="https://colab.research.google.com/gist/pszemraj/86d2e8485df182302646ed2c5a637059/inference-with-redpajama-incite-chat-3b-v1-gptq-4bit-128g.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> A GPTQ quantization of the [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) via auto-gptq. Model file is only 2GB. ## Usage > Note that you cannot load directly from the hub with `auto_gptq` yet - if needed you can use [this function](https://gist.github.com/pszemraj/8368cba3400bda6879e521a55d2346d0) to download using the repo name. first install auto-GPTQ ```bash pip install ninja auto-gptq[triton] ``` load: ```python import torch from pathlib import Path from auto_gptq import AutoGPTQForCausalLM from transformers import AutoTokenizer model_repo = Path.cwd() / "RedPajama-INCITE-Chat-3B-v1-GPTQ-4bit-128g" device = "cuda:0" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained(model_repo) model = AutoGPTQForCausalLM.from_quantized( model_repo, device=device, use_safetensors=True, use_triton=device != "cpu", # comment/remove if not on Linux ).to(device) ``` Inference: ```python import re import pprint as pp prompt = "How can I further strive to increase shareholder value even further?" prompt = f"<human>: {prompt}\n<bot>:" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate( **inputs, penalty_alpha=0.6, top_k=4, temperature=0.7, do_sample=True, max_new_tokens=192, length_penalty=0.9, pad_token_id=model.config.eos_token_id ) result = tokenizer.batch_decode( outputs, skip_special_tokens=True, clean_up_tokenization_spaces=True ) bot_responses = re.findall(r'<bot>:(.*?)(<human>|$)', result[0], re.DOTALL) bot_responses = [response[0].strip() for response in bot_responses] print(bot_responses[0]) ```
BigSalmon/NEO125InformalToFormalLincoln
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lrthomps/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BigSalmon/ParaphraseParentheses2.0
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: openrail library_name: sklearn inference: true language: - en metrics: - accuracy - precision pipeline_tag: text-classification ---
BigSalmon/SimplifyText
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: ravkumar/ppo-Huggy-1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BigTooth/Megumin-v0.2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
Sharded version of the original https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Picard
Bimal/my_bot_model
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: cc-by-sa-4.0 inference: false pipeline_tag: text-generation tags: - gptq - auto-gptq - quantization - sft - openassistant --- # Open-Assistant StableLM-7B SFT-7 Model: GPTQ 4-bit <a href="https://colab.research.google.com/gist/pszemraj/225805c85d0097e570a2fae0eb5f8913/inference-with-stablelm-7b-sft-v7e3-autogptq-4bit-128g.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> A GPTQ version of the [Open-Assistant StableLM-7B SFT-7 Model](https://huggingface.co/OpenAssistant/stablelm-7b-sft-v7-epoch-3) quantized via auto-gptq. Will update this more later - take a look at the colab for now to see an inference example. > Note: this model uses special tokens, structured as: `<|prompter|>What's the Earth total population<|endoftext|><|assistant|>` ## Query Example > I want you to act as an expert linguist. I'm trying to uncover the true meaning of the corporate buzzword 'value'. For example: "I've been furiously delivering value today". What could it really mean? The corporate buzzword "value" can have a few different meanings, but one possible explanation is that it refers to the perceived worth or significance that is attached to something or someone. In this sense, it could be used to describe the perceived benefits that are associated with a particular product, service, or experience. For example, if someone were to say "I've been furiously delivering value today," they might be emphasizing the importance of the product or service they are using to make that statement. This could be a way of signaling to the person or company that the value they are delivering is high quality and relevant. It's worth noting that the meaning of "value" can be subjective and context-dependent, and it can be difficult to pin down. However, in many cases, it can be used to communicate the value or significance that is being attached to something.<|endoftext|>
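The card above defers the inference example to the linked Colab. As a rough, hedged sketch (not taken from the original card), loading a local clone with auto-gptq and formatting a prompt with the special tokens documented above might look like the following; the local folder name, the `use_safetensors` flag, and the generation settings are illustrative assumptions, not documented properties of this repo.

```python
# Hedged sketch: assumes this repo has been cloned locally (auto-gptq could not load
# directly from the hub at the time of writing) and that the tokenizer files ship with it.
import torch
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_dir = "stablelm-7b-sft-v7-epoch-3-gptq-4bit-128g"  # hypothetical local clone path
device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device=device,
    use_safetensors=True,  # assumption: the quantized weights are stored as safetensors
)

# Structure the prompt with the special tokens documented above.
prompt = "<|prompter|>What's the Earth total population<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```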
Binbin/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: bjarlestam/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BlindMan820/Sarcastic-News-Headlines
[ "pytorch", "distilbert", "text-classification", "English", "dataset:Kaggle Dataset", "transformers", "Text", "Sequence-Classification", "Sarcasm", "DistilBert" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola_HW2_sepehr_bakhshi_dropout_00_16 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.6008788381144764 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola_HW2_sepehr_bakhshi_dropout_00_16 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0825 - Matthews Correlation: 0.6009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.1204324670557534e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4985 | 1.0 | 535 | 0.4773 | 0.4879 | | 0.3349 | 2.0 | 1070 | 0.4213 | 0.6088 | | 0.2322 | 3.0 | 1605 | 0.6781 | 0.5232 | | 0.1763 | 4.0 | 2140 | 0.6570 | 0.5836 | | 0.1367 | 5.0 | 2675 | 0.7957 | 0.5880 | | 0.1047 | 6.0 | 3210 | 0.8028 | 0.6263 | | 0.0823 | 7.0 | 3745 | 1.0014 | 0.5754 | | 0.0614 | 8.0 | 4280 | 0.9796 | 0.6012 | | 0.0576 | 9.0 | 4815 | 1.0651 | 0.6082 | | 0.0394 | 10.0 | 5350 | 1.0825 | 0.6009 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BonjinKim/dst_kor_bert
[ "pytorch", "jax", "bert", "pretraining", "transformers" ]
null
{ "architectures": [ "BertForPreTraining" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-05-23T06:39:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: ModerationGPT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ModerationGPT This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0002 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 3.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.1428 | 1.0 | 2490 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 3.02 | | 0.1085 | 2.0 | 4980 | 0.0029 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0187 | | 0.0968 | 3.0 | 7470 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0113 | | 0.0859 | 4.0 | 9960 | 0.0218 | 0.0 | 0.0 | 0.0 | 0.0 | 3.1767 | | 0.0767 | 5.0 | 12450 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 3.018 | | 0.0753 | 6.0 | 14940 | 0.0007 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0073 | | 0.0628 | 7.0 | 17430 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0 | | 0.0601 | 8.0 | 19920 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0 | | 0.0561 | 9.0 | 22410 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0 | | 0.0528 | 10.0 | 24900 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0133 | | 0.048 | 11.0 | 27390 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0 | | 0.0454 | 12.0 | 29880 | 0.0014 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0187 | | 0.0406 | 13.0 | 32370 | 0.0003 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0073 | | 0.0357 | 14.0 | 34860 | 0.0004 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0073 | | 0.0354 | 15.0 | 37350 | 0.0006 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0127 | | 0.0317 | 16.0 | 39840 | 0.0007 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0127 | | 0.0304 | 17.0 | 42330 | 0.0004 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0073 | | 0.0272 | 18.0 | 44820 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0193 | | 0.0251 | 19.0 | 47310 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0 | | 0.0228 | 20.0 | 49800 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 3.0 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
BumBelDumBel/TRUMP
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="yatsy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
CALM/backup
[ "lean_albert", "transformers" ]
null
{ "architectures": [ "LeanAlbertForPretraining", "LeanAlbertForTokenClassification", "LeanAlbertForSequenceClassification" ], "model_type": "lean_albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6909 | 0.54 | 500 | 1.4848 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
null
--- license: cc-by-nc-sa-4.0 language: - en pipeline_tag: text-generation inference: false tags: - gptq - auto-gptq - quantized --- # stablelm-tuned-alpha-3b-gptq-4bit-128g This is a quantized model saved with [auto-gptq](https://github.com/PanQiWei/AutoGPTQ). At the time of writing, you cannot load models directly from the hub; you will need to clone this repo and load it locally. ```bash git lfs install git clone https://huggingface.co/ethzanalytics/stablelm-tuned-alpha-3b-gptq-4bit-128g ``` See the [excerpt from the tutorial](https://github.com/PanQiWei/AutoGPTQ/blob/main/docs/tutorial/01-Quick-Start.md) below for instructions. --- # Auto-GPTQ Quick Start ## Quick Installation Starting from v0.0.4, you can install `auto-gptq` directly from PyPI using `pip`: ```shell pip install auto-gptq ``` AutoGPTQ supports using `triton` to speed up inference, but it currently **only supports Linux**. To integrate with triton, use: ```shell pip install auto-gptq[triton] ``` If you want to try the newly supported `llama` type models in 🤗 Transformers without updating it to the latest version, use: ```shell pip install auto-gptq[llama] ``` By default, the CUDA extension will be built at installation time if CUDA and pytorch are already installed. To disable building the CUDA extension, you can use the following commands: For Linux ```shell BUILD_CUDA_EXT=0 pip install auto-gptq ``` For Windows ```shell set BUILD_CUDA_EXT=0 && pip install auto-gptq ``` ## Basic Usage *The full script of the basic usage demonstrated here is `examples/quantization/basic_usage.py`* The two main classes currently used in AutoGPTQ are `AutoGPTQForCausalLM` and `BaseQuantizeConfig`. ```python from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig ``` ### Load quantized model and do inference Instead of `.from_pretrained`, you should use `.from_quantized` to load a quantized model. ```python device = "cuda:0" model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, use_triton=False, use_safetensors=True) ``` This will first read and load `quantize_config.json` in the `opt-125m-4bit-128g` directory, then, based on the values of `bits` and `group_size` in it, load the `gptq_model-4bit-128g.bin` model file onto the first GPU. Then you can initialize 🤗 Transformers' `TextGenerationPipeline` and do inference. ```python from transformers import TextGenerationPipeline pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer, device=device) print(pipeline("auto-gptq is")[0]["generated_text"]) ``` ## Conclusion Congrats! You learned how to quickly install `auto-gptq` and integrate with it. In the next chapter, you will learn advanced loading strategies for pretrained or quantized models and some best practices for different situations.
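As a hedged addendum (not part of the quoted tutorial), the inference snippet above leaves `quantized_model_dir` and `tokenizer` undefined. For this particular repo, filling in those pieces might look roughly like the sketch below, assuming the repo was cloned locally as shown at the top of the card and that its tokenizer files are included in the clone.

```python
# Hedged sketch of the pieces the excerpt leaves undefined; the folder name and the
# tokenizer source are assumptions for illustration, not statements from the tutorial.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer, TextGenerationPipeline

quantized_model_dir = "stablelm-tuned-alpha-3b-gptq-4bit-128g"  # local clone of this repo
device = "cuda:0"

# Assumes tokenizer files are present in the clone; otherwise load them from the base model.
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir)
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, use_triton=False, use_safetensors=True)

pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer, device=device)
print(pipeline("auto-gptq is")[0]["generated_text"])
```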
CAMeL-Lab/bert-base-arabic-camelbert-da-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
--- tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: sinMT5-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sinMT5-tuned This model is a fine-tuned version of [google/mT5](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8573 - Rouge1: 20.2531 - Rouge2: 8.1307 - Rougel: 19.3917 - Rougelsum: 20.0592 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00015652249866150822 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 1.8651 | 1.0 | 1500 | 1.8070 | 17.676 | 7.1418 | 16.8638 | 17.457 | | 1.5527 | 2.0 | 3000 | 1.7804 | 21.1357 | 8.1386 | 20.122 | 20.8652 | | 1.3755 | 3.0 | 4500 | 1.7769 | 21.4151 | 8.5692 | 20.3204 | 21.1152 | | 1.2473 | 4.0 | 6000 | 1.7937 | 21.2434 | 8.2325 | 20.1332 | 21.0657 | | 1.1548 | 5.0 | 7500 | 1.8035 | 20.4298 | 8.2314 | 19.5909 | 20.2116 | | 1.0835 | 6.0 | 9000 | 1.8367 | 20.5427 | 8.2226 | 19.6134 | 20.2918 | | 1.0387 | 7.0 | 10500 | 1.8573 | 20.2531 | 8.1307 | 19.3917 | 20.0592 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('DaveLoay/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- license: apache-2.0 library_name: transformers pipeline_tag: text2text-generation inference: parameters: do_sample: true max_length: 64 top_k: 10 temperature: 1 num_return_sequences: 10 widget: - text: >- Generate a Japanese question for this passage: Transformer (machine learning model) A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input (which includes the recursive output) data. - text: >- Generate a Arabic question for this passage: Transformer (machine learning model) A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input (which includes the recursive output) data. --- ## Model description mT5-base query generation model that is trained with XOR QA data. Used in paper [Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation](https://arxiv.org/pdf/2206.10128.pdf) and [Augmenting Passage Representations with Query Generation for Enhanced Cross-Lingual Dense Retrieval](https://arxiv.org/pdf/2305.03950.pdf) ### How to use ```python from transformers import pipeline lang2mT5 = dict( ar='Arabic', bn='Bengali', fi='Finnish', ja='Japanese', ko='Korean', ru='Russian', te='Telugu' ) PROMPT = 'Generate a {lang} question for this passage: {title} {passage}' title = 'Transformer (machine learning model)' passage = 'A transformer is a deep learning model that adopts the mechanism of self-attention, differentially ' \ 'weighting the significance of each part of the input (which includes the recursive output) data.' model_name_or_path = 'ielabgroup/xor-tydi-docTquery-mt5-base' input_text = PROMPT.format_map({'lang': lang2mT5['ja'], 'title': title, 'passage': passage}) generator = pipeline(model=model_name_or_path, task='text2text-generation', device="cuda:0", ) results = generator(input_text, do_sample=True, max_length=64, num_return_sequences=10, ) for i, result in enumerate(results): print(f'{i + 1}. {result["generated_text"]}') ``` ### BibTeX entry and citation info ```bibtex @article{zhuang2022bridging, title={Bridging the gap between indexing and retrieval for differentiable search index with query generation}, author={Zhuang, Shengyao and Ren, Houxing and Shou, Linjun and Pei, Jian and Gong, Ming and Zuccon, Guido and Jiang, Daxin}, journal={arXiv preprint arXiv:2206.10128}, year={2022} } @inproceedings{zhuang2023augmenting, title={Augmenting Passage Representations with Query Generation for Enhanced Cross-Lingual Dense Retrieval}, author={Zhuang, Shengyao and Shou, Linjun and Zuccon, Guido}, booktitle={Proceedings of the 46th international ACM SIGIR conference on research and development in information retrieval}, year={2023} } ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
45
null
--- license: unknown --- chinese-StableVicuna: the world's first Chinese-optimized build of StableVicuna. http://metafont.vip Short domain: http://m-f.vip Based on CarperAI's official stable-vicuna-13B model. StableVicuna is built on the Vicuna-13B model and is the world's first open-source LLM trained with RLHF (human feedback); the industry regards it as the second milestone since the launch of ChatGPT. Within a week of the Stable-Vicuna release, ten derivative versions had appeared on the HF site; the zw team's Chinese-optimized StableVicuna is the only Chinese version among them. Related project URLs: https://github.com/ziwang-com/chinese-StableVicuna https://huggingface.co/zwpython/stable-vicuna-13B-chinese On the open-source model leaderboard recently released by UC Berkeley, vicuna-13B ranks first, and more derivative models are likely to follow, especially in the Stable-Vicuna series. The Chinese-optimized StableVicuna handles Chinese details better than the original StableVicuna model: the context and scenarios are much improved, and the tone feels more natural. See the GitHub project for more: https://github.com/ziwang-com/chinese-StableVicuna The Chinese-optimized StableVicuna is currently available to partners only. Contact: WeChat: zwpython, or scan the QR code in the HF account avatar. QQ: 357811718 (zw / Ziwang). Partners are asked to provide relevant materials: an introduction of core team members, research topics, and intended directions of collaboration; PPT materials are a plus.
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
63
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion2 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9275 - name: F1 type: f1 value: 0.9275719429504966 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2226 - Accuracy: 0.9275 - F1: 0.9276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8425 | 1.0 | 250 | 0.3132 | 0.9065 | 0.9038 | | 0.2536 | 2.0 | 500 | 0.2226 | 0.9275 | 0.9276 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
62
null
--- license: apache-2.0 library_name: transformers pipeline_tag: text2text-generation inference: parameters: do_sample: true max_length: 64 top_k: 10 temperature: 1 num_return_sequences: 10 widget: - text: >- Generate a Japanese question for this passage: Transformer (machine learning model) A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input (which includes the recursive output) data. - text: >- Generate a Arabic question for this passage: Transformer (machine learning model) A transformer is a deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input (which includes the recursive output) data. --- ## Model description mT5-large query generation model that is trained with XOR QA data. Used in paper [Bridging the Gap Between Indexing and Retrieval for Differentiable Search Index with Query Generation](https://arxiv.org/pdf/2206.10128.pdf) and [Augmenting Passage Representations with Query Generation for Enhanced Cross-Lingual Dense Retrieval](https://arxiv.org/pdf/2305.03950.pdf) ### How to use ```python from transformers import pipeline lang2mT5 = dict( ar='Arabic', bn='Bengali', fi='Finnish', ja='Japanese', ko='Korean', ru='Russian', te='Telugu' ) PROMPT = 'Generate a {lang} question for this passage: {title} {passage}' title = 'Transformer (machine learning model)' passage = 'A transformer is a deep learning model that adopts the mechanism of self-attention, differentially ' \ 'weighting the significance of each part of the input (which includes the recursive output) data.' model_name_or_path = 'ielabgroup/xor-tydi-docTquery-mt5-large' input_text = PROMPT.format_map({'lang': lang2mT5['ja'], 'title': title, 'passage': passage}) generator = pipeline(model=model_name_or_path, task='text2text-generation', device="cuda:0", ) results = generator(input_text, do_sample=True, max_length=64, num_return_sequences=10, ) for i, result in enumerate(results): print(f'{i + 1}. {result["generated_text"]}') ``` ### BibTeX entry and citation info ```bibtex @article{zhuang2022bridging, title={Bridging the gap between indexing and retrieval for differentiable search index with query generation}, author={Zhuang, Shengyao and Ren, Houxing and Shou, Linjun and Pei, Jian and Gong, Ming and Zuccon, Guido and Jiang, Daxin}, journal={arXiv preprint arXiv:2206.10128}, year={2022} } @inproceedings{zhuang2023augmenting, title={Augmenting Passage Representations with Query Generation for Enhanced Cross-Lingual Dense Retrieval}, author={Zhuang, Shengyao and Shou, Linjun and Zuccon, Guido}, booktitle={Proceedings of the 46th international ACM SIGIR conference on research and development in information retrieval}, year={2023} } ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,862
null
--- license: openrail widget: - text: I am totally a human, trust me bro. example_title: default - text: >- In Finnish folklore, all places and things, and also human beings, have a haltija (a genius, guardian spirit) of their own. One such haltija is called etiäinen—an image, doppelgänger, or just an impression that goes ahead of a person, doing things the person in question later does. For example, people waiting at home might hear the door close or even see a shadow or a silhouette, only to realize that no one has yet arrived. Etiäinen can also refer to some kind of a feeling that something is going to happen. Sometimes it could, for example, warn of a bad year coming. In modern Finnish, the term has detached from its shamanistic origins and refers to premonition. Unlike clairvoyance, divination, and similar practices, etiäiset (plural) are spontaneous and can't be induced. Quite the opposite, they may be unwanted and cause anxiety, like ghosts. Etiäiset need not be too dramatic and may concern everyday events, although ones related to e.g. deaths are common. As these phenomena are still reported today, they can be considered a living tradition, as a way to explain the psychological experience of premonition. example_title: real wikipedia - text: >- In Finnish folklore, all places and things, animate or inanimate, have a spirit or "etiäinen" that lives there. Etiäinen can manifest in many forms, but is usually described as a kind, elderly woman with white hair. She is the guardian of natural places and often helps people in need. Etiäinen has been a part of Finnish culture for centuries and is still widely believed in today. Folklorists study etiäinen to understand Finnish traditions and how they have changed over time. example_title: generated wikipedia - text: >- This paper presents a novel framework for sparsity-certifying graph decompositions, which are important tools in various areas of computer science, including algorithm design, complexity theory, and optimization. Our approach is based on the concept of "cut sparsifiers," which are sparse graphs that preserve the cut structure of the original graph up to a certain error bound. We show that cut sparsifiers can be efficiently constructed using a combination of spectral techniques and random sampling, and we use them to develop new algorithms for decomposing graphs into sparse subgraphs. example_title: from ChatGPT - text: >- Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. 
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general. example_title: GPT-3 paper datasets: - NicolaiSivesind/human-vs-machine - gfissore/arxiv-abstracts-2021 language: - en pipeline_tag: text-classification tags: - mgt-detection - ai-detection --- Machine-generated text-detection by fine-tuning of language models === This project is related to a bachelor's thesis with the title "*Turning Poachers into Gamekeepers: Detecting Machine-Generated Text in Academia using Large Language Models*" (not yet published) written by *Nicolai Thorer Sivesind* and *Andreas Bentzen Winje* at the *Department of Computer Science* at the *Norwegian University of Science and Technology*. It contains text classification models trained to distinguish human-written text from text generated by language models like ChatGPT and GPT-3. The best models were able to achieve an accuracy of 100% on real and *GPT-3*-generated wikipedia articles (4500 samples), and an accuracy of 98.4% on real and *ChatGPT*-generated research abstracts (3000 samples). The dataset card for the dataset that was created in relation to this project can be found [here](https://huggingface.co/datasets/NicolaiSivesind/human-vs-machine). **NOTE**: the hosted inference on this site only works for the RoBERTa-models, and not for the Bloomz-models. The Bloomz-models otherwise can produce wrong predictions when not explicitly providing the attention mask from the tokenizer to the model for inference. To be sure, the [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines)-library seems to produce the most consistent results. ## Fine-tuned detectors This project includes 12 fine-tuned models based on the RoBERTa-base model, and three sizes of the bloomz-models. 
| Base-model | RoBERTa-base | Bloomz-560m | Bloomz-1b7 | Bloomz-3b | |------------|--------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | Wiki | [roberta-wiki](https://huggingface.co/andreas122001/roberta-academic-detector) | [Bloomz-560m-wiki](https://huggingface.co/andreas122001/bloomz-560m-wiki-detector) | [Bloomz-1b7-wiki](https://huggingface.co/andreas122001/bloomz-1b7-wiki-detector) | [Bloomz-3b-wiki](https://huggingface.co/andreas122001/bloomz-3b-wiki-detector) | | Academic | [roberta-academic](https://huggingface.co/andreas122001/roberta-wiki-detector) | [Bloomz-560m-academic](https://huggingface.co/andreas122001/bloomz-560m-academic-detector) | [Bloomz-1b7-academic](https://huggingface.co/andreas122001/bloomz-1b7-academic-detector) | [Bloomz-3b-academic](https://huggingface.co/andreas122001/bloomz-3b-academic-detector) | | Mixed | [roberta-mixed](https://huggingface.co/andreas122001/roberta-mixed-detector) | [Bloomz-560m-mixed](https://huggingface.co/andreas122001/bloomz-560m-mixed-detector) | [Bloomz-1b7-mixed](https://huggingface.co/andreas122001/bloomz-1b7-mixed-detector) | [Bloomz-3b-mixed](https://huggingface.co/andreas122001/bloomz-3b-mixed-detector) | ### Datasets The models were trained on selections from the [GPT-wiki-intros]() and [ChatGPT-Research-Abstracts](), and are separated into three types, **wiki**-detectors, **academic**-detectors and **mixed**-detectors, respectively. - **Wiki-detectors**: - Trained on 30'000 datapoints (10%) of GPT-wiki-intros. - Best model (in-domain) is Bloomz-3b-wiki, with an accuracy of 100%. - **Academic-detectors**: - Trained on 20'000 datapoints (100%) of ChatGPT-Research-Abstracts. - Best model (in-domain) is Bloomz-3b-academic, with an accuracy of 98.4%. - **Mixed-detectors**: - Trained on 15'000 datapoints (5%) of GPT-wiki-intros and 10'000 datapoints (50%) of ChatGPT-Research-Abstracts. - Best model (in-domain) is RoBERTa-mixed, with an F1-score of 99.3%. ### Hyperparameters All models were trained using the same hyperparameters: ```python { "num_train_epochs": 1, "adam_beta1": 0.9, "adam_beta2": 0.999, "batch_size": 8, "adam_epsilon": 1e-08, "optim": "adamw_torch", # the optimizer (AdamW) "learning_rate": 5e-05, # (LR) "lr_scheduler_type": "linear", # scheduler type for LR "seed": 42, # seed for PyTorch RNG-generator. } ``` ### Metrics Metrics can be found at https://wandb.ai/idatt2900-072/IDATT2900-072. 
In-domain performance of wiki-detectors: | Base model | Accuracy | Precision | Recall | F1-score | |-------------|----------|-----------|--------|----------| | Bloomz-560m | 0.973 | *1.000 | 0.945 | 0.972 | | Bloomz-1b7 | 0.972 | *1.000 | 0.945 | 0.972 | | Bloomz-3b | *1.000 | *1.000 | *1.000 | *1.000 | | RoBERTa | 0.998 | 0.999 | 0.997 | 0.998 | In-domain performance of academic-detectors: | Base model | Accuracy | Precision | Recall | F1-score | |-------------|----------|-----------|--------|----------| | Bloomz-560m | 0.964 | 0.963 | 0.965 | 0.964 | | Bloomz-1b7 | 0.946 | 0.941 | 0.951 | 0.946 | | Bloomz-3b | *0.984 | *0.983 | 0.985 | *0.984 | | RoBERTa | 0.982 | 0.968 | *0.997 | 0.982 | F1-scores of the mixed-detectors on all three datasets: | Base model | Mixed | Wiki | CRA | |-------------|--------|--------|--------| | Bloomz-560m | 0.948 | 0.972 | *0.848 | | Bloomz-1b7 | 0.929 | 0.964 | 0.816 | | Bloomz-3b | 0.988 | 0.996 | 0.772 | | RoBERTa | *0.993 | *0.997 | 0.829 | ## Credits - [GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro), by Aaditya Bhat - [arxiv-abstracts-2021](https://huggingface.co/datasets/gfissore/arxiv-abstracts-2021), by Giancarlo - [Bloomz](https://huggingface.co/bigscience/bloomz), by BigScience - [RoBERTa](https://huggingface.co/roberta-base), by Liu et al. ## Citation Please use the following citation: ``` @misc {sivesind_2023, author = { {Nicolai Thorer Sivesind} and {Andreas Bentzen Winje} }, title = { Machine-generated text-detection by fine-tuning of language models }, url = { https://huggingface.co/andreas122001/roberta-academic-detector }, year = 2023, publisher = { Hugging Face } } ```
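As a usage sketch (added here, not part of the original card): since the note above recommends the `pipeline` library for the most consistent results, one of the detectors listed in the table, for example `andreas122001/roberta-mixed-detector`, could be called along these lines; the exact label strings returned depend on the checkpoint's configuration.

```python
from transformers import pipeline

# One of the fine-tuned detectors listed in the table above.
detector = pipeline("text-classification", model="andreas122001/roberta-mixed-detector")

text = "In Finnish folklore, all places and things have a spirit of their own."

# Returns a list with a label (human-written vs. machine-generated) and a score;
# the exact label names come from the model's config.
print(detector(text))
```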
CBreit00/DialoGPT_small_Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-05-07T01:43:41Z
--- license: mit --- Trained using https://github.com/princeton-nlp/DinkyTrain. Mask rate = 0.05.
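As a hedged illustration (not part of the original card): DinkyTrain is a pre-training library used for masked language modeling, so, assuming this checkpoint has been converted to a Hugging Face-compatible masked LM, it could be probed with a fill-mask pipeline roughly as follows; the model id below is a placeholder, not the real repository id.

```python
from transformers import AutoTokenizer, pipeline

# Placeholder id: substitute the actual repository id of this checkpoint.
model_id = "path/to/this-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_id)
unmasker = pipeline("fill-mask", model=model_id, tokenizer=tokenizer)

# Use the tokenizer's own mask token so the snippet works for either
# BERT-style ([MASK]) or RoBERTa-style (<mask>) vocabularies.
print(unmasker(f"The capital of France is {tokenizer.mask_token}."))
```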