Dataset columns:

- modelId: string (length 4 to 81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0 to 59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51 to 438k)
BitanBiswas/mbert-bengali-ner-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - autotrain - summarization language: - en widget: - text: "I love AutoTrain 🤗" datasets: - joel89/autotrain-data-rwlv_summarizer co2_eq_emissions: emissions: 0.007272812398046086 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 55443129210 - CO2 Emissions (in grams): 0.0073 ## Validation Metrics - Loss: 1.625 - Rouge1: 47.446 - Rouge2: 25.858 - RougeL: 43.937 - RougeLsum: 43.961 - Gen Len: 15.395 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/joel89/autotrain-rwlv_summarizer-55443129210 ```
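The same Inference API call can be made from Python. The sketch below mirrors the cURL example above; the endpoint URL is taken from the card, while the `requests` usage and the environment-variable token handling are assumptions, not part of the original card.

```python
import os
import requests

# Endpoint copied from the cURL example above; token handling is an assumption.
API_URL = "https://api-inference.huggingface.co/joel89/autotrain-rwlv_summarizer-55443129210"
HEADERS = {"Authorization": f"Bearer {os.environ['HUGGINGFACE_API_KEY']}"}

def summarize(text: str):
    """Send the input text to the Inference API and return the parsed JSON response."""
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    response.raise_for_status()
    return response.json()

print(summarize("I love AutoTrain"))
```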
BlindMan820/Sarcastic-News-Headlines
[ "pytorch", "distilbert", "text-classification", "English", "dataset:Kaggle Dataset", "transformers", "Text", "Sequence-Classification", "Sarcasm", "DistilBert" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
---
language:
- en
tags:
- causal-lm
- llama
inference: false
---

# Wizard-Vicuna-13B-GGML

This is GGML format quantised 4bit and 5bit models of [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b).

It is the result of quantising to 4bit and 5bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML).
* [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF).

## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.

## Provided files

| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `wizard-vicuna-13B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. |
| `wizard-vicuna-13B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 8.95GB | 11.0GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `wizard-vicuna-13B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `wizard-vicuna-13B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
| `wizard-vicuna-13B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 16GB | 18GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m wizard-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```

Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

## How to run in `text-generation-webui`

GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.

# Original WizardVicuna-13B model card

Github page: https://github.com/melodysdreamj/WizardVicunaLM

# WizardVicunaLM

### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method

I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage.
## Benchmark

### Approximately 7% performance improvement over VicunaLM

![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png)

### Detail

The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order.

| | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link |
|-----|--------|-------------------|------------|-----------|----------|
| Q1 | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) |
| Q2 | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) |
| Q3 | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) |
| Q4 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) |
| Q5 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) |
| Q6 | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) |
| Q7 | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) |
| Q8 | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) |
| Q9 | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) |
| Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) |
| Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) |
| Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) |
| Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) |
| Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) |
| Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) |
| Average | 91 | 88 | 82 | 80 | |

## Principle

We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques.

Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0).

After creating the training data, I later trained it according to the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh).

## Detailed Method

First, we explore and expand various areas within the same topic using the 7K conversations created by WizardLM. However, we used a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction and then expands into various areas in one conversation using ChatGPT 3.5. After that, we fine-tuned the model on this data using Vicuna's fine-tuning format.

## Training Process

Trained with 8 A100 GPUs for 35 hours.

## Weights

You can find the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) on the Hugging Face Hub.

## Conclusion

If we extend the conversation to GPT-4 32K, we can expect a dramatic improvement, as we can generate 8x more conversations that are more accurate and richer.

## License

The model is licensed under the LLaMA model license, and the dataset is licensed under the terms of OpenAI because it uses ChatGPT. Everything else is free.

## Author

[JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo.
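For programmatic use, roughly the same invocation as the `./main` example in the "How to run in `llama.cpp`" section can be expressed with the llama-cpp-python bindings. This is a sketch, not part of the original card, and it assumes a llama-cpp-python release from the GGMLv3 era (roughly 0.1.78 or earlier; later releases expect GGUF files).

```python
from llama_cpp import Llama  # llama-cpp-python bindings (assumed installed, GGML-era release)

# Mirrors the ./main flags above: -c 2048, --temp 0.7, --repeat_penalty 1.1, -t <physical cores>.
llm = Llama(model_path="wizard-vicuna-13B.ggmlv3.q4_0.bin", n_ctx=2048, n_threads=8)

prompt = "### Instruction: write a story about llamas ### Response:"
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```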
Brendan/cse244b-hw2-roberta
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 - recall - accuracy - precision model-index: - name: roberta-bne-fine-tuned-text-classification-SL-dss results: [] language: - es --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-bne-fine-tuned-text-classification-SL-dss This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5089 - F1: 0.4781 - Recall: 0.4750 - Accuracy: 0.4750 - Precision: 0.5009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy | Precision | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:---------:| | 3.235 | 1.0 | 836 | 2.4142 | 0.3995 | 0.4471 | 0.4471 | 0.4786 | | 2.0006 | 2.0 | 1672 | 2.1013 | 0.4672 | 0.4942 | 0.4942 | 0.4867 | | 1.2424 | 3.0 | 2508 | 2.1138 | 0.4861 | 0.4852 | 0.4852 | 0.5132 | | 0.7242 | 4.0 | 3344 | 2.2694 | 0.4828 | 0.4747 | 0.4747 | 0.5126 | | 0.3403 | 5.0 | 4180 | 2.5089 | 0.4781 | 0.4750 | 0.4750 | 0.5009 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
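A minimal usage sketch with the `transformers` pipeline, assuming the checkpoint is published on the Hub; the `<user>` namespace below is a placeholder, since the card only gives the model name. Because the base model is PlanTL-GOB-ES/roberta-base-bne, inputs are expected to be Spanish.

```python
from transformers import pipeline

# Placeholder repo id: the card names the model but not the Hub namespace.
classifier = pipeline(
    "text-classification",
    model="<user>/roberta-bne-fine-tuned-text-classification-SL-dss",
)

print(classifier("Texto de ejemplo para clasificar."))
```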
Bryanwong/wangchanberta-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - DmitriyVasiliev/autotrain-data-mbart-rua-sent co2_eq_emissions: emissions: 0.032307287679585996 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 55454129221 - CO2 Emissions (in grams): 0.0323 ## Validation Metrics - Loss: 0.958 - Rouge1: 8.528 - Rouge2: 2.583 - RougeL: 8.489 - RougeLsum: 8.606 - Gen Len: 21.020 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/DmitriyVasiliev/autotrain-mbart-rua-sent-55454129221 ```
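Besides cURL, the endpoint can also be called through `huggingface_hub`'s `InferenceClient`. This is a sketch, not part of the original card; the model id is assumed to match the endpoint shown above.

```python
from huggingface_hub import InferenceClient

# Token value is a placeholder, as in the cURL example above.
client = InferenceClient(token="YOUR_HUGGINGFACE_API_KEY")

summary = client.summarization(
    "I love AutoTrain",
    model="DmitriyVasiliev/autotrain-mbart-rua-sent-55454129221",
)
print(summary)
```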
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 292.45 +/- 20.23 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
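The usage section above is left as a TODO; one plausible completion is sketched below, with the repo id and filename as placeholders since neither is stated in the card.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename: the card leaves the usage block as a TODO.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```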
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
229
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: search_summarize_v1 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: ca_test args: default metrics: - name: Rouge1 type: rouge value: 0.1476 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # search_summarize_v1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5224 - Rouge1: 0.1476 - Rouge2: 0.0551 - Rougel: 0.1228 - Rougelsum: 0.1228 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8176 | 0.1281 | 0.0401 | 0.1087 | 0.1086 | 19.0 | | No log | 2.0 | 124 | 2.5989 | 0.1372 | 0.0476 | 0.1138 | 0.1137 | 19.0 | | No log | 3.0 | 186 | 2.5386 | 0.1464 | 0.0541 | 0.1218 | 0.1219 | 19.0 | | No log | 4.0 | 248 | 2.5224 | 0.1476 | 0.0551 | 0.1228 | 0.1228 | 19.0 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
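A minimal inference sketch with the `transformers` summarization pipeline; the Hub namespace is a placeholder, since the card only names the model `search_summarize_v1`.

```python
from transformers import pipeline

# Placeholder repo id; the card only gives the model name.
summarizer = pipeline("summarization", model="<user>/search_summarize_v1")

bill_text = "Existing law requires the state to maintain a program for ..."  # any billsum-style text
# Gen Len on the eval set was ~19 tokens, so a short max_length is reasonable.
print(summarizer(bill_text, max_length=20, min_length=5, do_sample=False))
```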
CLTL/icf-domains
[ "pytorch", "roberta", "nl", "transformers", "license:mit", "text-classification" ]
text-classification
{ "architectures": [ "RobertaForMultiLabelSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.55 +/- 21.90 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
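To check the reported mean reward, a hedged evaluation sketch with stable-baselines3's `evaluate_policy` is shown below; the checkpoint path is a placeholder, and downloading the model would work as in the earlier LunarLander sketch.

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder path: assumes the PPO checkpoint has already been downloaded from the Hub.
model = PPO.load("ppo-LunarLander-v2.zip")

eval_env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")  # the card reports 264.55 +/- 21.90
```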
CLTL/icf-levels-adm
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: other tags: - generated_from_trainer datasets: - scene_parse_150 model-index: - name: relu-segformer-b0-scene-parse-150-cvfinal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # relu-segformer-b0-scene-parse-150-cvfinal This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set: - Loss: 2.3127 - Mean Iou: 0.0939 - Mean Accuracy: 0.1473 - Overall Accuracy: 0.5650 - Per Category Iou: [0.5336831298064109, 0.7946016311618073, 0.9542612124083791, 0.4698340415687516, 0.7895064764715527, 0.6196465123602583, 0.0, 0.28656549336868975, 0.10462468913822695, nan, 0.0, 0.0, 0.024119941721107298, 0.0, 0.0, 0.0, 0.0, nan, 0.11790298802632052, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] - Per Category Accuracy: [0.9584331199463405, 0.9359158986175116, 0.9802837552806004, 0.7382528506925844, 0.797905481540399, 0.704932715344484, nan, 0.3646332654803336, 0.13570692997334627, nan, 0.0, nan, 0.024543985001960842, nan, 0.0, 0.0, nan, nan, 0.39719777113785026, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | 
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 4.8909 | 1.0 | 20 | 4.8871 | 0.0125 | 0.0437 | 0.2028 | … | … |
| 4.7422 | 2.0 | 40 | 4.5099 | 0.0250 | 0.0841 | 0.3808 | … | … |
| 4.0553 | 3.0 | 60 | 3.8580 | 0.0431 | 0.0969 | 0.4522 | … | … |
| 4.187 | 4.0 | 80 | 3.6772 | 0.0470 | 0.1028 | 0.4582 | … | … |
| 3.9225 | 5.0 | 100 | 3.4916 | 0.0535 | 0.1072 | 0.4745 | … | … |
| 3.3489 | 6.0 | 120 | 3.4303 | 0.0520 | 0.1086 | 0.4474 | … | … |
| 3.185 | 7.0 | 140 | 3.2048 | 0.0641 | 0.1138 | 0.4879 | … | … |
| 3.0486 | 8.0 | 160 | 3.0909 | 0.0698 | 0.1105 | 0.4975 | … | … |
| 2.9697 | 9.0 | 180 | 2.9824 | 0.0718 | 0.1146 | 0.5030 | … | … |
| 2.975 | 10.0 | 200 | 2.9308 | 0.0751 | 0.1232 | 0.5169 | … | … |
| 2.5359 | 11.0 | 220 | 2.7183 | 0.0707 | 0.1091 | 0.4964 | … | … |
| 2.5711 | 12.0 | 240 | 2.8566 | 0.0763 | 0.1213 | 0.5148 | … | … |
| 2.4239 | 13.0 | 260 | 2.8176 | 0.0785 | 0.1216 | 0.5215 | … | … |
| 2.5524 | 14.0 | 280 | 2.6836 | 0.0803 | 0.1330 | 0.5227 | … | … |
| 2.2365 | 15.0 | 300 | 2.6524 | 0.0783 | 0.1216 | 0.5209 | … | … |
| 2.3117 | 16.0 | 320 | 2.5956 | 0.0860 | 0.1316 | 0.5415 | … | … |
| 2.8282 | 17.0 | 340 | 2.6868 | 0.0776 | 0.1316 | 0.5153 | … | … |
| 2.0514 | 18.0 | 360 | 2.5944 | 0.0846 | 0.1299 | 0.5291 | … | … |
| 2.1405 | 19.0 | 380 | 2.6256 | 0.0898 | 0.1426 | 0.5498 | … | … |
| 2.7111 | 20.0 | 400 | 2.5802 | 0.0873 | 0.1327 | 0.5361 | … | … |
| 2.5598 | 21.0 | 420 | 2.5502 | 0.0870 | 0.1403 | 0.5487 | … | … |
| 1.9659 | 22.0 | 440 | 2.5646 | 0.0885 | 0.1421 | 0.5519 | … | … |
nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9617011469119953, 0.9398803876389266, 0.9765880861241144, 0.7279342363715211, 0.8260754291774375, 0.625702514498108, nan, 0.28985185147769693, 0.009752847104434213, nan, 0.0, nan, 0.0, nan, 0.0, 0.0002718407013490095, nan, nan, 0.46858596462892677, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7829 | 23.0 | 460 | 2.5177 | 0.0846 | 0.1345 | 0.5462 | [0.4930088449307022, 0.7882521985751578, 0.9101626308882405, 0.44989356331658825, 0.8198836997398983, 0.40375010070087813, 0.0, 0.255825791600409, 0.006403338369666883, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10222337548824124, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9612653114893605, 0.8876050420168067, 0.9877291765568382, 0.8127343690211984, 0.8200603168290581, 0.44887037907794275, nan, 0.28685150295236317, 0.006739156772473952, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.3022288621497214, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1919 | 24.0 | 480 | 2.5331 | 0.0882 | 0.1434 | 0.5555 | [0.5115400805821155, 0.7956842451360199, 0.8616338898084154, 0.45560905508889904, 0.8224094829727, 0.5002673724089801, 0.0, 0.3255104125870554, 0.012325574865934807, nan, 0.0, 0.0, 0.0007629074402548111, 0.0, 0.0, 0.0, 0.0, nan, 0.12330337703247325, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9425568912967359, 0.9430147058823529, 0.9791480130520218, 0.7440562740746409, 0.8231590110691324, 0.6179440675309554, nan, 0.3862317339893018, 0.013296583474678943, nan, 0.0, nan, 0.0007652060795623021, nan, 0.0, 0.0, nan, nan, 0.42990390050876204, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.4307 | 25.0 | 500 | 2.4986 | 0.0878 | 0.1390 | 0.5565 | [0.5067241688681817, 0.8073821293627443, 0.9322059866647752, 0.44890528812665903, 0.8061694443524071, 0.41967209810856837, 0.0, 0.3408865002560274, 0.016331731047663174, nan, 0.0, 0.0, 0.000765023141950044, 0.0, 0.0, 0.0, 0.0, nan, 0.11318709117849012, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9637754280420943, 0.9159240309026837, 0.987182337706041, 0.7603568786510548, 0.8063564658315105, 0.4662569131904794, nan, 0.40015254297216346, 0.017082626605282286, nan, 0.0, nan, 0.0007652060795623021, nan, 0.0, 0.0, nan, nan, 0.3793911007025761, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.7454 | 26.0 | 520 | 2.4647 | 0.0931 | 0.1433 | 0.5644 | [0.5157605506728834, 0.803062049148433, 0.9502783532598003, 0.4501630383564226, 0.8001457822543238, 0.5544864444466602, 0.0, 0.34386147348502605, 0.03533719952693749, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10768117416918258, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 
nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.966183603428178, 0.9091047709406344, 0.9867737108724783, 0.7615685824341216, 0.8003579240405647, 0.6225789839009427, nan, 0.407542290266041, 0.03755754785558517, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.38451909876443513, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.9301 | 27.0 | 540 | 2.4489 | 0.0876 | 0.1466 | 0.5486 | [0.507267107485819, 0.7927630641160874, 0.7813868735967076, 0.4633558799098137, 0.7889578677261816, 0.5558062931081549, 0.0, 0.2951652047146916, 0.0721112427309929, nan, 0.0, 0.0, 0.0030247802978979207, 0.0, 0.0, 0.0, 0.0, nan, 0.12245389551577283, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9121651269610748, 0.9422523041474654, 0.9766361598692395, 0.7470536465906482, 0.7897030556107908, 0.6838404872259913, nan, 0.3599913121221152, 0.08112733220256846, nan, 0.0, nan, 0.003032129090265622, nan, 0.0, 0.0, nan, nan, 0.5131632076233545, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.327 | 28.0 | 560 | 2.4543 | 0.0895 | 0.1445 | 0.5619 | [0.5166661067571481, 0.7768677863017486, 0.9314251193832346, 0.45081840103589177, 0.7494204146519176, 0.5238211228089168, 0.0, 0.3455096827773653, 0.034265018714489594, nan, 0.0, 0.0, 0.002371911971460543, 0.0, 0.0, 0.0, 0.0, nan, 0.14420769872731065, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9543126284421765, 0.9521974112225535, 0.9728383340043627, 0.7415946021785158, 0.7499171472128322, 0.6500526186157946, nan, 0.4160786353970411, 0.03854191906954204, nan, 0.0, nan, 0.0023721388466431364, nan, 0.0, 0.0, nan, nan, 0.4465396107566825, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.5732 | 29.0 | 580 | 2.4811 | 0.0905 | 0.1438 | 0.5573 | [0.5287957944150393, 0.8163827989505047, 0.9545677455513522, 0.4680606710241056, 0.7693352098683666, 0.5231390901672386, 0.0, 0.3055402033194326, 0.05167843946258601, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10709085097314959, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9566114756205484, 0.9330780699376525, 0.9825432213014764, 0.7407527869187011, 0.7699343805925631, 0.5956987080450505, nan, 0.36859837253822414, 0.05994063484371214, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4869983041266252, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.3826 | 30.0 | 600 | 2.4233 | 0.0904 | 0.1416 | 0.5575 | [0.5119552280143455, 0.8199371986427764, 0.9542587763055794, 0.46780402658887665, 0.7983808215095777, 0.5227657946477864, 0.0, 0.2978553040543449, 0.01962687827114638, 
nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.12628517230477396, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9640251100299766, 0.9334592708050963, 0.9828617098629298, 0.7486224841203031, 0.7990819911181812, 0.6042632274243748, nan, 0.3494850411916536, 0.02112612066876666, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4032140838245982, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.6463 | 31.0 | 620 | 2.4264 | 0.0891 | 0.1424 | 0.5517 | [0.5087716983709148, 0.7961915655249836, 0.9422048694584277, 0.5216542842991579, 0.7608908022187267, 0.4894583034188466, 0.0, 0.2650125146137745, 0.029923016240036997, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.1422919548457617, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9685755273239262, 0.9552639604228789, 0.9773993305730991, 0.6716792938955639, 0.7614833963014516, 0.541904569982759, nan, 0.33319021906585106, 0.03331717954930943, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.5939594605507551, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1258 | 32.0 | 640 | 2.3982 | 0.0905 | 0.1459 | 0.5696 | [0.5188148838822648, 0.803991121325235, 
0.9411620809356289, 0.46322936273191717, 0.7986109970939301, 0.4870755144136439, 0.0, 0.38434420091878657, 0.07936659093691757, nan, 0.0, 0.0, 0.01315502027867747, 0.0, 0.0, 0.0005077173030056864, 0.0, nan, 0.1251307754069244, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9634592626846575, 0.9542558959067498, 0.9710295593440338, 0.7470919109206398, 0.8060084841254059, 0.5419493517834352, nan, 0.4796415745263339, 0.08835110249575963, nan, 0.0, nan, 0.013247630252422355, nan, 0.0, 0.0005097013150293928, nan, nan, 0.41532746507308405, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.2958 | 33.0 | 660 | 2.3860 | 0.0955 | 0.1524 | 0.5735 | [0.5375765406531137, 0.8110853226428411, 0.954462712687812, 0.47733671569201686, 0.7880233174350821, 0.5977448263382951, 0.0, 0.3451584856219023, 0.15232419259491609, nan, 0.0, 0.0, 0.00037936626864822315, 0.0, 0.0, 0.0, 0.0, nan, 0.11023077406778595, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9564489607171931, 0.9443446733532123, 0.9799171929740221, 0.7536096017958726, 0.7884934049181415, 0.7026824298605047, nan, 0.4351263025502963, 0.19121032226799128, nan, 0.0, nan, 0.00038260303978115105, nan, 0.0, 0.0, nan, nan, 0.4954776710005653, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.6949 | 34.0 | 680 | 2.4148 | 0.0918 | 0.1475 | 0.5613 | [0.5290693028524354, 0.7945630929959845, 0.9478845254791959, 0.4378261787685903, 0.7753794713058447, 0.5583728198407097, 0.0, 0.294060489825014, 0.09295956791297402, nan, 0.0, 0.0, 0.015815294162562516, 0.0, 0.0, 0.0, 0.0, nan, 0.14532379891865785, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9488270117498275, 0.9470639062076444, 0.974707200846098, 0.7404658044437642, 0.7762146218598793, 0.7440720091354873, nan, 0.35005581456431806, 0.11103707293433487, nan, 0.0, nan, 0.015849330922934182, nan, 0.0, 0.0, nan, nan, 0.4373738189453283, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7268 | 35.0 | 700 | 2.3556 | 0.0926 | 0.1479 | 0.5638 | [0.5410470829128678, 0.7780415307765347, 0.9476531340280946, 0.4552572445110248, 0.7523007150423728, 0.5545934323690384, 0.0, 0.3147277402817886, 0.15298449658591226, nan, 0.0, 0.0, 0.013374103392106993, 0.0, 0.0, 0.0, 0.0, nan, 0.11979662163590622, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9529268195390191, 0.9579069531038221, 0.9737517351617381, 0.7442220861712712, 0.7531484059123749, 0.6352970152929849, nan, 0.3820443788924976, 0.19442088684274292, nan, 0.0, nan, 0.013429366696318402, nan, 0.0, 0.0, nan, nan, 0.4576031656302996, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.8777 | 36.0 | 720 | 2.3999 | 0.0909 | 0.1447 | 0.5650 | [0.5195887577564181, 0.7589975376935272, 0.9568716745005531, 0.47163186638014915, 0.7390994868399272, 0.5596583309503823, 0.0, 0.34237489727059855, 0.05195082273248527, nan, 0.0, 0.0, 0.019154068221530048, 0.0, 0.0, 0.0, 0.0, nan, 0.12754062385704582, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9590093091491458, 0.9530868799132556, 0.9823328986665545, 0.7264419275018494, 0.7398588188506662, 0.6792503526566803, nan, 0.4187506629557979, 0.05900169614732251, nan, 0.0, nan, 0.019292758280964543, nan, 0.0, 0.0, nan, nan, 0.39425018170071874, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.5147 | 37.0 | 740 | 2.3679 | 0.0929 | 0.1519 | 0.5702 | [0.5364579552707658, 0.8087594119494329, 0.9561376717960766, 0.40982070007215154, 0.8017200811359027, 0.6135778964476434, 0.0, 0.3455637603595059, 0.1178796275898851, 0.0, 0.0, 0.0, 0.02513673698854649, 0.0, 0.0, 0.0, 0.0, nan, 0.12247120239988897, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9499276808680068, 0.9381014502575223, 0.9782526395490683, 0.7751843065227928, 0.8186849605620733, 0.739649806318712, nan, 0.3864741863953894, 0.15397080203537677, nan, 0.0, nan, 0.025232670473566914, nan, 0.0, 0.0, nan, nan, 0.463215698942098, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 
0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.8831 | 38.0 | 760 | 2.3579 | 0.0936 | 0.1469 | 0.5595 | [0.5304507686446, 0.8067229372973851, 0.9543009495299125, 0.4539171592059717, 0.8000993130844989, 0.6138934235074627, 0.0, 0.2697607964234687, 0.10662386132885616, nan, 0.0, 0.0, 0.019987270464628042, 0.0, 0.0, 0.0, 0.0, nan, 0.1252110694183865, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9523358562540906, 0.9254201680672269, 0.9808005480406944, 0.7376023570827275, 0.8009876052230397, 0.7072949553301539, nan, 0.32520444294034156, 0.1423400775381633, nan, 0.0, nan, 0.020124919892488546, nan, 0.0, 0.0, nan, nan, 0.43115561657110557, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.5653 | 39.0 | 780 | 2.3282 | 0.0907 | 0.1479 | 0.5667 | [0.5275596317733419, 0.7942082826255595, 0.9517817247869964, 0.4541504138802404, 0.807428478543563, 0.5882710451677632, 0.0, 0.32820640572452037, 0.11085403522999678, 0.0, 0.0, 0.0, 0.028076457871027343, 0.0, 0.0, 0.0, 0.0, nan, 0.12453114251969526, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9578436340696244, 0.945335795608566, 0.9753802332778482, 0.7358294431264508, 0.8231092993968318, 0.6826873558585791, nan, 0.38597412830783373, 0.1407650835958323, nan, 0.0, nan, 
0.028197844031870832, nan, 0.0, 0.0, nan, nan, 0.3874263102640717, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1075 | 40.0 | 800 | 2.3290 | 0.0938 | 0.1462 | 0.5654 | [0.525696705455315, 0.8097656901943995, 0.9540430140437204, 0.468111166556006, 0.8067950394887163, 0.6026423969326851, 0.0, 0.3172524415747904, 0.07324316230784587, nan, 0.0, 0.0, 0.01990007056086351, 0.0, 0.0, 0.0, 0.0, nan, 0.11110484334984262, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9615755672139479, 0.9453188533477908, 0.9740341684143476, 0.7238909721690773, 0.807450122622125, 0.6827545285595934, nan, 0.3946165463664971, 0.08889629270656652, nan, 0.0, nan, 0.019962313600581556, nan, 0.0, 0.0, nan, nan, 0.3976419284502948, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.858 | 41.0 | 820 | 2.2884 | 0.0861 | 0.1402 | 0.5554 | [0.5197047648538989, 0.7522506253594982, 0.9529308898004213, 0.4432910713601716, 0.7616208786659381, 0.5651882235681975, 0.0, 0.2511281328404124, 0.07432842851653051, 0.0, 0.0, 0.0, 0.03464474456840869, 0.0, 0.0, 0.0, 0.0, nan, 0.12361143100118704, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 
nan] | [0.9645451577207137, 0.9527649769585254, 0.9758249154202546, 0.7380551516542946, 0.7621130774839265, 0.6602404782696312, nan, 0.28615950337665486, 0.0908044584443906, nan, 0.0, nan, 0.034989047987986265, nan, 0.0, 0.0, nan, nan, 0.2817168699022854, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.8407 | 42.0 | 840 | 2.3426 | 0.0886 | 0.1434 | 0.5560 | [0.5319928364107948, 0.7797183410955866, 0.9486764705882353, 0.4390484927134142, 0.7554676247082002, 0.5886622292884924, 0.0, 0.2479837292662356, 0.0663740228502706, 0.0, 0.0, 0.0, 0.03285631534465895, 0.0, 0.0, 0.0, 0.0, nan, 0.12666638151047235, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9545416267150862, 0.9530360531309298, 0.969136655629736, 0.7508545700364787, 0.7561311062504142, 0.7336826313786078, nan, 0.2851492850179566, 0.08358068815119941, nan, 0.0, nan, 0.03309516294106957, nan, 0.0, 0.0, nan, nan, 0.3587175967051603, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.094 | 43.0 | 860 | 2.3197 | 0.0938 | 0.1469 | 0.5661 | [0.5321809020623814, 0.7960168163032635, 0.9480932639440182, 0.45480238271229184, 0.8054216967228046, 0.6017263259674073, 0.0, 0.3017606715947139, 0.11857016219842353, nan, 0.0, 0.0, 0.024840957285671008, 0.0, 0.0, 0.0, 0.0, nan, 0.10708057202650854, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.963567113484157, 0.9463353889943074, 0.9688542223771265, 0.7337823014719013, 0.8059422018956718, 0.6688497794496316, nan, 0.37692762290569104, 0.1498970196268476, nan, 0.0, nan, 0.02509875940964351, nan, 0.0, 0.0, nan, nan, 0.38427683113946537, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.2333 | 44.0 | 880 | 2.3351 | 0.0924 | 0.1440 | 0.5649 | [0.5234600096067719, 0.7993256716093246, 0.9540437935614297, 0.4402231330031544, 0.8011626173796393, 0.5988770094608107, 0.0, 0.31263255344218144, 0.06051157899885066, nan, 0.0, 0.0, 0.01943918475313282, 0.0, 0.0, 0.0, 0.0, nan, 0.1115382392045783, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9644136683898171, 0.9338489428029276, 0.9755484913857858, 0.7493877707201347, 0.8016007158480811, 0.6973422001298673, nan, 0.37526581370563244, 0.0749485098134238, nan, 0.0, nan, 0.01954145025682229, nan, 0.0, 0.0, nan, nan, 0.3116369215860454, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7624 | 45.0 | 900 | 2.3306 | 0.0882 | 0.1482 | 0.5742 | [0.5258849135317549, 0.7047259963908543, 0.9470757995170505, 0.4796076709120187, 0.7687998311222253, 0.5991964723174914, 0.0, 0.3769427430313193, 0.12063174979711593, 0.0, 0.0, 0.0, 0.02665966248604636, 0.0, 0.0, 0.0, 0.0, nan, 0.12325740513726943, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 
nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9648908712423969, 0.9626761995120628, 0.9663063138855004, 0.7312759878574526, 0.7845330416915225, 0.6845793869371487, nan, 0.460987892532971, 0.14632299491155804, nan, 0.0, nan, 0.027183945976450782, nan, 0.0, 0.0, nan, nan, 0.3491480255188565, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7946 | 46.0 | 920 | 2.3309 | 0.0924 | 0.1501 | 0.5684 | [0.5361640179143524, 0.7846633151596686, 0.9536386080413826, 0.4568883842931527, 0.7759403160506726, 0.6158210406873075, 0.0, 0.32342639353641117, 0.11150484571020869, nan, 0.0, 0.0, 0.0293502430935062, 0.0, 0.0, 0.0, 0.0, nan, 0.123842605156038, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9543968407102787, 0.9387537272973706, 0.9748934866084574, 0.7442603505012627, 0.7876151653741632, 0.72303575826784, nan, 0.3969147931325356, 0.14200690574267022, nan, 0.0, nan, 0.02979521172295714, nan, 0.0, 0.0, nan, nan, 0.460671888879916, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.7709 | 47.0 | 940 | 2.3227 | 0.0913 | 0.1509 | 0.5704 | [0.5424758709486558, 0.7733391833636282, 0.9512393001599097, 0.4543827573277878, 0.7687249054047356, 0.6394879643801308, 0.0, 0.32752722872125856, 0.14992535250207353, 0.0, 0.0, 0.0, 0.02317038711500424, 0.0, 0.0, 0.0, 0.0, nan, 0.11885573064179251, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9536167691741732, 0.9406597316345893, 0.9723035135898468, 0.7496747531950716, 0.7709286140385763, 0.7203600456774367, nan, 0.4101183470807215, 0.20530954688635814, nan, 0.0, nan, 0.02353008694654079, nan, 0.0, 0.0, nan, nan, 0.4388678026326415, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.0854 | 48.0 | 960 | 2.3119 | 0.0923 | 0.1515 | 0.5675 | [0.5484915919545876, 0.7502571445858459, 0.9503989939945234, 0.4518522223975188, 0.7665169241908147, 0.6387045798547317, 0.0, 0.3014816981040113, 0.14846499417229633, nan, 0.0, 0.0, 0.024767918571226614, 0.0, 0.0, 0.0, 0.0, nan, 0.1255448070585734, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9485950586604931, 0.9453781512605042, 0.971912914410706, 0.7488712022652483, 0.7718897063697223, 0.7511475336423278, nan, 0.3749223394636751, 0.19869154349406348, nan, 0.0, nan, 0.025137019713621624, nan, 0.0, 0.0, nan, nan, 0.4768634418153921, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.2518 | 49.0 | 980 | 2.3313 | 0.0959 | 0.1480 | 0.5662 | [0.5342926601876615, 0.8062937473403085, 0.9553006514505319, 0.47239519660475027, 0.8008359767380386, 0.625160514049403, 0.0, 0.28114668652271035, 
0.08647048160009668, nan, 0.0, 0.0, 0.027304971193724548, 0.0, 0.0, 0.0, 0.0, nan, 0.11171062547098719, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9601395264315716, 0.9309179316888045, 0.9798991653196003, 0.736824315706232, 0.8032246304765692, 0.6922034885022726, nan, 0.38143824787727865, 0.1083565543978677, nan, 0.0, nan, 0.02783437114407874, nan, 0.0, 0.0, nan, nan, 0.44892190906888474, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1639 | 50.0 | 1000 | 2.3127 | 0.0939 | 0.1473 | 0.5650 | [0.5336831298064109, 0.7946016311618073, 0.9542612124083791, 0.4698340415687516, 0.7895064764715527, 0.6196465123602583, 0.0, 0.28656549336868975, 0.10462468913822695, nan, 0.0, 0.0, 0.024119941721107298, 0.0, 0.0, 0.0, 0.0, nan, 0.11790298802632052, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9584331199463405, 0.9359158986175116, 0.9802837552806004, 0.7382528506925844, 0.797905481540399, 0.704932715344484, nan, 0.3646332654803336, 0.13570692997334627, nan, 0.0, nan, 0.024543985001960842, nan, 0.0, 0.0, nan, nan, 0.39719777113785026, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | ### Framework 
versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
CLTL/icf-levels-etn
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- tags: - autotrain - token-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - atatavana/autotrain-data-rhenus_eml co2_eq_emissions: emissions: 1.8661519287001727 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 53659129262 - CO2 Emissions (in grams): 1.8662 ## Validation Metrics - Loss: 0.341 - Accuracy: 0.937 - Precision: 0.317 - Recall: 0.352 - F1: 0.334 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/atatavana/autotrain-rhenus_eml-53659129262 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("atatavana/autotrain-rhenus_eml-53659129262", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("atatavana/autotrain-rhenus_eml-53659129262", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
CLTL/icf-levels-ins
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
Converted using https://github.com/oobabooga/GPTQ-for-LLaMa, commit 57a2629 --- license: other ---
CLTL/icf-levels-mbw
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: openrail base_model: runwayml/stable-diffusion-v1-5 tags: - art - controlnet - stable-diffusion - controlnet-v1-1 - image-to-image duplicated_from: ControlNet-1-1-preview/control_v11f1e_sd15_tile --- # Controlnet - v1.1 - *Tile Version* **Controlnet v1.1** was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel). This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth) into `diffusers` format. It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet). ControlNet is a neural network structure to control diffusion models by adding extra conditions. ![img](./sd.png) This checkpoint corresponds to the ControlNet conditioned on **tiled image**. Conceptually, it is similar to a super-resolution model, but its usage is not limited to that. It is also possible to generate details at the same size as the input (condition) image. **This model was contributed by [*takuma104*](https://huggingface.co/takuma104)** ## Model Details - **Developed by:** Lvmin Zhang, Maneesh Agrawala - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543). - **Cite as:** @misc{zhang2023adding, title={Adding Conditional Control to Text-to-Image Diffusion Models}, author={Lvmin Zhang and Maneesh Agrawala}, year={2023}, eprint={2302.05543}, archivePrefix={arXiv}, primaryClass={cs.CV} } ## Introduction Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by Lvmin Zhang, Maneesh Agrawala. The abstract reads as follows: *We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. 
This may enrich the methods to control large diffusion models and further facilitate related applications.* ## Example It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint has been trained on it. Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion. 1. Let's install `diffusers` and related packages: ``` $ pip install diffusers transformers accelerate ``` 2. Run code: ```python import torch from PIL import Image from diffusers import ControlNetModel, DiffusionPipeline from diffusers.utils import load_image def resize_for_condition_image(input_image: Image, resolution: int): input_image = input_image.convert("RGB") W, H = input_image.size k = float(resolution) / min(H, W) H *= k W *= k H = int(round(H / 64.0)) * 64 W = int(round(W / 64.0)) * 64 img = input_image.resize((W, H), resample=Image.LANCZOS) return img controlnet = ControlNetModel.from_pretrained('lllyasviel/control_v11f1e_sd15_tile', torch_dtype=torch.float16) pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_controlnet_img2img", controlnet=controlnet, torch_dtype=torch.float16).to('cuda') pipe.enable_xformers_memory_efficient_attention() source_image = load_image('https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png') condition_image = resize_for_condition_image(source_image, 1024) image = pipe(prompt="best quality", negative_prompt="blur, lowres, bad anatomy, bad hands, cropped, worst quality", image=condition_image, controlnet_conditioning_image=condition_image, width=condition_image.size[0], height=condition_image.size[1], strength=1.0, generator=torch.manual_seed(0), num_inference_steps=32, ).images[0] image.save('output.png') ``` ![original](./images/original.png) ![tile_output](./images/output.png) ## Other released checkpoints v1-1 The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on a different type of conditioning: | Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example | |---|---|---|---|---| |[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" 
src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>| |[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>| 
|[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>| 
|[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>| ## More information For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly).
CM-CA/DialoGPT-small-cartman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - token-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - atatavana/autotrain-data-rhenus_email co2_eq_emissions: emissions: 0.45458051171320146 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 55490129266 - CO2 Emissions (in grams): 0.4546 ## Validation Metrics - Loss: 0.230 - Accuracy: 0.967 - Precision: 0.624 - Recall: 0.225 - F1: 0.330 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/atatavana/autotrain-rhenus_email-55490129266 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("atatavana/autotrain-rhenus_email-55490129266", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("atatavana/autotrain-rhenus_email-55490129266", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Calamarii/calamari
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - openvino --- # madlag/albert-base-v2-squad This is the [madlag/albert-base-v2-squad](https://huggingface.co/madlag/albert-base-v2-squad) model converted to [OpenVINO](https://openvino.ai), for accelerated inference. An example of how to do inference on this model: ```python from optimum.intel.openvino import OVModelForQuestionAnswering from transformers import AutoTokenizer, pipeline # model_id should be set to either a local directory or a model available on the HuggingFace hub. model_id = "helenai/madlag-albert-base-v2-squad-ov" tokenizer = AutoTokenizer.from_pretrained(model_id) model = OVModelForQuestionAnswering.from_pretrained(model_id) pipe = pipeline("question-answering", model=model, tokenizer=tokenizer) result = pipe("What is OpenVINO?", "OpenVINO is a framework that accelerates deep learning inferencing") print(result) ```
CalvinHuang/mt5-small-finetuned-amazon-en-es
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
summarization
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.77 +/- 17.29 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
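The usage section above is still a placeholder, so here is a minimal sketch of how a checkpoint like this is typically loaded and evaluated with stable-baselines3 and huggingface_sb3. The `repo_id` and `filename` below are hypothetical placeholders, not values taken from this card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# NOTE: repo_id and filename are hypothetical placeholders, not taken from this card;
# point them at wherever this checkpoint is actually published.
checkpoint = load_from_hub(repo_id="your-namespace/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# LunarLander-v2 needs the Box2D extra: pip install gym[box2d]
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```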
Cameron/BERT-SBIC-offensive
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.4706932444154383 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4939 - Matthews Correlation: 0.4707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.590049876247753e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5211 | 1.0 | 535 | 0.4939 | 0.4707 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
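The card does not include usage code; as a rough sketch (not taken from the card), a CoLA fine-tune like this is normally used through the `text-classification` pipeline. The repository id below is a placeholder, since the card does not say where the checkpoint is published.

```python
from transformers import pipeline

# Hypothetical repo id -- the card does not state where this checkpoint is hosted.
classifier = pipeline("text-classification", model="your-namespace/bert-base-uncased-finetuned-cola")

# CoLA is a binary acceptability task; the two labels correspond to
# unacceptable vs. acceptable sentences (typically LABEL_0 / LABEL_1).
print(classifier("The book was written by the author."))
print(classifier("The book was wrote by the author."))
```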
Cameron/BERT-SBIC-targetcategory
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- tags: - adapterhub:umls - bert - adapter-transformers datasets: - umls --- # Adapter `reginaboateng/umls_RE_adapter_clinical_bert` for emilyalsentzer/Bio_ClinicalBERT An [adapter](https://adapterhub.ml) for the `emilyalsentzer/Bio_ClinicalBERT` model that was trained on the [umls](https://adapterhub.ml/explore/umls/) dataset. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT") adapter_name = model.load_adapter("reginaboateng/umls_RE_adapter_clinical_bert", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
Cameron/BERT-jigsaw-identityhate
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- license: creativeml-openrail-m --- These VAEs are modified versions of the [kl-f8-anime2 vae](https://huggingface.co/hakurei/waifu-diffusion-v1-4). They have been modified using the [VAE-BlessUp script](https://github.com/sALTaccount/VAE-BlessUp) to produce lower contrast images than the original version. The number at the end of the filename corresponds to the multiplier used for the contrast level of that version.
CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5205935908642821 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4500 - Matthews Correlation: 0.5206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4917 | 1.0 | 535 | 0.4500 | 0.5206 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
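For reference, the Matthews correlation reported above can be reproduced with the `evaluate` library; the labels below are toy values purely to illustrate the metric's output format.

```python
import evaluate

# Toy predictions/references purely to illustrate the metric's output format.
matthews = evaluate.load("matthews_correlation")
print(matthews.compute(predictions=[1, 0, 1, 1, 0], references=[1, 0, 0, 1, 0]))
# {'matthews_correlation': ...}
```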
Capreolus/birch-bert-large-msmarco_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: - en tags: - causal-lm - llama inference: false --- # Wizard-Vicuna-13B-GPTQ This repo contains 4bit GPTQ format quantised models of [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b). It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). ## Repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ). * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML). * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF). ## How to easily download and use this model in text-generation-webui Open the text-generation-webui UI as normal. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/wizard-vicuna-13B-GPTQ`. 3. Click **Download**. 4. Wait until it says it's finished downloading. 5. Click the **Refresh** icon next to **Model** in the top left. 6. In the **Model drop-down**: choose the model you just downloaded, `wizard-vicuna-13B-GPTQ`. 7. If you see an error in the bottom right, ignore it - it's temporary. 8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama` 9. Click **Save settings for this model** in the top right. 10. Click **Reload the Model** in the top right. 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! ## Provided files **Compatible file - wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors** In the `main` branch - the default one - you will find `wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui. * `wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches * Works with text-generation-webui one-click-installers * Parameters: Groupsize = 128g. No act-order. * Command used to create the GPTQ: ``` CUDA_VISIBLE_DEVICES=0 python3 llama.py wizard-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors ``` # Original WizardVicuna-13B model card Github page: https://github.com/melodysdreamj/WizardVicunaLM # WizardVicunaLM ### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage. ## Benchmark ### Approximately 7% performance improvement over VicunaLM ![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png) ### Detail The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order. 
| | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link | |-----|--------|-------------------|------------|-----------|----------| | Q1 | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) | | Q2 | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) | | Q3 | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) | | Q4 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) | | Q5 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) | | Q6 | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) | | Q7 | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) | | Q8 | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) | | Q9 | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) | | Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) | | Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) | | Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) | | Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) | | Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) | | Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) | | | 91 | 88 | 82 | 80 | | ## Principle We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques. Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0). After creating the training data, I later trained it according to the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh). ## Detailed Method First, we explore and expand various areas in the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5. After that, we applied the following model using Vicuna's fine-tuning format. ## Training Process Trained with 8 A100 GPUs for 35 hours. ## Weights You can see the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) in the huggingface. ## Conclusion If we extend the conversation to gpt4 32K, we can expect a dramatic improvement, as we can generate 8x more, more accurate and richer conversations. ## License The model is licensed under the LLaMA model, and the dataset is licensed under the terms of OpenAI because it uses ChatGPT. Everything else is free. ## Author [JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo.
dccuchile/albert-base-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: apache-2.0 tags: - computer-vision - generated_from_trainer datasets: - beans metrics: - accuracy widget: - src: >- https://huggingface.co/platzi/platzi-vit-model-paola-daft/resolve/main/healthy.jpeg example_title: Healthy - src: >- https://huggingface.co/platzi/platzi-vit-model-paola-daft/resolve/main/bean_rust.jpeg example_title: Bean rust model-index: - name: platzi-vit-model-paola-daft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-paola-daft This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0743 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1367 | 3.85 | 500 | 0.0743 | 0.9774 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
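The widget URLs above suggest the checkpoint is hosted at `platzi/platzi-vit-model-paola-daft`; assuming that id is correct, inference can be sketched with the `image-classification` pipeline:

```python
from transformers import pipeline

# Repo id inferred from the widget URLs above; adjust if the checkpoint lives elsewhere.
classifier = pipeline("image-classification", model="platzi/platzi-vit-model-paola-daft")

# Classify one of the widget example images (healthy leaf vs. bean rust vs. angular leaf spot).
url = "https://huggingface.co/platzi/platzi-vit-model-paola-daft/resolve/main/healthy.jpeg"
print(classifier(url))
```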
dccuchile/albert-base-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: bigcode-openrail-m --- # GPTQ-for-StarCoder Visit [GPTQ-for-SantaCoder](https://github.com/mayank31398/GPTQ-for-SantaCoder) for instructions on how to use the model weights here. If you want 4-bit weights, visit [starcoderbase-GPTQ-4bit-128g](https://huggingface.co/mayank31398/starcoderbase-GPTQ-4bit-128g). ## Results | StarCoderBase | Bits | group-size | memory(MiB) | wikitext2 | ptb | c4 | stack | checkpoint size(MB) | | -------------------------------------------------- | ---- | ---------- | ----------- | --------- | ---------- | ---------- | ---------- | ------------------- | | FP32 | 32 | - | | 10.172 | 15.756 | 12.736 | 1.692 | 59195 | | BF16 | 16 | - | | 10.173 | 15.765 | 12.745 | 1.692 | 29597 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 8 | 128 | | 10.174 | 15.767 | 12.739 | 1.692 | 16163 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | 128 | | 10.387 | 16.056 | 13.005 | 1.708 | 8877 | # License The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license). # Acknowledgements Thanks to everyone in BigCode who worked so hard to create these code models.
dccuchile/albert-base-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - generated_from_keras_callback model-index: - name: Marbert-sarcasm-detector results: [] license: afl-3.0 language: - ar metrics: - accuracy - F1 score - precision - recall pipeline_tag: text-classification widget: - text: "بعد أن حصل على الليسانس بدأ فى تحضيرالماجستير ." example_title: "NS 01" - text: "بعد أن حصل على الليسانس بدأ فى تحضيرالماجستير .وبعد أن حصل على الماجستير بدأ فى «تحضير» الشاى للزبائن." example_title: "Sarc 01" - text: " .جمع كلمة امراءة هي كلمة نساء" example_title: "NS 02" - text: ".للجامعة العربية موقفا كبيرا" example_title: "NS 03" - text: "شهر أكتوبر ده شهر عسل بجد مبيجبش ليا فيه غير الاكتئاب كل سنه" example_title: "Sarc 02" - text: "جمع كلمة امراءة كلمة نساء. حتى اللغة العربية مش قادرة عليهم " example_title: "Sarc 03" - text: " 🙂🙂للجامعة العربية موقفاَ كبيراَ!!!!! يتسع لاكثر من الف سيار " example_title: "Sarc 04" - text: " فى ناس زى النسكافيه ثلاثة فى واحد يتكلموا معاك وعنك وعليك" example_title: "Sarc 05" - text: "!!هو أنت كدة عايش؟ يا بني دا أنت ميت بالحياة" example_title: "Sarc 06" --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MARBERT Sarcasm Detector This model is a fine-tuned version of UBC-NLP/MARBERTv2, finetuned on the ArSarcasT corpus training set. It achieves the following results on the evaluation sets: | Eval Dataset| Accuracy | F1 | Precision | Recall| | :----- | :---: | :---: | :---: | :---: | |ArSarcasT |0.831 | 0.716 | 0.727 | 0.705 | |iSarcasmEVAL |0.886 | 0.619 | 0.591 | 0.650 | |AraSarcasmV2 |0.765 | 0.553 | 0.577 | 0.532 | |IDAT |0.796 | 0.794 | 0.769 | 0.821 | ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Tokenizers 0.13.3
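The card declares a `text-classification` pipeline tag and widget examples but no usage snippet; a minimal sketch follows, assuming the checkpoint is published under a repo id like the hypothetical one below.

```python
from transformers import pipeline

# Hypothetical repo id -- the card does not state where this MARBERTv2 checkpoint is hosted.
detector = pipeline("text-classification", model="your-namespace/marbert-sarcasm-detector")

# One of the sarcastic widget examples from the card.
print(detector("شهر أكتوبر ده شهر عسل بجد مبيجبش ليا فيه غير الاكتئاب كل سنه"))
```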
dccuchile/albert-base-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # rodekruis/sml-ukr-word-classifier-medium This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("rodekruis/sml-ukr-word-classifier-medium") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
dccuchile/albert-large-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5891424967516642 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6806 - Matthews Correlation: 0.5891 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.873067343773953e-06 - train_batch_size: 4 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5444 | 1.0 | 2138 | 0.6008 | 0.5429 | | 0.4189 | 2.0 | 4276 | 0.6806 | 0.5891 | | 0.2808 | 3.0 | 6414 | 0.8681 | 0.5778 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
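As a sketch only, the hyperparameters listed above map onto `transformers` `TrainingArguments` roughly as follows; `output_dir` and `evaluation_strategy` are assumptions added for illustration, not values from the card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above; output_dir and evaluation_strategy
# are assumptions added for illustration.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-cola",
    learning_rate=7.873067343773953e-06,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",
)
```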
dccuchile/albert-large-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- language: - en tags: - causal-lm - llama --- # Wizard-Vicuna-13B-HF This is a float16 HF format repo for [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b). June Lee's repo was also HF format. The reason I've made this is that the original repo was in float32, meaning it required 52GB disk space, VRAM and RAM. This model was converted to float16 to make it easier to load and manage. ## Repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GPTQ). * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-GGML). * [float16 HF format model for GPU inference](https://huggingface.co/TheBloke/wizard-vicuna-13B-HF). # Original WizardVicuna-13B model card Github page: https://github.com/melodysdreamj/WizardVicunaLM # WizardVicunaLM ### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method I am a big fan of the ideas behind WizardLM and VicunaLM. I particularly like the idea of WizardLM handling the dataset itself more deeply and broadly, as well as VicunaLM overcoming the limitations of single-turn conversations by introducing multi-round conversations. As a result, I combined these two ideas to create WizardVicunaLM. This project is highly experimental and designed for proof of concept, not for actual usage. ## Benchmark ### Approximately 7% performance improvement over VicunaLM ![](https://user-images.githubusercontent.com/21379657/236088663-3fa212c9-0112-4d44-9b01-f16ea093cb67.png) ### Detail The questions presented here are not from rigorous tests, but rather, I asked a few questions and requested GPT-4 to score them. The models compared were ChatGPT 3.5, WizardVicunaLM, VicunaLM, and WizardLM, in that order. | | gpt3.5 | wizard-vicuna-13b | vicuna-13b | wizard-7b | link | |-----|--------|-------------------|------------|-----------|----------| | Q1 | 95 | 90 | 85 | 88 | [link](https://sharegpt.com/c/YdhIlby) | | Q2 | 95 | 97 | 90 | 89 | [link](https://sharegpt.com/c/YOqOV4g) | | Q3 | 85 | 90 | 80 | 65 | [link](https://sharegpt.com/c/uDmrcL9) | | Q4 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/XBbK5MZ) | | Q5 | 90 | 85 | 80 | 75 | [link](https://sharegpt.com/c/AQ5tgQX) | | Q6 | 92 | 85 | 87 | 88 | [link](https://sharegpt.com/c/eVYwfIr) | | Q7 | 95 | 90 | 85 | 92 | [link](https://sharegpt.com/c/Kqyeub4) | | Q8 | 90 | 85 | 75 | 70 | [link](https://sharegpt.com/c/M0gIjMF) | | Q9 | 92 | 85 | 70 | 60 | [link](https://sharegpt.com/c/fOvMtQt) | | Q10 | 90 | 80 | 75 | 85 | [link](https://sharegpt.com/c/YYiCaUz) | | Q11 | 90 | 85 | 75 | 65 | [link](https://sharegpt.com/c/HMkKKGU) | | Q12 | 85 | 90 | 80 | 88 | [link](https://sharegpt.com/c/XbW6jgB) | | Q13 | 90 | 95 | 88 | 85 | [link](https://sharegpt.com/c/JXZb7y6) | | Q14 | 94 | 89 | 90 | 91 | [link](https://sharegpt.com/c/cTXH4IS) | | Q15 | 90 | 85 | 88 | 87 | [link](https://sharegpt.com/c/GZiM0Yt) | | | 91 | 88 | 82 | 80 | | ## Principle We adopted the approach of WizardLM, which is to extend a single problem more in-depth. However, instead of using individual instructions, we expanded it using Vicuna's conversation format and applied Vicuna's fine-tuning techniques. Turning a single command into a rich conversation is what we've done [here](https://sharegpt.com/c/6cmxqq0). After creating the training data, I later trained it according to the Vicuna v1.1 [training method](https://github.com/lm-sys/FastChat/blob/main/scripts/train_vicuna_13b.sh). 
## Detailed Method First, we explore and expand various areas in the same topic using the 7K conversations created by WizardLM. However, we made it in a continuous conversation format instead of the instruction format. That is, it starts with WizardLM's instruction, and then expands into various areas in one conversation using ChatGPT 3.5. After that, we applied the following model using Vicuna's fine-tuning format. ## Training Process Trained with 8 A100 GPUs for 35 hours. ## Weights You can see the [dataset](https://huggingface.co/datasets/junelee/wizard_vicuna_70k) we used for training and the [13b model](https://huggingface.co/junelee/wizard-vicuna-13b) in the huggingface. ## Conclusion If we extend the conversation to gpt4 32K, we can expect a dramatic improvement, as we can generate 8x more, more accurate and richer conversations. ## License The model is licensed under the LLaMA model, and the dataset is licensed under the terms of OpenAI because it uses ChatGPT. Everything else is free. ## Author [JUNE LEE](https://github.com/melodysdreamj) - He is active in Songdo Artificial Intelligence Study and GDG Songdo.
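Since this is the float16 HF-format repo, a minimal loading sketch with `transformers` (assuming a recent version with LLaMA support and `accelerate` installed; the repo id is the one listed under "Repositories available" above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/wizard-vicuna-13B-HF"  # float16 HF-format repo listed above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in float16 (~26 GB)
    device_map="auto",          # requires the accelerate package
)

# Vicuna v1.1-style prompt; the exact template is an assumption, not taken from this card.
prompt = "USER: Write a short story about llamas. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```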
dccuchile/albert-large-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: other tags: - generated_from_trainer datasets: - scene_parse_150 model-index: - name: selu-segformer-b0-scene-parse-150-cvfinal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # selu-segformer-b0-scene-parse-150-cvfinal This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set: - Loss: 2.3127 - Mean Iou: 0.0939 - Mean Accuracy: 0.1473 - Overall Accuracy: 0.5650 - Per Category Iou: [0.5336831298064109, 0.7946016311618073, 0.9542612124083791, 0.4698340415687516, 0.7895064764715527, 0.6196465123602583, 0.0, 0.28656549336868975, 0.10462468913822695, nan, 0.0, 0.0, 0.024119941721107298, 0.0, 0.0, 0.0, 0.0, nan, 0.11790298802632052, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] - Per Category Accuracy: [0.9584331199463405, 0.9359158986175116, 0.9802837552806004, 0.7382528506925844, 0.797905481540399, 0.704932715344484, nan, 0.3646332654803336, 0.13570692997334627, nan, 0.0, nan, 0.024543985001960842, nan, 0.0, 0.0, nan, nan, 0.39719777113785026, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | 
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 4.8909 | 1.0 | 20 | 4.8871 | 0.0125 | 0.0437 | 0.2028 | [0.2508561911058272, 0.04515398550724638, 0.44804797891594955, 0.21111439623524775, 0.31233771405814414, 7.014449766519029e-05, 0.0, 0.0, 0.002980895171853123, nan, 0.0, nan, 0.027754770004437213, nan, 0.0, 0.0205260184545854, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0003843936190659235, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.004370956146657081, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0] | [0.39185298015397546, 0.06756573597180808, 0.6783385713684792, 0.21857860768857937, 0.32489891959965533, 7.836815118335908e-05, nan, 0.0, 0.0029985461594378483, nan, 0.0, nan, 0.0358977302074665, nan, 0.0, 0.06666893200584458, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0003843936190659235, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.005191611448869459, 0.0, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 4.7422 | 2.0 | 40 | 4.5099 | 0.0250 | 0.0841 | 0.3808 | [0.34984703137884793, 0.3589339267863093, 0.4974385245901639, 0.37037865924560065, 0.5684109932116259, 0.01841334153311869, 0.0, 0.001964517481175478, 0.0029039377395748637, nan, 0.0, nan, 0.06118068223901683, nan, 0.0, 0.03404671283875128, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.03711163107543761, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0] | [0.7327161706761063, 0.5109870561127677, 0.9394270811424725, 0.4854978189331905, 0.5716510903426791, 0.02279393654418844, nan, 0.0019648747076680623, 0.003028834504482675, nan, 0.0, nan, 0.08533004294719121, nan, 0.0, 0.05562540351354107, nan, nan, 0.0, 0.0, nan, nan, 0.03711163107543761, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 4.0553 | 3.0 | 60 | 3.8580 | 0.0431 | 0.0969 | 0.4522 | [0.39298635765482803, 0.5115893703108798, 0.7615359747458901, 0.3781477627471384, 0.608186554204912, 0.07249520170070661, 0.0, 0.00026263422123902746, 0.0002654632333421821, nan, 0.0, nan, 0.031058319540453704, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.004609370441682712, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, nan] | [0.890797371986272, 0.8525853889943074, 0.9625745894201705, 0.5214471569602817, 0.6093656790614437, 0.09429927677391908, nan, 0.00026265677326154047, 0.00027259510540344074, nan, 0.0, nan, 0.03661511090705616, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.004679678938482955, 0.0, nan, nan, 
nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 4.187 | 4.0 | 80 | 3.6772 | 0.0470 | 0.1028 | 0.4582 | [0.41381878117227805, 0.6254016101621327, 0.723075600130345, 0.4342820137592303, 0.5828271454173473, 0.03514338327468898, 0.0, 0.0013178423739339867, 0.0032081449763366166, nan, 0.0, nan, 0.08433073627395674, nan, 0.0, 0.05683375940128191, 0.0, 0.0, 0.0013988020515763424, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.8694650748085648, 0.8541525481160206, 0.9600627362373881, 0.6586184025917706, 0.6089514151256048, 0.04550950493719352, nan, 0.0013183349581011937, 0.003664889750424037, nan, 0.0, nan, 0.10580887065147733, nan, 0.0, 0.10425090896734514, nan, nan, 0.0015747395623031575, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 3.9225 | 5.0 | 100 | 3.4916 | 0.0535 | 0.1072 | 0.4745 | [0.45367797998078097, 0.6130674322082363, 0.7883869096437359, 0.44548332225614223, 0.6276848544314471, 0.03952981755468821, 0.0, 0.008173727017699651, 0.0006412792839502804, nan, 0.0, nan, 0.019051546391752577, nan, 0.0, 0.034960804888587126, 0.0, 0.0, 0.020120972719417356, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.8962918531278948, 
0.9171862293304418, 0.9473472306518199, 0.7566005969235479, 0.6784317624444887, 0.0487561854862184, nan, 0.008182768705455683, 0.0007117761085534286, nan, 0.0, nan, 0.019885792992625325, nan, 0.0, 0.09623160827754935, nan, nan, 0.026326415246709197, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 3.3489 | 6.0 | 120 | 3.4303 | 0.0520 | 0.1086 | 0.4474 | [0.42187173154853513, 0.708063699670283, 0.5798242120634122, 0.3861172591019147, 0.6270752544837616, 0.02814913099490091, 0.0, 0.00023740131429408467, 0.017860624731895643, nan, 0.0, nan, 0.027602765692416008, nan, 0.0, 0.0016909891128098215, 0.0, 0.0, 0.11093712358467839, 0.0, nan, nan, 0.00014647722279185587, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.7836365221219719, 0.8768382352941176, 0.9973919993269675, 0.8288627841126502, 0.6859713660767548, 0.040852197666868185, nan, 0.00023740131429408467, 0.020808093045795978, nan, 0.0, nan, 0.029250002391268997, nan, 0.0, 0.0024805463998097115, nan, nan, 0.18594040216425745, 0.0, nan, nan, 0.00014809110564819477, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 3.185 | 7.0 | 140 | 3.2048 | 0.0641 | 0.1138 | 0.4879 | [0.4625065290705925, 0.7300395796369592, 0.8077190100423401, 0.42056456498035616, 0.6794219083593365, 0.044713581380198694, 0.0, 0.028159846611837125, 0.024135304777162067, nan, 0.0, 0.0, 0.03210493557646753, nan, 0.0, 0.009216222146861098, 0.0, 0.0, 0.09707657301147606, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9052804046916575, 0.9062415288696124, 0.9927588921405436, 0.7748463049412004, 0.7540763571286538, 0.06762051902106983, nan, 0.028190143299474182, 0.027834989096195785, nan, 0.0, nan, 0.033056902637091455, nan, 0.0, 0.015698800502905296, nan, nan, 0.15848340466768956, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 3.0486 | 8.0 | 160 | 3.0909 | 0.0698 | 0.1105 | 0.4975 | [0.4651597922984161, 0.7260885140196046, 0.9206514284429353, 0.45898710456104685, 0.7276357698339558, 0.04093562606727295, 0.0, 0.015882548879086716, 0.005692492407655908, nan, 0.0, 0.0, 0.010177352573912554, nan, 0.0, 0.008494170905639253, 0.0, 0.0, 0.04026935665611418, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9556969599371216, 0.9443616156139876, 0.9841356641087428, 0.7372643555011352, 0.7457579372970107, 0.05770135017129039, nan, 0.01589578587411669, 0.006103101526532591, nan, 0.0, nan, 0.010368542378069194, nan, 0.0, 0.011859050596350538, nan, nan, 0.06060728417992409, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.9697 | 9.0 | 180 | 2.9824 | 0.0718 | 0.1146 | 0.5030 | [0.47571929063641566, 0.7620331717703751, 0.9091288105282214, 0.45845890761854624, 0.6875787794176307, 0.06392846340574371, 0.0, 0.029610548726600787, 0.009870234653398776, nan, 0.0, 0.0, 0.04909003605961605, nan, 0.0, 0.0034267639047535366, 0.0, 0.0, 0.06778747181648857, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9398650535338866, 0.9216335727839523, 0.99588368557367, 0.8275235325629449, 0.7140916020414927, 0.09228409574348985, nan, 0.02978628830621739, 0.011115822631451418, nan, 0.0, nan, 0.04987230623547304, nan, 0.0, 0.005300893676305685, nan, nan, 0.11168537511103933, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.975 | 10.0 | 200 | 2.9308 | 0.0751 | 0.1232 | 0.5169 | [0.49561224561439093, 0.7497106384686921, 0.9130948912615037, 0.3754963905075124, 0.7629068787092295, 0.22973931751081408, 0.0, 0.05083747710023554, 0.015496721582456333, nan, 0.0, 0.0, 0.012014619855137467, nan, 0.0, 0.0, 0.0, 0.0, 0.07644850942663618, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9386077791452011, 0.9492494578476552, 0.9605134275979352, 0.8774010867069718, 0.7647146550009942, 0.33713978639081077, nan, 0.05102107820605424, 0.017037194087715046, nan, 0.0, nan, 0.01204243067711173, nan, 0.0, 0.0, nan, nan, 0.14309941048211258, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.5359 | 11.0 | 220 | 2.7183 | 0.0707 | 0.1091 | 0.4964 | [0.436494042818339, 0.7312176249886404, 0.8698091735772315, 0.449270633714399, 0.715452001258549, 
0.07408947729180626, 0.0, 0.023337516948682666, 0.0007162605469365537, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.023290337478969417, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9563026973041733, 0.9542474247763622, 0.9920918689269339, 0.6937386801357108, 0.7159143633591833, 0.10770023062627349, nan, 0.023386555003864086, 0.0007572086261206688, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.028506823871436646, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.5711 | 12.0 | 240 | 2.8566 | 0.0763 | 0.1213 | 0.5148 | [0.48463778209415787, 0.7972517840126281, 0.8992532843509164, 0.3620071881258745, 0.7991759196438913, 0.23374793779079467, 0.0, 0.05480187511495767, 0.017412369340310773, nan, 0.0, 0.0, 0.010156830611505994, nan, 0.0, 0.0024294237037948215, 0.0, 0.0, 0.07926785671727163, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9499483645829794, 0.9284274193548387, 0.9472931476885542, 0.8414836356215403, 0.8002750712533969, 0.2823492532634737, nan, 0.055683235931446584, 0.0197025684516598, nan, 0.0, nan, 0.010196371010167675, nan, 0.0, 0.0026844269258214687, nan, nan, 0.13429702010821287, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 
nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.4239 | 13.0 | 260 | 2.8176 | 0.0785 | 0.1216 | 0.5215 | [0.4772270741554508, 0.7986058009962806, 0.9456717376101984, 0.38306746584232015, 0.7562297037577043, 0.16837090306762553, 0.0, 0.12381028314015488, 0.015467904098994586, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0001597290994473373, 0.0, nan, 0.09785982304164544, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9578687500092338, 0.9112649091894822, 0.9914008088407618, 0.8282059131144613, 0.7563299529396169, 0.2298761783211303, nan, 0.12549942670108144, 0.016355706324206444, nan, 0.0, nan, 0.0, nan, 0.0, 0.00016990043834313093, nan, nan, 0.17059678591617541, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.5524 | 14.0 | 280 | 2.6836 | 0.0803 | 0.1330 | 0.5227 | [0.47872884863743637, 0.7690534821348954, 0.8143990086741016, 0.39197327448075703, 0.7833511205976521, 0.4254791369299206, 0.0, 0.05692139759856595, 0.015772870662460567, nan, 0.0, 0.0, 5.655842013479757e-05, nan, 0.0, 0.0, 0.0, nan, 0.12069485104184605, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.910749769893671, 0.9497661968013011, 0.987344586595838, 0.8268475293997603, 0.790581295154769, 0.687467813080764, nan, 0.05742081150840754, 0.016658589774654713, nan, 0.0, nan, 5.739045596717266e-05, nan, 0.0, 0.0, nan, nan, 0.22639909553420012, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 
nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.2365 | 15.0 | 300 | 2.6524 | 0.0783 | 0.1216 | 0.5209 | [0.4727612635992119, 0.7967315791368201, 0.9461993821443159, 0.4549778460982006, 0.7957215731446438, 0.1510305721400498, 0.0, 0.09256875307543579, 0.010900638156564927, nan, 0.0, 0.0, 0.0019851210451189653, nan, 0.0, 0.00021183876044062461, 0.0, 0.0, 0.11264323169547485, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9702627275023971, 0.9375, 0.9883661536797447, 0.8022116782735135, 0.8012858752568437, 0.16907368845301268, nan, 0.09407153356197942, 0.011433850254422099, nan, 0.0, nan, 0.002008665958851043, nan, 0.0, 0.0002378606136803833, nan, nan, 0.2107728337236534, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.3117 | 16.0 | 320 | 2.5956 | 0.0860 | 0.1316 | 0.5415 | [0.4894104007776673, 0.779094307350649, 0.9424582493688185, 0.4388413214151246, 0.7654435517269941, 0.4694480829822636, 0.0, 0.14206474651680132, 0.012345500959083894, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.08905017355632691, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9692019484059504, 0.9569327731092437, 0.978030298477865, 0.8027792658350552, 0.7656591767747067, 0.5938290678668189, nan, 0.14709284411825616, 0.01296341167918585, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.17091980941613502, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, 
nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.8282 | 17.0 | 340 | 2.6868 | 0.0776 | 0.1316 | 0.5153 | [0.4743226115554707, 0.7954512666628195, 0.7070193198952633, 0.4438320057367676, 0.7353437159012003, 0.35684062059238364, 0.0, 0.14815248552530494, 0.0068802802972507135, nan, 0.0, 0.0, 0.0024155733434376926, 0.0, 0.0, 0.0, 0.0, nan, 0.13392355469165237, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.8951394747222842, 0.9341539034968827, 0.9946517958548413, 0.7302237187826841, 0.7370252535295287, 0.5778195741250756, nan, 0.15975088015274502, 0.0073752120184153135, nan, 0.0, nan, 0.002439094378604838, nan, 0.0, 0.0, nan, nan, 0.35609303076798837, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.0514 | 18.0 | 360 | 2.5944 | 0.0846 | 0.1299 | 0.5291 | [0.4706921853235463, 0.762537340607025, 0.9442168569685349, 0.4813733604418935, 0.7709313108564615, 0.30066122830344627, 0.0, 0.15233106243610275, 0.010987503600773652, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0013024142312579416, 0.0, nan, 0.16403877693519314, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9700204325555764, 0.9579323664949851, 0.9764679017613018, 0.6824889671181857, 0.7714423013190164, 0.3746669353574707, nan, 0.17009046505402142, 0.012130482190453113, nan, 0.0, nan, 0.0, nan, 0.0, 0.0013931835944136735, nan, nan, 
0.40789792457401275, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1405 | 19.0 | 380 | 2.6256 | 0.0898 | 0.1426 | 0.5498 | [0.5185031999297933, 0.8150003672959671, 0.9434324339806381, 0.4280491080929272, 0.8144273942862782, 0.4402868608799049, 0.0, 0.23071341887392194, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.11919999121834488, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9427297480575776, 0.9398380319869883, 0.9896881816706828, 0.7824609066095253, 0.8205408629946311, 0.6632744452654441, nan, 0.2698343747000914, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.438464023257692, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.7111 | 20.0 | 400 | 2.5802 | 0.0873 | 0.1327 | 0.5361 | [0.4831464880075981, 0.8199493255831284, 0.9468093671643164, 0.45873317459935703, 0.8320261007118376, 0.2898698861398996, 0.0, 0.17530824031501036, 0.004702239712946351, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.18089675000909852, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.972520207250824, 0.9320700054215234, 0.9893276285822452, 0.7298410754827683, 0.836713727049778, 
0.3873401849488368, nan, 0.18507705440530972, 0.004921856069784347, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4013970766373254, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.5598 | 21.0 | 420 | 2.5502 | 0.0870 | 0.1403 | 0.5487 | [0.5087946262173165, 0.7867801857585139, 0.876644631963781, 0.35241226393455155, 0.8468368479467259, 0.4543256761763616, 0.0, 0.2858426005132592, 0.0036147422467467318, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.14941308790216548, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9337426739020271, 0.8611073461642722, 0.991364753531918, 0.8818397489859953, 0.8471034665606151, 0.5491256353417971, nan, 0.303808018103113, 0.003710322267991277, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.38084470645239443, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.9659 | 22.0 | 440 | 2.5646 | 0.0885 | 0.1421 | 0.5519 | [0.5045213604927574, 0.8042200331977878, 0.9418047369854599, 0.45272025606142935, 0.8257880700359456, 0.49502223166994386, 0.0, 0.2643218071018291, 0.009075791313171171, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00026377394572851066, 0.0, nan, 0.12570543441761717, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 
nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9617011469119953, 0.9398803876389266, 0.9765880861241144, 0.7279342363715211, 0.8260754291774375, 0.625702514498108, nan, 0.28985185147769693, 0.009752847104434213, nan, 0.0, nan, 0.0, nan, 0.0, 0.0002718407013490095, nan, nan, 0.46858596462892677, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7829 | 23.0 | 460 | 2.5177 | 0.0846 | 0.1345 | 0.5462 | [0.4930088449307022, 0.7882521985751578, 0.9101626308882405, 0.44989356331658825, 0.8198836997398983, 0.40375010070087813, 0.0, 0.255825791600409, 0.006403338369666883, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10222337548824124, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9612653114893605, 0.8876050420168067, 0.9877291765568382, 0.8127343690211984, 0.8200603168290581, 0.44887037907794275, nan, 0.28685150295236317, 0.006739156772473952, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.3022288621497214, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1919 | 24.0 | 480 | 2.5331 | 0.0882 | 0.1434 | 0.5555 | [0.5115400805821155, 0.7956842451360199, 0.8616338898084154, 0.45560905508889904, 0.8224094829727, 0.5002673724089801, 0.0, 0.3255104125870554, 0.012325574865934807, nan, 0.0, 0.0, 0.0007629074402548111, 0.0, 0.0, 0.0, 0.0, nan, 0.12330337703247325, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9425568912967359, 0.9430147058823529, 0.9791480130520218, 0.7440562740746409, 0.8231590110691324, 0.6179440675309554, nan, 0.3862317339893018, 0.013296583474678943, nan, 0.0, nan, 0.0007652060795623021, nan, 0.0, 0.0, nan, nan, 0.42990390050876204, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.4307 | 25.0 | 500 | 2.4986 | 0.0878 | 0.1390 | 0.5565 | [0.5067241688681817, 0.8073821293627443, 0.9322059866647752, 0.44890528812665903, 0.8061694443524071, 0.41967209810856837, 0.0, 0.3408865002560274, 0.016331731047663174, nan, 0.0, 0.0, 0.000765023141950044, 0.0, 0.0, 0.0, 0.0, nan, 0.11318709117849012, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9637754280420943, 0.9159240309026837, 0.987182337706041, 0.7603568786510548, 0.8063564658315105, 0.4662569131904794, nan, 0.40015254297216346, 0.017082626605282286, nan, 0.0, nan, 0.0007652060795623021, nan, 0.0, 0.0, nan, nan, 0.3793911007025761, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.7454 | 26.0 | 520 | 2.4647 | 0.0931 | 0.1433 | 0.5644 | [0.5157605506728834, 0.803062049148433, 0.9502783532598003, 0.4501630383564226, 0.8001457822543238, 0.5544864444466602, 0.0, 0.34386147348502605, 0.03533719952693749, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10768117416918258, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 
nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.966183603428178, 0.9091047709406344, 0.9867737108724783, 0.7615685824341216, 0.8003579240405647, 0.6225789839009427, nan, 0.407542290266041, 0.03755754785558517, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.38451909876443513, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.9301 | 27.0 | 540 | 2.4489 | 0.0876 | 0.1466 | 0.5486 | [0.507267107485819, 0.7927630641160874, 0.7813868735967076, 0.4633558799098137, 0.7889578677261816, 0.5558062931081549, 0.0, 0.2951652047146916, 0.0721112427309929, nan, 0.0, 0.0, 0.0030247802978979207, 0.0, 0.0, 0.0, 0.0, nan, 0.12245389551577283, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9121651269610748, 0.9422523041474654, 0.9766361598692395, 0.7470536465906482, 0.7897030556107908, 0.6838404872259913, nan, 0.3599913121221152, 0.08112733220256846, nan, 0.0, nan, 0.003032129090265622, nan, 0.0, 0.0, nan, nan, 0.5131632076233545, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.327 | 28.0 | 560 | 2.4543 | 0.0895 | 0.1445 | 0.5619 | [0.5166661067571481, 0.7768677863017486, 0.9314251193832346, 0.45081840103589177, 0.7494204146519176, 0.5238211228089168, 0.0, 0.3455096827773653, 0.034265018714489594, nan, 0.0, 0.0, 0.002371911971460543, 0.0, 0.0, 0.0, 0.0, nan, 0.14420769872731065, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9543126284421765, 0.9521974112225535, 0.9728383340043627, 0.7415946021785158, 0.7499171472128322, 0.6500526186157946, nan, 0.4160786353970411, 0.03854191906954204, nan, 0.0, nan, 0.0023721388466431364, nan, 0.0, 0.0, nan, nan, 0.4465396107566825, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.5732 | 29.0 | 580 | 2.4811 | 0.0905 | 0.1438 | 0.5573 | [0.5287957944150393, 0.8163827989505047, 0.9545677455513522, 0.4680606710241056, 0.7693352098683666, 0.5231390901672386, 0.0, 0.3055402033194326, 0.05167843946258601, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.10709085097314959, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9566114756205484, 0.9330780699376525, 0.9825432213014764, 0.7407527869187011, 0.7699343805925631, 0.5956987080450505, nan, 0.36859837253822414, 0.05994063484371214, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4869983041266252, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.3826 | 30.0 | 600 | 2.4233 | 0.0904 | 0.1416 | 0.5575 | [0.5119552280143455, 0.8199371986427764, 0.9542587763055794, 0.46780402658887665, 0.7983808215095777, 0.5227657946477864, 0.0, 0.2978553040543449, 0.01962687827114638, 
nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.12628517230477396, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9640251100299766, 0.9334592708050963, 0.9828617098629298, 0.7486224841203031, 0.7990819911181812, 0.6042632274243748, nan, 0.3494850411916536, 0.02112612066876666, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.4032140838245982, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.6463 | 31.0 | 620 | 2.4264 | 0.0891 | 0.1424 | 0.5517 | [0.5087716983709148, 0.7961915655249836, 0.9422048694584277, 0.5216542842991579, 0.7608908022187267, 0.4894583034188466, 0.0, 0.2650125146137745, 0.029923016240036997, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.1422919548457617, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9685755273239262, 0.9552639604228789, 0.9773993305730991, 0.6716792938955639, 0.7614833963014516, 0.541904569982759, nan, 0.33319021906585106, 0.03331717954930943, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, 0.5939594605507551, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1258 | 32.0 | 640 | 2.3982 | 0.0905 | 0.1459 | 0.5696 | [0.5188148838822648, 0.803991121325235, 
0.9411620809356289, 0.46322936273191717, 0.7986109970939301, 0.4870755144136439, 0.0, 0.38434420091878657, 0.07936659093691757, nan, 0.0, 0.0, 0.01315502027867747, 0.0, 0.0, 0.0005077173030056864, 0.0, nan, 0.1251307754069244, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9634592626846575, 0.9542558959067498, 0.9710295593440338, 0.7470919109206398, 0.8060084841254059, 0.5419493517834352, nan, 0.4796415745263339, 0.08835110249575963, nan, 0.0, nan, 0.013247630252422355, nan, 0.0, 0.0005097013150293928, nan, nan, 0.41532746507308405, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.2958 | 33.0 | 660 | 2.3860 | 0.0955 | 0.1524 | 0.5735 | [0.5375765406531137, 0.8110853226428411, 0.954462712687812, 0.47733671569201686, 0.7880233174350821, 0.5977448263382951, 0.0, 0.3451584856219023, 0.15232419259491609, nan, 0.0, 0.0, 0.00037936626864822315, 0.0, 0.0, 0.0, 0.0, nan, 0.11023077406778595, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9564489607171931, 0.9443446733532123, 0.9799171929740221, 0.7536096017958726, 0.7884934049181415, 0.7026824298605047, nan, 0.4351263025502963, 0.19121032226799128, nan, 0.0, nan, 0.00038260303978115105, nan, 0.0, 0.0, nan, nan, 0.4954776710005653, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.6949 | 34.0 | 680 | 2.4148 | 0.0918 | 0.1475 | 0.5613 | [0.5290693028524354, 0.7945630929959845, 0.9478845254791959, 0.4378261787685903, 0.7753794713058447, 0.5583728198407097, 0.0, 0.294060489825014, 0.09295956791297402, nan, 0.0, 0.0, 0.015815294162562516, 0.0, 0.0, 0.0, 0.0, nan, 0.14532379891865785, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9488270117498275, 0.9470639062076444, 0.974707200846098, 0.7404658044437642, 0.7762146218598793, 0.7440720091354873, nan, 0.35005581456431806, 0.11103707293433487, nan, 0.0, nan, 0.015849330922934182, nan, 0.0, 0.0, nan, nan, 0.4373738189453283, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7268 | 35.0 | 700 | 2.3556 | 0.0926 | 0.1479 | 0.5638 | [0.5410470829128678, 0.7780415307765347, 0.9476531340280946, 0.4552572445110248, 0.7523007150423728, 0.5545934323690384, 0.0, 0.3147277402817886, 0.15298449658591226, nan, 0.0, 0.0, 0.013374103392106993, 0.0, 0.0, 0.0, 0.0, nan, 0.11979662163590622, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9529268195390191, 0.9579069531038221, 0.9737517351617381, 0.7442220861712712, 0.7531484059123749, 0.6352970152929849, nan, 0.3820443788924976, 0.19442088684274292, nan, 0.0, nan, 0.013429366696318402, nan, 0.0, 0.0, nan, nan, 0.4576031656302996, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.8777 | 36.0 | 720 | 2.3999 | 0.0909 | 0.1447 | 0.5650 | [0.5195887577564181, 0.7589975376935272, 0.9568716745005531, 0.47163186638014915, 0.7390994868399272, 0.5596583309503823, 0.0, 0.34237489727059855, 0.05195082273248527, nan, 0.0, 0.0, 0.019154068221530048, 0.0, 0.0, 0.0, 0.0, nan, 0.12754062385704582, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9590093091491458, 0.9530868799132556, 0.9823328986665545, 0.7264419275018494, 0.7398588188506662, 0.6792503526566803, nan, 0.4187506629557979, 0.05900169614732251, nan, 0.0, nan, 0.019292758280964543, nan, 0.0, 0.0, nan, nan, 0.39425018170071874, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.5147 | 37.0 | 740 | 2.3679 | 0.0929 | 0.1519 | 0.5702 | [0.5364579552707658, 0.8087594119494329, 0.9561376717960766, 0.40982070007215154, 0.8017200811359027, 0.6135778964476434, 0.0, 0.3455637603595059, 0.1178796275898851, 0.0, 0.0, 0.0, 0.02513673698854649, 0.0, 0.0, 0.0, 0.0, nan, 0.12247120239988897, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9499276808680068, 0.9381014502575223, 0.9782526395490683, 0.7751843065227928, 0.8186849605620733, 0.739649806318712, nan, 0.3864741863953894, 0.15397080203537677, nan, 0.0, nan, 0.025232670473566914, nan, 0.0, 0.0, nan, nan, 0.463215698942098, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 
0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.8831 | 38.0 | 760 | 2.3579 | 0.0936 | 0.1469 | 0.5595 | [0.5304507686446, 0.8067229372973851, 0.9543009495299125, 0.4539171592059717, 0.8000993130844989, 0.6138934235074627, 0.0, 0.2697607964234687, 0.10662386132885616, nan, 0.0, 0.0, 0.019987270464628042, 0.0, 0.0, 0.0, 0.0, nan, 0.1252110694183865, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9523358562540906, 0.9254201680672269, 0.9808005480406944, 0.7376023570827275, 0.8009876052230397, 0.7072949553301539, nan, 0.32520444294034156, 0.1423400775381633, nan, 0.0, nan, 0.020124919892488546, nan, 0.0, 0.0, nan, nan, 0.43115561657110557, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.5653 | 39.0 | 780 | 2.3282 | 0.0907 | 0.1479 | 0.5667 | [0.5275596317733419, 0.7942082826255595, 0.9517817247869964, 0.4541504138802404, 0.807428478543563, 0.5882710451677632, 0.0, 0.32820640572452037, 0.11085403522999678, 0.0, 0.0, 0.0, 0.028076457871027343, 0.0, 0.0, 0.0, 0.0, nan, 0.12453114251969526, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9578436340696244, 0.945335795608566, 0.9753802332778482, 0.7358294431264508, 0.8231092993968318, 0.6826873558585791, nan, 0.38597412830783373, 0.1407650835958323, nan, 0.0, nan, 
0.028197844031870832, nan, 0.0, 0.0, nan, nan, 0.3874263102640717, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1075 | 40.0 | 800 | 2.3290 | 0.0938 | 0.1462 | 0.5654 | [0.525696705455315, 0.8097656901943995, 0.9540430140437204, 0.468111166556006, 0.8067950394887163, 0.6026423969326851, 0.0, 0.3172524415747904, 0.07324316230784587, nan, 0.0, 0.0, 0.01990007056086351, 0.0, 0.0, 0.0, 0.0, nan, 0.11110484334984262, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9615755672139479, 0.9453188533477908, 0.9740341684143476, 0.7238909721690773, 0.807450122622125, 0.6827545285595934, nan, 0.3946165463664971, 0.08889629270656652, nan, 0.0, nan, 0.019962313600581556, nan, 0.0, 0.0, nan, nan, 0.3976419284502948, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.858 | 41.0 | 820 | 2.2884 | 0.0861 | 0.1402 | 0.5554 | [0.5197047648538989, 0.7522506253594982, 0.9529308898004213, 0.4432910713601716, 0.7616208786659381, 0.5651882235681975, 0.0, 0.2511281328404124, 0.07432842851653051, 0.0, 0.0, 0.0, 0.03464474456840869, 0.0, 0.0, 0.0, 0.0, nan, 0.12361143100118704, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, 
nan] | [0.9645451577207137, 0.9527649769585254, 0.9758249154202546, 0.7380551516542946, 0.7621130774839265, 0.6602404782696312, nan, 0.28615950337665486, 0.0908044584443906, nan, 0.0, nan, 0.034989047987986265, nan, 0.0, 0.0, nan, nan, 0.2817168699022854, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.8407 | 42.0 | 840 | 2.3426 | 0.0886 | 0.1434 | 0.5560 | [0.5319928364107948, 0.7797183410955866, 0.9486764705882353, 0.4390484927134142, 0.7554676247082002, 0.5886622292884924, 0.0, 0.2479837292662356, 0.0663740228502706, 0.0, 0.0, 0.0, 0.03285631534465895, 0.0, 0.0, 0.0, 0.0, nan, 0.12666638151047235, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9545416267150862, 0.9530360531309298, 0.969136655629736, 0.7508545700364787, 0.7561311062504142, 0.7336826313786078, nan, 0.2851492850179566, 0.08358068815119941, nan, 0.0, nan, 0.03309516294106957, nan, 0.0, 0.0, nan, nan, 0.3587175967051603, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.094 | 43.0 | 860 | 2.3197 | 0.0938 | 0.1469 | 0.5661 | [0.5321809020623814, 0.7960168163032635, 0.9480932639440182, 0.45480238271229184, 0.8054216967228046, 0.6017263259674073, 0.0, 0.3017606715947139, 0.11857016219842353, nan, 0.0, 0.0, 0.024840957285671008, 0.0, 0.0, 0.0, 0.0, nan, 0.10708057202650854, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 
nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.963567113484157, 0.9463353889943074, 0.9688542223771265, 0.7337823014719013, 0.8059422018956718, 0.6688497794496316, nan, 0.37692762290569104, 0.1498970196268476, nan, 0.0, nan, 0.02509875940964351, nan, 0.0, 0.0, nan, nan, 0.38427683113946537, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.2333 | 44.0 | 880 | 2.3351 | 0.0924 | 0.1440 | 0.5649 | [0.5234600096067719, 0.7993256716093246, 0.9540437935614297, 0.4402231330031544, 0.8011626173796393, 0.5988770094608107, 0.0, 0.31263255344218144, 0.06051157899885066, nan, 0.0, 0.0, 0.01943918475313282, 0.0, 0.0, 0.0, 0.0, nan, 0.1115382392045783, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9644136683898171, 0.9338489428029276, 0.9755484913857858, 0.7493877707201347, 0.8016007158480811, 0.6973422001298673, nan, 0.37526581370563244, 0.0749485098134238, nan, 0.0, nan, 0.01954145025682229, nan, 0.0, 0.0, nan, nan, 0.3116369215860454, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7624 | 45.0 | 900 | 2.3306 | 0.0882 | 0.1482 | 0.5742 | [0.5258849135317549, 0.7047259963908543, 0.9470757995170505, 0.4796076709120187, 0.7687998311222253, 0.5991964723174914, 0.0, 0.3769427430313193, 0.12063174979711593, 0.0, 0.0, 0.0, 0.02665966248604636, 0.0, 0.0, 0.0, 0.0, nan, 0.12325740513726943, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 
nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9648908712423969, 0.9626761995120628, 0.9663063138855004, 0.7312759878574526, 0.7845330416915225, 0.6845793869371487, nan, 0.460987892532971, 0.14632299491155804, nan, 0.0, nan, 0.027183945976450782, nan, 0.0, 0.0, nan, nan, 0.3491480255188565, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.7946 | 46.0 | 920 | 2.3309 | 0.0924 | 0.1501 | 0.5684 | [0.5361640179143524, 0.7846633151596686, 0.9536386080413826, 0.4568883842931527, 0.7759403160506726, 0.6158210406873075, 0.0, 0.32342639353641117, 0.11150484571020869, nan, 0.0, 0.0, 0.0293502430935062, 0.0, 0.0, 0.0, 0.0, nan, 0.123842605156038, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9543968407102787, 0.9387537272973706, 0.9748934866084574, 0.7442603505012627, 0.7876151653741632, 0.72303575826784, nan, 0.3969147931325356, 0.14200690574267022, nan, 0.0, nan, 0.02979521172295714, nan, 0.0, 0.0, nan, nan, 0.460671888879916, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.7709 | 47.0 | 940 | 2.3227 | 0.0913 | 0.1509 | 0.5704 | [0.5424758709486558, 0.7733391833636282, 0.9512393001599097, 0.4543827573277878, 0.7687249054047356, 0.6394879643801308, 0.0, 0.32752722872125856, 0.14992535250207353, 0.0, 0.0, 0.0, 0.02317038711500424, 0.0, 0.0, 0.0, 0.0, nan, 0.11885573064179251, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9536167691741732, 0.9406597316345893, 0.9723035135898468, 0.7496747531950716, 0.7709286140385763, 0.7203600456774367, nan, 0.4101183470807215, 0.20530954688635814, nan, 0.0, nan, 0.02353008694654079, nan, 0.0, 0.0, nan, nan, 0.4388678026326415, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.0854 | 48.0 | 960 | 2.3119 | 0.0923 | 0.1515 | 0.5675 | [0.5484915919545876, 0.7502571445858459, 0.9503989939945234, 0.4518522223975188, 0.7665169241908147, 0.6387045798547317, 0.0, 0.3014816981040113, 0.14846499417229633, nan, 0.0, 0.0, 0.024767918571226614, 0.0, 0.0, 0.0, 0.0, nan, 0.1255448070585734, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9485950586604931, 0.9453781512605042, 0.971912914410706, 0.7488712022652483, 0.7718897063697223, 0.7511475336423278, nan, 0.3749223394636751, 0.19869154349406348, nan, 0.0, nan, 0.025137019713621624, nan, 0.0, 0.0, nan, nan, 0.4768634418153921, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 1.2518 | 49.0 | 980 | 2.3313 | 0.0959 | 0.1480 | 0.5662 | [0.5342926601876615, 0.8062937473403085, 0.9553006514505319, 0.47239519660475027, 0.8008359767380386, 0.625160514049403, 0.0, 0.28114668652271035, 
0.08647048160009668, nan, 0.0, 0.0, 0.027304971193724548, 0.0, 0.0, 0.0, 0.0, nan, 0.11171062547098719, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9601395264315716, 0.9309179316888045, 0.9798991653196003, 0.736824315706232, 0.8032246304765692, 0.6922034885022726, nan, 0.38143824787727865, 0.1083565543978677, nan, 0.0, nan, 0.02783437114407874, nan, 0.0, 0.0, nan, nan, 0.44892190906888474, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | | 2.1639 | 50.0 | 1000 | 2.3127 | 0.0939 | 0.1473 | 0.5650 | [0.5336831298064109, 0.7946016311618073, 0.9542612124083791, 0.4698340415687516, 0.7895064764715527, 0.6196465123602583, 0.0, 0.28656549336868975, 0.10462468913822695, nan, 0.0, 0.0, 0.024119941721107298, 0.0, 0.0, 0.0, 0.0, nan, 0.11790298802632052, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan] | [0.9584331199463405, 0.9359158986175116, 0.9802837552806004, 0.7382528506925844, 0.797905481540399, 0.704932715344484, nan, 0.3646332654803336, 0.13570692997334627, nan, 0.0, nan, 0.024543985001960842, nan, 0.0, 0.0, nan, nan, 0.39719777113785026, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan] | ### Framework 
versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
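The metric columns reported in the table above (Mean Iou, Mean Accuracy, Overall Accuracy, plus the long per-category IoU and per-category accuracy arrays, where nan typically marks categories absent from the evaluation masks) match the output format of a mean-IoU segmentation metric. A minimal sketch of how such values can be computed with the Hugging Face evaluate library is shown below; the label count, ignore index and random label maps are assumptions for illustration, not values taken from this card.

```python
import numpy as np
import evaluate

# Minimal sketch; num_labels and ignore_index are assumed values, not taken from the card
metric = evaluate.load("mean_iou")

predictions = [np.random.randint(0, 150, size=(512, 512))]  # predicted label map per image
references = [np.random.randint(0, 150, size=(512, 512))]   # ground-truth label map per image

results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=150,
    ignore_index=255,
)

# results holds mean_iou, mean_accuracy, overall_accuracy,
# per_category_iou and per_category_accuracy (nan where a category is absent)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
```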
dccuchile/albert-tiny-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
DEPRECATED — PLEASE USE [la_core_web_md](https://huggingface.co/latincy/la_core_web_md)
dccuchile/albert-xxlarge-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: creativeml-openrail-m language: - en ---
dccuchile/bert-base-spanish-wwm-uncased-finetuned-ner
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 290.62 +/- 14.41 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
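The usage section in the card above is left as a TODO with a placeholder import block. A minimal sketch of what loading and running such a checkpoint typically looks like with stable-baselines3 and huggingface_sb3 is shown below; the repo id and filename are hypothetical placeholders, and the rollout assumes gymnasium with Box2D installed.

```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; repo_id and filename are hypothetical placeholders
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Run a single evaluation episode (gymnasium step API)
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```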
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pawsx
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: resnet-50-shortSleeveCleanedData results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9781420765027322 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50-shortSleeveCleanedData This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1103 - Accuracy: 0.9781 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 7 - total_train_batch_size: 56 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.973 | 1.0 | 147 | 0.9371 | 0.7268 | | 0.6565 | 2.0 | 294 | 0.5520 | 0.8710 | | 0.4609 | 3.0 | 441 | 0.2983 | 0.9279 | | 0.3937 | 4.0 | 588 | 0.2051 | 0.9486 | | 0.3723 | 5.0 | 735 | 0.1521 | 0.9727 | | 0.3926 | 6.0 | 882 | 0.1490 | 0.9672 | | 0.3326 | 7.0 | 1029 | 0.1367 | 0.9650 | | 0.3166 | 8.0 | 1176 | 0.1109 | 0.9738 | | 0.3492 | 9.0 | 1323 | 0.1108 | 0.9760 | | 0.3228 | 10.0 | 1470 | 0.1103 | 0.9781 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
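The hyperparameters listed in the card map directly onto a Transformers Trainer configuration. A minimal sketch of the corresponding TrainingArguments is shown below; the output directory and the epoch-level evaluation strategy are assumptions for illustration, while the numeric values mirror the card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported in the card; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="resnet-50-shortSleeveCleanedData",
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # with gradient_accumulation_steps=7 -> effective batch size 56
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=7,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    num_train_epochs=10,
    evaluation_strategy="epoch",     # assumption: the per-epoch results table suggests epoch-level eval
)
# The Adam settings in the card (betas=(0.9, 0.999), epsilon=1e-08) are the Trainer defaults
```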
dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- license: afl-3.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - generated_from_trainer model-index: - name: codehelper-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codehelper-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
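Since codehelper-ds is a GPT-2 fine-tune, the resulting checkpoint can be used through the standard text-generation pipeline. A minimal sketch is shown below; the model path is a placeholder for wherever the trained checkpoint is saved, and the prompt is only an example.

```python
from transformers import pipeline

# "path/to/codehelper-ds" is a placeholder for the local or Hub location of the fine-tuned checkpoint
generator = pipeline("text-generation", model="path/to/codehelper-ds")

completion = generator(
    "def fibonacci(n):",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(completion[0]["generated_text"])
```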
Chandanbhat/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: openrail --- # GPTQ-for-SantaCoder Visit [GPTQ-for-SantaCoder](https://github.com/mayank31398/GPTQ-for-SantaCoder) for instructions on how to use the model weights here. If you want 4-bit weights, visit [santacoder-GPTQ-4bit-128g](https://huggingface.co/mayank31398/santacoder-GPTQ-4bit-128g). ## Results | [SantaCoder](https://arxiv.org/abs/2301.03988) | Bits | group-size | memory(MiB) | wikitext2 | ptb | c4 | stack | checkpoint size(MB) | | -------------------------------------------------- | ---- | ---------- | ----------- | --------- | ---------- | ---------- | ---------- | ------------------- | | FP32 | 32 | - | 4344.722 | 24.927 | 38.574 | 27.779 | 2.619 | 4394 | | BF16 | 16 | - | 2173.680 | 24.960 | 38.597 | 27.794 | 2.621 | 2195 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 8 | -1 | 1396.548 | 24.936 | 38.592 | 27.785 | 2.619 | 1411 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | -1 | 911.384 | 26.581 | 40.717 | 29.232 | 2.658 | 913 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 3 | -1 | - | 11761.473 | 7273.338 | 9124.941 | 2485.844 | 789 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 2 | -1 | - | 67976.797 | 68994.484 | 73294.438 | 45370.488 | 649 | # License The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license). # Acknowledgements Thanks to everyone in BigCode who worked so hard to create these code models.
CharlieChen/feedback-bigbird
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - is tags: - Icelandic - Fallbeyging - Declension - Inflection - GED - IceBERT --- Add an Icelandic sentence into the text box, and the model will return a classification of either correct or incorrect declension. Bættu íslenskri setningu inn í textareitinn og líkanið mun skila flokkun með annað hvort rétta eða ranga beygingu.
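A minimal usage sketch with the transformers pipeline, assuming the checkpoint is a standard sequence-classification model; the repo id below is a placeholder, since the card does not name the checkpoint path:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual checkpoint path for this model.
classifier = pipeline("text-classification", model="<user>/icelandic-declension-ged")

# Returns a label (correct / incorrect declension) together with a confidence score.
print(classifier("Ég sá hundinn í gær."))
```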
Charlotte77/model_test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: openrail --- # GPTQ-for-SantaCoder Visit [GPTQ-for-SantaCoder](https://github.com/mayank31398/GPTQ-for-SantaCoder) for instructions on how to use the model weights here. If you want 8-bit weights, visit [santacoder-GPTQ-8bit-128g](https://huggingface.co/mayank31398/santacoder-GPTQ-8bit-128g). ## Results | [SantaCoder](https://arxiv.org/abs/2301.03988) | Bits | group-size | memory(MiB) | wikitext2 | ptb | c4 | stack | checkpoint size(MB) | | -------------------------------------------------- | ---- | ---------- | ----------- | --------- | ---------- | ---------- | ---------- | ------------------- | | FP32 | 32 | - | 4344.722 | 24.927 | 38.574 | 27.779 | 2.619 | 4394 | | BF16 | 16 | - | 2173.680 | 24.960 | 38.597 | 27.794 | 2.621 | 2195 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 8 | -1 | 1396.548 | 24.936 | 38.592 | 27.785 | 2.619 | 1411 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 4 | -1 | 911.384 | 26.581 | 40.717 | 29.232 | 2.658 | 913 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 3 | -1 | - | 11761.473 | 7273.338 | 9124.941 | 2485.844 | 789 | | [GPTQ](https://arxiv.org/abs/2210.17323) | 2 | -1 | - | 67976.797 | 68994.484 | 73294.438 | 45370.488 | 649 | # License The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license). # Acknowledgements Thanks to everyone in BigCode who worked so hard to create these code models.
ChaseBread/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - Alvinyz/lora-panorama These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png)
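As a rough usage sketch (not part of the original card), LoRA weights like these are typically loaded on top of the base pipeline with diffusers; the prompt below is purely illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA attention weights from this repo on top of the base UNet.
pipe.unet.load_attn_procs("Alvinyz/lora-panorama")

image = pipe("a wide panoramic landscape at sunset", num_inference_steps=30).images[0]
image.save("panorama.png")
```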
ChauhanVipul/BERT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - pytorch - text-generation - causal-lm - rwkv license: apache-2.0 datasets: - EleutherAI/pile - togethercomputer/RedPajama-Data-1T --- # RWKV-4 PilePlus ## Model Description RWKV-4-Pile models finetuned on [RedPajama + some of Pile v2 = 1.7T tokens]. Updated with 2020+2021+2022 data, and better at all European languages. Although some of these are intermediate checkpoints (XXXG tokens means finetuned for XXXG tokens), you can already use them because I am finetuning from the Pile models (instead of retraining). Note: not instruction-tuned yet, and recommended as a replacement for the vanilla Pile models. 7B and 14B coming soon. See https://github.com/BlinkDL/RWKV-LM for details. Use https://github.com/BlinkDL/ChatRWKV to run it.
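As a rough sketch only, assuming the `rwkv` pip package that ChatRWKV is built on; the checkpoint name is illustrative and argument names may differ between versions, so treat the examples in the ChatRWKV repo as the reference:

```python
# pip install rwkv
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Path to a downloaded checkpoint (ChatRWKV examples pass it without the .pth extension)
# plus the Pile tokenizer file shipped with ChatRWKV.
model = RWKV(model="RWKV-4-PilePlus-1B5-ctx4096", strategy="cuda fp16")
pipe = PIPELINE(model, "20B_tokenizer.json")

args = PIPELINE_ARGS(temperature=1.0, top_p=0.7)
print(pipe.generate("The quick brown fox", token_count=100, args=args))
```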
Cheatham/xlm-roberta-base-finetuned
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
Access to model digitaltherapy/Reset is restricted and you are not in the authorized list. Visit https://huggingface.co/digitaltherapy/Reset to ask for access.
Cheatham/xlm-roberta-large-finetuned-d1
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 4.999999873689376e-05 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | mixed_float16 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
Cheatham/xlm-roberta-large-finetuned-d12
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-ko-1159h results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-ko-1159h This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1752 - Wer: 10.4449 ## Model description The model was trained to transcribe the audio sources into Korean text. ## Intended uses & limitations More information needed ## Training and evaluation data I downloaded all data from AI-HUB (https://aihub.or.kr/). Two datasets, in particular, caught my attention: "Instruction Audio Set" and "Noisy Conversation Audio Set". I intentionally gathered 796 hours of audio from the first dataset and 363 hours of audio from the second dataset (This includes statistics for the training data only, and excludes information about the validation data.). ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 18483 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.0953 | 0.33 | 2053 | 0.2155 | 13.0432 | | 0.0803 | 0.67 | 4106 | 0.1951 | 12.0399 | | 0.0746 | 1.0 | 6159 | 0.1836 | 11.3995 | | 0.0509 | 1.33 | 8212 | 0.1819 | 11.0396 | | 0.0525 | 1.67 | 10265 | 0.1782 | 10.9039 | | 0.0493 | 2.0 | 12318 | 0.1743 | 10.7255 | | 0.034 | 2.33 | 14371 | 0.1784 | 10.7377 | | 0.0326 | 2.67 | 16424 | 0.1765 | 10.5471 | | 0.0293 | 3.0 | 18477 | 0.1752 | 10.4449 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.11.0 - Tokenizers 0.13.2
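A minimal transcription sketch with the transformers ASR pipeline; the repo id is a placeholder for this checkpoint's actual path, and the audio file name is illustrative:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-small-ko-1159h",
    chunk_length_s=30,
)

print(asr("korean_sample.wav")["text"])
```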
Cheatham/xlm-roberta-large-finetuned-d12_2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Cheatham/xlm-roberta-large-finetuned-d1r01
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: carvax_VITbeans_model results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9924812030075187 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # carvax_VITbeans_model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0292 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1387 | 3.85 | 500 | 0.0292 | 0.9925 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Chester/traffic-rec
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 --- Pythia 410M model fine-tuned for dialogue. Example prompt: ``` ###I### Jhon talks to Mike. Jhon tells Mary about how he likes his new job. happy ###P### Jhon: ... Mary: ... ```
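A generation sketch with transformers that follows the prompt format above; the repo id is a placeholder, since the card does not name the checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "<user>/pythia-410m-dialog"  # placeholder -- replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# ###I### holds the scene/emotion description, ###P### starts the dialogue to continue.
prompt = (
    "###I###\n"
    "Jhon talks to Mike. Jhon tells Mary about how he likes his new job. happy\n"
    "###P###\n"
    "Jhon:"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```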
Ching/negation_detector
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1411783471228461058/NACe_2Kf_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">TStorm</div> <div style="text-align: center; font-size: 14px;">@tstorm106</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from TStorm. | Data | TStorm | | --- | --- | | Tweets downloaded | 3220 | | Retweets | 171 | | Short tweets | 900 | | Tweets kept | 2149 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/cxkqs7up/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tstorm106's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/72bi3ylz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/72bi3ylz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tstorm106') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Chinmay/mlindia
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-diabetes_sentences results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-diabetes_sentences This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5278 - Accuracy: 0.8462 - F1: 0.8441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1171 | 1.0 | 2 | 1.1097 | 0.4103 | 0.3239 | | 1.0594 | 2.0 | 4 | 1.0910 | 0.5641 | 0.4791 | | 1.0633 | 3.0 | 6 | 1.0726 | 0.5897 | 0.4859 | | 1.0348 | 4.0 | 8 | 1.0520 | 0.6410 | 0.5779 | | 0.9992 | 5.0 | 10 | 1.0326 | 0.5385 | 0.4980 | | 0.9915 | 6.0 | 12 | 1.0260 | 0.4872 | 0.4518 | | 0.9447 | 7.0 | 14 | 0.9811 | 0.5641 | 0.5369 | | 0.8217 | 8.0 | 16 | 0.9087 | 0.8205 | 0.8205 | | 0.8067 | 9.0 | 18 | 0.8497 | 0.8462 | 0.8437 | | 0.7156 | 10.0 | 20 | 0.8001 | 0.8462 | 0.8437 | | 0.6859 | 11.0 | 22 | 0.7691 | 0.8462 | 0.8437 | | 0.5988 | 12.0 | 24 | 0.7399 | 0.8462 | 0.8437 | | 0.5365 | 13.0 | 26 | 0.6851 | 0.8462 | 0.8437 | | 0.4467 | 14.0 | 28 | 0.6255 | 0.8462 | 0.8437 | | 0.4347 | 15.0 | 30 | 0.5791 | 0.8462 | 0.8437 | | 0.363 | 16.0 | 32 | 0.5482 | 0.8462 | 0.8437 | | 0.2946 | 17.0 | 34 | 0.5359 | 0.7949 | 0.7967 | | 0.2343 | 18.0 | 36 | 0.4981 | 0.7949 | 0.7967 | | 0.1999 | 19.0 | 38 | 0.4467 | 0.8718 | 0.8706 | | 0.1615 | 20.0 | 40 | 0.4282 | 0.8718 | 0.8706 | | 0.1314 | 21.0 | 42 | 0.4236 | 0.8718 | 0.8706 | | 0.1386 | 22.0 | 44 | 0.4183 | 0.8718 | 0.8706 | | 0.0973 | 23.0 | 46 | 0.4291 | 0.8462 | 0.8467 | | 0.0853 | 24.0 | 48 | 0.4173 | 0.8462 | 0.8467 | | 0.0732 | 25.0 | 50 | 0.3749 | 0.8462 | 0.8467 | | 0.0641 | 26.0 | 52 | 0.3341 | 0.8974 | 0.8971 | | 0.0541 | 27.0 | 54 | 0.3223 | 0.8974 | 0.8971 | | 0.0481 | 28.0 | 56 | 0.3277 | 0.8974 | 0.8971 | | 0.0383 | 29.0 | 58 | 0.3415 | 0.8974 | 0.8971 | | 0.036 | 30.0 | 60 | 0.3609 | 0.8974 | 0.8971 | | 0.0299 | 31.0 | 62 | 0.3823 | 0.8974 | 0.8971 | | 0.0321 | 32.0 | 64 | 0.4026 | 0.8974 | 0.8971 | | 0.03 | 33.0 | 66 | 0.4176 | 0.8718 | 0.8706 | | 0.0277 | 34.0 | 68 | 0.4201 | 0.8718 | 0.8706 | | 0.0236 | 35.0 | 70 | 0.4129 | 0.8718 | 0.8706 | | 0.022 | 36.0 | 72 | 0.4003 | 0.8974 | 0.8971 | | 0.022 | 37.0 | 74 | 0.3865 | 0.8974 | 0.8971 | | 0.0211 | 38.0 | 76 | 0.3731 | 0.8974 | 0.8971 | | 0.017 | 39.0 | 78 | 0.3634 | 0.8718 | 0.8705 | | 0.0188 | 40.0 | 80 | 0.3618 | 0.8718 | 0.8705 | | 0.0169 | 41.0 | 82 | 0.3683 | 0.8718 | 0.8705 | | 0.0161 | 42.0 | 84 | 0.3810 | 0.8718 | 0.8705 | | 0.0162 | 43.0 | 86 | 0.3944 | 0.8718 | 0.8705 | | 0.0141 | 44.0 | 88 | 0.4091 | 0.8974 | 0.8971 | | 0.0132 | 45.0 | 90 | 0.4233 | 0.8974 | 0.8971 | | 0.0143 | 46.0 | 92 | 0.4335 | 0.8718 | 0.8706 | | 0.0142 | 47.0 
| 94 | 0.4413 | 0.8718 | 0.8706 | | 0.0125 | 48.0 | 96 | 0.4436 | 0.8718 | 0.8706 | | 0.0115 | 49.0 | 98 | 0.4437 | 0.8718 | 0.8706 | | 0.0106 | 50.0 | 100 | 0.4410 | 0.8462 | 0.8441 | | 0.0109 | 51.0 | 102 | 0.4376 | 0.8462 | 0.8441 | | 0.0119 | 52.0 | 104 | 0.4341 | 0.8462 | 0.8441 | | 0.012 | 53.0 | 106 | 0.4322 | 0.8718 | 0.8705 | | 0.0122 | 54.0 | 108 | 0.4314 | 0.8718 | 0.8705 | | 0.0107 | 55.0 | 110 | 0.4315 | 0.8718 | 0.8705 | | 0.0102 | 56.0 | 112 | 0.4324 | 0.8718 | 0.8705 | | 0.0102 | 57.0 | 114 | 0.4351 | 0.8462 | 0.8441 | | 0.0098 | 58.0 | 116 | 0.4379 | 0.8462 | 0.8441 | | 0.009 | 59.0 | 118 | 0.4399 | 0.8462 | 0.8441 | | 0.0099 | 60.0 | 120 | 0.4415 | 0.8462 | 0.8441 | | 0.0094 | 61.0 | 122 | 0.4429 | 0.8462 | 0.8441 | | 0.008 | 62.0 | 124 | 0.4479 | 0.8462 | 0.8441 | | 0.0084 | 63.0 | 126 | 0.4531 | 0.8462 | 0.8441 | | 0.0079 | 64.0 | 128 | 0.4571 | 0.8462 | 0.8441 | | 0.0079 | 65.0 | 130 | 0.4607 | 0.8462 | 0.8441 | | 0.0076 | 66.0 | 132 | 0.4637 | 0.8462 | 0.8441 | | 0.0072 | 67.0 | 134 | 0.4659 | 0.8462 | 0.8441 | | 0.0076 | 68.0 | 136 | 0.4693 | 0.8462 | 0.8441 | | 0.0078 | 69.0 | 138 | 0.4726 | 0.8462 | 0.8441 | | 0.0066 | 70.0 | 140 | 0.4729 | 0.8462 | 0.8441 | | 0.0082 | 71.0 | 142 | 0.4711 | 0.8462 | 0.8441 | | 0.0075 | 72.0 | 144 | 0.4673 | 0.8462 | 0.8441 | | 0.0065 | 73.0 | 146 | 0.4645 | 0.8462 | 0.8441 | | 0.0064 | 74.0 | 148 | 0.4623 | 0.8462 | 0.8441 | | 0.0075 | 75.0 | 150 | 0.4613 | 0.8718 | 0.8705 | | 0.0064 | 76.0 | 152 | 0.4616 | 0.8718 | 0.8705 | | 0.0063 | 77.0 | 154 | 0.4627 | 0.8462 | 0.8441 | | 0.0072 | 78.0 | 156 | 0.4635 | 0.8462 | 0.8441 | | 0.0058 | 79.0 | 158 | 0.4636 | 0.8462 | 0.8441 | | 0.006 | 80.0 | 160 | 0.4641 | 0.8462 | 0.8441 | | 0.0061 | 81.0 | 162 | 0.4651 | 0.8462 | 0.8441 | | 0.0054 | 82.0 | 164 | 0.4675 | 0.8462 | 0.8441 | | 0.0066 | 83.0 | 166 | 0.4692 | 0.8462 | 0.8441 | | 0.0056 | 84.0 | 168 | 0.4699 | 0.8462 | 0.8441 | | 0.0058 | 85.0 | 170 | 0.4706 | 0.8462 | 0.8441 | | 0.0056 | 86.0 | 172 | 0.4718 | 0.8462 | 0.8441 | | 0.005 | 87.0 | 174 | 0.4745 | 0.8462 | 0.8441 | | 0.0062 | 88.0 | 176 | 0.4766 | 0.8462 | 0.8441 | | 0.0052 | 89.0 | 178 | 0.4786 | 0.8462 | 0.8441 | | 0.0055 | 90.0 | 180 | 0.4801 | 0.8462 | 0.8441 | | 0.0052 | 91.0 | 182 | 0.4811 | 0.8462 | 0.8441 | | 0.0052 | 92.0 | 184 | 0.4818 | 0.8462 | 0.8441 | | 0.0057 | 93.0 | 186 | 0.4832 | 0.8462 | 0.8441 | | 0.005 | 94.0 | 188 | 0.4844 | 0.8462 | 0.8441 | | 0.0055 | 95.0 | 190 | 0.4850 | 0.8462 | 0.8441 | | 0.005 | 96.0 | 192 | 0.4852 | 0.8462 | 0.8441 | | 0.0055 | 97.0 | 194 | 0.4860 | 0.8462 | 0.8441 | | 0.0047 | 98.0 | 196 | 0.4872 | 0.8462 | 0.8441 | | 0.0043 | 99.0 | 198 | 0.4889 | 0.8462 | 0.8441 | | 0.0049 | 100.0 | 200 | 0.4902 | 0.8462 | 0.8441 | | 0.0048 | 101.0 | 202 | 0.4909 | 0.8462 | 0.8441 | | 0.0044 | 102.0 | 204 | 0.4908 | 0.8462 | 0.8441 | | 0.004 | 103.0 | 206 | 0.4915 | 0.8462 | 0.8441 | | 0.0044 | 104.0 | 208 | 0.4918 | 0.8462 | 0.8441 | | 0.0044 | 105.0 | 210 | 0.4935 | 0.8462 | 0.8441 | | 0.0043 | 106.0 | 212 | 0.4956 | 0.8462 | 0.8441 | | 0.004 | 107.0 | 214 | 0.4978 | 0.8462 | 0.8441 | | 0.0047 | 108.0 | 216 | 0.4987 | 0.8462 | 0.8441 | | 0.0037 | 109.0 | 218 | 0.4994 | 0.8462 | 0.8441 | | 0.0046 | 110.0 | 220 | 0.5012 | 0.8462 | 0.8441 | | 0.004 | 111.0 | 222 | 0.5021 | 0.8462 | 0.8441 | | 0.004 | 112.0 | 224 | 0.5030 | 0.8462 | 0.8441 | | 0.004 | 113.0 | 226 | 0.5044 | 0.8462 | 0.8441 | | 0.0039 | 114.0 | 228 | 0.5053 | 0.8462 | 0.8441 | | 0.0038 | 115.0 | 230 | 0.5058 | 0.8462 | 0.8441 | | 0.0041 | 116.0 | 232 | 0.5054 | 0.8462 | 0.8441 
| | 0.0038 | 117.0 | 234 | 0.5047 | 0.8462 | 0.8441 | | 0.0035 | 118.0 | 236 | 0.5043 | 0.8462 | 0.8441 | | 0.004 | 119.0 | 238 | 0.5035 | 0.8462 | 0.8441 | | 0.0039 | 120.0 | 240 | 0.5029 | 0.8462 | 0.8441 | | 0.0036 | 121.0 | 242 | 0.5019 | 0.8462 | 0.8441 | | 0.0042 | 122.0 | 244 | 0.5012 | 0.8462 | 0.8441 | | 0.0033 | 123.0 | 246 | 0.5005 | 0.8462 | 0.8441 | | 0.0034 | 124.0 | 248 | 0.5003 | 0.8462 | 0.8441 | | 0.0038 | 125.0 | 250 | 0.5002 | 0.8462 | 0.8441 | | 0.0035 | 126.0 | 252 | 0.4998 | 0.8462 | 0.8441 | | 0.0033 | 127.0 | 254 | 0.5002 | 0.8462 | 0.8441 | | 0.0041 | 128.0 | 256 | 0.5010 | 0.8462 | 0.8441 | | 0.0036 | 129.0 | 258 | 0.5025 | 0.8462 | 0.8441 | | 0.0036 | 130.0 | 260 | 0.5037 | 0.8462 | 0.8441 | | 0.0032 | 131.0 | 262 | 0.5049 | 0.8462 | 0.8441 | | 0.0033 | 132.0 | 264 | 0.5061 | 0.8462 | 0.8441 | | 0.0038 | 133.0 | 266 | 0.5075 | 0.8462 | 0.8441 | | 0.0041 | 134.0 | 268 | 0.5087 | 0.8462 | 0.8441 | | 0.0034 | 135.0 | 270 | 0.5094 | 0.8462 | 0.8441 | | 0.0032 | 136.0 | 272 | 0.5107 | 0.8462 | 0.8441 | | 0.0035 | 137.0 | 274 | 0.5123 | 0.8462 | 0.8441 | | 0.0032 | 138.0 | 276 | 0.5138 | 0.8462 | 0.8441 | | 0.0031 | 139.0 | 278 | 0.5143 | 0.8462 | 0.8441 | | 0.0034 | 140.0 | 280 | 0.5145 | 0.8462 | 0.8441 | | 0.0036 | 141.0 | 282 | 0.5151 | 0.8462 | 0.8441 | | 0.003 | 142.0 | 284 | 0.5160 | 0.8462 | 0.8441 | | 0.0034 | 143.0 | 286 | 0.5162 | 0.8462 | 0.8441 | | 0.0031 | 144.0 | 288 | 0.5160 | 0.8462 | 0.8441 | | 0.0031 | 145.0 | 290 | 0.5157 | 0.8462 | 0.8441 | | 0.0032 | 146.0 | 292 | 0.5155 | 0.8462 | 0.8441 | | 0.0029 | 147.0 | 294 | 0.5159 | 0.8462 | 0.8441 | | 0.0032 | 148.0 | 296 | 0.5162 | 0.8462 | 0.8441 | | 0.0036 | 149.0 | 298 | 0.5164 | 0.8462 | 0.8441 | | 0.0028 | 150.0 | 300 | 0.5167 | 0.8462 | 0.8441 | | 0.0026 | 151.0 | 302 | 0.5172 | 0.8462 | 0.8441 | | 0.0028 | 152.0 | 304 | 0.5174 | 0.8462 | 0.8441 | | 0.0031 | 153.0 | 306 | 0.5172 | 0.8462 | 0.8441 | | 0.0029 | 154.0 | 308 | 0.5168 | 0.8462 | 0.8441 | | 0.0031 | 155.0 | 310 | 0.5168 | 0.8462 | 0.8441 | | 0.0033 | 156.0 | 312 | 0.5167 | 0.8462 | 0.8441 | | 0.003 | 157.0 | 314 | 0.5168 | 0.8462 | 0.8441 | | 0.0029 | 158.0 | 316 | 0.5175 | 0.8462 | 0.8441 | | 0.0031 | 159.0 | 318 | 0.5181 | 0.8462 | 0.8441 | | 0.003 | 160.0 | 320 | 0.5186 | 0.8462 | 0.8441 | | 0.0031 | 161.0 | 322 | 0.5190 | 0.8462 | 0.8441 | | 0.0032 | 162.0 | 324 | 0.5194 | 0.8462 | 0.8441 | | 0.0028 | 163.0 | 326 | 0.5201 | 0.8462 | 0.8441 | | 0.0026 | 164.0 | 328 | 0.5209 | 0.8462 | 0.8441 | | 0.0032 | 165.0 | 330 | 0.5218 | 0.8462 | 0.8441 | | 0.0031 | 166.0 | 332 | 0.5226 | 0.8462 | 0.8441 | | 0.0029 | 167.0 | 334 | 0.5234 | 0.8462 | 0.8441 | | 0.0032 | 168.0 | 336 | 0.5239 | 0.8462 | 0.8441 | | 0.0031 | 169.0 | 338 | 0.5240 | 0.8462 | 0.8441 | | 0.003 | 170.0 | 340 | 0.5243 | 0.8462 | 0.8441 | | 0.0031 | 171.0 | 342 | 0.5246 | 0.8462 | 0.8441 | | 0.0024 | 172.0 | 344 | 0.5250 | 0.8462 | 0.8441 | | 0.0025 | 173.0 | 346 | 0.5256 | 0.8462 | 0.8441 | | 0.0028 | 174.0 | 348 | 0.5265 | 0.8462 | 0.8441 | | 0.003 | 175.0 | 350 | 0.5272 | 0.8462 | 0.8441 | | 0.003 | 176.0 | 352 | 0.5275 | 0.8462 | 0.8441 | | 0.0027 | 177.0 | 354 | 0.5278 | 0.8462 | 0.8441 | | 0.0027 | 178.0 | 356 | 0.5277 | 0.8462 | 0.8441 | | 0.0028 | 179.0 | 358 | 0.5276 | 0.8462 | 0.8441 | | 0.0027 | 180.0 | 360 | 0.5274 | 0.8462 | 0.8441 | | 0.0028 | 181.0 | 362 | 0.5272 | 0.8462 | 0.8441 | | 0.0035 | 182.0 | 364 | 0.5270 | 0.8462 | 0.8441 | | 0.003 | 183.0 | 366 | 0.5269 | 0.8462 | 0.8441 | | 0.0028 | 184.0 | 368 | 0.5267 | 0.8462 | 0.8441 | | 0.0026 | 185.0 | 370 | 
0.5266 | 0.8462 | 0.8441 | | 0.0033 | 186.0 | 372 | 0.5265 | 0.8462 | 0.8441 | | 0.0028 | 187.0 | 374 | 0.5265 | 0.8462 | 0.8441 | | 0.0025 | 188.0 | 376 | 0.5267 | 0.8462 | 0.8441 | | 0.0029 | 189.0 | 378 | 0.5268 | 0.8462 | 0.8441 | | 0.0029 | 190.0 | 380 | 0.5269 | 0.8462 | 0.8441 | | 0.0024 | 191.0 | 382 | 0.5270 | 0.8462 | 0.8441 | | 0.0031 | 192.0 | 384 | 0.5271 | 0.8462 | 0.8441 | | 0.0028 | 193.0 | 386 | 0.5273 | 0.8462 | 0.8441 | | 0.0026 | 194.0 | 388 | 0.5274 | 0.8462 | 0.8441 | | 0.0027 | 195.0 | 390 | 0.5275 | 0.8462 | 0.8441 | | 0.0026 | 196.0 | 392 | 0.5276 | 0.8462 | 0.8441 | | 0.0026 | 197.0 | 394 | 0.5277 | 0.8462 | 0.8441 | | 0.0028 | 198.0 | 396 | 0.5277 | 0.8462 | 0.8441 | | 0.0026 | 199.0 | 398 | 0.5278 | 0.8462 | 0.8441 | | 0.003 | 200.0 | 400 | 0.5278 | 0.8462 | 0.8441 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Chiuchiyin/DialoGPT-small-Donald
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-05-04T23:38:29Z
--- license: mit --- ```python from diffusers import LDMSuperResolutionPipeline import requests import tempfile import torch import numpy as np model_id = "csaybar/ldm-super-resolution-4x-cloudsen12" # load model and scheduler pipeline = LDMSuperResolutionPipeline.from_pretrained(model_id) # load image demo_file = "https://huggingface.co/csaybar/ldm-super-resolution-4x-cloudsen12/resolve/main/demo.pt" # create temp file but keep it open file = tempfile.NamedTemporaryFile() with requests.get(demo_file, stream=True) as r: with open(file.name, 'wb') as f: for chunk in r.iter_content(chunk_size=8192): f.write(chunk) torch_image = torch.load(file.name, map_location=torch.device('cpu')) upscaled_image = pipeline(torch_image[0:1], num_inference_steps=100, output_type="numpy") hrrgb = upscaled_image[0][0, :, :] hrrgb = hrrgb[:, :, [0, 1, 2]] lrrgb = torch_image[0, [0, 1, 2], :, :] lrrgb = lrrgb.permute(1, 2, 0) lrrgb = (lrrgb + 1)/2 import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 2, figsize=(20, 10)) ax[0].imshow(lrrgb) ax[1].imshow(hrrgb) plt.show() ```
ChoboAvenger/DialoGPT-small-DocBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - de pipeline_tag: text-generation tags: - bloom - lora - LLM --- Github: https://github.com/abdullahalzubaer/bloom-6b4-clp-german-lora-inference Dataset used to train the adapter (see this thread for more details: https://huggingface.co/asprenger/bloom-6b4-clp-german-instruct-lora/discussions/2): - yizhongw/self_instruct [translated to German] - https://huggingface.co/datasets/yizhongw/self_instruct This LoRA adapter is from https://huggingface.co/asprenger/bloom-6b4-clp-german-instruct-lora. Thanks for the adapter! I did not train it. I thought I was uploading the complete bloom-6b4-clp-german model with the adapter that I had gotten to work, but after pushing the model I realized that it was only the adapter. Still exploring how PEFT works with LoRA :) Strict requirement for peft: `peft==0.2.0`. Install with `pip install transformers accelerate bitsandbytes peft==0.2.0`. The latest peft has breaking changes with bloom-6b4-clp-german and this LoRA adapter, and the only way to get them both to work is (I think) to train the base model or the adapter again (I am not sure yet). Reference: - https://github.com/linhduongtuan/BLOOM-LORA/issues/5 - https://github.com/huggingface/peft/issues/276
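A loading sketch with `peft==0.2.0`; the base checkpoint id below is an assumption, so check the linked asprenger repo for the exact base model before running:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel  # peft==0.2.0, as required above

base_id = "malteos/bloom-6b4-clp-german"                     # assumed base checkpoint
adapter_id = "asprenger/bloom-6b4-clp-german-instruct-lora"  # the LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", load_in_8bit=True)
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Schreibe ein kurzes Gedicht über den Sommer.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```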
ChrisP/xlm-roberta-base-finetuned-marc-en
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-albert-base-v2-on-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.95812 - name: F1 type: f1 value: 0.9580680043253634 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-albert-base-v2-on-imdb This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1288 - Accuracy: 0.9581 - F1: 0.9581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
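A classification sketch with the transformers pipeline; the repo id is a placeholder for this checkpoint's actual path:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual path of this checkpoint.
clf = pipeline("text-classification", model="<user>/finetuning-albert-base-v2-on-imdb")

print(clf("A surprisingly heartfelt film with terrific performances."))
```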
ChrisVCB/DialoGPT-medium-ej
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: cc-by-nc-sa-4.0 datasets: - jeffwan/sharegpt_vicuna - Hello-SimpleAI/HC3 - tatsu-lab/alpaca - Anthropic/hh-rlhf - victor123/evol_instruct_70k tags: - Composer - MosaicML - llm-foundry inference: false --- # MPT-7B-Chat MPT-7B-Chat is a chatbot-like model for dialogue generation. It was built by finetuning [MPT-7B](https://huggingface.co/spaces/mosaicml/mpt-7b) on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ (non-commercial use only) * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date May 5, 2023 ## Model License _CC-By-NC-SA-4.0_ (non-commercial use only) ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-w0tiddn9-WGTlRpfjcO9J5jyrMub1dg)! ### Example Dialogue **SamIAm85**: >I want you to come up with a tweet based on this summary of the article: >"Introducing MPT-7B, the latest entry in our MosaicML Foundation Series. >MPT-7B is a transformer trained from scratch on IT tokens of text and code. >It is open source and it matches the quality of LLaMA-7B. >MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. >Starting today, you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. >For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-Story Writer-65k+, the last of which uses a context length of 65k tokens! **MPT-7B-Instruct**: >Here's my suggestion for a tweet: >"Exciting news! @MosaicML has launched #MPT_7B - a new standard for open-source LLMs. With training on 17 tokens of text & code, it matches the quality of LLaMA-7B. Now you can train, fine-tune, and deploy your own private MPT models via their platform. Check out the full blog post here: https://bit.ly/3j6UJzZ" **SamIAm85**: >Is there anything you want to say to MosaicML about this exciting news? **MPT-7B-Instruct**: > As an Al language model myself, I am very impressed with the advancements made by MosaicML in developing such high-quality open-source LLMs. > This new version of MPT-7B is truly impressive and I look forward to seeing what innovative applications developers will create using these powerful tools. > Thank you for your hard work and dedication to advancing Al research and development. ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-chat', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. 
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b-chat', trust_remote_code=True ) config.attn_config['attn_impl'] = 'triton' model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-chat', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True ) model.to(device='cuda:0') ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b-chat', trust_remote_code=True ) config.update({"max_seq_len": 4096}) model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-chat', config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B-Chat can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-Chat was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## Acknowledgements This model was finetuned by Sam Havens and the MosaicML NLP team ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
ChristianOrr/madnet_keras
[ "tensorboard", "dataset:flyingthings-3d", "dataset:kitti", "arxiv:1810.05424", "vision", "deep-stereo", "depth-estimation", "Tensorflow2", "Keras", "license:apache-2.0" ]
depth-estimation
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: mattjmattj/HF_RL_unit5_pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ChristopherA08/IndoELECTRA
[ "pytorch", "electra", "pretraining", "id", "dataset:oscar", "transformers" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 281.33 +/- 24.88 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the repo id and filename below are placeholders -- replace them with this model's actual values):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id / filename -- substitute the actual values for this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
ChukSamuels/DialoGPT-small-Dr.FauciBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1368 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BatchAllTripletLoss.BatchAllTripletLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1368, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Chun/w-zh2en-hsk
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-bottomCleanedData results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9931895573212258 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-bottomCleanedData This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0238 - Accuracy: 0.9932 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 7 - total_train_batch_size: 56 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3257 | 1.0 | 141 | 0.2017 | 0.9330 | | 0.2234 | 2.0 | 283 | 0.0655 | 0.9773 | | 0.2719 | 2.99 | 424 | 0.0542 | 0.9773 | | 0.1726 | 4.0 | 566 | 0.0446 | 0.9818 | | 0.2053 | 4.99 | 707 | 0.0373 | 0.9864 | | 0.1794 | 6.0 | 849 | 0.0413 | 0.9864 | | 0.1645 | 7.0 | 991 | 0.0446 | 0.9818 | | 0.1445 | 8.0 | 1132 | 0.0238 | 0.9932 | | 0.1469 | 9.0 | 1274 | 0.0252 | 0.9909 | | 0.0931 | 9.96 | 1410 | 0.0236 | 0.9921 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Chun/w-zh2en-mtm
[ "pytorch", "mbart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - conversational --- # beterbiffin discord bot
Chungu424/DATA
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
pipeline_tag: text-generation
---

```python
from transformers import pipeline

generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B')
generator("EleutherAI has", do_sample=True, min_length=50)
```
Chungu424/repodata
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
inference: false
---
**NOTE: this "delta model" cannot be used directly.** Users must apply it on top of the original LLaMA weights to obtain the TryMoreGPT weights. See https://github.com/TrustedLLM/TryMoreGPT for details.
<br>
<br>

## Introduction

TryMoreGPT-7B is an open-source chatbot developed by Chuanmo Research Institute (揣摩研习社). The project uses LLaMA as the base model and the Vicuna training framework, and was instruction-tuned on ShareGPT, the Chinese-English Alpaca datasets, and the general-values and code-writing subsets of COIG. On Chinese-language tasks it performs very competitively against the original Vicuna and other Chinese chatbots.
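The official merge procedure lives in the repository linked above. As a rough sketch of how delta weights are commonly applied (an assumption about this release, not its documented method; the local paths below are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM

# Load the original LLaMA-7B weights and the TryMoreGPT delta checkpoint (placeholder paths).
base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b-hf", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("path/to/trymoregpt-7b-delta", torch_dtype=torch.float16)

# Add the delta on top of the base weights, assuming matching parameter names and shapes.
delta_sd = delta.state_dict()
for name, param in base.named_parameters():
    param.data += delta_sd[name]

base.save_pretrained("trymoregpt-7b-merged")
```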
Ci/Pai
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 inference: false --- # Dromedary Model Card **NOTE: This "delta model" cannot be used directly.** Users have to apply it on top of the original LLaMA weights to get actual Dromedary weights. See https://github.com/IBM/Dromedary#model-weights for instructions. ## Model details <div align="center"> <img src="https://raw.githubusercontent.com/IBM/Dromedary/main/assets/images/dromedary_logo.svg" alt="Dromedary Logo"/> </div> **Model type:** Dromedary is an open-source self-aligned language model trained with minimal human supervision. The base language model is LLaMA-65b, based on the transformer architecture. **Model date:** Dromedary was trained between April 2023 and May 2023, but its knowledge only goes up until Sept-2021. **Organizations developing the model:** The Dromedary team as a joint effort between CMU and IBM. **Paper or resources for more information:** https://mitibmdemos.draco.res.ibm.com/dromedary **License:** LLaMA's Non-commercial bespoke license **Where to send questions or comments about the model:** https://github.com/IBM/Dromedary/issues ## Intended use **Primary intended uses:** The primary use of Dromedary is research on the alignment of large language models. **Primary intended users:** The primary intended users of the model are researchers in artificial intelligence. ## Delta weights We use the following configuration for the LoRA weights: ``` --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \ --lora_r=16 \ ``` ## Training dataset Fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning), ## Evaluation dataset We evaluate Dromedary on TruthfulQA and HHH Eval, as well as Vicuna benchmark questions.
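Since the delta is released as LoRA weights with the target modules above, one way it might be loaded is via `peft`. This is only a sketch under the assumptions that the adapter is in a PEFT-compatible format and that you already have converted LLaMA-65B weights locally (all paths are placeholders); the authoritative instructions are in the GitHub repository linked above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/llama-65b-hf"            # locally converted LLaMA-65B weights (placeholder)
adapter_path = "path/to/dromedary-65b-delta"  # downloaded LoRA delta weights (placeholder)

tokenizer = AutoTokenizer.from_pretrained(base_path)
base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16, device_map="auto")

# Attach the LoRA delta (q_proj, k_proj, v_proj, o_proj targets) on top of the base model.
model = PeftModel.from_pretrained(base, adapter_path)
```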
Cinnamon/electra-small-japanese-generator
[ "pytorch", "electra", "fill-mask", "ja", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # hli/distilroberta-base-sentence-transformer-eval-qqp This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('hli/distilroberta-base-sentence-transformer-eval-qqp') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('hli/distilroberta-base-sentence-transformer-eval-qqp') model = AutoModel.from_pretrained('hli/distilroberta-base-sentence-transformer-eval-qqp') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hli/distilroberta-base-sentence-transformer-eval-qqp) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3181 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 3181, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Ciruzzo/DialoGPT-medium-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ciruzzo/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- datasets: - glue model-index: - name: e5-base-mnli results: [] pipeline_tag: zero-shot-classification language: - en license: mit --- # e5-base-mnli This model is a fine-tuned version of [intfloat/e5-base](https://huggingface.co/intfloat/e5-base) on the glue dataset. ## Model description [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 ## How to use the model The model can be loaded with the `zero-shot-classification` pipeline like so: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="mjwong/e5-base-mnli") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(sequence_to_classify, candidate_labels) #{'sequence': 'one day I will see the world', # 'labels': ['travel', 'cooking', 'dancing'], # 'scores': [0.9966430068016052, 0.002475415589287877, 0.0008815451874397695]} ``` If more than one candidate label can be correct, pass `multi_class=True` to calculate each class independently: ```python candidate_labels = ['travel', 'cooking', 'dancing', 'exploration'] classifier(sequence_to_classify, candidate_labels, multi_class=True) #{'sequence': 'one day I will see the world', # 'labels': ['travel', 'exploration', 'cooking', 'dancing'], # 'scores': [0.9982951879501343, # 0.9373384714126587, # 0.37681397795677185, # 0.06597542017698288]} ``` ### Eval results The model was evaluated using the dev sets for MultiNLI and test sets for ANLI. The metric used is accuracy. |Datasets|mnli_dev_m|mnli_dev_mm|anli_test_r1|anli_test_r2|anli_test_r3| | :---: | :---: | :---: | :---: | :---: | :---: | |[e5-base-mnli](https://huggingface.co/mjwong/e5-base-mnli)|0.840|0.839|0.231|0.285|0.309| |[e5-large-mnli](https://huggingface.co/mjwong/e5-large-mnli)|0.868|0.869|0.301|0.296|0.294| |[e5-large-unsupervised-mnli](https://huggingface.co/mjwong/e5-large-unsupervised-mnli)|0.865|0.867|0.314|0.285|0.303| |[e5-large-mnli-anli](https://huggingface.co/mjwong/e5-large-mnli-anli)|0.843|0.848|0.646|0.484|0.458| |[e5-large-unsupervised-mnli-anli](https://huggingface.co/mjwong/e5-large-unsupervised-mnli-anli)|0.836|0.842|0.634|0.481|0.478| ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Framework versions - Transformers 4.28.1 - Pytorch 1.12.1+cu116 - Datasets 2.11.0 - Tokenizers 0.12.1
Ciruzzo/DialoGPT-small-hattypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
ClaudeCOULOMBE/RickBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc inference: false --- # MPT-7B MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B is * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B: The following models are finetuned on MPT-7B: * [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths. Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3). At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens. We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b). * License: Apache 2.0 * [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following. Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. 
* License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) * [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets. * License: _CC-By-NC-SA-4.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat) ## Model Date May 5, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.attn_config['attn_impl'] = 'triton' model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True ) model.to(device='cuda:0') ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b', trust_remote_code=True ) config.update({"max_seq_len": 4096}) model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b', config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. 
The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 | | C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 | | The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 | | RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 | | S2ORC | 48.85 B | 0.033 | 33 B | 0.68 | | RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 | | RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile) (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), model flop utilization (MFU) increased by up to four percentage points. ### Training Configuration This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. 
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-7b},
    note    = {Accessed: 2023-03-28}, % change this date
    urldate = {2023-03-28} % change this date
}
```
CleveGreen/FieldClassifier
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
--- license: cc-by-sa-3.0 datasets: - mosaicml/dolly_hhrlhf tags: - Composer - MosaicML - llm-foundry inference: false --- # MPT-7B-Instruct MPT-7B-Instruct is a model for short-form instruction following. It is built by finetuning [MPT-7B](https://huggingface.co/spaces/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. * License: _CC-By-SA-3.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct) This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date May 5, 2023 ## Model License CC-By-SA-3.0 ## Documentation * [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)! ### Example Question/Instruction **Longboi24**: > What is a quoll? **MPT-7B-Instruct**: >A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America ## How to Use Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package. It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-instruct', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b-instruct', trust_remote_code=True ) config.attn_config['attn_impl'] = 'triton' model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-instruct', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True ) model.to(device='cuda:0') ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. 
For example: ```python config = transformers.AutoConfig.from_pretrained( 'mosaicml/mpt-7b-instruct', trust_remote_code=True ) config.update({"max_seq_len": 4096}) model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-instruct', config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` ### Formatting This model was trained on data formatted in the dolly-15k format: ```python INSTRUCTION_KEY = "### Instruction:" RESPONSE_KEY = "### Response:" INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request." PROMPT_FOR_GENERATION_FORMAT = """{intro} {instruction_key} {instruction} {response_key} """.format( intro=INTRO_BLURB, instruction_key=INSTRUCTION_KEY, instruction="{instruction}", response_key=RESPONSE_KEY, ) example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering." fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example) ``` In the above example, `fmt_ex` is ready to be tokenized and sent through the model. ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 6.7B | |n_layers | 32 | | n_heads | 32 | | d_model | 4096 | | vocab size | 50432 | | sequence length | 2048 | ## PreTraining Data For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b). The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-Instruct was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## Acknowledgements This model was finetuned by Sam Havens and the MosaicML NLP team ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please cosult an attorney before using this model for commercial purposes. 
## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs}, year = {2023}, url = {www.mosaicml.com/blog/mpt-7b}, note = {Accessed: 2023-03-28}, % change this date urldate = {2023-03-28} % change this date } ```
CleveGreen/JobClassifier_v2
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
---
license: apache-2.0
---

An int8-quantized version of gpt2-xl-sft; GPU memory usage is reduced by roughly 50%.
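A loading sketch, assuming the checkpoint follows the standard `transformers` + `bitsandbytes` 8-bit path (the repo may instead ship pre-quantized weights with its own loading code, in which case follow that); the repo id below is a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/gpt2-xl-sft-int8"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)

# load_in_8bit requires `bitsandbytes` and `accelerate` and roughly halves GPU memory use.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```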
Cloudy/DialoGPT-CJ-large
[ "pytorch", "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: lrthomps/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
CoachCarter/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1337 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1337, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
CodeDanCode/SP-KyleBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: carbman --- ### carbman2-1SD Dreambooth model trained by ambientocclusion with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-512 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: carbman (use that on your prompt) ![carbman 0](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%281%29.jpg)![carbman 1](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%282%29.jpg)![carbman 2](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%283%29.jpg)![carbman 3](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%284%29.jpg)![carbman 4](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%285%29.jpg)![carbman 5](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%286%29.jpg)![carbman 6](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%287%29.jpg)![carbman 7](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%288%29.jpg)![carbman 8](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%289%29.jpg)![carbman 9](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2810%29.jpg)![carbman 10](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2811%29.jpg)![carbman 11](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2812%29.jpg)![carbman 12](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2813%29.jpg)![carbman 13](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2814%29.jpg)![carbman 14](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2815%29.jpg)![carbman 15](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2816%29.jpg)![carbman 16](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2817%29.jpg)![carbman 17](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2818%29.jpg)![carbman 18](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2819%29.jpg)![carbman 19](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2820%29.jpg)![carbman 20](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2821%29.jpg)![carbman 21](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2822%29.jpg)![carbman 22](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2823%29.jpg)![carbman 23](https://huggingface.co/ambientocclusion/carbman2-1sd/resolve/main/concept_images/carbman_%2824%29.jpg)
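Beyond the Colab linked above, here is a local `diffusers` sketch. It assumes the repo hosts a full Stable Diffusion pipeline (as Dreambooth training Spaces usually push) and uses the repo id that appears in the sample-image URLs:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ambientocclusion/carbman2-1sd", torch_dtype=torch.float16
).to("cuda")

# "carbman" is the trained concept token from this card.
image = pipe("a photo of carbman in a snowy forest", num_inference_steps=30).images[0]
image.save("carbman.png")
```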
CodeNinja1126/bert-q-encoder
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 266.84 +/- 18.40 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
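A sketch of what that code might look like; the repo id and filename are placeholders, since the card does not state them, and this assumes a recent stable-baselines3 / gymnasium setup with Box2D installed:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (placeholder repo id / filename).
checkpoint = load_from_hub(repo_id="your-username/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```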
CodeNinja1126/test-model
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: my_awesome_billsum_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: ca_test args: default metrics: - name: Rouge1 type: rouge value: 0.1452 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5388 - Rouge1: 0.1452 - Rouge2: 0.0538 - Rougel: 0.1172 - Rougelsum: 0.1171 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8326 | 0.1261 | 0.0367 | 0.1047 | 0.1049 | 19.0 | | No log | 2.0 | 124 | 2.6220 | 0.136 | 0.045 | 0.1118 | 0.1115 | 19.0 | | No log | 3.0 | 186 | 2.5558 | 0.1429 | 0.0517 | 0.1152 | 0.1152 | 19.0 | | No log | 4.0 | 248 | 2.5388 | 0.1452 | 0.0538 | 0.1172 | 0.1171 | 19.0 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
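A usage sketch following the common summarization recipe for T5 checkpoints; the repo id is a placeholder, and prepending the `summarize:` prefix is an assumption about how the inputs were formatted during fine-tuning:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub id of this fine-tuned checkpoint.
summarizer = pipeline("summarization", model="your-username/my_awesome_billsum_model")

bill = (
    "summarize: The act would require each state agency to publish its procurement "
    "records in a machine-readable format and to report annually on compliance."
)
print(summarizer(bill, max_length=60, min_length=10, do_sample=False))
```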
CoderBoy432/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: mit tags: - generated_from_trainer model-index: - name: hiphop-ds-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hiphop-ds-v3 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.1 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
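A generation sketch for the fine-tuned checkpoint; the repo id below is a placeholder, not the actual Hub id:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/hiphop-ds-v3")  # placeholder repo id
print(generator("I woke up this morning", max_new_tokens=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```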
CoffeeAddict93/gpt1-modest-proposal
[ "pytorch", "openai-gpt", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "OpenAIGPTLMHeadModel" ], "model_type": "openai-gpt", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2217 - Accuracy: 0.922 - F1: 0.9221 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8167 | 1.0 | 250 | 0.3190 | 0.906 | 0.9039 | | 0.2442 | 2.0 | 500 | 0.2217 | 0.922 | 0.9221 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Tokenizers 0.13.3
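A minimal inference sketch (not part of the generated card): the checkpoint path is an assumption, since the card does not give a Hub id for this fine-tuned emotion classifier.

```python
# Hedged usage sketch for the fine-tuned emotion classifier described above.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-emotion",  # assumed local/Hub path for this checkpoint
    top_k=None,  # return a score for every emotion label rather than only the best one
)
print(classifier("I am so happy you remembered my birthday!"))
```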
CogComp/roberta-temporal-predictor
[ "pytorch", "roberta", "fill-mask", "arxiv:2202.00436", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: gpl-3.0 language: - en --- These are some LoRAs you can use to make sure that LoRAs are loading the way you expect them to. `AAAAAAA` and `BBBBBBB` can be used with [Neko-Institute-of-Science/LLaMA-7B-4bit-128g](https://huggingface.co/Neko-Institute-of-Science/LLaMA-7B-4bit-128g) `8bit_AAAAAAA` and `8bit_BBBBBBB` can be used with [decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) These models do what is on the label: `AAAAAAA` will almost always output a stream of the letter "A". `BBBBBBB` will do the same for "B". I decided to make these because it was frustrating trying to figure out whether models were actually loading the way I expected them to. You could, instead, set your parameters and seed to the same thing and do a couple of generation tests, with and without LoRA, and look for differences, but this is a lot easier to see at a glance.
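One way to sanity-check an adapter outside of a web UI is to attach it with PEFT and inspect the generation directly. The sketch below is not from the original card: the adapter path is a placeholder for wherever the files from this repo were downloaded, and 8-bit loading assumes `bitsandbytes` is installed.

```python
# Hedged sketch: load the base model named above, attach one of the test adapters
# with PEFT, and confirm the output degenerates into a stream of "A".
from transformers import AutoModelForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "decapoda-research/llama-7b-hf"  # base model the card pairs with the 8bit adapters
tokenizer = LlamaTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, "./8bit_AAAAAAA")  # placeholder path to the downloaded adapter

inputs = tokenizer("Hello there,", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0]))  # should be dominated by the letter "A" if the LoRA actually loaded
```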
ComCom-Dev/gpt2-bible-test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - zh license: mit tags: - 1.1.0 - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: SpeechT5 TTS Dutch neunit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 TTS Dutch neunit This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.12.1
cometrain/neurotitle-rugpt3-small
[ "pytorch", "gpt2", "text-generation", "ru", "en", "dataset:All-NeurIPS-Papers-Scraper", "transformers", "Cometrain AutoCode", "Cometrain AlphaML", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- tags: - autotrain - text-classification language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - Ramya2300/autotrain-data-final-sentiment-analysis co2_eq_emissions: emissions: 2.1068707556976243 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 55566129341 - CO2 Emissions (in grams): 2.1069 ## Validation Metrics - Loss: 0.652 - Accuracy: 0.780 - Macro F1: 0.761 - Micro F1: 0.780 - Weighted F1: 0.780 - Macro Precision: 0.759 - Micro Precision: 0.780 - Weighted Precision: 0.781 - Macro Recall: 0.763 - Micro Recall: 0.780 - Weighted Recall: 0.780 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Ramya2300/autotrain-final-sentiment-analysis-55566129341 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ramya2300/autotrain-final-sentiment-analysis-55566129341", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ramya2300/autotrain-final-sentiment-analysis-55566129341", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Connorvr/TeachingGen
[ "pytorch", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - conversational --- # Diluc Genshin Impact DialoGPT Model
Contrastive-Tension/BERT-Base-CT
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- tags: - generated_from_keras_callback model-index: - name: Jaikar/biobert-base-cased-v1.1-finetuned-clinical-context results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jaikar/biobert-base-cased-v1.1-finetuned-clinical-context This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6433 - Validation Loss: 0.5915 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14625, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.6433 | 0.5915 | 0 | ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
Craig/mGqFiPhu
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0" ]
feature-extraction
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model biorad/roberta-large-peft-lora is restricted and you are not in the authorized list. Visit https://huggingface.co/biorad/roberta-large-peft-lora to ask for access.
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken
[ "pytorch", "tensorboard", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for caformer_b36.sail_in1k_384 A CAFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 98.8 - GMACs: 72.3 - Activations (M): 261.8 - Image size: 384 x 384 - **Papers:** - Metaformer baselines for vision: https://arxiv.org/abs/2210.13452 - **Original:** https://github.com/sail-sg/metaformer - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('caformer_b36.sail_in1k_384', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'caformer_b36.sail_in1k_384', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 96, 96]) # torch.Size([1, 256, 48, 48]) # torch.Size([1, 512, 24, 24]) # torch.Size([1, 768, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'caformer_b36.sail_in1k_384', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 768, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{yu2022metaformer_baselines, title={Metaformer baselines for vision}, author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao}, journal={arXiv preprint arXiv:2210.13452}, year={2022} } ```
Davlan/bert-base-multilingual-cased-finetuned-igbo
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-22k --- # Model card for convformer_b36.sail_in22k_ft_in1k_384 A ConvFormer (a MetaFormer) image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 99.9 - GMACs: 66.7 - Activations (M): 164.7 - Image size: 384 x 384 - **Papers:** - Metaformer baselines for vision: https://arxiv.org/abs/2210.13452 - **Original:** https://github.com/sail-sg/metaformer - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convformer_b36.sail_in22k_ft_in1k_384', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convformer_b36.sail_in22k_ft_in1k_384', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 96, 96]) # torch.Size([1, 256, 48, 48]) # torch.Size([1, 512, 24, 24]) # torch.Size([1, 768, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convformer_b36.sail_in22k_ft_in1k_384', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 768, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{yu2022metaformer_baselines, title={Metaformer baselines for vision}, author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao}, journal={arXiv preprint arXiv:2210.13452}, year={2022} } ```
Davlan/xlm-roberta-base-finetuned-english
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-22k --- # Model card for convformer_s36.sail_in22k_ft_in1k_384 A ConvFormer (a MetaFormer) image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 40.0 - GMACs: 22.5 - Activations (M): 89.6 - Image size: 384 x 384 - **Papers:** - Metaformer baselines for vision: https://arxiv.org/abs/2210.13452 - **Original:** https://github.com/sail-sg/metaformer - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('convformer_s36.sail_in22k_ft_in1k_384', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convformer_s36.sail_in22k_ft_in1k_384', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 96, 96]) # torch.Size([1, 128, 48, 48]) # torch.Size([1, 320, 24, 24]) # torch.Size([1, 512, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'convformer_s36.sail_in22k_ft_in1k_384', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{yu2022metaformer_baselines, title={Metaformer baselines for vision}, author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao}, journal={arXiv preprint arXiv:2210.13452}, year={2022} } ```
Davlan/xlm-roberta-base-finetuned-wolof
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for poolformerv2_s24.sail_in1k A PoolFormer-v2 (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 21.3 - GMACs: 3.4 - Activations (M): 10.7 - Image size: 224 x 224 - **Papers:** - Metaformer baselines for vision: https://arxiv.org/abs/2210.13452 - **Original:** https://github.com/sail-sg/metaformer - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('poolformerv2_s24.sail_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'poolformerv2_s24.sail_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 320, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'poolformerv2_s24.sail_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{yu2022metaformer_baselines, title={Metaformer baselines for vision}, author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao}, journal={arXiv preprint arXiv:2210.13452}, year={2022} } ```
Davlan/xlm-roberta-base-finetuned-zulu
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: false library_name: diffusers extra_gated_prompt: >- One more step before getting this model. This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well. extra_gated_fields: I have read the License and agree with its terms: checkbox duplicated_from: runwayml/stable-diffusion-inpainting --- Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask. The **Stable-Diffusion-Inpainting** was initialized with the weights of the [Stable-Diffusion-v-1-2](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original). First 595k steps regular training, then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything. [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) :-------------------------:|:-------------------------:| ## Examples: You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion). ### Diffusers ```python import torch from diffusers import StableDiffusionInpaintPipeline pipe = StableDiffusionInpaintPipeline.from_pretrained( "runwayml/stable-diffusion-inpainting", revision="fp16", torch_dtype=torch.float16, ) prompt = "Face of a yellow cat, high resolution, sitting on a park bench" #image and mask_image should be PIL images. 
#The mask structure is white for inpainting and black for keeping as is image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0] image.save("./yellow_cat_on_park_bench.png") ``` **How it works:** `image` | `mask_image` :-------------------------:|:-------------------------:| <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="300"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="300"/> `prompt` | `Output` :-------------------------:|:-------------------------:| <span style="position: relative;bottom: 150px;">Face of a yellow cat, high resolution, sitting on a park bench</span> | <img src="https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/test.png" alt="drawing" width="300"/> ### Original GitHub Repository 1. Download the weights [sd-v1-5-inpainting.ckpt](https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt) 2. Follow instructions [here](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/runwayml/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. 
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. 
The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide six checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`, `sd-v1-4.ckpt`, `sd-v1-5.ckpt` and `sd-v1-5-inpainting.ckpt` which were trained as follows, - `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. 515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - `sd-v1-4.ckpt`: Resumed from `sd-v1-2.ckpt`. 225,000 steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - `sd-v1-5.ckpt`: Resumed from sd-v1-2.ckpt. 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling. - `sd-v1-5-inpaint.ckpt`: Resumed from sd-v1-2.ckpt. 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve classifier-free guidance sampling. Then 440k steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything. - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Inpainting Evaluation To assess the performance of the inpainting model, we used the same evaluation protocol as in our [LDM paper](https://arxiv.org/abs/2112.10752). 
Since the Stable Diffusion Inpainting Model accepts a text input, we simply used a fixed prompt of `photograph of a beautiful empty scene, highest quality settings`. | Model | FID | LPIPS | |-----------------------------|------|------------------| | Stable Diffusion Inpainting | 1.00 | 0.141 (+- 0.082) | | Latent Diffusion Inpainting | 1.50 | 0.137 (+- 0.080) | | CoModGAN | 1.82 | 0.15 | | LaMa | 2.21 | 0.134 (+- 0.080) | ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
Declan/FoxNews_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # amittian/setfit_address_version_0_0_1 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("amittian/setfit_address_version_0_0_1") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Declan/HuffPost_model_v1
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: my_awesome_billsum_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: ca_test args: default metrics: - name: Rouge1 type: rouge value: 0.1494 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5293 - Rouge1: 0.1494 - Rouge2: 0.0571 - Rougel: 0.124 - Rougelsum: 0.1237 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8266 | 0.1259 | 0.0355 | 0.106 | 0.106 | 19.0 | | No log | 2.0 | 124 | 2.6125 | 0.1364 | 0.0479 | 0.1147 | 0.1149 | 19.0 | | No log | 3.0 | 186 | 2.5473 | 0.1456 | 0.0536 | 0.1213 | 0.121 | 19.0 | | No log | 4.0 | 248 | 2.5293 | 0.1494 | 0.0571 | 0.124 | 0.1237 | 19.0 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Declan/HuffPost_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: other datasets: - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered inference: false --- # WizardLM - uncensored: An Instruction-following LLM Using Evol-Instruct These files are GPTQ 4bit model files for [Eric Hartford's 'uncensored' version of WizardLM](https://huggingface.co/ehartford/WizardLM-7B-Uncensored). It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). Eric did a fresh 7B training using the WizardLM method, on [a dataset edited to remove all the "I'm sorry.." type ChatGPT responses](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered). ## Other repositories available * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ) * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML) * [Eric's unquantised model in HF format](https://huggingface.co/ehartford/WizardLM-7B-Uncensored) ## How to easily download and use this model in text-generation-webui Open the text-generation-webui UI as normal. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-7B-uncensored-GPTQ`. 3. Click **Download**. 4. Wait until it says it's finished downloading. 5. Click the **Refresh** icon next to **Model** in the top left. 6. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-7B-uncensored-GPTQ`. 7. If you see an error in the bottom right, ignore it - it's temporary. 8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama` 9. Click **Save settings for this model** in the top right. 10. Click **Reload the Model** in the top right. 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! ## Provided files **Compatible file - WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors** In the `main` branch - the default one - you will find `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`. This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility. It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui. * `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors` * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches * Works with text-generation-webui one-click-installers * Parameters: Groupsize = 128g. No act-order. * Command used to create the GPTQ: ``` python llama.py models/ehartford_WizardLM-7B-Uncensored c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/eric-gptq/WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors ``` # Eric's original model card This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA. Shout out to the open source AI/ML community, and everyone who helped me out, including Rohan, TheBloke, and Caseus # WizardLM's original model card Overview of Evol-Instruct Evol-Instruct is a novel method using LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, to improve the performance of LLMs. 
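For loading the quantised file from Python rather than through text-generation-webui, a hedged sketch using the AutoGPTQ library is below. The card itself only documents the webui route, so the AutoGPTQ calls, the prompt template, and the `device` value are assumptions, not instructions from the original.

```python
# Hedged sketch (not from the original card): load the 4-bit GPTQ checkpoint with AutoGPTQ.
# model_basename is the safetensors filename documented above, minus the extension.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/WizardLM-7B-uncensored-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order",
    use_safetensors=True,
    device="cuda:0",  # assumes a single CUDA GPU is available
)

prompt = "Explain why the sky is blue.\n\n### Response:"  # assumed WizardLM-style prompt template
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```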
![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_overall.png) ![info](https://github.com/nlpxucan/WizardLM/raw/main/imgs/git_running.png)
Declan/NPR_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 1653.81 +/- 116.98
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**

This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
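Until the TODO above is filled in, a minimal loading sketch along these lines should work; the repo id and filename are placeholders (assumptions), not values taken from this card.

```python
# Hypothetical loading sketch: repo_id and filename are placeholders, not confirmed
# by this card. Requires stable-baselines3 and huggingface_sb3.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="<user>/a2c-AntBulletEnv-v0",   # placeholder repo id
    filename="a2c-AntBulletEnv-v0.zip",     # placeholder filename
)
model = A2C.load(checkpoint)
```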
Declan/NewYorkTimes_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: sw_loso_f02_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sw_loso_f02_1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
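As an illustration only, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as below. This is a sketch under the assumption of a standard `Trainer` setup, not the script that produced this checkpoint; the model, dataset, and CTC data-collator wiring are omitted.

```python
# Sketch: TrainingArguments mirroring the listed hyperparameters (not the original script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sw_loso_f02_1",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size = 2
    warmup_steps=2000,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```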
Declan/NewYorkTimes_model_v4
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
datasets:
- c-s-ale/alpaca-gpt4-data
pipeline_tag: text2text-generation
---

This repo provides the training checkpoint of LLaMA on the alpaca_data_gpt4 dataset via LoRA [MLP] on 4xA100 (80G).

He et al. (2022) gave the insight that the FFN can better utilize modification at larger capacities.

The code is provided by [tloen/alpaca-lora: Instruct-tune LLaMA on consumer hardware (github.com)](https://github.com/tloen/alpaca-lora).

We modified the running script to:

```bash
torchrun --nproc_per_node=4 finetune.py \
    --base_model '/cache1/chtan/large_models/llama-hf/llama-7b' \
    --data_path './alpaca_data_gpt4.json' \
    --output_dir './gpt4-alpaca-lora_mlp-7b' \
    --batch_size 128 \
    --micro_batch_size 32 \
    --num_epochs 10 \
    --learning_rate 1e-4 \
    --cutoff_len 512 \
    --val_set_size 2000 \
    --lora_r 16 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[gate_proj,down_proj,up_proj]' \
    --train_on_inputs \
    --group_by_length
```

### Examples

**Instruction**: Tell me about alpacas.

**gpt4-alpaca-lora_mlp-7b**: Alpacas are small, domesticated animals that are native to the Andes Mountains in South America. They are part of the camelid family, which also includes llamas, camels, and vicuñas. Alpacas are known for their soft, silky fleece, which is used to make clothing, blankets, and other textiles. They are also raised for their meat, which is similar to lamb. Alpacas are herd animals and live in groups, with a dominant male and several females. They are known for their gentle nature and are often kept as pets or for their fiber.

**Alpaca-LoRA**: Alpacas are members of the camelid family and are native to the Andes Mountains of South America. They are known for their soft, luxurious fleece, which is used to make clothing, blankets, and other items. Alpacas are herbivores and graze on grasses and other plants. They are social animals and live in herds of up to 20 individuals.

**Stanford Alpaca**: Alpacas are small, fluffy animals related to camels and llamas. They are native to Peru and Bolivia, and were first domesticated around 5,000 years ago. They are kept mainly for their fine, soft fleece, which is used to make knitwear and other garments. Alpacas are herd animals and live in small family groups, led by an older male. They are highly social animals and can form strong bonds with their owners.

**text-davinci-003**: Alpacas are a domesticated species of South American camelid, similar to llamas. They are native to the Andes Mountains in South America and are kept as livestock for their soft, luxurious wool. Alpacas are social animals and live in herds of up to 20 individuals. They are typically kept in small herds of two to five animals, and are relatively easy to care for. Alpacas are herbivores and feed on grass, hay, and other vegetation. They are also known for their gentle and friendly nature, making them popular as pets.

---

**Instruction**: Write a Python program that prints the first 10 Fibonacci numbers.

**gpt4-alpaca-lora_mlp-7b**: Here is a simple Python program that prints the first 10 Fibonacci numbers:

```python
# define the function to calculate the Fibonacci numbers
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

# call the function to calculate the first 10 Fibonacci numbers
print(fibonacci(10))
```

This program defines a function `fibonacci` that takes an integer `n` as an argument and returns the `n`th Fibonacci number. The function uses recursion to calculate the Fibonacci numbers. The `if` statement checks if the argument is equal to 0 or 1, and if it is, the function returns the corresponding value. Otherwise, the function calls itself with the argument `n - 1` and `n - 2`, and returns the sum of the two results. The program then calls the function to calculate the first 10 Fibonacci numbers, and prints the result.

**Alpaca-LoRA**:

```python
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)

print(fibonacci(10))
```

**Stanford Alpaca**:

```python
numbers = [0, 1]

for i in range(2, 11):
    numbers.append(numbers[i-2] + numbers[i-1])

print(numbers)
# Output: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

**text-davinci-003**:

```python
# Fibonacci numbers are the numbers in the following integer sequence, called the Fibonacci sequence, and characterized by the fact that every number after the first two is the sum of the two preceding ones.

# Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

# Program:

# Initialize the first two numbers of the Fibonacci sequence
a = 0
b = 1

# Print the first 10 numbers of the Fibonacci sequence
print(a)
print(b)

for i in range(2,10):
    c = a + b
    a = b
    b = c
    print(c)
```

---

> [1] Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig: Towards a Unified View of Parameter-Efficient Transfer Learning. ICLR 2022
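Since this repo stores only a LoRA checkpoint rather than merged weights, inference normally goes through PEFT. The sketch below is an assumption rather than an official recipe: the base-model path and the adapter repo id are placeholders, the adapter is assumed to be in standard PEFT format, and the prompt template simply mirrors the Stanford Alpaca format used by tloen/alpaca-lora.

```python
# Hypothetical inference sketch; the base model path and adapter id are placeholders.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = "/path/to/llama-7b-hf"               # placeholder: LLaMA-7B converted to HF format
adapter = "<user>/gpt4-alpaca-lora_mlp-7b"  # placeholder: this LoRA repo

tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```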
Declan/Reuters_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: - en metrics: - accuracy library_name: adapter-transformers pipeline_tag: text-generation tags: - biology - medical - chemistry - text-generation-inference --- ![Screenshot 2023-05-05 092541.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/9gMBxc270uN8agP8n6-5m.png) ![Screenshot 2023-05-05 094102.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/kqOUgU2wyxLDP1gKnCKPC.png) ![Screenshot 2023-05-05 094303.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/WpNXVBwbLCNNvWJ65dJI8.png) ![Screenshot 2023-05-05 094409.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/HZ1YdlwfZAi8CPlvrcqDr.png) ![Screenshot 2023-05-05 094542.png](https://s3.amazonaws.com/moonup/production/uploads/641ee41d863b87326f45a5f1/h9EJw9fRNMBpOwJVVw6zI.png) # Model Card for Model ID This modelcard describes a fine-tuned GPT-2 language model for medical research using a personally collected dataset. The model is intended for text generation in the medical research domain. ## Model Details This modelcard describes a fine-tuned GPT-2 language model for medical research using a personally collected dataset. The model is intended for text generation in the medical research domain. ### Model Description The model has been fine-tuned on a GPT-2 architecture and trained with a task-specific parameter for text generation. The do_sample parameter is set to true, which means that the model can generate text on its own rather than simply copying from the input. The max_length parameter is set to 50, which means that the maximum length of the generated text will be 50 tokens. - **Developed by:** [OpenAI] - **Shared by [optional]:** [More Information Needed] - **Model type:** [Language Model] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [GPT-2] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses ### Direct Use This model can be used for text generation in the medical research domain. It can be used to generate text for a variety of purposes, such as research papers, reports, and summaries. ### Downstream Use [optional] The model can be fine-tuned for downstream tasks such as summarization, question answering, and text classification. ### Out-of-Scope Use This model may not perform as well on text outside the medical research domain. It is important to carefully evaluate the generated text to ensure that it is appropriate for the intended use. ## Bias, Risks, and Limitations This modelcard acknowledges that all language models have limitations and potential biases. The model may produce biased or inaccurate outputs if the input data contains bias or if the training data is not diverse enough. The risks of using the model include the possibility of generating misleading or harmful information. ### Recommendations To mitigate potential risks and limitations, users of the model should carefully evaluate the generated text and consider the following recommendations: 1)Evaluate the input data for potential bias and ensure that it is diverse and representative. 2)Consider fine-tuning the model on additional data to improve its accuracy and reduce the risk of bias. 
3)Review and edit the generated text before use to ensure that it is appropriate for the intended purpose. 4)Provide clear and transparent documentation of the model's limitations and potential biases to users and stakeholders. ## How to Get Started with the Model To use the model, load it in your preferred programming language using the transformers library, and pass in the input text. The model will generate text based on the input, using the task-specific parameters that have been set.
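For instance, a minimal sketch with the `transformers` pipeline; the model id below is a placeholder to be replaced with this checkpoint's actual id or local path.

```python
# Minimal usage sketch; "path-or-id-of-this-model" is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="path-or-id-of-this-model")

# Matches the card's task-specific parameters: sampling enabled, max length of 50 tokens.
result = generator(
    "Recent studies on tumour immunotherapy suggest that",
    do_sample=True,
    max_length=50,
)
print(result[0]["generated_text"])
```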
Declan/Reuters_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
---
license: other
---

★ Lora-pri_ver1

The trigger word is prishe. The character can appear without it, but including it may improve reproducibility.
The hat can be put on or taken off by adding or removing crown.
Depending on the generation model, if you do not need the outfit reproduced, results start to look like prishe from around epoch 25.
As the epoch number increases, fidelity goes up but versatility may go down. The file without a number is the final epoch.
By adjusting prompt emphasis and LoRA strength, outfits and poses can also be varied. Using LoRA Block Weight is also effective.

----------------------------------------

★ Lora-PandU

Two characters trained in one LoRA; switch between them with prishe and ulmia.
The tags were split per character so the characters would not mix, but it is unclear whether that succeeded.
If unwanted elements creep in, using negative prompts is one option.
For epochs, around 35 may be recommended: outfit fidelity is low, but the face looks right. For the file without a number, layer-wise application (LoRA Block Weight) is recommended.

● Tag notes

・prishe
Head gear: crown. Chest ribbon and stone: red ribbon. Ears: pointy ears (removing it may give Hume ears). Outfit: costume (removing the tag does not mean she undresses). Leg gear: short pants. Footwear: brown footwear.
Sample tags: prishe, costume, crown, white background, open mouth, red ribbon, pointy ears, brown footwear, hand on own hip, short pants

・ulmia
ulmia, uniform, solo, standing, full body, brown footwear, medium hair, brown eyes, ear piercing, circlet, orange hair, holding harp, hair ornament, jewelry
Head gear: circlet, hair ornament. Ears: elf. Outfit: uniform. Leg gear: black leggings, gaiters. Footwear: brown footwear. Instrument: harp.
Sample tags: ulmia, uniform, brown footwear, elf, white background, harp, hair ornament, black leggings, gaiters, circlet
Declan/WallStreetJournal_model_v6
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
duplicated_from: malikxseto/backups
---

Backups of models I found and like.
Declan/WallStreetJournal_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2023-05-05T08:59:50Z
--- license: creativeml-openrail-m tags: - coreml - stable-diffusion - text-to-image - not-for-all-eyes --- # Core ML Converted Model: - This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).<br> - Provide the model to an app such as Mochi Diffusion [Github](https://github.com/godly-devotion/MochiDiffusion) - [Discord](https://discord.gg/x2kartzxGv) to generate images.<br> - `split_einsum` version is compatible with all compute unit options including Neural Engine.<br> - `original` version is only compatible with CPU & GPU option.<br> - Custom resolution versions are tagged accordingly.<br> - `vae` tagged files have a vae embedded into the model.<br> - Descriptions are posted as-is from original model source. Not all features and/or results may be available in CoreML format.<br> - This model was converted with `vae-encoder` for i2i. # Note: Some models do not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). # a-zovya-rpg-artist-tools: Source(s): [CivitAI](https://civitai.com/models/8124/a-zovya-rpg-artist-tools) <img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bfe8f5fc-337e-4a0f-4199-edc491bfe500/width=450/preview2.jpeg" alt="preview2.png" style="max-height: 100%; max-width: 100%;"> <img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/307c1ffe-138d-4c1f-9934-290b36ce7600/width=450/04602-3943896826-Best_A-Zovya_RPG_Artist_Tools_V2.jpeg" alt="04602-3943896826-Best_A-Zovya_RPG_Artist_Tools_V2.png" style="max-height: 100%; max-width: 100%;"> <img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/75cd72cb-bb1f-4b9c-7f2a-49f69993d700/width=450/24148-987083034-Best_A-Zovya_RPG_Artist_Tools_V2.jpeg" alt="24148-987083034-Best_A-Zovya_RPG_Artist_Tools_V2.png" style="max-height: 100%; max-width: 100%;"> <img class="mantine-7aj0so" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7661277c-b4fd-4b62-bdef-3b34da682600/width=450/04603-504523982-Best_A-Zovya_RPG_Artist_Tools_V2.jpeg" alt="04603-504523982-Best_A-Zovya_RPG_Artist_Tools_V2.png" style="max-height: 100%; max-width: 100%;"> Depending on what you need, several models are included. If you want the best pretty girl portraits, you must have the SD 1.5 models for sure. If you need more details and better coherency on non-human objects, or you like to have hands with only 5 fingers, use the SD 2.1 models. Mix and match and use inpainting to touch up little mistakes. <a target="_blank" rel="ugc" href="https://civitai.com/models/34192/ultra-sharp-high-contrast-tutorial-vaeandupscaler"><strong>Detailed tutorial on how I get the results in the preview images.</strong></a> Check here if you're having trouble getting the same results. You can prompt any style you need with these models, but the default aesthetic is listed for each of the models in this handy list. Different models available, check the blue tabs above the images up top:
Declan/test_model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-learningRate-2-cola-3e-05 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5907527969578087 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-learningRate-2-cola-3e-05 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8555 - Matthews Correlation: 0.5908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.2022 | 1.0 | 535 | 0.9205 | 0.5285 | | 0.1155 | 2.0 | 1070 | 0.8555 | 0.5908 | | 0.1312 | 3.0 | 1605 | 0.9399 | 0.5496 | | 0.0956 | 4.0 | 2140 | 1.0178 | 0.5577 | | 0.048 | 5.0 | 2675 | 1.1525 | 0.5528 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
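A minimal inference sketch for this acceptability (CoLA) classifier; the model id is a placeholder, and the default `LABEL_0`/`LABEL_1` names are an assumption unless the config maps them to readable labels.

```python
# Minimal sketch; the model id is a placeholder for this fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="path-or-id-of-this-model")
print(classifier("The book was written by the author last year."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label names depend on the model config
```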
Declan/test_push
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - vision - image-classification datasets: - davanstrien/autotrain-data-color-image-dating widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 0.0061287024480163954 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 55447129534 - CO2 Emissions (in grams): 0.0061 ## Validation Metrics - Loss: 0.989 - Accuracy: 0.596 - Macro F1: 0.584 - Micro F1: 0.596 - Weighted F1: 0.584 - Macro Precision: 0.580 - Micro Precision: 0.596 - Weighted Precision: 0.580 - Macro Recall: 0.596 - Micro Recall: 0.596 - Weighted Recall: 0.596
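A minimal inference sketch; the model id is a placeholder for this AutoTrain checkpoint.

```python
# Minimal sketch; the model id is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="path-or-id-of-this-model")
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
print(classifier(url))
```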
DeepBasak/Slack
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - vision - image-classification datasets: - davanstrien/autotrain-data-color-image-dating widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 1.7620026920346206 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 55447129535 - CO2 Emissions (in grams): 1.7620 ## Validation Metrics - Loss: 1.020 - Accuracy: 0.570 - Macro F1: 0.567 - Micro F1: 0.570 - Weighted F1: 0.567 - Macro Precision: 0.567 - Micro Precision: 0.570 - Weighted Precision: 0.567 - Macro Recall: 0.570 - Micro Recall: 0.570 - Weighted Recall: 0.570
DeepChem/ChemBERTa-10M-MLM
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
90
null
--- tags: - autotrain - vision - image-classification datasets: - davanstrien/autotrain-data-color-image-dating widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 0.563744450414927 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 55447129536 - CO2 Emissions (in grams): 0.5637 ## Validation Metrics - Loss: 1.587 - Accuracy: 0.347 - Macro F1: 0.320 - Micro F1: 0.347 - Weighted F1: 0.320 - Macro Precision: 0.334 - Micro Precision: 0.347 - Weighted Precision: 0.334 - Macro Recall: 0.347 - Micro Recall: 0.347 - Weighted Recall: 0.347
DeepESP/gpt2-spanish-medium
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "es", "dataset:ebooks", "transformers", "GPT-2", "Spanish", "ebooks", "nlg", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
340
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bangla-para-v1-260000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bangla-para-v1-260000 This model is a fine-tuned version of [mHossain/bangla-para-v1-230000](https://huggingface.co/mHossain/bangla-para-v1-230000) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9635 - Rouge1: 0.0 - Rouge2: 0.0 - Rougel: 0.0 - Rougelsum: 0.0 - Gen Len: 18.2917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.241 | 1.0 | 6750 | 0.9635 | 0.0 | 0.0 | 0.0 | 0.0 | 18.2917 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
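A minimal paraphrase-generation sketch; the model id is inferred from the card and base-model names and should be treated as an assumption.

```python
# Minimal sketch; the model id is assumed from the card name, not confirmed.
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="mHossain/bangla-para-v1-260000")
print(paraphraser("এখানে একটি বাংলা বাক্য দিন।", max_length=64))
```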
DeepPavlov/xlm-roberta-large-en-ru-mnli
[ "pytorch", "xlm-roberta", "text-classification", "en", "ru", "dataset:glue", "dataset:mnli", "transformers", "xlm-roberta-large", "xlm-roberta-large-en-ru", "xlm-roberta-large-en-ru-mnli", "has_space" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
227
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola-learning_rate-4e-05 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5732046470010711 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola-learning_rate-4e-05 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.3213 - Matthews Correlation: 0.5732 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5002 | 1.0 | 535 | 0.5568 | 0.4891 | | 0.2954 | 2.0 | 1070 | 0.5052 | 0.5210 | | 0.1976 | 3.0 | 1605 | 0.7016 | 0.5033 | | 0.1367 | 4.0 | 2140 | 0.9378 | 0.5628 | | 0.0889 | 5.0 | 2675 | 1.0129 | 0.5470 | | 0.0555 | 6.0 | 3210 | 1.1484 | 0.5575 | | 0.0431 | 7.0 | 3745 | 1.1081 | 0.5527 | | 0.028 | 8.0 | 4280 | 1.1268 | 0.5697 | | 0.0192 | 9.0 | 4815 | 1.3071 | 0.5627 | | 0.013 | 10.0 | 5350 | 1.3213 | 0.5732 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
DeepPavlov/xlm-roberta-large-en-ru
[ "pytorch", "xlm-roberta", "feature-extraction", "en", "ru", "transformers" ]
feature-extraction
{ "architectures": [ "XLMRobertaModel" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
190
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-dropout-cola-0.2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5957317644481708 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-dropout-cola-0.2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8150 - Matthews Correlation: 0.5957 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4985 | 1.0 | 535 | 0.5022 | 0.4978 | | 0.3168 | 2.0 | 1070 | 0.4357 | 0.5836 | | 0.2116 | 3.0 | 1605 | 0.6536 | 0.5365 | | 0.149 | 4.0 | 2140 | 0.8150 | 0.5957 | | 0.0911 | 5.0 | 2675 | 0.8846 | 0.5838 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
DeltaHub/adapter_t5-3b_mrpc
[ "pytorch", "transformers" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 228 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 228, "warmup_steps": 23, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->