modelId (string, length 4–81) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, length 51–438k) |
---|---|---|---|---|---|---|
DSI/ar_emotion_6
|
[
"pytorch",
"bert",
"transformers"
] | null |
{
"architectures": [
"BertForMultiLabelSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
- tglobal-xl
- XL
- 8bit
- quantized
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
pipeline_tag: summarization
---
# long-t5-tglobal-xl-16384-book-summary: 8-bit quantized version
<a href="https://colab.research.google.com/gist/pszemraj/c19e32baf876deb866c31cd46c86e893/long-t5-xl-accelerate-test.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This is an 8-bit quantized version of the `pszemraj/long-t5-tglobal-xl-16384-book-summary` model. It has been compressed using `bitsandbytes` and can be loaded with low memory usage.
Refer to the [original model](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) for all details about the model architecture and training process. For more information on loading 8-bit models, refer to the `4.28.0` [release information](https://github.com/huggingface/transformers/releases/tag/v4.28.0) and the [example repository](https://huggingface.co/ybelkada/bloom-1b7-8bit).
- The total size of the model is only ~3.5 GB (vs original 12 GB)
- Enables low-RAM loading, making it easier to use in memory-limited environments like Colab
- Requires `bitsandbytes`; as far as I know, at the time of writing it only works on GPU
## Basic Usage
To use the model, install or upgrade `transformers`, `accelerate`, and `bitsandbytes`. Make sure to have `transformers>=4.28.0` and `bitsandbytes>0.37.2`.
```bash
pip install -U -q transformers bitsandbytes accelerate
```
Load the model with `AutoTokenizer` and `AutoModelForSeq2SeqLM`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# `load_in_8bit=True` and `device_map="auto"` follow the 8-bit loading approach referenced above
# (assumed here; requires a CUDA GPU with `bitsandbytes` installed)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, load_in_8bit=True, device_map="auto")
```
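A minimal summarization sketch with the loaded model (the input text and generation settings below are illustrative assumptions, not values from the original card):
```python
long_text = "... put the chapter or article you want to summarize here ..."

# Tokenize up to the model's 16,384-token context and move tensors to the model's device
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=16384).to(model.device)

# Generation settings are illustrative; tune them for your use case
summary_ids = model.generate(**inputs, max_length=512, num_beams=4, no_repeat_ngram_size=3)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```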
## More information about long-t5-tglobal-xl-16384-book-summary
- This is an 8-bit quantized version of `pszemraj/long-t5-tglobal-xl-16384-book-summary`.
- It generalizes reasonably well to academic and narrative text.
- The XL checkpoint typically generates summaries that are considerably better from a human evaluation perspective.
|
DTAI-KULeuven/robbertje-1-gb-merged
|
[
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | 2023-04-29T16:13:30Z |
---
license: openrail
datasets:
- Locutusque/ColumnedChatCombined
language:
- en
- zh
- ru
metrics:
- bleu
- perplexity
- loss
- reward
- penalty
widget:
- text: "<|USER|> Hello! <|ASSISTANT|> "
pipeline_tag: text-generation
---
# Model Card
## Model Details
- Model Name: gpt2-conversational-or-qa
- Model Type: Language Modeling
- Task: Generating Conversational Responses
- Description: This model is trained on a dataset of conversations between a user and an AI assistant, with the goal of generating a coherent and relevant response to the user's input. It uses the GPT-2 architecture, a state-of-the-art transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The model is fine-tuned on the conversational data using maximum likelihood estimation, and is evaluated based on its ability to generate responses that are both grammatically correct and semantically relevant to the user's input.
## Intended Use
This model is intended to be used for generating conversational responses in a variety of contexts, such as chatbots, virtual assistants, and customer service applications. It is designed to provide natural and engaging responses to user input, with a focus on maintaining a consistent tone and style throughout the conversation. The model is suitable for use in both text-based and voice-based interfaces, and can be easily integrated into existing applications using the PyTorch and Transformers frameworks.
## Training Data
The model is trained on a large dataset of conversational data, consisting of interactions between users and an AI assistant. The data is preprocessed to remove any sensitive information and is formatted in a way that is suitable for training a language model. The training data is split into a training set and a validation set, with the training set used to update the model parameters and the validation set used to evaluate model performance. The model was trained on 245,000 examples over 1,225,000 steps and achieved decent metrics.
This model outperformed the base GPT-2 model significantly on a new conversational dataset during a fine-tuning session. Here is a side-by-side comparison of the two models during the first steps of training:
```python
# Base GPT-2
"""
Epoch 1/5, Batch 1/10000: Loss - 64.9255, Reward - 260.0000, Penalty - 624.0000, BLEU - 0.0000
Epoch 1/5, Batch 2/10000: Loss - 57.4635, Reward - 303.0000, Penalty - 870.0000, BLEU - 0.0000
Epoch 1/5, Batch 3/10000: Loss - 67.8061, Reward - 295.0000, Penalty - 908.0000, BLEU - 0.0000
Epoch 1/5, Batch 4/10000: Loss - 59.6118, Reward - 800.0000, Penalty - 740.0000, BLEU - 0.0000
Epoch 1/5, Batch 5/10000: Loss - 67.4855, Reward - 402.0000, Penalty - 806.0000, BLEU - 0.0000
Epoch 1/5, Batch 6/10000: Loss - 29.3718, Reward - 937.0000, Penalty - 760.0000, BLEU - 0.0000
Epoch 1/5, Batch 7/10000: Loss - 79.0709, Reward - 390.0000, Penalty - 1114.0000, BLEU - 0.0000
Epoch 1/5, Batch 8/10000: Loss - 61.4583, Reward - 385.0000, Penalty - 760.0000, BLEU - 0.0000
Epoch 1/5, Batch 9/10000: Loss - 56.3084, Reward - 741.0000, Penalty - 560.0000, BLEU - 3.5500
Epoch 1/5, Batch 10/10000: Loss - 80.0192, Reward - 838.0000, Penalty - 1424.0000, BLEU - 0.0000
Epoch 1/5, Batch 11/10000: Loss - 51.8236, Reward - 228.0000, Penalty - 812.0000, BLEU - 0.0001
Epoch 1/5, Batch 12/10000: Loss - 71.4071, Reward - 541.0000, Penalty - 982.0000, BLEU - 0.0000
Epoch 1/5, Batch 13/10000: Loss - 33.3624, Reward - 910.0000, Penalty - 1002.0000, BLEU - 0.0027
Epoch 1/5, Batch 14/10000: Loss - 55.9721, Reward - 808.0000, Penalty - 798.0000, BLEU - 0.0005
Epoch 1/5, Batch 15/10000: Loss - 67.0336, Reward - 517.0000, Penalty - 764.0000, BLEU - 0.0000
"""
# Conversational GPT-2
"""
Epoch 1/5, Batch 1/10000: Loss - 6.1980, Reward - 887.0000, Penalty - 1500.0000, BLEU - 0.0648
Epoch 1/5, Batch 2/10000: Loss - 4.5750, Reward - 245.0000, Penalty - 1618.0000, BLEU - 0.0008
Epoch 1/5, Batch 3/10000: Loss - 5.1264, Reward - 600.0000, Penalty - 642.0000, BLEU - 5.7981
Epoch 1/5, Batch 4/10000: Loss - 0.2995, Reward - 1020.0000, Penalty - 74.0000, BLEU - 13.8469
Epoch 1/5, Batch 5/10000: Loss - 7.9377, Reward - 203.0000, Penalty - 1700.0000, BLEU - 0.3218
Epoch 1/5, Batch 6/10000: Loss - 5.0522, Reward - 1020.0000, Penalty - 2034.0000, BLEU - 0.1946
Epoch 1/5, Batch 7/10000: Loss - 2.0585, Reward - 925.0000, Penalty - 526.0000, BLEU - 16.1298
Epoch 1/5, Batch 8/10000: Loss - 5.9736, Reward - 1009.0000, Penalty - 1844.0000, BLEU - 0.0085
Epoch 1/5, Batch 9/10000: Loss - 6.0867, Reward - 245.0000, Penalty - 1690.0000, BLEU - 1.9342
Epoch 1/5, Batch 10/10000: Loss - 7.8497, Reward - 155.0000, Penalty - 1780.0000, BLEU - 0.0115
Epoch 1/5, Batch 11/10000: Loss - 3.8887, Reward - 1012.0000, Penalty - 2010.0000, BLEU - 0.6957
Epoch 1/5, Batch 12/10000: Loss - 6.6133, Reward - 216.0000, Penalty - 1638.0000, BLEU - 1.7853
Epoch 1/5, Batch 13/10000: Loss - 1.3319, Reward - 945.0000, Penalty - 374.0000, BLEU - 0.0075
Epoch 1/5, Batch 14/10000: Loss - 2.6296, Reward - 956.0000, Penalty - 414.0000, BLEU - 3.2207
Epoch 1/5, Batch 15/10000: Loss - 6.8827, Reward - 1013.0000, Penalty - 1970.0000, BLEU - 3.7418
"""
```
## Model Architecture
The model architecture used in this model is GPT-2, a transformer-based language model that is capable of generating high-quality text with a wide range of styles and tones. The GPT-2 architecture consists of a stack of transformer decoder blocks, with self-attention mechanisms that allow the model to capture long-term dependencies and generate coherent text.
## Evaluation Metrics
The model is evaluated based on several metrics, including loss, reward, penalty, BLEU score, and perplexity. The loss metric is calculated during training and reflects the difference between the predicted output and the actual output. The reward metric is based on the number of correct words generated by the model, while the penalty metric penalizes the model for repeating words consecutively. The BLEU score measures the similarity between the generated text and the ground truth text, while the perplexity metric measures how well the model is able to predict the next word in a sequence. During validation, the model achieved the following metrics:
- BLEU Score: 9
- Perplexity: 19
- Loss: 1.7
Although these metrics may seem mediocre, this is actually desirable: it allows the model to produce open-ended responses while remaining coherent with the user's input.
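To make the reward/penalty description above concrete, here is a toy illustration of the general idea. This is not the author's actual implementation; the word-overlap reward and consecutive-repeat penalty below are assumptions based on the description:
```python
def toy_reward_and_penalty(generated_tokens, reference_tokens):
    # Reward: count of generated tokens that also appear in the reference
    reward = sum(1 for tok in generated_tokens if tok in set(reference_tokens))
    # Penalty: count of tokens that immediately repeat the previous token
    penalty = sum(1 for prev, cur in zip(generated_tokens, generated_tokens[1:]) if prev == cur)
    return reward, penalty

# Example: one consecutive repetition ("cat cat"), all words present in the reference
print(toy_reward_and_penalty("the cat cat sat".split(), "the cat sat down".split()))  # (4, 1)
```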
## Limitations and Bias
This model is not suitable for all use cases due to its limited training time on a weak computer. As a result, it may produce irrelevant or nonsensical responses. Additionally, it has not been fine-tuned to remember the chat history, is unable to provide follow-up responses, and it does not know the answer to many questions (it was only fine-tuned to respond in a conversational way). For optimal performance, we recommend using a GPU with at least 4GB of VRAM and downloading the model manually instead of using the Transformers library or deploying it on the Inference API. Here's how you should deploy the model:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load the base GPT-2 tokenizer and model, then add the special tokens used during fine-tuning
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.add_special_tokens({'eos_token': '<|End|>'})
special_tokens = {
    "additional_special_tokens": ["<|USER|>", "<|SYSTEM|>", "<|ASSISTANT|>"]
}
tokenizer.add_special_tokens(special_tokens)
model.resize_token_embeddings(len(tokenizer))

# Load the fine-tuned weights and move the model to GPU if available
model.load_state_dict(torch.load("path/to/model"))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

def generate_text(model, tokenizer, prompt, max_length=1024):
    # Wrap the user input in the conversational format the model was fine-tuned on
    prompt = f'<|USER|> {prompt} <|ASSISTANT|> '
    input_ids = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt").to(device)
    attention_mask = torch.ones_like(input_ids).to(device)
    output = model.generate(input_ids,
                            max_length=max_length,
                            do_sample=True,
                            top_k=35,
                            top_p=0.80,
                            pad_token_id=tokenizer.pad_token_id,
                            eos_token_id=tokenizer.eos_token_id,
                            attention_mask=attention_mask)
    # Keep only the text generated after the <|ASSISTANT|> marker
    output_ids = tokenizer.decode(output[0], skip_special_tokens=False)
    assistant_token_index = output_ids.index('<|ASSISTANT|>') + len('<|ASSISTANT|>')
    next_token_index = output_ids.find('<|', assistant_token_index)
    output_ids = output_ids[assistant_token_index:next_token_index]
    return output_ids

# Loop to interact with the model
while True:
    prompt = input("Enter a prompt (or 'q' to quit): ")
    if prompt == "q":
        break
    output_text = generate_text(model, tokenizer, prompt)
    print(output_text)
```
## Deploying and training the model
The model has been fine-tuned on a specific input format: `<|USER|> {user prompt} <|ASSISTANT|> {model prediction} <|End|>`. For the best performance, the input text should be `<|USER|> {user prompt} <|ASSISTANT|> ` and the target/label should be `<|USER|> {user prompt} <|ASSISTANT|> {dataset output} <|End|>`, as sketched below.
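A minimal sketch of building these input/label strings for fine-tuning (the helper function and its example arguments are hypothetical, for illustration only):
```python
def build_example(user_prompt: str, dataset_output: str):
    # Input seen by the model at inference time
    input_text = f"<|USER|> {user_prompt} <|ASSISTANT|> "
    # Target/label string used during fine-tuning
    label_text = f"<|USER|> {user_prompt} <|ASSISTANT|> {dataset_output} <|End|>"
    return input_text, label_text

inp, label = build_example("Hello!", "Hi there! How can I help you today?")
```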
|
DannyMichael/ECU911
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- HumanoidStandup-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HumanoidStandup-v2
type: HumanoidStandup-v2
metrics:
- type: mean_reward
value: 65822.31 +/- 10972.33
name: mean_reward
verified: false
---
# **PPO** Agent playing **HumanoidStandup-v2**
This is a trained model of a **PPO** agent playing **HumanoidStandup-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
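As a placeholder until the author adds their code, a minimal loading sketch (the repository id and filename below are hypothetical, since the card does not state them):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical repo id and filename -- replace with the actual ones for this model
checkpoint_path = load_from_hub(repo_id="<user>/ppo-HumanoidStandup-v2", filename="ppo-HumanoidStandup-v2.zip")
model = PPO.load(checkpoint_path)
```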
|
Darein/Def
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-04-28T01:48:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: ec_classfication
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ec_classfication
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6543
- F1: 0.7609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 31 | 0.6418 | 0.4262 |
| No log | 2.0 | 62 | 0.4992 | 0.7342 |
| No log | 3.0 | 93 | 0.4732 | 0.7879 |
| No log | 4.0 | 124 | 0.4817 | 0.7089 |
| No log | 5.0 | 155 | 0.4872 | 0.7742 |
| No log | 6.0 | 186 | 0.5026 | 0.7872 |
| No log | 7.0 | 217 | 0.5202 | 0.7778 |
| No log | 8.0 | 248 | 0.5280 | 0.7711 |
| No log | 9.0 | 279 | 0.5629 | 0.75 |
| No log | 10.0 | 310 | 0.6319 | 0.7872 |
| No log | 11.0 | 341 | 0.6363 | 0.7872 |
| No log | 12.0 | 372 | 0.6850 | 0.7708 |
| No log | 13.0 | 403 | 0.6702 | 0.7872 |
| No log | 14.0 | 434 | 0.6495 | 0.7692 |
| No log | 15.0 | 465 | 0.6543 | 0.7609 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Darkrider/covidbert_medmarco
|
[
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2010.05987",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 35 | null |
---
language:
- ko
tags:
- text generation
- pytorch
- causal-lm
widget:
- text: "B: 인공지능 서버 전용 인터넷 데이터센터 건립을 위한 사업계획서를 작성하라.\nA:"
inference:
parameters:
max_length: 250
do_sample: False
license: apache-2.0
---
# polyglot-12.8B Korean finetuned for instruction following
|
DataikuNLP/TinyBERT_General_4L_312D
|
[
"pytorch",
"jax",
"bert",
"arxiv:1909.10351",
"transformers"
] | null |
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 74 | 2023-04-28T02:02:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.04 +/- 21.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DataikuNLP/paraphrase-MiniLM-L6-v2
|
[
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 25 | 2023-04-28T02:04:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-synthesized-turkish-8-hour-hlr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-synthesized-turkish-8-hour-hlr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3824
- Wer: 49.2902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7481 | 0.52 | 100 | 0.2675 | 14.6834 |
| 0.1975 | 1.04 | 200 | 0.2534 | 13.2144 |
| 0.1388 | 1.56 | 300 | 0.2755 | 15.6647 |
| 0.1585 | 2.08 | 400 | 0.3080 | 14.6649 |
| 0.1153 | 2.6 | 500 | 0.3421 | 17.7447 |
| 0.1241 | 3.12 | 600 | 0.3570 | 16.8189 |
| 0.1093 | 3.65 | 700 | 0.3776 | 18.8125 |
| 0.09 | 4.17 | 800 | 0.3859 | 30.0518 |
| 0.0751 | 4.69 | 900 | 0.3874 | 17.3929 |
| 0.0758 | 5.21 | 1000 | 0.3987 | 20.0901 |
| 0.0602 | 5.73 | 1100 | 0.4017 | 17.1460 |
| 0.0568 | 6.25 | 1200 | 0.3824 | 15.6154 |
| 0.0454 | 6.77 | 1300 | 0.3926 | 15.8808 |
| 0.0433 | 7.29 | 1400 | 0.4146 | 16.3869 |
| 0.0341 | 7.81 | 1500 | 0.4078 | 16.1153 |
| 0.0295 | 8.33 | 1600 | 0.4192 | 17.1275 |
| 0.0274 | 8.85 | 1700 | 0.4140 | 16.3745 |
| 0.0246 | 9.38 | 1800 | 0.4077 | 21.0344 |
| 0.0211 | 9.9 | 1900 | 0.4003 | 19.8741 |
| 0.0149 | 10.42 | 2000 | 0.4054 | 108.7335 |
| 0.0172 | 10.94 | 2100 | 0.3917 | 20.6024 |
| 0.0138 | 11.46 | 2200 | 0.3942 | 889.4643 |
| 0.0108 | 11.98 | 2300 | 0.3906 | 55.0673 |
| 0.0099 | 12.5 | 2400 | 0.3834 | 29.9778 |
| 0.0067 | 13.02 | 2500 | 0.3947 | 34.5883 |
| 0.0045 | 13.54 | 2600 | 0.3940 | 20.9789 |
| 0.0035 | 14.06 | 2700 | 0.3911 | 15.6462 |
| 0.0031 | 14.58 | 2800 | 0.3905 | 18.3990 |
| 0.0018 | 15.1 | 2900 | 0.3919 | 16.3190 |
| 0.0011 | 15.62 | 3000 | 0.3906 | 18.0286 |
| 0.001 | 16.15 | 3100 | 0.3911 | 17.6521 |
| 0.0006 | 16.67 | 3200 | 0.3813 | 27.6879 |
| 0.0007 | 17.19 | 3300 | 0.3800 | 45.7536 |
| 0.0003 | 17.71 | 3400 | 0.3805 | 51.2529 |
| 0.0001 | 18.23 | 3500 | 0.3815 | 51.7282 |
| 0.0001 | 18.75 | 3600 | 0.3821 | 47.0065 |
| 0.0002 | 19.27 | 3700 | 0.3821 | 45.8585 |
| 0.0001 | 19.79 | 3800 | 0.3823 | 47.7904 |
| 0.0001 | 20.31 | 3900 | 0.3824 | 49.2594 |
| 0.0003 | 20.83 | 4000 | 0.3824 | 49.2902 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2
|
[
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] |
sentence-similarity
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,517 | 2023-04-28T02:05:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-synthesized-turkish-8-hour-llr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-synthesized-turkish-8-hour-llr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2166
- eval_wer: 13.5662
- eval_runtime: 518.2334
- eval_samples_per_second: 1.482
- eval_steps_per_second: 0.185
- epoch: 18.75
- step: 3600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Davlan/xlm-roberta-base-finetuned-luo
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | 2023-04-28T03:21:50Z |
---
license: mit
language:
- en
---
## Instruction-tuned LLaMA (Alpaca-GPT4)
Fine-tune [LLaMA-7B](https://huggingface.co/decapoda-research/llama-7b-hf) on the alpaca dataset.
The main training scripts are from [stanford-alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), while the data is from [GPT-4-LLM repo](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM#data-release), with the default training hyper-parameters.
Please refer to [this page](https://instruction-tuning-with-gpt-4.github.io/) for more details.
|
Davlan/xlm-roberta-base-finetuned-yoruba
|
[
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 29 | 2023-04-28T03:35:33Z |
3bit quantized version of this: https://huggingface.co/ausboss/llama-30b-supercot
GPTQ quantization using https://github.com/0cc4m/GPTQ-for-LLaMa
Made at the request of someone that wanted a 3bit version. The file is 17% smaller than 4bit non-groupsize, but the wikitext2 ppl is 12% worse. I don't have a functioning Ooba install so I can't test this myself.
Command used to quantize:
```
python llama.py c:\llama-30b-supercot c4 --wbits 3 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors
```
Evaluation & Score (Lower is better):
* WikiText2: 5.22 (12% worse than 4bit non-groupsize)
* PTB: 19.63 (11% worse than 4bit non-groupsize)
* C4: 6.93 (7% worse than 4bit non-groupsize)
4bit non-groupsize version is here: https://huggingface.co/tsumeone/llama-30b-supercot-4bit-cuda
4bit 128 groupsize version is here: https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda
|
Davlan/xlm-roberta-base-sadilar-ner
|
[
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-04-28T03:43:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-imageclds
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9992215879605605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-imageclds
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0044
- Accuracy: 0.9992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.177 | 1.0 | 241 | 0.0830 | 0.9942 |
| 0.0597 | 2.0 | 482 | 0.0107 | 0.9982 |
| 0.0387 | 3.0 | 723 | 0.0068 | 0.9988 |
| 0.0381 | 4.0 | 964 | 0.0044 | 0.9992 |
| 0.0361 | 5.0 | 1205 | 0.0040 | 0.9991 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
Davlan/xlm-roberta-base-wikiann-ner
|
[
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 235 | 2023-04-28T03:50:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is assumed to be the pickle-loading helper from the Hugging Face
# Deep RL course utilities; `gym` must also be imported before calling `gym.make`.
model = load_from_hub(repo_id="Falguni/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
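A short evaluation rollout sketch using the loaded Q-table (assumptions: the pickled dictionary stores the table under a `"qtable"` key, as in the Deep RL course utilities, and the environment follows the gymnasium-style `reset()`/`step()` API):
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    # Act greedily with respect to the learned Q-values for the current state
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
print("Episode return:", total_reward)
```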
|
Davlan/xlm-roberta-large-ner-hrl
|
[
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,322 | 2023-04-28T03:51:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ar-Noise2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ar-Noise2
This model is a fine-tuned version of [MohammedNasri/whisper-small-ar-Noise](https://huggingface.co/MohammedNasri/whisper-small-ar-Noise) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2544
- Wer: 20.9167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0283 | 0.21 | 1000 | 0.3144 | 23.5388 |
| 0.0543 | 0.42 | 2000 | 0.2991 | 22.7861 |
| 0.083 | 0.62 | 3000 | 0.2827 | 23.0508 |
| 0.087 | 0.83 | 4000 | 0.2611 | 21.3127 |
| 0.0223 | 1.04 | 5000 | 0.2544 | 20.9167 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Declan/CNN_model_v2
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.47 +/- 16.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Declan/CNN_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- A
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/jiyang_tang_aphsiabank_english_asr_ebranchformer_small_wavlm_large1`
This model was trained by Jiyang Tang using A recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 4ddda8634b6b03fbbdae97927e58722a13f1f7c8
pip install -e .
cd jtang1/A/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/jiyang_tang_aphsiabank_english_asr_ebranchformer_small_wavlm_large1
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 13 15:37:27 EDT 2023`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.8.1`
- Git hash: `b0b2a0aa9c335267046e83036b87e88af30698da`
- Commit date: `Tue Feb 7 14:56:31 2023 -0500`
## asr_ebranchformer_wavlm
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|28424|240039|81.3|13.2|5.6|3.4|22.2|67.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|28424|1103375|89.9|4.1|6.0|3.7|13.8|67.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_ebranchformer_small_wavlm_large1.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_ebranchformer_wavlm
ngpu: 1
seed: 2022
num_workers: 2
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 47613
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 200
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 6000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- sound
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/val/wav.scp
- speech
- sound
- - dump/raw/val/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 2500
token_list:
- <blank>
- <unk>
- '[APH]'
- '[NONAPH]'
- <space>
- e
- t
- a
- h
- o
- n
- i
- s
- d
- r
- u
- l
- m
- w
- y
- g
- c
- b
- f
- p
- k
- ''''
- v
- j
- <
- L
- A
- U
- '>'
- ɪ
- x
- ə
- z
- ɛ
- ɑ
- q
- ɹ
- æ
- ˞
- ʌ
- ʃ
- ʊ
- ɔ
- ŋ
- ɚ
- ɾ
- ʒ
- ð
- θ
- ɜ
- ɝ
- ɡ
- '0'
- ː
- ʔ
- ɒ
- é
- ɸ
- ̩
- ʤ
- ʧ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: local/nlsyms.txt
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
layer_drop_rate: 0.1
input_layer: conv2d1
macaron_ffn: true
pos_enc_layer_type: rel_pos
attention_layer_type: rel_selfattn
rel_pos_type: latest
cgmlp_linear_units: 3072
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
positionwise_layer_type: linear
use_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
layer_drop_rate: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Declan/WallStreetJournal_model_v5
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: taNER-500-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# taNER-500-V2
This model is a fine-tuned version of [livinNector/tabert-500](https://huggingface.co/livinNector/tabert-500) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4057
- Precision: 0.7870
- Recall: 0.8040
- F1: 0.7954
- Accuracy: 0.9056
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3751 | 0.49 | 1000 | 0.3876 | 0.7294 | 0.7381 | 0.7337 | 0.8758 |
| 0.3211 | 0.99 | 2000 | 0.3530 | 0.7603 | 0.7427 | 0.7514 | 0.8851 |
| 0.2932 | 1.48 | 3000 | 0.3443 | 0.7501 | 0.7757 | 0.7627 | 0.8882 |
| 0.2884 | 1.98 | 4000 | 0.3404 | 0.7553 | 0.7878 | 0.7712 | 0.8907 |
| 0.268 | 2.47 | 5000 | 0.3241 | 0.7705 | 0.7888 | 0.7795 | 0.8959 |
| 0.2638 | 2.96 | 6000 | 0.3246 | 0.7823 | 0.7850 | 0.7836 | 0.8954 |
| 0.246 | 3.46 | 7000 | 0.3175 | 0.7769 | 0.7989 | 0.7878 | 0.8999 |
| 0.2457 | 3.95 | 8000 | 0.3216 | 0.7732 | 0.7934 | 0.7832 | 0.8999 |
| 0.2253 | 4.44 | 9000 | 0.3180 | 0.7792 | 0.7983 | 0.7887 | 0.8995 |
| 0.2271 | 4.94 | 10000 | 0.3250 | 0.7868 | 0.7895 | 0.7882 | 0.8996 |
| 0.2085 | 5.43 | 11000 | 0.3435 | 0.7838 | 0.7967 | 0.7902 | 0.8995 |
| 0.2091 | 5.93 | 12000 | 0.3300 | 0.7855 | 0.7958 | 0.7906 | 0.9009 |
| 0.1927 | 6.42 | 13000 | 0.3272 | 0.7771 | 0.7983 | 0.7876 | 0.9017 |
| 0.1932 | 6.91 | 14000 | 0.3310 | 0.7836 | 0.8060 | 0.7946 | 0.9047 |
| 0.1777 | 7.41 | 15000 | 0.3377 | 0.7882 | 0.8045 | 0.7963 | 0.9052 |
| 0.1785 | 7.9 | 16000 | 0.3406 | 0.7812 | 0.8042 | 0.7925 | 0.9036 |
| 0.1658 | 8.4 | 17000 | 0.3528 | 0.7892 | 0.7992 | 0.7942 | 0.9043 |
| 0.1651 | 8.89 | 18000 | 0.3419 | 0.7914 | 0.8072 | 0.7992 | 0.9068 |
| 0.1549 | 9.38 | 19000 | 0.3600 | 0.7931 | 0.7964 | 0.7948 | 0.9045 |
| 0.1539 | 9.88 | 20000 | 0.3525 | 0.7851 | 0.8091 | 0.7970 | 0.9052 |
| 0.1449 | 10.37 | 21000 | 0.3634 | 0.7881 | 0.7998 | 0.7939 | 0.9046 |
| 0.1436 | 10.86 | 22000 | 0.3736 | 0.7916 | 0.8058 | 0.7986 | 0.9069 |
| 0.1368 | 11.36 | 23000 | 0.3771 | 0.7892 | 0.8020 | 0.7955 | 0.9053 |
| 0.1347 | 11.85 | 24000 | 0.3800 | 0.7861 | 0.8060 | 0.7959 | 0.9045 |
| 0.1281 | 12.35 | 25000 | 0.3911 | 0.7852 | 0.8055 | 0.7952 | 0.9059 |
| 0.1272 | 12.84 | 26000 | 0.3919 | 0.7880 | 0.8005 | 0.7942 | 0.9052 |
| 0.1217 | 13.33 | 27000 | 0.4021 | 0.7887 | 0.7981 | 0.7934 | 0.9050 |
| 0.1202 | 13.83 | 28000 | 0.3959 | 0.7845 | 0.8057 | 0.7950 | 0.9056 |
| 0.1175 | 14.32 | 29000 | 0.4066 | 0.7864 | 0.8031 | 0.7947 | 0.9052 |
| 0.115 | 14.81 | 30000 | 0.4057 | 0.7870 | 0.8040 | 0.7954 | 0.9056 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DeepPavlov/roberta-large-winogrande
|
[
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:winogrande",
"arxiv:1907.11692",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 348 | null |
---
license: other
---
# OpenAssistant LLaMA 30B SFT 7
Due to the license attached to LLaMA models by Meta AI, it is not possible to distribute LLaMA-based models directly. Instead, we provide XOR weights for the OA models.
Thanks to Mick for writing the `xor_codec.py` script, which enables this process.
## The Process
Note: This process applies to the `oasst-sft-7-llama-30b` model. The same process can be applied to other models in the future, but the checksums will be different.
**This process is tested only on Linux (specifically Ubuntu). Some users have reported that the process does not work on Windows. We recommend using WSL if you only have a Windows machine.**
To use OpenAssistant LLaMA-Based Models, you should have a copy of the original LLaMA model weights and add them to a `llama` subdirectory here. If you cannot obtain the original LLaMA, see the note in italic below for a possible alternative.
Ensure your LLaMA 30B checkpoint matches the correct md5sums:
```
f856e9d99c30855d6ead4d00cc3a5573 consolidated.00.pth
d9dbfbea61309dc1e087f5081e98331a consolidated.01.pth
2b2bed47912ceb828c0a37aac4b99073 consolidated.02.pth
ea0405cdb5bc638fee12de614f729ebc consolidated.03.pth
4babdbd05b8923226a9e9622492054b6 params.json
```
*If you do not have a copy of the original LLaMA weights and cannot obtain one, you may still be able to complete this process. Some users have reported that [this model](https://huggingface.co/elinas/llama-30b-hf-transformers-4.29) can be used as a base for the XOR conversion. This will also allow you to skip to Step 7. However, we only support conversion starting from LLaMA original checkpoint and cannot provide support if you experience issues with this alternative approach.*
**Important: Follow these exact steps to convert your original LLaMA checkpoint to a HuggingFace Transformers-compatible format. If you use the wrong versions of any dependency, you risk ending up with weights which are not compatible with the XOR files.**
1. Create a clean Python **3.10** virtual environment & activate it:
```
python3.10 -m venv xor_venv
source xor_venv/bin/activate
```
2. Clone transformers repo and switch to tested version:
```
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c
pip install .
```
3. Install **exactly** these dependency versions:
```
pip install torch==1.13.1 accelerate==0.18.0 sentencepiece==0.1.98 protobuf==3.20.1
```
4. Check `pip freeze` output:
```
accelerate==0.18.0
certifi==2022.12.7
charset-normalizer==3.1.0
filelock==3.12.0
huggingface-hub==0.13.4
idna==3.4
numpy==1.24.2
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
packaging==23.1
protobuf==3.20.1
psutil==5.9.5
PyYAML==6.0
regex==2023.3.23
requests==2.28.2
sentencepiece==0.1.98
tokenizers==0.13.3
torch==1.13.1
tqdm==4.65.0
transformers @ file:///mnt/data/koepf/transformers
typing_extensions==4.5.0
urllib3==1.26.15
```
5. While in `transformers` repo root, run HF LLaMA conversion script:
```
python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir <input_path_llama_base> --output_dir <output_path_llama30b_hf> --model_size 30B
```
6. Run `find . -type f -exec md5sum "{}" +` in the conversion target directory (`output_dir`). This should produce exactly the following checksums if your files are correct:
```
462a2d07f65776f27c0facfa2affb9f9 ./pytorch_model-00007-of-00007.bin
e1dc8c48a65279fb1fbccff14562e6a3 ./pytorch_model-00003-of-00007.bin
9cffb1aeba11b16da84b56abb773d099 ./pytorch_model-00001-of-00007.bin
aee09e21813368c49baaece120125ae3 ./generation_config.json
92754d6c6f291819ffc3dfcaf470f541 ./pytorch_model-00005-of-00007.bin
3eddc6fc02c0172d38727e5826181adb ./pytorch_model-00004-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
99762d59efa6b96599e863893cf2da02 ./pytorch_model-00006-of-00007.bin
598538f18fed1877b41f77de034c0c8a ./config.json
fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json
fecfda4fba7bfd911e187a85db5fa2ef ./pytorch_model.bin.index.json
edd1a5897748864768b1fab645b31491 ./tokenizer_config.json
6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json
5cfcb78b908ffa02e681cce69dbe4303 ./pytorch_model-00002-of-00007.bin
```
**Important: You should now have the correct LLaMA weights and be ready to apply the XORs. If the checksums above do not match yours, there is a problem.**
7. Once you have LLaMA weights in the correct format, you can apply the XOR decoding:
```
python xor_codec.py oasst-sft-7-llama-30b/ oasst-sft-7-llama-30b-xor/ llama30b_hf/
```
You should **expect to see one warning message** during execution:
`Exception when processing 'added_tokens.json'`
This is normal. **If similar messages appear for other files, something has gone wrong**.
8. Now run `find . -type f -exec md5sum "{}" +` in the output directory (here `oasst-sft-7-llama-30b`). You should get exactly the following checksums:
```
8ae4537c64a1ef202d1d82eb0d356703 ./pytorch_model-00007-of-00007.bin
d84f99d23369e159e50cb0597b6c9673 ./pytorch_model-00003-of-00007.bin
f7de50a725d678eb65cc3dced727842f ./pytorch_model-00001-of-00007.bin
27b0dc092f99aa2efaf467b2d8026c3f ./added_tokens.json
aee09e21813368c49baaece120125ae3 ./generation_config.json
31a2b04b139f4af043ad04478f1497f5 ./pytorch_model-00005-of-00007.bin
a16a2dfacbde77a1659a7c9df7966d0a ./pytorch_model-00004-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
baa778a8679d47b085446faf97b72758 ./pytorch_model-00006-of-00007.bin
b2d64f2198ab7b53e3b8d12fbcadeb3c ./config.json
deb33dd4ffc3d2baddcce275a00b7c1b ./tokenizer.json
76d47e4f51a8df1d703c6f594981fcab ./pytorch_model.bin.index.json
ed59bfee4e87b9193fea5897d610ab24 ./tokenizer_config.json
704373f0c0d62be75e5f7d41d39a7e57 ./special_tokens_map.json
e836168cdbbb74db51d04f25ed6408ce ./pytorch_model-00002-of-00007.bin
```
If so you have successfully decoded the weights and should be able to use the model with HuggingFace Transformers. **If your checksums do not match those above, there is a problem.**
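As a quick smoke test, the decoded directory can be loaded like any other local Hugging Face checkpoint. This is only a sketch: the path matches the output directory used in the commands above, and the loading options (`device_map`, dtype) are illustrative rather than prescribed by this card.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "oasst-sft-7-llama-30b" is the decoded output directory produced by the steps above.
tokenizer = AutoTokenizer.from_pretrained("oasst-sft-7-llama-30b")
model = AutoModelForCausalLM.from_pretrained(
    "oasst-sft-7-llama-30b",
    device_map="auto",   # requires accelerate; spreads the 30B weights across available devices
    torch_dtype="auto",
)
print(model.config.architectures)  # expect ['LlamaForCausalLM']
```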
### Configuration
```
llama-30b-sft-7:
dtype: fp16
log_dir: "llama_log_30b"
learning_rate: 1e-5
model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
#model_name: OpenAssistant/llama-30b-super-pretrain
output_dir: llama_model_30b
deepspeed_config: configs/zero3_config_sft.json
weight_decay: 0.0
residual_dropout: 0.0
max_length: 2048
use_flash_attention: true
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 12
per_device_train_batch_size: 2
per_device_eval_batch_size: 3
eval_steps: 101
save_steps: 485
num_train_epochs: 4
save_total_limit: 3
use_custom_sampler: true
sort_by_length: false
#save_strategy: steps
save_strategy: epoch
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
val_split: 0.05
- vicuna:
val_split: 0.05
max_val_set: 800
fraction: 1.0
- dolly15k:
val_split: 0.05
max_val_set: 300
- grade_school_math_instructions:
val_split: 0.05
- code_alpaca:
val_split: 0.05
max_val_set: 250
```
- **OASST dataset paper:** https://arxiv.org/abs/2304.07327
|
DeepPavlov/xlm-roberta-large-en-ru
|
[
"pytorch",
"xlm-roberta",
"feature-extraction",
"en",
"ru",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 190 | null |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A photo of a sks capsule.
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Akuxcw/capsule1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "A photo of a sks capsule." using [DreamBooth](https://dreambooth.github.io/). A minimal loading sketch and some example images follow.
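The sketch below assumes a recent `diffusers` release; the adapter id is taken from this card's title and the prompt from its `instance_prompt`, but exact version requirements are not stated in the card.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA attention weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("Akuxcw/capsule1")

image = pipe("A photo of a sks capsule.", num_inference_steps=30).images[0]
image.save("capsule.png")
```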




















|
DeividasM/wav2vec2-large-xlsr-53-lithuanian
|
[
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.45 +/- 15.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
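Until the author fills this in, a minimal sketch of the usual loading path looks like the following. The `repo_id` and `filename` are placeholders because this card does not state them, and older stable-baselines3 releases use `gym` instead of `gymnasium`.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders: substitute the actual repo id and checkpoint filename for this model.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```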
|
DeltaHub/adapter_t5-3b_mrpc
|
[
"pytorch",
"transformers"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
datasets:
- c-s-ale/alpaca-gpt4-data
pipeline_tag: text2text-generation
---
This repo provides a training checkpoint of LLaMA fine-tuned on the alpaca_data_gpt4 dataset via LoRA applied to the MLP modules, trained on 8x A100 (80 GB).
He et al. (2022) observed that the FFN can make better use of modification at larger capacities.
The code is provided by [tloen/alpaca-lora: Instruct-tune LLaMA on consumer hardware (github.com)](https://github.com/tloen/alpaca-lora).
We modified the training script invocation to:
```bash
torchrun --nproc_per_node=8 finetune.py \
--base_model '/cache1/chtan/large_models/llama-hf/llama-65b' \
--data_path './alpaca_data_gpt4.json' \
--output_dir './gpt4-alpaca-lora_mlp-65b' \
--batch_size 128 \
--micro_batch_size 2 \
--num_epochs 10 \
--learning_rate 1e-4 \
--cutoff_len 512 \
--val_set_size 2000 \
--lora_r 16 \
--lora_alpha 16 \
--lora_dropout 0.05 \
--lora_target_modules '[gate_proj,down_proj,up_proj]' \
--train_on_inputs \
--group_by_length
```
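For inference, the resulting adapter can typically be loaded on top of the base model with `peft`. This is only a sketch: the base-model path and adapter path below are placeholders, since the card does not spell them out.
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholders: point these at your local LLaMA-65B HF weights and at this adapter.
base_model_path = "path/to/llama-65b-hf"
adapter_path = "path/to/gpt4-alpaca-lora_mlp-65b"

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
base = LlamaForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_path)
model.eval()
```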
### Examples
**Instruction**: Tell me about alpacas.
**gpt4-alpaca-lora_mlp-65b**:
Alpacas are small, domesticated mammals that are closely related to llamas. They are native to the Andes Mountains of South America, primarily in Peru, Bolivia, and Chile. These animals have been domesticated for thousands of years and were used by the Incas for their fleece, meat, and as pack animals.
Alpacas are known for their soft and warm fleece, which comes in over 22 natural colors. Their fleece is highly valued and is used in the textile industry to make clothing and other products. Alpacas are also known for their friendly and gentle nature, making them popular pets and great additions to farms.
These animals are social and live in herds, typically with one dominant male. They communicate through a series of hums, clucks, and other vocalizations. Alpacas are also known for their distinctive "humming" sound, which they make when they are content or curious.
In terms of physical appearance, alpacas have a small, stocky build, with an average height of around 36 inches at the withers. They have large, expressive eyes, and their ears are perked forward. Alpacas are herbivores and primarily eat grasses and other vegetation. They are also known for their efficient digestive system, which allows them to thrive on a diet of low-quality forage.
Overall, alpacas are unique and fascinating animals that have been an important part of Andean culture for centuries. They continue to be valued for their fleece, meat, and as pets, and their popularity continues to grow around the world.
**Alpaca-LoRA**: Alpacas are members of the camelid family and are native to the Andes Mountains of South America. They are known for their soft, luxurious fleece, which is used to make clothing, blankets, and other items. Alpacas are herbivores and graze on grasses and other plants. They are social animals and live in herds of up to 20 individuals.
**Stanford Alpaca**: Alpacas are small, fluffy animals related to camels and llamas. They are native to Peru and Bolivia, and were first domesticated around 5,000 years ago. They are kept mainly for their fine, soft fleece, which is used to make knitwear and other garments. Alpacas are herd animals and live in small family groups, led by an older male. They are highly social animals and can form strong bonds with their owners.
**text-davinci-003**: Alpacas are a domesticated species of South American camelid, similar to llamas. They are native to the Andes Mountains in South America and are kept as livestock for their soft, luxurious wool. Alpacas are social animals and live in herds of up to 20 individuals. They are typically kept in small herds of two to five animals, and are relatively easy to care for. Alpacas are herbivores and feed on grass, hay, and other vegetation. They are also known for their gentle and friendly nature, making them popular as pets.
---
**Instruction**: Write a Python program that prints the first 10 Fibonacci numbers.
**gpt4-alpaca-lora_mlp-65b**:
Here is a Python program that prints the first 10 Fibonacci numbers:
```python
# initialize variables
a = 0
b = 1
# loop to print the first 10 Fibonacci numbers
for i in range(10):
print(a, end=" ")
a, b = b, a + b
```
In this program, we initialize two variables `a` and `b` with the first two Fibonacci numbers, which are 0 and 1. Then, we use a for loop to iterate 10 times. In each iteration, we print the current value of `a` and update the values of `a` and `b` using the formula `a = b` and `b = a + b`. This formula is the definition of the Fibonacci sequence.
**Alpaca-LoRA**:
```python
def fibonacci(n):
if n == 0:
return 0
elif n == 1:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
print(fibonacci(10))
```
**Stanford Alpaca**:
```python
numbers = [0, 1]
for i in range(2, 11):
numbers.append(numbers[i-2] + numbers[i-1])
print(numbers)
# Output: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```
**text-davinci-003**:
```python
# Fibonacci numbers are the numbers in the following integer sequence, called the Fibonacci sequence, and characterized by the fact that every number after the first two is the sum of the two preceding ones.
# Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
# Program:
# Initialize the first two numbers of the Fibonacci sequence
a = 0
b = 1
# Print the first 10 numbers of the Fibonacci sequence
print(a)
print(b)
for i in range(2,10):
c = a + b
a = b
b = c
print(c)
```
---
> [1] Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig: Towards a Unified View of Parameter-Efficient Transfer Learning. ICLR 2022
|
Denilson/gbert-base-germaner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language: ja
datasets:
- reazon-research/reazonspeech
tags:
- hubert
- speech
license: apache-2.0
---
# japanese-hubert-base

This is a Japanese HuBERT (Hidden Unit Bidirectional Encoder Representations from Transformers) model trained by [rinna Co., Ltd.](https://rinna.co.jp/)
This model was trained on a large-scale Japanese audio dataset, the [ReazonSpeech](https://huggingface.co/datasets/reazon-research/reazonspeech) corpus.
## How to use the model
```python
import torch
from transformers import HubertModel
model = HubertModel.from_pretrained("rinna/japanese-hubert-base")
model.eval()
wav_input_16khz = torch.randn(1, 10000)
outputs = model(wav_input_16khz)
print(f"Input: {wav_input_16khz.size()}") # [1, 10000]
print(f"Output: {outputs.last_hidden_state.size()}") # [1, 31, 768]
```
## Model summary
The model architecture is the same as the [original HuBERT base model](https://huggingface.co/facebook/hubert-base-ls960), which contains 12 transformer layers with 8 attention heads.
The model was trained using code from the [official repository](https://github.com/facebookresearch/fairseq/tree/main/examples/hubert), and the detailed training configuration can be found in the same repository and the [original paper](https://ieeexplore.ieee.org/document/9585401).
A fairseq checkpoint file is also available [here](https://huggingface.co/rinna/japanese-hubert-base/tree/main/fairseq).
## Training
The model was trained on approximately 19,000 hours of [ReazonSpeech](https://huggingface.co/datasets/reazon-research/reazonspeech) corpus.
## License
[The Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0)
## Citation
```bibtex
@article{hubert2021hsu,
author={Hsu, Wei-Ning and Bolte, Benjamin and Tsai, Yao-Hung Hubert and Lakhotia, Kushal and Salakhutdinov, Ruslan and Mohamed, Abdelrahman},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
title={HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units},
year={2021},
volume={29},
number={},
pages={3451-3460},
doi={10.1109/TASLP.2021.3122291}
}
```
|
Deniskin/emailer_medium_300
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: LiLT-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LiLT-finetuned
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7302
- Precision: 0.2787
- Recall: 0.2982
- F1: 0.2881
- Accuracy: 0.7616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 41.67 | 250 | 1.4739 | 0.2667 | 0.2807 | 0.2735 | 0.7632 |
| 0.1955 | 83.33 | 500 | 1.7302 | 0.2787 | 0.2982 | 0.2881 | 0.7616 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DeskDown/MarianMixFT_en-fil
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- speedppc/autotrain-data-beeline-g-answer-purchase-refi-v4
co2_eq_emissions:
emissions: 0.0018883242785470097
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 53609126236
- CO2 Emissions (in grams): 0.0019
## Validation Metrics
- Loss: 0.006
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/speedppc/autotrain-beeline-g-answer-purchase-refi-v4-53609126236
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("speedppc/autotrain-beeline-g-answer-purchase-refi-v4-53609126236", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("speedppc/autotrain-beeline-g-answer-purchase-refi-v4-53609126236", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
DeskDown/MarianMixFT_en-id
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
language:
- zh
- en
tags:
- chatglm
- glm
- onnx
- onnxruntime
---
# ChatGLM-6B + ONNX
This model is exported from [ChatGLM-6b](https://huggingface.co/THUDM/chatglm-6b) with int8 quantization and optimized for [ONNXRuntime](https://onnxruntime.ai/) inference. Export code in [this repo](https://github.com/K024/chatglm-q).
Inference code with ONNXRuntime is uploaded with the model. Install requirements and run `streamlit run web-ui.py` to start chatting. Currently the `MatMulInteger` (for u8s8 data type) and `DynamicQuantizeLinear` operators are only supported on CPU. Arm64 with Neon support (Apple M1/M2) should be reasonably fast.
Install the dependencies and run `streamlit run web-ui.py` to preview the model. Due to ONNXRuntime operator support, inference is currently CPU-only, with decent speed on Arm64 (Apple M1/M2). The ONNX export code lives in [this repository](https://github.com/K024/chatglm-q).
## Usage
Clone with [git-lfs](https://git-lfs.com/):
```sh
git lfs clone https://huggingface.co/K024/ChatGLM-6b-onnx-u8s8
cd ChatGLM-6b-onnx-u8s8
pip install -r requirements.txt
streamlit run web-ui.py
```
Or use `huggingface_hub` [python client lib](https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder) to download the repo snapshot:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="K024/ChatGLM-6b-onnx-u8s8", local_dir="./ChatGLM-6b-onnx-u8s8")
```
Codes are released under MIT license.
Model weights are released under the same license as ChatGLM-6b, see [MODEL LICENSE](https://huggingface.co/THUDM/chatglm-6b/blob/main/MODEL_LICENSE).
|
DeskDown/MarianMixFT_en-ja
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: taNER-1k-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# taNER-1k-V2
This model is a fine-tuned version of [livinNector/tabert-1k](https://huggingface.co/livinNector/tabert-1k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4253
- Precision: 0.7866
- Recall: 0.8029
- F1: 0.7947
- Accuracy: 0.9049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3724 | 0.49 | 1000 | 0.3865 | 0.7280 | 0.7372 | 0.7326 | 0.8758 |
| 0.3199 | 0.99 | 2000 | 0.3516 | 0.7524 | 0.7561 | 0.7543 | 0.8858 |
| 0.2911 | 1.48 | 3000 | 0.3436 | 0.7543 | 0.7765 | 0.7653 | 0.8906 |
| 0.2867 | 1.98 | 4000 | 0.3391 | 0.7522 | 0.7908 | 0.7710 | 0.8909 |
| 0.2654 | 2.47 | 5000 | 0.3262 | 0.7696 | 0.7845 | 0.7770 | 0.8961 |
| 0.2616 | 2.96 | 6000 | 0.3294 | 0.7784 | 0.7800 | 0.7792 | 0.8954 |
| 0.2422 | 3.46 | 7000 | 0.3191 | 0.7779 | 0.7934 | 0.7856 | 0.8999 |
| 0.2422 | 3.95 | 8000 | 0.3272 | 0.7735 | 0.7962 | 0.7847 | 0.8985 |
| 0.2208 | 4.44 | 9000 | 0.3252 | 0.7811 | 0.7952 | 0.7881 | 0.9012 |
| 0.2227 | 4.94 | 10000 | 0.3220 | 0.7789 | 0.7993 | 0.7890 | 0.9026 |
| 0.204 | 5.43 | 11000 | 0.3413 | 0.7904 | 0.7894 | 0.7899 | 0.9007 |
| 0.2036 | 5.93 | 12000 | 0.3329 | 0.7810 | 0.7984 | 0.7896 | 0.9009 |
| 0.1874 | 6.42 | 13000 | 0.3362 | 0.7872 | 0.7986 | 0.7929 | 0.9033 |
| 0.1877 | 6.91 | 14000 | 0.3414 | 0.7764 | 0.8029 | 0.7894 | 0.9013 |
| 0.172 | 7.41 | 15000 | 0.3463 | 0.7871 | 0.7997 | 0.7933 | 0.9032 |
| 0.1729 | 7.9 | 16000 | 0.3441 | 0.7863 | 0.8001 | 0.7931 | 0.9034 |
| 0.159 | 8.4 | 17000 | 0.3625 | 0.7856 | 0.7970 | 0.7912 | 0.9019 |
| 0.1585 | 8.89 | 18000 | 0.3575 | 0.7867 | 0.7980 | 0.7923 | 0.9030 |
| 0.1485 | 9.38 | 19000 | 0.3761 | 0.7850 | 0.7965 | 0.7907 | 0.9029 |
| 0.1468 | 9.88 | 20000 | 0.3658 | 0.7874 | 0.8019 | 0.7946 | 0.9037 |
| 0.1378 | 10.37 | 21000 | 0.3835 | 0.7851 | 0.8039 | 0.7944 | 0.9042 |
| 0.1364 | 10.86 | 22000 | 0.3852 | 0.7861 | 0.8019 | 0.7940 | 0.9043 |
| 0.1294 | 11.36 | 23000 | 0.3906 | 0.7854 | 0.7973 | 0.7913 | 0.9038 |
| 0.1277 | 11.85 | 24000 | 0.3947 | 0.7875 | 0.7988 | 0.7931 | 0.9030 |
| 0.1207 | 12.35 | 25000 | 0.4082 | 0.7841 | 0.7997 | 0.7918 | 0.9035 |
| 0.1199 | 12.84 | 26000 | 0.4137 | 0.7888 | 0.7993 | 0.7940 | 0.9049 |
| 0.1144 | 13.33 | 27000 | 0.4155 | 0.7875 | 0.7996 | 0.7935 | 0.9046 |
| 0.113 | 13.83 | 28000 | 0.4177 | 0.7840 | 0.8053 | 0.7945 | 0.9046 |
| 0.1103 | 14.32 | 29000 | 0.4280 | 0.7867 | 0.8021 | 0.7943 | 0.9042 |
| 0.1078 | 14.81 | 30000 | 0.4253 | 0.7866 | 0.8029 | 0.7947 | 0.9049 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
DewiBrynJones/wav2vec2-large-xlsr-welsh
|
[
"cy",
"dataset:common_voice",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
title: chinese-alpaca-plus-7b-merged
emoji: 📚
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---
This is the Chinese LLaMA-Plus model obtained by adding a Chinese vocabulary and continuing pre-training of the Chinese embeddings.
For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v3.0
### Usage reference
1. Install the required packages
```bash
pip install sentencepiece
pip install "transformers>=4.28.0"
```
2. Generate text
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-llama-plus-7b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-llama-plus-7b-merged').half().to('cuda')
model.eval()
text = '第一个登上月球的人是'
input_ids = tokenizer.encode(text, return_tensors='pt').to('cuda')
with torch.no_grad():
output_ids = model.generate(
input_ids=input_ids,
max_new_tokens=128,
temperature=1,
top_k=40,
top_p=0.9,
repetition_penalty=1.15
).cuda()
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())  # strip the prompt from the decoded output
```
|
DheerajPranav/Dialo-GPT-Rick-bot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DimaOrekhov/transformer-method-name
|
[
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | 2023-04-28T08:32:21Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- speedppc/autotrain-data-beeline-human
co2_eq_emissions:
emissions: 0.29833727886449596
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 53619126277
- CO2 Emissions (in grams): 0.2983
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/speedppc/autotrain-beeline-human-53619126277
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("speedppc/autotrain-beeline-human-53619126277", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("speedppc/autotrain-beeline-human-53619126277", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Waynehillsdev/Wayne_NLP_mT5
|
[
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.29 +/- 1.53
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
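Until the author adds their own snippet, a minimal loading sketch looks like this; the `repo_id` and `filename` are placeholders because this card does not state them.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholders: substitute the actual repo id and checkpoint filename for this model.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
# Rolling the policy out additionally requires the panda-gym package, which registers PandaReachDense-v2.
```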
|
Waynehillsdev/Waynehills-STT-doogie-server
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 61 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ThomasSimonini/testpyramidsrndNewintegration222
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Waynehillsdev/Waynehills_summary_tensorflow
|
[
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
inference: false
---
[Pygmalion 6B model (v3 / experiment 2)](https://huggingface.co/PygmalionAI/pygmalion-6b/tree/2a0d74449c8fbf0378194e95f64aa92e16297294)
|
Doohae/p_encoder
|
[
"pytorch"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.45 +/- 10.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
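As a stopgap until the author's snippet is added, a minimal loading sketch follows. The `repo_id` and `filename` are placeholders because this card does not state them, and older stable-baselines3 releases use `gym` instead of `gymnasium`.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders: substitute the actual repo id and checkpoint filename for this model.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```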
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | 2023-04-28T09:20:56Z |
---
license: openrail
datasets:
- OpenAssistant/oasst1
language:
- aa
- ay
metrics:
- accuracy
library_name: adapter-transformers
tags:
- chemistry
---
|
albert-large-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 687 | 2023-04-28T09:39:25Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 537.50 +/- 190.03
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga usix79 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga usix79 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga usix79
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
albert-large-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26,792 | 2023-04-28T09:40:55Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: ultmcntry
---
### country-ultmcntry-v1 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), or with the short sketch below. Don't forget to use the concept prompt!
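A minimal `diffusers` sketch (the repo id is a placeholder, since this card does not state it; the prompt just needs to contain the concept token):
```python
import torch
from diffusers import StableDiffusionPipeline

# "<this-repo-id>" is a placeholder -- replace it with this model's Hub id.
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("an album cover in the style of ultmcntry").images[0]
image.save("ultmcntry.png")
```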
Sample pictures of:
ultmcntry (use that in your prompt)

|
albert-xlarge-v2
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,973 | 2023-04-28T09:41:54Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: ultmhxphxp
---
### hiphop-ultmhxphxp-v3 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), or with the short sketch below. Don't forget to use the concept prompt!
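A minimal `diffusers` sketch (the repo id is a placeholder, since this card does not state it; the prompt just needs to contain the concept token):
```python
import torch
from diffusers import StableDiffusionPipeline

# "<this-repo-id>" is a placeholder -- replace it with this model's Hub id.
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("an album cover in the style of ultmhxphxp").images[0]
image.save("ultmhxphxp.png")
```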
Sample pictures of:
ultmhxphxp (use that in your prompt)

|
albert-xxlarge-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7,091 | 2023-04-28T09:42:52Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: ultmcntry
---
### country-ultmcntry-v3 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), or with the short sketch below. Don't forget to use the concept prompt!
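A minimal `diffusers` sketch (the repo id is a placeholder, since this card does not state it; the prompt just needs to contain the concept token):
```python
import torch
from diffusers import StableDiffusionPipeline

# "<this-repo-id>" is a placeholder -- replace it with this model's Hub id.
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16).to("cuda")
image = pipe("an album cover in the style of ultmcntry").images[0]
image.save("ultmcntry.png")
```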
Sample pictures of:
ultmcntry (use that in your prompt)

|
albert-xxlarge-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42,640 | 2023-04-28T09:43:37Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.24 +/- 0.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
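Until the author adds their own snippet, a minimal loading sketch looks like this; the `repo_id` and `filename` are placeholders because this card does not state them.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholders: substitute the actual repo id and checkpoint filename for this model.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
# Rolling the policy out additionally requires the panda-gym package, which registers PandaReachDense-v2.
```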
|
bert-base-chinese
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3,377,486 | 2023-04-28T09:47:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ShunCena/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
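`load_from_hub` above is not a library import. A minimal helper along these lines makes the snippet runnable (a sketch, assuming the checkpoint is a pickled dictionary containing the Q-table and `env_id`, as in the Deep RL course):
```python
import pickle
import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download and unpickle the Q-learning checkpoint from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="ShunCena/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # this 4x4 variant was trained without slipping
```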
|
bert-base-german-dbmdz-uncased
|
[
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68,305 | 2023-04-28T09:50:02Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 533.50 +/- 138.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ntrant7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ntrant7 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ntrant7
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
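If you would rather load the downloaded checkpoint directly with Stable-Baselines3 instead of going through the RL Zoo scripts, a minimal sketch follows. The checkpoint path is hypothetical (it depends on where `rl_zoo3.load_from_hub` saved the files), and the environment is rebuilt with the same `AtariWrapper` + 4-frame stack used for training.
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Hypothetical path -- adjust to where the checkpoint was actually downloaded
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the evaluation env with the training-time wrappers (AtariWrapper + frame_stack=4)
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```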
|
bert-base-multilingual-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4,749,504 | 2023-04-28T09:54:40Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: AutoTaxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ShunCena/AutoTaxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
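Continuing from the snippet above, the reported mean reward can be reproduced with a simple greedy evaluation loop. This sketch assumes the pickled dict exposes the Q-table under a `qtable` key and uses the classic 4-tuple `gym` step API.
```python
import numpy as np

episode_rewards = []
for _ in range(100):
    state = env.reset()
    done, total = False, 0.0
    while not done:
        # Greedy action from the learned Q-table
        action = int(np.argmax(model["qtable"][state]))
        state, reward, done, _ = env.step(action)
        total += reward
    episode_rewards.append(total)

print(f"mean reward: {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```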
|
bert-large-cased-whole-word-masking
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,316 | 2023-04-28T10:01:26Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# MeinaMix API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "meinamix"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/meinamix)
Credits: [View credits](https://civitai.com/?query=MeinaMix)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "meinamix",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
distilbert-base-cased
|
[
"pytorch",
"tf",
"onnx",
"distilbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"has_space"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 574,859 | 2023-04-28T10:16:41Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 98.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
distilbert-base-multilingual-cased
|
[
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8,339,633 | 2023-04-28T10:24:20Z |
---
title: chinese-alpaca-plus-7b-merged
emoji: 📚
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---
This Chinese Alpaca-plus model was obtained by adding a Chinese vocabulary, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets on top of that.
For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v3.0
### Usage reference
1. Install the required packages
```bash
pip install sentencepiece
pip install "transformers>=4.28.0"
```
2. Generate text
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
def generate_prompt(text):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{text}
### Response:"""
tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-plus-7b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-plus-7b-merged').half().to('cuda')
model.eval()
text = '第一个登上月球的人是谁?'  # "Who was the first person to land on the Moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')
with torch.no_grad():
output_ids = model.generate(
input_ids=input_ids,
max_new_tokens=128,
temperature=1,
top_k=40,
top_p=0.9,
repetition_penalty=1.15
).cuda()
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(prompt, '').strip())  # strip the prompt from the decoded output
```
|
Adi2K/Priv-Consent
|
[
"pytorch",
"bert",
"text-classification",
"eng",
"dataset:Adi2K/autonlp-data-Priv-Consent",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 37 | null |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- A
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/jiyang_tang_aphsiabank_english_asr_ebranchformer_wavlm_aph_en_both`
This model was trained by Jiyang Tang using A recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout edf949f535938da8c705c1d26cc561b2d4cb4778
pip install -e .
cd jtang1/A/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/jiyang_tang_aphsiabank_english_asr_ebranchformer_wavlm_aph_en_both
```
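Alternatively, the checkpoint can be used from Python. This is a hedged sketch assuming `espnet_model_zoo` and `soundfile` are installed and that the model tag above resolves from the Hub; the wav path is a placeholder for a 16 kHz mono recording.
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/jiyang_tang_aphsiabank_english_asr_ebranchformer_wavlm_aph_en_both"
)

speech, rate = soundfile.read("speech.wav")  # placeholder path
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```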
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Mar 7 12:06:32 EST 2023`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.8.1`
- Git hash: `b0b2a0aa9c335267046e83036b87e88af30698da`
- Commit date: `Tue Feb 7 14:56:31 2023 -0500`
## asr_ebranchformer_wavlm_aph_en_both
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|28424|296887|83.0|12.4|4.6|2.7|19.6|71.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|28424|1507391|91.9|3.0|5.1|3.0|11.1|71.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_ebranchformer_small_wavlm_large1.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_ebranchformer_wavlm_aph_en_both
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 44175
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 200
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 6000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- sound
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/val/wav.scp
- speech
- sound
- - dump/raw/val/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 2500
token_list:
- <blank>
- <unk>
- '[APH]'
- '[NONAPH]'
- <space>
- e
- t
- a
- h
- o
- A
- n
- '['
- P
- H
- ']'
- i
- s
- N
- d
- r
- u
- l
- m
- w
- O
- y
- g
- c
- b
- f
- p
- k
- ''''
- v
- j
- <
- L
- U
- '>'
- ɪ
- x
- ə
- z
- ɛ
- ɑ
- q
- ɹ
- æ
- ˞
- ʌ
- ʃ
- ʊ
- ɔ
- ŋ
- ɚ
- ɾ
- ʒ
- ð
- θ
- ɜ
- ɝ
- ɡ
- '0'
- ː
- ʔ
- ɒ
- é
- ɸ
- ̩
- ʤ
- ʧ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: local/nlsyms.txt
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: e_branchformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
layer_drop_rate: 0.1
input_layer: conv2d1
macaron_ffn: true
pos_enc_layer_type: rel_pos
attention_layer_type: rel_selfattn
rel_pos_type: latest
cgmlp_linear_units: 3072
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
positionwise_layer_type: linear
use_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
layer_drop_rate: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Aidan8756/stephenKingModel
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- en
tags:
- causal-lm
- llama
license: cc-by-nc-sa-4.0
datasets:
- OpenAssistant/oasst1
- nomic-ai/gpt4all_prompt_generations
- tatsu-lab/alpaca
inference: false
---
# StableVicuna-13B-GPTQ
This repo contains 4bit GPTQ format quantised models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).
It is the result of first merging the deltas from the above repository with the original Llama 13B weights, then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ).
* [4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
* [Unquantised float16 model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
## PROMPT TEMPLATE
This model works best with the following prompt template:
```
### Human: your prompt here
### Assistant:
```
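For programmatic use, a small helper that wraps a user message in this template might look like the sketch below; the example message is a placeholder, not part of this card.
```python
def build_prompt(user_message: str) -> str:
    # Single-turn prompt in the "### Human / ### Assistant" format expected by this model
    return f"### Human: {user_message}\n### Assistant:"

prompt = build_prompt("Explain GPTQ quantisation in one paragraph.")
print(prompt)
```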
## How to easily download and use this model in text-generation-webui
Open the text-generation-webui UI as normal.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/stable-vicuna-13B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded,`stable-vicuna-13B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
## Provided files
I have uploaded two versions of the GPTQ.
**Compatible file - stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors**
In the `main` branch - the default one - you will find `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.
It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.
* `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128g. No act-order.
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors
```
**Latest file - stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors**
Created for more recent versions of GPTQ-for-LLaMa, and uses the `--act-order` flag for maximum theoretical performance.
To access this file, please switch to the `latest` branch of this repo and download from there.
* `stable-vicuna-13B-GPTQ-4bit.latest.act-order.safetensors`
* Only works with recent GPTQ-for-LLaMa code
* **Does not** work with text-generation-webui one-click-installers
* Parameters: Groupsize = 128g. **act-order**.
* Offers highest quality quantisation, but requires recent GPTQ-for-LLaMa code
* Command used to create the GPTQ:
```
CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
```
## Manual instructions for `text-generation-webui`
File `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.
If you want to use the act-order `safetensors` files and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model stable-vicuna-13B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
# Original StableVicuna-13B model card
## Model Description
StableVicuna-13B is a [Vicuna-13B v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.
## Model Details
* **Trained by**: [Duy Phung](https://github.com/PhungVanDuy) of [CarperAI](https://carper.ai)
* **Model type:** **StableVicuna-13B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **Library**: [trlX](https://github.com/CarperAI/trlx)
* **License for delta weights**: [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
* *Note*: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
* **Contact**: For questions and comments about the model, visit the [CarperAI](https://discord.com/invite/KgfkCVYHdu) and [StableFoundation](https://discord.gg/stablediffusion) Discord servers.
| Hyperparameter | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 13B |
| \\(d_\text{model}\\) | 5120 |
| \\(n_\text{layers}\\) | 40 |
| \\(n_\text{heads}\\) | 40 |
## Training
### Training Dataset
StableVicuna-13B is fine-tuned on a mix of three datasets. [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages;
[GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-4; and [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.
The reward model used during RLHF was also trained on [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1) along with two other datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP) a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
### Training Procedure
`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in [`trlX`](https://github.com/CarperAI/trlx/blob/main/trlx/trainer/accelerate_ppo_trainer.py) with the following configuration:
| Hyperparameter | Value |
|-------------------|---------|
| num_rollouts | 128 |
| chunk_size | 16 |
| ppo_epochs | 4 |
| init_kl_coef | 0.1 |
| target | 6 |
| horizon | 10000 |
| gamma | 1 |
| lam | 0.95 |
| cliprange | 0.2 |
| cliprange_value | 0.2 |
| vf_coef | 1.0 |
| scale_reward | None |
| cliprange_reward | 10 |
| generation_kwargs | |
| max_length | 512 |
| min_length | 48 |
| top_k | 0.0 |
| top_p | 1.0 |
| do_sample | True |
| temperature | 1.0 |
## Use and Limitations
### Intended Use
This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial [license](https://creativecommons.org/licenses/by-nc/4.0/).
### Limitations and bias
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned datasets affect the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the support of [Stability AI](https://stability.ai/).
## Citations
```bibtex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
```bibtex
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
```
```bibtex
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
```bibtex
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```bibtex
@software{leandro_von_werra_2023_7790115,
author = {Leandro von Werra and
Alex Havrilla and
Max reciprocated and
Jonathan Tow and
Aman cat-state and
Duy V. Phung and
Louis Castricato and
Shahbuland Matiana and
Alan and
Ayush Thakur and
Alexey Bukhtiyarov and
aaronrmm and
Fabrizio Milo and
Daniel and
Daniel King and
Dong Shin and
Ethan Kim and
Justin Wei and
Manuel Romero and
Nicky Pochinkov and
Omar Sanseviero and
Reshinth Adithyan and
Sherman Siu and
Thomas Simonini and
Vladimir Blagojevic and
Xu Song and
Zack Witten and
alexandremuzio and
crumb},
title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
Util, T5 ILQL, Tests}},
month = mar,
year = 2023,
publisher = {Zenodo},
version = {v0.6.0},
doi = {10.5281/zenodo.7790115},
url = {https://doi.org/10.5281/zenodo.7790115}
}
```
|
AidenGO/KDXF_Bert4MaskedLM
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-dialogue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-dialogue
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0175
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
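For reference, these settings map roughly onto `transformers.TrainingArguments` as sketched below; the `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the optimizer defaults.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the configuration above (output_dir is a placeholder)
training_args = TrainingArguments(
    output_dir="distilgpt2-finetuned-dialogue",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```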
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2936 | 1.0 | 242 | 3.1288 |
| 3.1354 | 2.0 | 484 | 3.0396 |
| 3.0398 | 3.0 | 726 | 3.0175 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Akash7897/distilbert-base-uncased-finetuned-sst2
|
[
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: toki-pona
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toki-pona
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7747 | 1.0 | 11978 | 1.6708 |
| 1.6538 | 2.0 | 23956 | 1.5588 |
| 1.6185 | 3.0 | 35934 | 1.5251 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Akash7897/my-newtokenizer
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_trainer
model-index:
- name: arwiki_mlm
results: []
metrics:
- perplexity
license: mit
datasets:
- SaiedAlshahrani/Arabic_Wikipedia_20230101
language:
- ar
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: الهدف من الحياة هو <mask>
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# arwiki_mlm (arRoberta)
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Pseudo-Perplexity:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 1 | 3000 | 5.681200 |
| 2 | 6000 | 3.777100 |
| 3 | 9000 | 3.246300 |
| 4 | 12000 | 3.012100 |
| 5 | 15000 | 2.888400 |
| Train Runtime | Train Samples Per Second | Train Steps Per Second | Total Flos | Train Loss | Epoch |
|:--------------:|:------------------------:|:----------------------:|:-------------------------:|:----------:|:--------:|
| 17048.756800 | 248.355000 | 0.970000 | 140390797515571200.000000 | 3.639375 | 5.000000 |
### Framework versions
- Datasets 2.9.0
- Tokenizers 0.12.1
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
|
Akashamba/distilbert-base-uncased-finetuned-ner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8414824042354406
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1567
- F1: 0.8415
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3594 | 1.0 | 191 | 0.1855 | 0.7971 |
| 0.1597 | 2.0 | 382 | 0.1544 | 0.8272 |
| 0.1003 | 3.0 | 573 | 0.1567 | 0.8415 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Aklily/Lilys
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- generated_from_trainer
model-index:
- name: AutomotiveBert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AutomotiveBert
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AkshatSurolia/BEiT-FaceMask-Finetuned
|
[
"pytorch",
"beit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
image-classification
|
{
"architectures": [
"BeitForImageClassification"
],
"model_type": "beit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 239 | null |
---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: clasificador-tweet-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: stance_feminist
split: test
args: stance_feminist
metrics:
- name: Accuracy
type: accuracy
value: 0.6
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-tweet-sentiment
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9057
- Accuracy: 0.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 75 | 0.7909 | 0.6596 |
| No log | 2.0 | 150 | 0.7958 | 0.6281 |
| No log | 3.0 | 225 | 0.9057 | 0.6 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AkshatSurolia/ViT-FaceMask-Finetuned
|
[
"pytorch",
"safetensors",
"vit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
image-classification
|
{
"architectures": [
"ViTForImageClassification"
],
"model_type": "vit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 40 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pythia-1b-deduped-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-1b-deduped-chatml
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0088
- Accuracy: 0.2334
- Entropy: 0.8955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Entropy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| 1.1292 | 1.0 | 792 | 1.0127 | 0.2327 | 1.0290 |
| 0.7489 | 2.0 | 1584 | 1.0088 | 0.2334 | 0.8955 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0-rc1
- Datasets 2.10.1
- Tokenizers 0.13.3
|
AkshayDev/BERT_Fine_Tuning
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1766.23 +/- 65.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo_id / filename -- replace with this repository's actual values
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
AkshaySg/langid
|
[
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"license:apache-2.0"
] |
audio-classification
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Akuva2001/SocialGraph
|
[
"has_space"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: device_c_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# device_c_2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9457 | 1.0 | 5 | 3.5934 |
| 2.2144 | 2.0 | 10 | 1.7731 |
| 1.8866 | 3.0 | 15 | 1.4610 |
| 1.3164 | 4.0 | 20 | 1.0539 |
| 0.6409 | 5.0 | 25 | 0.7325 |
| 0.6833 | 6.0 | 30 | 0.5473 |
| 0.6743 | 7.0 | 35 | 0.4157 |
| 0.6691 | 8.0 | 40 | 0.3602 |
| 0.3706 | 9.0 | 45 | 0.2831 |
| 0.4026 | 10.0 | 50 | 0.2250 |
| 0.2761 | 11.0 | 55 | 0.2030 |
| 0.3899 | 12.0 | 60 | 0.1854 |
| 0.2964 | 13.0 | 65 | 0.1694 |
| 0.2146 | 14.0 | 70 | 0.1338 |
| 0.2452 | 15.0 | 75 | 0.1292 |
| 0.2499 | 16.0 | 80 | 0.1253 |
| 0.3775 | 17.0 | 85 | 0.1297 |
| 0.2058 | 18.0 | 90 | 0.1292 |
| 0.162 | 19.0 | 95 | 0.1063 |
| 0.146 | 20.0 | 100 | 0.0984 |
| 0.2462 | 21.0 | 105 | 0.1057 |
| 0.0739 | 22.0 | 110 | 0.0965 |
| 0.2286 | 23.0 | 115 | 0.0999 |
| 0.2376 | 24.0 | 120 | 0.0952 |
| 0.1166 | 25.0 | 125 | 0.0944 |
| 0.1758 | 26.0 | 130 | 0.0978 |
| 0.1734 | 27.0 | 135 | 0.0896 |
| 0.1804 | 28.0 | 140 | 0.0893 |
| 0.0686 | 29.0 | 145 | 0.0878 |
| 0.1791 | 30.0 | 150 | 0.0826 |
| 0.1256 | 31.0 | 155 | 0.0844 |
| 0.0911 | 32.0 | 160 | 0.0859 |
| 0.0864 | 33.0 | 165 | 0.0817 |
| 0.1252 | 34.0 | 170 | 0.0811 |
| 0.1538 | 35.0 | 175 | 0.0811 |
| 0.1039 | 36.0 | 180 | 0.0801 |
| 0.0968 | 37.0 | 185 | 0.0781 |
| 0.0688 | 38.0 | 190 | 0.0777 |
| 0.1269 | 39.0 | 195 | 0.0759 |
| 0.1428 | 40.0 | 200 | 0.0735 |
| 0.1073 | 41.0 | 205 | 0.0744 |
| 0.101 | 42.0 | 210 | 0.0739 |
| 0.1131 | 43.0 | 215 | 0.0769 |
| 0.1094 | 44.0 | 220 | 0.0793 |
| 0.1089 | 45.0 | 225 | 0.0801 |
| 0.0545 | 46.0 | 230 | 0.0772 |
| 0.1156 | 47.0 | 235 | 0.0785 |
| 0.0897 | 48.0 | 240 | 0.0774 |
| 0.0479 | 49.0 | 245 | 0.0782 |
| 0.0788 | 50.0 | 250 | 0.0755 |
| 0.1351 | 51.0 | 255 | 0.0734 |
| 0.128 | 52.0 | 260 | 0.0735 |
| 0.1419 | 53.0 | 265 | 0.0737 |
| 0.149 | 54.0 | 270 | 0.0737 |
| 0.1097 | 55.0 | 275 | 0.0725 |
| 0.2128 | 56.0 | 280 | 0.0717 |
| 0.0932 | 57.0 | 285 | 0.0726 |
| 0.1127 | 58.0 | 290 | 0.0727 |
| 0.1035 | 59.0 | 295 | 0.0714 |
| 0.1 | 60.0 | 300 | 0.0726 |
| 0.0933 | 61.0 | 305 | 0.0721 |
| 0.0668 | 62.0 | 310 | 0.0701 |
| 0.1054 | 63.0 | 315 | 0.0698 |
| 0.0675 | 64.0 | 320 | 0.0703 |
| 0.0939 | 65.0 | 325 | 0.0710 |
| 0.0801 | 66.0 | 330 | 0.0704 |
| 0.0999 | 67.0 | 335 | 0.0706 |
| 0.0704 | 68.0 | 340 | 0.0702 |
| 0.0982 | 69.0 | 345 | 0.0701 |
| 0.0562 | 70.0 | 350 | 0.0700 |
| 0.1112 | 71.0 | 355 | 0.0695 |
| 0.1347 | 72.0 | 360 | 0.0692 |
| 0.1124 | 73.0 | 365 | 0.0696 |
| 0.0744 | 74.0 | 370 | 0.0693 |
| 0.0814 | 75.0 | 375 | 0.0689 |
| 0.0746 | 76.0 | 380 | 0.0685 |
| 0.0557 | 77.0 | 385 | 0.0683 |
| 0.0897 | 78.0 | 390 | 0.0682 |
| 0.0525 | 79.0 | 395 | 0.0683 |
| 0.0701 | 80.0 | 400 | 0.0685 |
| 0.0741 | 81.0 | 405 | 0.0682 |
| 0.0436 | 82.0 | 410 | 0.0679 |
| 0.1056 | 83.0 | 415 | 0.0679 |
| 0.0835 | 84.0 | 420 | 0.0678 |
| 0.0968 | 85.0 | 425 | 0.0677 |
| 0.0646 | 86.0 | 430 | 0.0677 |
| 0.0908 | 87.0 | 435 | 0.0676 |
| 0.0793 | 88.0 | 440 | 0.0677 |
| 0.0168 | 89.0 | 445 | 0.0677 |
| 0.0975 | 90.0 | 450 | 0.0678 |
| 0.0646 | 91.0 | 455 | 0.0678 |
| 0.0877 | 92.0 | 460 | 0.0678 |
| 0.0627 | 93.0 | 465 | 0.0677 |
| 0.0592 | 94.0 | 470 | 0.0677 |
| 0.0577 | 95.0 | 475 | 0.0677 |
| 0.0277 | 96.0 | 480 | 0.0677 |
| 0.0928 | 97.0 | 485 | 0.0677 |
| 0.0503 | 98.0 | 490 | 0.0676 |
| 0.078 | 99.0 | 495 | 0.0677 |
| 0.0762 | 100.0 | 500 | 0.0676 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AlbertHSU/ChineseFoodBert
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: biogpt-healthcare-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biogpt-healthcare-tuned
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7690
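As a quick usage sketch (hedged: the model id below is a placeholder for wherever this checkpoint is hosted), the fine-tuned model can be loaded with the standard `text-generation` pipeline:
```python
from transformers import pipeline

# Placeholder id; point this at the repository that hosts this fine-tuned checkpoint.
generator = pipeline("text-generation", model="<your-namespace>/biogpt-healthcare-tuned")
print(generator("The most common symptoms of type 2 diabetes are", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```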
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0196 | 1.0 | 1921 | 2.9902 |
| 2.6071 | 2.0 | 3842 | 2.8189 |
| 2.3931 | 3.0 | 5763 | 2.7690 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ale/Alen
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9185
- name: F1
type: f1
value: 0.9185586323168572
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2189
- Accuracy: 0.9185
- F1: 0.9186
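A minimal usage sketch (hedged: the model id below is a placeholder for this repository; depending on whether `id2label` was set during training, labels appear either as the six emotion classes or as `LABEL_0` through `LABEL_5`):
```python
from transformers import pipeline

# Placeholder id; point this at the repository that hosts this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="<your-namespace>/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.98}]; the emotion dataset labels are sadness, joy, love, anger, fear, surprise
```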
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7972 | 1.0 | 250 | 0.3171 | 0.903 | 0.8995 |
| 0.2464 | 2.0 | 500 | 0.2189 | 0.9185 | 0.9186 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
Aleenbo/Arcane
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
Access to model peese2005/UTD is restricted and you are not in the authorized list. Visit https://huggingface.co/peese2005/UTD to ask for access.
|
Aleksandar/distilbert-srb-base-cased-oscar
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: bsd
tags:
- chemistry
- biology
- protein
- antibodies
- antibody
- heavy chain
- AbLang
- CDR
- OAS
---
### AbLang model for heavy chains
This is a 🤗 version of AbLang: A language model for antibodies. It was introduced in
[this paper](https://doi.org/10.1101/2022.01.20.477061) and first released in
[this repository](https://github.com/oxpig/AbLang). The model is trained on uppercase amino acids, so input sequences must use capital-letter amino acids only.
### Intended uses & limitations
The model could be used for protein feature extraction or to be fine-tuned on downstream tasks (TBA).
### How to use
Here is how to use this model to get the features of a given antibody sequence in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('qilowoq/AbLang_heavy')
model = AutoModel.from_pretrained('qilowoq/AbLang_heavy', trust_remote_code=True)
sequence_Example = ' '.join("EVQLQESGPGLVKPSETLSLTCTVSGGPINNAYWTWIRQPPGKGLEYLGYVYHTGVTNYNPSLKSRLTITIDTSRKQLSLSLKFVTAADSAVYYCAREWAEDGDFGNAFHVWGQGTMVAVSSASTKGPSVFPLAPSSKSTSGGTAALGCL")
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
model_output = model(**encoded_input)
```
Sequence embeddings can be produced as follows:
```python
import torch

def get_sequence_embeddings(encoded_input, model_output):
    mask = encoded_input['attention_mask'].float()
    d = {k: v for k, v in torch.nonzero(mask).cpu().numpy()}  # maps each sequence to the index of its last attended token (the SEP token)
    # make the sep token invisible
    for i in d:
        mask[i, d[i]] = 0
    mask[:, 0] = 0.0  # make the cls token invisible
    mask = mask.unsqueeze(-1).expand(model_output.last_hidden_state.size())
    sum_embeddings = torch.sum(model_output.last_hidden_state * mask, 1)
    sum_mask = torch.clamp(mask.sum(1), min=1e-9)
    return sum_embeddings / sum_mask

seq_embeds = get_sequence_embeddings(encoded_input, model_output)
```
### Fine-tune
To save memory we recommend using [LoRA](https://doi.org/10.48550/arXiv.2106.09685):
```bash
pip install git+https://github.com/huggingface/peft.git
pip install loralib
```
LoRA greatly reduces the number of trainable parameters and performs on par with, or better than, fine-tuning the full model.
```python
import torch
from peft import LoraConfig, get_peft_model

def apply_lora_bert(model):
    config = LoraConfig(
        r=8, lora_alpha=32,
        lora_dropout=0.3,
        target_modules=['query', 'value']
    )
    for param in model.parameters():
        param.requires_grad = False  # freeze the model - train adapters later
        if param.ndim == 1:
            # cast the small parameters (e.g. layernorm) to fp32 for stability
            param.data = param.data.to(torch.float32)
    model.gradient_checkpointing_enable()  # reduce number of stored activations
    model.enable_input_require_grads()
    model = get_peft_model(model, config)
    return model

model = apply_lora_bert(model)
model.print_trainable_parameters()
# trainable params: 294912 || all params: 85493760 || trainable%: 0.3449514911965505
```
### Citation
```
@article{Olsen2022,
title={AbLang: An antibody language model for completing antibody sequences},
author={Tobias H. Olsen, Iain H. Moal and Charlotte M. Deane},
journal={bioRxiv},
doi={https://doi.org/10.1101/2022.01.20.477061},
year={2022}
}
```
|
Aleksandar1932/gpt2-spanish-classics
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 603.50 +/- 123.41
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga drbeane -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga drbeane -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga drbeane
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Aleksandra/herbert-base-cased-finetuned-squad
|
[
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: mit
---
### banana-haai on Stable Diffusion
This is the `<banana-photo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
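Outside of the notebooks, a minimal `diffusers` sketch for using the concept (hedged: it assumes the learned embedding from this repo has been downloaded, e.g. as `learned_embeds.bin`, and that a Stable Diffusion v1.x base checkpoint is used):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual inversion embedding downloaded from this repo and register its token.
pipe.load_textual_inversion("./learned_embeds.bin", token="<banana-photo>")

image = pipe("a photo of <banana-photo> on a wooden table").images[0]
image.save("banana-photo.png")
```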
Here is the new concept you will be able to use as an `object`:




|
adorkin/xlm-roberta-en-ru-emoji
|
[
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:tweet_eval",
"transformers"
] |
text-classification
|
{
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
license: bsd
tags:
- chemistry
- biology
- protein
- antibodies
- antibody
- light chain
- AbLang
- CDR
- OAS
---
### AbLang model for light chains
This is a 🤗 version of AbLang: A language model for antibodies. It was introduced in
[this paper](https://doi.org/10.1101/2022.01.20.477061) and first released in
[this repository](https://github.com/oxpig/AbLang). The model is trained on uppercase amino acids, so input sequences must use capital-letter amino acids only.
### Intended uses & limitations
The model could be used for protein feature extraction or to be fine-tuned on downstream tasks (TBA).
### How to use
Here is how to use this model to get the features of a given antibody sequence in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('qilowoq/AbLang_light')
model = AutoModel.from_pretrained('qilowoq/AbLang_light', trust_remote_code=True)
sequence_Example = ' '.join("GSELTQDPAVSVALGQTVRITCQGDSLRNYYASWYQQKPRQAPVLVFYGKNNRPSGIPDRFSGSSSGNTASLTISGAQAEDEADYYCNSRDSSSNHLVFGGGTKLTVLSQ")
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
model_output = model(**encoded_input)
```
Sequence embeddings can be produced as follows:
```python
import torch

def get_sequence_embeddings(encoded_input, model_output):
    mask = encoded_input['attention_mask'].float()
    d = {k: v for k, v in torch.nonzero(mask).cpu().numpy()}  # maps each sequence to the index of its last attended token (the SEP token)
    # make the sep token invisible
    for i in d:
        mask[i, d[i]] = 0
    mask[:, 0] = 0.0  # make the cls token invisible
    mask = mask.unsqueeze(-1).expand(model_output.last_hidden_state.size())
    sum_embeddings = torch.sum(model_output.last_hidden_state * mask, 1)
    sum_mask = torch.clamp(mask.sum(1), min=1e-9)
    return sum_embeddings / sum_mask

seq_embeds = get_sequence_embeddings(encoded_input, model_output)
```
### Fine-tune
To save memory we recommend using [LoRA](https://doi.org/10.48550/arXiv.2106.09685):
```bash
pip install git+https://github.com/huggingface/peft.git
pip install loralib
```
LoRA greatly reduces the number of trainable parameters and performs on par with, or better than, fine-tuning the full model.
```python
import torch
from peft import LoraConfig, get_peft_model

def apply_lora_bert(model):
    config = LoraConfig(
        r=8, lora_alpha=32,
        lora_dropout=0.3,
        target_modules=['query', 'value']
    )
    for param in model.parameters():
        param.requires_grad = False  # freeze the model - train adapters later
        if param.ndim == 1:
            # cast the small parameters (e.g. layernorm) to fp32 for stability
            param.data = param.data.to(torch.float32)
    model.gradient_checkpointing_enable()  # reduce number of stored activations
    model.enable_input_require_grads()
    model = get_peft_model(model, config)
    return model

model = apply_lora_bert(model)
model.print_trainable_parameters()
# trainable params: 294912 || all params: 85493760 || trainable%: 0.3449514911965505
```
### Citation
```
@article{Olsen2022,
title={AbLang: An antibody language model for completing antibody sequences},
author={Tobias H. Olsen, Iain H. Moal and Charlotte M. Deane},
journal={bioRxiv},
doi={https://doi.org/10.1101/2022.01.20.477061},
year={2022}
}
```
|
Alerosae/SocratesGPT-2
|
[
"pytorch",
"gpt2",
"feature-extraction",
"en",
"transformers",
"text-generation"
] |
text-generation
|
{
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
library_name: diffusers
tags:
- jax-diffusers-event
- controlnet
- stable-diffusion
pipeline_tag: image-to-image
---
This is a ControlNet model that uses MediaPipe hand landmarks as the conditioning input.
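A hedged usage sketch with `diffusers` (the ControlNet repository id below is a placeholder, PyTorch weights are assumed to be available, and the conditioning image is a pre-rendered MediaPipe hand-landmark map):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Placeholder id; replace with the actual repository id of this ControlNet.
controlnet = ControlNetModel.from_pretrained("<this-controlnet-repo>", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The conditioning image is a rendering of MediaPipe hand landmarks for the desired pose.
condition = load_image("hand_landmarks.png")
image = pipe("a hand holding a red apple, studio lighting", image=condition).images[0]
image.save("controlled_hand.png")
```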
|
AmazonScience/qanlu
|
[
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:atis",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible",
"has_space"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 494 | null |
---
tags:
- espnet
- audio
- text-to-speech
language: zh
datasets:
- aishell3
license: cc-by-4.0
---
## ESPnet2 TTS model
This model was trained by winniech using the aishell3 recipe in [espnet](https://github.com/espnet/espnet/).
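For inference, a minimal ESPnet2 sketch (hedged: the model tag below is a placeholder, `espnet_model_zoo` is assumed to be installed, and because this VITS recipe is multi-speaker with 512-dim x-vectors, a speaker embedding must be supplied):
```python
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Placeholder tag; replace with the tag or local config/checkpoint of this model.
tts = Text2Speech.from_pretrained("<this-model-tag>")

# The recipe expects a 512-dim x-vector (spembs); in practice, extract one for the target speaker.
spembs = np.zeros(512, dtype=np.float32)
out = tts("你好,世界。", spembs=spembs)
sf.write("out.wav", out["wav"].cpu().numpy(), tts.fs)
```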
## TTS config
```
config: conf/train.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/22k/tts_train_raw_phn_pypinyin_g2p_phone
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 1250000
valid_batch_bins: null
train_shape_file:
- exp/22k/tts_stats_raw_linear_spectrogram_phn_pypinyin_g2p_phone/train/text_shape.phn
- exp/22k/tts_stats_raw_linear_spectrogram_phn_pypinyin_g2p_phone/train/speech_shape
valid_shape_file:
- exp/22k/tts_stats_raw_linear_spectrogram_phn_pypinyin_g2p_phone/valid/text_shape.phn
- exp/22k/tts_stats_raw_linear_spectrogram_phn_pypinyin_g2p_phone/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/22k/raw/train_no_dev/text
- text
- text
- - dump/22k/raw/train_no_dev/wav.scp
- speech
- sound
- - dump/22k/xvector/train_no_dev/xvector.scp
- spembs
- kaldi_ark
valid_data_path_and_name_and_type:
- - dump/22k/raw/dev/text
- text
- text
- - dump/22k/raw/dev/wav.scp
- speech
- sound
- - dump/22k/xvector/dev/xvector.scp
- spembs
- kaldi_ark
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adamw
optim_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
token_list:
- <blank>
- <unk>
- d
- sh
- j
- i4
- zh
- l
- x
- e
- b
- g
- i1
- h
- q
- m
- t
- i2
- u4
- z
- ch
- i3
- f
- s
- n
- iou3
- r
- ian4
- ong1
- uei4
- e4
- en2
- ai4
- k
- ing2
- a1
- uo3
- u3
- ao4
- p
- an1
- eng2
- e2
- in1
- c
- ai2
- an4
- ian2
- u2
- ang4
- ian1
- ai3
- ing1
- ao3
- uo4
- ian3
- ing4
- ü4
- ang1
- u1
- iao4
- eng1
- iou4
- a4
- üan2
- ie4
- ou4
- er4
- en1
- ong2
- e1
- an3
- ei4
- uo2
- ou3
- ang2
- iang4
- ou1
- ang3
- an2
- eng4
- ong4
- uan4
- a3
- ia4
- ia1
- iao1
- iang1
- iou2
- uo1
- ei3
- iao3
- in4
- e3
- ü3
- iang3
- uei2
- en3
- uan1
- ie3
- ao1
- ai1
- üe4
- ü2
- ing3
- en4
- uei1
- er2
- uan3
- ü1
- in3
- en
- üe2
- ie2
- ei2
- ua4
- uan2
- in2
- a2
- ie1
- iang2
- ou2
- ong3
- uang3
- eng3
- uen1
- uai4
- ün4
- uang4
- uei3
- uen2
- uen4
- i
- iong4
- v3
- iao2
- üan4
- uang1
- ei1
- o2
- iou1
- uang2
- a
- ao2
- o1
- ua2
- uen3
- ua1
- v4
- üan3
- ün1
- üe1
- ün2
- o4
- er3
- iong3
- üan1
- ia3
- ia2
- iong1
- üe3
- ve4
- iong2
- uai2
- er
- ua3
- uai1
- ou
- ün3
- uai3
- ia
- uo
- o3
- v2
- ueng1
- o
- ei
- ua
- io1
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pypinyin_g2p_phone
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
spk_embed_dim: 512
global_channels: 256
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 180
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: false
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Amit29/t5-small-finetuned-xsum
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
---
|
AndrewMcDowell/wav2vec2-xls-r-300m-arabic
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
# Kohya's GUI
This repository provides a Windows-focused Gradio GUI for [Kohya's Stable Diffusion trainers](https://github.com/kohya-ss/sd-scripts). The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.
If you run on Linux and would like to use the GUI, there is now a port of it as a docker container. You can find the project [here](https://github.com/P2Enjoy/kohya_ss-docker).
### Table of Contents
- [Tutorials](#tutorials)
- [Required Dependencies](#required-dependencies)
- [Linux/macOS](#linux-and-macos-dependencies)
- [Installation](#installation)
- [Linux/macOS](#linux-and-macos)
- [Default Install Locations](#install-location)
- [Windows](#windows)
- [CUDNN 8.6](#optional--cudnn-86)
- [Upgrading](#upgrading)
- [Windows](#windows-upgrade)
- [Linux/macOS](#linux-and-macos-upgrade)
- [Launching the GUI](#starting-gui-service)
- [Windows](#launching-the-gui-on-windows)
- [Linux/macOS](#launching-the-gui-on-linux-and-macos)
- [Direct Launch via Python Script](#launching-the-gui-directly-using-kohyaguipy)
- [Dreambooth](#dreambooth)
- [Finetune](#finetune)
- [Train Network](#train-network)
- [LoRA](#lora)
- [Troubleshooting](#troubleshooting)
- [Page File Limit](#page-file-limit)
- [No module called tkinter](#no-module-called-tkinter)
- [FileNotFoundError](#filenotfounderror)
- [Change History](#change-history)
## Tutorials
[How to Create a LoRA Part 1: Dataset Preparation](https://www.youtube.com/watch?v=N4_-fB62Hwk):
[](https://www.youtube.com/watch?v=N4_-fB62Hwk)
[How to Create a LoRA Part 2: Training the Model](https://www.youtube.com/watch?v=k5imq01uvUY):
[](https://www.youtube.com/watch?v=k5imq01uvUY)
## Required Dependencies
- Install [Python 3.10](https://www.python.org/ftp/python/3.10.9/python-3.10.9-amd64.exe)
- make sure to tick the box to add Python to the 'PATH' environment variable
- Install [Git](https://git-scm.com/download/win)
- Install [Visual Studio 2015, 2017, 2019, and 2022 redistributable](https://aka.ms/vs/17/release/vc_redist.x64.exe)
### Linux and macOS dependencies
These dependencies are taken care of via `setup.sh` in the installation section. No additional steps should be needed unless the scripts inform you otherwise.
## Installation
### Runpod
Follow the instructions found in this discussion: https://github.com/bmaltais/kohya_ss/discussions/379
### Linux and macOS
In the terminal, run
```
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
# May need to chmod +x ./setup.sh if you're on a machine with stricter security.
# There are additional options if needed for a runpod environment.
# Call 'setup.sh -h' or 'setup.sh --help' for more information.
./setup.sh
```
Setup.sh help included here:
```bash
Kohya_SS Installation Script for POSIX operating systems.
The following options are useful in a runpod environment,
but will not affect a local machine install.
Usage:
setup.sh -b dev -d /workspace/kohya_ss -g https://mycustom.repo.tld/custom_fork.git
setup.sh --branch=dev --dir=/workspace/kohya_ss --git-repo=https://mycustom.repo.tld/custom_fork.git
Options:
-b BRANCH, --branch=BRANCH Select which branch of kohya to check out on new installs.
-d DIR, --dir=DIR The full path you want kohya_ss installed to.
-g REPO, --git_repo=REPO You can optionally provide a git repo to check out for runpod installation. Useful for custom forks.
-h, --help Show this screen.
-i, --interactive Interactively configure accelerate instead of using default config file.
-n, --no-update Do not update kohya_ss repo. No git pull or clone operations.
-p, --public Expose public URL in runpod mode. Won't have an effect in other modes.
-r, --runpod Forces a runpod installation. Useful if detection fails for any reason.
-s, --skip-space-check Skip the 10Gb minimum storage space check.
-u, --no-gui Skips launching the GUI.
-v, --verbose Increase verbosity levels up to 3.
```
#### Install location
The default install location on Linux is the directory the script is run from, if a previous installation is detected at that location.
Otherwise, it falls back to `/opt/kohya_ss`. If `/opt` is not writeable, the fallback is `$HOME/kohya_ss`. Lastly, if all else fails it will simply install to the current folder you are in (PWD).
On macOS and other non-Linux machines, it will first try to detect an existing install in the directory the script is run from and run setup there if one is found.
If a previous install isn't found at that location, it will default to installing in `$HOME/kohya_ss`, falling back to the current directory if `$HOME` is not accessible.
You can override this behavior by specifying an install directory with the -d option.
If you use the interactive mode, the default values for the accelerate config screen are "This machine", "None", and "No" for the remaining questions.
These are the same answers as for the Windows install.
### Windows
- Install [Python 3.10](https://www.python.org/ftp/python/3.10.9/python-3.10.9-amd64.exe)
- make sure to tick the box to add Python to the 'PATH' environment variable
- Install [Git](https://git-scm.com/download/win)
- Install [Visual Studio 2015, 2017, 2019, and 2022 redistributable](https://aka.ms/vs/17/release/vc_redist.x64.exe)
In the terminal, run:
```
git clone https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
.\setup.bat
```
If this is a first install, answer No when asked `Do you want to uninstall previous versions of torch and associated files before installing`.
Then configure accelerate with the same answers as in the MacOS instructions when prompted.
### Optional: CUDNN 8.6
This step is optional but can improve the learning speed for NVIDIA 30X0/40X0 owners. It allows for a larger training batch size and faster training speed.
Due to the file size, I can't host the DLLs needed for CUDNN 8.6 on GitHub. I strongly advise you to download them for a speed boost in sample generation (almost 50% on a 4090 GPU); you can download them [here](https://b1.thefileditch.ch/mwxKTEtelILoIbMbruuM.zip).
To install, simply unzip the directory and place the `cudnn_windows` folder in the root of this repo.
Run the following commands to install:
```
.\venv\Scripts\activate
python .\tools\cudann_1.8_install.py
```
Once the commands have completed successfully you should be ready to use the new version. MacOS support is not tested and has been mostly taken from https://gist.github.com/jstayco/9f5733f05b9dc29de95c4056a023d645
## Upgrading
The following commands will work from the root directory of the project if you'd prefer not to run the scripts.
These commands will work on any OS.
```bash
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt
```
### Windows Upgrade
When a new release comes out, you can upgrade your repo with the following commands in the root directory:
```powershell
upgrade.bat
```
### Linux and macOS Upgrade
You can cd into the root directory and simply run
```bash
# Refresh and update everything
./setup.sh
# This will refresh everything, but NOT clone or pull the git repo.
./setup.sh --no-git-update
```
Once the commands have completed successfully you should be ready to use the new version.
# Starting GUI Service
The following command line arguments can be passed to the scripts on any OS to configure the underlying service.
```
--listen: the IP address to listen on for connections to Gradio.
--username: a username for authentication.
--password: a password for authentication.
--server_port: the port to run the server listener on.
--inbrowser: opens the Gradio UI in a web browser.
--share: shares the Gradio UI.
```
### Launching the GUI on Windows
The two scripts to launch the GUI on Windows are gui.ps1 and gui.bat in the root directory.
You can use whichever script you prefer.
To launch the Gradio UI, run the script in a terminal with the desired command line arguments, for example:
`gui.ps1 --listen 127.0.0.1 --server_port 7860 --inbrowser --share`
or
`gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share`
## Launching the GUI on Linux and macOS
Run the launcher script with the desired command line arguments similar to Windows.
`gui.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share`
## Launching the GUI directly using kohya_gui.py
To run the GUI directly bypassing the wrapper scripts, simply use this command from the root project directory:
```
.\venv\Scripts\activate
python .\kohya_gui.py
```
## Dreambooth
You can find the dreambooth solution specific here: [Dreambooth README](train_db_README.md)
## Finetune
You can find the finetune solution specific here: [Finetune README](fine_tune_README.md)
## Train Network
You can find the train network solution specific here: [Train network README](train_network_README.md)
## LoRA
Training a LoRA currently uses the `train_network.py` code. You can create a LoRA network by using the all-in-one `gui.cmd` or by running the dedicated LoRA training GUI with:
```
.\venv\Scripts\activate
python lora_gui.py
```
Once you have created the LoRA network, you can generate images via auto1111 by installing [this extension](https://github.com/kohya-ss/sd-webui-additional-networks).
### Naming of LoRA
The LoRA supported by `train_network.py` has been named to avoid confusion. The documentation has been updated. The following are the names of LoRA types in this repository.
1. __LoRA-LierLa__ : (LoRA for __Li__ n __e__ a __r__ __La__ yers)
LoRA for Linear layers and Conv2d layers with 1x1 kernel
2. __LoRA-C3Lier__ : (LoRA for __C__ onvolutional layers with __3__ x3 Kernel and __Li__ n __e__ a __r__ layers)
In addition to 1., LoRA for Conv2d layers with 3x3 kernel
LoRA-LierLa is the default LoRA type for `train_network.py` (without `conv_dim` network arg). LoRA-LierLa can be used with [our extension](https://github.com/kohya-ss/sd-webui-additional-networks) for AUTOMATIC1111's Web UI, or with the built-in LoRA feature of the Web UI.
To use LoRA-C3Lier with the Web UI, please use our extension.
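For reference, a hedged command-line sketch of how LoRA-C3Lier is typically enabled in `train_network.py` (the `conv_dim`/`conv_alpha` network args add the 3x3 Conv2d modules; all values here are illustrative and the remaining dataset/output arguments are omitted):
```
accelerate launch train_network.py \
  --pretrained_model_name_or_path=<base model> \
  --network_module=networks.lora \
  --network_dim=32 --network_alpha=16 \
  --network_args "conv_dim=16" "conv_alpha=8" \
  <dataset, output and optimizer arguments as usual>
```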
## Sample image generation during training
A prompt file might look like this, for example
```
# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy,bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28
# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy,bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40
```
Lines beginning with `#` are comments. You can specify options for the generated image with flags like `--n` after the prompt. The following can be used:
* `--n` Negative prompt up to the next option.
* `--w` Specifies the width of the generated image.
* `--h` Specifies the height of the generated image.
* `--d` Specifies the seed of the generated image.
* `--l` Specifies the CFG scale of the generated image.
* `--s` Specifies the number of steps in the generation.
Prompt weighting such as `( )` and `[ ]` works.
## Troubleshooting
### Page File Limit
- If you get an error relating to the `page file`: increase the page file size limit in Windows.
### No module called tkinter
- Re-install [Python 3.10](https://www.python.org/ftp/python/3.10.9/python-3.10.9-amd64.exe) on your system.
### FileNotFoundError
This is usually related to an installation issue. Make sure you do not have any python modules installed locally that could conflict with the ones installed in the venv:
1. Open a new powershell terminal and make sure no venv is active.
2. Run the following commands:
```
pip freeze > uninstall.txt
pip uninstall -r uninstall.txt
```
This will store a backup file with your current locally installed pip packages and then uninstall them. Then, redo the installation instructions within the kohya_ss venv.
## Change History
* 2023/04/22 (v21.5.5)
- Update LoRA merge GUI to support SD checkpoint merge and up to 4 LoRA merging
- Fixed `lora_interrogator.py` not working. Please refer to [PR #392](https://github.com/kohya-ss/sd-scripts/pull/392) for details. Thank you A2va and heyalexchoi!
- Fixed the handling of tags containing `_` in `tag_images_by_wd14_tagger.py`.
- Add new Extract DyLoRA gui to the Utilities tab.
- Add new Merge LyCORIS models into checkpoint gui to the Utilities tab.
- Add new info on startup to help debug things
* 2023/04/17 (v21.5.4)
- Fixed a bug that caused an error when loading DyLoRA with the `--network_weight` option in `train_network.py`.
- Added the `--recursive` option to each script in the `finetune` folder to process folders recursively. Please refer to [PR #400](https://github.com/kohya-ss/sd-scripts/pull/400/) for details. Thanks to Linaqruf!
- Upgrade Gradio to latest release
- Fix issue when Adafactor is used as optimizer and LR Warmup is not 0: https://github.com/bmaltais/kohya_ss/issues/617
- Added support for DyLoRA in `train_network.py`. Please refer to [here](./train_network_README-ja.md#dylora) for details (currently only in Japanese).
- Added support for caching latents to disk in each training script. Please specify __both__ `--cache_latents` and `--cache_latents_to_disk` options.
- The files are saved in the same folder as the images with the extension `.npz`. If you specify the `--flip_aug` option, the files with `_flip.npz` will also be saved.
- Multi-GPU training has not been tested.
- This feature is not tested with all combinations of datasets and training scripts, so there may be bugs.
- Added workaround for an error that occurs when training with `fp16` or `bf16` in `fine_tune.py`.
    - Implemented DyLoRA GUI support. There will now be a new `DyLoRA Unit` slider when the LoRA type is selected as `kohya DyLoRA` to specify the desired Unit value for DyLoRA training.
- Update gui.bat and gui.ps1 based on: https://github.com/bmaltais/kohya_ss/issues/188
- Update `setup.bat` to install torch 2.0.0 instead of 1.2.1. If you want to upgrade from 1.2.1 to 2.0.0 run setup.bat again, select 1 to uninstall the previous torch modules, then select 2 for torch 2.0.0
* 2023/04/09 (v21.5.2)
- Added support for training with weighted captions. Thanks to AI-Casanova for the great contribution!
- Please refer to the PR for details: [PR #336](https://github.com/kohya-ss/sd-scripts/pull/336)
- Specify the `--weighted_captions` option. It is available for all training scripts except Textual Inversion and XTI.
- This option is also applicable to token strings of the DreamBooth method.
- The syntax for weighted captions is almost the same as the Web UI, and you can use things like `(abc)`, `[abc]`, and `(abc:1.23)`. Nesting is also possible.
- If you include a comma in the parentheses, the parentheses will not be properly matched in the prompt shuffle/dropout, so do not include a comma in the parentheses.
- Run gui.sh from any place
|
Anonymous/ReasonBERT-RoBERTa
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ttkSuperSpirit2 API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace the key in the code below, and change **model_id** to "ttksuperspirit2".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/ttksuperspirit2)
Credits: [View credits](https://civitai.com/?query=ttkSuperSpirit2)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "ttksuperspirit2",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5674
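A hedged inference sketch (the model id below is a placeholder for this repository; the `generate questions:` prefix and `<sep>`-separated outputs follow the squad_modified_for_t5_qg preprocessing convention, so adjust if your copy of the data differs):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<your-namespace>/t5-end2end-questions-generation"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "generate questions: Python is a programming language created by Guido van Rossum and first released in 1991. </s>"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))  # questions separated by <sep>
```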
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5884 | 0.34 | 100 | 1.9159 |
| 1.9705 | 0.68 | 200 | 1.7310 |
| 1.8439 | 1.02 | 300 | 1.6672 |
| 1.7426 | 1.35 | 400 | 1.6382 |
| 1.7147 | 1.69 | 500 | 1.6199 |
| 1.6908 | 2.03 | 600 | 1.6053 |
| 1.6315 | 2.37 | 700 | 1.5967 |
| 1.627 | 2.71 | 800 | 1.5939 |
| 1.6122 | 3.05 | 900 | 1.5877 |
| 1.5706 | 3.39 | 1000 | 1.5861 |
| 1.5708 | 3.73 | 1100 | 1.5742 |
| 1.5534 | 4.06 | 1200 | 1.5798 |
| 1.5351 | 4.4 | 1300 | 1.5738 |
| 1.5226 | 4.74 | 1400 | 1.5757 |
| 1.5187 | 5.08 | 1500 | 1.5727 |
| 1.4963 | 5.42 | 1600 | 1.5710 |
| 1.4841 | 5.76 | 1700 | 1.5668 |
| 1.5025 | 6.1 | 1800 | 1.5688 |
| 1.4778 | 6.44 | 1900 | 1.5717 |
| 1.4769 | 6.77 | 2000 | 1.5674 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-11
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9456 | 1.0 | 1 | 0.8527 |
| 2.2138 | 2.0 | 2 | 1.4124 |
| 2.2592 | 3.0 | 3 | 0.0520 |
| 1.2128 | 4.0 | 4 | 0.1093 |
| 0.7006 | 5.0 | 5 | 0.0019 |
| 1.0518 | 6.0 | 6 | 0.0063 |
| 0.7639 | 7.0 | 7 | 2.9212 |
| 0.4586 | 8.0 | 8 | 0.8805 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/AR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
Access to model Tian03/chilloutmix is restricted and you are not in the authorized list. Visit https://huggingface.co/Tian03/chilloutmix to ask for access.
|
AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
Access to model ZhangAo/VPGTrans is restricted and you are not in the authorized list. Visit https://huggingface.co/ZhangAo/VPGTrans to ask for access.
|
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
BART model #4, pretrained on XSUM and fine-tuned on SAMSUM.
|
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: bhadresh-savani/ppo-PyramidRND
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: 20230429-001-baseline-mbart-no-qa-ft-clickbait-spoiling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230429-001-baseline-mbart-no-qa-ft-clickbait-spoiling
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
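A minimal sketch of the same configuration, assuming the `Seq2SeqTrainer` API (the dataset and output directory are placeholders, since the card does not name them):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

checkpoint = "facebook/mbart-large-50"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

args = Seq2SeqTrainingArguments(
    output_dir="out",                  # hypothetical
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
)
```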
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 2.7084 |
| No log | 2.0 | 400 | 2.7053 |
| 2.3618 | 3.0 | 600 | 3.5040 |
| 2.3618 | 4.0 | 800 | 4.7206 |
| 0.4143 | 5.0 | 1000 | 6.2197 |
| 0.4143 | 6.0 | 1200 | 6.2887 |
| 0.4143 | 7.0 | 1400 | 7.3206 |
| 0.0651 | 8.0 | 1600 | 7.9945 |
| 0.0651 | 9.0 | 1800 | 7.7822 |
| 0.4984 | 10.0 | 2000 | 7.0787 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_1.5e_07
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_1.5e_07
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9722
- Train Prediction Logits Accuracy: 0.0837
- Train Seq Relationship Logits Accuracy: 0.5737
- Validation Loss: 0.9894
- Validation Prediction Logits Accuracy: 0.0841
- Validation Seq Relationship Logits Accuracy: 0.5270
- Train Lr: 1.3699909e-07
- Epoch: 1099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1.3699909e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
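The optimizer dictionary above corresponds to a standard Keras Adam optimizer; a hedged reconstruction (the model itself is not shown and is assumed to be compiled separately) looks like:

```python
import tensorflow as tf

# Initial learning rate 1.5e-07; the `decay: 0.0` entry means no extra
# learning-rate decay term was configured on the optimizer itself.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1.5e-07,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
# model.compile(optimizer=optimizer, ...)   # hypothetical model
```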
### Training results
| Train Loss | Train Prediction Logits Accuracy | Train Seq Relationship Logits Accuracy | Validation Loss | Validation Prediction Logits Accuracy | Validation Seq Relationship Logits Accuracy | Train Lr | Epoch |
|:----------:|:--------------------------------:|:--------------------------------------:|:---------------:|:-------------------------------------:|:-------------------------------------------:|:-------------:|:-----:|
| 9.5260 | 0.0204 | 0.5000 | 7.9292 | 0.0613 | 0.5064 | 1.5e-07 | 0 |
| 7.3406 | 0.0656 | 0.5010 | 5.6988 | 0.0677 | 0.5065 | 1.4999998e-07 | 1 |
| 5.6964 | 0.0685 | 0.5021 | 4.3577 | 0.0692 | 0.4935 | 1.4999993e-07 | 2 |
| 4.5795 | 0.0691 | 0.4985 | 3.5870 | 0.0695 | 0.4934 | 1.4999986e-07 | 3 |
| 3.8527 | 0.0695 | 0.5009 | 3.1469 | 0.0697 | 0.4937 | 1.4999978e-07 | 4 |
| 3.3783 | 0.0699 | 0.5004 | 2.8688 | 0.0704 | 0.4938 | 1.4999966e-07 | 5 |
| 3.0601 | 0.0702 | 0.5003 | 2.6719 | 0.0705 | 0.4934 | 1.4999954e-07 | 6 |
| 2.8273 | 0.0706 | 0.5001 | 2.5258 | 0.0708 | 0.5109 | 1.4999938e-07 | 7 |
| 2.6510 | 0.0708 | 0.4990 | 2.3968 | 0.0712 | 0.4935 | 1.499992e-07 | 8 |
| 2.5079 | 0.0710 | 0.5016 | 2.3034 | 0.0710 | 0.4948 | 1.49999e-07 | 9 |
| 2.3898 | 0.0712 | 0.5023 | 2.2186 | 0.0710 | 0.5010 | 1.4999877e-07 | 10 |
| 2.2882 | 0.0713 | 0.5011 | 2.1318 | 0.0711 | 0.5056 | 1.4999853e-07 | 11 |
| 2.2059 | 0.0712 | 0.4980 | 2.0714 | 0.0711 | 0.5063 | 1.4999826e-07 | 12 |
| 2.1270 | 0.0714 | 0.5001 | 1.9996 | 0.0717 | 0.4960 | 1.4999796e-07 | 13 |
| 2.0552 | 0.0715 | 0.5012 | 1.9424 | 0.0718 | 0.4935 | 1.4999765e-07 | 14 |
| 1.9923 | 0.0717 | 0.5009 | 1.8947 | 0.0719 | 0.5065 | 1.499973e-07 | 15 |
| 1.9342 | 0.0716 | 0.5006 | 1.8550 | 0.0713 | 0.5051 | 1.4999695e-07 | 16 |
| 1.8840 | 0.0717 | 0.5001 | 1.8053 | 0.0719 | 0.5067 | 1.4999657e-07 | 17 |
| 1.8390 | 0.0717 | 0.5004 | 1.7600 | 0.0719 | 0.4935 | 1.4999617e-07 | 18 |
| 1.7952 | 0.0719 | 0.4994 | 1.7337 | 0.0718 | 0.4935 | 1.4999574e-07 | 19 |
| 1.7600 | 0.0718 | 0.5012 | 1.6990 | 0.0721 | 0.4938 | 1.4999529e-07 | 20 |
| 1.7260 | 0.0718 | 0.4993 | 1.6681 | 0.0717 | 0.5079 | 1.4999482e-07 | 21 |
| 1.6932 | 0.0719 | 0.4989 | 1.6435 | 0.0723 | 0.5060 | 1.4999432e-07 | 22 |
| 1.6684 | 0.0719 | 0.5011 | 1.6311 | 0.0719 | 0.5089 | 1.4999381e-07 | 23 |
| 1.6398 | 0.0721 | 0.5001 | 1.6059 | 0.0721 | 0.5046 | 1.4999327e-07 | 24 |
| 1.6206 | 0.0721 | 0.4994 | 1.5891 | 0.0722 | 0.5010 | 1.499927e-07 | 25 |
| 1.6023 | 0.0720 | 0.5001 | 1.5696 | 0.0719 | 0.4920 | 1.4999212e-07 | 26 |
| 1.5829 | 0.0721 | 0.5028 | 1.5487 | 0.0724 | 0.4946 | 1.4999151e-07 | 27 |
| 1.5700 | 0.0721 | 0.5007 | 1.5356 | 0.0726 | 0.4916 | 1.4999088e-07 | 28 |
| 1.5566 | 0.0722 | 0.5016 | 1.5258 | 0.0723 | 0.4986 | 1.4999023e-07 | 29 |
| 1.5438 | 0.0723 | 0.4998 | 1.5051 | 0.0729 | 0.5060 | 1.4998956e-07 | 30 |
| 1.5316 | 0.0722 | 0.5050 | 1.5021 | 0.0723 | 0.5040 | 1.4998886e-07 | 31 |
| 1.5232 | 0.0723 | 0.5011 | 1.4969 | 0.0728 | 0.5013 | 1.4998814e-07 | 32 |
| 1.5136 | 0.0723 | 0.5004 | 1.4811 | 0.0729 | 0.5056 | 1.499874e-07 | 33 |
| 1.5016 | 0.0724 | 0.5022 | 1.4751 | 0.0727 | 0.5060 | 1.4998663e-07 | 34 |
| 1.4957 | 0.0724 | 0.5011 | 1.4774 | 0.0727 | 0.5056 | 1.4998585e-07 | 35 |
| 1.4867 | 0.0726 | 0.4999 | 1.4690 | 0.0726 | 0.4937 | 1.4998504e-07 | 36 |
| 1.4813 | 0.0726 | 0.5018 | 1.4597 | 0.0727 | 0.5065 | 1.499842e-07 | 37 |
| 1.4744 | 0.0726 | 0.4994 | 1.4516 | 0.0728 | 0.5001 | 1.4998335e-07 | 38 |
| 1.4692 | 0.0727 | 0.5010 | 1.4557 | 0.0725 | 0.4996 | 1.4998247e-07 | 39 |
| 1.4644 | 0.0727 | 0.5023 | 1.4446 | 0.0730 | 0.4944 | 1.4998157e-07 | 40 |
| 1.4575 | 0.0728 | 0.5007 | 1.4391 | 0.0732 | 0.5064 | 1.4998065e-07 | 41 |
| 1.4532 | 0.0727 | 0.5028 | 1.4332 | 0.0729 | 0.5092 | 1.4997971e-07 | 42 |
| 1.4481 | 0.0728 | 0.5006 | 1.4300 | 0.0733 | 0.5089 | 1.4997875e-07 | 43 |
| 1.4448 | 0.0728 | 0.5012 | 1.4317 | 0.0733 | 0.4942 | 1.4997775e-07 | 44 |
| 1.4401 | 0.0729 | 0.5000 | 1.4262 | 0.0730 | 0.5074 | 1.4997674e-07 | 45 |
| 1.4346 | 0.0730 | 0.5018 | 1.4172 | 0.0732 | 0.4927 | 1.499757e-07 | 46 |
| 1.4309 | 0.0731 | 0.5004 | 1.4157 | 0.0730 | 0.4997 | 1.4997465e-07 | 47 |
| 1.4276 | 0.0731 | 0.5016 | 1.4137 | 0.0735 | 0.5060 | 1.4997357e-07 | 48 |
| 1.4232 | 0.0731 | 0.5023 | 1.4092 | 0.0737 | 0.4946 | 1.4997246e-07 | 49 |
| 1.4189 | 0.0732 | 0.5019 | 1.4008 | 0.0737 | 0.4953 | 1.4997134e-07 | 50 |
| 1.4166 | 0.0732 | 0.5 | 1.4008 | 0.0734 | 0.5050 | 1.4997019e-07 | 51 |
| 1.4132 | 0.0733 | 0.4990 | 1.3988 | 0.0732 | 0.4979 | 1.4996903e-07 | 52 |
| 1.4105 | 0.0734 | 0.5016 | 1.3899 | 0.0736 | 0.5002 | 1.4996783e-07 | 53 |
| 1.4053 | 0.0735 | 0.5012 | 1.3838 | 0.0739 | 0.4957 | 1.4996662e-07 | 54 |
| 1.4046 | 0.0735 | 0.5010 | 1.3879 | 0.0735 | 0.4924 | 1.4996539e-07 | 55 |
| 1.3990 | 0.0736 | 0.5016 | 1.3789 | 0.0739 | 0.5053 | 1.4996412e-07 | 56 |
| 1.3937 | 0.0737 | 0.5017 | 1.3742 | 0.0743 | 0.5066 | 1.4996284e-07 | 57 |
| 1.3912 | 0.0738 | 0.5009 | 1.3741 | 0.0741 | 0.4948 | 1.4996154e-07 | 58 |
| 1.3856 | 0.0739 | 0.5020 | 1.3698 | 0.0739 | 0.4959 | 1.4996021e-07 | 59 |
| 1.3814 | 0.0740 | 0.5028 | 1.3658 | 0.0746 | 0.5046 | 1.4995886e-07 | 60 |
| 1.3770 | 0.0742 | 0.5011 | 1.3604 | 0.0745 | 0.4974 | 1.4995749e-07 | 61 |
| 1.3711 | 0.0744 | 0.5010 | 1.3503 | 0.0748 | 0.4927 | 1.499561e-07 | 62 |
| 1.3671 | 0.0745 | 0.5011 | 1.3492 | 0.0750 | 0.5018 | 1.4995467e-07 | 63 |
| 1.3621 | 0.0747 | 0.4991 | 1.3396 | 0.0752 | 0.4952 | 1.4995324e-07 | 64 |
| 1.3589 | 0.0747 | 0.4994 | 1.3410 | 0.0752 | 0.4991 | 1.4995177e-07 | 65 |
| 1.3534 | 0.0750 | 0.5021 | 1.3248 | 0.0753 | 0.4944 | 1.499503e-07 | 66 |
| 1.3488 | 0.0751 | 0.5024 | 1.3254 | 0.0756 | 0.5038 | 1.4994879e-07 | 67 |
| 1.3419 | 0.0753 | 0.4997 | 1.3208 | 0.0759 | 0.5024 | 1.4994725e-07 | 68 |
| 1.3393 | 0.0754 | 0.5016 | 1.3145 | 0.0763 | 0.4986 | 1.499457e-07 | 69 |
| 1.3355 | 0.0756 | 0.5012 | 1.3161 | 0.0759 | 0.5096 | 1.4994413e-07 | 70 |
| 1.3297 | 0.0757 | 0.5005 | 1.3108 | 0.0763 | 0.5055 | 1.4994254e-07 | 71 |
| 1.3254 | 0.0758 | 0.5014 | 1.3048 | 0.0761 | 0.4978 | 1.4994092e-07 | 72 |
| 1.3219 | 0.0758 | 0.5006 | 1.3093 | 0.0758 | 0.5080 | 1.4993927e-07 | 73 |
| 1.3185 | 0.0759 | 0.5023 | 1.2945 | 0.0767 | 0.5004 | 1.499376e-07 | 74 |
| 1.3141 | 0.0760 | 0.5018 | 1.2883 | 0.0764 | 0.4955 | 1.4993591e-07 | 75 |
| 1.3114 | 0.0761 | 0.5008 | 1.2922 | 0.0763 | 0.5009 | 1.4993421e-07 | 76 |
| 1.3070 | 0.0762 | 0.5014 | 1.2830 | 0.0767 | 0.4990 | 1.4993248e-07 | 77 |
| 1.3034 | 0.0763 | 0.5016 | 1.2844 | 0.0767 | 0.5001 | 1.4993073e-07 | 78 |
| 1.2985 | 0.0766 | 0.5025 | 1.2710 | 0.0767 | 0.4993 | 1.4992895e-07 | 79 |
| 1.2948 | 0.0765 | 0.5025 | 1.2759 | 0.0768 | 0.4975 | 1.4992715e-07 | 80 |
| 1.2925 | 0.0766 | 0.5019 | 1.2760 | 0.0767 | 0.4979 | 1.4992533e-07 | 81 |
| 1.2906 | 0.0766 | 0.5019 | 1.2707 | 0.0771 | 0.5036 | 1.4992348e-07 | 82 |
| 1.2851 | 0.0768 | 0.5005 | 1.2572 | 0.0772 | 0.5064 | 1.4992162e-07 | 83 |
| 1.2806 | 0.0769 | 0.5032 | 1.2593 | 0.0769 | 0.5064 | 1.4991973e-07 | 84 |
| 1.2775 | 0.0770 | 0.5013 | 1.2618 | 0.0772 | 0.4996 | 1.4991781e-07 | 85 |
| 1.2756 | 0.0769 | 0.5020 | 1.2506 | 0.0776 | 0.5059 | 1.4991588e-07 | 86 |
| 1.2700 | 0.0771 | 0.5038 | 1.2453 | 0.0779 | 0.5039 | 1.4991392e-07 | 87 |
| 1.2674 | 0.0773 | 0.5015 | 1.2436 | 0.0773 | 0.5029 | 1.4991194e-07 | 88 |
| 1.2632 | 0.0774 | 0.4994 | 1.2387 | 0.0777 | 0.5058 | 1.4990994e-07 | 89 |
| 1.2591 | 0.0775 | 0.5020 | 1.2364 | 0.0777 | 0.4992 | 1.4990792e-07 | 90 |
| 1.2553 | 0.0775 | 0.5020 | 1.2271 | 0.0785 | 0.5008 | 1.4990587e-07 | 91 |
| 1.2521 | 0.0776 | 0.5007 | 1.2343 | 0.0780 | 0.5030 | 1.499038e-07 | 92 |
| 1.2480 | 0.0778 | 0.5018 | 1.2199 | 0.0785 | 0.5037 | 1.4990171e-07 | 93 |
| 1.2431 | 0.0779 | 0.5017 | 1.2186 | 0.0784 | 0.5040 | 1.4989959e-07 | 94 |
| 1.2394 | 0.0780 | 0.5031 | 1.2139 | 0.0789 | 0.5041 | 1.4989746e-07 | 95 |
| 1.2352 | 0.0781 | 0.5024 | 1.2114 | 0.0785 | 0.5009 | 1.498953e-07 | 96 |
| 1.2327 | 0.0782 | 0.5022 | 1.2073 | 0.0789 | 0.5064 | 1.4989313e-07 | 97 |
| 1.2276 | 0.0784 | 0.5023 | 1.2037 | 0.0786 | 0.4961 | 1.4989092e-07 | 98 |
| 1.2232 | 0.0785 | 0.5038 | 1.1962 | 0.0788 | 0.5063 | 1.4988869e-07 | 99 |
| 1.2218 | 0.0785 | 0.5022 | 1.1963 | 0.0789 | 0.5030 | 1.4988645e-07 | 100 |
| 1.2160 | 0.0787 | 0.5005 | 1.1878 | 0.0791 | 0.4976 | 1.4988417e-07 | 101 |
| 1.2130 | 0.0787 | 0.5008 | 1.1819 | 0.0798 | 0.5079 | 1.4988188e-07 | 102 |
| 1.2114 | 0.0788 | 0.5023 | 1.1810 | 0.0797 | 0.5054 | 1.4987957e-07 | 103 |
| 1.2083 | 0.0789 | 0.5013 | 1.1820 | 0.0788 | 0.5001 | 1.4987722e-07 | 104 |
| 1.2035 | 0.0790 | 0.5023 | 1.1754 | 0.0800 | 0.5054 | 1.4987486e-07 | 105 |
| 1.2007 | 0.0790 | 0.5026 | 1.1731 | 0.0798 | 0.4948 | 1.4987248e-07 | 106 |
| 1.1981 | 0.0792 | 0.5016 | 1.1696 | 0.0798 | 0.5037 | 1.4987008e-07 | 107 |
| 1.1952 | 0.0792 | 0.5018 | 1.1694 | 0.0794 | 0.5078 | 1.4986765e-07 | 108 |
| 1.1937 | 0.0792 | 0.5020 | 1.1698 | 0.0796 | 0.5096 | 1.498652e-07 | 109 |
| 1.1896 | 0.0794 | 0.5025 | 1.1632 | 0.0797 | 0.4958 | 1.4986273e-07 | 110 |
| 1.1865 | 0.0793 | 0.5020 | 1.1635 | 0.0798 | 0.5064 | 1.4986023e-07 | 111 |
| 1.1853 | 0.0794 | 0.5015 | 1.1594 | 0.0800 | 0.5060 | 1.4985771e-07 | 112 |
| 1.1832 | 0.0794 | 0.5021 | 1.1555 | 0.0803 | 0.5001 | 1.4985517e-07 | 113 |
| 1.1798 | 0.0796 | 0.5019 | 1.1592 | 0.0798 | 0.5061 | 1.4985261e-07 | 114 |
| 1.1771 | 0.0797 | 0.5029 | 1.1523 | 0.0799 | 0.5054 | 1.4985002e-07 | 115 |
| 1.1771 | 0.0796 | 0.5031 | 1.1546 | 0.0800 | 0.5066 | 1.4984742e-07 | 116 |
| 1.1753 | 0.0797 | 0.5036 | 1.1489 | 0.0801 | 0.4990 | 1.498448e-07 | 117 |
| 1.1732 | 0.0796 | 0.5025 | 1.1451 | 0.0804 | 0.5052 | 1.4984214e-07 | 118 |
| 1.1698 | 0.0798 | 0.5030 | 1.1466 | 0.0805 | 0.5083 | 1.4983947e-07 | 119 |
| 1.1670 | 0.0798 | 0.5012 | 1.1417 | 0.0804 | 0.5043 | 1.4983677e-07 | 120 |
| 1.1655 | 0.0798 | 0.5034 | 1.1459 | 0.0803 | 0.5071 | 1.4983405e-07 | 121 |
| 1.1641 | 0.0799 | 0.5009 | 1.1438 | 0.0800 | 0.5111 | 1.4983131e-07 | 122 |
| 1.1618 | 0.0798 | 0.5031 | 1.1437 | 0.0805 | 0.5030 | 1.4982854e-07 | 123 |
| 1.1620 | 0.0799 | 0.5019 | 1.1443 | 0.0799 | 0.5068 | 1.4982575e-07 | 124 |
| 1.1591 | 0.0799 | 0.5018 | 1.1361 | 0.0801 | 0.4988 | 1.4982294e-07 | 125 |
| 1.1583 | 0.0798 | 0.5043 | 1.1314 | 0.0800 | 0.4946 | 1.4982011e-07 | 126 |
| 1.1554 | 0.0800 | 0.5032 | 1.1403 | 0.0802 | 0.5056 | 1.4981725e-07 | 127 |
| 1.1546 | 0.0800 | 0.5033 | 1.1299 | 0.0806 | 0.5048 | 1.4981438e-07 | 128 |
| 1.1537 | 0.0800 | 0.5043 | 1.1273 | 0.0805 | 0.4951 | 1.4981148e-07 | 129 |
| 1.1519 | 0.0801 | 0.5046 | 1.1335 | 0.0803 | 0.5039 | 1.4980856e-07 | 130 |
| 1.1517 | 0.0800 | 0.5023 | 1.1317 | 0.0802 | 0.4949 | 1.4980562e-07 | 131 |
| 1.1487 | 0.0802 | 0.5025 | 1.1292 | 0.0803 | 0.5028 | 1.4980264e-07 | 132 |
| 1.1490 | 0.0802 | 0.5002 | 1.1317 | 0.0805 | 0.5052 | 1.4979966e-07 | 133 |
| 1.1466 | 0.0802 | 0.5031 | 1.1328 | 0.0802 | 0.4985 | 1.4979665e-07 | 134 |
| 1.1451 | 0.0802 | 0.5043 | 1.1269 | 0.0805 | 0.4955 | 1.4979362e-07 | 135 |
| 1.1439 | 0.0802 | 0.5037 | 1.1245 | 0.0803 | 0.4933 | 1.4979057e-07 | 136 |
| 1.1437 | 0.0802 | 0.5012 | 1.1288 | 0.0806 | 0.5059 | 1.4978748e-07 | 137 |
| 1.1415 | 0.0803 | 0.5039 | 1.1251 | 0.0808 | 0.4942 | 1.4978438e-07 | 138 |
| 1.1403 | 0.0803 | 0.5030 | 1.1262 | 0.0804 | 0.4975 | 1.4978126e-07 | 139 |
| 1.1380 | 0.0804 | 0.5005 | 1.1251 | 0.0806 | 0.5032 | 1.4977812e-07 | 140 |
| 1.1380 | 0.0804 | 0.5034 | 1.1239 | 0.0806 | 0.4974 | 1.4977495e-07 | 141 |
| 1.1363 | 0.0803 | 0.5033 | 1.1184 | 0.0807 | 0.5089 | 1.4977176e-07 | 142 |
| 1.1355 | 0.0804 | 0.5025 | 1.1143 | 0.0809 | 0.5102 | 1.4976855e-07 | 143 |
| 1.1335 | 0.0804 | 0.5032 | 1.1167 | 0.0806 | 0.5020 | 1.4976531e-07 | 144 |
| 1.1337 | 0.0804 | 0.5022 | 1.1231 | 0.0810 | 0.5077 | 1.4976206e-07 | 145 |
| 1.1307 | 0.0806 | 0.5017 | 1.1128 | 0.0809 | 0.5073 | 1.4975878e-07 | 146 |
| 1.1296 | 0.0806 | 0.5022 | 1.1115 | 0.0810 | 0.5112 | 1.4975548e-07 | 147 |
| 1.1297 | 0.0805 | 0.5024 | 1.1103 | 0.0807 | 0.5128 | 1.4975215e-07 | 148 |
| 1.1273 | 0.0806 | 0.5008 | 1.1081 | 0.0810 | 0.4954 | 1.497488e-07 | 149 |
| 1.1269 | 0.0805 | 0.5037 | 1.1082 | 0.0812 | 0.5083 | 1.4974543e-07 | 150 |
| 1.1241 | 0.0807 | 0.5028 | 1.1056 | 0.0811 | 0.4949 | 1.4974204e-07 | 151 |
| 1.1238 | 0.0807 | 0.5025 | 1.1062 | 0.0805 | 0.4975 | 1.4973863e-07 | 152 |
| 1.1225 | 0.0808 | 0.5017 | 1.1053 | 0.0814 | 0.5128 | 1.4973519e-07 | 153 |
| 1.1219 | 0.0807 | 0.5015 | 1.1039 | 0.0812 | 0.5076 | 1.4973173e-07 | 154 |
| 1.1182 | 0.0809 | 0.5040 | 1.1033 | 0.0811 | 0.4952 | 1.4972825e-07 | 155 |
| 1.1187 | 0.0809 | 0.5021 | 1.1001 | 0.0809 | 0.5001 | 1.4972474e-07 | 156 |
| 1.1175 | 0.0808 | 0.5041 | 1.0957 | 0.0813 | 0.5097 | 1.4972122e-07 | 157 |
| 1.1166 | 0.0808 | 0.5031 | 1.1002 | 0.0814 | 0.5040 | 1.4971766e-07 | 158 |
| 1.1150 | 0.0809 | 0.5039 | 1.0943 | 0.0811 | 0.5105 | 1.497141e-07 | 159 |
| 1.1129 | 0.0810 | 0.5028 | 1.1034 | 0.0812 | 0.5065 | 1.497105e-07 | 160 |
| 1.1121 | 0.0810 | 0.5032 | 1.0932 | 0.0814 | 0.5068 | 1.4970689e-07 | 161 |
| 1.1098 | 0.0811 | 0.5028 | 1.0900 | 0.0819 | 0.5097 | 1.4970325e-07 | 162 |
| 1.1107 | 0.0811 | 0.5027 | 1.0916 | 0.0813 | 0.4964 | 1.4969959e-07 | 163 |
| 1.1076 | 0.0812 | 0.5037 | 1.0909 | 0.0815 | 0.5098 | 1.4969591e-07 | 164 |
| 1.1055 | 0.0812 | 0.5021 | 1.0897 | 0.0817 | 0.5079 | 1.496922e-07 | 165 |
| 1.1046 | 0.0811 | 0.5040 | 1.0849 | 0.0813 | 0.5074 | 1.4968847e-07 | 166 |
| 1.1054 | 0.0811 | 0.5037 | 1.0844 | 0.0820 | 0.5037 | 1.4968472e-07 | 167 |
| 1.1025 | 0.0812 | 0.5034 | 1.0845 | 0.0819 | 0.5073 | 1.4968096e-07 | 168 |
| 1.1015 | 0.0814 | 0.5017 | 1.0838 | 0.0817 | 0.5088 | 1.4967716e-07 | 169 |
| 1.1014 | 0.0814 | 0.5039 | 1.0806 | 0.0816 | 0.5061 | 1.4967334e-07 | 170 |
| 1.0998 | 0.0814 | 0.5042 | 1.0873 | 0.0818 | 0.4955 | 1.496695e-07 | 171 |
| 1.0986 | 0.0815 | 0.5022 | 1.0886 | 0.0814 | 0.5117 | 1.4966564e-07 | 172 |
| 1.0959 | 0.0815 | 0.5045 | 1.0737 | 0.0818 | 0.4975 | 1.4966176e-07 | 173 |
| 1.0959 | 0.0815 | 0.5012 | 1.0793 | 0.0820 | 0.4949 | 1.4965785e-07 | 174 |
| 1.0952 | 0.0815 | 0.5020 | 1.0809 | 0.0817 | 0.5011 | 1.4965393e-07 | 175 |
| 1.0944 | 0.0815 | 0.5040 | 1.0766 | 0.0818 | 0.5037 | 1.4964998e-07 | 176 |
| 1.0939 | 0.0815 | 0.5041 | 1.0751 | 0.0814 | 0.5039 | 1.49646e-07 | 177 |
| 1.0918 | 0.0815 | 0.5035 | 1.0773 | 0.0819 | 0.5076 | 1.49642e-07 | 178 |
| 1.0908 | 0.0815 | 0.5034 | 1.0701 | 0.0820 | 0.4952 | 1.4963798e-07 | 179 |
| 1.0908 | 0.0815 | 0.5031 | 1.0756 | 0.0817 | 0.5011 | 1.4963395e-07 | 180 |
| 1.0894 | 0.0815 | 0.5029 | 1.0687 | 0.0816 | 0.5121 | 1.4962988e-07 | 181 |
| 1.0897 | 0.0816 | 0.5045 | 1.0735 | 0.0821 | 0.5068 | 1.496258e-07 | 182 |
| 1.0878 | 0.0817 | 0.5044 | 1.0749 | 0.0822 | 0.4951 | 1.496217e-07 | 183 |
| 1.0863 | 0.0816 | 0.5019 | 1.0675 | 0.0822 | 0.5047 | 1.4961756e-07 | 184 |
| 1.0874 | 0.0816 | 0.5035 | 1.0738 | 0.0819 | 0.5045 | 1.4961341e-07 | 185 |
| 1.0849 | 0.0816 | 0.5033 | 1.0744 | 0.0820 | 0.5061 | 1.4960924e-07 | 186 |
| 1.0847 | 0.0816 | 0.5043 | 1.0721 | 0.0817 | 0.5085 | 1.4960504e-07 | 187 |
| 1.0846 | 0.0817 | 0.5057 | 1.0709 | 0.0818 | 0.5088 | 1.4960082e-07 | 188 |
| 1.0838 | 0.0817 | 0.5029 | 1.0697 | 0.0818 | 0.5080 | 1.4959659e-07 | 189 |
| 1.0816 | 0.0817 | 0.5049 | 1.0691 | 0.0822 | 0.5072 | 1.4959232e-07 | 190 |
| 1.0807 | 0.0818 | 0.5055 | 1.0615 | 0.0822 | 0.5068 | 1.4958803e-07 | 191 |
| 1.0808 | 0.0818 | 0.5025 | 1.0641 | 0.0820 | 0.5024 | 1.4958373e-07 | 192 |
| 1.0813 | 0.0818 | 0.5047 | 1.0646 | 0.0818 | 0.5049 | 1.4957939e-07 | 193 |
| 1.0790 | 0.0818 | 0.5062 | 1.0642 | 0.0820 | 0.5042 | 1.4957504e-07 | 194 |
| 1.0792 | 0.0818 | 0.5029 | 1.0661 | 0.0823 | 0.4942 | 1.4957067e-07 | 195 |
| 1.0793 | 0.0818 | 0.5044 | 1.0624 | 0.0826 | 0.5035 | 1.4956628e-07 | 196 |
| 1.0786 | 0.0818 | 0.5030 | 1.0629 | 0.0826 | 0.5092 | 1.4956186e-07 | 197 |
| 1.0772 | 0.0819 | 0.5058 | 1.0617 | 0.0826 | 0.4999 | 1.4955741e-07 | 198 |
| 1.0761 | 0.0818 | 0.5028 | 1.0628 | 0.0821 | 0.5085 | 1.4955295e-07 | 199 |
| 1.0754 | 0.0819 | 0.5037 | 1.0633 | 0.0821 | 0.4953 | 1.4954846e-07 | 200 |
| 1.0752 | 0.0819 | 0.5042 | 1.0634 | 0.0825 | 0.5068 | 1.4954395e-07 | 201 |
| 1.0734 | 0.0819 | 0.5030 | 1.0635 | 0.0821 | 0.5055 | 1.4953942e-07 | 202 |
| 1.0746 | 0.0819 | 0.5073 | 1.0557 | 0.0822 | 0.5128 | 1.4953487e-07 | 203 |
| 1.0725 | 0.0819 | 0.5034 | 1.0576 | 0.0822 | 0.5111 | 1.495303e-07 | 204 |
| 1.0725 | 0.0820 | 0.5068 | 1.0581 | 0.0822 | 0.5083 | 1.4952569e-07 | 205 |
| 1.0724 | 0.0819 | 0.5049 | 1.0599 | 0.0819 | 0.5 | 1.4952107e-07 | 206 |
| 1.0724 | 0.0819 | 0.5041 | 1.0595 | 0.0825 | 0.5001 | 1.4951642e-07 | 207 |
| 1.0703 | 0.0819 | 0.5050 | 1.0521 | 0.0823 | 0.5109 | 1.4951176e-07 | 208 |
| 1.0696 | 0.0819 | 0.5052 | 1.0546 | 0.0822 | 0.5018 | 1.4950707e-07 | 209 |
| 1.0697 | 0.0820 | 0.5060 | 1.0591 | 0.0826 | 0.5004 | 1.4950237e-07 | 210 |
| 1.0691 | 0.0820 | 0.5048 | 1.0527 | 0.0825 | 0.5079 | 1.4949764e-07 | 211 |
| 1.0688 | 0.0820 | 0.5039 | 1.0465 | 0.0827 | 0.5020 | 1.4949288e-07 | 212 |
| 1.0674 | 0.0820 | 0.5042 | 1.0557 | 0.0819 | 0.5068 | 1.494881e-07 | 213 |
| 1.0676 | 0.0821 | 0.5062 | 1.0536 | 0.0820 | 0.5075 | 1.494833e-07 | 214 |
| 1.0653 | 0.0821 | 0.5052 | 1.0543 | 0.0824 | 0.5064 | 1.4947848e-07 | 215 |
| 1.0652 | 0.0821 | 0.5070 | 1.0509 | 0.0826 | 0.5028 | 1.4947364e-07 | 216 |
| 1.0665 | 0.0821 | 0.5058 | 1.0500 | 0.0823 | 0.4998 | 1.4946878e-07 | 217 |
| 1.0646 | 0.0821 | 0.5042 | 1.0509 | 0.0825 | 0.5068 | 1.4946389e-07 | 218 |
| 1.0649 | 0.0821 | 0.5054 | 1.0552 | 0.0821 | 0.5010 | 1.4945898e-07 | 219 |
| 1.0635 | 0.0821 | 0.5048 | 1.0460 | 0.0825 | 0.5117 | 1.4945405e-07 | 220 |
| 1.0635 | 0.0821 | 0.5042 | 1.0551 | 0.0825 | 0.4955 | 1.494491e-07 | 221 |
| 1.0635 | 0.0821 | 0.5035 | 1.0486 | 0.0821 | 0.5011 | 1.4944412e-07 | 222 |
| 1.0620 | 0.0822 | 0.5045 | 1.0508 | 0.0825 | 0.5080 | 1.4943912e-07 | 223 |
| 1.0615 | 0.0822 | 0.5039 | 1.0470 | 0.0827 | 0.5055 | 1.494341e-07 | 224 |
| 1.0620 | 0.0821 | 0.5050 | 1.0560 | 0.0821 | 0.5049 | 1.4942906e-07 | 225 |
| 1.0615 | 0.0821 | 0.5054 | 1.0495 | 0.0823 | 0.5005 | 1.49424e-07 | 226 |
| 1.0600 | 0.0822 | 0.5075 | 1.0459 | 0.0825 | 0.5028 | 1.4941891e-07 | 227 |
| 1.0585 | 0.0823 | 0.5044 | 1.0448 | 0.0824 | 0.5081 | 1.494138e-07 | 228 |
| 1.0579 | 0.0823 | 0.5069 | 1.0462 | 0.0825 | 0.5061 | 1.4940866e-07 | 229 |
| 1.0585 | 0.0823 | 0.5043 | 1.0452 | 0.0826 | 0.5017 | 1.494035e-07 | 230 |
| 1.0588 | 0.0823 | 0.5058 | 1.0456 | 0.0828 | 0.5043 | 1.4939833e-07 | 231 |
| 1.0582 | 0.0822 | 0.5056 | 1.0434 | 0.0825 | 0.5004 | 1.4939313e-07 | 232 |
| 1.0566 | 0.0822 | 0.5066 | 1.0473 | 0.0825 | 0.4959 | 1.4938792e-07 | 233 |
| 1.0572 | 0.0822 | 0.5045 | 1.0428 | 0.0829 | 0.5099 | 1.4938267e-07 | 234 |
| 1.0574 | 0.0823 | 0.5043 | 1.0461 | 0.0827 | 0.5123 | 1.493774e-07 | 235 |
| 1.0573 | 0.0823 | 0.5038 | 1.0456 | 0.0826 | 0.4955 | 1.4937211e-07 | 236 |
| 1.0562 | 0.0823 | 0.5058 | 1.0485 | 0.0823 | 0.4985 | 1.493668e-07 | 237 |
| 1.0549 | 0.0822 | 0.5051 | 1.0430 | 0.0822 | 0.5038 | 1.4936147e-07 | 238 |
| 1.0548 | 0.0823 | 0.5042 | 1.0476 | 0.0824 | 0.5054 | 1.4935611e-07 | 239 |
| 1.0550 | 0.0824 | 0.5059 | 1.0421 | 0.0826 | 0.5086 | 1.4935074e-07 | 240 |
| 1.0546 | 0.0823 | 0.5068 | 1.0450 | 0.0824 | 0.5017 | 1.4934534e-07 | 241 |
| 1.0542 | 0.0824 | 0.5082 | 1.0407 | 0.0830 | 0.5022 | 1.4933993e-07 | 242 |
| 1.0545 | 0.0824 | 0.5062 | 1.0438 | 0.0823 | 0.5015 | 1.4933448e-07 | 243 |
| 1.0527 | 0.0823 | 0.5068 | 1.0408 | 0.0828 | 0.5059 | 1.4932901e-07 | 244 |
| 1.0537 | 0.0823 | 0.5051 | 1.0386 | 0.0824 | 0.4990 | 1.4932353e-07 | 245 |
| 1.0530 | 0.0822 | 0.5077 | 1.0385 | 0.0823 | 0.5020 | 1.4931801e-07 | 246 |
| 1.0528 | 0.0824 | 0.5072 | 1.0447 | 0.0827 | 0.5012 | 1.4931248e-07 | 247 |
| 1.0525 | 0.0824 | 0.5052 | 1.0445 | 0.0826 | 0.4958 | 1.4930693e-07 | 248 |
| 1.0513 | 0.0823 | 0.5070 | 1.0372 | 0.0826 | 0.5059 | 1.4930136e-07 | 249 |
| 1.0511 | 0.0824 | 0.5059 | 1.0398 | 0.0824 | 0.5004 | 1.4929576e-07 | 250 |
| 1.0508 | 0.0822 | 0.5057 | 1.0424 | 0.0826 | 0.5060 | 1.4929013e-07 | 251 |
| 1.0516 | 0.0824 | 0.5061 | 1.0397 | 0.0824 | 0.5052 | 1.4928449e-07 | 252 |
| 1.0501 | 0.0824 | 0.5064 | 1.0450 | 0.0824 | 0.5083 | 1.4927882e-07 | 253 |
| 1.0499 | 0.0824 | 0.5055 | 1.0409 | 0.0828 | 0.5009 | 1.4927313e-07 | 254 |
| 1.0497 | 0.0825 | 0.5040 | 1.0388 | 0.0826 | 0.5004 | 1.4926742e-07 | 255 |
| 1.0503 | 0.0824 | 0.5073 | 1.0368 | 0.0825 | 0.5026 | 1.492617e-07 | 256 |
| 1.0500 | 0.0824 | 0.5060 | 1.0406 | 0.0826 | 0.5051 | 1.4925594e-07 | 257 |
| 1.0497 | 0.0824 | 0.5049 | 1.0418 | 0.0824 | 0.5035 | 1.4925017e-07 | 258 |
| 1.0487 | 0.0825 | 0.5068 | 1.0394 | 0.0826 | 0.5104 | 1.4924437e-07 | 259 |
| 1.0476 | 0.0825 | 0.5047 | 1.0396 | 0.0831 | 0.4989 | 1.4923855e-07 | 260 |
| 1.0485 | 0.0825 | 0.5055 | 1.0385 | 0.0827 | 0.5060 | 1.492327e-07 | 261 |
| 1.0478 | 0.0825 | 0.5060 | 1.0397 | 0.0827 | 0.4992 | 1.4922684e-07 | 262 |
| 1.0487 | 0.0824 | 0.5034 | 1.0374 | 0.0827 | 0.5054 | 1.4922095e-07 | 263 |
| 1.0480 | 0.0824 | 0.5061 | 1.0354 | 0.0828 | 0.5059 | 1.4921504e-07 | 264 |
| 1.0469 | 0.0824 | 0.5060 | 1.0422 | 0.0828 | 0.5038 | 1.4920911e-07 | 265 |
| 1.0468 | 0.0825 | 0.5082 | 1.0371 | 0.0826 | 0.5054 | 1.4920316e-07 | 266 |
| 1.0460 | 0.0824 | 0.5051 | 1.0337 | 0.0830 | 0.5040 | 1.4919719e-07 | 267 |
| 1.0456 | 0.0825 | 0.5048 | 1.0337 | 0.0830 | 0.4976 | 1.491912e-07 | 268 |
| 1.0459 | 0.0825 | 0.5052 | 1.0373 | 0.0828 | 0.5047 | 1.4918517e-07 | 269 |
| 1.0449 | 0.0826 | 0.5082 | 1.0338 | 0.0828 | 0.5023 | 1.4917913e-07 | 270 |
| 1.0460 | 0.0826 | 0.5064 | 1.0325 | 0.0832 | 0.5041 | 1.4917306e-07 | 271 |
| 1.0444 | 0.0826 | 0.5051 | 1.0378 | 0.0828 | 0.5010 | 1.4916698e-07 | 272 |
| 1.0451 | 0.0825 | 0.5060 | 1.0363 | 0.0828 | 0.4995 | 1.4916087e-07 | 273 |
| 1.0435 | 0.0825 | 0.5059 | 1.0344 | 0.0829 | 0.5084 | 1.4915474e-07 | 274 |
| 1.0439 | 0.0826 | 0.5066 | 1.0347 | 0.0829 | 0.4964 | 1.4914859e-07 | 275 |
| 1.0442 | 0.0825 | 0.5064 | 1.0337 | 0.0827 | 0.5024 | 1.4914242e-07 | 276 |
| 1.0445 | 0.0826 | 0.5069 | 1.0366 | 0.0827 | 0.5005 | 1.4913623e-07 | 277 |
| 1.0440 | 0.0826 | 0.5077 | 1.0315 | 0.0830 | 0.5026 | 1.4913e-07 | 278 |
| 1.0416 | 0.0826 | 0.5083 | 1.0389 | 0.0827 | 0.5081 | 1.4912376e-07 | 279 |
| 1.0435 | 0.0828 | 0.5055 | 1.0321 | 0.0827 | 0.5051 | 1.491175e-07 | 280 |
| 1.0433 | 0.0826 | 0.5049 | 1.0289 | 0.0829 | 0.5068 | 1.4911122e-07 | 281 |
| 1.0423 | 0.0827 | 0.5051 | 1.0336 | 0.0829 | 0.5035 | 1.491049e-07 | 282 |
| 1.0417 | 0.0826 | 0.5088 | 1.0314 | 0.0831 | 0.5065 | 1.4909858e-07 | 283 |
| 1.0419 | 0.0827 | 0.5058 | 1.0294 | 0.0828 | 0.5071 | 1.4909223e-07 | 284 |
| 1.0405 | 0.0827 | 0.5077 | 1.0288 | 0.0827 | 0.4974 | 1.4908586e-07 | 285 |
| 1.0417 | 0.0826 | 0.5075 | 1.0304 | 0.0829 | 0.4990 | 1.4907947e-07 | 286 |
| 1.0420 | 0.0826 | 0.5080 | 1.0289 | 0.0831 | 0.5035 | 1.4907305e-07 | 287 |
| 1.0416 | 0.0826 | 0.5080 | 1.0260 | 0.0829 | 0.4988 | 1.4906661e-07 | 288 |
| 1.0403 | 0.0827 | 0.5063 | 1.0228 | 0.0830 | 0.5029 | 1.4906014e-07 | 289 |
| 1.0399 | 0.0827 | 0.5051 | 1.0300 | 0.0832 | 0.5008 | 1.4905366e-07 | 290 |
| 1.0415 | 0.0827 | 0.5073 | 1.0335 | 0.0829 | 0.5091 | 1.4904715e-07 | 291 |
| 1.0401 | 0.0828 | 0.5060 | 1.0297 | 0.0828 | 0.4992 | 1.4904063e-07 | 292 |
| 1.0386 | 0.0827 | 0.5094 | 1.0336 | 0.0826 | 0.5050 | 1.4903408e-07 | 293 |
| 1.0383 | 0.0827 | 0.5065 | 1.0331 | 0.0828 | 0.5080 | 1.4902751e-07 | 294 |
| 1.0385 | 0.0826 | 0.5060 | 1.0293 | 0.0831 | 0.5077 | 1.4902092e-07 | 295 |
| 1.0389 | 0.0827 | 0.5062 | 1.0275 | 0.0832 | 0.5052 | 1.490143e-07 | 296 |
| 1.0404 | 0.0827 | 0.5086 | 1.0265 | 0.0834 | 0.5 | 1.4900766e-07 | 297 |
| 1.0391 | 0.0827 | 0.5093 | 1.0306 | 0.0829 | 0.4977 | 1.49001e-07 | 298 |
| 1.0387 | 0.0827 | 0.5087 | 1.0277 | 0.0837 | 0.5048 | 1.4899432e-07 | 299 |
| 1.0382 | 0.0827 | 0.5072 | 1.0280 | 0.0830 | 0.5059 | 1.4898761e-07 | 300 |
| 1.0380 | 0.0826 | 0.5071 | 1.0219 | 0.0829 | 0.4987 | 1.4898089e-07 | 301 |
| 1.0369 | 0.0828 | 0.5071 | 1.0248 | 0.0833 | 0.5041 | 1.4897414e-07 | 302 |
| 1.0377 | 0.0827 | 0.5093 | 1.0324 | 0.0832 | 0.5046 | 1.4896737e-07 | 303 |
| 1.0374 | 0.0827 | 0.5077 | 1.0257 | 0.0832 | 0.5055 | 1.4896058e-07 | 304 |
| 1.0376 | 0.0827 | 0.5087 | 1.0297 | 0.0831 | 0.4999 | 1.4895376e-07 | 305 |
| 1.0381 | 0.0827 | 0.5083 | 1.0226 | 0.0831 | 0.5056 | 1.4894692e-07 | 306 |
| 1.0366 | 0.0828 | 0.5083 | 1.0257 | 0.0829 | 0.5098 | 1.4894006e-07 | 307 |
| 1.0350 | 0.0829 | 0.5083 | 1.0285 | 0.0832 | 0.5053 | 1.4893318e-07 | 308 |
| 1.0365 | 0.0827 | 0.5053 | 1.0263 | 0.0828 | 0.5016 | 1.4892628e-07 | 309 |
| 1.0360 | 0.0827 | 0.5066 | 1.0275 | 0.0830 | 0.5060 | 1.4891936e-07 | 310 |
| 1.0357 | 0.0827 | 0.5092 | 1.0291 | 0.0827 | 0.5003 | 1.489124e-07 | 311 |
| 1.0353 | 0.0826 | 0.5083 | 1.0294 | 0.0827 | 0.5072 | 1.4890544e-07 | 312 |
| 1.0354 | 0.0828 | 0.5069 | 1.0267 | 0.0827 | 0.5018 | 1.4889845e-07 | 313 |
| 1.0350 | 0.0828 | 0.5092 | 1.0227 | 0.0830 | 0.5096 | 1.4889144e-07 | 314 |
| 1.0348 | 0.0827 | 0.5078 | 1.0233 | 0.0833 | 0.5063 | 1.4888441e-07 | 315 |
| 1.0356 | 0.0828 | 0.5085 | 1.0253 | 0.0828 | 0.4978 | 1.4887735e-07 | 316 |
| 1.0335 | 0.0829 | 0.5098 | 1.0279 | 0.0830 | 0.4960 | 1.4887027e-07 | 317 |
| 1.0345 | 0.0829 | 0.5080 | 1.0234 | 0.0836 | 0.4952 | 1.4886317e-07 | 318 |
| 1.0337 | 0.0828 | 0.5098 | 1.0199 | 0.0827 | 0.5026 | 1.4885605e-07 | 319 |
| 1.0334 | 0.0827 | 0.5078 | 1.0207 | 0.0833 | 0.5133 | 1.488489e-07 | 320 |
| 1.0333 | 0.0828 | 0.5069 | 1.0202 | 0.0833 | 0.5066 | 1.4884174e-07 | 321 |
| 1.0340 | 0.0827 | 0.5110 | 1.0223 | 0.0829 | 0.5034 | 1.4883454e-07 | 322 |
| 1.0333 | 0.0829 | 0.5102 | 1.0221 | 0.0835 | 0.5040 | 1.4882734e-07 | 323 |
| 1.0331 | 0.0828 | 0.5098 | 1.0215 | 0.0832 | 0.5022 | 1.488201e-07 | 324 |
| 1.0325 | 0.0828 | 0.5104 | 1.0234 | 0.0831 | 0.5008 | 1.4881286e-07 | 325 |
| 1.0313 | 0.0828 | 0.5091 | 1.0242 | 0.0832 | 0.4973 | 1.4880558e-07 | 326 |
| 1.0326 | 0.0828 | 0.5095 | 1.0196 | 0.0831 | 0.5077 | 1.4879828e-07 | 327 |
| 1.0318 | 0.0828 | 0.5096 | 1.0192 | 0.0832 | 0.5083 | 1.4879096e-07 | 328 |
| 1.0306 | 0.0828 | 0.5081 | 1.0256 | 0.0831 | 0.5058 | 1.4878361e-07 | 329 |
| 1.0317 | 0.0829 | 0.5091 | 1.0214 | 0.0831 | 0.5075 | 1.4877625e-07 | 330 |
| 1.0311 | 0.0829 | 0.5087 | 1.0211 | 0.0833 | 0.5047 | 1.4876886e-07 | 331 |
| 1.0306 | 0.0830 | 0.5080 | 1.0218 | 0.0832 | 0.4987 | 1.4876146e-07 | 332 |
| 1.0291 | 0.0829 | 0.5084 | 1.0187 | 0.0831 | 0.5111 | 1.4875403e-07 | 333 |
| 1.0301 | 0.0829 | 0.5098 | 1.0226 | 0.0830 | 0.5027 | 1.4874658e-07 | 334 |
| 1.0310 | 0.0829 | 0.5107 | 1.0210 | 0.0831 | 0.5060 | 1.487391e-07 | 335 |
| 1.0295 | 0.0829 | 0.5092 | 1.0182 | 0.0832 | 0.5004 | 1.4873162e-07 | 336 |
| 1.0296 | 0.0829 | 0.5087 | 1.0259 | 0.0830 | 0.5024 | 1.487241e-07 | 337 |
| 1.0313 | 0.0829 | 0.5083 | 1.0190 | 0.0834 | 0.5016 | 1.4871655e-07 | 338 |
| 1.0292 | 0.0829 | 0.5098 | 1.0184 | 0.0836 | 0.5105 | 1.4870899e-07 | 339 |
| 1.0297 | 0.0829 | 0.5091 | 1.0205 | 0.0832 | 0.5046 | 1.487014e-07 | 340 |
| 1.0283 | 0.0829 | 0.5103 | 1.0230 | 0.0832 | 0.5004 | 1.486938e-07 | 341 |
| 1.0284 | 0.0830 | 0.5095 | 1.0207 | 0.0832 | 0.5080 | 1.4868617e-07 | 342 |
| 1.0291 | 0.0831 | 0.5092 | 1.0222 | 0.0833 | 0.5008 | 1.4867852e-07 | 343 |
| 1.0289 | 0.0829 | 0.5094 | 1.0195 | 0.0828 | 0.4992 | 1.4867085e-07 | 344 |
| 1.0279 | 0.0830 | 0.5098 | 1.0156 | 0.0833 | 0.5008 | 1.4866316e-07 | 345 |
| 1.0287 | 0.0830 | 0.5105 | 1.0225 | 0.0832 | 0.5045 | 1.4865545e-07 | 346 |
| 1.0265 | 0.0830 | 0.5091 | 1.0211 | 0.0830 | 0.5086 | 1.4864771e-07 | 347 |
| 1.0278 | 0.0830 | 0.5095 | 1.0147 | 0.0831 | 0.5018 | 1.4863996e-07 | 348 |
| 1.0271 | 0.0830 | 0.5088 | 1.0191 | 0.0831 | 0.5004 | 1.4863217e-07 | 349 |
| 1.0276 | 0.0829 | 0.5100 | 1.0162 | 0.0836 | 0.5058 | 1.4862437e-07 | 350 |
| 1.0257 | 0.0830 | 0.5096 | 1.0224 | 0.0832 | 0.5014 | 1.4861654e-07 | 351 |
| 1.0262 | 0.0830 | 0.5093 | 1.0182 | 0.0830 | 0.5087 | 1.4860869e-07 | 352 |
| 1.0264 | 0.0830 | 0.5097 | 1.0200 | 0.0833 | 0.5021 | 1.4860082e-07 | 353 |
| 1.0260 | 0.0830 | 0.5100 | 1.0166 | 0.0833 | 0.4986 | 1.4859293e-07 | 354 |
| 1.0257 | 0.0830 | 0.5107 | 1.0150 | 0.0831 | 0.5011 | 1.4858502e-07 | 355 |
| 1.0250 | 0.0830 | 0.5089 | 1.0167 | 0.0830 | 0.5066 | 1.4857709e-07 | 356 |
| 1.0262 | 0.0830 | 0.5094 | 1.0158 | 0.0833 | 0.5026 | 1.4856913e-07 | 357 |
| 1.0247 | 0.0830 | 0.5110 | 1.0128 | 0.0832 | 0.5022 | 1.4856116e-07 | 358 |
| 1.0251 | 0.0831 | 0.5104 | 1.0161 | 0.0831 | 0.5078 | 1.4855316e-07 | 359 |
| 1.0247 | 0.0831 | 0.5101 | 1.0136 | 0.0833 | 0.5035 | 1.4854514e-07 | 360 |
| 1.0251 | 0.0831 | 0.5103 | 1.0182 | 0.0833 | 0.5029 | 1.485371e-07 | 361 |
| 1.0244 | 0.0830 | 0.5092 | 1.0160 | 0.0835 | 0.5041 | 1.4852903e-07 | 362 |
| 1.0239 | 0.0831 | 0.5114 | 1.0161 | 0.0833 | 0.5048 | 1.4852094e-07 | 363 |
| 1.0238 | 0.0830 | 0.5095 | 1.0155 | 0.0834 | 0.5083 | 1.4851283e-07 | 364 |
| 1.0237 | 0.0831 | 0.5116 | 1.0107 | 0.0835 | 0.5074 | 1.485047e-07 | 365 |
| 1.0229 | 0.0831 | 0.5104 | 1.0163 | 0.0829 | 0.5065 | 1.4849654e-07 | 366 |
| 1.0239 | 0.0830 | 0.5102 | 1.0163 | 0.0835 | 0.5033 | 1.4848837e-07 | 367 |
| 1.0223 | 0.0830 | 0.5120 | 1.0128 | 0.0834 | 0.5096 | 1.4848017e-07 | 368 |
| 1.0235 | 0.0830 | 0.5112 | 1.0115 | 0.0838 | 0.5024 | 1.4847195e-07 | 369 |
| 1.0228 | 0.0831 | 0.5105 | 1.0120 | 0.0836 | 0.5076 | 1.4846371e-07 | 370 |
| 1.0229 | 0.0831 | 0.5127 | 1.0150 | 0.0835 | 0.5066 | 1.4845546e-07 | 371 |
| 1.0234 | 0.0831 | 0.5101 | 1.0126 | 0.0837 | 0.5078 | 1.4844717e-07 | 372 |
| 1.0224 | 0.0831 | 0.5114 | 1.0112 | 0.0838 | 0.5038 | 1.4843887e-07 | 373 |
| 1.0225 | 0.0831 | 0.5106 | 1.0134 | 0.0831 | 0.5049 | 1.4843054e-07 | 374 |
| 1.0222 | 0.0831 | 0.5113 | 1.0106 | 0.0830 | 0.5045 | 1.484222e-07 | 375 |
| 1.0214 | 0.0831 | 0.5120 | 1.0138 | 0.0831 | 0.5096 | 1.4841383e-07 | 376 |
| 1.0202 | 0.0830 | 0.5113 | 1.0122 | 0.0834 | 0.5053 | 1.4840543e-07 | 377 |
| 1.0221 | 0.0832 | 0.5116 | 1.0116 | 0.0827 | 0.5010 | 1.4839702e-07 | 378 |
| 1.0225 | 0.0830 | 0.5100 | 1.0125 | 0.0839 | 0.5037 | 1.4838858e-07 | 379 |
| 1.0205 | 0.0831 | 0.5115 | 1.0122 | 0.0834 | 0.5084 | 1.4838012e-07 | 380 |
| 1.0213 | 0.0831 | 0.5127 | 1.0135 | 0.0834 | 0.5054 | 1.4837164e-07 | 381 |
| 1.0204 | 0.0832 | 0.5138 | 1.0134 | 0.0834 | 0.4985 | 1.4836314e-07 | 382 |
| 1.0209 | 0.0832 | 0.5115 | 1.0130 | 0.0833 | 0.5090 | 1.4835462e-07 | 383 |
| 1.0194 | 0.0832 | 0.5129 | 1.0165 | 0.0834 | 0.5055 | 1.4834607e-07 | 384 |
| 1.0210 | 0.0831 | 0.5116 | 1.0123 | 0.0837 | 0.5065 | 1.483375e-07 | 385 |
| 1.0199 | 0.0831 | 0.5141 | 1.0111 | 0.0836 | 0.5124 | 1.4832892e-07 | 386 |
| 1.0188 | 0.0831 | 0.5120 | 1.0137 | 0.0836 | 0.5050 | 1.4832031e-07 | 387 |
| 1.0192 | 0.0830 | 0.5114 | 1.0095 | 0.0836 | 0.5058 | 1.4831168e-07 | 388 |
| 1.0186 | 0.0832 | 0.5147 | 1.0109 | 0.0834 | 0.5089 | 1.4830303e-07 | 389 |
| 1.0196 | 0.0831 | 0.5125 | 1.0096 | 0.0835 | 0.5033 | 1.4829436e-07 | 390 |
| 1.0191 | 0.0831 | 0.5127 | 1.0135 | 0.0831 | 0.5053 | 1.4828566e-07 | 391 |
| 1.0190 | 0.0832 | 0.5135 | 1.0077 | 0.0835 | 0.5124 | 1.4827694e-07 | 392 |
| 1.0188 | 0.0832 | 0.5118 | 1.0103 | 0.0833 | 0.5090 | 1.482682e-07 | 393 |
| 1.0182 | 0.0831 | 0.5128 | 1.0144 | 0.0835 | 0.5059 | 1.4825943e-07 | 394 |
| 1.0179 | 0.0832 | 0.5120 | 1.0060 | 0.0836 | 0.5076 | 1.4825065e-07 | 395 |
| 1.0180 | 0.0833 | 0.5124 | 1.0093 | 0.0835 | 0.5109 | 1.4824184e-07 | 396 |
| 1.0180 | 0.0833 | 0.5128 | 1.0086 | 0.0839 | 0.5085 | 1.4823301e-07 | 397 |
| 1.0187 | 0.0831 | 0.5118 | 1.0104 | 0.0834 | 0.5049 | 1.4822416e-07 | 398 |
| 1.0182 | 0.0831 | 0.5155 | 1.0082 | 0.0834 | 0.5108 | 1.4821529e-07 | 399 |
| 1.0179 | 0.0832 | 0.5131 | 1.0148 | 0.0831 | 0.5035 | 1.482064e-07 | 400 |
| 1.0179 | 0.0832 | 0.5142 | 1.0136 | 0.0831 | 0.5100 | 1.4819749e-07 | 401 |
| 1.0189 | 0.0832 | 0.5128 | 1.0090 | 0.0833 | 0.5068 | 1.4818855e-07 | 402 |
| 1.0182 | 0.0831 | 0.5132 | 1.0078 | 0.0834 | 0.5060 | 1.481796e-07 | 403 |
| 1.0180 | 0.0832 | 0.5139 | 1.0115 | 0.0835 | 0.5058 | 1.4817061e-07 | 404 |
| 1.0172 | 0.0831 | 0.5133 | 1.0109 | 0.0836 | 0.5050 | 1.4816162e-07 | 405 |
| 1.0159 | 0.0833 | 0.5139 | 1.0101 | 0.0829 | 0.5058 | 1.481526e-07 | 406 |
| 1.0173 | 0.0831 | 0.5135 | 1.0068 | 0.0834 | 0.5084 | 1.4814356e-07 | 407 |
| 1.0158 | 0.0832 | 0.5132 | 1.0106 | 0.0835 | 0.5047 | 1.4813449e-07 | 408 |
| 1.0171 | 0.0833 | 0.5128 | 1.0104 | 0.0835 | 0.5085 | 1.4812541e-07 | 409 |
| 1.0162 | 0.0833 | 0.5153 | 1.0095 | 0.0837 | 0.5111 | 1.481163e-07 | 410 |
| 1.0168 | 0.0832 | 0.5128 | 1.0079 | 0.0833 | 0.5078 | 1.4810716e-07 | 411 |
| 1.0162 | 0.0832 | 0.5144 | 1.0076 | 0.0836 | 0.5101 | 1.4809801e-07 | 412 |
| 1.0152 | 0.0833 | 0.5143 | 1.0049 | 0.0835 | 0.5110 | 1.4808883e-07 | 413 |
| 1.0172 | 0.0832 | 0.5149 | 1.0042 | 0.0832 | 0.5074 | 1.4807964e-07 | 414 |
| 1.0162 | 0.0832 | 0.5161 | 1.0063 | 0.0835 | 0.5084 | 1.4807041e-07 | 415 |
| 1.0151 | 0.0833 | 0.5155 | 1.0079 | 0.0836 | 0.5013 | 1.4806118e-07 | 416 |
| 1.0165 | 0.0833 | 0.5141 | 1.0068 | 0.0831 | 0.5144 | 1.4805191e-07 | 417 |
| 1.0158 | 0.0832 | 0.5139 | 1.0040 | 0.0833 | 0.5066 | 1.4804263e-07 | 418 |
| 1.0166 | 0.0832 | 0.5143 | 1.0064 | 0.0834 | 0.5064 | 1.4803332e-07 | 419 |
| 1.0155 | 0.0833 | 0.5153 | 1.0072 | 0.0837 | 0.5072 | 1.48024e-07 | 420 |
| 1.0155 | 0.0832 | 0.5179 | 1.0028 | 0.0833 | 0.5088 | 1.4801465e-07 | 421 |
| 1.0145 | 0.0833 | 0.5158 | 1.0043 | 0.0833 | 0.5078 | 1.4800528e-07 | 422 |
| 1.0143 | 0.0832 | 0.5156 | 1.0067 | 0.0834 | 0.5115 | 1.4799589e-07 | 423 |
| 1.0140 | 0.0833 | 0.5140 | 1.0040 | 0.0835 | 0.5121 | 1.4798648e-07 | 424 |
| 1.0138 | 0.0832 | 0.5154 | 1.0105 | 0.0832 | 0.5133 | 1.4797705e-07 | 425 |
| 1.0140 | 0.0833 | 0.5164 | 1.0040 | 0.0833 | 0.5085 | 1.479676e-07 | 426 |
| 1.0140 | 0.0832 | 0.5170 | 1.0103 | 0.0833 | 0.5090 | 1.4795812e-07 | 427 |
| 1.0143 | 0.0831 | 0.5158 | 1.0092 | 0.0832 | 0.5106 | 1.4794863e-07 | 428 |
| 1.0146 | 0.0833 | 0.5171 | 1.0037 | 0.0840 | 0.5036 | 1.479391e-07 | 429 |
| 1.0132 | 0.0832 | 0.5155 | 1.0016 | 0.0831 | 0.5079 | 1.4792957e-07 | 430 |
| 1.0124 | 0.0832 | 0.5167 | 1.0071 | 0.0832 | 0.5073 | 1.4792e-07 | 431 |
| 1.0132 | 0.0833 | 0.5158 | 1.0042 | 0.0834 | 0.5142 | 1.4791043e-07 | 432 |
| 1.0129 | 0.0834 | 0.5178 | 1.0037 | 0.0833 | 0.5074 | 1.4790082e-07 | 433 |
| 1.0137 | 0.0833 | 0.5175 | 1.0102 | 0.0837 | 0.5110 | 1.478912e-07 | 434 |
| 1.0128 | 0.0832 | 0.5172 | 1.0062 | 0.0834 | 0.5131 | 1.4788155e-07 | 435 |
| 1.0132 | 0.0834 | 0.5156 | 1.0018 | 0.0835 | 0.5174 | 1.4787187e-07 | 436 |
| 1.0132 | 0.0832 | 0.5160 | 1.0016 | 0.0838 | 0.5123 | 1.4786218e-07 | 437 |
| 1.0127 | 0.0834 | 0.5155 | 1.0004 | 0.0838 | 0.5144 | 1.4785246e-07 | 438 |
| 1.0125 | 0.0834 | 0.5176 | 1.0059 | 0.0835 | 0.5187 | 1.4784273e-07 | 439 |
| 1.0142 | 0.0833 | 0.5164 | 1.0068 | 0.0836 | 0.5135 | 1.4783296e-07 | 440 |
| 1.0126 | 0.0833 | 0.5157 | 1.0115 | 0.0831 | 0.5052 | 1.4782319e-07 | 441 |
| 1.0120 | 0.0833 | 0.5174 | 1.0068 | 0.0830 | 0.5186 | 1.4781338e-07 | 442 |
| 1.0125 | 0.0833 | 0.5182 | 1.0007 | 0.0836 | 0.5135 | 1.4780356e-07 | 443 |
| 1.0110 | 0.0833 | 0.5172 | 1.0018 | 0.0833 | 0.5151 | 1.4779371e-07 | 444 |
| 1.0118 | 0.0832 | 0.5178 | 0.9987 | 0.0836 | 0.5144 | 1.4778385e-07 | 445 |
| 1.0119 | 0.0834 | 0.5171 | 1.0007 | 0.0838 | 0.5098 | 1.4777396e-07 | 446 |
| 1.0129 | 0.0832 | 0.5175 | 1.0050 | 0.0836 | 0.5133 | 1.4776406e-07 | 447 |
| 1.0119 | 0.0833 | 0.5186 | 1.0054 | 0.0835 | 0.5149 | 1.4775412e-07 | 448 |
| 1.0120 | 0.0834 | 0.5171 | 1.0103 | 0.0833 | 0.5138 | 1.4774417e-07 | 449 |
| 1.0116 | 0.0833 | 0.5171 | 1.0013 | 0.0835 | 0.5154 | 1.477342e-07 | 450 |
| 1.0112 | 0.0834 | 0.5170 | 1.0045 | 0.0835 | 0.5091 | 1.4772421e-07 | 451 |
| 1.0123 | 0.0833 | 0.5171 | 1.0043 | 0.0834 | 0.5128 | 1.4771419e-07 | 452 |
| 1.0109 | 0.0833 | 0.5177 | 1.0045 | 0.0838 | 0.5164 | 1.4770416e-07 | 453 |
| 1.0110 | 0.0833 | 0.5194 | 1.0065 | 0.0831 | 0.5097 | 1.476941e-07 | 454 |
| 1.0101 | 0.0832 | 0.5175 | 1.0036 | 0.0833 | 0.5096 | 1.4768402e-07 | 455 |
| 1.0106 | 0.0833 | 0.5173 | 1.0055 | 0.0841 | 0.5064 | 1.4767392e-07 | 456 |
| 1.0120 | 0.0833 | 0.5186 | 1.0051 | 0.0836 | 0.5133 | 1.476638e-07 | 457 |
| 1.0107 | 0.0832 | 0.5185 | 1.0027 | 0.0836 | 0.5101 | 1.4765365e-07 | 458 |
| 1.0103 | 0.0833 | 0.5181 | 1.0078 | 0.0839 | 0.5135 | 1.4764349e-07 | 459 |
| 1.0111 | 0.0832 | 0.5176 | 1.0046 | 0.0834 | 0.5081 | 1.476333e-07 | 460 |
| 1.0097 | 0.0834 | 0.5177 | 1.0042 | 0.0834 | 0.5106 | 1.476231e-07 | 461 |
| 1.0106 | 0.0832 | 0.5186 | 1.0046 | 0.0835 | 0.5114 | 1.4761287e-07 | 462 |
| 1.0107 | 0.0833 | 0.5185 | 1.0001 | 0.0841 | 0.5153 | 1.4760262e-07 | 463 |
| 1.0108 | 0.0833 | 0.5183 | 1.0032 | 0.0835 | 0.5072 | 1.4759235e-07 | 464 |
| 1.0095 | 0.0833 | 0.5193 | 1.0054 | 0.0833 | 0.5163 | 1.4758206e-07 | 465 |
| 1.0100 | 0.0834 | 0.5175 | 0.9980 | 0.0835 | 0.5136 | 1.4757174e-07 | 466 |
| 1.0098 | 0.0833 | 0.5198 | 1.0017 | 0.0839 | 0.5086 | 1.4756141e-07 | 467 |
| 1.0095 | 0.0834 | 0.5198 | 1.0009 | 0.0837 | 0.5097 | 1.4755105e-07 | 468 |
| 1.0103 | 0.0832 | 0.5182 | 0.9981 | 0.0839 | 0.5080 | 1.4754067e-07 | 469 |
| 1.0096 | 0.0833 | 0.5197 | 1.0008 | 0.0834 | 0.5086 | 1.4753027e-07 | 470 |
| 1.0098 | 0.0834 | 0.5207 | 1.0069 | 0.0835 | 0.5116 | 1.4751986e-07 | 471 |
| 1.0086 | 0.0833 | 0.5183 | 1.0064 | 0.0836 | 0.5103 | 1.4750941e-07 | 472 |
| 1.0100 | 0.0833 | 0.5193 | 1.0058 | 0.0836 | 0.5041 | 1.4749895e-07 | 473 |
| 1.0093 | 0.0834 | 0.5192 | 0.9980 | 0.0837 | 0.5154 | 1.4748846e-07 | 474 |
| 1.0090 | 0.0833 | 0.5190 | 1.0067 | 0.0835 | 0.5117 | 1.4747796e-07 | 475 |
| 1.0090 | 0.0834 | 0.5201 | 0.9987 | 0.0835 | 0.5123 | 1.4746743e-07 | 476 |
| 1.0083 | 0.0833 | 0.5189 | 1.0043 | 0.0836 | 0.5067 | 1.4745689e-07 | 477 |
| 1.0090 | 0.0833 | 0.5203 | 1.0044 | 0.0835 | 0.5106 | 1.4744631e-07 | 478 |
| 1.0095 | 0.0834 | 0.5193 | 1.0051 | 0.0835 | 0.5106 | 1.4743573e-07 | 479 |
| 1.0084 | 0.0834 | 0.5195 | 1.0063 | 0.0834 | 0.5102 | 1.4742511e-07 | 480 |
| 1.0089 | 0.0834 | 0.5206 | 1.0030 | 0.0836 | 0.5135 | 1.4741448e-07 | 481 |
| 1.0090 | 0.0833 | 0.5199 | 1.0027 | 0.0838 | 0.5079 | 1.4740382e-07 | 482 |
| 1.0069 | 0.0834 | 0.5201 | 0.9992 | 0.0835 | 0.5123 | 1.4739315e-07 | 483 |
| 1.0078 | 0.0834 | 0.5199 | 0.9952 | 0.0837 | 0.5079 | 1.4738245e-07 | 484 |
| 1.0081 | 0.0833 | 0.5201 | 1.0030 | 0.0835 | 0.5063 | 1.4737174e-07 | 485 |
| 1.0091 | 0.0834 | 0.5207 | 1.0000 | 0.0837 | 0.5133 | 1.4736099e-07 | 486 |
| 1.0080 | 0.0834 | 0.5192 | 1.0047 | 0.0834 | 0.5124 | 1.4735024e-07 | 487 |
| 1.0071 | 0.0834 | 0.5197 | 1.0004 | 0.0837 | 0.5151 | 1.4733945e-07 | 488 |
| 1.0079 | 0.0833 | 0.5205 | 0.9998 | 0.0834 | 0.5098 | 1.4732865e-07 | 489 |
| 1.0075 | 0.0834 | 0.5204 | 0.9995 | 0.0838 | 0.5126 | 1.4731782e-07 | 490 |
| 1.0081 | 0.0834 | 0.5210 | 0.9992 | 0.0838 | 0.5126 | 1.4730698e-07 | 491 |
| 1.0068 | 0.0834 | 0.5216 | 1.0016 | 0.0837 | 0.5100 | 1.472961e-07 | 492 |
| 1.0073 | 0.0834 | 0.5204 | 1.0017 | 0.0835 | 0.5079 | 1.4728522e-07 | 493 |
| 1.0063 | 0.0835 | 0.5213 | 1.0008 | 0.0840 | 0.5111 | 1.472743e-07 | 494 |
| 1.0071 | 0.0834 | 0.5206 | 0.9984 | 0.0840 | 0.5114 | 1.4726338e-07 | 495 |
| 1.0079 | 0.0835 | 0.5206 | 0.9985 | 0.0835 | 0.5119 | 1.4725242e-07 | 496 |
| 1.0069 | 0.0834 | 0.5198 | 1.0025 | 0.0836 | 0.5093 | 1.4724145e-07 | 497 |
| 1.0066 | 0.0834 | 0.5202 | 1.0027 | 0.0834 | 0.5108 | 1.4723045e-07 | 498 |
| 1.0068 | 0.0834 | 0.5211 | 1.0007 | 0.0835 | 0.5138 | 1.4721944e-07 | 499 |
| 1.0080 | 0.0835 | 0.5202 | 0.9985 | 0.0835 | 0.5114 | 1.472084e-07 | 500 |
| 1.0073 | 0.0833 | 0.5217 | 0.9940 | 0.0841 | 0.5114 | 1.4719734e-07 | 501 |
| 1.0067 | 0.0834 | 0.5222 | 0.9999 | 0.0834 | 0.5109 | 1.4718626e-07 | 502 |
| 1.0061 | 0.0834 | 0.5227 | 0.9993 | 0.0837 | 0.5146 | 1.4717516e-07 | 503 |
| 1.0059 | 0.0835 | 0.5229 | 0.9945 | 0.0837 | 0.5138 | 1.4716403e-07 | 504 |
| 1.0059 | 0.0834 | 0.5225 | 0.9981 | 0.0834 | 0.5083 | 1.4715289e-07 | 505 |
| 1.0059 | 0.0834 | 0.5224 | 0.9968 | 0.0835 | 0.5119 | 1.4714172e-07 | 506 |
| 1.0058 | 0.0834 | 0.5233 | 1.0022 | 0.0837 | 0.5071 | 1.4713054e-07 | 507 |
| 1.0064 | 0.0834 | 0.5199 | 0.9985 | 0.0835 | 0.5168 | 1.4711932e-07 | 508 |
| 1.0061 | 0.0834 | 0.5222 | 0.9953 | 0.0840 | 0.5101 | 1.471081e-07 | 509 |
| 1.0055 | 0.0834 | 0.5219 | 0.9990 | 0.0840 | 0.5099 | 1.4709684e-07 | 510 |
| 1.0059 | 0.0833 | 0.5225 | 0.9949 | 0.0837 | 0.5106 | 1.4708557e-07 | 511 |
| 1.0062 | 0.0833 | 0.5218 | 1.0061 | 0.0834 | 0.5143 | 1.4707427e-07 | 512 |
| 1.0060 | 0.0835 | 0.5201 | 0.9981 | 0.0838 | 0.5135 | 1.4706296e-07 | 513 |
| 1.0053 | 0.0834 | 0.5214 | 1.0020 | 0.0835 | 0.5102 | 1.4705162e-07 | 514 |
| 1.0059 | 0.0834 | 0.5237 | 1.0077 | 0.0832 | 0.5139 | 1.4704027e-07 | 515 |
| 1.0042 | 0.0834 | 0.5218 | 1.0035 | 0.0833 | 0.5167 | 1.4702889e-07 | 516 |
| 1.0056 | 0.0834 | 0.5221 | 1.0025 | 0.0837 | 0.5112 | 1.4701749e-07 | 517 |
| 1.0054 | 0.0833 | 0.5232 | 0.9972 | 0.0838 | 0.5126 | 1.4700606e-07 | 518 |
| 1.0053 | 0.0834 | 0.5233 | 0.9975 | 0.0841 | 0.5101 | 1.4699462e-07 | 519 |
| 1.0053 | 0.0834 | 0.5220 | 1.0015 | 0.0838 | 0.5079 | 1.4698315e-07 | 520 |
| 1.0050 | 0.0834 | 0.5232 | 0.9995 | 0.0835 | 0.5134 | 1.4697167e-07 | 521 |
| 1.0053 | 0.0834 | 0.5235 | 0.9982 | 0.0838 | 0.5118 | 1.4696016e-07 | 522 |
| 1.0047 | 0.0834 | 0.5221 | 1.0028 | 0.0839 | 0.5179 | 1.4694864e-07 | 523 |
| 1.0052 | 0.0835 | 0.5222 | 0.9997 | 0.0840 | 0.5153 | 1.4693708e-07 | 524 |
| 1.0055 | 0.0835 | 0.5213 | 1.0005 | 0.0836 | 0.5154 | 1.4692552e-07 | 525 |
| 1.0062 | 0.0834 | 0.5225 | 1.0011 | 0.0836 | 0.5114 | 1.4691392e-07 | 526 |
| 1.0063 | 0.0834 | 0.5232 | 0.9989 | 0.0838 | 0.5123 | 1.4690231e-07 | 527 |
| 1.0040 | 0.0834 | 0.5240 | 1.0011 | 0.0835 | 0.5161 | 1.4689067e-07 | 528 |
| 1.0044 | 0.0834 | 0.5237 | 1.0007 | 0.0835 | 0.5148 | 1.4687902e-07 | 529 |
| 1.0050 | 0.0834 | 0.5225 | 0.9980 | 0.0835 | 0.5137 | 1.4686734e-07 | 530 |
| 1.0047 | 0.0835 | 0.5242 | 1.0012 | 0.0835 | 0.5077 | 1.4685564e-07 | 531 |
| 1.0043 | 0.0833 | 0.5230 | 0.9973 | 0.0836 | 0.5097 | 1.4684392e-07 | 532 |
| 1.0037 | 0.0835 | 0.5250 | 0.9997 | 0.0836 | 0.5209 | 1.4683218e-07 | 533 |
| 1.0039 | 0.0835 | 0.5220 | 1.0002 | 0.0841 | 0.5133 | 1.4682041e-07 | 534 |
| 1.0038 | 0.0835 | 0.5228 | 1.0019 | 0.0835 | 0.5184 | 1.4680863e-07 | 535 |
| 1.0045 | 0.0834 | 0.5229 | 1.0010 | 0.0835 | 0.5100 | 1.4679682e-07 | 536 |
| 1.0041 | 0.0835 | 0.5233 | 0.9959 | 0.0837 | 0.5167 | 1.46785e-07 | 537 |
| 1.0041 | 0.0834 | 0.5233 | 0.9995 | 0.0834 | 0.5152 | 1.4677316e-07 | 538 |
| 1.0038 | 0.0834 | 0.5239 | 1.0041 | 0.0835 | 0.5158 | 1.467613e-07 | 539 |
| 1.0040 | 0.0835 | 0.5224 | 0.9974 | 0.0834 | 0.5172 | 1.4674941e-07 | 540 |
| 1.0035 | 0.0834 | 0.5240 | 0.9941 | 0.0836 | 0.5117 | 1.467375e-07 | 541 |
| 1.0032 | 0.0834 | 0.5231 | 0.9994 | 0.0835 | 0.5102 | 1.4672558e-07 | 542 |
| 1.0036 | 0.0834 | 0.5222 | 0.9955 | 0.0836 | 0.5119 | 1.4671363e-07 | 543 |
| 1.0029 | 0.0836 | 0.5231 | 0.9994 | 0.0837 | 0.5138 | 1.4670167e-07 | 544 |
| 1.0028 | 0.0834 | 0.5230 | 1.0027 | 0.0833 | 0.5136 | 1.4668967e-07 | 545 |
| 1.0029 | 0.0835 | 0.5244 | 0.9929 | 0.0837 | 0.5128 | 1.4667766e-07 | 546 |
| 1.0034 | 0.0835 | 0.5246 | 0.9973 | 0.0831 | 0.5128 | 1.4666563e-07 | 547 |
| 1.0024 | 0.0834 | 0.5235 | 0.9987 | 0.0835 | 0.5129 | 1.4665358e-07 | 548 |
| 1.0035 | 0.0835 | 0.5229 | 0.9937 | 0.0834 | 0.5167 | 1.466415e-07 | 549 |
| 1.0028 | 0.0835 | 0.5246 | 0.9959 | 0.0835 | 0.5176 | 1.466294e-07 | 550 |
| 1.0027 | 0.0835 | 0.5242 | 1.0011 | 0.0835 | 0.5126 | 1.4661728e-07 | 551 |
| 1.0025 | 0.0834 | 0.5237 | 0.9978 | 0.0837 | 0.5148 | 1.4660515e-07 | 552 |
| 1.0022 | 0.0834 | 0.5227 | 0.9981 | 0.0832 | 0.5138 | 1.4659298e-07 | 553 |
| 1.0031 | 0.0834 | 0.5247 | 0.9986 | 0.0836 | 0.5117 | 1.465808e-07 | 554 |
| 1.0024 | 0.0834 | 0.5240 | 0.9993 | 0.0838 | 0.5159 | 1.465686e-07 | 555 |
| 1.0017 | 0.0834 | 0.5241 | 0.9965 | 0.0838 | 0.5103 | 1.4655637e-07 | 556 |
| 1.0018 | 0.0836 | 0.5251 | 0.9937 | 0.0839 | 0.5138 | 1.4654412e-07 | 557 |
| 1.0015 | 0.0835 | 0.5259 | 0.9989 | 0.0835 | 0.5143 | 1.4653186e-07 | 558 |
| 1.0020 | 0.0836 | 0.5248 | 1.0007 | 0.0836 | 0.5104 | 1.4651957e-07 | 559 |
| 1.0022 | 0.0835 | 0.5236 | 0.9977 | 0.0839 | 0.5151 | 1.4650726e-07 | 560 |
| 1.0022 | 0.0835 | 0.5239 | 0.9940 | 0.0837 | 0.5182 | 1.4649494e-07 | 561 |
| 1.0028 | 0.0835 | 0.5239 | 1.0010 | 0.0836 | 0.5144 | 1.4648259e-07 | 562 |
| 1.0026 | 0.0836 | 0.5244 | 0.9970 | 0.0841 | 0.5135 | 1.4647023e-07 | 563 |
| 1.0020 | 0.0834 | 0.5251 | 1.0008 | 0.0837 | 0.5162 | 1.4645784e-07 | 564 |
| 1.0033 | 0.0834 | 0.5242 | 0.9972 | 0.0834 | 0.5146 | 1.4644543e-07 | 565 |
| 1.0024 | 0.0835 | 0.5239 | 0.9995 | 0.0841 | 0.5180 | 1.46433e-07 | 566 |
| 1.0023 | 0.0835 | 0.5242 | 0.9949 | 0.0836 | 0.5111 | 1.4642055e-07 | 567 |
| 1.0022 | 0.0834 | 0.5253 | 0.9932 | 0.0838 | 0.5162 | 1.4640807e-07 | 568 |
| 1.0019 | 0.0835 | 0.5253 | 0.9985 | 0.0837 | 0.5182 | 1.4639558e-07 | 569 |
| 1.0016 | 0.0835 | 0.5247 | 0.9913 | 0.0831 | 0.5128 | 1.4638306e-07 | 570 |
| 1.0015 | 0.0835 | 0.5239 | 0.9990 | 0.0838 | 0.5173 | 1.4637052e-07 | 571 |
| 1.0014 | 0.0835 | 0.5247 | 0.9954 | 0.0836 | 0.5150 | 1.4635796e-07 | 572 |
| 1.0003 | 0.0836 | 0.5249 | 0.9991 | 0.0836 | 0.5210 | 1.4634539e-07 | 573 |
| 1.0008 | 0.0835 | 0.5251 | 0.9961 | 0.0839 | 0.5125 | 1.4633278e-07 | 574 |
| 1.0013 | 0.0835 | 0.5260 | 0.9931 | 0.0834 | 0.5161 | 1.4632016e-07 | 575 |
| 1.0018 | 0.0835 | 0.5243 | 1.0017 | 0.0835 | 0.5113 | 1.4630751e-07 | 576 |
| 1.0017 | 0.0834 | 0.5264 | 0.9953 | 0.0836 | 0.5172 | 1.4629485e-07 | 577 |
| 1.0021 | 0.0834 | 0.5259 | 1.0019 | 0.0843 | 0.5175 | 1.4628218e-07 | 578 |
| 1.0008 | 0.0834 | 0.5251 | 0.9982 | 0.0836 | 0.5133 | 1.4626947e-07 | 579 |
| 1.0007 | 0.0834 | 0.5255 | 0.9958 | 0.0832 | 0.5119 | 1.4625675e-07 | 580 |
| 1.0007 | 0.0834 | 0.5255 | 0.9972 | 0.0839 | 0.5165 | 1.46244e-07 | 581 |
| 1.0009 | 0.0834 | 0.5254 | 0.9954 | 0.0836 | 0.5172 | 1.4623124e-07 | 582 |
| 1.0000 | 0.0836 | 0.5262 | 0.9968 | 0.0837 | 0.5187 | 1.4621845e-07 | 583 |
| 1.0005 | 0.0836 | 0.5254 | 0.9903 | 0.0842 | 0.5171 | 1.4620565e-07 | 584 |
| 1.0019 | 0.0835 | 0.5266 | 0.9915 | 0.0841 | 0.5181 | 1.4619282e-07 | 585 |
| 1.0008 | 0.0835 | 0.5236 | 0.9952 | 0.0833 | 0.5175 | 1.4617997e-07 | 586 |
| 1.0010 | 0.0834 | 0.5271 | 0.9927 | 0.0839 | 0.5212 | 1.461671e-07 | 587 |
| 1.0006 | 0.0834 | 0.5263 | 0.9927 | 0.0840 | 0.5147 | 1.461542e-07 | 588 |
| 0.9995 | 0.0836 | 0.5262 | 0.9950 | 0.0839 | 0.5130 | 1.4614129e-07 | 589 |
| 1.0007 | 0.0835 | 0.5261 | 0.9953 | 0.0836 | 0.5186 | 1.4612836e-07 | 590 |
| 1.0001 | 0.0835 | 0.5265 | 0.9927 | 0.0837 | 0.5168 | 1.4611541e-07 | 591 |
| 1.0006 | 0.0835 | 0.5260 | 0.9938 | 0.0836 | 0.5193 | 1.4610244e-07 | 592 |
| 1.0007 | 0.0835 | 0.5259 | 0.9936 | 0.0838 | 0.5175 | 1.4608945e-07 | 593 |
| 1.0005 | 0.0836 | 0.5267 | 0.9969 | 0.0835 | 0.5158 | 1.4607643e-07 | 594 |
| 1.0003 | 0.0834 | 0.5254 | 0.9944 | 0.0837 | 0.5187 | 1.460634e-07 | 595 |
| 0.9997 | 0.0834 | 0.5255 | 0.9947 | 0.0836 | 0.5163 | 1.4605034e-07 | 596 |
| 0.9997 | 0.0835 | 0.5260 | 0.9951 | 0.0839 | 0.5190 | 1.4603727e-07 | 597 |
| 0.9999 | 0.0836 | 0.5261 | 0.9930 | 0.0841 | 0.5146 | 1.4602416e-07 | 598 |
| 0.9993 | 0.0835 | 0.5258 | 0.9983 | 0.0837 | 0.5142 | 1.4601105e-07 | 599 |
| 1.0000 | 0.0836 | 0.5271 | 0.9966 | 0.0840 | 0.5225 | 1.459979e-07 | 600 |
| 1.0009 | 0.0836 | 0.5268 | 0.9900 | 0.0839 | 0.5168 | 1.4598474e-07 | 601 |
| 0.9995 | 0.0835 | 0.5262 | 0.9958 | 0.0838 | 0.5158 | 1.4597155e-07 | 602 |
| 0.9998 | 0.0835 | 0.5252 | 0.9971 | 0.0840 | 0.5215 | 1.4595835e-07 | 603 |
| 1.0002 | 0.0834 | 0.5259 | 0.9951 | 0.0834 | 0.5200 | 1.4594514e-07 | 604 |
| 1.0003 | 0.0835 | 0.5272 | 0.9932 | 0.0836 | 0.5148 | 1.4593189e-07 | 605 |
| 0.9996 | 0.0835 | 0.5268 | 0.9967 | 0.0836 | 0.5141 | 1.4591863e-07 | 606 |
| 1.0009 | 0.0836 | 0.5276 | 0.9978 | 0.0835 | 0.5181 | 1.4590535e-07 | 607 |
| 1.0003 | 0.0835 | 0.5243 | 0.9950 | 0.0834 | 0.5179 | 1.4589205e-07 | 608 |
| 0.9991 | 0.0835 | 0.5280 | 0.9920 | 0.0836 | 0.5163 | 1.4587872e-07 | 609 |
| 0.9998 | 0.0836 | 0.5268 | 0.9947 | 0.0836 | 0.5150 | 1.4586537e-07 | 610 |
| 1.0000 | 0.0835 | 0.5264 | 0.9867 | 0.0843 | 0.5185 | 1.45852e-07 | 611 |
| 0.9992 | 0.0835 | 0.5282 | 0.9930 | 0.0839 | 0.5151 | 1.4583861e-07 | 612 |
| 1.0000 | 0.0835 | 0.5284 | 0.9929 | 0.0837 | 0.5149 | 1.458252e-07 | 613 |
| 0.9989 | 0.0836 | 0.5276 | 0.9955 | 0.0838 | 0.5125 | 1.4581177e-07 | 614 |
| 0.9979 | 0.0836 | 0.5276 | 0.9982 | 0.0834 | 0.5153 | 1.4579832e-07 | 615 |
| 0.9999 | 0.0835 | 0.5271 | 1.0003 | 0.0836 | 0.5169 | 1.4578485e-07 | 616 |
| 0.9995 | 0.0835 | 0.5285 | 0.9962 | 0.0840 | 0.5156 | 1.4577137e-07 | 617 |
| 0.9982 | 0.0836 | 0.5286 | 0.9895 | 0.0837 | 0.5204 | 1.4575785e-07 | 618 |
| 1.0001 | 0.0835 | 0.5268 | 0.9952 | 0.0835 | 0.5188 | 1.4574432e-07 | 619 |
| 0.9986 | 0.0836 | 0.5273 | 0.9974 | 0.0833 | 0.5182 | 1.4573077e-07 | 620 |
| 0.9986 | 0.0835 | 0.5268 | 0.9948 | 0.0834 | 0.5148 | 1.457172e-07 | 621 |
| 0.9990 | 0.0836 | 0.5286 | 0.9952 | 0.0838 | 0.5182 | 1.457036e-07 | 622 |
| 0.9981 | 0.0834 | 0.5283 | 0.9938 | 0.0835 | 0.5137 | 1.4568998e-07 | 623 |
| 0.9988 | 0.0835 | 0.5282 | 0.9923 | 0.0832 | 0.5176 | 1.4567635e-07 | 624 |
| 0.9990 | 0.0835 | 0.5266 | 0.9968 | 0.0837 | 0.5162 | 1.456627e-07 | 625 |
| 0.9987 | 0.0837 | 0.5269 | 0.9910 | 0.0836 | 0.5179 | 1.4564903e-07 | 626 |
| 0.9986 | 0.0836 | 0.5290 | 0.9942 | 0.0836 | 0.5224 | 1.4563533e-07 | 627 |
| 0.9992 | 0.0836 | 0.5269 | 0.9941 | 0.0835 | 0.5149 | 1.4562161e-07 | 628 |
| 0.9983 | 0.0835 | 0.5277 | 0.9940 | 0.0840 | 0.5179 | 1.4560787e-07 | 629 |
| 0.9981 | 0.0836 | 0.5280 | 0.9938 | 0.0838 | 0.5172 | 1.4559411e-07 | 630 |
| 0.9980 | 0.0836 | 0.5289 | 0.9958 | 0.0839 | 0.5260 | 1.4558033e-07 | 631 |
| 0.9987 | 0.0836 | 0.5276 | 0.9920 | 0.0840 | 0.5136 | 1.4556653e-07 | 632 |
| 0.9981 | 0.0835 | 0.5293 | 0.9918 | 0.0836 | 0.5129 | 1.455527e-07 | 633 |
| 0.9991 | 0.0835 | 0.5283 | 0.9921 | 0.0839 | 0.5169 | 1.4553886e-07 | 634 |
| 0.9974 | 0.0835 | 0.5288 | 0.9992 | 0.0835 | 0.5168 | 1.4552501e-07 | 635 |
| 0.9976 | 0.0835 | 0.5287 | 0.9949 | 0.0839 | 0.5210 | 1.4551112e-07 | 636 |
| 0.9978 | 0.0836 | 0.5273 | 0.9912 | 0.0839 | 0.5210 | 1.4549722e-07 | 637 |
| 0.9982 | 0.0835 | 0.5276 | 0.9932 | 0.0835 | 0.5175 | 1.454833e-07 | 638 |
| 0.9979 | 0.0835 | 0.5276 | 0.9965 | 0.0833 | 0.5164 | 1.4546936e-07 | 639 |
| 0.9975 | 0.0836 | 0.5283 | 0.9988 | 0.0839 | 0.5148 | 1.4545539e-07 | 640 |
| 0.9981 | 0.0835 | 0.5287 | 0.9950 | 0.0835 | 0.5184 | 1.454414e-07 | 641 |
| 0.9979 | 0.0836 | 0.5279 | 0.9930 | 0.0833 | 0.5197 | 1.454274e-07 | 642 |
| 0.9989 | 0.0835 | 0.5297 | 0.9940 | 0.0834 | 0.5187 | 1.4541338e-07 | 643 |
| 0.9970 | 0.0836 | 0.5270 | 0.9892 | 0.0835 | 0.5194 | 1.4539934e-07 | 644 |
| 0.9979 | 0.0836 | 0.5287 | 0.9925 | 0.0840 | 0.5205 | 1.4538527e-07 | 645 |
| 0.9965 | 0.0836 | 0.5285 | 0.9971 | 0.0839 | 0.5204 | 1.4537119e-07 | 646 |
| 0.9977 | 0.0835 | 0.5305 | 0.9946 | 0.0833 | 0.5167 | 1.4535708e-07 | 647 |
| 0.9970 | 0.0836 | 0.5282 | 0.9931 | 0.0834 | 0.5185 | 1.4534295e-07 | 648 |
| 0.9977 | 0.0836 | 0.5278 | 0.9912 | 0.0835 | 0.5186 | 1.453288e-07 | 649 |
| 0.9982 | 0.0835 | 0.5275 | 0.9934 | 0.0834 | 0.5144 | 1.4531463e-07 | 650 |
| 0.9981 | 0.0835 | 0.5279 | 0.9923 | 0.0839 | 0.5202 | 1.4530045e-07 | 651 |
| 0.9972 | 0.0835 | 0.5291 | 0.9999 | 0.0834 | 0.5158 | 1.4528624e-07 | 652 |
| 0.9971 | 0.0836 | 0.5289 | 0.9935 | 0.0832 | 0.5159 | 1.4527201e-07 | 653 |
| 0.9972 | 0.0836 | 0.5293 | 0.9878 | 0.0841 | 0.5164 | 1.4525776e-07 | 654 |
| 0.9970 | 0.0835 | 0.5302 | 0.9897 | 0.0836 | 0.5243 | 1.4524349e-07 | 655 |
| 0.9965 | 0.0835 | 0.5285 | 0.9964 | 0.0836 | 0.5167 | 1.452292e-07 | 656 |
| 0.9964 | 0.0836 | 0.5287 | 0.9897 | 0.0840 | 0.5215 | 1.4521488e-07 | 657 |
| 0.9982 | 0.0835 | 0.5285 | 0.9948 | 0.0838 | 0.5155 | 1.4520056e-07 | 658 |
| 0.9971 | 0.0837 | 0.5297 | 0.9891 | 0.0837 | 0.5173 | 1.451862e-07 | 659 |
| 0.9956 | 0.0836 | 0.5312 | 0.9936 | 0.0840 | 0.5167 | 1.4517184e-07 | 660 |
| 0.9970 | 0.0835 | 0.5312 | 0.9977 | 0.0835 | 0.5197 | 1.4515744e-07 | 661 |
| 0.9967 | 0.0836 | 0.5309 | 0.9959 | 0.0835 | 0.5138 | 1.4514303e-07 | 662 |
| 0.9966 | 0.0836 | 0.5305 | 0.9947 | 0.0835 | 0.5160 | 1.451286e-07 | 663 |
| 0.9968 | 0.0836 | 0.5314 | 0.9950 | 0.0836 | 0.5177 | 1.4511414e-07 | 664 |
| 0.9970 | 0.0836 | 0.5306 | 0.9921 | 0.0837 | 0.5151 | 1.4509968e-07 | 665 |
| 0.9966 | 0.0836 | 0.5308 | 0.9904 | 0.0839 | 0.5191 | 1.4508518e-07 | 666 |
| 0.9975 | 0.0836 | 0.5294 | 0.9889 | 0.0838 | 0.5204 | 1.4507067e-07 | 667 |
| 0.9956 | 0.0837 | 0.5313 | 0.9930 | 0.0840 | 0.5188 | 1.4505613e-07 | 668 |
| 0.9958 | 0.0835 | 0.5307 | 0.9939 | 0.0837 | 0.5234 | 1.4504158e-07 | 669 |
| 0.9955 | 0.0837 | 0.5313 | 0.9974 | 0.0837 | 0.5203 | 1.45027e-07 | 670 |
| 0.9969 | 0.0835 | 0.5316 | 0.9965 | 0.0841 | 0.5139 | 1.4501241e-07 | 671 |
| 0.9969 | 0.0836 | 0.5308 | 0.9934 | 0.0838 | 0.5206 | 1.449978e-07 | 672 |
| 0.9971 | 0.0836 | 0.5315 | 0.9947 | 0.0835 | 0.5161 | 1.4498316e-07 | 673 |
| 0.9952 | 0.0836 | 0.5325 | 0.9913 | 0.0837 | 0.5182 | 1.4496851e-07 | 674 |
| 0.9969 | 0.0835 | 0.5300 | 0.9993 | 0.0832 | 0.5209 | 1.4495383e-07 | 675 |
| 0.9954 | 0.0835 | 0.5312 | 0.9902 | 0.0835 | 0.5199 | 1.4493914e-07 | 676 |
| 0.9953 | 0.0836 | 0.5303 | 0.9901 | 0.0840 | 0.5186 | 1.4492441e-07 | 677 |
| 0.9955 | 0.0834 | 0.5327 | 0.9954 | 0.0834 | 0.5198 | 1.4490968e-07 | 678 |
| 0.9965 | 0.0835 | 0.5300 | 0.9951 | 0.0836 | 0.5147 | 1.4489493e-07 | 679 |
| 0.9964 | 0.0835 | 0.5316 | 0.9913 | 0.0839 | 0.5213 | 1.4488015e-07 | 680 |
| 0.9962 | 0.0836 | 0.5318 | 0.9922 | 0.0840 | 0.5213 | 1.4486535e-07 | 681 |
| 0.9958 | 0.0835 | 0.5321 | 0.9970 | 0.0837 | 0.5167 | 1.4485053e-07 | 682 |
| 0.9956 | 0.0836 | 0.5326 | 0.9933 | 0.0838 | 0.5257 | 1.448357e-07 | 683 |
| 0.9955 | 0.0835 | 0.5306 | 0.9911 | 0.0834 | 0.5143 | 1.4482083e-07 | 684 |
| 0.9943 | 0.0836 | 0.5323 | 0.9969 | 0.0833 | 0.5196 | 1.4480595e-07 | 685 |
| 0.9954 | 0.0835 | 0.5309 | 0.9894 | 0.0840 | 0.5243 | 1.4479106e-07 | 686 |
| 0.9947 | 0.0836 | 0.5326 | 0.9878 | 0.0838 | 0.5207 | 1.4477614e-07 | 687 |
| 0.9963 | 0.0837 | 0.5329 | 0.9914 | 0.0836 | 0.5162 | 1.447612e-07 | 688 |
| 0.9960 | 0.0835 | 0.5319 | 0.9950 | 0.0835 | 0.5231 | 1.4474624e-07 | 689 |
| 0.9943 | 0.0836 | 0.5330 | 0.9961 | 0.0834 | 0.5227 | 1.4473126e-07 | 690 |
| 0.9940 | 0.0836 | 0.5317 | 0.9917 | 0.0833 | 0.5171 | 1.4471625e-07 | 691 |
| 0.9958 | 0.0836 | 0.5327 | 0.9946 | 0.0834 | 0.5231 | 1.4470123e-07 | 692 |
| 0.9962 | 0.0835 | 0.5322 | 0.9876 | 0.0843 | 0.5240 | 1.446862e-07 | 693 |
| 0.9965 | 0.0837 | 0.5330 | 0.9944 | 0.0835 | 0.5241 | 1.4467113e-07 | 694 |
| 0.9953 | 0.0836 | 0.5338 | 0.9909 | 0.0837 | 0.5210 | 1.4465606e-07 | 695 |
| 0.9944 | 0.0836 | 0.5320 | 0.9903 | 0.0838 | 0.5213 | 1.4464095e-07 | 696 |
| 0.9956 | 0.0836 | 0.5318 | 0.9869 | 0.0837 | 0.5186 | 1.4462583e-07 | 697 |
| 0.9962 | 0.0836 | 0.5328 | 0.9875 | 0.0838 | 0.5215 | 1.446107e-07 | 698 |
| 0.9952 | 0.0836 | 0.5328 | 0.9912 | 0.0836 | 0.5229 | 1.4459553e-07 | 699 |
| 0.9939 | 0.0836 | 0.5341 | 0.9854 | 0.0839 | 0.5167 | 1.4458035e-07 | 700 |
| 0.9930 | 0.0837 | 0.5358 | 0.9939 | 0.0836 | 0.5235 | 1.4456515e-07 | 701 |
| 0.9939 | 0.0837 | 0.5353 | 0.9854 | 0.0837 | 0.5222 | 1.4454993e-07 | 702 |
| 0.9953 | 0.0837 | 0.5323 | 0.9930 | 0.0841 | 0.5177 | 1.4453468e-07 | 703 |
| 0.9940 | 0.0836 | 0.5336 | 0.9912 | 0.0835 | 0.5225 | 1.4451942e-07 | 704 |
| 0.9938 | 0.0836 | 0.5341 | 0.9865 | 0.0832 | 0.5297 | 1.4450414e-07 | 705 |
| 0.9949 | 0.0836 | 0.5337 | 0.9924 | 0.0837 | 0.5211 | 1.4448884e-07 | 706 |
| 0.9944 | 0.0836 | 0.5343 | 0.9902 | 0.0839 | 0.5260 | 1.4447352e-07 | 707 |
| 0.9949 | 0.0835 | 0.5324 | 0.9877 | 0.0839 | 0.5265 | 1.4445817e-07 | 708 |
| 0.9943 | 0.0837 | 0.5338 | 0.9882 | 0.0839 | 0.5212 | 1.4444281e-07 | 709 |
| 0.9941 | 0.0835 | 0.5333 | 0.9867 | 0.0841 | 0.5148 | 1.4442743e-07 | 710 |
| 0.9948 | 0.0835 | 0.5337 | 0.9952 | 0.0833 | 0.5163 | 1.4441203e-07 | 711 |
| 0.9942 | 0.0836 | 0.5343 | 0.9897 | 0.0836 | 0.5199 | 1.4439661e-07 | 712 |
| 0.9933 | 0.0836 | 0.5351 | 0.9935 | 0.0837 | 0.5201 | 1.4438116e-07 | 713 |
| 0.9939 | 0.0836 | 0.5351 | 0.9913 | 0.0840 | 0.5202 | 1.443657e-07 | 714 |
| 0.9941 | 0.0836 | 0.5335 | 0.9936 | 0.0833 | 0.5217 | 1.4435022e-07 | 715 |
| 0.9944 | 0.0836 | 0.5355 | 0.9893 | 0.0838 | 0.5218 | 1.4433472e-07 | 716 |
| 0.9934 | 0.0836 | 0.5359 | 0.9920 | 0.0839 | 0.5252 | 1.443192e-07 | 717 |
| 0.9930 | 0.0836 | 0.5346 | 0.9882 | 0.0834 | 0.5189 | 1.4430366e-07 | 718 |
| 0.9934 | 0.0837 | 0.5361 | 0.9918 | 0.0840 | 0.5181 | 1.442881e-07 | 719 |
| 0.9945 | 0.0835 | 0.5339 | 0.9877 | 0.0840 | 0.5174 | 1.4427252e-07 | 720 |
| 0.9939 | 0.0836 | 0.5369 | 0.9945 | 0.0837 | 0.5310 | 1.4425692e-07 | 721 |
| 0.9938 | 0.0836 | 0.5347 | 0.9962 | 0.0839 | 0.5215 | 1.442413e-07 | 722 |
| 0.9939 | 0.0836 | 0.5348 | 0.9937 | 0.0841 | 0.5236 | 1.4422565e-07 | 723 |
| 0.9935 | 0.0836 | 0.5351 | 0.9925 | 0.0837 | 0.5209 | 1.4420999e-07 | 724 |
| 0.9922 | 0.0836 | 0.5356 | 0.9985 | 0.0831 | 0.5204 | 1.4419432e-07 | 725 |
| 0.9925 | 0.0836 | 0.5367 | 0.9937 | 0.0838 | 0.5223 | 1.4417861e-07 | 726 |
| 0.9925 | 0.0836 | 0.5358 | 0.9927 | 0.0835 | 0.5215 | 1.441629e-07 | 727 |
| 0.9934 | 0.0835 | 0.5365 | 0.9916 | 0.0839 | 0.5272 | 1.4414715e-07 | 728 |
| 0.9930 | 0.0836 | 0.5357 | 0.9924 | 0.0837 | 0.5210 | 1.4413139e-07 | 729 |
| 0.9931 | 0.0836 | 0.5357 | 0.9888 | 0.0832 | 0.5235 | 1.4411562e-07 | 730 |
| 0.9919 | 0.0837 | 0.5369 | 0.9921 | 0.0840 | 0.5224 | 1.4409981e-07 | 731 |
| 0.9925 | 0.0836 | 0.5361 | 0.9860 | 0.0838 | 0.5273 | 1.44084e-07 | 732 |
| 0.9931 | 0.0836 | 0.5368 | 0.9913 | 0.0835 | 0.5210 | 1.4406815e-07 | 733 |
| 0.9924 | 0.0838 | 0.5385 | 0.9854 | 0.0842 | 0.5251 | 1.440523e-07 | 734 |
| 0.9930 | 0.0836 | 0.5360 | 0.9945 | 0.0838 | 0.5250 | 1.4403642e-07 | 735 |
| 0.9922 | 0.0836 | 0.5378 | 0.9932 | 0.0836 | 0.5242 | 1.4402052e-07 | 736 |
| 0.9922 | 0.0836 | 0.5377 | 0.9920 | 0.0840 | 0.5232 | 1.440046e-07 | 737 |
| 0.9929 | 0.0836 | 0.5381 | 0.9919 | 0.0838 | 0.5242 | 1.4398866e-07 | 738 |
| 0.9930 | 0.0835 | 0.5381 | 0.9894 | 0.0841 | 0.5221 | 1.439727e-07 | 739 |
| 0.9929 | 0.0836 | 0.5369 | 0.9856 | 0.0841 | 0.5206 | 1.4395673e-07 | 740 |
| 0.9918 | 0.0837 | 0.5384 | 0.9987 | 0.0834 | 0.5184 | 1.4394072e-07 | 741 |
| 0.9923 | 0.0836 | 0.5364 | 0.9930 | 0.0835 | 0.5227 | 1.4392471e-07 | 742 |
| 0.9925 | 0.0836 | 0.5370 | 0.9887 | 0.0843 | 0.5201 | 1.4390866e-07 | 743 |
| 0.9921 | 0.0836 | 0.5392 | 0.9855 | 0.0839 | 0.5231 | 1.438926e-07 | 744 |
| 0.9928 | 0.0835 | 0.5381 | 0.9854 | 0.0841 | 0.5224 | 1.4387653e-07 | 745 |
| 0.9920 | 0.0836 | 0.5378 | 0.9906 | 0.0839 | 0.5202 | 1.4386043e-07 | 746 |
| 0.9914 | 0.0837 | 0.5383 | 0.9873 | 0.0839 | 0.5256 | 1.4384432e-07 | 747 |
| 0.9917 | 0.0835 | 0.5387 | 0.9879 | 0.0837 | 0.5236 | 1.4382817e-07 | 748 |
| 0.9916 | 0.0837 | 0.5394 | 0.9949 | 0.0835 | 0.5265 | 1.4381202e-07 | 749 |
| 0.9925 | 0.0836 | 0.5372 | 0.9890 | 0.0834 | 0.5249 | 1.4379584e-07 | 750 |
| 0.9924 | 0.0837 | 0.5368 | 0.9928 | 0.0838 | 0.5235 | 1.4377964e-07 | 751 |
| 0.9915 | 0.0837 | 0.5387 | 0.9901 | 0.0839 | 0.5249 | 1.4376343e-07 | 752 |
| 0.9912 | 0.0836 | 0.5390 | 0.9873 | 0.0837 | 0.5241 | 1.4374719e-07 | 753 |
| 0.9911 | 0.0837 | 0.5381 | 0.9908 | 0.0842 | 0.5304 | 1.4373093e-07 | 754 |
| 0.9915 | 0.0836 | 0.5402 | 0.9938 | 0.0831 | 0.5304 | 1.4371466e-07 | 755 |
| 0.9918 | 0.0836 | 0.5404 | 0.9920 | 0.0837 | 0.5185 | 1.4369836e-07 | 756 |
| 0.9917 | 0.0836 | 0.5389 | 0.9894 | 0.0838 | 0.5239 | 1.4368204e-07 | 757 |
| 0.9918 | 0.0837 | 0.5384 | 0.9911 | 0.0835 | 0.5274 | 1.4366572e-07 | 758 |
| 0.9917 | 0.0836 | 0.5395 | 0.9898 | 0.0836 | 0.5201 | 1.4364936e-07 | 759 |
| 0.9893 | 0.0836 | 0.5412 | 0.9905 | 0.0838 | 0.5304 | 1.4363299e-07 | 760 |
| 0.9909 | 0.0837 | 0.5391 | 0.9931 | 0.0837 | 0.5234 | 1.4361659e-07 | 761 |
| 0.9915 | 0.0837 | 0.5389 | 0.9889 | 0.0838 | 0.5226 | 1.4360018e-07 | 762 |
| 0.9901 | 0.0836 | 0.5408 | 0.9894 | 0.0832 | 0.5264 | 1.4358375e-07 | 763 |
| 0.9906 | 0.0836 | 0.5392 | 0.9897 | 0.0835 | 0.5253 | 1.4356729e-07 | 764 |
| 0.9907 | 0.0837 | 0.5407 | 0.9863 | 0.0840 | 0.5245 | 1.4355082e-07 | 765 |
| 0.9920 | 0.0835 | 0.5406 | 0.9907 | 0.0836 | 0.5255 | 1.4353432e-07 | 766 |
| 0.9923 | 0.0836 | 0.5392 | 0.9899 | 0.0840 | 0.5235 | 1.4351781e-07 | 767 |
| 0.9922 | 0.0836 | 0.5417 | 0.9856 | 0.0839 | 0.5272 | 1.4350128e-07 | 768 |
| 0.9905 | 0.0836 | 0.5414 | 0.9825 | 0.0833 | 0.5308 | 1.4348473e-07 | 769 |
| 0.9905 | 0.0836 | 0.5410 | 0.9871 | 0.0837 | 0.5288 | 1.4346816e-07 | 770 |
| 0.9915 | 0.0836 | 0.5401 | 0.9892 | 0.0837 | 0.5266 | 1.4345157e-07 | 771 |
| 0.9918 | 0.0836 | 0.5400 | 0.9904 | 0.0839 | 0.5351 | 1.4343496e-07 | 772 |
| 0.9907 | 0.0837 | 0.5407 | 0.9918 | 0.0836 | 0.5259 | 1.4341833e-07 | 773 |
| 0.9907 | 0.0836 | 0.5410 | 0.9866 | 0.0839 | 0.5273 | 1.4340168e-07 | 774 |
| 0.9910 | 0.0836 | 0.5417 | 0.9882 | 0.0837 | 0.5269 | 1.4338501e-07 | 775 |
| 0.9899 | 0.0837 | 0.5415 | 0.9935 | 0.0836 | 0.5244 | 1.4336833e-07 | 776 |
| 0.9913 | 0.0837 | 0.5409 | 0.9868 | 0.0835 | 0.5311 | 1.4335161e-07 | 777 |
| 0.9899 | 0.0835 | 0.5423 | 0.9860 | 0.0839 | 0.5289 | 1.4333489e-07 | 778 |
| 0.9908 | 0.0836 | 0.5406 | 0.9882 | 0.0837 | 0.5286 | 1.4331815e-07 | 779 |
| 0.9903 | 0.0838 | 0.5422 | 0.9849 | 0.0839 | 0.5249 | 1.4330138e-07 | 780 |
| 0.9911 | 0.0837 | 0.5418 | 0.9879 | 0.0840 | 0.5304 | 1.432846e-07 | 781 |
| 0.9902 | 0.0836 | 0.5412 | 0.9854 | 0.0839 | 0.5219 | 1.4326778e-07 | 782 |
| 0.9903 | 0.0836 | 0.5421 | 0.9855 | 0.0838 | 0.5282 | 1.4325096e-07 | 783 |
| 0.9896 | 0.0836 | 0.5418 | 0.9939 | 0.0834 | 0.5240 | 1.4323412e-07 | 784 |
| 0.9893 | 0.0837 | 0.5399 | 0.9816 | 0.0836 | 0.5256 | 1.4321725e-07 | 785 |
| 0.9900 | 0.0835 | 0.5430 | 0.9891 | 0.0838 | 0.5310 | 1.4320037e-07 | 786 |
| 0.9898 | 0.0837 | 0.5432 | 0.9895 | 0.0836 | 0.5276 | 1.4318347e-07 | 787 |
| 0.9889 | 0.0836 | 0.5430 | 0.9888 | 0.0836 | 0.5284 | 1.4316655e-07 | 788 |
| 0.9886 | 0.0837 | 0.5418 | 0.9900 | 0.0837 | 0.5280 | 1.431496e-07 | 789 |
| 0.9892 | 0.0835 | 0.5418 | 0.9863 | 0.0835 | 0.5320 | 1.4313264e-07 | 790 |
| 0.9892 | 0.0836 | 0.5420 | 0.9889 | 0.0838 | 0.5340 | 1.4311566e-07 | 791 |
| 0.9892 | 0.0836 | 0.5427 | 0.9890 | 0.0834 | 0.5295 | 1.4309866e-07 | 792 |
| 0.9898 | 0.0837 | 0.5430 | 0.9894 | 0.0839 | 0.5263 | 1.4308164e-07 | 793 |
| 0.9895 | 0.0837 | 0.5439 | 0.9897 | 0.0834 | 0.5279 | 1.430646e-07 | 794 |
| 0.9880 | 0.0836 | 0.5434 | 0.9880 | 0.0836 | 0.5272 | 1.4304754e-07 | 795 |
| 0.9902 | 0.0835 | 0.5427 | 0.9877 | 0.0842 | 0.5282 | 1.4303046e-07 | 796 |
| 0.9893 | 0.0837 | 0.5434 | 0.9884 | 0.0841 | 0.5312 | 1.4301337e-07 | 797 |
| 0.9887 | 0.0838 | 0.5438 | 0.9821 | 0.0835 | 0.5385 | 1.4299626e-07 | 798 |
| 0.9899 | 0.0837 | 0.5422 | 0.9916 | 0.0836 | 0.5333 | 1.4297912e-07 | 799 |
| 0.9894 | 0.0836 | 0.5432 | 0.9897 | 0.0835 | 0.5329 | 1.4296197e-07 | 800 |
| 0.9880 | 0.0837 | 0.5442 | 0.9964 | 0.0838 | 0.5300 | 1.4294478e-07 | 801 |
| 0.9886 | 0.0837 | 0.5441 | 0.9833 | 0.0840 | 0.5243 | 1.4292759e-07 | 802 |
| 0.9883 | 0.0837 | 0.5443 | 0.9844 | 0.0839 | 0.5324 | 1.4291038e-07 | 803 |
| 0.9898 | 0.0837 | 0.5440 | 0.9906 | 0.0839 | 0.5278 | 1.4289314e-07 | 804 |
| 0.9889 | 0.0838 | 0.5426 | 0.9913 | 0.0834 | 0.5234 | 1.4287589e-07 | 805 |
| 0.9888 | 0.0836 | 0.5450 | 0.9855 | 0.0834 | 0.5323 | 1.4285862e-07 | 806 |
| 0.9879 | 0.0835 | 0.5457 | 0.9880 | 0.0841 | 0.5251 | 1.4284133e-07 | 807 |
| 0.9881 | 0.0836 | 0.5439 | 0.9953 | 0.0833 | 0.5293 | 1.4282402e-07 | 808 |
| 0.9887 | 0.0836 | 0.5452 | 0.9884 | 0.0838 | 0.5213 | 1.428067e-07 | 809 |
| 0.9896 | 0.0837 | 0.5442 | 0.9875 | 0.0838 | 0.5312 | 1.4278935e-07 | 810 |
| 0.9882 | 0.0837 | 0.5444 | 0.9900 | 0.0843 | 0.5268 | 1.4277198e-07 | 811 |
| 0.9889 | 0.0836 | 0.5440 | 0.9875 | 0.0841 | 0.5241 | 1.4275459e-07 | 812 |
| 0.9885 | 0.0836 | 0.5447 | 0.9886 | 0.0839 | 0.5274 | 1.4273718e-07 | 813 |
| 0.9885 | 0.0837 | 0.5458 | 0.9920 | 0.0838 | 0.5274 | 1.4271976e-07 | 814 |
| 0.9883 | 0.0836 | 0.5457 | 0.9929 | 0.0835 | 0.5289 | 1.427023e-07 | 815 |
| 0.9883 | 0.0836 | 0.5454 | 0.9910 | 0.0836 | 0.5255 | 1.4268484e-07 | 816 |
| 0.9887 | 0.0836 | 0.5453 | 0.9852 | 0.0841 | 0.5255 | 1.4266736e-07 | 817 |
| 0.9882 | 0.0837 | 0.5451 | 0.9907 | 0.0841 | 0.5291 | 1.4264985e-07 | 818 |
| 0.9888 | 0.0837 | 0.5444 | 0.9882 | 0.0841 | 0.5294 | 1.4263233e-07 | 819 |
| 0.9883 | 0.0836 | 0.5463 | 0.9852 | 0.0839 | 0.5311 | 1.426148e-07 | 820 |
| 0.9876 | 0.0836 | 0.5476 | 0.9911 | 0.0840 | 0.5259 | 1.4259723e-07 | 821 |
| 0.9887 | 0.0837 | 0.5456 | 0.9930 | 0.0836 | 0.5307 | 1.4257965e-07 | 822 |
| 0.9869 | 0.0836 | 0.5450 | 0.9903 | 0.0840 | 0.5287 | 1.4256206e-07 | 823 |
| 0.9876 | 0.0836 | 0.5454 | 0.9879 | 0.0839 | 0.5297 | 1.4254444e-07 | 824 |
| 0.9872 | 0.0837 | 0.5472 | 0.9860 | 0.0839 | 0.5253 | 1.425268e-07 | 825 |
| 0.9893 | 0.0836 | 0.5465 | 0.9917 | 0.0834 | 0.5289 | 1.4250915e-07 | 826 |
| 0.9881 | 0.0838 | 0.5469 | 0.9878 | 0.0835 | 0.5305 | 1.4249147e-07 | 827 |
| 0.9878 | 0.0837 | 0.5450 | 0.9839 | 0.0840 | 0.5323 | 1.4247378e-07 | 828 |
| 0.9877 | 0.0836 | 0.5476 | 0.9854 | 0.0836 | 0.5313 | 1.4245606e-07 | 829 |
| 0.9865 | 0.0837 | 0.5492 | 0.9879 | 0.0840 | 0.5270 | 1.4243832e-07 | 830 |
| 0.9878 | 0.0837 | 0.5477 | 0.9908 | 0.0836 | 0.5291 | 1.4242057e-07 | 831 |
| 0.9870 | 0.0837 | 0.5463 | 0.9882 | 0.0831 | 0.5281 | 1.424028e-07 | 832 |
| 0.9873 | 0.0836 | 0.5465 | 0.9889 | 0.0836 | 0.5327 | 1.42385e-07 | 833 |
| 0.9859 | 0.0836 | 0.5478 | 0.9871 | 0.0835 | 0.5274 | 1.423672e-07 | 834 |
| 0.9878 | 0.0836 | 0.5474 | 0.9895 | 0.0838 | 0.5284 | 1.4234936e-07 | 835 |
| 0.9872 | 0.0837 | 0.5482 | 0.9862 | 0.0839 | 0.5298 | 1.4233152e-07 | 836 |
| 0.9865 | 0.0837 | 0.5479 | 0.9848 | 0.0840 | 0.5351 | 1.4231365e-07 | 837 |
| 0.9871 | 0.0837 | 0.5455 | 0.9878 | 0.0838 | 0.5323 | 1.4229576e-07 | 838 |
| 0.9862 | 0.0837 | 0.5464 | 0.9883 | 0.0839 | 0.5308 | 1.4227786e-07 | 839 |
| 0.9879 | 0.0837 | 0.5472 | 0.9887 | 0.0837 | 0.5354 | 1.4225994e-07 | 840 |
| 0.9865 | 0.0838 | 0.5481 | 0.9900 | 0.0834 | 0.5282 | 1.4224199e-07 | 841 |
| 0.9864 | 0.0835 | 0.5495 | 0.9883 | 0.0838 | 0.5292 | 1.4222402e-07 | 842 |
| 0.9866 | 0.0837 | 0.5475 | 0.9880 | 0.0837 | 0.5285 | 1.4220605e-07 | 843 |
| 0.9865 | 0.0837 | 0.5475 | 0.9881 | 0.0843 | 0.5302 | 1.4218804e-07 | 844 |
| 0.9866 | 0.0836 | 0.5482 | 0.9889 | 0.0842 | 0.5343 | 1.4217002e-07 | 845 |
| 0.9866 | 0.0836 | 0.5474 | 0.9861 | 0.0842 | 0.5274 | 1.4215199e-07 | 846 |
| 0.9856 | 0.0837 | 0.5492 | 0.9908 | 0.0840 | 0.5324 | 1.4213393e-07 | 847 |
| 0.9868 | 0.0836 | 0.5481 | 0.9880 | 0.0836 | 0.5267 | 1.4211585e-07 | 848 |
| 0.9866 | 0.0836 | 0.5492 | 0.9864 | 0.0838 | 0.5327 | 1.4209776e-07 | 849 |
| 0.9862 | 0.0836 | 0.5480 | 0.9926 | 0.0838 | 0.5336 | 1.4207964e-07 | 850 |
| 0.9869 | 0.0836 | 0.5506 | 0.9874 | 0.0831 | 0.5345 | 1.4206151e-07 | 851 |
| 0.9855 | 0.0836 | 0.5489 | 0.9865 | 0.0838 | 0.5319 | 1.4204336e-07 | 852 |
| 0.9855 | 0.0837 | 0.5494 | 0.9894 | 0.0837 | 0.5243 | 1.4202519e-07 | 853 |
| 0.9857 | 0.0837 | 0.5487 | 0.9861 | 0.0837 | 0.5357 | 1.42007e-07 | 854 |
| 0.9866 | 0.0836 | 0.5510 | 0.9851 | 0.0836 | 0.5356 | 1.4198879e-07 | 855 |
| 0.9879 | 0.0836 | 0.5494 | 0.9849 | 0.0839 | 0.5342 | 1.4197056e-07 | 856 |
| 0.9862 | 0.0837 | 0.5496 | 0.9842 | 0.0837 | 0.5403 | 1.4195231e-07 | 857 |
| 0.9870 | 0.0836 | 0.5507 | 0.9898 | 0.0837 | 0.5298 | 1.4193405e-07 | 858 |
| 0.9852 | 0.0837 | 0.5493 | 0.9876 | 0.0841 | 0.5361 | 1.4191576e-07 | 859 |
| 0.9857 | 0.0838 | 0.5506 | 0.9820 | 0.0843 | 0.5315 | 1.4189746e-07 | 860 |
| 0.9866 | 0.0837 | 0.5497 | 0.9888 | 0.0835 | 0.5273 | 1.4187914e-07 | 861 |
| 0.9861 | 0.0837 | 0.5505 | 0.9882 | 0.0839 | 0.5275 | 1.418608e-07 | 862 |
| 0.9844 | 0.0837 | 0.5501 | 0.9898 | 0.0838 | 0.5308 | 1.4184243e-07 | 863 |
| 0.9858 | 0.0837 | 0.5488 | 0.9863 | 0.0834 | 0.5319 | 1.4182406e-07 | 864 |
| 0.9853 | 0.0836 | 0.5493 | 0.9870 | 0.0839 | 0.5308 | 1.4180566e-07 | 865 |
| 0.9845 | 0.0837 | 0.5528 | 0.9876 | 0.0840 | 0.5260 | 1.4178724e-07 | 866 |
| 0.9854 | 0.0835 | 0.5517 | 0.9870 | 0.0834 | 0.5280 | 1.4176881e-07 | 867 |
| 0.9866 | 0.0837 | 0.5498 | 0.9826 | 0.0841 | 0.5368 | 1.4175035e-07 | 868 |
| 0.9857 | 0.0837 | 0.5505 | 0.9830 | 0.0837 | 0.5358 | 1.4173187e-07 | 869 |
| 0.9864 | 0.0836 | 0.5508 | 0.9850 | 0.0836 | 0.5383 | 1.4171339e-07 | 870 |
| 0.9851 | 0.0837 | 0.5484 | 0.9874 | 0.0840 | 0.5300 | 1.4169487e-07 | 871 |
| 0.9849 | 0.0837 | 0.5508 | 0.9833 | 0.0839 | 0.5323 | 1.4167634e-07 | 872 |
| 0.9847 | 0.0836 | 0.5516 | 0.9872 | 0.0842 | 0.5351 | 1.416578e-07 | 873 |
| 0.9849 | 0.0837 | 0.5516 | 0.9885 | 0.0841 | 0.5315 | 1.4163922e-07 | 874 |
| 0.9852 | 0.0837 | 0.5520 | 0.9904 | 0.0834 | 0.5332 | 1.4162063e-07 | 875 |
| 0.9843 | 0.0837 | 0.5527 | 0.9867 | 0.0840 | 0.5264 | 1.4160203e-07 | 876 |
| 0.9847 | 0.0837 | 0.5513 | 0.9867 | 0.0838 | 0.5279 | 1.415834e-07 | 877 |
| 0.9844 | 0.0836 | 0.5518 | 0.9876 | 0.0839 | 0.5325 | 1.4156475e-07 | 878 |
| 0.9846 | 0.0837 | 0.5522 | 0.9876 | 0.0838 | 0.5313 | 1.415461e-07 | 879 |
| 0.9845 | 0.0837 | 0.5526 | 0.9900 | 0.0838 | 0.5290 | 1.4152741e-07 | 880 |
| 0.9857 | 0.0837 | 0.5515 | 0.9893 | 0.0839 | 0.5255 | 1.415087e-07 | 881 |
| 0.9845 | 0.0837 | 0.5523 | 0.9892 | 0.0837 | 0.5324 | 1.4148999e-07 | 882 |
| 0.9839 | 0.0837 | 0.5523 | 0.9866 | 0.0839 | 0.5330 | 1.4147125e-07 | 883 |
| 0.9837 | 0.0837 | 0.5541 | 0.9832 | 0.0838 | 0.5330 | 1.4145249e-07 | 884 |
| 0.9841 | 0.0838 | 0.5527 | 0.9877 | 0.0841 | 0.5311 | 1.4143372e-07 | 885 |
| 0.9843 | 0.0837 | 0.5515 | 0.9851 | 0.0838 | 0.5368 | 1.4141492e-07 | 886 |
| 0.9832 | 0.0837 | 0.5541 | 0.9840 | 0.0841 | 0.5322 | 1.413961e-07 | 887 |
| 0.9845 | 0.0837 | 0.5539 | 0.9859 | 0.0837 | 0.5286 | 1.4137727e-07 | 888 |
| 0.9842 | 0.0837 | 0.5533 | 0.9914 | 0.0838 | 0.5303 | 1.4135843e-07 | 889 |
| 0.9835 | 0.0837 | 0.5530 | 0.9890 | 0.0837 | 0.5358 | 1.4133956e-07 | 890 |
| 0.9830 | 0.0837 | 0.5528 | 0.9904 | 0.0839 | 0.5347 | 1.4132067e-07 | 891 |
| 0.9856 | 0.0836 | 0.5534 | 0.9850 | 0.0838 | 0.5328 | 1.4130177e-07 | 892 |
| 0.9841 | 0.0837 | 0.5537 | 0.9851 | 0.0842 | 0.5367 | 1.4128284e-07 | 893 |
| 0.9847 | 0.0837 | 0.5541 | 0.9902 | 0.0837 | 0.5311 | 1.412639e-07 | 894 |
| 0.9836 | 0.0837 | 0.5539 | 0.9841 | 0.0838 | 0.5301 | 1.4124494e-07 | 895 |
| 0.9832 | 0.0837 | 0.5520 | 0.9847 | 0.0839 | 0.5288 | 1.4122595e-07 | 896 |
| 0.9848 | 0.0836 | 0.5535 | 0.9815 | 0.0840 | 0.5340 | 1.4120695e-07 | 897 |
| 0.9847 | 0.0837 | 0.5534 | 0.9873 | 0.0841 | 0.5351 | 1.4118794e-07 | 898 |
| 0.9834 | 0.0837 | 0.5549 | 0.9872 | 0.0836 | 0.5319 | 1.411689e-07 | 899 |
| 0.9844 | 0.0836 | 0.5536 | 0.9851 | 0.0838 | 0.5324 | 1.4114984e-07 | 900 |
| 0.9838 | 0.0837 | 0.5565 | 0.9878 | 0.0839 | 0.5278 | 1.4113077e-07 | 901 |
| 0.9847 | 0.0836 | 0.5542 | 0.9880 | 0.0839 | 0.5303 | 1.4111167e-07 | 902 |
| 0.9845 | 0.0836 | 0.5533 | 0.9874 | 0.0839 | 0.5269 | 1.4109256e-07 | 903 |
| 0.9830 | 0.0836 | 0.5536 | 0.9892 | 0.0835 | 0.5287 | 1.4107343e-07 | 904 |
| 0.9839 | 0.0836 | 0.5550 | 0.9898 | 0.0836 | 0.5335 | 1.4105429e-07 | 905 |
| 0.9834 | 0.0837 | 0.5547 | 0.9856 | 0.0839 | 0.5348 | 1.4103512e-07 | 906 |
| 0.9836 | 0.0838 | 0.5540 | 0.9843 | 0.0838 | 0.5348 | 1.4101593e-07 | 907 |
| 0.9832 | 0.0837 | 0.5547 | 0.9890 | 0.0837 | 0.5396 | 1.4099673e-07 | 908 |
| 0.9821 | 0.0838 | 0.5559 | 0.9877 | 0.0839 | 0.5326 | 1.409775e-07 | 909 |
| 0.9829 | 0.0838 | 0.5554 | 0.9810 | 0.0841 | 0.5350 | 1.4095826e-07 | 910 |
| 0.9822 | 0.0837 | 0.5547 | 0.9892 | 0.0837 | 0.5291 | 1.4093901e-07 | 911 |
| 0.9821 | 0.0837 | 0.5551 | 0.9898 | 0.0836 | 0.5282 | 1.4091972e-07 | 912 |
| 0.9824 | 0.0837 | 0.5555 | 0.9864 | 0.0837 | 0.5323 | 1.4090043e-07 | 913 |
| 0.9829 | 0.0837 | 0.5562 | 0.9855 | 0.0838 | 0.5310 | 1.4088111e-07 | 914 |
| 0.9816 | 0.0837 | 0.5570 | 0.9829 | 0.0837 | 0.5327 | 1.4086179e-07 | 915 |
| 0.9822 | 0.0837 | 0.5557 | 0.9911 | 0.0839 | 0.5308 | 1.4084243e-07 | 916 |
| 0.9815 | 0.0837 | 0.5568 | 0.9857 | 0.0840 | 0.5292 | 1.4082306e-07 | 917 |
| 0.9818 | 0.0838 | 0.5570 | 0.9931 | 0.0839 | 0.5288 | 1.4080368e-07 | 918 |
| 0.9818 | 0.0837 | 0.5565 | 0.9852 | 0.0838 | 0.5306 | 1.4078427e-07 | 919 |
| 0.9825 | 0.0837 | 0.5556 | 0.9897 | 0.0836 | 0.5326 | 1.4076484e-07 | 920 |
| 0.9832 | 0.0836 | 0.5578 | 0.9895 | 0.0838 | 0.5289 | 1.407454e-07 | 921 |
| 0.9828 | 0.0837 | 0.5568 | 0.9860 | 0.0839 | 0.5311 | 1.4072593e-07 | 922 |
| 0.9817 | 0.0837 | 0.5575 | 0.9841 | 0.0836 | 0.5353 | 1.4070645e-07 | 923 |
| 0.9816 | 0.0836 | 0.5572 | 0.9861 | 0.0838 | 0.5317 | 1.4068695e-07 | 924 |
| 0.9814 | 0.0836 | 0.5553 | 0.9880 | 0.0838 | 0.5315 | 1.4066744e-07 | 925 |
| 0.9828 | 0.0837 | 0.5562 | 0.9895 | 0.0837 | 0.5310 | 1.406479e-07 | 926 |
| 0.9831 | 0.0836 | 0.5562 | 0.9857 | 0.0837 | 0.5320 | 1.4062834e-07 | 927 |
| 0.9818 | 0.0837 | 0.5570 | 0.9871 | 0.0838 | 0.5353 | 1.4060878e-07 | 928 |
| 0.9825 | 0.0836 | 0.5560 | 0.9859 | 0.0834 | 0.5358 | 1.4058918e-07 | 929 |
| 0.9825 | 0.0836 | 0.5558 | 0.9922 | 0.0833 | 0.5282 | 1.4056957e-07 | 930 |
| 0.9815 | 0.0836 | 0.5580 | 0.9846 | 0.0839 | 0.5306 | 1.4054994e-07 | 931 |
| 0.9818 | 0.0837 | 0.5575 | 0.9799 | 0.0841 | 0.5348 | 1.405303e-07 | 932 |
| 0.9827 | 0.0837 | 0.5556 | 0.9870 | 0.0837 | 0.5337 | 1.4051064e-07 | 933 |
| 0.9815 | 0.0837 | 0.5567 | 0.9850 | 0.0836 | 0.5328 | 1.4049095e-07 | 934 |
| 0.9811 | 0.0837 | 0.5587 | 0.9865 | 0.0838 | 0.5348 | 1.4047126e-07 | 935 |
| 0.9819 | 0.0837 | 0.5585 | 0.9912 | 0.0842 | 0.5315 | 1.4045153e-07 | 936 |
| 0.9814 | 0.0837 | 0.5576 | 0.9863 | 0.0836 | 0.5333 | 1.404318e-07 | 937 |
| 0.9813 | 0.0837 | 0.5576 | 0.9860 | 0.0841 | 0.5401 | 1.4041204e-07 | 938 |
| 0.9819 | 0.0837 | 0.5582 | 0.9888 | 0.0837 | 0.5332 | 1.4039227e-07 | 939 |
| 0.9804 | 0.0837 | 0.5592 | 0.9883 | 0.0835 | 0.5358 | 1.4037248e-07 | 940 |
| 0.9808 | 0.0837 | 0.5586 | 0.9868 | 0.0837 | 0.5358 | 1.4035267e-07 | 941 |
| 0.9795 | 0.0838 | 0.5584 | 0.9842 | 0.0842 | 0.5303 | 1.4033284e-07 | 942 |
| 0.9811 | 0.0837 | 0.5594 | 0.9839 | 0.0835 | 0.5302 | 1.4031299e-07 | 943 |
| 0.9818 | 0.0837 | 0.5579 | 0.9899 | 0.0837 | 0.5353 | 1.4029312e-07 | 944 |
| 0.9810 | 0.0838 | 0.5600 | 0.9862 | 0.0835 | 0.5322 | 1.4027324e-07 | 945 |
| 0.9810 | 0.0838 | 0.5588 | 0.9916 | 0.0839 | 0.5306 | 1.4025335e-07 | 946 |
| 0.9820 | 0.0836 | 0.5604 | 0.9800 | 0.0844 | 0.5330 | 1.4023342e-07 | 947 |
| 0.9834 | 0.0836 | 0.5582 | 0.9860 | 0.0837 | 0.5298 | 1.4021349e-07 | 948 |
| 0.9809 | 0.0836 | 0.5580 | 0.9870 | 0.0842 | 0.5308 | 1.4019354e-07 | 949 |
| 0.9823 | 0.0838 | 0.5580 | 0.9855 | 0.0837 | 0.5323 | 1.4017355e-07 | 950 |
| 0.9805 | 0.0837 | 0.5592 | 0.9874 | 0.0839 | 0.5315 | 1.4015356e-07 | 951 |
| 0.9796 | 0.0838 | 0.5603 | 0.9900 | 0.0840 | 0.5356 | 1.4013355e-07 | 952 |
| 0.9800 | 0.0838 | 0.5593 | 0.9884 | 0.0836 | 0.5316 | 1.4011353e-07 | 953 |
| 0.9797 | 0.0838 | 0.5594 | 0.9863 | 0.0839 | 0.5268 | 1.4009348e-07 | 954 |
| 0.9813 | 0.0836 | 0.5595 | 0.9840 | 0.0835 | 0.5387 | 1.4007341e-07 | 955 |
| 0.9799 | 0.0837 | 0.5594 | 0.9838 | 0.0839 | 0.5351 | 1.4005333e-07 | 956 |
| 0.9809 | 0.0837 | 0.5595 | 0.9901 | 0.0833 | 0.5344 | 1.4003322e-07 | 957 |
| 0.9805 | 0.0837 | 0.5602 | 0.9880 | 0.0836 | 0.5360 | 1.400131e-07 | 958 |
| 0.9792 | 0.0839 | 0.5591 | 0.9883 | 0.0839 | 0.5332 | 1.3999296e-07 | 959 |
| 0.9803 | 0.0837 | 0.5605 | 0.9853 | 0.0841 | 0.5338 | 1.3997281e-07 | 960 |
| 0.9808 | 0.0838 | 0.5598 | 0.9855 | 0.0838 | 0.5294 | 1.3995263e-07 | 961 |
| 0.9800 | 0.0838 | 0.5606 | 0.9849 | 0.0838 | 0.5367 | 1.3993244e-07 | 962 |
| 0.9800 | 0.0837 | 0.5600 | 0.9878 | 0.0837 | 0.5314 | 1.3991223e-07 | 963 |
| 0.9799 | 0.0839 | 0.5619 | 0.9846 | 0.0841 | 0.5369 | 1.3989201e-07 | 964 |
| 0.9798 | 0.0838 | 0.5612 | 0.9849 | 0.0838 | 0.5340 | 1.3987176e-07 | 965 |
| 0.9794 | 0.0837 | 0.5632 | 0.9866 | 0.0840 | 0.5332 | 1.398515e-07 | 966 |
| 0.9797 | 0.0837 | 0.5596 | 0.9851 | 0.0837 | 0.5330 | 1.3983122e-07 | 967 |
| 0.9796 | 0.0838 | 0.5609 | 0.9892 | 0.0836 | 0.5252 | 1.3981091e-07 | 968 |
| 0.9794 | 0.0837 | 0.5603 | 0.9834 | 0.0838 | 0.5354 | 1.3979059e-07 | 969 |
| 0.9802 | 0.0837 | 0.5610 | 0.9900 | 0.0837 | 0.5308 | 1.3977025e-07 | 970 |
| 0.9799 | 0.0837 | 0.5607 | 0.9908 | 0.0833 | 0.5331 | 1.397499e-07 | 971 |
| 0.9796 | 0.0837 | 0.5601 | 0.9875 | 0.0843 | 0.5398 | 1.3972952e-07 | 972 |
| 0.9787 | 0.0837 | 0.5633 | 0.9842 | 0.0836 | 0.5352 | 1.3970913e-07 | 973 |
| 0.9803 | 0.0837 | 0.5608 | 0.9912 | 0.0835 | 0.5345 | 1.3968872e-07 | 974 |
| 0.9801 | 0.0838 | 0.5618 | 0.9879 | 0.0838 | 0.5310 | 1.396683e-07 | 975 |
| 0.9794 | 0.0838 | 0.5635 | 0.9881 | 0.0840 | 0.5301 | 1.3964785e-07 | 976 |
| 0.9807 | 0.0836 | 0.5603 | 0.9825 | 0.0839 | 0.5337 | 1.3962739e-07 | 977 |
| 0.9807 | 0.0837 | 0.5617 | 0.9874 | 0.0834 | 0.5328 | 1.3960691e-07 | 978 |
| 0.9797 | 0.0837 | 0.5610 | 0.9898 | 0.0836 | 0.5320 | 1.3958642e-07 | 979 |
| 0.9779 | 0.0838 | 0.5625 | 0.9883 | 0.0844 | 0.5347 | 1.395659e-07 | 980 |
| 0.9799 | 0.0837 | 0.5609 | 0.9863 | 0.0837 | 0.5371 | 1.3954536e-07 | 981 |
| 0.9784 | 0.0837 | 0.5634 | 0.9839 | 0.0840 | 0.5310 | 1.3952481e-07 | 982 |
| 0.9800 | 0.0837 | 0.5625 | 0.9814 | 0.0838 | 0.5338 | 1.3950425e-07 | 983 |
| 0.9791 | 0.0837 | 0.5616 | 0.9870 | 0.0842 | 0.5310 | 1.3948366e-07 | 984 |
| 0.9797 | 0.0836 | 0.5614 | 0.9861 | 0.0835 | 0.5329 | 1.3946305e-07 | 985 |
| 0.9794 | 0.0838 | 0.5623 | 0.9873 | 0.0835 | 0.5404 | 1.3944243e-07 | 986 |
| 0.9801 | 0.0836 | 0.5625 | 0.9883 | 0.0841 | 0.5364 | 1.3942179e-07 | 987 |
| 0.9785 | 0.0837 | 0.5615 | 0.9904 | 0.0839 | 0.5373 | 1.3940112e-07 | 988 |
| 0.9792 | 0.0837 | 0.5641 | 0.9851 | 0.0836 | 0.5374 | 1.3938045e-07 | 989 |
| 0.9796 | 0.0837 | 0.5617 | 0.9839 | 0.0839 | 0.5320 | 1.3935976e-07 | 990 |
| 0.9787 | 0.0837 | 0.5615 | 0.9890 | 0.0840 | 0.5354 | 1.3933904e-07 | 991 |
| 0.9780 | 0.0837 | 0.5647 | 0.9870 | 0.0838 | 0.5291 | 1.393183e-07 | 992 |
| 0.9782 | 0.0837 | 0.5648 | 0.9862 | 0.0835 | 0.5356 | 1.3929755e-07 | 993 |
| 0.9792 | 0.0837 | 0.5635 | 0.9801 | 0.0839 | 0.5340 | 1.3927679e-07 | 994 |
| 0.9786 | 0.0839 | 0.5628 | 0.9864 | 0.0844 | 0.5306 | 1.39256e-07 | 995 |
| 0.9781 | 0.0838 | 0.5642 | 0.9869 | 0.0843 | 0.5375 | 1.392352e-07 | 996 |
| 0.9778 | 0.0837 | 0.5648 | 0.9867 | 0.0836 | 0.5363 | 1.3921438e-07 | 997 |
| 0.9780 | 0.0838 | 0.5640 | 0.9859 | 0.0838 | 0.5285 | 1.3919355e-07 | 998 |
| 0.9798 | 0.0837 | 0.5628 | 0.9814 | 0.0841 | 0.5330 | 1.3917268e-07 | 999 |
| 0.9781 | 0.0838 | 0.5653 | 0.9842 | 0.0838 | 0.5342 | 1.3915181e-07 | 1000 |
| 0.9782 | 0.0838 | 0.5627 | 0.9852 | 0.0841 | 0.5330 | 1.3913092e-07 | 1001 |
| 0.9780 | 0.0837 | 0.5640 | 0.9861 | 0.0836 | 0.5355 | 1.3911001e-07 | 1002 |
| 0.9797 | 0.0837 | 0.5629 | 0.9844 | 0.0837 | 0.5297 | 1.3908908e-07 | 1003 |
| 0.9784 | 0.0836 | 0.5645 | 0.9847 | 0.0836 | 0.5354 | 1.3906813e-07 | 1004 |
| 0.9774 | 0.0838 | 0.5631 | 0.9842 | 0.0840 | 0.5356 | 1.3904717e-07 | 1005 |
| 0.9783 | 0.0836 | 0.5650 | 0.9886 | 0.0840 | 0.5393 | 1.390262e-07 | 1006 |
| 0.9775 | 0.0838 | 0.5618 | 0.9865 | 0.0843 | 0.5375 | 1.390052e-07 | 1007 |
| 0.9778 | 0.0837 | 0.5649 | 0.9910 | 0.0837 | 0.5320 | 1.3898418e-07 | 1008 |
| 0.9772 | 0.0837 | 0.5639 | 0.9905 | 0.0840 | 0.5375 | 1.3896314e-07 | 1009 |
| 0.9774 | 0.0836 | 0.5648 | 0.9838 | 0.0838 | 0.5381 | 1.389421e-07 | 1010 |
| 0.9785 | 0.0837 | 0.5661 | 0.9870 | 0.0835 | 0.5362 | 1.3892104e-07 | 1011 |
| 0.9772 | 0.0837 | 0.5637 | 0.9897 | 0.0835 | 0.5324 | 1.3889995e-07 | 1012 |
| 0.9772 | 0.0838 | 0.5650 | 0.9840 | 0.0844 | 0.5350 | 1.3887885e-07 | 1013 |
| 0.9771 | 0.0838 | 0.5647 | 0.9887 | 0.0837 | 0.5351 | 1.3885773e-07 | 1014 |
| 0.9773 | 0.0837 | 0.5659 | 0.9860 | 0.0841 | 0.5376 | 1.388366e-07 | 1015 |
| 0.9776 | 0.0838 | 0.5644 | 0.9851 | 0.0840 | 0.5336 | 1.3881544e-07 | 1016 |
| 0.9782 | 0.0837 | 0.5647 | 0.9858 | 0.0838 | 0.5365 | 1.3879426e-07 | 1017 |
| 0.9772 | 0.0837 | 0.5664 | 0.9832 | 0.0838 | 0.5351 | 1.3877307e-07 | 1018 |
| 0.9762 | 0.0838 | 0.5657 | 0.9856 | 0.0835 | 0.5398 | 1.3875187e-07 | 1019 |
| 0.9778 | 0.0837 | 0.5653 | 0.9833 | 0.0839 | 0.5407 | 1.3873064e-07 | 1020 |
| 0.9773 | 0.0838 | 0.5651 | 0.9898 | 0.0839 | 0.5360 | 1.387094e-07 | 1021 |
| 0.9765 | 0.0837 | 0.5672 | 0.9860 | 0.0834 | 0.5282 | 1.3868814e-07 | 1022 |
| 0.9774 | 0.0837 | 0.5654 | 0.9846 | 0.0841 | 0.5332 | 1.3866686e-07 | 1023 |
| 0.9774 | 0.0838 | 0.5659 | 0.9847 | 0.0844 | 0.5340 | 1.3864556e-07 | 1024 |
| 0.9773 | 0.0837 | 0.5669 | 0.9924 | 0.0837 | 0.5370 | 1.3862424e-07 | 1025 |
| 0.9770 | 0.0838 | 0.5648 | 0.9877 | 0.0833 | 0.5362 | 1.3860291e-07 | 1026 |
| 0.9771 | 0.0838 | 0.5669 | 0.9888 | 0.0839 | 0.5297 | 1.3858157e-07 | 1027 |
| 0.9762 | 0.0837 | 0.5678 | 0.9867 | 0.0837 | 0.5298 | 1.3856021e-07 | 1028 |
| 0.9768 | 0.0838 | 0.5663 | 0.9846 | 0.0837 | 0.5376 | 1.3853882e-07 | 1029 |
| 0.9763 | 0.0838 | 0.5653 | 0.9850 | 0.0838 | 0.5318 | 1.3851742e-07 | 1030 |
| 0.9765 | 0.0838 | 0.5663 | 0.9852 | 0.0841 | 0.5323 | 1.38496e-07 | 1031 |
| 0.9769 | 0.0837 | 0.5675 | 0.9875 | 0.0837 | 0.5343 | 1.3847458e-07 | 1032 |
| 0.9768 | 0.0837 | 0.5682 | 0.9864 | 0.0837 | 0.5331 | 1.3845312e-07 | 1033 |
| 0.9759 | 0.0839 | 0.5679 | 0.9853 | 0.0839 | 0.5438 | 1.3843164e-07 | 1034 |
| 0.9764 | 0.0838 | 0.5676 | 0.9890 | 0.0836 | 0.5287 | 1.3841016e-07 | 1035 |
| 0.9748 | 0.0838 | 0.5674 | 0.9918 | 0.0837 | 0.5350 | 1.3838866e-07 | 1036 |
| 0.9762 | 0.0837 | 0.5669 | 0.9912 | 0.0836 | 0.5330 | 1.3836713e-07 | 1037 |
| 0.9758 | 0.0838 | 0.5668 | 0.9856 | 0.0836 | 0.5373 | 1.3834558e-07 | 1038 |
| 0.9756 | 0.0838 | 0.5681 | 0.9859 | 0.0840 | 0.5327 | 1.3832403e-07 | 1039 |
| 0.9767 | 0.0835 | 0.5683 | 0.9885 | 0.0841 | 0.5380 | 1.3830245e-07 | 1040 |
| 0.9763 | 0.0837 | 0.5676 | 0.9879 | 0.0836 | 0.5388 | 1.3828087e-07 | 1041 |
| 0.9756 | 0.0837 | 0.5674 | 0.9822 | 0.0846 | 0.5308 | 1.3825925e-07 | 1042 |
| 0.9765 | 0.0838 | 0.5675 | 0.9873 | 0.0837 | 0.5351 | 1.3823762e-07 | 1043 |
| 0.9771 | 0.0837 | 0.5666 | 0.9866 | 0.0837 | 0.5306 | 1.3821598e-07 | 1044 |
| 0.9755 | 0.0837 | 0.5678 | 0.9865 | 0.0836 | 0.5379 | 1.3819432e-07 | 1045 |
| 0.9760 | 0.0838 | 0.5677 | 0.9877 | 0.0838 | 0.5322 | 1.3817264e-07 | 1046 |
| 0.9755 | 0.0838 | 0.5680 | 0.9882 | 0.0836 | 0.5350 | 1.3815094e-07 | 1047 |
| 0.9754 | 0.0837 | 0.5684 | 0.9856 | 0.0842 | 0.5364 | 1.3812922e-07 | 1048 |
| 0.9752 | 0.0838 | 0.5689 | 0.9875 | 0.0841 | 0.5287 | 1.381075e-07 | 1049 |
| 0.9758 | 0.0838 | 0.5672 | 0.9885 | 0.0841 | 0.5328 | 1.3808575e-07 | 1050 |
| 0.9740 | 0.0837 | 0.5682 | 0.9828 | 0.0839 | 0.5337 | 1.3806398e-07 | 1051 |
| 0.9758 | 0.0838 | 0.5692 | 0.9825 | 0.0844 | 0.5367 | 1.380422e-07 | 1052 |
| 0.9762 | 0.0837 | 0.5682 | 0.9888 | 0.0834 | 0.5391 | 1.380204e-07 | 1053 |
| 0.9748 | 0.0837 | 0.5699 | 0.9895 | 0.0837 | 0.5373 | 1.3799858e-07 | 1054 |
| 0.9753 | 0.0837 | 0.5687 | 0.9923 | 0.0832 | 0.5332 | 1.3797676e-07 | 1055 |
| 0.9736 | 0.0837 | 0.5709 | 0.9911 | 0.0841 | 0.5284 | 1.379549e-07 | 1056 |
| 0.9754 | 0.0837 | 0.5681 | 0.9881 | 0.0836 | 0.5377 | 1.3793303e-07 | 1057 |
| 0.9749 | 0.0838 | 0.5688 | 0.9881 | 0.0837 | 0.5380 | 1.3791114e-07 | 1058 |
| 0.9749 | 0.0837 | 0.5705 | 0.9869 | 0.0837 | 0.5333 | 1.3788924e-07 | 1059 |
| 0.9759 | 0.0836 | 0.5676 | 0.9894 | 0.0838 | 0.5352 | 1.3786732e-07 | 1060 |
| 0.9758 | 0.0836 | 0.5691 | 0.9873 | 0.0838 | 0.5326 | 1.3784538e-07 | 1061 |
| 0.9736 | 0.0838 | 0.5704 | 0.9910 | 0.0839 | 0.5349 | 1.3782342e-07 | 1062 |
| 0.9741 | 0.0836 | 0.5696 | 0.9881 | 0.0840 | 0.5323 | 1.3780145e-07 | 1063 |
| 0.9754 | 0.0836 | 0.5684 | 0.9855 | 0.0837 | 0.5420 | 1.3777947e-07 | 1064 |
| 0.9755 | 0.0838 | 0.5704 | 0.9842 | 0.0839 | 0.5367 | 1.3775745e-07 | 1065 |
| 0.9741 | 0.0837 | 0.5696 | 0.9840 | 0.0838 | 0.5318 | 1.3773543e-07 | 1066 |
| 0.9748 | 0.0837 | 0.5692 | 0.9810 | 0.0844 | 0.5361 | 1.3771339e-07 | 1067 |
| 0.9752 | 0.0837 | 0.5699 | 0.9807 | 0.0842 | 0.5345 | 1.3769133e-07 | 1068 |
| 0.9733 | 0.0838 | 0.5696 | 0.9885 | 0.0835 | 0.5332 | 1.3766926e-07 | 1069 |
| 0.9748 | 0.0838 | 0.5722 | 0.9891 | 0.0841 | 0.5350 | 1.3764716e-07 | 1070 |
| 0.9740 | 0.0837 | 0.5697 | 0.9879 | 0.0833 | 0.5303 | 1.3762505e-07 | 1071 |
| 0.9746 | 0.0838 | 0.5706 | 0.9834 | 0.0837 | 0.5395 | 1.3760292e-07 | 1072 |
| 0.9737 | 0.0838 | 0.5717 | 0.9874 | 0.0840 | 0.5341 | 1.3758078e-07 | 1073 |
| 0.9750 | 0.0837 | 0.5702 | 0.9904 | 0.0835 | 0.5325 | 1.3755863e-07 | 1074 |
| 0.9741 | 0.0837 | 0.5722 | 0.9879 | 0.0839 | 0.5319 | 1.3753645e-07 | 1075 |
| 0.9744 | 0.0837 | 0.5710 | 0.9885 | 0.0840 | 0.5280 | 1.3751425e-07 | 1076 |
| 0.9733 | 0.0836 | 0.5710 | 0.9892 | 0.0840 | 0.5344 | 1.3749204e-07 | 1077 |
| 0.9739 | 0.0838 | 0.5699 | 0.9880 | 0.0839 | 0.5337 | 1.3746981e-07 | 1078 |
| 0.9742 | 0.0838 | 0.5694 | 0.9876 | 0.0838 | 0.5337 | 1.3744757e-07 | 1079 |
| 0.9729 | 0.0838 | 0.5716 | 0.9824 | 0.0839 | 0.5364 | 1.374253e-07 | 1080 |
| 0.9741 | 0.0838 | 0.5719 | 0.9863 | 0.0843 | 0.5392 | 1.3740302e-07 | 1081 |
| 0.9745 | 0.0838 | 0.5712 | 0.9868 | 0.0842 | 0.5357 | 1.3738072e-07 | 1082 |
| 0.9728 | 0.0837 | 0.5725 | 0.9929 | 0.0839 | 0.5316 | 1.3735841e-07 | 1083 |
| 0.9736 | 0.0837 | 0.5701 | 0.9877 | 0.0836 | 0.5371 | 1.3733609e-07 | 1084 |
| 0.9751 | 0.0837 | 0.5714 | 0.9854 | 0.0838 | 0.5349 | 1.3731373e-07 | 1085 |
| 0.9737 | 0.0837 | 0.5714 | 0.9902 | 0.0846 | 0.5298 | 1.3729137e-07 | 1086 |
| 0.9736 | 0.0837 | 0.5732 | 0.9880 | 0.0837 | 0.5318 | 1.3726898e-07 | 1087 |
| 0.9737 | 0.0838 | 0.5726 | 0.9864 | 0.0837 | 0.5387 | 1.3724659e-07 | 1088 |
| 0.9738 | 0.0837 | 0.5729 | 0.9849 | 0.0840 | 0.5327 | 1.3722418e-07 | 1089 |
| 0.9738 | 0.0838 | 0.5722 | 0.9903 | 0.0839 | 0.5315 | 1.3720174e-07 | 1090 |
| 0.9728 | 0.0838 | 0.5714 | 0.9865 | 0.0839 | 0.5332 | 1.3717928e-07 | 1091 |
| 0.9739 | 0.0839 | 0.5716 | 0.9878 | 0.0837 | 0.5344 | 1.3715682e-07 | 1092 |
| 0.9738 | 0.0838 | 0.5710 | 0.9869 | 0.0841 | 0.5342 | 1.3713434e-07 | 1093 |
| 0.9718 | 0.0837 | 0.5739 | 0.9877 | 0.0839 | 0.5327 | 1.3711184e-07 | 1094 |
| 0.9737 | 0.0838 | 0.5717 | 0.9894 | 0.0839 | 0.5331 | 1.3708933e-07 | 1095 |
| 0.9729 | 0.0837 | 0.5712 | 0.9837 | 0.0841 | 0.5353 | 1.3706679e-07 | 1096 |
| 0.9723 | 0.0838 | 0.5745 | 0.9821 | 0.0839 | 0.5399 | 1.3704424e-07 | 1097 |
| 0.9732 | 0.0838 | 0.5743 | 0.9855 | 0.0834 | 0.5316 | 1.3702167e-07 | 1098 |
| 0.9722 | 0.0837 | 0.5737 | 0.9894 | 0.0841 | 0.5270 | 1.3699909e-07 | 1099 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/EManuals_RoBERTa_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- tinyYhorm/autotrain-data-1-xlmr-rs
co2_eq_emissions:
emissions: 1.5970322869917484
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 53879126771
- CO2 Emissions (in grams): 1.5970
## Validation Metrics
- Loss: 0.170
- Accuracy: 0.959
- Precision: 0.957
- Recall: 0.959
- F1: 0.958
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/tinyYhorm/autotrain-1-xlmr-rs-53879126771
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("tinyYhorm/autotrain-1-xlmr-rs-53879126771", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("tinyYhorm/autotrain-1-xlmr-rs-53879126771", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
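The snippet above returns raw token-level logits. As a minimal sketch continuing from it (assuming the fine-tuned config carries the usual `id2label` mapping, which is standard for AutoTrain token-classification models), you can map the logits back to per-token entity tags:
```
# pick the highest-scoring label for each token and look up its name
predictions = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[p.item()] for p in predictions]

for token, label in zip(tokens, labels):
    print(token, label)
```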
|
AnonymousSub/SDR_HF_model_base
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- tinyYhorm/autotrain-data-2-xlmr-r
co2_eq_emissions:
emissions: 1.191588461069133
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 53880126783
- CO2 Emissions (in grams): 1.1916
## Validation Metrics
- Loss: 0.186
- Accuracy: 0.957
- Precision: 0.872
- Recall: 0.877
- F1: 0.875
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/tinyYhorm/autotrain-2-xlmr-r-53880126783
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("tinyYhorm/autotrain-2-xlmr-r-53880126783", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("tinyYhorm/autotrain-2-xlmr-r-53880126783", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
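Alternatively — a minimal sketch, assuming you have access to the repo — the higher-level `pipeline` API handles tokenization and label decoding in one call:
```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tinyYhorm/autotrain-2-xlmr-r-53880126783",
    use_auth_token=True,
)
# returns a list of detected entities with their labels, scores and character offsets
print(ner("I love AutoTrain"))
```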
|
AnonymousSub/SR_EManuals-BERT
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: creativeml-openrail-m
tags:
- legal
- art
---
# Bod obnovy – FULL MOVIE ONLINE FREE WITH CZ/SK DUBBING
Watch Bod obnovy (2023) – Full Movie with CZ Dubbing in HD Quality | Watch Movies Online, Bod obnovy (2023) – Movie Subtitles Online with CZ Dubbing, Bod obnovy (2023) – Watch Movies Online with CZ Dubbing in HD Quality, [Bombuj-HD] Bod obnovy (2023) Movie with CZ Dubbing [Online], [Watch-HD] Bod obnovy (2023) Movie Online [CZ Dubbing]
[](https://fliktv.net/cs/movie/1071815)
last updated: April 29, 2023
➤ ► 🌍📺📱👉 Bod obnovy Online CZ : [https://fliktv.net/cs/movie/1071815](https://fliktv.net/cs/movie/1071815)
➤ ► 🌍📺📱👉 Bod obnovy Cz Dabing : [https://fliktv.net/cs/movie/1071815](https://fliktv.net/cs/movie/1071815)
| 4K UHD | 1080P FULL HD | 720P HD | MKV | MP4 | DVD | Blu-Ray |
Keywords:
Online Bod obnovy (film, 2023) Celý Film Online Cz A Zdarma Bod obnovy (2023) Filmy ONLINE CZ-SK Dabing HD Sledujte!!! Bod obnovy (2023) [Filmy] Celý Film Online CZ a Zdarma ©[Sledujte] Bod obnovy (2023) (Filmy) Celý Film Online CZ a Zdarma Sledujte]▷ Bod obnovy (2023) Celý Film Online " Sleduj!~ Bod obnovy Online Cz !Celý Film a Zdarma"
Bod obnovy (2023) Filmy ONLINE CZ-SK Dabing HD Sledujte!!! Bod obnovy (2023) [Filmy] Celý Film Online CZ a Zdarma ©Sledujte Bod obnovy (2023) (Filmy) Celý Film Online CZ a Zdarma Sledujte]▷ Bod obnovy (2023) Celý Film Online "Sleduj! ~ Bod obnovy (2023) ~ !Celý Film" Online ™FILMY]~ Bod obnovy (2023) Celý film Slovenské Titulky Audio Sledujte Bod obnovy (2023) Celý Film Online a Zdarma {CZ-SK} Dabing i Titulky [[Sleduj~HD]» ” Bod obnovy (2023) Celý Film Online a Zdarma {CZ — SK} Dabing i Titulky] CZ-SK Filmy Bod obnovy (2023) (Český!) ke shlédnutí CZ Filmy Online Videa Bod obnovy (2023) Celý Film Online Cz A Zdarma Sledujte]] Bod obnovy (2023) Celý Film Online Cz A Zdarma Sledujte]▷ Bod obnovy (2023) Celý film online zdarma Sledujte↑↑〙» Bod obnovy (2023) Film Online (CZ-SK) Zdarma Dabing HD Bod obnovy (2023) Celý Film 2021, Bod obnovy (2023) Celý Film 2021, Bod obnovy (2023) Filmové Novinky, Bod obnovy (2023) celý film Český Dokumentární, Bod obnovy (2023) Filmové premiéry, Bod obnovy (2023) celý film Česka cz dabing, Bod obnovy (2023) zkouknito, Bod obnovy (2023) sleduj filmy, Bod obnovy (2023) online cz titulky, Bod obnovy (2023) Program filmy, Bod obnovy (2023) CZ HD Film o filmu, Bod obnovy (2023) CZ dabing, Bod obnovy (2023) premiéra, Bod obnovy (2023) online cz, Bod obnovy (2023) online cz dabing, Bod obnovy (2023) Zadarmo, Bod obnovy (2023) Celý Film, Bod obnovy (2023) Titulky, Bod obnovy (2023) nový film, Bod obnovy (2023) DVD filmy, Bod obnovy (2023) Blu-ray filmy, Bod obnovy (2023) 3D filmy, Bod obnovy (2023) online bombuj, Bod obnovy (2023) online cely film CZ, Bod obnovy (2023) online ke shlednuti, Bod obnovy (2023) cz dabing online ke shlednuti, Bod obnovy (2023) online, Bod obnovy (2023) online film cz, Bod obnovy (2023) Bombuj, Bod obnovy (2023) bombuj cz, Bod obnovy (2023) online ke shlédnutí, Bod obnovy (2023) celý film Cesky, Bod obnovy (2023) celý film zdarma ke shlédnutí, Bod obnovy (2023) celý film cz dabing, Bod obnovy (2023) zkouknito, Bod obnovy (2023) sleduj filmy, Bod obnovy (2023) online cz titulky, Bod obnovy (2023) celý film,
❏ STREAMING MEDIA ❏
Streaming media is multimedia that is continuously received by and presented to an end user while being delivered by a provider. The verb "to stream" refers to the process of delivering or obtaining media in this manner. [Clarification needed] Streaming refers to the delivery method of the medium, not to the medium itself. The distinction between delivery method and the distributed medium applies particularly to telecommunications networks, as most delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). Streaming content over the Internet comes with challenges. For example, users whose Internet connection lacks sufficient bandwidth may experience freezing, lag, or slow buffering of the content, and users lacking compatible hardware or software systems may be unable to stream some content.
Live streaming is the delivery of Internet content in real time, much as live television broadcasts content over the airwaves via a television signal. Online live streaming requires some form of source media (e.g. a video camera, an audio interface, screen-capture software), an encoder to digitize the content, a media publisher, and a content delivery network to distribute and deliver the content. Live streams do not need to be recorded at the point of origin, although they frequently are. Streaming is an alternative to file downloading, a process in which the end user obtains the entire content file before watching or listening to it. Streaming allows the end user to use their media player to start playing digital video or digital audio content before the entire file has been transferred. The term "streaming media" can refer to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered "streaming text".
❏ COPYRIGHT CONTENT ❏
Copyright is a type of intellectual property that gives its owner the exclusive right to make copies of a creative work, usually for a limited time. The creative work may be literary, artistic, educational, or musical. Copyright protection is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. Copyright is limited by public-interest considerations, such as the United States fair-use doctrine.
Some jurisdictions require that copyrighted works be "fixed" in a tangible form. Copyright is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. [citation needed] These rights often include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution.
Copyright may be granted under public law, in which case it is considered a "territorial right". This means that copyright granted by the law of a given country does not extend beyond the territory of that particular jurisdiction. Copyrights of this type vary by country; many countries, and sometimes large groups of countries, have agreements with other countries on the procedures to follow when a work "crosses" national borders or when national laws are inconsistent. Public copyright typically expires 50 to 100 years after the creator's death, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright; others recognize copyright in any completed work without formal registration.
Copyright is generally considered essential for promoting cultural diversity and creativity. Contrary to popular belief, however, Parc argues that imitation and copying do not restrict creativity or cultural diversity but in fact further support them. This argument has been supported by many examples, such as Millet and Van Gogh, Picasso, Manet and Monet, etc.
❏ GOODS SERVICE ❏
Credit (from the Latin credit, "(he/she) believes") is trust that allows one party to provide money or resources to another party, where the second party does not repay the first party immediately (thereby creating a debt) but promises to repay or return those resources (or other materials of equal value) at a later date. In other words, credit is a way of making reciprocity formal, legally enforceable, and extensible to a large group of unrelated individuals.
The resources transferred may be financial in nature (e.g. granting a loan) or may represent goods or services (e.g. consumer credit). Credit encompasses any form of deferred payment. Credit is extended by a creditor, also known as a lender, to a debtor, also known as a borrower.
|
AnonymousSub/SR_cline
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
4-bit quantized weights (q4_0) of StableLM's 7B Tuned Alpha model for use with gptneox.cpp (fork of llama.cpp) https://github.com/byroneverson/gptneox.cpp
The weights bin file should be placed in the models/pythia subdirectory of gptneox.cpp in order to use the chat scripts.
This is a ggjt version of the model: the tensors are 32-byte aligned to allow mmap loading. It will load just fine in gptneox.cpp or in my fork of ggml.
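As a purely illustrative sketch (the weights file name below is hypothetical — use whatever this repo's bin file is actually called), setting things up from the root of a gptneox.cpp checkout looks roughly like:
```
# from the root of your gptneox.cpp checkout
mkdir -p models/pythia
cp /path/to/stablelm-tuned-alpha-7b-ggjt-q4_0.bin models/pythia/  # hypothetical file name
# the repo's chat scripts expect the weights under models/pythia
```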
|
AnonymousSub/SR_consert
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -2.70 +/- 0.46
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: apache-2.0
datasets:
- LinhDuong/chatdoctor-200k
language:
- en
pipeline_tag: text-generation
tags:
- medical
- doctor
- chat
- qa
- question-answering
thumbnail: https://huggingface.co/Narrativaai/BioGPT-Large-finetuned-chatdoctor/resolve/main/cdl.png
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/Narrativaai/BioGPT-Large-finetuned-chatdoctor/resolve/main/cdl.png" alt="chat doctor bioGPT logo"">
</div>
# BioGPT (Large) 🧬 fine-tuned on ChatDoctor 🩺 for QA
[Microsoft's BioGPT Large](https://huggingface.co/microsoft/BioGPT-Large) fine-tuned on ChatDoctor dataset for Question Answering.
## Intended Use
This is just a research model and must **NOT** be used outside of this scope.
## Limitations
TBA
## Model
[Microsoft's BioGPT Large](https://huggingface.co/microsoft/BioGPT-Large):
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
## Dataset
The ChatDoctor-200K dataset is collected from this paper: https://arxiv.org/pdf/2303.14070.pdf
The dataset is composed of:
- 100k real conversations between patients and doctors from HealthCareMagic.com [HealthCareMagic-100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing).
- 10k real conversations between patients and doctors from icliniq.com [icliniq-10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing).
- 5k generated conversations between patients and physicians from ChatGPT [GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing) and [disease database](https://github.com/Kent0n-Li/ChatDoctor/blob/main/format_dataset.csv)
## Usage
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_id = "Narrativaai/BioGPT-Large-finetuned-chatdoctor"
tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")  # move the model to GPU; the inputs below are placed on "cuda"
def answer_question(
prompt,
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=2,
**kwargs,
):
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to("cuda")
attention_mask = inputs["attention_mask"].to("cuda")
generation_config = GenerationConfig(
temperature=temperature,
top_p=top_p,
top_k=top_k,
num_beams=num_beams,
**kwargs,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=512,
eos_token_id=tokenizer.eos_token_id
)
s = generation_output.sequences[0]
output = tokenizer.decode(s, skip_special_tokens=True)
return output.split(" Response:")[1]
example_prompt = """
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
If you are a doctor, please answer the medical questions based on the patient's description.
### Input:
Hi i have sore lumps under the skin on my legs. they started on my left ankle and are approx 1 - 2cm diameter and are spreading up onto my thies. I am eating panadol night and anti allergy pills (Atarax). I have had this for about two weeks now. Please advise.
### Response:
"""
print(answer_question(example_prompt))
```
## Citation
```
@misc {narrativa_2023,
author = { {Narrativa} },
title = { BioGPT-Large-finetuned-chatdoctor (Revision 13764c0) },
year = 2023,
url = { https://huggingface.co/Narrativaai/BioGPT-Large-finetuned-chatdoctor },
doi = { 10.57967/hf/0601 },
publisher = { Hugging Face }
}
```
|
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: modelBeto6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelBeto6
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1808
- Precision: 0.6219
- Recall: 0.6545
- F1: 0.6378
- Accuracy: 0.9737
## Model description
More information needed
## Intended uses & limitations
More information needed
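The card does not state the task, but the per-entity precision/recall/F1 together with token-level accuracy reported in this card are typical of token classification (e.g. NER). A minimal usage sketch under that assumption; the repo id is a placeholder, not a value confirmed by this card:

```python
from transformers import pipeline

# "<user>/modelBeto6" is a hypothetical repo id for wherever this checkpoint is hosted
ner = pipeline("token-classification", model="<user>/modelBeto6", aggregation_strategy="simple")
print(ner("El Museo del Prado está en Madrid."))
```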
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 29 | 0.2309 | 0.0 | 0.0 | 0.0 | 0.9440 |
| No log | 2.0 | 58 | 0.2034 | 0.0 | 0.0 | 0.0 | 0.9440 |
| No log | 3.0 | 87 | 0.1685 | 0.1429 | 0.0157 | 0.0283 | 0.9476 |
| No log | 4.0 | 116 | 0.1425 | 0.3034 | 0.1414 | 0.1929 | 0.9546 |
| No log | 5.0 | 145 | 0.1285 | 0.3802 | 0.2408 | 0.2949 | 0.9589 |
| No log | 6.0 | 174 | 0.1283 | 0.5922 | 0.3194 | 0.4150 | 0.9696 |
| No log | 7.0 | 203 | 0.1337 | 0.5630 | 0.3979 | 0.4663 | 0.9715 |
| No log | 8.0 | 232 | 0.1184 | 0.5505 | 0.6283 | 0.5868 | 0.9686 |
| No log | 9.0 | 261 | 0.1308 | 0.5882 | 0.5759 | 0.5820 | 0.9729 |
| No log | 10.0 | 290 | 0.1329 | 0.5989 | 0.5550 | 0.5761 | 0.9729 |
| No log | 11.0 | 319 | 0.1549 | 0.6781 | 0.5183 | 0.5875 | 0.9742 |
| No log | 12.0 | 348 | 0.1578 | 0.6221 | 0.5602 | 0.5895 | 0.9732 |
| No log | 13.0 | 377 | 0.1505 | 0.6117 | 0.6021 | 0.6069 | 0.9716 |
| No log | 14.0 | 406 | 0.1671 | 0.6412 | 0.5707 | 0.6039 | 0.9729 |
| No log | 15.0 | 435 | 0.1684 | 0.5902 | 0.5654 | 0.5775 | 0.9710 |
| No log | 16.0 | 464 | 0.1707 | 0.6216 | 0.6021 | 0.6117 | 0.9727 |
| No log | 17.0 | 493 | 0.1715 | 0.6453 | 0.5812 | 0.6116 | 0.9737 |
| 0.0738 | 18.0 | 522 | 0.1729 | 0.5734 | 0.6545 | 0.6112 | 0.9701 |
| 0.0738 | 19.0 | 551 | 0.1815 | 0.5990 | 0.6021 | 0.6005 | 0.9716 |
| 0.0738 | 20.0 | 580 | 0.1746 | 0.6354 | 0.6387 | 0.6371 | 0.9732 |
| 0.0738 | 21.0 | 609 | 0.1654 | 0.6686 | 0.5916 | 0.6278 | 0.9749 |
| 0.0738 | 22.0 | 638 | 0.1678 | 0.6359 | 0.6492 | 0.6425 | 0.9741 |
| 0.0738 | 23.0 | 667 | 0.1704 | 0.6218 | 0.6283 | 0.625 | 0.9742 |
| 0.0738 | 24.0 | 696 | 0.1746 | 0.6685 | 0.6440 | 0.6560 | 0.9747 |
| 0.0738 | 25.0 | 725 | 0.1772 | 0.6224 | 0.6387 | 0.6305 | 0.9739 |
| 0.0738 | 26.0 | 754 | 0.1792 | 0.6484 | 0.6178 | 0.6327 | 0.9741 |
| 0.0738 | 27.0 | 783 | 0.1788 | 0.6383 | 0.6283 | 0.6332 | 0.9741 |
| 0.0738 | 28.0 | 812 | 0.1802 | 0.6281 | 0.6545 | 0.6410 | 0.9741 |
| 0.0738 | 29.0 | 841 | 0.1803 | 0.6443 | 0.6545 | 0.6494 | 0.9747 |
| 0.0738 | 30.0 | 870 | 0.1804 | 0.6495 | 0.6597 | 0.6545 | 0.9749 |
| 0.0738 | 31.0 | 899 | 0.1805 | 0.6443 | 0.6545 | 0.6494 | 0.9746 |
| 0.0738 | 32.0 | 928 | 0.1808 | 0.6219 | 0.6545 | 0.6378 | 0.9737 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 135.05 +/- 105.56
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
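Until then, a minimal loading sketch in the spirit of the course material; the repo id and checkpoint filename are placeholders, not values confirmed by this card:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# placeholders: use this repo's id and the .zip filename listed in its Files tab
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```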
|
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: apache-2.0
datasets:
- lambdalabs/pokemon-blip-captions
language:
- en
---
This is the highly optimized version of the [Stable Diffusion model for pokemon generation](https://huggingface.co/svjack/Stable-Diffusion-Pokemon-en).
The model was optimized with a combination of two methods:
* Quantization-aware training from [NNCF](https://github.com/openvinotoolkit/nncf).
* A modification of the Token Merging method from [here](https://github.com/AlexKoff88/tomesd/tree/openvino).
To run the model use the following code:
```python
%pip install optimum[openvino,diffusers]
from optimum.intel.openvino import OVStableDiffusionPipeline
from diffusers import LMSDiscreteScheduler, DDPMScheduler
import torch
import random
import numpy as np
pipe = OVStableDiffusionPipeline.from_pretrained("OpenVINO/stable-diffusion-pokemons-tome-quantized-aggressive", compile=False)
pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
pipe.compile()
# Use original model to compare
# pipe = OVStableDiffusionPipeline.from_pretrained("svjack/Stable-Diffusion-Pokemon-en", export=True, compile=False)
prompt = "cartoon bird"
output = pipe(prompt, num_inference_steps=50, output_type="pil")
output.images[0].save("output.png")
```
|
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
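Since the base checkpoint is an extractive QA model (`deepset/roberta-base-squad2`), this fine-tune can presumably be used the same way. A minimal sketch; the repo id is a placeholder:

```python
from transformers import pipeline

# hypothetical repo id; point it at wherever this fine-tuned checkpoint is hosted
qa = pipeline("question-answering", model="<user>/roberta-finetuned-subjqa-movies_2")
result = qa(
    question="How was the acting?",
    context="The movie dragged in places, but the acting was superb and carried the film.",
)
print(result["answer"], result["score"])
```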
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "nyan-mix-nomal"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/nyan-mix-nomal)
Credits: [View credits](https://civitai.com/?query=model_search)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v3/dreambooth"
payload = json.dumps({
"key": "",
"model_id": "nyan-mix-nomal",
"prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | 2023-04-29T10:10:04Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.35 +/- 0.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
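Until then, a minimal loading sketch; the repo id and filename are placeholders, and if the repo also contains VecNormalize statistics they should be loaded to reproduce the reported reward:

```python
import gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense-v2 environment
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# placeholders: use this repo's id and the .zip filename listed in its Files tab
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```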
|
AnonymousSub/SciFive_pubmedqa_question_generation
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 7 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.92 +/- 2.99
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r prepsyched/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
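For example, an upload invocation typically looks like the following; the username is a placeholder and the episode cap is illustrative:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --max_num_episodes=10 --push_to_hub --hf_repository=<your_hf_username>/rl_course_vizdoom_health_gathering_supreme
```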
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
Access to model pvduy/vicuna-13b-v1.1-rm-formated is restricted and you are not in the authorized list. Visit https://huggingface.co/pvduy/vicuna-13b-v1.1-rm-formated to ask for access.
|
AnonymousSub/bert_snips
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.08 +/- 2.19
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jwright94/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
AnonymousSub/cline-emanuals-s10-SR
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.10 +/- 11.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AnonymousSub/cline-emanuals-techqa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: mit
datasets:
- cc100
language:
- en
pipeline_tag: text-generation
---
# GPT-2 Medium Multi-Exit
Pre-trained language model with identical parameters to [gpt2-medium](https://huggingface.co/gpt2-medium), but with additional language modeling heads ("exits") connected to different layers of the model.
These 12 additional heads (in layers 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24) were trained on the English portion of [CC-100](https://huggingface.co/datasets/cc100) while keeping the original pre-trained model parameters frozen.
The model can be used for the _Autocontrastive Decoding_ text generation approach described in [Gera et al. 2023](https://arxiv.org/abs/2305.01628), for _early-exiting_ approaches, or for other algorithms that consider the next-token predictions of different model layers.
## Usage
Harnessing the additional language modeling heads requires loading the model using the [auto-contrastive-generation library](https://github.com/IBM/auto-contrastive-generation) (`pip install autocontrastive-gen`).
In a nutshell, the user creates a `MultiExitConfiguration` that determines model behavior at training and inference, and then loads the model using the dedicated `AutoMultiExitModel` class. After that, the model can be used with the `transformers` API like any other model. See the [GitHub](https://github.com/IBM/auto-contrastive-generation) for detailed usage instructions.
For example, the code below initializes the model to use _Autocontrastive Decoding_, and then performs text generation in this chosen setting:
```python
from transformers import AutoTokenizer
from autocontrastive_gen.modeling.configuration import MultiExitConfiguration
from autocontrastive_gen.modeling.auto_model import AutoMultiExitModel
# initialize a pre-trained multi-exit model to use auto-contrast between layer 24 and layer 12
multi_exit_config = MultiExitConfiguration(use_original_head=False,
contrast_layer_indices=(24, 12))
model = AutoMultiExitModel.from_pretrained("IBM/gpt2-medium-multiexit", multi_exit_config=multi_exit_config)
# perform text generation as usual
tokenizer = AutoTokenizer.from_pretrained("IBM/gpt2-medium-multiexit")
prompt = tokenizer("humpty dumpty sat on", return_tensors='pt')
generated_ids = model.generate(**prompt, max_new_tokens=15)
print(tokenizer.batch_decode(generated_ids))
```
## Citation
Ariel Gera, Roni Friedman, Ofir Arviv, Chulaka Gunasekara, Benjamin Sznajder, Noam Slonim and Eyal Shnarch.
[The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers](https://arxiv.org/abs/2305.01628). ACL 2023.
```bibtex
@inproceedings{gera2023autocontrastive,
title={The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers},
author={Gera, Ariel and Friedman, Roni and Arviv, Ofir and Gunasekara, Chulaka and Sznajder, Benjamin and Slonim, Noam and Shnarch, Eyal},
booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
month={july},
address={Toronto, Canada},
year={2023}
}
```
|
AnonymousSub/cline-papers-roberta-0.585
|
[
"pytorch",
"roberta",
"transformers"
] | null |
{
"architectures": [
"LecbertForPreTraining"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Deep RL Course notebook
model = load_from_hub(repo_id="ange0102/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/cline-s10-SR
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Deep RL Course notebook
model = load_from_hub(repo_id="ange0102/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/cline
|
[
"pytorch",
"roberta",
"transformers"
] | null |
{
"architectures": [
"LecbertForPreTraining"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: dvesely/ppo-SnowballTargetTESTCOLAB
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
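To download the trained model files locally instead, the ML-Agents Hub integration can be used; a sketch (the local directory is arbitrary):
```
mlagents-load-from-hf --repo-id="dvesely/ppo-SnowballTargetTESTCOLAB" --local-dir="./downloads"
```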
|
AnonymousSub/consert-techqa
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: axi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper function defined in the Deep RL Course notebook
model = load_from_hub(repo_id="coldra1n/axi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
Tagger for [Automatic1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
---
Interrogate booru style tags for single or multiple image files using various models, such as DeepDanbooru.
[한국어를 사용하시나요? 여기에 한국어 설명서가 있습니다!](README.ko.md)
## Disclaimer
I didn't make any models, and most of the code was heavily borrowed from the [DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) and MrSmilingWolf's tagger.
## Installation
1. *Extensions* -> *Install from URL* -> Enter URL of this repository -> Press *Install* button
- or clone this repository under `extensions/`
```sh
$ git clone https://github.com/toriato/stable-diffusion-webui-wd14-tagger.git extensions/tagger
```
1. *(optional)* Add interrogate model
- #### [*Waifu Diffusion 1.4 Tagger by MrSmilingWolf*](docs/what-is-wd14-tagger.md)
Downloads automatically from the [HuggingFace repository](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger) the first time you run it.
- #### *DeepDanbooru*
1. Various model files can be found below.
- [DeepDanbooru models](https://github.com/KichangKim/DeepDanbooru/releases)
- [e621 model by 🐾Zack🐾#1984](https://discord.gg/BDFpq9Yb7K)
*(link contains NSFW contents!)*
1. Move the project folder containing the model and config to `models/deepdanbooru`
1. The file structure should look like:
```
models/
└╴deepdanbooru/
├╴deepdanbooru-v3-20211112-sgd-e28/
│ ├╴project.json
│ └╴...
│
├╴deepdanbooru-v4-20200814-sgd-e30/
│ ├╴project.json
│ └╴...
│
├╴e621-v3-20221117-sgd-e32/
│ ├╴project.json
│ └╴...
│
...
```
1. Start or restart the WebUI.
- or you can press the refresh button next to the *Interrogator* dropdown box.
- "You must close stable diffusion completely after installation and re-run it!"
## Model comparison
[Model comparison](docs/model-comparison.md)
## Screenshot

Artwork made by [hecattaart](https://vk.com/hecattaart?w=wall-89063929_3767)
## Copyright
Public domain, except borrowed parts (e.g. `dbimutils.py`)
|
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: gpl-2.0
datasets:
- kabachuha/atsiftu-dialogue
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- art
- writing
- dialogue
- script
- storytelling
- fantasy
---
This is a LoRA trained on AtS/IftU dialogue with the base of https://huggingface.co/TheBloke/wizardLM-7B-HF for text generation.
|
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: other
inference: false
---
# OpenAssistant LLaMA 30B SFT 7 GPTQ
This is a repo of GPTQ-format 4bit quantised models for [OpenAssistant's LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor).
It is the result of merging the XORs from the above repo with the original Llama 30B weights, and then quantising to 4bit GPU inference using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
This is epoch 7 of OpenAssistant's training of their Llama 30B model.
**Please note that these models will need 24GB VRAM or greater to use effectively**
## Repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML).
* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-HF).
## PROMPT TEMPLATE
This model requires the following prompt template:
```
<|prompter|> prompt goes here
<|assistant|>:
```
## CHOICE OF MODELS
Three sets of models are provided:
* Groupsize = None
* Should work reliably in 24GB VRAM
* Uses --act-order for the best possible inference quality given its lack of group_size.
* Groupsize = 1024
* Theoretically higher inference accuracy
* May OOM on long context lengths in 24GB VRAM
* Groupsize = 128
* Optimal setting for highest inference quality
* Will definitely need more than 24GB VRAM on longer context lengths (1000-1500+ tokens returned)
For the 128g and 1024g models, two versions are available:
* `compat.no-act-order.safetensor`
* Works with all versions of GPTQ-for-LLaMa, including the version in text-generation-webui one-click-installers
* `latest.act-order.safetensors`
* uses `--act-order` for higher inference quality
* requires more recent GPTQ-for-LLaMa code, therefore will not currently work with one-click-installers
## HOW TO CHOOSE YOUR MODEL
I have used branches to separate the models. This means you can clone the branch you want and not get model files you don't need.
If you have 24GB VRAM you are strongly recommended to use the file in `main`, with group_size = None. This is fully compatible, and won't OOM.
* Branch: **main** = groupsize None, `OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit.safetensors` file
* Branch: **1024-compat** = groupsize 1024, `compat.no-act-order.safetensors` file
* Branch: **1024-latest** = groupsize 1024, `latest.act-order.safetensors` file
* Branch: **128-compat** = groupsize 128, `compat.no-act-order.safetensors` file
* Branch: **128-latest** = groupsize 128, `latest.act-order.safetensors` file

## How to easily download and run the `main` (groupsize None) model in text-generation-webui
Open the text-generation-webui UI as normal.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `OpenAssistant-SFT-7-Llama-30B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = None`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
## Manual instructions for `text-generation-webui`
The `compat.no-act-order.safetensors` files can be loaded the same as any other GPTQ file, without requiring any updates to [oobaboogas text-generation-webui](https://github.com/oobabooga/text-generation-webui).
[Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
The `latest.act-order.safetensors` files were created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI.
If you want to use the act-order `safetensors` files and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
```
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
python server.py --model OpenAssistant-SFT-7-Llama-30B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
To update the CUDA branch of GPTQ-for-LLaMa, you can do the following. **This requires a C/C++ compiler and the CUDA toolkit installed!**
```
# Clone text-generation-webui, if you don't already have it
git clone https://github.com/oobabooga/text-generation-webui
# Make a repositories directory
mkdir text-generation-webui/repositories
cd text-generation-webui/repositories
# Clone the latest GPTQ-for-LLaMa code inside text-generation-webui
git clone -b cuda https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
pip uninstall quant-cuda # uninstall existing CUDA version
python setup_cuda.py install # install latest version
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
If you can't update GPTQ-for-LLaMa or don't want to, please use a `compat.no-act-order.safetensor` file.
# Original model card
```
llama-30b-sft-7:
dtype: fp16
log_dir: "llama_log_30b"
learning_rate: 1e-5
model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
#model_name: OpenAssistant/llama-30b-super-pretrain
output_dir: llama_model_30b
deepspeed_config: configs/zero3_config_sft.json
weight_decay: 0.0
residual_dropout: 0.0
max_length: 2048
use_flash_attention: true
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 12
per_device_train_batch_size: 2
per_device_eval_batch_size: 3
eval_steps: 101
save_steps: 485
num_train_epochs: 4
save_total_limit: 3
use_custom_sampler: true
sort_by_length: false
#save_strategy: steps
save_strategy: epoch
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
val_split: 0.05
- vicuna:
val_split: 0.05
max_val_set: 800
fraction: 1.0
- dolly15k:
val_split: 0.05
max_val_set: 300
- grade_school_math_instructions:
val_split: 0.05
- code_alpaca:
val_split: 0.05
max_val_set: 250
```
- **OASST dataset paper:** https://arxiv.org/abs/2304.07327
|