Dataset columns:

| Column | Type | Length (min–max) |
|---|---|---|
| `model_id` | string | 7–105 |
| `model_card` | string | 1–130k |
| `model_labels` | list | 2–80k |
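The rows below pair each Hub model repository with its card text and label list. As a minimal sketch only, this is how such a table could be read with 🤗 `datasets`, assuming it is hosted on the Hub; the repo id used here is a placeholder and is not given anywhere in this preview:

```python
# Hypothetical loading sketch: "username/model-cards" is a placeholder repo id,
# not the real identifier of this dataset.
from datasets import load_dataset

ds = load_dataset("username/model-cards", split="train")

row = ds[0]
print(row["model_id"])          # e.g. "natix-miner16v2/streetvision"
print(row["model_labels"])      # e.g. ["none", "roadwork"]
print(row["model_card"][:200])  # first characters of the auto-generated card
```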
natix-miner16v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
kalemlhub/sn72-roadwork-Ci4z8c
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
kalemlhub/sn72-roadwork-jRQ1L6
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
kalemlhub/sn72-roadwork-ahPKZM
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
kalemlhub/sn72-roadwork-8xaxrV
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
kalemlhub/sn72-roadwork-RwDpM8
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner17v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner18v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner19v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner20v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
johnmiceeee/model1
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner21v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner22v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner23v2/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner24/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner25/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner26/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner27/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner28/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner29/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner30/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner31/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
jarinschnierl/vit-base-food101
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-food101

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the food101 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0830
- Accuracy: 0.974

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3003        | 1.0   | 313  | 0.1222          | 0.969    |
| 0.223         | 2.0   | 626  | 0.0905          | 0.975    |
| 0.1914        | 3.0   | 939  | 0.0814          | 0.977    |
| 0.163         | 4.0   | 1252 | 0.0847          | 0.975    |
| 0.1676        | 5.0   | 1565 | 0.0830          | 0.974    |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
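The hyperparameter list above maps naturally onto `TrainingArguments`. The block below is a hedged reconstruction of that configuration, not the author's actual training script: `output_dir` and `eval_strategy` are assumptions, and dataset loading and preprocessing are omitted.

```python
# Hedged reconstruction of the reported hyperparameters as TrainingArguments.
# output_dir and eval_strategy are assumptions; data handling is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-food101",  # assumed output/checkpoint name
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",  # assumed: the card reports one evaluation per epoch
)
```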
[ "apple_pie", "baby_back_ribs", "baklava", "beef_carpaccio", "beef_tartare", "beet_salad", "beignets", "bibimbap", "bread_pudding", "breakfast_burrito", "bruschetta", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare", "waffles" ]
natix-miner32/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner33/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
JoGoCr/vit-base-SimpsonsVIT_III
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-SimpsonsVIT_III This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the simpsons dataset. It achieves the following results on the evaluation set: - Loss: 1.1666 - Accuracy: 0.7105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7106 | 1.0 | 1047 | 1.6790 | 0.6111 | | 1.3612 | 2.0 | 2094 | 1.3673 | 0.6775 | | 1.2199 | 3.0 | 3141 | 1.2461 | 0.7033 | | 1.0933 | 4.0 | 4188 | 1.1879 | 0.7100 | | 0.9845 | 5.0 | 5235 | 1.1653 | 0.7129 | ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
[ "abraham_grampa_simpson", "agnes_skinner", "apu_nahasapeemapetilon", "barney_gumble", "bart_simpson", "carl_carlson", "charles_montgomery_burns", "chief_wiggum", "cletus_spuckler", "comic_book_guy", "disco_stu", "edna_krabappel", "fat_tony", "gil", "groundskeeper_willie", "homer_simpson", "kent_brockman", "krusty_the_clown", "lenny_leonard", "lisa_simpson", "maggie_simpson", "marge_simpson", "martin_prince", "mayor_quimby", "milhouse_van_houten", "miss_hoover", "moe_szyslak", "ned_flanders", "nelson_muntz", "otto_mann", "patty_bouvier", "principal_skinner", "professor_john_frink", "rainier_wolfcastle", "ralph_wiggum", "selma_bouvier", "sideshow_bob", "sideshow_mel", "snake_jailbird", "troy_mcclure", "waylon_smithers" ]
natix-miner34/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
natix-miner35/streetvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
Deval1004/streetvision1
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
Jedrzej-Smok/2025-05-30_21-49-09
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-05-30_21-49-09 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.3554 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 256 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6915 | 1.0 | 1 | 0.6962 | 0.5417 | | 0.6895 | 2.0 | 2 | 0.6093 | 0.9583 | | 0.6067 | 3.0 | 3 | 0.5473 | 1.0 | | 0.5492 | 4.0 | 4 | 0.4959 | 1.0 | | 0.4956 | 5.0 | 5 | 0.4699 | 1.0 | | 0.4588 | 6.0 | 6 | 0.4226 | 1.0 | | 0.4175 | 7.0 | 7 | 0.3883 | 1.0 | | 0.3846 | 8.0 | 8 | 0.3718 | 1.0 | | 0.3713 | 9.0 | 9 | 0.3618 | 1.0 | | 0.3526 | 10.0 | 10 | 0.3554 | 1.0 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "healthy", "sick" ]
Flogoro/vit-base-maurice-fp-stanford-dogs
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-maurice-fp-stanford-dogs This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the maurice-fp/stanford-dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.6323 - Accuracy: 0.8416 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7839 | 1.0 | 1029 | 1.6492 | 0.7988 | | 0.765 | 2.0 | 2058 | 0.7655 | 0.8411 | | 0.6504 | 3.0 | 3087 | 0.6558 | 0.8426 | | 0.6054 | 4.0 | 4116 | 0.6601 | 0.8319 | | 0.6279 | 5.0 | 5145 | 0.6133 | 0.8435 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119" ]
Marc-Hagenbusch/vit-base-maurice-fp-stanford-dogs
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-maurice-fp-stanford-dogs This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the maurice-fp/stanford-dogs dataset. It achieves the following results on the evaluation set: - Loss: 0.6455 - Accuracy: 0.8328 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.8493 | 1.0 | 1029 | 1.6335 | 0.8081 | | 0.794 | 2.0 | 2058 | 0.8066 | 0.8319 | | 0.6688 | 3.0 | 3087 | 0.6532 | 0.8411 | | 0.6161 | 4.0 | 4116 | 0.6464 | 0.8426 | | 0.5938 | 5.0 | 5145 | 0.6708 | 0.8343 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119" ]
Jedrzej-Smok/2025-05-30_23-53-40
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-05-30_23-53-40 This model is a fine-tuned version of [google/deeplabv3_mobilenet_v2_1.0_513](https://huggingface.co/google/deeplabv3_mobilenet_v2_1.0_513) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.5218 - Accuracy: 0.7286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1046 | 1.0 | 2 | 0.7815 | 0.5 | | 0.0928 | 2.0 | 4 | 0.6803 | 0.5143 | | 0.1231 | 3.0 | 6 | 0.6336 | 0.6429 | | 0.0788 | 4.0 | 8 | 0.5967 | 0.6714 | | 0.0828 | 5.0 | 10 | 0.6340 | 0.5857 | | 0.0583 | 6.0 | 12 | 0.6024 | 0.6429 | | 0.0842 | 7.0 | 14 | 0.6859 | 0.5143 | | 0.0901 | 8.0 | 16 | 0.5748 | 0.6714 | | 0.0924 | 9.0 | 18 | 0.5445 | 0.7429 | | 0.0851 | 10.0 | 20 | 0.5218 | 0.7286 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "healthy", "sick" ]
Flogoro/vit-base-caltech-ucsd-birds-200-2011
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-caltech-ucsd-birds-200-2011 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the bentrevett/caltech-ucsd-birds-200-2011 dataset. It achieves the following results on the evaluation set: - Loss: 1.1598 - Accuracy: 0.7371 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 5.0041 | 1.0 | 590 | 4.7408 | 0.0975 | | 3.5745 | 2.0 | 1180 | 3.2651 | 0.5165 | | 2.3496 | 3.0 | 1770 | 2.2666 | 0.6327 | | 1.8024 | 4.0 | 2360 | 1.7974 | 0.6718 | | 1.4613 | 5.0 | 2950 | 1.4765 | 0.7269 | | 1.2414 | 6.0 | 3540 | 1.3719 | 0.7116 | | 1.222 | 7.0 | 4130 | 1.2804 | 0.7260 | | 1.0678 | 8.0 | 4720 | 1.2496 | 0.7243 | | 1.081 | 9.0 | 5310 | 1.1418 | 0.7515 | | 0.9859 | 10.0 | 5900 | 1.0439 | 0.7744 | | 1.0177 | 11.0 | 6490 | 1.1350 | 0.7498 | | 1.006 | 12.0 | 7080 | 1.0877 | 0.7600 | | 1.0204 | 13.0 | 7670 | 1.1010 | 0.7464 | | 0.9491 | 14.0 | 8260 | 1.0372 | 0.7676 | | 0.9387 | 15.0 | 8850 | 1.0840 | 0.7354 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119", "120", "121", "122", "123", "124", "125", "126", "127", "128", "129", "130", "131", "132", "133", "134", "135", "136", "137", "138", "139", "140", "141", "142", "143", "144", "145", "146", "147", "148", "149", "150", "151", "152", "153", "154", "155", "156", "157", "158", "159", "160", "161", "162", "163", "164", "165", "166", "167", "168", "169", "170", "171", "172", "173", "174", "175", "176", "177", "178", "179", "180", "181", "182", "183", "184", "185", "186", "187", "188", "189", "190", "191", "192", "193", "194", "195", "196", "197", "198", "199" ]
MichaelMM2000/vit-base-animals10
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-animals10 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Rapidata/Animals-10 dataset. It achieves the following results on the evaluation set: - Loss: 0.0784 - Accuracy: 0.9762 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0997 | 1.0 | 2356 | 0.0928 | 0.9732 | | 0.0752 | 2.0 | 4712 | 0.0678 | 0.9809 | | 0.0743 | 3.0 | 7068 | 0.0584 | 0.9839 | | 0.0882 | 4.0 | 9424 | 0.0605 | 0.9792 | | 0.0656 | 5.0 | 11780 | 0.0653 | 0.9813 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" ]
Jedrzej-Smok/2025-05-31_08-56-13
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-05-31_08-56-13 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.4659 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6962 | 1.0 | 2 | 0.6824 | 0.5857 | | 0.6628 | 2.0 | 4 | 0.6421 | 0.8143 | | 0.6149 | 3.0 | 6 | 0.6066 | 0.8286 | | 0.5829 | 4.0 | 8 | 0.5789 | 0.8286 | | 0.4701 | 5.0 | 10 | 0.5455 | 0.8571 | | 0.5166 | 6.0 | 12 | 0.5094 | 0.8857 | | 0.4658 | 7.0 | 14 | 0.4987 | 0.8714 | | 0.4678 | 8.0 | 16 | 0.4597 | 0.8857 | | 0.4114 | 9.0 | 18 | 0.4557 | 0.9143 | | 0.339 | 10.0 | 20 | 0.4659 | 0.9 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "healthy", "sick" ]
MaxPowerUnlimited/vit-superhero-villain
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-superhero-villain This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2902 - Accuracy: 0.7363 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 26 | 1.4140 | 0.735 | | 1.2713 | 2.0 | 52 | 1.3908 | 0.735 | | 1.2713 | 3.0 | 78 | 1.3709 | 0.735 | | 1.2028 | 4.0 | 104 | 1.3544 | 0.74 | | 1.2028 | 5.0 | 130 | 1.3359 | 0.74 | | 1.1776 | 6.0 | 156 | 1.3219 | 0.74 | | 1.1776 | 7.0 | 182 | 1.3078 | 0.74 | | 1.1515 | 8.0 | 208 | 1.2952 | 0.74 | | 1.1515 | 9.0 | 234 | 1.2841 | 0.74 | | 1.1519 | 10.0 | 260 | 1.2733 | 0.745 | | 1.1519 | 11.0 | 286 | 1.2637 | 0.745 | | 1.107 | 12.0 | 312 | 1.2557 | 0.745 | | 1.107 | 13.0 | 338 | 1.2495 | 0.745 | | 1.0611 | 14.0 | 364 | 1.2441 | 0.745 | | 1.0611 | 15.0 | 390 | 1.2388 | 0.745 | | 1.0748 | 16.0 | 416 | 1.2347 | 0.745 | | 1.0748 | 17.0 | 442 | 1.2317 | 0.745 | | 1.0563 | 18.0 | 468 | 1.2294 | 0.745 | | 1.0563 | 19.0 | 494 | 1.2280 | 0.745 | | 1.062 | 20.0 | 520 | 1.2277 | 0.745 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.5.1+cu121 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "batman", "blackpanther", "catwomen", "flash", "green_goblin", "harleyquinn", "hulk", "ironman", "joker", "lexluthor", "loki", "riddler", "spiderman", "superman", "thanos", "thor", "ultron", "venom", "wolverine", "wonderwoman" ]
viazzana/vit-fruits-classifier-without-augmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fruits-classifier-without-augmentation This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Custom fruit image dataset (uploaded from GitHub) without augmentation dataset. It achieves the following results on the evaluation set: - Loss: 0.1472 - Accuracy: 0.9654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1326 | 1.0 | 520 | 0.1220 | 0.9692 | | 0.0852 | 2.0 | 1040 | 0.1124 | 0.9721 | | 0.0668 | 3.0 | 1560 | 0.1111 | 0.9712 | | 0.0492 | 4.0 | 2080 | 0.1083 | 0.9721 | | 0.059 | 5.0 | 2600 | 0.1085 | 0.9721 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
[ "acai", "apple", "apricot", "avocado", "banana", "black_berry", "coconut", "corn_kernel", "cranberry", "dragonfruit", "durian", "eggplant", "elderberry", "fig", "grape", "grapefruit", "hard_kiwi", "indian_strawberry", "jalapeno", "mandarine", "mango", "papaya", "passion_fruit", "pineapple", "pomegranate", "raspberry" ]
viazzana/vit-fruits-classifier-with-augmentation
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fruits-classifier-with-augmentation This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the Custom fruit image dataset (uploaded from GitHub) with augmentations dataset. It achieves the following results on the evaluation set: - Loss: 0.1227 - Accuracy: 0.9712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0956 | 1.0 | 520 | 0.1204 | 0.9683 | | 0.082 | 2.0 | 1040 | 0.1147 | 0.9683 | | 0.0769 | 3.0 | 1560 | 0.1091 | 0.9683 | | 0.0544 | 4.0 | 2080 | 0.1090 | 0.9712 | | 0.065 | 5.0 | 2600 | 0.1086 | 0.9702 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
[ "acai", "apple", "apricot", "avocado", "banana", "black_berry", "coconut", "corn_kernel", "cranberry", "dragonfruit", "durian", "eggplant", "elderberry", "fig", "grape", "grapefruit", "hard_kiwi", "indian_strawberry", "jalapeno", "mandarine", "mango", "papaya", "passion_fruit", "pineapple", "pomegranate", "raspberry" ]
BootsofLagrangian/linear-vit-b-imagenet1k-hf
# Model Card for Linear ViT-B ImageNet-1k (Vanilla ViT) This model is a Vision Transformer (ViT-B) with standard (linear) residual connections, trained on [ImageNet-1k](https://huggingface.co/datasets/timm/imagenet-1k-wds). It serves as the vanilla baseline for the _Orthogonal Residual Updates_ proposed in the paper [Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks](https://arxiv.org/abs/2505.11881). The paper's core idea is to decompose a module's output relative to the input stream and add only the component orthogonal to this stream, aiming for richer feature learning and more efficient training. This specific checkpoint was trained for approximately 90,000 steps (roughly 270 epochs out of a planned 300). ## Model Details ### Evaluation _**Note:** Validation accuracy below is measured on the checkpoint at step 90k (not the final model); results may differ slightly from those reported in the paper._ | Steps | Connection | Top-1 Accuracy (%) | Top-5 Accuracy (%) | Link | |-------|-------------|--------------------|---------------------|------| | 90k | Orthogonal | **74.62** | **92.26** | [link](https://huggingface.co/BootsofLagrangian/ortho-vit-b-imagenet1k-hf) | | 90k | Linear | 71.23 | 90.29 | [link](https://huggingface.co/BootsofLagrangian/linear-vit-b-imagenet1k-hf) | ### Abstract Residual connections are pivotal for deep neural networks, enabling greater depth by mitigating vanishing gradients. However, in standard residual updates, the module's output is directly added to the input stream. This can lead to updates that predominantly reinforce or modulate the existing stream direction, potentially underutilizing the module's capacity for learning entirely novel features. In this work, we introduce _Orthogonal Residual Update_: we decompose the module's output relative to the input stream and add only the component orthogonal to this stream. This design aims to guide modules to contribute primarily new representational directions, fostering richer feature learning while promoting more efficient training. We demonstrate that our orthogonal update strategy improves generalization accuracy and training stability across diverse architectures (ResNetV2, Vision Transformers) and datasets (CIFARs, TinyImageNet, ImageNet-1k), achieving, for instance, a +4.3\%p top-1 accuracy gain for ViT-B on ImageNet-1k. ### Method Overview Our core idea is to modify the standard residual update $x_{n+1} = x_n + f(\sigma(x_n))$ by projecting out the component of $f(\sigma(x_n))$ that is parallel to $x_n$. The update then becomes $x_{n+1} = x_n + f_{\perp}(x_n)$, where $f_{\perp}(x_n)$ is the component of $f(\sigma(x_n))$ orthogonal to $x_n$. ![Figure 1: Intuition behind Orthogonal Residual Update](img/figure1.jpg) *Figure 1: (Left) Standard residual update. (Right) Our Orthogonal Residual Update, which discards the parallel component $f_{||}$ and adds only the orthogonal component $f_{\perp}$.* This approach aims to ensure that each module primarily contributes new information to the residual stream, enhancing representational diversity and mitigating potential interference from updates that merely rescale or oppose the existing stream. (A minimal PyTorch sketch of this update appears after the Citation section below.) ### Key Results: Stable and Efficient Learning Our Orthogonal Residual Update strategy leads to more stable training dynamics and improved learning efficiency. For example, models trained with our method often exhibit faster convergence to better generalization performance, as illustrated by comparative training curves.
![Figure 2: Training Dynamics and Efficiency Comparison](img/figure2.jpg) *Figure 2: Example comparison (e.g., ViT-B on ImageNet-1k) showing Orthogonal Residual Update (blue) achieving lower training loss and higher validation accuracy in less wall-clock time compared to linear residual updates (red).* ### Model Sources - **Repository (Original Implementation):** [https://github.com/BootsofLagrangian/ortho-residual](https://github.com/BootsofLagrangian/ortho-residual) - **Paper:** [Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks (arXiv:2505.11881)](https://arxiv.org/abs/2505.11881) ## Evaluation
```python
import torch
import torchvision.transforms as transforms
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoModelForImageClassification
from tqdm import tqdm
import argparse
from typing import Tuple, List

def accuracy_counts(
    logits: torch.Tensor,
    target: torch.Tensor,
    topk: Tuple[int, ...] = (1, 5),
) -> List[int]:
    """
    Given model outputs and targets, return a list of correct-counts for each k in topk.
    """
    maxk = max(topk)
    _, pred = logits.topk(maxk, dim=1, largest=True, sorted=True)
    pred = pred.t()
    correct = pred.eq(target.view(1, -1).expand_as(pred))
    res = []
    for k in topk:
        correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
        res.append(correct_k.item())
    return res

def evaluate_model():
    device = torch.device("cuda" if torch.cuda.is_available() and not args.cpu else "cpu")
    print(f"Using device: {device}")

    model = AutoModelForImageClassification.from_pretrained(
        "BootsofLagrangian/linear-vit-b-imagenet1k-hf",  # this card's (linear) checkpoint
        trust_remote_code=True
    )
    model.to(device)
    model.eval()

    img_size = 224
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]
    transform_eval = transforms.Compose([
        transforms.Lambda(lambda img: img.convert("RGB")),
        transforms.Resize(img_size, interpolation=transforms.InterpolationMode.BICUBIC),
        transforms.CenterCrop(img_size),
        transforms.ToTensor(),
        transforms.Normalize(mean, std),
    ])

    val_dataset = load_dataset("timm/imagenet-1k-wds", split="validation")

    def collate_fn(batch):
        images = torch.stack([transform_eval(item['jpg']) for item in batch])
        labels = torch.tensor([item['cls'] for item in batch])
        return images, labels

    val_loader = DataLoader(
        val_dataset,
        batch_size=32,
        shuffle=False,
        num_workers=4,
        collate_fn=collate_fn,
        pin_memory=True
    )

    total_samples, correct_top1, correct_top5 = 0, 0, 0
    with torch.no_grad():
        for images, labels in tqdm(val_loader, desc="Evaluating"):
            images = images.to(device)
            labels = labels.to(device)
            outputs = model(pixel_values=images)
            logits = outputs.logits
            counts = accuracy_counts(logits, labels, topk=(1, 5))
            correct_top1 += counts[0]
            correct_top5 += counts[1]
            total_samples += images.size(0)

    top1_accuracy = (correct_top1 / total_samples) * 100
    top5_accuracy = (correct_top5 / total_samples) * 100
    print("\n--- Evaluation Results ---")
    print(f"Total samples evaluated: {total_samples}")
    print(f"Top-1 Accuracy: {top1_accuracy:.2f}%")
    print(f"Top-5 Accuracy: {top5_accuracy:.2f}%")

if __name__ == "__main__":
    # The original snippet imported argparse and read args.cpu without defining args;
    # this block supplies it so the script runs as-is.
    parser = argparse.ArgumentParser()
    parser.add_argument("--cpu", action="store_true", help="force evaluation on CPU")
    args = parser.parse_args()
    evaluate_model()
```
## Citation
```bib
@article{oh2025revisitingresidualconnectionsorthogonal,
  title={Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks},
  author={Giyeong Oh and Woohyun Cho and Siyeol Kim and Suhwan Choi and Younjae Yu},
  year={2025},
  journal={arXiv preprint arXiv:2505.11881},
  eprint={2505.11881},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.11881}
}
```
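As a reading aid for the method described in this card, here is a minimal PyTorch sketch of the orthogonal residual update $x_{n+1} = x_n + f_{\perp}(x_n)$: the component of the module output parallel to the stream is projected out before the addition. Taking the projection token-wise over the feature dimension and adding an epsilon guard are assumptions; the paper and the original repository define the exact formulation.

```python
import torch

def orthogonal_residual_update(x: torch.Tensor, f_out: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Add only the component of f_out that is orthogonal to the stream x.

    x, f_out: (..., d) tensors, e.g. (batch, tokens, hidden) inside a ViT block.
    The projection is taken along the last (feature) dimension.
    """
    # <f_out, x> / <x, x> is the coefficient of the component parallel to x.
    coeff = (f_out * x).sum(dim=-1, keepdim=True) / (x * x).sum(dim=-1, keepdim=True).clamp_min(eps)
    f_parallel = coeff * x
    f_perp = f_out - f_parallel
    return x + f_perp

# Example: one block-style update on random features.
x = torch.randn(2, 16, 768)       # (batch, tokens, hidden)
f_out = torch.randn(2, 16, 768)   # output of an attention or MLP module
x_next = orthogonal_residual_update(x, f_out)
```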
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7", "label_8", "label_9", "label_10", "label_11", "label_12", "label_13", "label_14", "label_15", "label_16", "label_17", "label_18", "label_19", "label_20", "label_21", "label_22", "label_23", "label_24", "label_25", "label_26", "label_27", "label_28", "label_29", "label_30", "label_31", "label_32", "label_33", "label_34", "label_35", "label_36", "label_37", "label_38", "label_39", "label_40", "label_41", "label_42", "label_43", "label_44", "label_45", "label_46", "label_47", "label_48", "label_49", "label_50", "label_51", "label_52", "label_53", "label_54", "label_55", "label_56", "label_57", "label_58", "label_59", "label_60", "label_61", "label_62", "label_63", "label_64", "label_65", "label_66", "label_67", "label_68", "label_69", "label_70", "label_71", "label_72", "label_73", "label_74", "label_75", "label_76", "label_77", "label_78", "label_79", "label_80", "label_81", "label_82", "label_83", "label_84", "label_85", "label_86", "label_87", "label_88", "label_89", "label_90", "label_91", "label_92", "label_93", "label_94", "label_95", "label_96", "label_97", "label_98", "label_99", "label_100", "label_101", "label_102", "label_103", "label_104", "label_105", "label_106", "label_107", "label_108", "label_109", "label_110", "label_111", "label_112", "label_113", "label_114", "label_115", "label_116", "label_117", "label_118", "label_119", "label_120", "label_121", "label_122", "label_123", "label_124", "label_125", "label_126", "label_127", "label_128", "label_129", "label_130", "label_131", "label_132", "label_133", "label_134", "label_135", "label_136", "label_137", "label_138", "label_139", "label_140", "label_141", "label_142", "label_143", "label_144", "label_145", "label_146", "label_147", "label_148", "label_149", "label_150", "label_151", "label_152", "label_153", "label_154", "label_155", "label_156", "label_157", "label_158", "label_159", "label_160", "label_161", "label_162", "label_163", "label_164", "label_165", "label_166", "label_167", "label_168", "label_169", "label_170", "label_171", "label_172", "label_173", "label_174", "label_175", "label_176", "label_177", "label_178", "label_179", "label_180", "label_181", "label_182", "label_183", "label_184", "label_185", "label_186", "label_187", "label_188", "label_189", "label_190", "label_191", "label_192", "label_193", "label_194", "label_195", "label_196", "label_197", "label_198", "label_199", "label_200", "label_201", "label_202", "label_203", "label_204", "label_205", "label_206", "label_207", "label_208", "label_209", "label_210", "label_211", "label_212", "label_213", "label_214", "label_215", "label_216", "label_217", "label_218", "label_219", "label_220", "label_221", "label_222", "label_223", "label_224", "label_225", "label_226", "label_227", "label_228", "label_229", "label_230", "label_231", "label_232", "label_233", "label_234", "label_235", "label_236", "label_237", "label_238", "label_239", "label_240", "label_241", "label_242", "label_243", "label_244", "label_245", "label_246", "label_247", "label_248", "label_249", "label_250", "label_251", "label_252", "label_253", "label_254", "label_255", "label_256", "label_257", "label_258", "label_259", "label_260", "label_261", "label_262", "label_263", "label_264", "label_265", "label_266", "label_267", "label_268", "label_269", "label_270", "label_271", "label_272", "label_273", "label_274", "label_275", "label_276", "label_277", "label_278", "label_279", "label_280", 
"label_281", "label_282", "label_283", "label_284", "label_285", "label_286", "label_287", "label_288", "label_289", "label_290", "label_291", "label_292", "label_293", "label_294", "label_295", "label_296", "label_297", "label_298", "label_299", "label_300", "label_301", "label_302", "label_303", "label_304", "label_305", "label_306", "label_307", "label_308", "label_309", "label_310", "label_311", "label_312", "label_313", "label_314", "label_315", "label_316", "label_317", "label_318", "label_319", "label_320", "label_321", "label_322", "label_323", "label_324", "label_325", "label_326", "label_327", "label_328", "label_329", "label_330", "label_331", "label_332", "label_333", "label_334", "label_335", "label_336", "label_337", "label_338", "label_339", "label_340", "label_341", "label_342", "label_343", "label_344", "label_345", "label_346", "label_347", "label_348", "label_349", "label_350", "label_351", "label_352", "label_353", "label_354", "label_355", "label_356", "label_357", "label_358", "label_359", "label_360", "label_361", "label_362", "label_363", "label_364", "label_365", "label_366", "label_367", "label_368", "label_369", "label_370", "label_371", "label_372", "label_373", "label_374", "label_375", "label_376", "label_377", "label_378", "label_379", "label_380", "label_381", "label_382", "label_383", "label_384", "label_385", "label_386", "label_387", "label_388", "label_389", "label_390", "label_391", "label_392", "label_393", "label_394", "label_395", "label_396", "label_397", "label_398", "label_399", "label_400", "label_401", "label_402", "label_403", "label_404", "label_405", "label_406", "label_407", "label_408", "label_409", "label_410", "label_411", "label_412", "label_413", "label_414", "label_415", "label_416", "label_417", "label_418", "label_419", "label_420", "label_421", "label_422", "label_423", "label_424", "label_425", "label_426", "label_427", "label_428", "label_429", "label_430", "label_431", "label_432", "label_433", "label_434", "label_435", "label_436", "label_437", "label_438", "label_439", "label_440", "label_441", "label_442", "label_443", "label_444", "label_445", "label_446", "label_447", "label_448", "label_449", "label_450", "label_451", "label_452", "label_453", "label_454", "label_455", "label_456", "label_457", "label_458", "label_459", "label_460", "label_461", "label_462", "label_463", "label_464", "label_465", "label_466", "label_467", "label_468", "label_469", "label_470", "label_471", "label_472", "label_473", "label_474", "label_475", "label_476", "label_477", "label_478", "label_479", "label_480", "label_481", "label_482", "label_483", "label_484", "label_485", "label_486", "label_487", "label_488", "label_489", "label_490", "label_491", "label_492", "label_493", "label_494", "label_495", "label_496", "label_497", "label_498", "label_499", "label_500", "label_501", "label_502", "label_503", "label_504", "label_505", "label_506", "label_507", "label_508", "label_509", "label_510", "label_511", "label_512", "label_513", "label_514", "label_515", "label_516", "label_517", "label_518", "label_519", "label_520", "label_521", "label_522", "label_523", "label_524", "label_525", "label_526", "label_527", "label_528", "label_529", "label_530", "label_531", "label_532", "label_533", "label_534", "label_535", "label_536", "label_537", "label_538", "label_539", "label_540", "label_541", "label_542", "label_543", "label_544", "label_545", "label_546", "label_547", "label_548", "label_549", "label_550", "label_551", "label_552", "label_553", 
"label_554", "label_555", "label_556", "label_557", "label_558", "label_559", "label_560", "label_561", "label_562", "label_563", "label_564", "label_565", "label_566", "label_567", "label_568", "label_569", "label_570", "label_571", "label_572", "label_573", "label_574", "label_575", "label_576", "label_577", "label_578", "label_579", "label_580", "label_581", "label_582", "label_583", "label_584", "label_585", "label_586", "label_587", "label_588", "label_589", "label_590", "label_591", "label_592", "label_593", "label_594", "label_595", "label_596", "label_597", "label_598", "label_599", "label_600", "label_601", "label_602", "label_603", "label_604", "label_605", "label_606", "label_607", "label_608", "label_609", "label_610", "label_611", "label_612", "label_613", "label_614", "label_615", "label_616", "label_617", "label_618", "label_619", "label_620", "label_621", "label_622", "label_623", "label_624", "label_625", "label_626", "label_627", "label_628", "label_629", "label_630", "label_631", "label_632", "label_633", "label_634", "label_635", "label_636", "label_637", "label_638", "label_639", "label_640", "label_641", "label_642", "label_643", "label_644", "label_645", "label_646", "label_647", "label_648", "label_649", "label_650", "label_651", "label_652", "label_653", "label_654", "label_655", "label_656", "label_657", "label_658", "label_659", "label_660", "label_661", "label_662", "label_663", "label_664", "label_665", "label_666", "label_667", "label_668", "label_669", "label_670", "label_671", "label_672", "label_673", "label_674", "label_675", "label_676", "label_677", "label_678", "label_679", "label_680", "label_681", "label_682", "label_683", "label_684", "label_685", "label_686", "label_687", "label_688", "label_689", "label_690", "label_691", "label_692", "label_693", "label_694", "label_695", "label_696", "label_697", "label_698", "label_699", "label_700", "label_701", "label_702", "label_703", "label_704", "label_705", "label_706", "label_707", "label_708", "label_709", "label_710", "label_711", "label_712", "label_713", "label_714", "label_715", "label_716", "label_717", "label_718", "label_719", "label_720", "label_721", "label_722", "label_723", "label_724", "label_725", "label_726", "label_727", "label_728", "label_729", "label_730", "label_731", "label_732", "label_733", "label_734", "label_735", "label_736", "label_737", "label_738", "label_739", "label_740", "label_741", "label_742", "label_743", "label_744", "label_745", "label_746", "label_747", "label_748", "label_749", "label_750", "label_751", "label_752", "label_753", "label_754", "label_755", "label_756", "label_757", "label_758", "label_759", "label_760", "label_761", "label_762", "label_763", "label_764", "label_765", "label_766", "label_767", "label_768", "label_769", "label_770", "label_771", "label_772", "label_773", "label_774", "label_775", "label_776", "label_777", "label_778", "label_779", "label_780", "label_781", "label_782", "label_783", "label_784", "label_785", "label_786", "label_787", "label_788", "label_789", "label_790", "label_791", "label_792", "label_793", "label_794", "label_795", "label_796", "label_797", "label_798", "label_799", "label_800", "label_801", "label_802", "label_803", "label_804", "label_805", "label_806", "label_807", "label_808", "label_809", "label_810", "label_811", "label_812", "label_813", "label_814", "label_815", "label_816", "label_817", "label_818", "label_819", "label_820", "label_821", "label_822", "label_823", "label_824", "label_825", "label_826", 
"label_827", "label_828", "label_829", "label_830", "label_831", "label_832", "label_833", "label_834", "label_835", "label_836", "label_837", "label_838", "label_839", "label_840", "label_841", "label_842", "label_843", "label_844", "label_845", "label_846", "label_847", "label_848", "label_849", "label_850", "label_851", "label_852", "label_853", "label_854", "label_855", "label_856", "label_857", "label_858", "label_859", "label_860", "label_861", "label_862", "label_863", "label_864", "label_865", "label_866", "label_867", "label_868", "label_869", "label_870", "label_871", "label_872", "label_873", "label_874", "label_875", "label_876", "label_877", "label_878", "label_879", "label_880", "label_881", "label_882", "label_883", "label_884", "label_885", "label_886", "label_887", "label_888", "label_889", "label_890", "label_891", "label_892", "label_893", "label_894", "label_895", "label_896", "label_897", "label_898", "label_899", "label_900", "label_901", "label_902", "label_903", "label_904", "label_905", "label_906", "label_907", "label_908", "label_909", "label_910", "label_911", "label_912", "label_913", "label_914", "label_915", "label_916", "label_917", "label_918", "label_919", "label_920", "label_921", "label_922", "label_923", "label_924", "label_925", "label_926", "label_927", "label_928", "label_929", "label_930", "label_931", "label_932", "label_933", "label_934", "label_935", "label_936", "label_937", "label_938", "label_939", "label_940", "label_941", "label_942", "label_943", "label_944", "label_945", "label_946", "label_947", "label_948", "label_949", "label_950", "label_951", "label_952", "label_953", "label_954", "label_955", "label_956", "label_957", "label_958", "label_959", "label_960", "label_961", "label_962", "label_963", "label_964", "label_965", "label_966", "label_967", "label_968", "label_969", "label_970", "label_971", "label_972", "label_973", "label_974", "label_975", "label_976", "label_977", "label_978", "label_979", "label_980", "label_981", "label_982", "label_983", "label_984", "label_985", "label_986", "label_987", "label_988", "label_989", "label_990", "label_991", "label_992", "label_993", "label_994", "label_995", "label_996", "label_997", "label_998", "label_999" ]
maceythm/vit-90-animals-moreepochs
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-90-animals-moreepochs This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the iamsouravbanerjee/animal-image-dataset-90-different-animals dataset. It achieves the following results on the evaluation set: - Loss: 0.0709 - Accuracy: 0.9852 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1913 | 1.0 | 270 | 0.3072 | 0.9722 | | 0.2882 | 2.0 | 540 | 0.1545 | 0.9722 | | 0.1824 | 3.0 | 810 | 0.1328 | 0.9704 | | 0.1578 | 4.0 | 1080 | 0.1217 | 0.9704 | | 0.1518 | 5.0 | 1350 | 0.1161 | 0.9704 | | 0.1246 | 6.0 | 1620 | 0.1134 | 0.9704 | | 0.1203 | 7.0 | 1890 | 0.1134 | 0.9704 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1
[ "antelope", "badger", "bat", "bear", "bee", "beetle", "bison", "boar", "butterfly", "cat", "caterpillar", "chimpanzee", "cockroach", "cow", "coyote", "crab", "crow", "deer", "dog", "dolphin", "donkey", "dragonfly", "duck", "eagle", "elephant", "flamingo", "fly", "fox", "goat", "goldfish", "goose", "gorilla", "grasshopper", "hamster", "hare", "hedgehog", "hippopotamus", "hornbill", "horse", "hummingbird", "hyena", "jellyfish", "kangaroo", "koala", "ladybugs", "leopard", "lion", "lizard", "lobster", "mosquito", "moth", "mouse", "octopus", "okapi", "orangutan", "otter", "owl", "ox", "oyster", "panda", "parrot", "pelecaniformes", "penguin", "pig", "pigeon", "porcupine", "possum", "raccoon", "rat", "reindeer", "rhinoceros", "sandpiper", "seahorse", "seal", "shark", "sheep", "snake", "sparrow", "squid", "squirrel", "starfish", "swan", "tiger", "turkey", "turtle", "whale", "wolf", "wombat", "woodpecker", "zebra" ]
knt1212/street_roadvision
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
wnkh/image_classification_based_vit-base-patch16-224-in21k
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification_based_vit-base-patch16-224-in21k This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6855 - Accuracy: 0.888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8043 | 1.0 | 63 | 2.5790 | 0.846 | | 1.9246 | 2.0 | 126 | 1.8566 | 0.87 | | 1.6885 | 2.96 | 186 | 1.6855 | 0.888 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "apple_pie", "baby_back_ribs", "bruschetta", "waffles", "caesar_salad", "cannoli", "caprese_salad", "carrot_cake", "ceviche", "cheesecake", "cheese_plate", "chicken_curry", "chicken_quesadilla", "baklava", "chicken_wings", "chocolate_cake", "chocolate_mousse", "churros", "clam_chowder", "club_sandwich", "crab_cakes", "creme_brulee", "croque_madame", "cup_cakes", "beef_carpaccio", "deviled_eggs", "donuts", "dumplings", "edamame", "eggs_benedict", "escargots", "falafel", "filet_mignon", "fish_and_chips", "foie_gras", "beef_tartare", "french_fries", "french_onion_soup", "french_toast", "fried_calamari", "fried_rice", "frozen_yogurt", "garlic_bread", "gnocchi", "greek_salad", "grilled_cheese_sandwich", "beet_salad", "grilled_salmon", "guacamole", "gyoza", "hamburger", "hot_and_sour_soup", "hot_dog", "huevos_rancheros", "hummus", "ice_cream", "lasagna", "beignets", "lobster_bisque", "lobster_roll_sandwich", "macaroni_and_cheese", "macarons", "miso_soup", "mussels", "nachos", "omelette", "onion_rings", "oysters", "bibimbap", "pad_thai", "paella", "pancakes", "panna_cotta", "peking_duck", "pho", "pizza", "pork_chop", "poutine", "prime_rib", "bread_pudding", "pulled_pork_sandwich", "ramen", "ravioli", "red_velvet_cake", "risotto", "samosa", "sashimi", "scallops", "seaweed_salad", "shrimp_and_grits", "breakfast_burrito", "spaghetti_bolognese", "spaghetti_carbonara", "spring_rolls", "steak", "strawberry_shortcake", "sushi", "tacos", "takoyaki", "tiramisu", "tuna_tartare" ]
Amoros/Amoros_Beaugosse_batch_64_epochs_150_test-large-2025_05_31_74882-bs64_freeze
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Amoros_Beaugosse_batch_64_epochs_150_test-large-2025_05_31_74882-bs64_freeze This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0794 - F1 Micro: 0.6595 - F1 Macro: 0.5426 - Accuracy: 0.5713 - Learning Rate: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Rate | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|:------:| | No log | 1.0 | 489 | 0.1054 | 0.4744 | 0.1919 | 0.3282 | 0.001 | | 0.2259 | 2.0 | 978 | 0.0981 | 0.5294 | 0.2757 | 0.3986 | 0.001 | | 0.1119 | 3.0 | 1467 | 0.0943 | 0.5452 | 0.3540 | 0.4125 | 0.001 | | 0.1063 | 4.0 | 1956 | 0.0947 | 0.5346 | 0.3396 | 0.4013 | 0.001 | | 0.104 | 5.0 | 2445 | 0.0943 | 0.5597 | 0.3631 | 0.4329 | 0.001 | | 0.1031 | 6.0 | 2934 | 0.0946 | 0.5400 | 0.3305 | 0.4131 | 0.001 | | 0.1029 | 7.0 | 3423 | 0.0934 | 0.5622 | 0.3664 | 0.4370 | 0.001 | | 0.1022 | 8.0 | 3912 | 0.0955 | 0.5511 | 0.3686 | 0.4273 | 0.001 | | 0.1021 | 9.0 | 4401 | 0.0934 | 0.5745 | 0.3633 | 0.4571 | 0.001 | | 0.102 | 10.0 | 4890 | 0.0930 | 0.5688 | 0.3772 | 0.4417 | 0.001 | | 0.1027 | 11.0 | 5379 | 0.0930 | 0.5624 | 0.3707 | 0.4352 | 0.001 | | 0.102 | 12.0 | 5868 | 0.0920 | 0.5713 | 0.3767 | 0.4449 | 0.001 | | 0.1017 | 13.0 | 6357 | 0.0924 | 0.5641 | 0.3580 | 0.4338 | 0.001 | | 0.1014 | 14.0 | 6846 | 0.0917 | 0.5733 | 0.3675 | 0.4502 | 0.001 | | 0.1006 | 15.0 | 7335 | 0.0904 | 0.5817 | 0.3965 | 0.4611 | 0.001 | | 0.1011 | 16.0 | 7824 | 0.0906 | 0.5759 | 0.4032 | 0.4497 | 0.001 | | 0.1007 | 17.0 | 8313 | 0.0917 | 0.5629 | 0.3868 | 0.4328 | 0.001 | | 0.1009 | 18.0 | 8802 | 0.0910 | 0.5791 | 0.3982 | 0.4546 | 0.001 | | 0.1007 | 19.0 | 9291 | 0.0909 | 0.5657 | 0.3833 | 0.4363 | 0.001 | | 0.1006 | 20.0 | 9780 | 0.0905 | 0.5832 | 0.3929 | 0.4619 | 0.001 | | 0.1008 | 21.0 | 10269 | 0.0917 | 0.5678 | 0.4099 | 0.4367 | 0.001 | | 0.0998 | 22.0 | 10758 | 0.0868 | 0.6078 | 0.4424 | 0.4921 | 0.0001 | | 0.0947 | 23.0 | 11247 | 0.0861 | 0.6140 | 0.4472 | 0.5008 | 0.0001 | | 0.0937 | 24.0 | 11736 | 0.0853 | 0.6166 | 0.4589 | 0.5022 | 0.0001 | | 0.0932 | 25.0 | 12225 | 0.0849 | 0.6163 | 0.4571 | 0.5025 | 0.0001 | | 0.0922 | 26.0 | 12714 | 0.0845 | 0.6215 | 0.4645 | 0.5119 | 0.0001 | | 0.0912 | 27.0 | 13203 | 0.0842 | 0.6259 | 0.4661 | 0.5159 | 0.0001 | | 0.091 | 28.0 | 13692 | 0.0839 | 0.6245 | 0.4658 | 0.5133 | 0.0001 | | 0.0905 | 29.0 | 14181 | 0.0839 | 0.6248 | 0.4696 | 0.5141 | 0.0001 | | 0.0903 | 30.0 | 14670 | 0.0835 | 0.6276 | 0.4716 | 0.5202 | 0.0001 | | 0.09 | 31.0 | 15159 | 0.0832 | 0.6303 | 0.4792 | 0.5222 | 0.0001 | | 0.0892 | 32.0 | 15648 | 0.0831 | 0.6310 | 0.4858 | 0.5261 | 0.0001 | | 0.0893 | 33.0 | 
16137 | 0.0826 | 0.6338 | 0.4873 | 0.5307 | 0.0001 | | 0.0884 | 34.0 | 16626 | 0.0826 | 0.6320 | 0.4740 | 0.5241 | 0.0001 | | 0.0882 | 35.0 | 17115 | 0.0824 | 0.6342 | 0.4855 | 0.5302 | 0.0001 | | 0.0886 | 36.0 | 17604 | 0.0823 | 0.6351 | 0.4845 | 0.5326 | 0.0001 | | 0.0881 | 37.0 | 18093 | 0.0822 | 0.6340 | 0.4825 | 0.5273 | 0.0001 | | 0.0882 | 38.0 | 18582 | 0.0823 | 0.6383 | 0.4913 | 0.5369 | 0.0001 | | 0.0876 | 39.0 | 19071 | 0.0819 | 0.6400 | 0.4970 | 0.5369 | 0.0001 | | 0.0877 | 40.0 | 19560 | 0.0819 | 0.6372 | 0.4887 | 0.5315 | 0.0001 | | 0.0864 | 41.0 | 20049 | 0.0821 | 0.6317 | 0.4833 | 0.5239 | 0.0001 | | 0.0867 | 42.0 | 20538 | 0.0814 | 0.6395 | 0.5033 | 0.5369 | 0.0001 | | 0.0871 | 43.0 | 21027 | 0.0812 | 0.6456 | 0.5000 | 0.5460 | 0.0001 | | 0.087 | 44.0 | 21516 | 0.0812 | 0.6400 | 0.4966 | 0.5371 | 0.0001 | | 0.0863 | 45.0 | 22005 | 0.0815 | 0.6392 | 0.5049 | 0.5344 | 0.0001 | | 0.0863 | 46.0 | 22494 | 0.0812 | 0.6419 | 0.5045 | 0.5395 | 0.0001 | | 0.0859 | 47.0 | 22983 | 0.0809 | 0.6452 | 0.5071 | 0.5442 | 0.0001 | | 0.0858 | 48.0 | 23472 | 0.0811 | 0.6451 | 0.5108 | 0.5449 | 0.0001 | | 0.0861 | 49.0 | 23961 | 0.0812 | 0.6415 | 0.4906 | 0.5406 | 0.0001 | | 0.0856 | 50.0 | 24450 | 0.0808 | 0.6449 | 0.5024 | 0.5432 | 0.0001 | | 0.0857 | 51.0 | 24939 | 0.0807 | 0.6466 | 0.5080 | 0.5475 | 0.0001 | | 0.0857 | 52.0 | 25428 | 0.0808 | 0.6432 | 0.5082 | 0.5414 | 0.0001 | | 0.0852 | 53.0 | 25917 | 0.0806 | 0.6507 | 0.5132 | 0.5525 | 0.0001 | | 0.0847 | 54.0 | 26406 | 0.0806 | 0.6436 | 0.5143 | 0.5420 | 0.0001 | | 0.0849 | 55.0 | 26895 | 0.0809 | 0.6429 | 0.5096 | 0.5409 | 0.0001 | | 0.0847 | 56.0 | 27384 | 0.0807 | 0.6486 | 0.5029 | 0.5485 | 0.0001 | | 0.0845 | 57.0 | 27873 | 0.0807 | 0.6439 | 0.5007 | 0.5412 | 0.0001 | | 0.0848 | 58.0 | 28362 | 0.0806 | 0.6497 | 0.4993 | 0.5520 | 0.0001 | | 0.0843 | 59.0 | 28851 | 0.0804 | 0.6445 | 0.4995 | 0.5391 | 0.0001 | | 0.0839 | 60.0 | 29340 | 0.0801 | 0.6549 | 0.5226 | 0.5597 | 0.0001 | | 0.0844 | 61.0 | 29829 | 0.0807 | 0.6450 | 0.4941 | 0.5454 | 0.0001 | | 0.0832 | 62.0 | 30318 | 0.0801 | 0.6470 | 0.5134 | 0.5438 | 0.0001 | | 0.084 | 63.0 | 30807 | 0.0804 | 0.6494 | 0.5026 | 0.5517 | 0.0001 | | 0.0834 | 64.0 | 31296 | 0.0802 | 0.6448 | 0.5091 | 0.5431 | 0.0001 | | 0.0841 | 65.0 | 31785 | 0.0804 | 0.6504 | 0.5109 | 0.5513 | 0.0001 | | 0.0837 | 66.0 | 32274 | 0.0802 | 0.6483 | 0.5137 | 0.5487 | 0.0001 | | 0.0833 | 67.0 | 32763 | 0.0801 | 0.6517 | 0.5166 | 0.5557 | 0.0001 | | 0.0836 | 68.0 | 33252 | 0.0798 | 0.6553 | 0.5184 | 0.5574 | 0.0001 | | 0.0835 | 69.0 | 33741 | 0.0802 | 0.6516 | 0.5112 | 0.5549 | 0.0001 | | 0.0827 | 70.0 | 34230 | 0.0798 | 0.6536 | 0.5232 | 0.5561 | 0.0001 | | 0.0832 | 71.0 | 34719 | 0.0801 | 0.6510 | 0.5223 | 0.5536 | 0.0001 | | 0.0831 | 72.0 | 35208 | 0.0799 | 0.6534 | 0.5130 | 0.5583 | 0.0001 | | 0.0832 | 73.0 | 35697 | 0.0799 | 0.6489 | 0.5129 | 0.5487 | 0.0001 | | 0.0836 | 74.0 | 36186 | 0.0799 | 0.6451 | 0.5035 | 0.5437 | 0.0001 | | 0.0827 | 75.0 | 36675 | 0.0798 | 0.6520 | 0.5196 | 0.5533 | 0.0001 | | 0.0827 | 76.0 | 37164 | 0.0797 | 0.6507 | 0.5247 | 0.5498 | 0.0001 | | 0.0832 | 77.0 | 37653 | 0.0797 | 0.6537 | 0.5186 | 0.5574 | 0.0001 | | 0.0824 | 78.0 | 38142 | 0.0796 | 0.6520 | 0.5284 | 0.5534 | 0.0001 | | 0.0828 | 79.0 | 38631 | 0.0795 | 0.6536 | 0.5135 | 0.5572 | 0.0001 | | 0.0824 | 80.0 | 39120 | 0.0797 | 0.6519 | 0.5117 | 0.5523 | 0.0001 | | 0.0822 | 81.0 | 39609 | 0.0795 | 0.6548 | 0.5192 | 0.5586 | 0.0001 | | 0.0824 | 82.0 | 40098 | 0.0796 | 0.6550 | 0.5164 | 0.5610 | 0.0001 | | 0.0822 | 83.0 | 40587 
| 0.0796 | 0.6556 | 0.5408 | 0.5607 | 0.0001 | | 0.0818 | 84.0 | 41076 | 0.0792 | 0.6562 | 0.5267 | 0.5631 | 0.0001 | | 0.0826 | 85.0 | 41565 | 0.0795 | 0.6517 | 0.5200 | 0.5559 | 0.0001 | | 0.0819 | 86.0 | 42054 | 0.0794 | 0.6546 | 0.5127 | 0.5579 | 0.0001 | | 0.0822 | 87.0 | 42543 | 0.0794 | 0.6566 | 0.5185 | 0.5613 | 0.0001 | | 0.0818 | 88.0 | 43032 | 0.0794 | 0.6549 | 0.5269 | 0.5598 | 0.0001 | | 0.0817 | 89.0 | 43521 | 0.0795 | 0.6555 | 0.5239 | 0.5585 | 0.0001 | | 0.082 | 90.0 | 44010 | 0.0794 | 0.6518 | 0.5180 | 0.5536 | 0.0001 | | 0.082 | 91.0 | 44499 | 0.0787 | 0.6601 | 0.5286 | 0.5699 | 1e-05 | | 0.0804 | 92.0 | 44988 | 0.0786 | 0.6590 | 0.5243 | 0.5656 | 1e-05 | | 0.0803 | 93.0 | 45477 | 0.0785 | 0.6585 | 0.5256 | 0.5645 | 1e-05 | | 0.0792 | 94.0 | 45966 | 0.0785 | 0.6592 | 0.5281 | 0.5646 | 1e-05 | | 0.0789 | 95.0 | 46455 | 0.0785 | 0.6603 | 0.5329 | 0.5696 | 1e-05 | | 0.0788 | 96.0 | 46944 | 0.0785 | 0.6602 | 0.5236 | 0.5685 | 1e-05 | | 0.0786 | 97.0 | 47433 | 0.0785 | 0.6590 | 0.5270 | 0.5653 | 1e-05 | | 0.0789 | 98.0 | 47922 | 0.0784 | 0.6629 | 0.5348 | 0.5727 | 1e-05 | | 0.0783 | 99.0 | 48411 | 0.0784 | 0.6626 | 0.5344 | 0.5726 | 1e-05 | | 0.0789 | 100.0 | 48900 | 0.0785 | 0.6607 | 0.5257 | 0.5709 | 1e-05 | | 0.0783 | 101.0 | 49389 | 0.0783 | 0.6620 | 0.5332 | 0.5723 | 1e-05 | | 0.0783 | 102.0 | 49878 | 0.0783 | 0.6644 | 0.5335 | 0.5750 | 1e-05 | | 0.0781 | 103.0 | 50367 | 0.0783 | 0.6652 | 0.5375 | 0.5796 | 1e-05 | | 0.0782 | 104.0 | 50856 | 0.0783 | 0.6644 | 0.5414 | 0.5751 | 1e-05 | | 0.0776 | 105.0 | 51345 | 0.0783 | 0.6646 | 0.5412 | 0.5776 | 1e-05 | | 0.0778 | 106.0 | 51834 | 0.0782 | 0.6670 | 0.5439 | 0.5803 | 1e-05 | | 0.0777 | 107.0 | 52323 | 0.0781 | 0.6652 | 0.5333 | 0.5771 | 1e-05 | | 0.0778 | 108.0 | 52812 | 0.0782 | 0.6628 | 0.5354 | 0.5716 | 1e-05 | | 0.078 | 109.0 | 53301 | 0.0781 | 0.6640 | 0.5352 | 0.5752 | 1e-05 | | 0.0785 | 110.0 | 53790 | 0.0780 | 0.6655 | 0.5345 | 0.5752 | 1e-05 | | 0.0772 | 111.0 | 54279 | 0.0781 | 0.6639 | 0.5403 | 0.5748 | 1e-05 | | 0.0779 | 112.0 | 54768 | 0.0780 | 0.6648 | 0.5373 | 0.5767 | 1e-05 | | 0.0774 | 113.0 | 55257 | 0.0781 | 0.6658 | 0.5446 | 0.5792 | 1e-05 | | 0.0774 | 114.0 | 55746 | 0.0780 | 0.6672 | 0.5445 | 0.5801 | 1e-05 | | 0.078 | 115.0 | 56235 | 0.0782 | 0.6671 | 0.5445 | 0.5816 | 1e-05 | | 0.0773 | 116.0 | 56724 | 0.0782 | 0.6647 | 0.5352 | 0.5756 | 1e-05 | | 0.0779 | 117.0 | 57213 | 0.0781 | 0.6641 | 0.5323 | 0.5758 | 1e-05 | | 0.0769 | 118.0 | 57702 | 0.0781 | 0.6655 | 0.5342 | 0.5774 | 1e-05 | | 0.0773 | 119.0 | 58191 | 0.0780 | 0.6655 | 0.5362 | 0.5769 | 0.0000 | | 0.0771 | 120.0 | 58680 | 0.0780 | 0.6663 | 0.5425 | 0.5777 | 0.0000 | | 0.0769 | 121.0 | 59169 | 0.0781 | 0.6668 | 0.5404 | 0.5806 | 0.0000 | | 0.0769 | 122.0 | 59658 | 0.0780 | 0.6680 | 0.5436 | 0.5817 | 0.0000 | | 0.0771 | 123.0 | 60147 | 0.0780 | 0.6667 | 0.5441 | 0.5798 | 0.0000 | | 0.0773 | 124.0 | 60636 | 0.0780 | 0.6664 | 0.5436 | 0.5784 | 0.0000 | | 0.0773 | 125.0 | 61125 | 0.0780 | 0.6660 | 0.5453 | 0.5777 | 0.0000 | | 0.077 | 126.0 | 61614 | 0.0779 | 0.6632 | 0.5347 | 0.5726 | 0.0000 | | 0.0774 | 127.0 | 62103 | 0.0780 | 0.6649 | 0.5324 | 0.5757 | 0.0000 | | 0.0767 | 128.0 | 62592 | 0.0780 | 0.6662 | 0.5357 | 0.5765 | 0.0000 | | 0.077 | 129.0 | 63081 | 0.0779 | 0.6664 | 0.5404 | 0.5773 | 0.0000 | | 0.0773 | 130.0 | 63570 | 0.0781 | 0.6670 | 0.5409 | 0.5802 | 0.0000 | | 0.0772 | 131.0 | 64059 | 0.0779 | 0.6686 | 0.5461 | 0.5828 | 0.0000 | | 0.0772 | 132.0 | 64548 | 0.0779 | 0.6671 | 0.5430 | 0.5789 | 0.0000 | | 0.077 | 133.0 | 65037 | 
0.0780 | 0.6678 | 0.5418 | 0.5817 | 0.0000 | | 0.0769 | 134.0 | 65526 | 0.0780 | 0.6670 | 0.5429 | 0.5796 | 0.0000 | | 0.0766 | 135.0 | 66015 | 0.0779 | 0.6676 | 0.5453 | 0.5783 | 0.0000 | | 0.0772 | 136.0 | 66504 | 0.0779 | 0.6646 | 0.5399 | 0.5750 | 0.0000 | | 0.0772 | 137.0 | 66993 | 0.0780 | 0.6651 | 0.5299 | 0.5755 | 0.0000 | | 0.0773 | 138.0 | 67482 | 0.0780 | 0.6664 | 0.5401 | 0.5793 | 0.0000 | | 0.0771 | 139.0 | 67971 | 0.0780 | 0.6657 | 0.5310 | 0.5784 | 0.0000 | ### Framework versions - Transformers 4.48.0 - Pytorch 2.6.0+cu118 - Datasets 3.0.2 - Tokenizers 0.21.1
[ "algae", "acr", "anem", "cca", "ech", "fts", "gal", "gon", "mtp", "p", "poc", "por", "r", "rdc", "s", "sg", "ser", "slt", "sp", "unk" ]
walzsil1/vit-base-fruits-360
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-fruits-360 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the PedroSampaio/fruits-360 dataset. It achieves the following results on the evaluation set: - Loss: 0.0006 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0104 | 1.0 | 3385 | 0.0090 | 0.9996 | | 0.0024 | 2.0 | 6770 | 0.0025 | 1.0 | | 0.001 | 3.0 | 10155 | 0.0013 | 1.0 | | 0.0004 | 4.0 | 13540 | 0.0007 | 1.0 | | 0.0004 | 5.0 | 16925 | 0.0005 | 1.0 | ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.1 ### zero-shot classification model "openai/clip-vit-large-patch14" - Accuracy: 0.3161 - Precision: 0.3975 - Recall: 0.3161
[ "apple braeburn", "apple crimson snow", "apple golden", "apple granny smith", "apple pink lady", "apple red", "apple red delicious", "apple red yellow", "apricot", "avocado", "avocado ripe", "banana", "banana lady finger", "banana red", "beetroot", "blueberry", "cactus fruit", "cantaloupe", "carambula", "cauliflower", "cherry", "cherry rainier", "cherry wax black", "cherry wax red", "cherry wax yellow", "chestnut", "clementine", "cocos", "corn", "corn husk", "cucumber ripe", "dates", "eggplant", "fig", "ginger root", "granadilla", "grape blue", "grape pink", "grape white", "grapefruit pink", "grapefruit white", "guava", "hazelnut", "huckleberry", "kaki", "kiwi", "kohlrabi", "kumquats", "lemon", "lemon meyer", "limes", "lychee", "mandarine", "mango", "mango red", "mangostan", "maracuja", "melon piel de sapo", "mulberry", "nectarine", "nectarine flat", "nut forest", "nut pecan", "onion red", "onion red peeled", "onion white", "orange", "papaya", "passion fruit", "peach", "peach flat", "pear", "pear abate", "pear forelle", "pear kaiser", "pear monster", "pear red", "pear stone", "pear williams", "pepino", "pepper green", "pepper orange", "pepper red", "pepper yellow", "physalis", "physalis with husk", "pineapple", "pineapple mini", "pitahaya red", "plum", "pomegranate", "pomelo sweetie", "potato red", "potato red washed", "potato sweet", "potato white", "quince", "rambutan", "raspberry", "redcurrant", "salak", "strawberry", "strawberry wedge", "tamarillo", "tangelo", "tomato", "tomato cherry red", "tomato heart", "tomato maroon", "tomato yellow", "tomato not ripened", "walnut", "watermelon" ]
Makeh3ne/results
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2711 - Accuracy: 0.5062 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.52.2 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
[ "label_0", "label_1", "label_2", "label_3", "label_4", "label_5", "label_6", "label_7" ]
Jedrzej-Smok/2025-06-01_13-58-11
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2025-06-01_13-58-11 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.3588 - Accuracy: 0.9571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6955 | 1.0 | 2 | 0.6818 | 0.6 | | 0.6482 | 2.0 | 4 | 0.6114 | 0.8429 | | 0.5613 | 3.0 | 6 | 0.5547 | 0.8714 | | 0.5062 | 4.0 | 8 | 0.4905 | 0.8714 | | 0.5174 | 5.0 | 10 | 0.4499 | 0.8857 | | 0.4428 | 6.0 | 12 | 0.4480 | 0.9286 | | 0.4778 | 7.0 | 14 | 0.4045 | 0.9143 | | 0.4379 | 8.0 | 16 | 0.4020 | 0.9429 | | 0.3983 | 9.0 | 18 | 0.3847 | 0.9571 | | 0.3377 | 10.0 | 20 | 0.3588 | 0.9571 | ### Framework versions - Transformers 4.52.3 - Pytorch 2.7.0+cu126 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "healthy", "sick" ]
ohjoonhee/siglip2-giant-384-rokn391-pln
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "k8_하이브리드_2022_2024", "캐스퍼_2022_2024", "쏘나타_dn8_2020_2023", "sm7_뉴아트_2008_2011", "b_클래스_w246_2013_2018", "머스탱_2015_2023", "팰리세이드_2019_2022", "sm6_2016_2020", "올_뉴_말리부_2017_2018", "뉴쏘렌토_r_2013_2014", "xc40_2019_2022", "니로_2017_2019", "더_뉴_스파크_2019_2022", "x6_g06_2024_2025", "랭글러_jl_2018_2024", "파일럿_3세대_2016_2018", "스파크_2012_2015", "더_뉴_k5_2세대_2019_2020", "qx60_2016_2018", "z4_g29_2019_2025", "g_클래스_w463_2009_2017", "k5_하이브리드_3세대_2020_2023", "파나메라_971_2017_2023", "디_올뉴코나_2023_2025", "뉴_g80_2025_2026", "뉴_mkz_2017_2020", "a_클래스_w176_2015_2018", "라브4_4세대_2013_2018", "베리_뉴_티볼리_2020_2023", "a5_f5_2019_2024", "a7_2012_2016", "리얼_뉴_콜로라도_2021_2022", "제네시스_dh_2014_2016", "qm3_2014_2017", "레인지로버_이보크_2세대_2023_2024", "올_뉴_모닝_2012_2015", "gla_클래스_x156_2015_2019", "셀토스_2020_2023", "아베오_2012_2016", "모델_3_2019_2022", "아이오닉5_2022_2023", "토레스_2023_2025", "모닝_어반_ja_2021_2023", "더_뉴_아반떼_cn7_2023_2025", "마칸_2019_2021", "더_뉴_파사트_2012_2019", "5008_2세대_2021_2024", "6시리즈_gt_g32_2018_2020", "올_뉴_k7_하이브리드_2017_2019", "e_클래스_w213_2021_2023", "eqa_h243_2021_2024", "더_뉴_g70_2021_2025", "xc60_2세대_2022_2025", "더_뉴_카니발_2019_2020", "q7_4m_2016_2019", "3시리즈_gt_f34_2014_2021", "올_뉴_k7_2016_2019", "k5_3세대_2020_2023", "더_뉴_모하비_2017_2019", "xc90_2세대_2020_2025", "넥쏘_2018_2024", "g_클래스_w463b_2019_2025", "xm3_2020_2023", "올_뉴_k3_2019_2021", "투아렉_3세대_2020_2023", "q3_f3_2020_2024", "모하비_더_마스터_2020_2024", "xc60_2세대_2018_2021", "eqe_v295_2022_2024", "싼타페_tm_2019_2020", "티볼리_에어_2016_2019", "k3_2013_2015", "x1_u11_2023_2024", "8시리즈_g15_2020_2024", "eqs_v297_2022_2023", "아반떼_n_2022_2023", "k7_프리미어_하이브리드_2020_2021", "glb_클래스_x247_2020_2023", "gv80_2020_2022", "올_뉴_카마로_2017_2018", "7시리즈_g70_2023_2025", "sm3_네오_2015_2019", "트레일블레이저_2021_2022", "더_뉴_맥스크루즈_2016_2018", "s_클래스_w223_2021_2025", "3008_2세대_2018_2023", "에비에이터_2세대_2020_2025", "gls_클래스_x167_2020_2024", "더_기아_레이_ev_2024_2025", "레인지로버_이보크_2016_2019", "트래버스_2020_2023", "a8_d5_2018_2023", "sm5_노바_2015_2019", "gle_클래스_w167_2019_2024", "g80_rg3_2021_2023", "m4_f82_2015_2020", "1시리즈_f20_2013_2015", "프리우스_4세대_2019_2022", "1시리즈_f20_2016_2019", "컨티넨탈_gt_2세대_2012_2017", "더_뉴_sm6_2021_2024", "더_뉴_아이오닉_하이브리드_2020", "더_뉴_k5_하이브리드_3세대_2023_2025", "뉴_es300h_2013_2015", "sm7_노바_2015_2019", "x7_g07_2023_2025", "q50_2014_2017", "시에나_4세대_2021_2024", "모델_y_2021_2025", "더_뉴_k7_2013_2016", "스포티지_더_볼드_2019_2022", "glc_클래스_x253_2023", "2시리즈_액티브_투어러_u06_2022_2024", "레인지로버_4세대_2014_2017", "더_뉴_k3_2016_2018", "ix_2022_2024", "쿠퍼_컨버터블_2016_2024", "컨티넨탈_10세대_2017_2019", "eq900_2016_2018", "프리우스_c_2018_2020", "에스컬레이드_2015_2020", "스포티지_4세대_2016_2018", "싼타페_더_프라임_2016_2018", "올_뉴_쏘렌토_2015_2017", "gv70_2021_2023", "디_올뉴그랜저_2023_2025", "ux250h_2019_2024", "그랜드카니발_2006_2010", "x3_g01_2022_2024", "m2_f87_2016_2021", "q5_fy_2020", "뉴_sm5_플래티넘_2013_2014", "더_뉴_셀토스_2023_2025", "컨티넨탈_gt_3세대_2018_2023", "q8_4m_2020_2025", "아이오닉_하이브리드_2016_2019", "q5_fy_2021_2024", "그랜저_hg_2011_2014", "5시리즈_g60_2024_2025", "더_뉴_k5_3세대_2024_2025", "그랜저_gn7_2023_2025", "쏘나타_뉴_라이즈_2018_2019", "디_올뉴니로ev_2023_2024", "레인지로버_스포츠_2세대_2018_2022", "어코드_10세대_2018_2022", "올_뉴_투싼_tl_2016_2018", "x6_g06_2020_2023", "에스컬레이드_5세대_2021_2024", "xc90_2세대_2017_2019", "3시리즈_f30_2013_2018", "glc_클래스_x253_2017_2019", "콰트로포르테_2017_2022", "x2_f39_2018_2023", "더_뉴_모닝_2015_2016", "라브4_5세대_2019_2024", "뉴_스타일_코란도_c_2017_2019", "뉴_qm5_2012_2014", "스타리아_2022_2025", "718_카이맨_2017_2024", "아슬란_2015_2018", "뉴_es300h_2016_2018", "말리부_2012_2016", "더_뉴_아반떼_ad_2019_2020", "티볼리_아머_2018_2019", "더_뉴스포티지r_2014_2016", "1시리즈_f40_2020_2024", "x5_f15_2014_2018", "컴패스_2세대_2018_2022", 
"더_뉴_니로_2020_2022", "레니게이드_2015_2017", "캠리_xv70_2018_2024", "레인지로버_스포츠_2세대_2013_2017", "s_클래스_w221_2006_2013", "쿠퍼_클럽맨_2016_2024", "에쿠스_신형_2010_2015", "new_xf_2012_2015", "아이오닉6_2023_2025", "올_뉴_투싼_tl_2019_2020", "아반떼_ad_2016_2018", "g80_rg3_2025", "올_뉴_렉스턴_2021_2025", "일렉트리파이드_gv70_2022_2024", "yf쏘나타_하이브리드_2011_2015", "더_올뉴g80_2021_2024", "그랑_콜레오스_2025", "뷰티풀_코란도_2019_2024", "디스커버리_5_2017_2020", "팰리세이드_lx3_2025", "v90_크로스컨트리_2018_2024", "아반떼_md_2011_2014", "뉴_a6_2015_2018", "몬데오_4세대_2015_2020", "2시리즈_그란쿠페_f44_2020_2024", "g90_2019_2022", "체로키_kl_2019_2023", "m5_f90_2018_2023", "5시리즈_gt_f07_2010_2017", "뉴_카이엔_2011_2018", "디스커버리_스포츠_2세대_2020_2025", "ev9_2024_2025", "e_클래스_w212_2010_2016", "레니게이드_2019_2023", "7시리즈_g11_2016_2018", "더_뉴_싼타페_2021_2023", "뉴qm3_2018_2019", "4시리즈_g22_2021_2023", "3시리즈_g20_2019_2022", "더_뉴_k9_2세대_2022_2025", "아반떼_하이브리드_cn7_2021_2023", "더_뉴_팰리세이드_2023_2024", "c_클래스_w206_2022_2024", "2시리즈_액티브_투어러_f45_2019_2021", "x1_f48_2016_2019", "i4_2022_2024", "그랜드_스타렉스_2016_2018", "티볼리_2015_2018", "쿠퍼_컨트리맨_2012_2015", "투싼_nx4_2021_2023", "x4_g02_2022_2025", "x5_g05_2024_2025", "x7_g07_2019_2022", "티볼리_에어_2021_2022", "콰트로포르테_2014_2016", "레인지로버_5세대_2023_2024", "뉴_sm5_임프레션_2008_2010", "더_뉴_그랜저_ig_2020_2023", "더_넥스트_스파크_2016_2018", "f150_2004_2021", "콜로라도_2020_2020", "박스터_718_2017_2024", "뉴_티구안_2012_2016", "디_올_뉴_니로_2022_2025", "카니발_4세대_2021", "더_뉴_그랜드_스타렉스_2018_2021", "기블리_2014_2023", "더_뉴_기아_레이_2022_2025", "뉴_a6_2012_2014", "yf쏘나타_2009_2012", "gv80_2024_2025", "더_뉴_레이_2018_2022", "더_뉴_qm6_2024_2025", "르반떼_2017_2022", "i30_pd_2017_2018", "더_뉴_투싼_nx4_2023_2025", "트레일블레이저_2023", "쏘나타_디_엣지_dn8_2024_2025", "글래디에이터_jt_2020_2023", "뉴_cc_2012_2016", "cle_클래스_c236_2024_2025", "디_올_뉴_스포티지_2022_2024", "a4_b9_2016_2019", "쏘렌토_4세대_2021_2023", "a4_b9_2020_2024", "q30_2017_2019", "5008_2세대_2018_2019", "레인지로버_4세대_2018_2022", "s90_2021_2025", "6시리즈_gt_g32_2021_2024", "e_트론_2020_2023", "gle_클래스_w166_2016_2018", "glc_클래스_x254_2023_2025", "싼타페_mx5_2024_2025", "티구안_올스페이스_2018_2023", "렉스턴_스포츠_칸_2019_2020", "v40_2015_2018", "4시리즈_g22_2024_2025", "더_k9_2019_2021", "x6_f16_2015_2019", "랭글러_jk_2009_2017", "익스플로러_6세대_2020_2025", "마칸_2014_2018", "k7_프리미어_2020_2021", "뉴_제타_2011_2016", "ev6_2022_2024", "더_뉴_k3_2세대_2022_2024", "e_클래스_w213_2017_2020", "4시리즈_f32_2014_2020", "s_클래스_w222_2014_2020", "그랜드_체로키_wl_2021_2023", "모델_3_2024_2025", "더_뉴_쏘렌토_4세대_2024_2025", "a6_c8_2019_2025", "3시리즈_e90_2005_2012", "더_뉴_모닝_ja_2024_2025", "더_뉴_말리부_2019_2022", "트랙스_크로스오버_2024_2025", "e_클래스_w214_2024_2025", "뉴_체어맨_w_2012_2016", "아테온_2018_2023", "q7_4m_2020_2023", "더_뉴_카니발_4세대_2024_2025", "스팅어_마이스터_2021_2023", "mkc_2015_2018", "디스커버리_스포츠_2015_2019", "더_뉴_코란도_스포츠_2016_2018", "그랜저_hg_2015_2017", "c_클래스_w205_2015_2021", "cla_클래스_c117_2014_2019", "벨로스터_js_2018_2020", "k8_2022_2024", "g70_2018_2020", "s90_2017_2020", "7시리즈_f01_2009_2015", "마칸_2022_2024", "7시리즈_g11_2019_2022", "익스플로러_2016_2017", "트랙스_2013_2016", "코나_2018_2020", "x4_g02_2019_2021", "더_뉴_쏘렌토_2018_2020", "더_뉴_코나_2021_2023", "g90_rs4_2022_2025", "k5_2세대_2016_2018", "그랜저tg_2007_2008", "아반떼_cn7_2021_2023", "베뉴_2020_2024", "올_뉴_모닝_ja_2017_2020", "뉴_qm6_2021_2023", "올_뉴_카니발_2015_2019", "cls_클래스_c257_2019_2023", "x1_f48_2020_2022", "ct6_2016_2018", "익스플로러_2018_2019", "카니발_4세대_2022_2023", "lf_쏘나타_2015_2017", "더_뉴_아반떼_2014_2016", "렉스턴_스포츠_2018_2021", "스포티지_5세대_2022_2024", "디스커버리_5_2022_2024", "all_new_xj_2016_2019", "qm6_2017_2019", "파사트_gt_b8_2018_2022", "x5_g05_2019_2023", "쿠퍼_컨트리맨_2016_2024", "타이칸_2021_2025", "e_pace_2018_2020", "cla_클래스_c118_2020_2025", "x4_f26_2015_2018", 
"알티마_2017_2018", "gla_클래스_h247_2020_2025", "디스커버리_4_2010_2016", "프리우스_4세대_2016_2018", "그랜드_체로키_2014_2020", "911_2003_2019", "임팔라_2016_2019", "2008_2015_2017", "amg_gt_2016_2024", "cls_클래스_w218_2012_2017", "코세어_2020_2022", "그랜저_ig_2017_2019", "a7_4k_2020_2024", "디펜더_l663_2020_2025", "c_클래스_w204_2008_2015", "더_뉴_렉스턴_스포츠_칸_2021_2025", "스토닉_2018_2020", "gls_클래스_x166_2017_2019", "카이엔_po536_2019_2023", "glc_클래스_x253_2020_2022", "더_뉴_트랙스_2017_2022", "911_992_2020_2024", "s60_3세대_2020_2024", "5시리즈_f10_2010_2016", "x3_g01_2018_2021", "g4_렉스턴_2018_2020", "올란도_2012_2018", "xf_x260_2016_2020", "디_올뉴싼타페_2024_2025", "5시리즈_g30_2017_2023", "f_pace_2017_2019", "6시리즈_f12_2011_2018", "골프_7세대_2013_2016", "xe_2016_2019", "파나메라_2010_2016", "코나_sx2_2023_2025", "엑센트_신형_2011_2019", "xj_8세대_2010_2019", "더_올뉴투싼_하이브리드_2021_2023", "es300h_7세대_2019_2026", "뉴_gv80_2024_2025", "더_뉴_렉스턴_스포츠_2021_2025", "a_클래스_w177_2020_2025", "레인지로버_벨라_2018_2019", "g80_2017_2020", "레이_2012_2017", "레인지로버_이보크_2세대_2020_2022", "액티언_2세대_2025", "3시리즈_g20_2023_2025", "v60_크로스컨트리_2세대_2020_2025", "스팅어_2018_2020", "더_뉴_qm6_2020_2023", "xm3_2024" ]
gashiari/vit-utility-poles
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-utility-poles This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the utility-poles-local dataset. It achieves the following results on the evaluation set: - Loss: 4.1032 - Accuracy: 0.1818 - Precision: 0.1363 - Recall: 0.1818 - F1: 0.1347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 20 | 4.4879 | 0.0260 | 0.0017 | 0.0260 | 0.0031 | | No log | 2.0 | 40 | 4.4301 | 0.0390 | 0.0018 | 0.0390 | 0.0034 | | No log | 3.0 | 60 | 4.3691 | 0.0649 | 0.0549 | 0.0649 | 0.0379 | | No log | 4.0 | 80 | 4.3187 | 0.0779 | 0.1134 | 0.0779 | 0.0565 | | 3.9392 | 5.0 | 100 | 4.2822 | 0.1169 | 0.1133 | 0.1169 | 0.0926 | | 3.9392 | 6.0 | 120 | 4.2440 | 0.1299 | 0.1187 | 0.1299 | 0.0998 | | 3.9392 | 7.0 | 140 | 4.2341 | 0.1169 | 0.1239 | 0.1169 | 0.1001 | | 3.9392 | 8.0 | 160 | 4.2013 | 0.1558 | 0.1449 | 0.1558 | 0.1262 | | 3.9392 | 9.0 | 180 | 4.1658 | 0.1688 | 0.1301 | 0.1688 | 0.1303 | | 2.5046 | 10.0 | 200 | 4.1695 | 0.1429 | 0.1168 | 0.1429 | 0.1098 | | 2.5046 | 11.0 | 220 | 4.1433 | 0.1688 | 0.1245 | 0.1688 | 0.1257 | | 2.5046 | 12.0 | 240 | 4.1359 | 0.1818 | 0.1428 | 0.1818 | 0.1400 | | 2.5046 | 13.0 | 260 | 4.1293 | 0.1688 | 0.1276 | 0.1688 | 0.1259 | | 2.5046 | 14.0 | 280 | 4.1201 | 0.1688 | 0.1246 | 0.1688 | 0.1237 | | 1.6736 | 15.0 | 300 | 4.1130 | 0.1818 | 0.1445 | 0.1818 | 0.1413 | | 1.6736 | 16.0 | 320 | 4.1112 | 0.1818 | 0.1367 | 0.1818 | 0.1354 | | 1.6736 | 17.0 | 340 | 4.1046 | 0.1688 | 0.1285 | 0.1688 | 0.1267 | | 1.6736 | 18.0 | 360 | 4.1025 | 0.1818 | 0.1361 | 0.1818 | 0.1344 | | 1.6736 | 19.0 | 380 | 4.1043 | 0.1818 | 0.1363 | 0.1818 | 0.1347 | | 1.3141 | 20.0 | 400 | 4.1032 | 0.1818 | 0.1363 | 0.1818 | 0.1347 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.6.0+cu118 - Datasets 3.5.0 - Tokenizers 0.21.1
[ "albania", "argentina", "australia", "austria", "bangladesh", "belgium", "bhutan", "bolivia", "bosnia", "botswana", "brazil", "bulgaria", "cambodia", "canada", "chile", "china", "christmas", "colombia", "costa", "curaçao", "cyprus", "czech", "dominican", "ecuador", "estonia", "eswatini", "faroe", "france", "germany", "ghana", "greece", "greenland", "guam", "guatemala", "hong", "hungary", "iceland", "india", "indonesia", "ireland", "israel", "italy", "japan", "jordan", "kazakhstan", "kenya", "kyrgyzstan", "laos", "latvia", "lebanon", "lesotho", "liechtenstein", "lithuania", "luxembourg", "madagascar", "malaysia", "mali", "malta", "mexico", "montenegro", "netherlands", "new", "nigeria", "north", "norway", "oman", "panama", "peru", "philippines", "pitcairn", "poland", "portugal", "puerto", "romania", "russia", "rwanda", "san", "senegal", "serbia", "singapore", "slovakia", "slovenia", "south", "spain", "sri", "sweden", "switzerland", "são", "taiwan", "thailand", "tunisia", "turkey", "uganda", "ukraine", "united", "uruguay" ]
forrestng/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0524 - Accuracy: 0.9848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1968 | 1.0 | 190 | 0.1134 | 0.9644 | | 0.1388 | 2.0 | 380 | 0.0574 | 0.9837 | | 0.1071 | 3.0 | 570 | 0.0524 | 0.9848 | ### Framework versions - Transformers 4.52.4 - Pytorch 2.7.0 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "annual crop", "forest", "herbaceous vegetation", "highway", "industrial", "pasture", "permanent crop", "residential", "river", "sea or lake" ]
fuji12345/vit-base-anime-e100
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-anime-e100 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0757 - Accuracy: 0.9804 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.53.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
[ "hallucination", "normal" ]
BeckerAnas/hardy-bee-220
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hardy-bee-220

This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0965
- Accuracy: 0.9648
- Precision: 0.9660
- Recall: 0.9648
- F1: 0.9650
- ROC AUC: 0.9978

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | ROC AUC |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 1.3971 | 1.0 | 17 | 1.3628 | 0.3997 | 0.4611 | 0.3997 | 0.3922 | 0.6197 |
| 1.343 | 2.0 | 34 | 1.2959 | 0.4206 | 0.5256 | 0.4206 | 0.4184 | 0.7270 |
| 1.218 | 3.0 | 51 | 1.1328 | 0.5104 | 0.5209 | 0.5104 | 0.5114 | 0.7725 |
| 1.0866 | 4.0 | 68 | 1.0127 | 0.5482 | 0.5251 | 0.5482 | 0.5135 | 0.7935 |
| 0.979 | 5.0 | 85 | 0.9274 | 0.5182 | 0.5688 | 0.5182 | 0.5264 | 0.7989 |
| 0.7709 | 6.0 | 102 | 0.8019 | 0.5508 | 0.5959 | 0.5508 | 0.5541 | 0.8207 |
| 0.6793 | 7.0 | 119 | 0.7300 | 0.5729 | 0.6291 | 0.5729 | 0.5779 | 0.8376 |
| 0.7004 | 8.0 | 136 | 0.7979 | 0.5820 | 0.6178 | 0.5820 | 0.5900 | 0.8433 |
| 0.5933 | 9.0 | 153 | 0.7219 | 0.5456 | 0.6553 | 0.5456 | 0.5559 | 0.8498 |
| 0.5292 | 10.0 | 170 | 0.6114 | 0.6888 | 0.7024 | 0.6888 | 0.6911 | 0.8923 |
| 0.4391 | 11.0 | 187 | 0.5453 | 0.6758 | 0.7536 | 0.6758 | 0.6758 | 0.9062 |
| 0.3856 | 12.0 | 204 | 0.5515 | 0.6810 | 0.7299 | 0.6810 | 0.6793 | 0.9156 |
| 0.3171 | 13.0 | 221 | 0.4486 | 0.7188 | 0.7829 | 0.7188 | 0.7172 | 0.9330 |
| 0.3035 | 14.0 | 238 | 0.4295 | 0.7227 | 0.7859 | 0.7227 | 0.7248 | 0.9364 |
| 0.2196 | 15.0 | 255 | 0.3438 | 0.8229 | 0.8298 | 0.8229 | 0.8243 | 0.9563 |
| 0.1842 | 16.0 | 272 | 0.2979 | 0.8190 | 0.8539 | 0.8190 | 0.8180 | 0.9690 |
| 0.1423 | 17.0 | 289 | 0.2747 | 0.8646 | 0.8638 | 0.8646 | 0.8638 | 0.9717 |
| 0.1387 | 18.0 | 306 | 0.3680 | 0.7904 | 0.8535 | 0.7904 | 0.7934 | 0.9793 |
| 0.1137 | 19.0 | 323 | 0.2358 | 0.8490 | 0.8744 | 0.8490 | 0.8486 | 0.9824 |
| 0.1013 | 20.0 | 340 | 0.1700 | 0.9193 | 0.9191 | 0.9193 | 0.9191 | 0.9894 |
| 0.0763 | 21.0 | 357 | 0.1573 | 0.9193 | 0.9241 | 0.9193 | 0.9199 | 0.9906 |
| 0.0648 | 22.0 | 374 | 0.1423 | 0.9323 | 0.9336 | 0.9323 | 0.9324 | 0.9915 |
| 0.0433 | 23.0 | 391 | 0.1344 | 0.9414 | 0.9413 | 0.9414 | 0.9412 | 0.9933 |
| 0.0392 | 24.0 | 408 | 0.1444 | 0.9427 | 0.9444 | 0.9427 | 0.9423 | 0.9938 |
| 0.0282 | 25.0 | 425 | 0.1134 | 0.9622 | 0.9627 | 0.9622 | 0.9623 | 0.9952 |
| 0.0249 | 26.0 | 442 | 0.1243 | 0.9466 | 0.9500 | 0.9466 | 0.9470 | 0.9953 |
| 0.015 | 27.0 | 459 | 0.1377 | 0.9336 | 0.9379 | 0.9336 | 0.9339 | 0.9959 |
| 0.0175 | 28.0 | 476 | 0.1320 | 0.9492 | 0.9497 | 0.9492 | 0.9493 | 0.9954 |
| 0.029 | 29.0 | 493 | 0.1202 | 0.9583 | 0.9592 | 0.9583 | 0.9582 | 0.9961 |
| 0.0138 | 30.0 | 510 | 0.0889 | 0.9714 | 0.9714 | 0.9714 | 0.9714 | 0.9976 |
| 0.0135 | 31.0 | 527 | 0.1064 | 0.9622 | 0.9635 | 0.9622 | 0.9624 | 0.9969 |
| 0.0076 | 32.0 | 544 | 0.1238 | 0.9427 | 0.9466 | 0.9427 | 0.9428 | 0.9969 |
| 0.0098 | 33.0 | 561 | 0.0871 | 0.9635 | 0.9643 | 0.9635 | 0.9636 | 0.9974 |
| 0.0066 | 34.0 | 578 | 0.1342 | 0.9518 | 0.9547 | 0.9518 | 0.9522 | 0.9968 |
| 0.0055 | 35.0 | 595 | 0.0965 | 0.9648 | 0.9660 | 0.9648 | 0.9650 | 0.9978 |

### Framework versions

- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.0
[ "mild_demented", "moderate_demented", "non_demented", "very_mild_demented" ]
tomdickharryeth/roadwork
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ferzanagehringer/vit-food-classification
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-food-classification

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.3174
- Accuracy: 0.9048

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 53 | 0.6396 | 0.9143 |
| 0.845 | 2.0 | 106 | 0.3710 | 0.9333 |
| 0.845 | 3.0 | 159 | 0.3014 | 0.9333 |
| 0.2995 | 4.0 | 212 | 0.2783 | 0.9238 |
| 0.2995 | 5.0 | 265 | 0.2712 | 0.9143 |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.1
[ "0", "1", "2", "3", "4", "5", "6" ]
ideepankarsharma2003/clothing-multilabel
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# clothing-multilabel

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3360
- Exact Match Accuracy: 0.7919

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2955 | 1.0 | 1864 | 0.2995 | 0.7884 |
| 0.228 | 2.0 | 3728 | 0.2937 | 0.7958 |
| 0.117 | 3.0 | 5592 | 0.3360 | 0.7919 |

### Framework versions

- Transformers 4.52.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
[ "longpants", "longsleeves" ]
obiwan001/roadwork
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
walzsil1/vit-base-fruits-360_1
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-fruits-360_1

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the PedroSampaio/fruits-360 dataset. It achieves the following results on the evaluation set:

- Loss: 0.0267
- Accuracy: 0.9921

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0196 | 1.0 | 2091 | 0.0714 | 0.9828 |
| 0.0072 | 2.0 | 4182 | 0.0452 | 0.9868 |
| 0.0038 | 3.0 | 6273 | 0.0369 | 0.9904 |
| 0.0021 | 4.0 | 8364 | 0.0348 | 0.9900 |
| 0.0019 | 5.0 | 10455 | 0.0324 | 0.9912 |

### Framework versions

- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
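Since the vit-base-fruits-360_1 card names both its base checkpoint and the PedroSampaio/fruits-360 dataset, the preprocessing step can be sketched as below; this is not the author's training code, and the `image` column name is an assumption about that dataset. The per-example labels for this entry follow.

```python
from datasets import load_dataset
from transformers import ViTImageProcessor

# Load the dataset and image processor named in the card above.
dataset = load_dataset("PedroSampaio/fruits-360", split="train")
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")

example = dataset[0]  # assumption: examples expose an "image" column with PIL images
inputs = processor(images=example["image"].convert("RGB"), return_tensors="pt")
print(inputs["pixel_values"].shape)  # a single resized, normalized tensor: (1, 3, 224, 224)
```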
[ "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", 
"2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "2", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "3", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", 
"4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "4", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "5", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "6", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", 
"7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "7", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "8", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", 
"9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "9", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "10", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", 
"11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "11", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "12", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", 
"13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "13", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "14", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", 
"15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "15", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "16", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", 
"17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "17", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "18", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", 
"19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "19", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "20", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", 
"21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "21", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "22", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", 
"23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "23", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "24", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", 
"25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "25", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "26", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", 
"27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "27", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "28", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", 
"29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "29", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "30", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", 
"31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "31", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "32", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", 
"33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "33", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "34", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", 
"35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "35", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "36", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", 
"37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "37", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "38", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", 
"39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "39", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "40", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", 
"41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "41", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "42", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", 
"43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "43", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "44", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", 
"45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "45", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "46", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", 
"47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "47", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "48", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", 
"49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "49", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "50", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", 
"51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "51", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "52", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", 
"53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "53", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "54", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", 
"55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "55", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "56", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", 
"57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "57", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "58", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", 
"59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "59", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "60", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", 
"61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "61", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "62", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", 
"63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "63", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "64", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", 
"65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "65", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "66", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", 
"67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "67", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "68", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", 
"69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "69", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "70", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", 
"71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "71", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "72", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", 
"73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "73", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "74", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", 
"75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "75", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "76", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", 
"77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "77", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "78", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", 
"79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "79", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "80", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", 
"81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "81", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "82", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", 
"83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "83", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "84", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", 
"85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "85", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "86", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", 
"87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "87", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "88", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", 
"89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "89", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "90", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", 
"91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "91", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "92", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", 
"93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "93", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "94", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", 
"95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "95", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "96", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", 
"97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "97", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "98", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", 
"99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "99", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "100", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", 
"101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "101", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "102", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", 
"103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "103", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", 
"104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "104", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "105", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", 
"106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "106", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "107", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", 
"108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "108", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", "109", 
"109", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "110", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", 
"111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "111", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112", "112" ]
juliansalas080/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0531
- Accuracy: 0.9826

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 34
- eval_batch_size: 34
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 136
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2187        | 1.0   | 179  | 0.1116          | 0.9637   |
| 0.1768        | 2.0   | 358  | 0.0645          | 0.98     |
| 0.1325        | 3.0   | 537  | 0.0531          | 0.9826   |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
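The card leaves the usage details to the reader, so here is a minimal inference sketch. It assumes the checkpoint works with the stock `transformers` image-classification pipeline and that the ten land-cover labels listed below are its `id2label` values; the image path is a placeholder.

```python
# Minimal sketch: classify a single satellite image tile with the fine-tuned
# Swin checkpoint via the standard pipeline. The image path is a placeholder.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="juliansalas080/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

image = Image.open("satellite_tile.jpg")  # hypothetical input image
for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```

The pipeline returns the top labels with scores; for bulk scoring you would pass a list of images instead of a single one.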
[ "annual crop", "forest", "herbaceous vegetation", "highway", "industrial", "pasture", "permanent crop", "residential", "river", "sea or lake" ]
fuji12345/vit-base-anime-e10
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-anime-e10

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0075
- Accuracy: 0.9986

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
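To complement the hyperparameter list above, the sketch below shows direct (non-pipeline) inference with the fine-tuned ViT checkpoint. It assumes the repository ships the usual image processor and config for a ViT classifier; the input file name is a placeholder.

```python
# Sketch of direct inference with the fine-tuned ViT classifier, assuming the
# checkpoint provides the standard processor/config pair. File name is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "fuji12345/vit-base-anime-e10"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("frame.png").convert("RGB")  # hypothetical anime frame
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])  # e.g. "hallucination" or "normal"
```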
[ "hallucination", "normal" ]
roobahist/swin-tiny-patch4-window7-224-finetuned-eurosat
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-eurosat

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0741
- Accuracy: 0.9746

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5073        | 1.0   | 352  | 0.1282          | 0.9588   |
| 0.3935        | 2.0   | 704  | 0.0865          | 0.9716   |
| 0.3491        | 3.0   | 1056 | 0.0741          | 0.9746   |

### Framework versions

- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
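The card lists the hyperparameters but not the training script. A hedged sketch of `TrainingArguments` that mirrors that list is shown below; the `output_dir` and the omitted dataset/`Trainer` wiring are assumptions, not the author's actual setup.

```python
# Sketch of TrainingArguments matching the hyperparameters listed in the card
# (lr 5e-05, per-device batch size 32, gradient accumulation 4, linear schedule,
# 10% warmup, 3 epochs). Dataset loading and the Trainer itself are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # effective train batch size 128
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    seed=42,
)
```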
[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck" ]
fuji12345/vit-base-anime-e10_pure
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-base-anime-e10_pure

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0274
- Accuracy: 0.9947

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

### Framework versions

- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
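Since the card only says the model was trained on "the imagefolder dataset", the sketch below shows one way to score a local image folder with `datasets` and the image-classification pipeline and compute plain accuracy. The `data_dir` path and split name are placeholders, not the author's actual data.

```python
# Hedged sketch: evaluate the checkpoint on a local image-folder split.
# The data_dir path and split name are placeholders.
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("image-classification", model="fuji12345/vit-base-anime-e10_pure")
dataset = load_dataset("imagefolder", data_dir="path/to/eval_images", split="train")

label_feature = dataset.features["label"]  # ClassLabel built from folder names
correct = 0
for example in dataset:
    top_label = classifier(example["image"], top_k=1)[0]["label"]
    correct += int(top_label == label_feature.int2str(example["label"]))

print(f"accuracy: {correct / len(dataset):.4f}")
```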
[ "hallucination", "normal" ]
BeckerAnas/polar-pond-221
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# polar-pond-221

This model is a fine-tuned version of [facebook/convnextv2-base-1k-224](https://huggingface.co/facebook/convnextv2-base-1k-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1960
- Accuracy: 0.9688
- Precision: 0.9703
- Recall: 0.9688
- F1: 0.9683
- Roc Auc: 0.9987

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 1.3843        | 1.0   | 17   | 1.3585          | 0.4896   | 0.2458    | 0.4896 | 0.3273 | 0.7094  |
| 1.3004        | 2.0   | 34   | 1.2273          | 0.4180   | 0.5441    | 0.4180 | 0.4640 | 0.7541  |
| 1.0841        | 3.0   | 51   | 1.1291          | 0.5430   | 0.5094    | 0.5430 | 0.4415 | 0.8077  |
| 0.8792        | 4.0   | 68   | 0.7692          | 0.4831   | 0.6620    | 0.4831 | 0.5065 | 0.7978  |
| 0.7057        | 5.0   | 85   | 0.7192          | 0.6510   | 0.6349    | 0.6510 | 0.6354 | 0.8585  |
| 0.6424        | 6.0   | 102  | 0.6292          | 0.5299   | 0.6840    | 0.5299 | 0.5244 | 0.8628  |
| 0.5829        | 7.0   | 119  | 0.5684          | 0.5898   | 0.7109    | 0.5898 | 0.6047 | 0.8724  |
| 0.4313        | 8.0   | 136  | 0.3756          | 0.7930   | 0.7945    | 0.7930 | 0.7936 | 0.9476  |
| 0.2881        | 9.0   | 153  | 0.2655          | 0.8516   | 0.8713    | 0.8516 | 0.8530 | 0.9716  |
| 0.1871        | 10.0  | 170  | 0.3171          | 0.8060   | 0.8729    | 0.8060 | 0.8089 | 0.9834  |
| 0.158         | 11.0  | 187  | 0.1419          | 0.9440   | 0.9441    | 0.9440 | 0.9439 | 0.9921  |
| 0.1137        | 12.0  | 204  | 0.1567          | 0.9245   | 0.9283    | 0.9245 | 0.9232 | 0.9932  |
| 0.0658        | 13.0  | 221  | 0.1298          | 0.9453   | 0.9462    | 0.9453 | 0.9455 | 0.9944  |
| 0.0696        | 14.0  | 238  | 0.1345          | 0.9466   | 0.9470    | 0.9466 | 0.9467 | 0.9948  |
| 0.043         | 15.0  | 255  | 0.1541          | 0.9674   | 0.9684    | 0.9674 | 0.9674 | 0.9972  |
| 0.0393        | 16.0  | 272  | 0.0805          | 0.9622   | 0.9633    | 0.9622 | 0.9624 | 0.9973  |
| 0.0339        | 17.0  | 289  | 0.1905          | 0.9466   | 0.9522    | 0.9466 | 0.9469 | 0.9966  |
| 0.0466        | 18.0  | 306  | 0.1001          | 0.9401   | 0.9468    | 0.9401 | 0.9413 | 0.9975  |
| 0.0322        | 19.0  | 323  | 0.0643          | 0.9792   | 0.9800    | 0.9792 | 0.9792 | 0.9990  |
| 0.0184        | 20.0  | 340  | 0.1204          | 0.9844   | 0.9846    | 0.9844 | 0.9842 | 0.9985  |
| 0.0201        | 21.0  | 357  | 0.0876          | 0.9779   | 0.9786    | 0.9779 | 0.9778 | 0.9992  |
| 0.0212        | 22.0  | 374  | 0.1340          | 0.9570   | 0.9611    | 0.9570 | 0.9574 | 0.9985  |
| 0.0168        | 23.0  | 391  | 0.0807          | 0.9661   | 0.9689    | 0.9661 | 0.9664 | 0.9992  |
| 0.0158        | 24.0  | 408  | 0.1210          | 0.9740   | 0.9745    | 0.9740 | 0.9738 | 0.9983  |
| 0.0135        | 25.0  | 425  | 0.1960          | 0.9688   | 0.9703    | 0.9688 | 0.9683 | 0.9987  |

### Framework versions

- Transformers 4.52.3
- Pytorch 2.7.0+cpu
- Datasets 3.6.0
- Tokenizers 0.21.0
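The card reports accuracy, precision, recall, F1 and ROC AUC per epoch but does not include the metric code. The sketch below is one plausible `compute_metrics` implementation for a multi-class `Trainer` run; the averaging choices (weighted for precision/recall/F1, one-vs-rest macro for ROC AUC) are assumptions, not the author's confirmed setup.

```python
# One plausible compute_metrics producing the reported metric columns.
# Assumptions: weighted averaging for P/R/F1, one-vs-rest macro ROC AUC.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    precision_recall_fscore_support,
    roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # softmax over classes so roc_auc_score receives probabilities
    exps = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exps / exps.sum(axis=-1, keepdims=True)
    preds = np.argmax(logits, axis=-1)

    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "roc_auc": roc_auc_score(labels, probs, multi_class="ovr", average="macro"),
    }
```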
[ "mild_demented", "moderate_demented", "non_demented", "very_mild_demented" ]
takahashi111/vit-base-patch16-224_identify-alzheimer-stage
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
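The "How to Get Started with the Model" section above is left as [More Information Needed]. A minimal sketch, assuming the checkpoint is a standard ViT image classifier loadable through the pipeline API, could look like the following; the MRI file name is a placeholder.

```python
# Hedged sketch: load the checkpoint with the image-classification pipeline and
# print scores for all four dementia-stage labels. The input path is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="takahashi111/vit-base-patch16-224_identify-alzheimer-stage",
)
predictions = classifier("brain_mri_slice.png", top_k=4)  # hypothetical MRI slice image
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```

The `top_k=4` call returns one score per label in the `model_labels` list that follows.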
[ "mild dementia", "moderate dementia", "non dementia", "very mild dementia" ]
obiwan001/roadwork1
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
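The "How to Get Started with the Model" section is likewise empty. Below is a hedged sketch that scores a single street-level image with the binary none/roadwork classifier and prints both class probabilities; the file name is a placeholder, and the label names come from the `model_labels` list that follows.

```python
# Hedged sketch: score one image with the binary none/roadwork classifier and
# print a probability per class. The input file name is a placeholder.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "obiwan001/roadwork1"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("street_view.jpg").convert("RGB")  # hypothetical street image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1).squeeze()

for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```

Exposing both probabilities (rather than only the argmax) makes it easy to apply a custom decision threshold for the "roadwork" class.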
[ "none", "roadwork" ]
obiwan001/roadwork2
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
obiwan001/roadwork3
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
obiwan001/roadwork4
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
obiwan001/roadwork5
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
obiwan001/roadwork6
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ibuki95/vision_172_1
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ibuki95/vision_172_2
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ibuki95/vision_172_3
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ibuki95/vision_172_4
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ibuki95/vision_172_5
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ibuki95/vision_172_6
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
ibuki95/vision_172_7
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/z4d120gp
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/rslmt4qc
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/q9zi2d5l
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/bticuhww
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/rbvzru6i
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/f1dykuac
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/c94cqfjr
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/fjvu20dc
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/wxnag424
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/paof862m
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/fr1foukk
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
[ "none", "roadwork" ]
softdev629/to74gu2o
[ "none", "roadwork" ]
softdev629/rqza8di9
[ "none", "roadwork" ]
softdev629/swq5ka1u
[ "none", "roadwork" ]
softdev629/hgvpub96
[ "none", "roadwork" ]
softdev629/ncnhuoqa
[ "none", "roadwork" ]
softdev629/xjmuddvt
[ "none", "roadwork" ]
softdev629/ti92d88o
[ "none", "roadwork" ]
softdev629/bcgvmz6o
[ "none", "roadwork" ]
softdev629/y4wr84z8
[ "none", "roadwork" ]
wetherbeep/abc
[ "none", "roadwork" ]
softdev629/otx2ym4b
[ "none", "roadwork" ]
ProDev9515/roadwork-72-gqVkSn
[ "none", "roadwork" ]
ProDev9515/roadwork-72-qu1MFL
Auto-generated 🤗 transformers model card (unmodified template); every section, from Model Details through Model Card Contact, reads "[More Information Needed]".
[ "none", "roadwork" ]