poudel committed
Commit 509772e · verified · 1 Parent(s): fbd7818

Update README.md

Files changed (1)
  1. README.md +25 -35
README.md CHANGED
@@ -16,15 +16,12 @@ pipeline_tag: text-classification
16
 
17
  # Model Card for Model ID
18
 
19
- <!-- This is a fine-tuned BERT model (`bert-base-uncased`) used for classifying text into two categories: **Depression** or **Non-depression**. The model is designed for text classification and has been trained on a custom dataset of mental health-related posts from social media. -->
20
 
21
 
22
-
23
- ## Model Details
24
-
25
  ### Model Description
26
 
27
- <!-- This model aims to identify signs of depression in written text. It was trained on social media posts labeled as either indicative of depression or not. The model uses the BERT architecture for text classification and was fine-tuned specifically for this task. -->
28
 
29
  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
30
 
@@ -36,40 +33,35 @@ This is the model card of a 🤗 transformers model that has been pushed on the
36
 
37
  ### Model Sources [optional]
38
 
39
- <!-- Provide the basic links for the model. -->
40
 
41
  - **Repository:** [Sentiment Classifier for Depression](https://huggingface.co/poudel/sentiment-classifier)
42
  - **Demo [optional]:** [Live Gradio App](https://huggingface.co/spaces/poudel/Sentiment_classifier)
43
 
44
- ## Uses
45
-
46
- <!-- -->
47
-
48
- ### Direct Use
49
 
50
- <!-- This model is designed to classify text as either depression-related or non-depression-related. It can be used in social media sentiment analysis, mental health research, and automated text analysis systems. -->
51
 
 
52
 
53
- ### Downstream Use [optional]
54
 
55
- <!-- The model can be further fine-tuned for other types of sentiment analysis tasks related to mental health. -->
56
 
 
57
 
58
 
59
  ### Out-of-Scope Use
60
 
61
- <!-- The model should not be used for clinical diagnosis or decision-making without the input of medical professionals. It is also unsuitable for text that is not in English or very short/ambiguous inputs. -->
62
 
63
 
64
 
65
  ## Bias, Risks, and Limitations
66
 
67
- <!-- The model may suffer from biases inherent in the dataset, such as overrepresenting certain language patterns. It is trained on social media posts, which may not capture all the nuances of real-world conversations about mental health -->
68
 
69
 
70
  ### Recommendations
71
 
72
- <!-- Users should use the model with caution in sensitive applications such as mental health monitoring. It is advised that the model be used alongside professional judgment. -->
73
 
74
 
75
  ## How to Get Started with the Model
@@ -89,16 +81,16 @@ predicted_class = torch.argmax(outputs.logits).item()
89
 
90
  ### Training Data
91
 
92
- <!-- The model was trained on a custom dataset of tweets labeled as either depression-related or not. Data pre-processing included tokenization and removal of special characters. -->
93
 
94
 
95
  ### Training Procedure
96
 
97
- <!-- The model was trained using Hugging Face's transformers library. The training was conducted on a T4 GPU over 3 epochs, with a batch size of 16 and a learning rate of 5e-5. -->
98
 
99
  #### Preprocessing
100
 
101
- <!-- Text was lowercased, and special characters were removed as well as Tokenization was done using the bert-base-uncased tokenizer.-->
102
 
103
 
104
  #### Training Hyperparameters
@@ -110,21 +102,18 @@ predicted_class = torch.argmax(outputs.logits).item()
110
 
111
  #### Speeds, Sizes, Times
112
 
113
- <!--Training was conducted for approximately 1 hour on a T4 GPU in Google Colab. -->
114
 
115
 
116
- ## Evaluation
117
 
118
- ### Testing Data, Factors & Metrics
119
 
120
- #### Testing Data
121
 
122
- <!-- The model was evaluated on a 20% holdout set from the custom dataset. -->
123
 
124
 
125
  #### Metrics
126
-
127
- <!-- The model was evaluated using accuracy, precision, recall, and F1 score. -->
128
 
129
 
130
  ### Results
@@ -141,28 +130,29 @@ The model achieved high performance across all key metrics, indicating strong pr
141
 
142
  ## Environmental Impact
143
 
144
- <!-- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). -->
145
 
146
  - **Hardware Type:** [T4 GPU]
147
  - **Hours used:** [ 1 hour]
148
  - **Cloud Provider:** [Google Cloud (Colab)]
149
  - **Carbon Emitted:** [Estimated at 0.45 kg CO2eq]
150
 
151
- ## Technical Specifications [The model uses the BERT (bert-base-uncased) architecture and was fine-tuned for binary classification (depression vs non-depression).]
 
 
152
 
153
  ### Model Architecture and Objective
154
 
155
  #### Hardware
156
 
157
- [T4 GPU]
158
 
159
  #### Software
160
-
161
- [Hugging Face transformers library.]
162
 
163
  ## Citation [optional]
164
 
165
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
166
 
167
  **BibTeX:**
168
 
@@ -176,7 +166,7 @@ The model achieved high performance across all key metrics, indicating strong pr
176
 
177
  **APA:**
178
 
179
- [Poudel, A. (2024). Sentiment Classifier for Depression. Retrieved from https://huggingface.co/poudel/sentiment-classifier.]
180
 
181
 
182
  ## Model Card Authors
@@ -185,4 +175,4 @@ The model achieved high performance across all key metrics, indicating strong pr
185
 
186
  ## Model Card Contact
187
 
188
 
16
 
17
  # Model Card for Model ID
18
 
19
+ This is a fine-tuned BERT model (`bert-base-uncased`) that classifies text into two categories: **Depression** or **Non-depression**. It was trained on a custom dataset of mental health-related posts from social media.
20
 
21
 
 
 
 
22
  ### Model Description
23
 
24
+ This model aims to identify signs of depression in written text. It was trained on social media posts labeled as either indicative of depression or not. The model uses the BERT architecture for text classification and was fine-tuned specifically for this task.
25
 
26
  This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
27
 
 
33
 
34
  ### Model Sources [optional]
35
 
 
36
 
37
  - **Repository:** [Sentiment Classifier for Depression](https://huggingface.co/poudel/sentiment-classifier)
38
  - **Demo [optional]:** [Live Gradio App](https://huggingface.co/spaces/poudel/Sentiment_classifier)
39
 
 
 
 
 
 
40
 
41
+ ### Direct Use
42
 
43
+ This model is designed to classify text as either depression-related or non-depression-related. It can be used in social media sentiment analysis, mental health research, and automated text analysis systems.
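For illustration, the classifier can be queried through the 🤗 transformers `pipeline` API; the exact label names returned depend on the model's `id2label` configuration and may differ from the names above.

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub (repo id from the Model Sources section).
classifier = pipeline("text-classification", model="poudel/sentiment-classifier")

posts = [
    "I can't find the energy to get out of bed anymore.",
    "Had a great run this morning and feel fantastic!",
]
for post in posts:
    result = classifier(post)[0]
    # `label` and `score` come from the model's config; labels may read LABEL_0 / LABEL_1.
    print(f"{result['label']} ({result['score']:.3f}): {post}")
```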
44
 
 
45
 
46
+ ### Downstream Use
47
 
48
+ The model can be further fine-tuned for other types of sentiment analysis tasks related to mental health.
49
 
50
 
51
  ### Out-of-Scope Use
52
 
53
+ The model should not be used for clinical diagnosis or decision-making without the input of medical professionals. It is also unsuitable for non-English text and for very short or ambiguous inputs.
54
 
55
 
56
 
57
  ## Bias, Risks, and Limitations
58
 
59
+ The model may suffer from biases inherent in the dataset, such as overrepresenting certain language patterns. It is trained on social media posts, which may not capture all the nuances of real-world conversations about mental health.
60
 
61
 
62
  ### Recommendations
63
 
64
+ Users should apply the model with caution in sensitive applications such as mental health monitoring, and its output should always be paired with professional judgment.
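One possible way to follow this recommendation is to act only on high-confidence predictions and route everything else to human review; the sketch below assumes an arbitrary 0.9 threshold, which is not a validated cut-off.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("poudel/sentiment-classifier")
model = AutoModelForSequenceClassification.from_pretrained("poudel/sentiment-classifier")

def classify_with_review(text: str, threshold: float = 0.9):
    """Return a label only when the softmax confidence clears the (assumed) threshold."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    confidence, predicted = probs.max(dim=-1)
    if confidence.item() < threshold:
        return "needs human review", confidence.item()
    return model.config.id2label[predicted.item()], confidence.item()

print(classify_with_review("I just feel numb all the time."))
```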
65
 
66
 
67
  ## How to Get Started with the Model
 
81
 
82
  ### Training Data
83
 
84
+ The model was trained on a custom dataset of tweets labeled as either depression-related or not. Data pre-processing included tokenization and removal of special characters.
85
 
86
 
87
  ### Training Procedure
88
 
89
+ The model was trained using Hugging Face's transformers library. The training was conducted on a T4 GPU over 3 epochs, with a batch size of 16 and a learning rate of 5e-5.
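A minimal sketch of how a comparable fine-tuning run could be set up with the `Trainer` API is shown below; the two-example dataset is a placeholder standing in for the custom tweet data, which is not public.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Placeholder data standing in for the labelled posts (1 = depression, 0 = non-depression).
train_ds = Dataset.from_dict({
    "text": ["I feel hopeless and empty.", "What a beautiful day outside!"],
    "label": [1, 0],
}).map(lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128),
       batched=True)

# Hyperparameters stated above: 3 epochs, batch size 16, learning rate 5e-5.
args = TrainingArguments(output_dir="sentiment-classifier",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=5e-5)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```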
90
 
91
  #### Preprocessing
92
 
93
+ Text was lowercased and special characters were removed; tokenization was done using the bert-base-uncased tokenizer.
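The snippet below sketches this kind of cleaning and tokenization; the exact character filter applied to the original dataset is not documented, so the regular expression here is only an assumption.

```python
import re
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(text: str):
    # Lowercase and drop special characters (the uncased tokenizer lowercases again anyway).
    cleaned = re.sub(r"[^a-z0-9\s.,!?']", " ", text.lower())
    return tokenizer(cleaned, truncation=True, padding="max_length", max_length=128)

encoding = preprocess("Feeling really low today... can't cope :(")
print(encoding["input_ids"][:10])
```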
94
 
95
 
96
  #### Training Hyperparameters
 
102
 
103
  #### Speeds, Sizes, Times
104
 
105
+ Training was conducted for approximately 1 hour on a T4 GPU in Google Colab.
106
 
107
 
 
108
 
 
109
 
110
+ ### Evaluation and Testing Data
111
 
112
+ The model was evaluated on a 20% holdout set from the custom dataset.
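An 80/20 split like this can be reproduced with scikit-learn's `train_test_split`; the texts and labels below are placeholders for the labelled posts.

```python
from sklearn.model_selection import train_test_split

# Placeholder texts and labels standing in for the labelled posts.
texts = [f"post {i}" for i in range(10)]
labels = [0, 1] * 5

# 80/20 split with a fixed seed; stratify keeps the class balance in both parts.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)
print(len(train_texts), len(test_texts))
```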
113
 
114
 
115
  #### Metrics
116
+ The model was evaluated using accuracy, precision, recall, and F1 score.
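These metrics can be computed with scikit-learn; the label and prediction arrays below are placeholders rather than the model's actual outputs.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder holdout labels and predictions (1 = depression, 0 = non-depression).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"accuracy={accuracy_score(y_true, y_pred):.3f} "
      f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```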
 
117
 
118
 
119
  ### Results
 
130
 
131
  ## Environmental Impact
132
 
133
+ Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
134
 
135
- **Hardware Type:** T4 GPU
136
- **Hours used:** 1 hour
137
- **Cloud Provider:** Google Cloud (Colab)
138
- **Carbon Emitted:** Estimated at 0.45 kg CO2eq
139
 
140
+ ## Technical Specifications
141
+
142
+ The model uses the BERT (bert-base-uncased) architecture and was fine-tuned for binary classification (depression vs non-depression).
143
 
144
  ### Model Architecture and Objective
145
 
146
  #### Hardware
147
 
148
+ T4 GPU
149
 
150
  #### Software
151
+ Hugging Face transformers library.
 
152
 
153
  ## Citation [optional]
154
 
155
+ If you use this model, please cite it using the BibTeX or APA entry below.
156
 
157
  **BibTeX:**
158
 
 
166
 
167
  **APA:**
168
 
169
+ Poudel, A. (2024). Sentiment Classifier for Depression. Retrieved from https://huggingface.co/poudel/sentiment-classifier.
170
 
171
 
172
  ## Model Card Authors
 
175
 
176
  ## Model Card Contact
177
 
178