kparkhade committed on
Commit 2e964d1 · verified · 1 Parent(s): 4cce243

Update README.md

Files changed (1)
  1. README.md +49 -1
README.md CHANGED
@@ -10,4 +10,52 @@ metrics:
   - precision
  base_model:
   - google-bert/bert-base-uncased
- ---
+ ---
+
+ # Fine-Tuned BERT for IMDB Sentiment Classification
+
+ ![Hugging Face Model](https://huggingface.co/front/assets/huggingface_logo-noborder.svg)
+
+ ## Model Description
+ This is a fine-tuned version of [BERT-Base-Uncased](https://huggingface.co/google-bert/bert-base-uncased) for binary sentiment classification on the [IMDB dataset](https://huggingface.co/datasets/stanfordnlp/imdb). The model is trained to classify movie reviews as either **positive** or **negative**.
+
+ ## Model Details
+ - **Base Model**: [BERT-Base-Uncased](https://huggingface.co/google-bert/bert-base-uncased)
+ - **Dataset**: [IMDB Movie Reviews](https://huggingface.co/datasets/stanfordnlp/imdb)
+ - **Languages**: English (`en`)
+ - **Fine-tuning Epochs**: 3
+ - **Batch Size**: 8
+ - **Evaluation Metrics**: Accuracy, Precision, Recall (a training sketch using these settings follows this list)
+ - **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
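+ The training script itself is not included in this card. The sketch below, assuming the standard `transformers` `Trainer` API, shows how a run with these settings (3 epochs, batch size 8, accuracy/precision/recall) could look; the `max_length`, `output_dir`, and metric wiring are illustrative assumptions, not the author's code.
+
+ ```python
+ import numpy as np
+ import evaluate
+ from datasets import load_dataset
+ from transformers import (BertForSequenceClassification, BertTokenizer,
+                           Trainer, TrainingArguments)
+
+ # Load the IMDB dataset and tokenize it with the base BERT tokenizer
+ dataset = load_dataset("stanfordnlp/imdb")
+ tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")
+
+ def tokenize(batch):
+     # max_length=256 is an assumption; the card does not state it
+     return tokenizer(batch["text"], truncation=True,
+                      padding="max_length", max_length=256)
+
+ tokenized = dataset.map(tokenize, batched=True)
+
+ model = BertForSequenceClassification.from_pretrained(
+     "google-bert/bert-base-uncased", num_labels=2)
+
+ # The three metrics listed in the card
+ accuracy = evaluate.load("accuracy")
+ precision = evaluate.load("precision")
+ recall = evaluate.load("recall")
+
+ def compute_metrics(eval_pred):
+     logits, labels = eval_pred
+     preds = np.argmax(logits, axis=-1)
+     return {
+         "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
+         "precision": precision.compute(predictions=preds, references=labels)["precision"],
+         "recall": recall.compute(predictions=preds, references=labels)["recall"],
+     }
+
+ args = TrainingArguments(
+     output_dir="bert-imdb",          # placeholder path
+     num_train_epochs=3,              # matches the card
+     per_device_train_batch_size=8,   # matches the card
+ )
+
+ trainer = Trainer(model=model, args=args,
+                   train_dataset=tokenized["train"],
+                   eval_dataset=tokenized["test"],
+                   compute_metrics=compute_metrics)
+ trainer.train()
+ trainer.evaluate()
+ ```
+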
+ ## Usage
+ ### Load the Model
+ ```python
+ from transformers import BertForSequenceClassification, BertTokenizer
+
+ model_name = "kparkhade/Fine-tuned-BERT-Imdb"
+
+ # Download the fine-tuned weights and the matching tokenizer from the Hub
+ model = BertForSequenceClassification.from_pretrained(model_name)
+ tokenizer = BertTokenizer.from_pretrained(model_name)
+ ```
+
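+ The loaded `model` and `tokenizer` can also be used directly, without a pipeline. A minimal sketch (not part of the original card); the label names come from whatever `id2label` mapping is stored in the model config:
+
+ ```python
+ import torch
+
+ # Tokenize a single review and run a forward pass without gradients
+ inputs = tokenizer("A dull, lifeless film.", return_tensors="pt", truncation=True)
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Map the highest-scoring logit to its label via the config's id2label
+ pred_id = logits.argmax(dim=-1).item()
+ print(model.config.id2label[pred_id])
+ ```
+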
+ ### Inference Example
+ ```python
+ from transformers import pipeline
+
+ model_name = "kparkhade/Fine-tuned-BERT-Imdb"
+
+ # Build a text-classification pipeline directly from the Hub checkpoint
+ sentiment_pipeline = pipeline("text-classification", model=model_name)
+ result = sentiment_pipeline("The movie was absolutely fantastic! I loved it.")
+ print(result)
+ ```
+
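+ The pipeline also accepts a list of texts, which is convenient for scoring several reviews at once:
+
+ ```python
+ # Score multiple reviews in one call; results come back in input order
+ results = sentiment_pipeline([
+     "An instant classic.",
+     "Two hours I will never get back.",
+ ])
+ print(results)
+ ```
+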
+ ## Citation
+ If you use this model, please cite:
+
+ ```bibtex
+ @article{devlin2019bert,
+   title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
+   author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
+   journal={arXiv preprint arXiv:1810.04805},
+   year={2019}
+ }
+ ```
+
+ ## License
+ This model is released under the Apache 2.0 License.