Update README.md
README.md
```python
print(f"Confidence (Negative): {probabilities[0][0].item():.4f}")
print(f"Confidence (Positive): {probabilities[0][1].item():.4f}")
```

## Training Details

### Training Data

The model was fine-tuned on the IMDb Large Movie Review Dataset. This dataset consists of 50,000 highly polar movie reviews (25,000 for training, 25,000 for testing), labeled as either positive or negative. Reviews with a score of <= 4 out of 10 are labeled negative, and those with a score of >= 7 out of 10 are labeled positive.

Dataset Card: https://huggingface.co/datasets/ajaykarthick/imdb-movie-reviews (or the official IMDb dataset link if different)
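
For orientation, here is a minimal, hypothetical sketch of loading that data from the Hugging Face Hub. It assumes the canonical `imdb` dataset (`text`/`label` fields, label 0 = negative, 1 = positive); substitute the dataset card linked above if a different copy was used.

```python
# Illustrative only: load the IMDb reviews from the Hugging Face Hub.
from datasets import load_dataset

imdb = load_dataset("imdb")              # assumption: canonical IMDb copy on the Hub
print(imdb)                              # DatasetDict with 25k "train" and 25k "test" rows
print(imdb["train"][0]["text"][:200])    # first 200 characters of one review
print(imdb["train"][0]["label"])         # 0 = negative, 1 = positive
```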

### Preprocessing

Text was tokenized using the DistilBertTokenizerFast associated with the base model. Input sequences were truncated to a maximum length of 512 tokens and padded to the longest sequence in the batch. Labels were mapped to 0 for negative and 1 for positive.
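
A minimal sketch of that preprocessing with the standard `transformers`/`datasets` APIs; the function and variable names below are illustrative, not taken from the original training script.

```python
from datasets import load_dataset
from transformers import DataCollatorWithPadding, DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate to at most 512 tokens; padding is deferred to the collator below.
    return tokenizer(batch["text"], truncation=True, max_length=512)

imdb = load_dataset("imdb")              # labels are already 0 = negative, 1 = positive
tokenized = imdb.map(tokenize, batched=True)

# Pads every batch to its longest sequence at collation time (dynamic padding).
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
```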

### Training Hyperparameters

- Training regime: Mixed precision (fp16) was likely used for faster training and a reduced memory footprint. (Confirm this against the actual training setup; a configuration sketch follows this list.)
- Optimizer: AdamW
- Learning rate: a learning-rate scheduler was used (exact schedule and peak rate not specified)
- Epochs: 3
- Batch size: 8
- Hardware: Google Colab A100 GPU
- Framework: PyTorch
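
The list above maps onto a standard `transformers` `Trainer` configuration roughly as follows, continuing from the preprocessing sketch (`tokenizer`, `tokenized`, `data_collator`). The learning rate of 2e-5 and the linear schedule are common fine-tuning defaults used here as assumptions; they are not confirmed details of the original run.

```python
from transformers import (
    DistilBertForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

training_args = TrainingArguments(
    output_dir="distilbert-imdb",
    num_train_epochs=3,                # Epochs: 3
    per_device_train_batch_size=8,     # Batch size: 8
    per_device_eval_batch_size=8,
    fp16=True,                         # mixed precision (assumed, see note above)
    optim="adamw_torch",               # AdamW optimizer
    learning_rate=2e-5,                # assumption: typical value, not confirmed
    lr_scheduler_type="linear",        # assumption: Trainer's default schedule
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
# trainer.train()
```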
### Speeds, Sizes, Times

Training Time: [E.g., Approximately 1-2 hours on a single Colab T4 GPU] (Estimate based on your experience)

Model Size: The model.safetensors file is approximately 255 MB.
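
As a rough consistency check (assuming fp32 weights, i.e. 4 bytes per parameter), DistilBERT-base plus the two-way classification head comes to about 67M parameters, which works out to roughly 255 MiB:

```python
# Back-of-the-envelope size check; the parameter count is approximate.
num_params = 66_955_010        # distilbert-base-uncased (~66.4M) + classification head (~0.6M)
bytes_per_param = 4            # assumption: weights stored in fp32
size_mib = num_params * bytes_per_param / (1024 ** 2)
print(f"~{size_mib:.0f} MiB")  # ~255 MiB, in line with the ~255 MB figure above
```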