---
license: apache-2.0
---
# Skin Cancer Image Classification Model
## Introduction
This model classifies skin lesion images into seven categories: benign keratosis-like lesions, basal cell carcinoma, actinic keratoses, vascular lesions, melanocytic nevi, melanoma, and dermatofibroma.
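
For quick experimentation, the model can be loaded through the `transformers` image-classification pipeline. The sketch below is illustrative: the model id is a placeholder for this repository's id on the Hugging Face Hub, and `lesion.jpg` stands for any local image path or URL.

```python
# Minimal inference sketch (placeholder model id and image path).
from transformers import pipeline

# Replace the placeholder with this model's actual repository id on the Hub.
classifier = pipeline("image-classification", model="<your-username>/<this-model-repo>")

# Any local path, URL, or PIL image works here; "lesion.jpg" is illustrative.
predictions = classifier("lesion.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, best match first
```
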
## Model Overview
- Model Architecture: Vision Transformer (ViT)
- Pre-trained Model: Google's ViT with a 16x16 patch size, pre-trained on the ImageNet-21k dataset
- Modified Classification Head: The original classification head is replaced with a new head sized for the seven skin lesion classes (see the sketch below).
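
A minimal sketch of how such a model can be built with the `transformers` library is shown below. The checkpoint name `google/vit-base-patch16-224-in21k` is an assumption based on the description above (ViT, 16x16 patches, ImageNet-21k), not something stated explicitly in this card.

```python
# Sketch: load the ImageNet-21k ViT backbone and attach a freshly initialized
# 7-class classification head. The checkpoint name is an assumption.
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=7,                  # seven skin lesion classes
    ignore_mismatched_sizes=True,  # tolerate a head shape different from the checkpoint's
)
```
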
## Dataset
- Dataset Name: Skin Cancer Dataset
- Source: [Marmal88's Skin Cancer Dataset on Hugging Face](https://huggingface.co/datasets/marmal88/skin_cancer)
- Classes: Benign keratosis-like lesions, Basal cell carcinoma, Actinic keratoses, Vascular lesions, Melanocytic nevi, Melanoma, Dermatofibroma
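
The dataset can be pulled directly from the Hub with the `datasets` library. The sketch below assumes the splits are named `train` and `validation` and that the image and label columns are called `image` and `dx`; check the dataset card if these assumptions do not hold.

```python
# Sketch: load marmal88/skin_cancer and build PyTorch DataLoaders.
# Split names ("train"/"validation") and column names ("image"/"dx") are assumptions.
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import ViTImageProcessor

dataset = load_dataset("marmal88/skin_cancer")
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

# Map the string diagnosis labels to integer class ids.
label_names = sorted(set(dataset["train"]["dx"]))
label2id = {name: i for i, name in enumerate(label_names)}

def collate(batch):
    # Convert PIL images to normalized pixel tensors and labels to class indices.
    pixel_values = processor(
        images=[example["image"].convert("RGB") for example in batch],
        return_tensors="pt",
    )["pixel_values"]
    labels = torch.tensor([label2id[example["dx"]] for example in batch])
    return pixel_values, labels

train_loader = DataLoader(dataset["train"], batch_size=32, shuffle=True, collate_fn=collate)
val_loader = DataLoader(dataset["validation"], batch_size=32, collate_fn=collate)
```
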
## Training
- Optimizer: Adam with a learning rate of 1e-4
- Loss Function: Cross-Entropy Loss
- Batch Size: 32
- Number of Epochs: 5
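
Putting the pieces together, a fine-tuning loop with these hyperparameters could look like the sketch below. It reuses `model`, `train_loader`, and `val_loader` from the earlier sketches and is illustrative rather than the exact training script; the batch size of 32 is set in the DataLoaders above. Validation loss and accuracy can be computed each epoch with the helper shown in the next section.

```python
# Sketch of the fine-tuning loop: Adam (lr 1e-4), cross-entropy loss, 5 epochs.
# Assumes `model`, `train_loader`, and `val_loader` from the sketches above.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    running_loss, correct, seen = 0.0, 0, 0
    for pixel_values, labels in train_loader:
        pixel_values, labels = pixel_values.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(pixel_values=pixel_values).logits
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * labels.size(0)
        correct += (logits.argmax(dim=-1) == labels).sum().item()
        seen += labels.size(0)
    print(f"Epoch {epoch + 1}/5, "
          f"Train Loss: {running_loss / seen:.4f}, "
          f"Train Accuracy: {correct / seen:.4f}")
```
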
## Evaluation Metrics
- Train Loss: Average loss over the training dataset
- Train Accuracy: Accuracy over the training dataset
- Validation Loss: Average loss over the validation dataset
- Validation Accuracy: Accuracy over the validation dataset
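
A sketch of how these metrics can be computed over a DataLoader is given below; it assumes the `model`, `criterion`, `val_loader`, and `device` objects from the training sketch.

```python
# Sketch: average loss and accuracy over a DataLoader (no gradient tracking).
# Assumes `model`, `criterion`, `val_loader`, and `device` from the sketches above.
import torch

@torch.no_grad()
def evaluate(model, loader):
    model.eval()
    total_loss, correct, seen = 0.0, 0, 0
    for pixel_values, labels in loader:
        pixel_values, labels = pixel_values.to(device), labels.to(device)
        logits = model(pixel_values=pixel_values).logits
        total_loss += criterion(logits, labels).item() * labels.size(0)
        correct += (logits.argmax(dim=-1) == labels).sum().item()
        seen += labels.size(0)
    return total_loss / seen, correct / seen

val_loss, val_accuracy = evaluate(model, val_loader)
print(f"Val Loss: {val_loss:.4f}, Val Accuracy: {val_accuracy:.4f}")
```
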
## Results
| Epoch | Train Loss | Train Accuracy | Val Loss | Val Accuracy |
|-------|------------|----------------|----------|--------------|
| 1/5   | 0.7168     | 0.7586         | 0.4994   | 0.8355       |
| 2/5   | 0.4550     | 0.8466         | 0.3237   | 0.8973       |
| 3/5   | 0.2959     | 0.9028         | 0.1790   | 0.9530       |
| 4/5   | 0.1595     | 0.9482         | 0.1498   | 0.9555       |
| 5/5   | 0.1208     | 0.9614         | 0.1000   | 0.9695       |

## Conclusion
The fine-tuned model reaches about 97% validation accuracy after five epochs, demonstrating strong performance on this seven-class skin lesion classification task. Further fine-tuning or experimentation may improve performance.