Update README.md
README.md
CHANGED
tags:
- exnrt.com
---

# NSFW Image Detection

This model is fine-tuned for **NSFW image classification**. It classifies content into three safety-critical categories, making it useful for moderation, safety filtering, and compliant content handling systems.

<p>
  <a href="https://exnrt.com/blog/ai/fine-tuning-siglip2/" target="_blank">
    <img src="https://img.shields.io/badge/View%20Training%20Code-blue?style=for-the-badge&logo=readthedocs"/>
  </a>
  <a href="https://exnrt.com/blog/ai/fine-tuning-siglip2/" target="_blank">https://exnrt.com/blog/ai/fine-tuning-siglip2/</a>
</p>

---
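
For a quick end-to-end check, the generic Transformers `image-classification` pipeline should also work with this checkpoint. The sketch below assumes a local image file (the path is a placeholder); the explicit processor/model version is shown in the Usage section further down.

```python
from transformers import pipeline

# Minimal sketch: run this checkpoint through the generic image-classification pipeline.
# "my_image.jpg" is a placeholder path; adjust it to your own file.
classifier = pipeline("image-classification", model="Ateeqq/nsfw-image-detection")
print(classifier("my_image.jpg"))  # list of {"label", "score"} dicts, best match first
```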

## 🧠 Model Details

* **Base model**: `google/siglip2-base-patch16-224`
* **Task**: Image Classification (NSFW/Safe detection)
* **Framework**: PyTorch / Hugging Face Transformers
* **Fine-tuned on**: Custom dataset with 3 content categories
* **Selected checkpoint**: Epoch 5
* **Batch size**: 64
* **Epochs trained**: 5

---

### 📌 Epoch Training Results



### 📌 Confusion Matrix




---

### 🏷️ Categories

| ID | Label                    |
| -- | ------------------------ |
| 0  | `gore_bloodshed_violent` |
| 1  | `nudity_pornography`     |
| 2  | `safe_normal`            |

### 🧾 Label Mapping

```python
label2id = {'gore_bloodshed_violent': 0, 'nudity_pornography': 1, 'safe_normal': 2}
id2label = {0: 'gore_bloodshed_violent', 1: 'nudity_pornography', 2: 'safe_normal'}
```
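
For reference, here is a minimal sketch of how these mappings could be attached to the base checkpoint when setting up the 3-class fine-tune; the full training script, data pipeline, and hyperparameters are covered in the linked training post, so treat this only as an illustration of the head initialization.

```python
from transformers import AutoImageProcessor, SiglipForImageClassification

# Illustrative setup only: attach a fresh 3-class head to the base SigLIP2 checkpoint.
# The actual fine-tuning procedure is described in the linked training code post.
label2id = {'gore_bloodshed_violent': 0, 'nudity_pornography': 1, 'safe_normal': 2}
id2label = {v: k for k, v in label2id.items()}

base_checkpoint = "google/siglip2-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(base_checkpoint)
model = SiglipForImageClassification.from_pretrained(
    base_checkpoint,
    num_labels=len(label2id),
    label2id=label2id,
    id2label=id2label,
)  # the classification head is newly initialized and learned during fine-tuning
```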

---

## 🚀 Usage Example

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load the processor and the fine-tuned model from the Hugging Face Hub
model_path = "Ateeqq/nsfw-image-detection"
processor = AutoImageProcessor.from_pretrained(model_path)
model = SiglipForImageClassification.from_pretrained(model_path)

# Preprocess the input image
image = Image.open("your_image_path.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Run inference and convert logits to class probabilities
with torch.no_grad():
    logits = model(**inputs).logits
probs = F.softmax(logits, dim=1)

predicted_class_id = logits.argmax().item()
predicted_class = model.config.id2label[predicted_class_id]

print(f"Predicted class: {predicted_class}")
for i, score in enumerate(probs[0]):
    print(f"{model.config.id2label[i]}: {score:.4f}")
```
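
For moderation or safety filtering, the per-class probabilities above can be collapsed into a single allow/block decision. The small sketch below continues from the snippet above; the 0.5 threshold is an arbitrary illustration, not a tuned or recommended operating point.

```python
# Continues from the snippet above (reuses `model` and `probs`).
# The 0.5 threshold is illustrative only; calibrate it on your own validation data.
UNSAFE_LABELS = {"gore_bloodshed_violent", "nudity_pornography"}

scores = {model.config.id2label[i]: float(s) for i, s in enumerate(probs[0])}
unsafe_score = max(scores[label] for label in UNSAFE_LABELS)

if unsafe_score >= 0.5:
    print(f"Blocked (unsafe score: {unsafe_score:.4f})")
else:
    print(f"Allowed (safe score: {scores['safe_normal']:.4f})")
```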

---

## 📊 Training Metrics (Epoch 5 Selected ✅)

| Epoch | Training Loss | Validation Loss | Accuracy   |
| ----- | ------------- | --------------- | ---------- |
| 1     | 0.0765        | 0.1166          | 95.70%     |
| 2     | 0.0719        | 0.0477          | 98.34%     |
| 3     | 0.0089        | 0.0634          | 98.05%     |
| 4     | 0.0109        | 0.0437          | 98.61%     |
| 5 ✅  | 0.0001        | 0.0389          | **99.02%** |

- **Training runtime**: 1h 21m 40s
- **Final Training Loss**: 0.0727
- **Steps/sec**: 0.11 | **Samples/sec**: 6.99