---
license: cc-by-nd-4.0
language:
  - en
base_model:
  - google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
  - nsfw
  - exnrt.com
---

# NSFW Image Detection

This model is a fine-tuned version of `google/siglip2-base-patch16-224` for NSFW image classification. It classifies images into three safety-related categories, making it suitable for content moderation, filtering, and other safety-aware applications.

## 🧠 Model Details

- **Base model:** [google/siglip2-base-patch16-224](https://huggingface.co/google/siglip2-base-patch16-224)
- **Task:** Image Classification (Safety Filtering)
- **Framework:** PyTorch
- **Fine-tuned on:** Custom dataset with 3 safety-related categories
- **Selected checkpoint:** Epoch 3
- **Batch size:** 64
- **Epochs:** 7 (see the configuration sketch below)
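
The hyperparameters above map onto the standard `transformers` `Trainer` setup. The following is only an illustrative sketch: the batch size and epoch count come from the list above, while the output directory, learning rate, and evaluation/saving strategies are assumptions, not the actual training script.

```python
from transformers import TrainingArguments

# Illustrative only: batch size and epoch count come from the details above;
# every other value (output dir, learning rate, strategies) is an assumption,
# not the author's actual training configuration.
training_args = TrainingArguments(
    output_dir="siglip2-nsfw-classifier",   # hypothetical output directory
    per_device_train_batch_size=64,         # batch size reported above
    num_train_epochs=7,                     # epochs reported above
    eval_strategy="epoch",                  # evaluate once per epoch
    save_strategy="epoch",                  # keep one checkpoint per epoch
    load_best_model_at_end=True,            # epoch 3 was selected as the best checkpoint
    metric_for_best_model="accuracy",
    learning_rate=2e-5,                     # assumed, not reported
)
```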

## 🏷️ Categories

The model classifies images into the following categories:

| ID | Label |
|----|-------|
| 0  | graphically_violent |
| 1  | nudity_pornography |
| 2  | safe_normal |

## 🧾 Label Mapping

```python
label2id = {'graphically_violent': 0, 'nudity_pornography': 1, 'safe_normal': 2}
id2label = {0: 'graphically_violent', 1: 'nudity_pornography', 2: 'safe_normal'}
```
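
These mappings can be passed directly when attaching a fresh 3-class head to the base checkpoint. The snippet below is a minimal sketch of that setup, assuming the fixed-resolution SigLIP 2 checkpoint works with the same `SiglipForImageClassification` class used in the usage example below; it is not the author's training code.

```python
from transformers import AutoImageProcessor, SiglipForImageClassification

label2id = {'graphically_violent': 0, 'nudity_pornography': 1, 'safe_normal': 2}
id2label = {0: 'graphically_violent', 1: 'nudity_pornography', 2: 'safe_normal'}

# Load the base SigLIP 2 backbone and attach a randomly initialized
# 3-class classification head configured with the mappings above.
model = SiglipForImageClassification.from_pretrained(
    "google/siglip2-base-patch16-224",
    num_labels=3,
    label2id=label2id,
    id2label=id2label,
)
processor = AutoImageProcessor.from_pretrained("google/siglip2-base-patch16-224")
```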

## 📈 Visual Results

### 📌 Epoch Training Results

*(Figure: per-epoch training results.)*

### 📌 Final Metrics & Confusion Matrix

*(Figure: final metrics and confusion matrix.)*

## 🚀 Usage

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_path = "Ateeqq/nsfw-image-detection"
processor = AutoImageProcessor.from_pretrained(model_path)
model = SiglipForImageClassification.from_pretrained(model_path)
model.eval()

# Load the image (converted to RGB) and preprocess it for the model.
image = Image.open("your_image_path.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to per-class probabilities.
probabilities = F.softmax(logits, dim=1)

predicted_class_id = logits.argmax().item()
predicted_class_label = model.config.id2label[predicted_class_id]

confidence_scores = probabilities[0].tolist()

print(f"Predicted class ID: {predicted_class_id}")
print(f"Predicted class label: {predicted_class_label}\n")

for i, score in enumerate(confidence_scores):
    label = model.config.id2label[i]
    print(f"Confidence for '{label}': {score:.4f}")
```

**Output:**

```
Predicted class ID: 0
Predicted class label: graphically_violent

Confidence for 'graphically_violent': 0.9941
Confidence for 'nudity_pornography': 0.0040
Confidence for 'safe_normal': 0.0019
```
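
In a moderation pipeline you typically act on the probabilities rather than only the argmax. The snippet below continues from the usage example above; the unsafe-label grouping, the 0.5 threshold, and the output messages are arbitrary illustrative choices, not part of the model.

```python
# Continues from the usage example: `probabilities` has shape (1, 3).
UNSAFE_LABELS = {"graphically_violent", "nudity_pornography"}
THRESHOLD = 0.5  # illustrative value; tune on your own validation data

scores = {model.config.id2label[i]: p for i, p in enumerate(probabilities[0].tolist())}
unsafe_score = sum(scores[label] for label in UNSAFE_LABELS)

if unsafe_score >= THRESHOLD:
    print(f"Blocked: unsafe probability {unsafe_score:.4f}")
else:
    print(f"Allowed: safe probability {scores['safe_normal']:.4f}")
```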

## 📊 Training Metrics (Epoch 3 Selected ✅)

| Epoch | Training Loss | Validation Loss | Accuracy |
|-------|---------------|-----------------|----------|
| 1 | 0.1086 | 0.0817 | 97.05% |
| 2 | 0.0415 | 0.1233 | 95.50% |
| 3 ✅ | 0.0302 | 0.0516 | 98.45% |
| 4 | 0.0271 | 0.0799 | 97.89% |
| 5 | 0.0222 | 0.1015 | 98.03% |
| 6 | 0.0026 | 0.0707 | 98.45% |
| 7 | 0.0178 | 0.0665 | 98.59% |