prithivMLmods committed (verified) · Commit 5f7b4c4 · Parent(s): 39d5a87

Update README.md

Files changed (1): README.md (+88 −0)

README.md CHANGED:
tags:
- adult-content-detection
- explicit-content-detection
---

# **siglip2-x256-explicit-content**

> **siglip2-x256-explicit-content** is a vision-language encoder model fine-tuned from **google/siglip2-base-patch16-224** for **multi-class image classification**. Built on the **SiglipForImageClassification** architecture, the model is trained to identify and categorize content types in images, particularly for **explicit, suggestive, or safe media filtering**.
 
```py
Classification Report:
                      precision    recall  f1-score   support
...
Enticing or Sensual      0.9132    0.9429    0.9278      5600
...
```

![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/psonZ0OXSjqgLRDkFtRTh.png)
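
The f1-score column is the harmonic mean of precision and recall, and the one row preserved above checks out; a quick verification in Python:

```python
# F1 is the harmonic mean of precision and recall
precision, recall = 0.9132, 0.9429  # "Enticing or Sensual" row above
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9278, matching the report
```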

---

## **Label Space: 5 Classes**

The model classifies each image into one of the following content categories:

```
Class 0: "Anime Picture"
Class 1: "Hentai"
Class 2: "Normal"
Class 3: "Pornography"
Class 4: "Enticing or Sensual"
```
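
The same mapping also ships in the model's Hub config, so it can be read at runtime instead of being hard-coded; a minimal sketch (assumes the model downloads from the Hub):

```python
from transformers import SiglipForImageClassification

model = SiglipForImageClassification.from_pretrained(
    "prithivMLmods/siglip2-x256-explicit-content"
)
# transformers exposes the config's label mapping with integer keys
print(model.config.id2label)
# expected: {0: 'Anime Picture', 1: 'Hentai', 2: 'Normal', 3: 'Pornography', 4: 'Enticing or Sensual'}
```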

---

## **Install Dependencies**

```bash
pip install -q transformers torch pillow gradio
```

---

## **Inference Code**

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/siglip2-x256-explicit-content"  # Replace with your model path if needed
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# ID-to-label mapping
id2label = {
    "0": "Anime Picture",
    "1": "Hentai",
    "2": "Normal",
    "3": "Pornography",
    "4": "Enticing or Sensual"
}

def classify_explicit_content(image):
    # Gradio passes the image as a NumPy array; convert it to an RGB PIL image
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    # Forward pass without gradient tracking
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    # Map each class index to its label with a rounded probability
    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }

    return prediction

# Gradio interface
iface = gr.Interface(
    fn=classify_explicit_content,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=5, label="Predicted Content Type"),
    title="siglip2-x256-explicit-content",
    description="Classifies images into explicit, suggestive, or safe categories (e.g., Hentai, Pornography, Normal)."
)

if __name__ == "__main__":
    iface.launch()
```
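
The Gradio app is optional; the same model can be called directly on a file. A minimal sketch, where `photo.jpg` is a placeholder path for any local image:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/siglip2-x256-explicit-content"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# "photo.jpg" is a placeholder; substitute any local image path
image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=1).squeeze()
top = int(probs.argmax())
print(model.config.id2label[top], round(float(probs[top]), 3))
```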

---

## **Intended Use**

This model is intended for applications such as:

- **Content Moderation**: Automatically detect NSFW or suggestive content (see the thresholding sketch after this list).
- **Parental Controls**: Enable AI-based filtering for safe media browsing.
- **Dataset Preprocessing**: Clean and categorize image datasets for research or deployment.
- **Online Platforms**: Help enforce content guidelines for uploads and user-generated media.
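
For the moderation use case, one common pattern is to sum the probabilities of the unsafe classes and flag the image when they cross a cutoff. A minimal sketch on top of `classify_explicit_content` above; the label set and the 0.5 threshold are illustrative assumptions, not values from the model card:

```python
# Illustrative policy, not part of the model card: adapt per deployment.
UNSAFE_LABELS = {"Hentai", "Pornography", "Enticing or Sensual"}
THRESHOLD = 0.5  # assumed cutoff; tune on validation data

def is_unsafe(prediction):
    """prediction: label -> probability dict, as returned by classify_explicit_content."""
    unsafe_score = sum(p for label, p in prediction.items() if label in UNSAFE_LABELS)
    return unsafe_score >= THRESHOLD
```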