---
license: gemma
language:
- en
base_model:
- google/gemma-3-4b-it
---

<div align="center">
<b style="font-size: 40px;">X-Ray_Alpha</b>

</div>

<img src="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha/resolve/main/Images/X-Ray_Alpha.png" alt="X-Ray_Alpha" style="width: 30%; min-width: 450px; display: block; margin: auto;">

---

<div style="display: flex; justify-content: center; align-items: center;">
  <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#tldr"
     style="color: #800080; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;">
    Click here for TL;DR
  </a>
  <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#why-is-this-important"
     style="color: #1E90FF; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;">
    Why it's important
  </a>
  <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#how-can-you-help"
     style="color: #32CD32; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;">
    How can YOU help?
  </a>
  <a href="https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha#how-to-run-it"
     style="color: #E31515; font-weight: bold; font-size: 28px; text-decoration: none; margin: 0 20px;">
    How to RUN IT
  </a>
</div>

---

This is a pre-alpha proof of concept of **a real, fully uncensored vision model**.

Why do I say **"real"**? The few vision models we have (Qwen, Llama 3.2) are "censored," and their fine-tunes only touched the **text portion** of the model, since training the vision component is a serious pain.

The only actually trained, uncensored vision model I am aware of is [ToriiGate](https://huggingface.co/Minthy/ToriiGate-v0.4-7B); the rest of the vision models are just the stock vision encoder plus a fine-tuned LLM.

# Does this even work?

<h2 style="color: green; font-weight: bold; font-size: 80px; text-align: center;">YES!</h2>

---

# Why is this Important?

Having a **fully compliant** vision model is a critical step toward democratizing vision capabilities, especially for **image tagging** — both for making LoRAs for image diffusion models and for mass-tagging images to pretrain a diffusion model.

In other words, a fully compliant and accurate vision model will let the open-source community easily train LoRAs and even pretrain image diffusion models.

Another important task is content moderation and classification. Many use cases are not black and white: some content that corporations might consider NSFW is allowed in one context but not another; there's nuance. Today's vision models **do not let the users decide**, as they will straight up **refuse** to inference any content that Google or some other corporation decided is not to their liking, and therefore these stock models are useless in a lot of cases.

What if someone wants to classify art that includes nudity? A naked statue over 1,000 years old displayed in the middle of a city, in a museum, or at the city square is perfectly acceptable; a stock vision model, however, will straight up refuse to inference something like that.

It's like the many "sensitive" topics that LLMs **refuse to answer**, even though the content is **publicly available on Wikipedia**. This is an attitude of **cynical paternalism**: corporations **take private data to train their models**, and that is "perfectly fine," yet they serve as the **arbiters of morality** and indirectly preach to us from a position of suggested moral superiority. This **gatekeeping hurts innovation badly**, vision models **especially so**, as the task of **tagging cannot be done by a single person at scale**, but a corporation can.

# How can YOU help?

This is sort of a **"Pre-Alpha"** proof of concept. I took **A LOT** of shortcuts and did some "hacking" to make this work, and I would greatly appreciate some help to turn it into an accurate and powerful open tool. I am not asking for money, but for well-tagged data. I will take the burden and costs of the compute on myself, but I **cannot do tagging** at a large scale by myself.

## Bottom line, I need a lot of well-tagged, diverse data

So:

- If you have well-tagged images
- If you have a link to a well-tagged image dataset
- If you can, and are willing to, do image tagging

Then please send an email with [DATASET] in the title to:

```

```

As you can probably tell from the email address, this is not my main email, and I expect it to be spammed with junk, so **please use the [DATASET] tag** so I can more easily find the emails of **the good people** who are actually trying to help.

---

### TL;DR
- **Fully uncensored and trained:** there's no moderation in the vision model; I actually did train it.
- **The 2nd uncensored vision model in the world**, ToriiGate being the first as far as I know.
- **In-depth descriptions:** very detailed, long descriptions.
- The text portion is **somewhat uncensored** as well; I didn't want to butcher and fry it too much, so it remains "smart".
- **NOT perfect:** this is a POC that shows the task can even be done; a lot more work is needed.

---

# How to run it:

## VRAM needed for FP16: 15.9 GB

[Run inference with this](https://github.com/SicariusSicariiStuff/X-Ray_Vision)

# This is a pre-alpha POC (Proof Of Concept)

## Instructions:
Clone:
```
git clone https://github.com/SicariusSicariiStuff/X-Ray_Vision.git
```

Set up a venv (tested with Python 3.11; probably works with 3.10):
```
python3.11 -m venv env
source env/bin/activate
```

Install dependencies:
```
pip install git+https://github.com/huggingface/[email protected]
pip install torch
pip install pillow
pip install accelerate
```

# Running inference

Usage:
```
python xRay-Vision.py /path/to/model/ /dir/with/images/
```
The output is printed to the console, and the results are exported into a directory named after your image directory with the suffix "_TXT".

So if you run:
```
python xRay-Vision.py /some_path/x-Ray_model/ /home/images/weird_cats/
```
The results will be exported to:
```
/home/images/weird_cats_TXT/
```

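The naming convention above can be sketched in plain Python. Note this is an illustrative snippet, not code from the X-Ray_Vision repo; `output_dir_for` is a hypothetical helper name:

```python
from pathlib import Path

def output_dir_for(image_dir: str) -> str:
    # Hypothetical helper mirroring the convention described above:
    # the results directory is the image directory with a "_TXT" suffix.
    p = Path(image_dir.rstrip("/"))
    return str(p.with_name(p.name + "_TXT"))

print(output_dir_for("/home/images/weird_cats/"))  # /home/images/weird_cats_TXT
```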
---

<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>

---

## Citation Information

```
@llm{X-Ray_Alpha,
  author = {SicariusSicariiStuff},
  title = {X-Ray_Alpha},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha}
}
```

---

## Other stuff
- [X-Ray_Vision](https://github.com/SicariusSicariiStuff/X-Ray_Vision) — easy stand-alone mass vision inference (inference a folder of images).
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) — nuke GPTisms with SLOP_Detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) — the grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) — some updates, some rambles; sort of a mix between a diary and a blog.