Upload folder using huggingface_hub
Files changed:
- README.md +1 -1
- generation_config.json +11 -0
- model-00001-of-00002.safetensors +2 -2
- model-00002-of-00002.safetensors +2 -2
- model.safetensors.index.json +0 -0
- processor_config.json +4 -0
README.md CHANGED
@@ -13,7 +13,7 @@ tags:
 ---
 
 # mlx-community/gemma-3-12b-it-4bit
 
-This model was converted to MLX format from [`google/gemma-3-12b-it`]() using mlx-vlm version **0.1.
+This model was converted to MLX format from [`google/gemma-3-12b-it`]() using mlx-vlm version **0.1.18**.
 Refer to the [original model card](https://huggingface.co/google/gemma-3-12b-it) for more details on the model.
 ## Use with mlx
 
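The "Use with mlx" section itself falls outside this hunk, so the card's own snippet is not shown here. As a reference only, below is a minimal sketch of running the converted 4-bit weights through mlx-vlm's Python API (`load`, `generate`, `apply_chat_template`), following the pattern in the mlx-vlm README; exact argument order has shifted between 0.1.x releases, so treat the call shapes as assumptions rather than this card's verbatim example.

```python
# Sketch only: generate from the 4-bit MLX weights with mlx-vlm on Apple silicon.
# Assumes `pip install mlx-vlm`; the example image URL is illustrative.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/gemma-3-12b-it-4bit"
model, processor = load(model_path)      # pulls the sharded safetensors from the Hub
config = load_config(model_path)

image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Wrap the prompt in Gemma 3's chat template before generating.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```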
generation_config.json ADDED
@@ -0,0 +1,11 @@
+{
+  "_from_model_config": true,
+  "bos_token_id": 2,
+  "cache_implementation": "hybrid",
+  "eos_token_id": [
+    1,
+    106
+  ],
+  "pad_token_id": 0,
+  "transformers_version": "4.50.0.dev0"
+}
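This file is what `transformers.GenerationConfig` reads at load time. A small sketch of inspecting the fields added above (the repo id is assumed to be this model's; requires a transformers release with Gemma 3 support, i.e. 4.50 or later):

```python
# Sketch: inspect the fields from generation_config.json via transformers.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("mlx-community/gemma-3-12b-it-4bit")
print(gen_cfg.bos_token_id)          # 2
print(gen_cfg.eos_token_id)          # [1, 106] -- either token stops generation
print(gen_cfg.pad_token_id)          # 0
print(gen_cfg.cache_implementation)  # "hybrid"
```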
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:995cbd05b7bfd8f5ab5307b476eb5496b5ec3f5256a9dd26366236ce8816c93f
+size 5367455313
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8b7af7eb5ff32109fc65cbcd0af5b8016ac0de46df17f40705f043f899495333
+size 2661219935
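Both safetensors entries are Git LFS pointer files, so the diff only swaps the `oid`/`size` lines; the weights themselves live in LFS storage. A sketch of checking downloaded shards against the sha256 oids recorded above (local paths are assumptions; adjust to wherever the files were downloaded):

```python
# Sketch: verify downloaded shards against the sha256 oids from their LFS pointers.
import hashlib

expected = {
    "model-00001-of-00002.safetensors":
        "995cbd05b7bfd8f5ab5307b476eb5496b5ec3f5256a9dd26366236ce8816c93f",
    "model-00002-of-00002.safetensors":
        "8b7af7eb5ff32109fc65cbcd0af5b8016ac0de46df17f40705f043f899495333",
}

for name, oid in expected.items():
    h = hashlib.sha256()
    with open(name, "rb") as f:
        # Stream in 1 MiB chunks; the shards are several GB each.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    print(name, "OK" if h.hexdigest() == oid else "MISMATCH")
```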
model.safetensors.index.json CHANGED
The diff for this file is too large to render. See raw diff.
processor_config.json ADDED
@@ -0,0 +1,4 @@
+{
+  "image_seq_length": 256,
+  "processor_class": "Gemma3Processor"
+}
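processor_config.json pins the processor class and the number of image tokens inserted per image. A hedged sketch of loading it through transformers' `AutoProcessor` (repo id assumed; the `image_seq_length` attribute is expected to mirror the value above):

```python
# Sketch: load the Gemma 3 processor declared in processor_config.json.
# Assumes a transformers release that ships Gemma3Processor (>= 4.50).
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("mlx-community/gemma-3-12b-it-4bit")
print(type(processor).__name__)    # Gemma3Processor
print(processor.image_seq_length)  # 256 image tokens per image
```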