prince-canuma committed on
Commit 9440bb0 · verified · 1 parent: 228e34b

Add files using upload-large-folder tool

Files changed (1): README.md (+4 −4)

README.md CHANGED
@@ -1,15 +1,15 @@
 ---
 license: gemma
+base_model: gg-hf-gm/gemma-3-270m
 tags:
 - mlx
 library_name: mlx
 pipeline_tag: text-generation
-base_model: gg-hf-gm/gemma-3-270m
 ---
 
-# mlx-gg-hf/gemma-3-270m-4bit
+# google/gemma-3-270m-4bit
 
-This model [mlx-gg-hf/gemma-3-270m-4bit](https://huggingface.co/mlx-gg-hf/gemma-3-270m-4bit) was
+This model [google/gemma-3-270m-4bit](https://huggingface.co/google/gemma-3-270m-4bit) was
 converted to MLX format from [gg-hf-gm/gemma-3-270m](https://huggingface.co/gg-hf-gm/gemma-3-270m)
 using mlx-lm version **0.26.3**.
 
@@ -22,7 +22,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("mlx-gg-hf/gemma-3-270m-4bit")
+model, tokenizer = load("google/gemma-3-270m-4bit")
 
 prompt = "hello"
 