Files changed (1)
  1. README.md +40 -28
README.md CHANGED
@@ -1,28 +1,40 @@
- ---
- base_model: Qwen/Qwen2.5-14B
- language:
- - en
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
- pipeline_tag: text-generation
- tags:
- - chat
- - mlx
- ---
-
- # mlx-community/Qwen2.5-14B-Instruct-8bit
-
- The Model [mlx-community/Qwen2.5-14B-Instruct-8bit](https://huggingface.co/mlx-community/Qwen2.5-14B-Instruct-8bit) was converted to MLX format from [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) using mlx-lm version **0.18.1**.
-
- ## Use with mlx
-
- ```bash
- pip install mlx-lm
- ```
-
- ```python
- from mlx_lm import load, generate
-
- model, tokenizer = load("mlx-community/Qwen2.5-14B-Instruct-8bit")
- response = generate(model, tokenizer, prompt="hello", verbose=True)
- ```
+ ---
+ base_model: Qwen/Qwen2.5-14B
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ - mlx
+ ---
+
+ # mlx-community/Qwen2.5-14B-Instruct-8bit
+
+ The Model [mlx-community/Qwen2.5-14B-Instruct-8bit](https://huggingface.co/mlx-community/Qwen2.5-14B-Instruct-8bit) was converted to MLX format from [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) using mlx-lm version **0.18.1**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/Qwen2.5-14B-Instruct-8bit")
+ response = generate(model, tokenizer, prompt="hello", verbose=True)
+ ```