Update README.md
README.md (CHANGED)
@@ -28,12 +28,12 @@ library_name: transformers
(uploading, quants to follow)

-<h2>Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-
+<h2>Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1</h2>

This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats.
The source code can also be used directly.

-Coder MOE with 3 top coder models in a Mixture of Experts config, using the full power of each model to code.
+Coder MOE with 3 top coder models in a Mixture of Experts config, using the full power of each model to code in a 25B model.

Included:
- Qwen/Qwen2.5-Coder-7B-Instruct (500+ likes)
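The changed README states that the safetensors source "can also be used directly" with transformers (per the `library_name: transformers` context in the hunk header). As a rough sketch only, not taken from the repo: the account namespace in `model_id`, the precision/device settings, and the prompt below are all placeholder assumptions.

```python
# Hypothetical loading sketch for the safetensors checkpoint via transformers.
# <namespace> is a placeholder; replace it with the account that hosts the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<namespace>/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's stored precision
    device_map="auto",    # requires accelerate; spreads the 25B MoE across devices
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The GGUF, GPTQ, EXL2, AWQ, and HQQ variants mentioned in the README would be produced from this same safetensors checkpoint with each format's own conversion tooling.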