---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
- open-r1/OlympicCoder-32B
pipeline_tag: text-generation
tags:
- merge
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- mixture of experts
- qwen2moe
- 2X32B Shared.
- shared expert
library_name: transformers
---

<h2>Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B</h2>

This repo contains the full-precision model weights in SafeTensors format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats. The weights can also be used directly, as in the sketch below.
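
A minimal usage sketch with the Transformers library, assuming the repo id matches this card's title; the dtype, device_map and prompt are illustrative and should be adjusted to your hardware (at full precision an ~87B model needs multiple high-memory GPUs or CPU offload).

```python
# Hedged sketch: load the full-precision SafeTensors weights with Transformers
# and run a single chat-style generation. Repo id is assumed from the model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",   # adjust precision to your hardware
    device_map="auto",        # spread layers across available devices
)

messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```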

The monster coder in a MoE (Mixture of Experts) 2x32B configuration, with a shared expert.

Two of the best coders combined into one model that is stronger than the sum of its parts.

Both models contribute to every generation; a conceptual sketch of how routed and shared experts combine follows below.
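
The following is a purely conceptual sketch of how a 2-expert MoE layer with a shared expert combines its experts' outputs. It is not the actual Qwen2-MoE implementation; the module names, shapes and activation are simplified assumptions for illustration only.

```python
# Conceptual sketch of a 2-expert MoE layer with an always-on shared expert.
import torch
import torch.nn as nn

class TwoExpertMoESketch(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        # Two routed experts (stand-ins for the two merged coder models).
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, intermediate_size),
                nn.SiLU(),
                nn.Linear(intermediate_size, hidden_size),
            )
            for _ in range(2)
        )
        # Shared expert that contributes to every token, regardless of routing.
        self.shared_expert = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size),
            nn.SiLU(),
            nn.Linear(intermediate_size, hidden_size),
        )
        # Router producing one logit per routed expert.
        self.gate = nn.Linear(hidden_size, 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)            # (..., 2)
        routed = sum(
            w.unsqueeze(-1) * expert(x)
            for w, expert in zip(weights.unbind(-1), self.experts)
        )
        return routed + self.shared_expert(x)                     # shared expert always on

# Tiny usage example: shapes in, same shapes out.
layer = TwoExpertMoESketch(hidden_size=64, intermediate_size=128)
y = layer(torch.randn(1, 8, 64))   # (batch, seq, hidden) -> (batch, seq, hidden)
```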

Max context: 32k.

[Versions are available at 32k, 64k, 96k and 128k context.]

Super special thanks to Qwen and Open-R1 for making such fantastic models.

For more information and other Qwen/Mistral coder models, see:

[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]

[model card pending updates]

For settings, parameters and other details also see:

https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct

and/or

https://huggingface.co/open-r1/OlympicCoder-32B

More to come...