---
license: apache-2.0
language:
  - en
base_model:
  - Qwen/Qwen2.5-Coder-32B-Instruct
  - open-r1/OlympicCoder-32B
pipeline_tag: text-generation
tags:
  - merge
  - programming
  - code generation
  - code
  - codeqwen
  - moe
  - coding
  - coder
  - qwen2
  - chat
  - qwen
  - qwen-coder
  - mixture of experts
  - qwen2moe
  - 2X32B Shared.
  - shared expert
library_name: transformers
---

(uploading...)

Qwen2.5-2X32B-CoderInstruct-OlympicCoder-80B

This repo contains the full-precision source code, in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats. The source code can also be used directly (a minimal loading sketch follows below).
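
If you want to use the source directly, a minimal loading sketch with transformers might look like the following. The repo id, dtype, and memory assumptions here are mine and are not confirmed by this card; an ~80B-parameter model in bf16 needs on the order of 160 GB of memory across your devices.

```python
# Minimal loading sketch (assumptions: repo id below, bf16 weights, enough
# GPU/CPU memory via device_map="auto"). Not a confirmed recipe for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-80B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load the full-precision safetensors in bf16
    device_map="auto",           # spread layers across available GPUs / CPU
)

messages = [{"role": "user", "content": "Write a Python function that merges two sorted lists."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```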

A monster coder in a 2x32B MOE (Mixture of Experts) configuration with a shared expert.

Two of the best coders combined into one, stronger than the sum of their parts.

Both models code together.
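
For readers unfamiliar with the qwen2moe layout the tags refer to, the sketch below shows what a two-expert Qwen2-MoE configuration with a shared expert looks like in transformers. The hyperparameter values are illustrative assumptions based on Qwen2.5-32B-class models, not the actual config shipped in this repo.

```python
# Illustrative only: hyperparameters are assumed from Qwen2.5-32B-class models,
# not read from this repo's config.json.
from transformers import Qwen2MoeConfig

config = Qwen2MoeConfig(
    hidden_size=5120,                       # assumed 32B-class hidden size
    num_hidden_layers=64,                   # assumed
    num_attention_heads=40,                 # assumed
    num_key_value_heads=8,                  # assumed
    num_experts=2,                          # the two 32B coder experts
    num_experts_per_tok=2,                  # both experts are active for every token
    moe_intermediate_size=27648,            # per-expert FFN width (assumed)
    shared_expert_intermediate_size=27648,  # the always-on shared expert (assumed)
)

print(config.num_experts, config.num_experts_per_tok)  # 2 2
```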

For more information and for other Qwen/Mistral coders, see:

[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]

[model card pending updates]

For settings, parameters and other details also see:

https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct

and/or

https://huggingface.co/open-r1/OlympicCoder-32B
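
Until this card is updated, a reasonable starting point is the sampler settings recommended for the parent models linked above. The values below are assumptions drawn from typical Qwen2.5 instruct defaults and are not confirmed for this merge.

```python
# Assumed starting-point sampler settings (typical Qwen2.5 instruct defaults);
# check them against the parent model cards linked above.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    repetition_penalty=1.05,
    max_new_tokens=2048,
)

# With the model/tokenizer from the loading sketch above:
# output = model.generate(inputs, generation_config=gen_config)
```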

More to come...