---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# GroveMoE-Inst

## Highlights

We introduce GroveMoE, a new sparse architecture that uses adjugate experts for dynamic computation allocation. Key highlights:

- **Architecture**: Novel adjugate experts grouped with ordinary experts; shared computation is executed once, then reused, cutting FLOPs (see the sketch below).
- **Sparse Activation**: 33B total parameters, with only 3.14–3.28B activated per token.
- **Training**: Mid-training + SFT, up-cycled from Qwen3-30B-A3B-Base; preserves prior knowledge while adding new capabilities.
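
The exact combination rule and routing scheme are described in our technical report; the snippet below is only a minimal sketch of the shared-computation idea, assuming an additive combination and using hypothetical names (`GroupedExpertLayer`, `adjugates`, `group_size`) that do not come from the released code.

```python
import torch
import torch.nn as nn

class GroupedExpertLayer(nn.Module):
    """Toy illustration: each group of ordinary experts shares one adjugate expert."""

    def __init__(self, dim: int, hidden: int, num_experts: int, group_size: int):
        super().__init__()
        assert num_experts % group_size == 0
        self.group_size = group_size
        # ordinary routed experts, one small FFN each
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )
        # one adjugate expert shared by each group of `group_size` ordinary experts
        self.adjugates = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts // group_size)
        )

    def forward(self, x: torch.Tensor, routed: list[int]) -> torch.Tensor:
        # shared computation: each adjugate expert runs at most once per token,
        # no matter how many of its group's experts the router selected
        shared = {g: self.adjugates[g](x) for g in {i // self.group_size for i in routed}}
        # reuse the cached group output alongside every routed ordinary expert
        return sum(self.experts[i](x) + shared[i // self.group_size] for i in routed)

layer = GroupedExpertLayer(dim=8, hidden=16, num_experts=4, group_size=2)
out = layer(torch.randn(1, 8), routed=[0, 1, 3])  # experts 0 and 1 reuse adjugate 0
```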

## Model Downloads

| Model | #Total Params | #Activated Params | Download |
| :--- | :---: | :---: | :---: |
| GroveMoE-Base | 33B | 3.14–3.28B | 🤗 HuggingFace |
| GroveMoE-Inst | 33B | 3.14–3.28B | 🤗 HuggingFace |
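
If you prefer to fetch the weights ahead of time (for example, for offline use), the standard `huggingface_hub` download API works with these repositories. `inclusionAI/GroveMoE-Inst` is the repo ID used in the usage snippet below; the `GroveMoE-Base` repo ID is assumed to follow the same naming scheme.

```python
# Optional pre-download with huggingface_hub; the Transformers snippet in
# the Usage section downloads the weights on first use automatically.
from huggingface_hub import snapshot_download

# Repo ID taken from the usage snippet; for the base model, the ID
# "inclusionAI/GroveMoE-Base" is an assumption based on the model name.
local_dir = snapshot_download("inclusionAI/GroveMoE-Inst")
print("Model files cached at:", local_dir)
```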

## Performance

| Model | Activated Params | MMLU-Pro | SuperGPQA | GPQA-Diamond | Omni-math | AIME'25 | MultiPL-E | LiveCodeBench v6 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Llama4-Scout | 17B | 64.9 | 42.0 | 55.6 | 56.6 | 30.2 | 10.0 | 45.0 |
| Qwen3-30B-A3B | 3B | 63.3 | 40.5 | 51.7 | 60.3 | 33.7 | 21.7 | 66.0 |
| Qwen3-32B | 32B | 68.2 | 43.0 | 53.6 | 59.5 | 31.8 | 22.9 | 68.6 |
| Gemma3-27B-IT | 27B | 67.1 | 35.6 | 45.3 | 59.9 | 33.3 | 23.1 | 65.5 |
| Mistral-Small-3.2 | 24B | 68.1 | 37.5 | 59.9 | 61.9 | 33.4 | 28.1 | 69.5 |
| GroveMoE-Inst | 3.14–3.28B | **72.8** | **47.7** | **61.3** | **71.2** | **43.5** | **44.4** | **74.5** |

The top-1 score in each column is shown in bold. More details will be reported in our technical report.

## Usage

Below are code snippets showing how to quickly get started with the model. First, install the Transformers library:

```shell
pip install transformers==4.51.3
```

Then run the snippet below to chat with the instruction-tuned model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/GroveMoE-Inst"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```

## Citation

```bibtex
@article{GroveMoE,
  title   = {GroveMoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts},
  author  = {Wu, Haoyuan and Chen, Haoxing and Chen, Xiaodong and Zhou, Zhanchao and Chen, Tieyuan and Zhuang, Yihong and Lu, Guoshan and Zhao, Junbo and Liu, Lin and Huang, Zenan and Lan, Zhenzhong and Yu, Bei and Li, Jianguo},
  journal = {arXiv preprint arXiv:2508.07785},
  year    = {2025}
}
```