Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx

Performance Evaluation

The model was evaluated on seven standard NLP benchmarks:

| Benchmark     | brainstormed-q6 | bf16   | q6     |
|---------------|-----------------|--------|--------|
| ARC-challenge | 0.387           | 0.387  | 0.378  |
| ARC-easy      | 0.447           | 0.436  | 0.434  |
| BoolQ         | 0.625           | 0.628  | 0.636  |
| HellaSwag     | 0.648           | 0.616  | 0.618  |
| OpenBookQA    | 0.380           | 0.400  | 0.400  |
| PiQA          | 0.768           | 0.763  | 0.765  |
| Winogrande    | 0.636           | 0.639  | 0.634  |
| Avg (7)       | 0.5559          | 0.5527 | 0.5521 |

The brainstormed variant consistently improves on ARC-easy, HellaSwag, and PiQA, while matching or slightly trailing the baselines on the remaining tasks. Its overall average of 0.5559 is +0.0038 (+0.4 percentage points) over the q6 baseline and +0.0032 over bf16.
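The column averages can be verified directly from the per-task scores in the table above (a quick sketch using those figures):

```python
# Per-task scores from the benchmark table, in table order:
# ARC-challenge, ARC-easy, BoolQ, HellaSwag, OpenBookQA, PiQA, Winogrande
scores = {
    "brainstormed-q6": [0.387, 0.447, 0.625, 0.648, 0.380, 0.768, 0.636],
    "bf16":            [0.387, 0.436, 0.628, 0.616, 0.400, 0.763, 0.639],
    "q6":              [0.378, 0.434, 0.636, 0.618, 0.400, 0.765, 0.634],
}

# Mean over the seven benchmarks, rounded to four decimals as in the table
averages = {name: round(sum(vals) / len(vals), 4) for name, vals in scores.items()}
print(averages)  # {'brainstormed-q6': 0.5559, 'bf16': 0.5527, 'q6': 0.5521}
```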

This model, Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx, was converted to MLX format from DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER using mlx-lm version 0.26.3.
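Conversions like this are typically done with the `mlx_lm.convert` CLI. A minimal sketch follows; the exact flags used for this checkpoint are not documented here, so the 6-bit quantization settings below are an assumption inferred from the `q6` naming:

```shell
# Hypothetical re-creation of the conversion (flags assumed, not confirmed).
# -q enables quantization; --q-bits 6 matches the q6 suffix in the model name.
mlx_lm.convert \
    --hf-path DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER \
    -q --q-bits 6 \
    --mlx-path Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx
```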

Use with mlx

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-q6-mlx")

prompt = "hello"

# Apply the chat template when the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
