🧠 ZeroXClem/Qwen-4B-Valiant-Polaris
Overview
ZeroXClem/Qwen-4B-Valiant-Polaris is a thoughtfully blended model crafted using Model Stock merging via MergeKit. It fuses the structured reasoning of Polaris, the creative expressiveness of Dot-Goat and RP-V3, and the scientific depth of ShiningValiant3 into a powerful 4B architecture built atop the official Qwen/Qwen3-4B base model.
Designed for enhanced reasoning, uncensored creativity, deep roleplay, and advanced agentic performance, this model is both lightweight and intellectually formidable.
🔧 Merge Details
- Merge Method: model_stock
- Base Model: Qwen/Qwen3-4B
- Dtype: bfloat16
- int8_mask: true
- normalize: false
- Tokenizer Source: Qwen/Qwen3-4B
Merge Configuration
```yaml
models:
  - model: bunnycore/Qwen3-4B-Dot-Goat
  - model: bunnycore/Qwen3-4B-RP-V3
  - model: POLARIS-Project/Polaris-4B-Preview
  - model: ValiantLabs/Qwen3-4B-ShiningValiant3
  - model: Qwen/Qwen3-4B
merge_method: model_stock
base_model: Qwen/Qwen3-4B
normalize: false
int8_mask: true
dtype: bfloat16
tokenizer_source: Qwen/Qwen3-4B
```
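To reproduce the merge locally, here is a minimal sketch using MergeKit's Python API (assuming `pip install mergekit`); the config file path and option values are illustrative assumptions, not part of this release.

```python
# Minimal sketch: reproducing the merge with MergeKit's Python API.
# Assumes mergekit is installed and enough disk/RAM for five 4B checkpoints.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (file path is an illustrative assumption).
with open("valiant-polaris.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge and write the result to a local output directory.
run_merge(
    merge_config,
    "./Qwen-4B-Valiant-Polaris",
    options=MergeOptions(
        cuda=False,           # set True if a GPU is available
        copy_tokenizer=True,  # copy the tokenizer from tokenizer_source
        lazy_unpickle=True,   # lower peak memory while loading shards
    ),
)
```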
🧬 Models Merged
🐐 bunnycore/Qwen3-4B-Dot-Goat
Uncensored, multi-domain, LoRA-infused Qwen model focusing on creativity, tool-use, and deep chat alignment.
🎭 bunnycore/Qwen3-4B-RP-V3
Character-rich roleplay personality fusion drawing from the amoral, mixture-of-thought, and SuperbEmphasis model trees.
🌌 POLARIS-Project/Polaris-4B-Preview
Post-trained with advanced reinforcement learning on reasoning-heavy datasets; reported to surpass Claude Opus and Grok on several math and logic benchmarks.
✨ ValiantLabs/Qwen3-4B-ShiningValiant3
Expertly aligned to scientific reasoning, agentic workflows, and multi-domain creative logic.
🔧 Qwen/Qwen3-4B
Official pretrained Qwen3 model with support for thinking / non-thinking modes, multilingual reasoning, and tool-calling capabilities.
✨ Features & Highlights
🔹 Advanced Reasoning — Polaris post-training brings SOTA performance in chain-of-thought, math, and symbolic logic.
🔹 Roleplay & Uncensored Expressiveness — RP-V3 and Dot-Goat contribute dynamic personas and emotion-rich conversational modeling.
🔹 Scientific & Engineering Alignment — ShiningValiant3 ensures excellent handling of complex scientific and analytical queries.
🔹 Multimodal-Friendly & Tool-Aware — Qwen’s native agentic design enables external tool use and seamless task execution (see the tool-calling sketch after this list).
🔹 Lightweight Excellence — At just 4B parameters, this model performs impressively for its size with long context (32k+) and efficient inference.
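As a pointer for the tool-aware bullet above, here is a minimal, hedged sketch of Qwen3-style tool calling via the chat template. It assumes the merged model inherits the base Qwen3 tool-calling template and that a recent `transformers` release is installed; the `get_weather` function is a hypothetical placeholder, not part of this release.

```python
# Minimal sketch of Qwen3-style tool calling via the chat template.
# Assumption: tool handling is inherited from the Qwen3 base model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen-4B-Valiant-Polaris"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

def get_weather(city: str) -> str:
    """
    Get the current weather for a given city.

    Args:
        city: Name of the city to look up.
    """
    return f"Sunny in {city}"  # stub; a real tool would call an external API

messages = [{"role": "user", "content": "What's the weather in Paris right now?"}]

# transformers renders the tool signature into the Qwen3 prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

The model is expected to respond with a structured tool call (a `<tool_call>` JSON block in Qwen3's format), which your agent loop would execute and feed back as a tool message.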
🎯 Use Cases
- 💬 Conversational & RP Agents
- 📚 Scientific Reasoning & Educational Tutoring
- 🔍 Advanced Math & Logic Problem Solving
- ✍️ Creative Writing & Storyworld Simulation
- 🧠 Tool-Integrated Autonomous Agents
🚀 Usage Instructions
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen-4B-Valiant-Polaris"

# Load the tokenizer and model (auto dtype, spread across available devices).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Solve: What is the smallest prime greater than 100?"
messages = [{"role": "user", "content": prompt}]

# Build the chat prompt; enable_thinking=True activates Qwen3's reasoning mode.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
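When `enable_thinking=True`, Qwen3-style models wrap their reasoning in a `<think>...</think>` block. Below is a minimal sketch, following the upstream Qwen3-4B usage example, for separating the reasoning from the final answer. It assumes the merge keeps the base tokenizer's `</think>` token id (151668) and continues from the `inputs` and `outputs` variables above.

```python
# Split Qwen3-style thinking content from the final answer.
# Assumes the </think> special token id (151668) from the base Qwen3 tokenizer.
output_ids = outputs[0][inputs.input_ids.shape[1]:].tolist()

try:
    # Find the last </think> token; everything before it is reasoning.
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0  # no thinking block emitted

thinking = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
answer = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("Reasoning:", thinking)
print("Answer:", answer)
```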
🧭 Alignment & Ethics
⚠️ Unfiltered Behavior: Some sub-models are uncensored and may produce unmoderated content. Please implement safety layers when deploying in public-facing apps.
⚠️ Responsible Use: Outputs are governed by their inputs. Always review critical output for bias, hallucination, or ethical misalignment.
📜 License: Apache 2.0, subject to the licenses of the respective base models (see the individual repos).
💌 Feedback & Contributions
Got thoughts, benchmarks, or new merge suggestions? We’d love to hear from you! Feel free to:
- Submit issues or pull requests 💡
- Tag us in your Hugging Face projects ❤️
- Join the discussion around merging and alignment at @ZeroXClem on HF and GitHub!
ZeroXClem Team | 2025 ✨