# ARRL Qwen3-4B Consilience (LoRA Adapters)
- Base model: sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning
- Adapters: LoRA adapters trained with ARRL (BNPO + EMA normalization, invariance, edge-of-chaos oscillators)
- Usage (apply the LoRA adapters to the base model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning"
adapter = "wheattoast11/arrl-qwen3-4b-consilience-adapter"

# Load the base model and tokenizer, then attach the LoRA adapters.
tok = AutoTokenizer.from_pretrained(base)
base_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter)
```
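A minimal generation sketch follows, continuing from the `tok` and `model` objects above. It assumes the adapters leave the base model's chat template unchanged; the prompt and decoding parameters are illustrative only.

```python
# Build a chat-formatted prompt (example prompt; adjust to your use case).
messages = [{"role": "user", "content": "Explain what a DAG is in two sentences."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If a standalone checkpoint is preferred over loading adapters at runtime, PEFT's `merge_and_unload()` can fold the LoRA weights into the base model before saving, at the cost of losing the small adapter-only footprint.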