Phr00tyMix-v1-32B
Note: this model has been superseded by Phr00tyMix-v2
This is a merge of pre-trained language models created using mergekit.
The goal is to be smart, obedient, creative and coherent. This isn't 100% censored, but some simple prompting to disallow refusals seems to do the trick.
These are the iMatrix GGUFs for Phr00t/Phr00tyMix-v1-32B
Merge Details
Merge Method
This model was merged using the DARE TIES merge method using rombodawg/Rombos-LLM-V2.5-Qwen-32b as a base.
This base model was chosen as a smart, non-thinking foundation.
Models Merged
The following models were included in the merge:
- Delta-Vector/Hamanasu-QwQ-V1.5-Instruct (non-thinking QwQ instruction finetune)
- allura-org/Qwen2.5-32b-RP-Ink (spicy color and prose)
- Delta-Vector/Hamanasu-Magnum-QwQ-32B (non-thinking QwQ creative finetune)
- THU-KEG/LongWriter-Zero-32B (coherency for longer writing)
- zetasepic/Qwen2.5-32B-Instruct-abliterated-v2 (reduced refusals)
Configuration
The following YAML configuration was used to produce this model:
merge_method: dare_ties
dtype: bfloat16
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
normalize_weights: true
models:
- model: Delta-Vector/Hamanasu-QwQ-V1.5-Instruct
parameters:
weight: 0.3
density: 1
- model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
parameters:
weight: 0.1
density: 0.8
- model: THU-KEG/LongWriter-Zero-32B
parameters:
weight: 0.1
density: 0.8
- model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
parameters:
weight: 0.3
density: 0.8
- model: allura-org/Qwen2.5-32b-RP-Ink
parameters:
weight: 0.2
density: 0.5
- Downloads last month
- 15
Hardware compatibility
Log In
to view the estimation
Inference Providers
NEW
This model isn't deployed by any Inference Provider.
🙋
Ask for provider support
Model tree for Phr00t/Phr00tyMix-v1-32B-GGUF
Base model
Phr00t/Phr00tyMix-v1-32B