Phr00tyMix-v1-32B / README.md
Phr00t's picture
Update README.md
6bcb3c5 verified
metadata
base_model:
  - Delta-Vector/Hamanasu-QwQ-V1.5-Instruct
  - allura-org/Qwen2.5-32b-RP-Ink
  - Delta-Vector/Hamanasu-Magnum-QwQ-32B
  - THU-KEG/LongWriter-Zero-32B
  - zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
  - rombodawg/Rombos-LLM-V2.5-Qwen-32b
library_name: transformers
tags:
  - mergekit
  - merge
  - qwen2
  - qwq
  - creative writing
  - storytelling
  - roleplay
  - non-thinking

Phr00tyMix-v1-32B

Note: this model has been superseded by Phr00tyMix-v2

image/png

This is a merge of pre-trained language models created using mergekit.

The goal is to be smart, obedient, creative and coherent. This isn't 100% censored, but some simple prompting to disallow refusals seems to do the trick.

GGUFs can be found here

Merge Details

Merge Method

This model was merged using the DARE TIES merge method using rombodawg/Rombos-LLM-V2.5-Qwen-32b as a base.

This base model was chosen as a smart, non-thinking foundation.

Models Merged

The following models were included in the merge:

Configuration

The following YAML configuration was used to produce this model:

merge_method: dare_ties
dtype: bfloat16
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  normalize_weights: true
models:
  - model: Delta-Vector/Hamanasu-QwQ-V1.5-Instruct
    parameters:
      weight: 0.3
      density: 1
  - model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
    parameters:
      weight: 0.1
      density: 0.8
  - model: THU-KEG/LongWriter-Zero-32B
    parameters:
      weight: 0.1
      density: 0.8
  - model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
    parameters:
      weight: 0.3
      density: 0.8
  - model: allura-org/Qwen2.5-32b-RP-Ink
    parameters:
      weight: 0.2
      density: 0.5