Phr00t committed
Commit f08acfa · verified · 1 parent: 9ca5ce0

Update README.md

Files changed (1): README.md (+70 −64)
README.md CHANGED
@@ -1,64 +1,70 @@
 ---
 base_model:
 - Delta-Vector/Hamanasu-QwQ-V1.5-Instruct
 - allura-org/Qwen2.5-32b-RP-Ink
 - Delta-Vector/Hamanasu-Magnum-QwQ-32B
 - THU-KEG/LongWriter-Zero-32B
 - zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
 - rombodawg/Rombos-LLM-V2.5-Qwen-32b
 library_name: transformers
 tags:
 - mergekit
 - merge
-
+- qwen
+- qwq
+- creative writing
+- storytelling
+- roleplay
 ---
 # Phr00tyMix-v1-32B
 
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/qPjrxIblNAdqDxhTQ5uAG.png)
+
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
 This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [rombodawg/Rombos-LLM-V2.5-Qwen-32b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-32b) as the base.
 
 ### Models Merged
 
 The following models were included in the merge:
 * [Delta-Vector/Hamanasu-QwQ-V1.5-Instruct](https://huggingface.co/Delta-Vector/Hamanasu-QwQ-V1.5-Instruct)
 * [allura-org/Qwen2.5-32b-RP-Ink](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink)
 * [Delta-Vector/Hamanasu-Magnum-QwQ-32B](https://huggingface.co/Delta-Vector/Hamanasu-Magnum-QwQ-32B)
 * [THU-KEG/LongWriter-Zero-32B](https://huggingface.co/THU-KEG/LongWriter-Zero-32B)
 * [zetasepic/Qwen2.5-32B-Instruct-abliterated-v2](https://huggingface.co/zetasepic/Qwen2.5-32B-Instruct-abliterated-v2)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 merge_method: dare_ties
 dtype: bfloat16
 base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
 parameters:
   normalize_weights: true
 models:
   - model: Delta-Vector/Hamanasu-QwQ-V1.5-Instruct
     parameters:
       weight: 0.3
       density: 1
   - model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
     parameters:
       weight: 0.1
       density: 0.8
   - model: THU-KEG/LongWriter-Zero-32B
     parameters:
       weight: 0.1
       density: 0.8
   - model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
     parameters:
       weight: 0.3
       density: 0.8
   - model: allura-org/Qwen2.5-32b-RP-Ink
     parameters:
       weight: 0.2
       density: 0.5
 ```
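To reproduce the merge, the YAML above can be passed to mergekit. Below is a minimal sketch following mergekit's documented Python entry points (`MergeConfiguration`, `run_merge`); the config filename, output path, and option values are illustrative, and the exact API surface may differ across mergekit versions:

```python
# Minimal sketch: reproduce the DARE TIES merge from the config above.
# Assumes `pip install mergekit`; the config filename and output path
# are illustrative, not taken from the commit.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("phr00tymix-v1.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./Phr00tyMix-v1-32B",   # output directory for the merged weights
    options=MergeOptions(
        cuda=True,            # set False to merge on CPU (slow for a 32B model)
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=True,   # reduce peak RAM while reading weight shards
    ),
)
```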
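Once merged (or downloaded from the Hub), the result loads like any other `transformers` causal LM, matching the `library_name: transformers` metadata in the card. A minimal usage sketch; the model path, prompt, and sampling settings here are assumptions for illustration, not part of the original card:

```python
# Minimal usage sketch: load and sample from the merged model.
# The model path and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Phr00tyMix-v1-32B"  # or the model's Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Write the opening line of a mystery story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```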