DavidAU committed on
Commit 5881dde · verified · 1 Parent(s): c883644

Update README.md

Files changed (1)
  1. README.md +50 -7
README.md CHANGED
@@ -26,12 +26,6 @@ tags:
  library_name: transformers
  ---
 
- <font color="red">There is an issue with GGUFs that affect SPECIFIC "Cuda" runtimes currently.
- Use "general" Cuda runtime rather than "version specific" if you get "GIBBISH" output.
-
- Versions with context of 64k, 128k and 192k context pending...
- </font>
-
  <h2>Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-V1.1</h2>
 
  This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly.
@@ -159,4 +153,53 @@ and/or
 
  https://huggingface.co/open-r1/OlympicCoder-32B
 
- More to come...
+ ---
+ 
+ <H2>Help, Adjustments, Samplers, Parameters and More</H2>
+ 
+ ---
+ 
+ <B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
+ 
+ See this document:
+ 
+ https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
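As a rough illustration only (the linked guide above is the reference for this), the sketch below shows one way to change the number of active experts when loading the full-precision source model with transformers. The repo id and the `num_experts_per_tok` field are assumptions, not confirmed details of this model; check this repo's config.json for the actual key. For GGUF runtimes, llama.cpp-based tools offer a comparable metadata override (`--override-kv <arch>.expert_used_count=int:N`, where the key prefix depends on the architecture reported at load time).

```python
# Hedged sketch, not taken from the linked guide: adjust how many experts fire
# per token before loading the safetensors source model. Assumes the config
# exposes "num_experts_per_tok" (typical for Mixtral/Qwen2-MoE style configs)
# and that the repo id below is correct -- verify both against config.json.
from transformers import AutoConfig, AutoModelForCausalLM

repo_id = "DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-V1.1"  # assumed id

config = AutoConfig.from_pretrained(repo_id)
if hasattr(config, "num_experts_per_tok"):
    config.num_experts_per_tok = 2  # e.g. activate both experts of this 2X32B MoE

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```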
+ 
+ <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
+ 
+ In "KoboldCpp", "oobabooga/text-generation-webui" or "Silly Tavern":
+ 
+ Set the "Smoothing_factor" to 1.5
+ 
+ : in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"
+ 
+ : in text-generation-webui -> parameters -> lower right.
+ 
+ : in Silly Tavern this is called: "Smoothing"
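If you drive KoboldCpp through its HTTP API rather than the UI, the same setting can be sent per request. This is a minimal sketch under the assumption that your build exposes the quadratic-sampling control as a `smoothing_factor` field in the generate payload (the API-side counterpart of the "Smooth_F" UI setting in recent builds); adjust the port and field name if your build differs.

```python
# Hedged sketch: applying smoothing_factor per request against a local KoboldCpp
# server. Assumes the default port (5001) and that this build accepts a
# "smoothing_factor" field -- check your build's API docs if the value is ignored.
import requests

payload = {
    "prompt": "Write a C function that reverses a string in place.\n",
    "max_length": 512,
    "temperature": 0.7,
    "smoothing_factor": 1.5,  # the "Smooth_F" / "Smoothing" setting described above
}
resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=600)
print(resp.json()["results"][0]["text"])
```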
+ 
+ NOTE: For "text-generation-webui":
+ 
+ -> if using GGUFs you need to use the "llama_HF" loader (which involves downloading some config files from the SOURCE version of this model)
+ 
+ Source versions (and config files) of my models are here:
+ 
+ https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
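One possible way to pull just those config files down is sketched below; the repo id, file patterns and target folder are assumptions to adapt to the actual source repo and to whatever the llama_HF loader reports as missing.

```python
# Hedged sketch: download only the tokenizer/config files that the "llama_HF"
# (llamacpp_HF) loader in text-generation-webui needs, into the folder that
# will also hold the GGUF quant. Repo id, patterns and folder are assumptions.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-V1.1",  # assumed source repo
    allow_patterns=[
        "config.json",
        "generation_config.json",
        "tokenizer*",
        "special_tokens_map.json",
    ],
    local_dir="models/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-V1.1-GGUF",
)
# Place the .gguf file in the same folder, then select the llama_HF loader in the UI.
```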
+ 
+ OTHER OPTIONS:
+ 
+ - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor"); a runtime-level example follows after this list.
+ 
+ - If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted above.
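For runtimes driven from code rather than a UI, the rep pen option above usually maps onto a parameter such as `repeat_penalty`. The sketch below uses llama-cpp-python with a placeholder GGUF filename (no specific quant file is implied by this repo) purely as an illustration.

```python
# Hedged sketch: the "rep pen" option applied through llama-cpp-python, where the
# parameter is called repeat_penalty. The model filename is a placeholder --
# point it at whichever GGUF quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-V1.1-Q4_K_M.gguf",  # placeholder
    n_ctx=8192,
)
out = llm.create_completion(
    "Explain the difference between a process and a thread.",
    max_tokens=400,
    temperature=0.7,
    repeat_penalty=1.1,  # 1.1-1.15 per the note above; skip if using smoothing_factor
)
print(out["choices"][0]["text"])
```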
+ 
+ <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
+ 
+ This is a "Class 1" model:
+ 
+ For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), please see:
+ 
+ [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+ 
+ You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model, in the same document:
+ 
+ [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]