bobig committed · Commit a6b9807 · verified · 1 Parent(s): 9bebb31

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
@@ -7,9 +7,11 @@ tags:
 
 13 TPS
 
-27 TPS with Speculative decoding in LMstudio, yeah instant 100% upgrade for math/code stuff.
+27 TPS with Draft model: [DeepScaleR-1.5B-Preview-Q8](https://huggingface.co/mlx-community/DeepScaleR-1.5B-Preview-Q8)
+
+oh yeah, 100% faster for math/code stuff.
+
 
-Draft model: [DeepScaleR-1.5B-Preview-Q8](https://huggingface.co/mlx-community/DeepScaleR-1.5B-Preview-Q8)
 
 Macbook M4 Max: high power (10 TPS on low-power, GPU draws only 5 watts...less than your brain)
 
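If you want to reproduce the draft-model speedup from this hunk outside LM Studio, here is a minimal sketch using mlx-lm's Python API. It assumes a recent mlx-lm release where `generate()` accepts a `draft_model` argument; the two repo IDs are the quants named above.

```python
# Sketch: speculative decoding with mlx-lm, pairing the 32B Q8 main
# model with the 1.5B Q8 draft model named above. Assumes a recent
# mlx-lm release where generate() accepts a draft_model argument.
from mlx_lm import load, generate

model, tokenizer = load("bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-Q8")
draft_model, _ = load("mlx-community/DeepScaleR-1.5B-Preview-Q8")

prompt = "Write a Python function that merges two sorted lists."

# verbose=True prints tokens-per-second, so you can compare against
# the ~13 TPS baseline without the draft model.
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    draft_model=draft_model,
    verbose=True,
)
```

Roughly speaking, the draft model has to share the main model's tokenizer for the accept/reject step in speculative decoding to work, which is why a small Qwen-family draft pairs with this Qwen-based merge.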
@@ -23,11 +25,10 @@ Context: 131072, Temp: 0
 
 Try this model in Visual Studio Code with the Roo Code extension. Starting in Architect Mode and letting it auto switch to Code Mode.... it actually spits decent code for small projects with multiple files.
 Getting close to last year's Claude Sonnet for small projects. It actually stays reasonably stable even with Roo Code's huge 10k system prompt. The model still shits the bed for big projects but better after adding roo-code-memory-bank.
-
 So far (Feb 20, 2025) this is the only model & quant that runs fast on Mac, spits decent code on projects AND works with Speculative Decoding.
 
 
-Huge thanks to all who helped Macs get this far!
+Huge thanks to all who helped Macs get this far!
 
 # bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-Q8
 
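For the "Temp: 0" setting in the hunk header above, recent mlx-lm releases take the temperature through a sampler object rather than a bare kwarg; a minimal sketch, assuming your version exposes `make_sampler` and the `sampler` argument:

```python
# Sketch: greedy decoding (Temp: 0), matching the settings quoted in
# the README. Assumes a recent mlx-lm with make_sampler and a sampler
# argument on generate(); older releases took temp= directly.
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("bobig/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview-Q8")

response = generate(
    model,
    tokenizer,
    prompt="Write a binary search in Python.",
    sampler=make_sampler(temp=0.0),  # deterministic output for code tasks
    max_tokens=2048,
)
```

The 131072-token context is a property of the model config, not a `generate()` flag.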
@@ -57,9 +58,9 @@ if tokenizer.chat_template is not None:
 response = generate(model, tokenizer, prompt=prompt, verbose=True)
 ```
 
-Are you still reading down here?
+Are you still reading down here?
 
-Maybe check out this new Q4 lossless quant compression from NexaAI and tell the MLX community how to improve mlx-lm to get 8-bit quality at 4-bit speed!
+Maybe check out this new Q4 lossless from NexaAI and tell the MLX community how to improve mlx-lm to get 8-bit quality at 4-bit speed!
 
 [DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant](https://huggingface.co/NexaAIDev/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant)
 