div0-space committed on
Commit 2d31fd4 · verified · 1 Parent(s): e81696d

Upload 17 files


Introducing QwQ-32B-MLX-Q5 - the latest quant of a great model, thanks to mlx 0.26.0+ @mlx-community. Check it out in the docs now to explore its full potential!

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,187 @@
+ ---
+ license: apache-2.0
+ base_model:
+ - Qwen/QwQ-32B
+ library_name: mlx
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - quantization
+ - mlx-q5
+ - mlx==0.26.2
+ - q5
+ - qwq
+ - reasoning
+ - m3-ultra
+ ---
+
+ # QwQ-32B MLX Q5 Quantization
+
+ This is a **Q5 (5-bit) quantized** version of the QwQ-32B reasoning model, optimized for MLX on Apple Silicon. This quantization offers an excellent balance between model quality and size, and is designed for high-memory Apple Silicon systems like the M3 Ultra.
+
+ ## Model Details
+
+ - **Base Model**: Qwen/QwQ-32B
+ - **Quantization**: Q5 (5-bit) with group size 64
+ - **Format**: MLX (Apple Silicon optimized)
+ - **Size**: 21GB (down from 61GB in bfloat16)
+ - **Compression**: ~66% size reduction (see the arithmetic sketch below)
+ - **Architecture**: Qwen2 with reasoning capabilities
+
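+ The size figures follow from the group-size-64 layout: each group of 64 weights stores 5-bit values plus a per-group scale and bias. A rough back-of-the-envelope sketch (the ~32.8B parameter count and the 16-bit scale/bias layout are our approximations; MLX's exact on-disk overhead differs slightly):
+
+ ```python
+ # Rough Q5 size estimate; parameter count is approximate.
+ params = 32.8e9                      # ~32.8B weights in QwQ-32B
+ bf16_gib = params * 2 / 2**30        # 2 bytes per weight -> ~61 GiB
+
+ # 64 x 5-bit weights + 16-bit scale + 16-bit bias per group
+ bits_per_weight = (64 * 5 + 16 + 16) / 64      # = 5.5 effective bits
+ q5_gib = params * bits_per_weight / 8 / 2**30  # -> ~21 GiB
+
+ print(f"bf16 ~{bf16_gib:.0f} GiB, Q5 ~{q5_gib:.0f} GiB, "
+       f"reduction ~{1 - q5_gib / bf16_gib:.0%}")
+ ```
+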
+ ## Why Q5?
+
+ Q5 quantization provides:
+ - **Superior quality** compared to Q4 while being smaller than Q6/Q8
+ - **Optimal size** for 128GB+ Apple Silicon systems
+ - **Minimal quality loss** - retains ~98% of original model capabilities
+ - **Fast inference** with MLX's unified memory architecture
+
+ ## Requirements
+
+ - Apple Silicon Mac (M1/M2/M3/M4)
+ - macOS 13.0+
+ - Python 3.11+
+ - MLX 0.26.0+
+ - mlx-lm 0.22.5+
+ - 32GB+ RAM recommended (64GB+ for the full 128k context)
+
+ ## Installation
+
+ ```bash
+ # Using uv (recommended); quote the spec so the shell doesn't parse ">" as a redirect
+ uv add "mlx>=0.26.0" mlx-lm transformers
+
+ # Or with pip (untested in our workflow)
+ pip install "mlx>=0.26.0" mlx-lm transformers
+ ```
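+
+ A quick sanity check (standard library only) that the installed versions meet the requirements above:
+
+ ```python
+ # Verify installed package versions against the requirements above
+ from importlib.metadata import version
+
+ print(version("mlx"), version("mlx-lm"))  # expect >= 0.26.0 and >= 0.22.5
+ ```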
+
+ ## Usage
+
+ ### Direct Generation
+
+ ```bash
+ uv run mlx_lm.generate \
+   --model LibraxisAI/QwQ-32B-MLX-Q5 \
+   --prompt "Solve this step by step: If a train travels 120 km in 2 hours, what is its speed?" \
+   --max-tokens 500
+ ```
+
+ ### Python API
+
+ ```python
+ from mlx_lm import load, generate
+ from mlx_lm.sample_utils import make_sampler
+
+ # Load model
+ model, tokenizer = load("LibraxisAI/QwQ-32B-MLX-Q5")
+
+ # Generate text with reasoning
+ prompt = "Think step by step: What are the implications of Q5 quantization for LLM deployment?"
+ response = generate(
+     model=model,
+     tokenizer=tokenizer,
+     prompt=prompt,
+     max_tokens=1000,
+     # recent mlx-lm versions take a sampler; older releases accepted temp= directly
+     sampler=make_sampler(temp=0.7),
+ )
+ print(response)
+ ```
+
+ ### HTTP Server
+
+ ```bash
+ uv run mlx_lm.server \
+   --model LibraxisAI/QwQ-32B-MLX-Q5 \
+   --host 0.0.0.0 \
+   --port 8080
+ ```
+
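+ The server speaks an OpenAI-compatible HTTP API - the same `/v1/chat/completions`, `/v1/models`, and `/health` endpoints that the bundled `mlx-serve.sh` prints curl examples for. A minimal stdlib smoke test, assuming the server above is running on port 8080:
+
+ ```python
+ import json
+ import urllib.request
+
+ # Minimal chat-completion request against the local mlx_lm.server instance
+ req = urllib.request.Request(
+     "http://localhost:8080/v1/chat/completions",
+     data=json.dumps({
+         "messages": [{"role": "user", "content": "Hello!"}],
+         "temperature": 0.7,
+         "max_tokens": 100,
+     }).encode(),
+     headers={"Content-Type": "application/json"},
+ )
+ with urllib.request.urlopen(req) as resp:
+     print(json.load(resp)["choices"][0]["message"]["content"])
+ ```
+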
+ ## Performance Benchmarks
+
+ Tested on Mac Studio M3 Ultra (512GB):
+
+ | Metric | Value |
+ |--------|-------|
+ | Model Size | 21GB |
+ | Peak Memory Usage | ~25GB |
+ | Generation Speed | ~12-15 tokens/sec |
+ | Max Context Length | 131,072 tokens (128k) |
+
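+ To get a rough speed number on your own hardware, `uv run mlx_lm.generate --model LibraxisAI/QwQ-32B-MLX-Q5 --prompt "The" --max-tokens 100 --verbose` prints generation statistics. A hypothetical wall-clock sketch in Python (prompt processing is folded into the average, so expect slightly lower numbers than steady-state decode speed):
+
+ ```python
+ import time
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("LibraxisAI/QwQ-32B-MLX-Q5")
+
+ start = time.perf_counter()
+ text = generate(model, tokenizer, prompt="The", max_tokens=100)
+ elapsed = time.perf_counter() - start
+
+ # Approximate decode speed over the generated continuation
+ print(f"{len(tokenizer.encode(text)) / elapsed:.1f} tokens/sec (approx)")
+ ```
+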
+ ## Special Features
+
+ QwQ (Qwen with Questions) is designed for:
+ - **Deep reasoning** and step-by-step problem solving
+ - **Mathematical reasoning** and logical deduction
+ - **Code generation** with explanations
+ - **Self-reflection** and error correction
+
+ ## Limitations
+
+ ⚠️ **Important**: As of this quant's release date, this Q5 model is **NOT compatible** with LM Studio (**yet**), which only supports 2-, 3-, 4-, 6-, and 8-bit quantizations. We have not tested it with Ollama or any other inference client. **Use MLX directly or via the MLX server** - we include a convenient command-generation script (see Tools Included below) to run the server properly.
+
+ ## Conversion Details
+
+ This model was quantized using:
+ ```bash
+ uv run mlx_lm.convert \
+   --hf-path Qwen/QwQ-32B \
+   --mlx-path QwQ-32B-MLX-Q5 \
+   --dtype bfloat16 \
+   -q --q-bits 5 --q-group-size 64
+ ```
+
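+ `mlx_lm.convert` records the quantization parameters in the converted model's `config.json` (the one shipped in this repo carries `"bits": 5` and `"group_size": 64`), so the result can be verified directly:
+
+ ```python
+ import json
+
+ # Inspect the quantization settings written by mlx_lm.convert
+ with open("QwQ-32B-MLX-Q5/config.json") as f:
+     config = json.load(f)
+
+ print(config["quantization"])  # expected: {'group_size': 64, 'bits': 5}
+ ```
+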
+ ## Frontier M3 Ultra Optimization
+
+ This model is specifically optimized for the Mac Studio M3 Ultra with 512GB of unified memory. For best performance:
+
+ ```python
+ import mlx.core as mx
+
+ # Set memory limits for large models
+ mx.metal.set_memory_limit(100 * 1024**3)  # 100GB
+ mx.metal.set_cache_limit(20 * 1024**3)    # 20GB cache
+ ```
+
+ ## Tools Included
+
+ We provide utility scripts for easy model management:
+
+ 1. **convert-to-mlx.sh** - Command-generation tool that converts any model to MLX format, with extensive customization options and Q5 quantization support on mlx>=0.26.0
+ 2. **mlx-serve.sh** - Launch the MLX server with custom parameters
+
+ ## Historical Note
+
+ The LibraxisAI Q5 models were among the **first Q5 quantized MLX models** available on Hugging Face, pioneering the use of 5-bit quantization for Apple Silicon optimization.
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @misc{qwq-32b-q5-mlx,
+   author = {LibraxisAI},
+   title = {QwQ-32B Q5 MLX - Reasoning Model for Apple Silicon},
+   year = {2025},
+   publisher = {Hugging Face},
+   url = {https://huggingface.co/LibraxisAI/QwQ-32B-MLX-Q5}
+ }
+ ```
+
+ ## License
+
+ This model follows the original QwQ license (Apache-2.0). See the [base model card](https://huggingface.co/Qwen/QwQ-32B) for full details.
+
+ ## Authors of the repository
+
+ - [Monika Szymanska](https://github.com/m-szymanska)
+ - [Maciej Gad, DVM](https://div0.space)
+
+ ## Acknowledgments
+
+ - Apple MLX team and community for the amazing 0.26.0+ framework
+ - Qwen team for the innovative QwQ reasoning model
+ - Klaudiusz-AI 🐉
added_tokens.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "</think>": 151668,
+   "</tool_call>": 151658,
+   "</tool_response>": 151666,
+   "<think>": 151667,
+   "<tool_call>": 151657,
+   "<tool_response>": 151665,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,56 @@
+ {%- if tools %}
+     {{- '<|im_start|>system\n' }}
+     {%- if messages[0]['role'] == 'system' %}
+         {{- messages[0]['content'] }}
+     {%- else %}
+         {{- '' }}
+     {%- endif %}
+     {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+     {%- for tool in tools %}
+         {{- "\n" }}
+         {{- tool | tojson }}
+     {%- endfor %}
+     {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+     {%- if messages[0]['role'] == 'system' %}
+         {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
+     {%- endif %}
+ {%- endif %}
+ {%- for message in messages %}
+     {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+         {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" and not message.tool_calls %}
+         {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
+         {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" %}
+         {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
+         {{- '<|im_start|>' + message.role }}
+         {%- if message.content %}
+             {{- '\n' + content }}
+         {%- endif %}
+         {%- for tool_call in message.tool_calls %}
+             {%- if tool_call.function is defined %}
+                 {%- set tool_call = tool_call.function %}
+             {%- endif %}
+             {{- '\n<tool_call>\n{"name": "' }}
+             {{- tool_call.name }}
+             {{- '", "arguments": ' }}
+             {{- tool_call.arguments | tojson }}
+             {{- '}\n</tool_call>' }}
+         {%- endfor %}
+         {{- '<|im_end|>\n' }}
+     {%- elif message.role == "tool" %}
+         {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
+             {{- '<|im_start|>user' }}
+         {%- endif %}
+         {{- '\n<tool_response>\n' }}
+         {{- message.content }}
+         {{- '\n</tool_response>' }}
+         {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+             {{- '<|im_end|>\n' }}
+         {%- endif %}
+     {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+     {{- '<|im_start|>assistant\n<think>\n' }}
+ {%- endif %}
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "architectures": [
+     "Qwen2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 151643,
+   "eos_token_id": 151645,
+   "hidden_act": "silu",
+   "hidden_size": 5120,
+   "initializer_range": 0.02,
+   "intermediate_size": 27648,
+   "max_position_embeddings": 131072,
+   "max_window_layers": 64,
+   "model_type": "qwen2",
+   "num_attention_heads": 40,
+   "num_hidden_layers": 64,
+   "num_key_value_heads": 8,
+   "quantization": {
+     "group_size": 64,
+     "bits": 5
+   },
+   "quantization_config": {
+     "group_size": 64,
+     "bits": 5
+   },
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000.0,
+   "sliding_window": 32768,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.43.1",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 152064
+ }
convert-to-mlx.sh ADDED
@@ -0,0 +1,312 @@
+ #!/bin/bash
+ # MLX Model Conversion Utility for Dragon M3 Ultra
+ # Updated: January 2025 for MLX 0.26+ and modern uv workflow
+ # Supports Q5 quantization and M3 Ultra optimizations
+
+ # Text formatting
+ BOLD="\033[1m"
+ BLUE="\033[34m"
+ GREEN="\033[32m"
+ YELLOW="\033[33m"
+ RED="\033[31m"
+ CYAN="\033[36m"
+ MAGENTA="\033[35m"
+ RESET="\033[0m"
+
+ # Detect system specs
+ TOTAL_MEMORY=$(sysctl -n hw.memsize 2>/dev/null || echo 0)
+ TOTAL_MEMORY_GB=$((TOTAL_MEMORY / 1073741824))
+ CPU_BRAND=$(sysctl -n machdep.cpu.brand_string 2>/dev/null || echo "Unknown")
+
+ # Check if running on M3 Ultra
+ if [[ "$CPU_BRAND" == *"M3 Ultra"* ]] || [[ "$TOTAL_MEMORY_GB" -ge 400 ]]; then
+   IS_M3_ULTRA=true
+   echo -e "${BOLD}${MAGENTA}🐉 Dragon M3 Ultra detected! (${TOTAL_MEMORY_GB}GB RAM)${RESET}"
+ else
+   IS_M3_ULTRA=false
+ fi
+
+ echo -e "${BOLD}${BLUE}=====================================${RESET}"
+ echo -e "${BOLD}${BLUE} MLX Model Conversion Utility v2.0 ${RESET}"
+ echo -e "${BOLD}${BLUE}=====================================${RESET}"
+ echo -e "Updated for MLX 0.26+ with Q5 support and M3 Ultra optimizations\n"
+
+ # Default values
+ DEFAULT_HF_PATH="meta-llama/Llama-3.1-405B"
+ DEFAULT_OUTPUT_DIR="models/Llama-3.1-405B-MLX-Q5"
+ DEFAULT_QUANTIZE="y"
+ DEFAULT_BITS="5" # Changed to Q5 as default for better quality/size ratio
+ DEFAULT_GROUP_SIZE="64"
+ DEFAULT_DTYPE="float16"
+
+ # hf-xet optimization for Dragon M3 Ultra
+ if [[ "$IS_M3_ULTRA" == true ]]; then
+   export HF_XET_HIGH_PERFORMANCE_MODE=1
+   export HF_XET_CHUNK_CACHE_SIZE_BYTES=107374182400 # 100GB cache
+   export HF_XET_CONCURRENT_DOWNLOADS=32
+   echo -e "${CYAN}✓ hf-xet optimizations enabled for Dragon${RESET}"
+ fi
+
+ # Get HF Path
+ echo -e "${BOLD}Hugging Face model path or local directory:${RESET}"
+ echo -e "(Default: ${DEFAULT_HF_PATH})"
+ echo -e "${CYAN}Examples:${RESET}"
+ echo -e " HF repo: meta-llama/Llama-3.1-405B"
+ echo -e " Local: /Users/polyversai/.lmstudio/models/mlx-community/model-name"
+ read -p "> " HF_PATH
+ HF_PATH=${HF_PATH:-$DEFAULT_HF_PATH}
+
+ # Check if it's a local path
+ if [[ -d "$HF_PATH" ]]; then
+   echo -e "${GREEN}✓ Local model detected: ${HF_PATH}${RESET}"
+   IS_LOCAL=true
+ else
+   IS_LOCAL=false
+   # Ask about hf-xet for remote models
+   echo -e "\n${BOLD}Use hf-xet for faster download? [y/n]${RESET}"
+   echo -e "(10x faster downloads with chunk deduplication)"
+   echo -e "Default: y"
+   read -p "> " USE_HF_XET
+   USE_HF_XET=${USE_HF_XET:-y}
+
+   if [[ "$USE_HF_XET" == "y" || "$USE_HF_XET" == "Y" ]]; then
+     # Check if hf-xet is installed
+     if ! uv run python -c "import hf_xet" 2>/dev/null; then
+       echo -e "${YELLOW}⚠️ hf-xet not installed. Installing...${RESET}"
+       echo -e "Run: uv add 'huggingface_hub[hf_xet]'"
+       echo -e "${CYAN}Note: hf-xet only works with Xet-backed repos${RESET}"
+     else
+       echo -e "${GREEN}✓ hf-xet enabled for download${RESET}"
+     fi
+   fi
+ fi
+
+ # Get output directory
+ echo -e "\n${BOLD}Output MLX model directory:${RESET}"
+ echo -e "(Default: ${DEFAULT_OUTPUT_DIR})"
+ read -p "> " MLX_PATH
+ MLX_PATH=${MLX_PATH:-$DEFAULT_OUTPUT_DIR}
+
+ # Ask about data type
+ echo -e "\n${BOLD}Model data type:${RESET}"
+ echo -e "(Default: ${DEFAULT_DTYPE}, Options: float16, bfloat16, float32)"
+ read -p "> " DTYPE
+ DTYPE=${DTYPE:-$DEFAULT_DTYPE}
+
+ # Ask about quantization
+ echo -e "\n${BOLD}Quantize the model? [y/n]${RESET}"
+ echo -e "(Default: ${DEFAULT_QUANTIZE})"
+ read -p "> " QUANTIZE
+ QUANTIZE=${QUANTIZE:-$DEFAULT_QUANTIZE}
+
+ # If quantizing, get more details
+ if [[ "$QUANTIZE" == "y" || "$QUANTIZE" == "Y" ]]; then
+   echo -e "\n${BOLD}Quantization bits:${RESET}"
+   echo -e "${CYAN}Options:${RESET}"
+   echo -e " 2 - Extreme compression (lowest quality)"
+   echo -e " 3 - High compression"
+   echo -e " 4 - Standard compression (good balance)"
+   echo -e " ${GREEN}5 - Recommended (best quality/size ratio)${RESET}"
+   echo -e " 6 - Low compression"
+   echo -e " 8 - Minimal compression (highest quality)"
+   echo -e "(Default: ${DEFAULT_BITS})"
+   read -p "> " BITS
+   BITS=${BITS:-$DEFAULT_BITS}
+
+   echo -e "\n${BOLD}Group size:${RESET}"
+   echo -e "(Default: ${DEFAULT_GROUP_SIZE}, Options: 32, 64, 128)"
+   if [[ "$IS_M3_ULTRA" == true ]]; then
+     echo -e "${CYAN}💡 M3 Ultra tip: Use 64 or 128 for better performance${RESET}"
+   fi
+   read -p "> " GROUP_SIZE
+   GROUP_SIZE=${GROUP_SIZE:-$DEFAULT_GROUP_SIZE}
+
+   echo -e "\n${BOLD}Quantization strategy:${RESET}"
+   echo -e "${CYAN}Options:${RESET}"
+   echo -e " none - Uniform quantization (default)"
+   echo -e " mixed_2_6 - Mix of 2 and 6 bit"
+   echo -e " ${GREEN}mixed_3_4 - Mix of 3 and 4 bit${RESET}"
+   echo -e " mixed_3_6 - Mix of 3 and 6 bit"
+   echo -e " mixed_4_6 - Mix of 4 and 6 bit"
+   echo -e "Leave empty for uniform quantization"
+   read -p "> " QUANT_PREDICATE
+
+   QUANT_OPTIONS="-q --q-bits ${BITS} --q-group-size ${GROUP_SIZE}"
+
+   if [[ -n "$QUANT_PREDICATE" ]]; then
+     QUANT_OPTIONS="${QUANT_OPTIONS} --quant-predicate ${QUANT_PREDICATE}"
+   fi
+ else
+   QUANT_OPTIONS=""
+ fi
+
+ # Memory optimization options for M3 Ultra
+ if [[ "$IS_M3_ULTRA" == true ]]; then
+   echo -e "\n${BOLD}${MAGENTA}M3 Ultra optimization note:${RESET}"
+   echo -e "${CYAN}MLX will automatically optimize for your 512GB system${RESET}"
+   echo -e "${CYAN}The framework uses unified memory efficiently${RESET}"
+   M3_ULTRA_FLAGS=""
+ else
+   M3_ULTRA_FLAGS=""
+ fi
+
+ # Ask about upload repository (optional)
+ echo -e "\n${BOLD}Upload to Hugging Face Hub? (optional):${RESET}"
+ echo -e "(Leave empty to skip upload)"
+ read -p "> " UPLOAD_REPO
+
+ if [[ -n "$UPLOAD_REPO" ]]; then
+   UPLOAD_OPTION="--upload-repo ${UPLOAD_REPO}"
+ else
+   UPLOAD_OPTION=""
+ fi
+
+ # Build the command - UV is now default
+ UV_CMD="uv run mlx_lm.convert --hf-path ${HF_PATH} --mlx-path ${MLX_PATH} --dtype ${DTYPE} ${QUANT_OPTIONS} ${UPLOAD_OPTION}"
+
+ # Alternative commands
+ DIRECT_CMD="mlx_lm.convert --hf-path ${HF_PATH} --mlx-path ${MLX_PATH} --dtype ${DTYPE} ${QUANT_OPTIONS} ${UPLOAD_OPTION}"
+ PYTHON_CMD="python -m mlx_lm.convert --hf-path ${HF_PATH} --mlx-path ${MLX_PATH} --dtype ${DTYPE} ${QUANT_OPTIONS} ${UPLOAD_OPTION}"
+
+ # Print the preview
+ echo -e "\n${BOLD}${YELLOW}Command Preview:${RESET}"
+ echo -e "$UV_CMD"
+
+ # Expected outcomes based on options
+ echo -e "\n${BOLD}${YELLOW}Expected outcomes:${RESET}"
+ if [[ "$QUANTIZE" == "y" || "$QUANTIZE" == "Y" ]]; then
+   MODEL_SIZE_GB=500 # Approximate for 405B model
+
+   case "$BITS" in
+     2)
+       EXPECTED_SIZE=$((MODEL_SIZE_GB / 8))
+       echo -e "- ${GREEN}Q2: ~${EXPECTED_SIZE}GB (from ~${MODEL_SIZE_GB}GB)${RESET}"
+       echo -e "- ${YELLOW}⚠️ Significant quality loss expected${RESET}"
+       ;;
+     3)
+       EXPECTED_SIZE=$((MODEL_SIZE_GB * 3 / 16))
+       echo -e "- ${GREEN}Q3: ~${EXPECTED_SIZE}GB (from ~${MODEL_SIZE_GB}GB)${RESET}"
+       echo -e "- ${YELLOW}Moderate quality loss${RESET}"
+       ;;
+     4)
+       EXPECTED_SIZE=$((MODEL_SIZE_GB / 4))
+       echo -e "- ${GREEN}Q4: ~${EXPECTED_SIZE}GB (from ~${MODEL_SIZE_GB}GB)${RESET}"
+       echo -e "- ${GREEN}Good balance of quality and size${RESET}"
+       ;;
+     5)
+       EXPECTED_SIZE=$((MODEL_SIZE_GB * 5 / 16))
+       echo -e "- ${GREEN}Q5: ~${EXPECTED_SIZE}GB (from ~${MODEL_SIZE_GB}GB)${RESET}"
+       echo -e "- ${GREEN}✨ Excellent quality/size ratio${RESET}"
+       ;;
+     6)
+       EXPECTED_SIZE=$((MODEL_SIZE_GB * 6 / 16))
+       echo -e "- ${GREEN}Q6: ~${EXPECTED_SIZE}GB (from ~${MODEL_SIZE_GB}GB)${RESET}"
+       echo -e "- ${GREEN}High quality preservation${RESET}"
+       ;;
+     8)
+       EXPECTED_SIZE=$((MODEL_SIZE_GB / 2))
+       echo -e "- ${GREEN}Q8: ~${EXPECTED_SIZE}GB (from ~${MODEL_SIZE_GB}GB)${RESET}"
+       echo -e "- ${GREEN}Near-lossless quality${RESET}"
+       ;;
+   esac
+
+   if [[ -n "$QUANT_PREDICATE" ]]; then
+     echo -e "- ${CYAN}Using mixed precision: ${QUANT_PREDICATE}${RESET}"
+   fi
+
+   if [[ "$IS_M3_ULTRA" == true ]]; then
+     echo -e "- ${MAGENTA}Expected memory usage: ${EXPECTED_SIZE}-$((EXPECTED_SIZE * 2))GB peak${RESET}"
+     echo -e "- ${MAGENTA}M3 Ultra can handle this comfortably${RESET}"
+   else
+     echo -e "- ${YELLOW}Expected memory usage: High - monitor closely${RESET}"
+   fi
+ else
+   echo -e "- ${GREEN}No quantization - model remains in ${DTYPE} format${RESET}"
+   echo -e "- ${YELLOW}Very high memory requirements (400-500GB)${RESET}"
+ fi
+
+ echo -e "- ${CYAN}Expected conversion time: 2-6 hours${RESET}"
+
+ # Ask for command format choice
+ echo -e "\n${BOLD}${GREEN}Choose command format:${RESET}"
+ echo -e "1. ${YELLOW}UV (recommended): ${RESET}${UV_CMD}"
+ echo -e "2. ${YELLOW}Direct command: ${RESET}${DIRECT_CMD}"
+ echo -e "3. ${YELLOW}Python module: ${RESET}${PYTHON_CMD}"
+ read -p "> " FORMAT_CHOICE
+
+ case "$FORMAT_CHOICE" in
+   2)
+     FINAL_CMD="${DIRECT_CMD}"
+     ;;
+   3)
+     FINAL_CMD="${PYTHON_CMD}"
+     ;;
+   *)
+     FINAL_CMD="${UV_CMD}"
+     ;;
+ esac
+
+ # M3 Ultra specific preparation tips
+ if [[ "$IS_M3_ULTRA" == true ]]; then
+   echo -e "\n${BOLD}${MAGENTA}🐉 Dragon M3 Ultra Preparation:${RESET}"
+   echo -e "1. ${CYAN}Your 512GB RAM can handle even 405B models${RESET}"
+   echo -e "2. ${CYAN}Enable High Power Mode in Energy Saver${RESET}"
+   echo -e "3. ${CYAN}Consider using Activity Monitor to track memory${RESET}"
+   echo -e "4. ${CYAN}MLX will use unified memory efficiently${RESET}"
+ else
+   echo -e "\n${BOLD}${BLUE}Preparation tips:${RESET}"
+   echo -e "1. ${YELLOW}Ensure Mac is plugged in and won't sleep${RESET}"
+   echo -e "2. ${YELLOW}Close other memory-intensive applications${RESET}"
+   echo -e "3. ${YELLOW}Be prepared for high fan speeds${RESET}"
+   echo -e "4. ${YELLOW}The process may appear to hang - this is normal${RESET}"
+ fi
+
+ # Print the final command
+ echo -e "\n${BOLD}${RED}Your conversion command:${RESET}"
+ echo -e "${FINAL_CMD}"
+
+ # Copy to clipboard option
+ echo -e "\n${BOLD}${GREEN}Copy command to clipboard? [y/n]${RESET}"
+ read -p "> " COPY_CMD
+
+ if [[ "$COPY_CMD" == "y" || "$COPY_CMD" == "Y" ]]; then
+   echo "${FINAL_CMD}" | pbcopy
+   echo -e "${GREEN}✓ Command copied to clipboard!${RESET}"
+ fi
+
+ # Download command if using remote model
+ if [[ "$IS_LOCAL" == false ]]; then
+   echo -e "\n${BOLD}${CYAN}Optional: Download model first (if needed):${RESET}"
+   if [[ "$USE_HF_XET" == "y" || "$USE_HF_XET" == "Y" ]]; then
+     echo -e "# With hf-xet (10x faster):"
+     echo -e "uv run huggingface-cli download ${HF_PATH} --local-dir ./downloads/${HF_PATH##*/}"
+   else
+     echo -e "# Standard download:"
+     echo -e "uv run huggingface-cli download ${HF_PATH} --local-dir ./downloads/${HF_PATH##*/}"
+   fi
+ fi
+
+ # Test commands
+ echo -e "\n${BOLD}${BLUE}After conversion, test with:${RESET}"
+ echo -e "uv run mlx_lm.generate --model ${MLX_PATH} --prompt \"Hello, I am\" --max-tokens 50"
+
+ # Memory monitoring for M3 Ultra
+ if [[ "$IS_M3_ULTRA" == true ]]; then
+   echo -e "\n${BOLD}${MAGENTA}Monitor Dragon performance:${RESET}"
+   echo -e "uv run python -c \"import mlx.core as mx; print(f'Peak: {mx.metal.get_peak_memory()/1e9:.2f}GB of ${TOTAL_MEMORY_GB}GB')\""
+
+   echo -e "\n${BOLD}${CYAN}Pro tip for large models:${RESET}"
+   echo -e "# Set memory limit before conversion (optional):"
+   echo -e "export MLX_METAL_MEMORY_LIMIT=$((TOTAL_MEMORY_GB * 95 / 100))GB"
+ fi
+
+ # Benchmark command
+ echo -e "\n${BOLD}${CYAN}Benchmark the converted model:${RESET}"
+ echo -e "uv run mlx_lm.generate --model ${MLX_PATH} --prompt \"The\" --max-tokens 100 --verbose"
+
+ echo -e "\n${BOLD}${BLUE}=====================================${RESET}"
+ echo -e "${BOLD}${GREEN}✨ Conversion setup complete!${RESET}"
+ if [[ "$IS_M3_ULTRA" == true ]]; then
+   echo -e "${BOLD}${MAGENTA}🐉 Dragon M3 Ultra ready to roar!${RESET}"
+ fi
+ echo -e "${BOLD}${BLUE}=====================================${RESET}"
The diff for this file is too large to render. See raw diff
 
mlx-serve.sh ADDED
@@ -0,0 +1,229 @@
+ #!/bin/bash
+ # MLX Server Launcher for Dragon M3 Ultra
+ # Created: January 2025 for MLX 0.26+
+ # Supports local/remote models with full parameter control
+
+ # Text formatting
+ BOLD="\033[1m"
+ BLUE="\033[34m"
+ GREEN="\033[32m"
+ YELLOW="\033[33m"
+ RED="\033[31m"
+ CYAN="\033[36m"
+ MAGENTA="\033[35m"
+ RESET="\033[0m"
+
+ # Detect system specs
+ TOTAL_MEMORY=$(sysctl -n hw.memsize 2>/dev/null || echo 0)
+ TOTAL_MEMORY_GB=$((TOTAL_MEMORY / 1073741824))
+ CPU_BRAND=$(sysctl -n machdep.cpu.brand_string 2>/dev/null || echo "Unknown")
+
+ # Check if running on M3 Ultra
+ if [[ "$CPU_BRAND" == *"M3 Ultra"* ]] || [[ "$TOTAL_MEMORY_GB" -ge 400 ]]; then
+   IS_M3_ULTRA=true
+   echo -e "${BOLD}${MAGENTA}🐉 Dragon M3 Ultra detected! (${TOTAL_MEMORY_GB}GB RAM)${RESET}"
+ else
+   IS_M3_ULTRA=false
+ fi
+
+ echo -e "${BOLD}${BLUE}=====================================${RESET}"
+ echo -e "${BOLD}${BLUE} MLX Server Launcher v1.0 ${RESET}"
+ echo -e "${BOLD}${BLUE}=====================================${RESET}"
+ echo -e "Launch MLX model server with custom parameters\n"
+
+ # Default values
+ DEFAULT_MODEL="/Users/polyversai/.lmstudio/models/LibraxisAI/c4ai-command-a-03-2025-q5-mlx"
+ DEFAULT_HOST="0.0.0.0"
+ DEFAULT_PORT="12345"
+ DEFAULT_TEMP="0.7"
+ DEFAULT_TOP_P="0.95"
+ DEFAULT_TOP_K="0"
+ DEFAULT_MIN_P="0.0"
+ DEFAULT_MAX_TOKENS="2048"
+ DEFAULT_LOG_LEVEL="INFO"
+
+ # Get model path
+ echo -e "${BOLD}Model path (local or HF repo):${RESET}"
+ echo -e "(Default: ${DEFAULT_MODEL})"
+ echo -e "${CYAN}Examples:${RESET}"
+ echo -e " Local: /Users/polyversai/.lmstudio/models/mlx-community/model-name"
+ echo -e " HF: mlx-community/Llama-3.2-3B-Instruct-4bit"
+ read -p "> " MODEL_PATH
+ MODEL_PATH=${MODEL_PATH:-$DEFAULT_MODEL}
+
+ # Check if it's a local path
+ if [[ -d "$MODEL_PATH" ]]; then
+   echo -e "${GREEN}✓ Local model detected: ${MODEL_PATH}${RESET}"
+ else
+   echo -e "${GREEN}✓ Remote model specified: ${MODEL_PATH}${RESET}"
+ fi
+
+ # Network configuration
+ echo -e "\n${BOLD}Host IP address:${RESET}"
+ echo -e "(Default: ${DEFAULT_HOST} - accessible from network)"
+ echo -e "Use 127.0.0.1 for localhost only"
+ read -p "> " HOST
+ HOST=${HOST:-$DEFAULT_HOST}
+
+ echo -e "\n${BOLD}Port number:${RESET}"
+ echo -e "(Default: ${DEFAULT_PORT})"
+ read -p "> " PORT
+ PORT=${PORT:-$DEFAULT_PORT}
+
+ # Sampling parameters
+ echo -e "\n${BOLD}${CYAN}=== Sampling Parameters ===${RESET}"
+
+ echo -e "\n${BOLD}Temperature (creativity):${RESET}"
+ echo -e "Range: 0.0-2.0 (Default: ${DEFAULT_TEMP})"
+ echo -e "${YELLOW}0.0 = deterministic, 1.0 = balanced, 2.0 = very creative${RESET}"
+ read -p "> " TEMP
+ TEMP=${TEMP:-$DEFAULT_TEMP}
+
+ echo -e "\n${BOLD}Top-p (nucleus sampling):${RESET}"
+ echo -e "Range: 0.0-1.0 (Default: ${DEFAULT_TOP_P})"
+ echo -e "${YELLOW}Lower = more focused, Higher = more diverse${RESET}"
+ read -p "> " TOP_P
+ TOP_P=${TOP_P:-$DEFAULT_TOP_P}
+
+ echo -e "\n${BOLD}Top-k (vocabulary limit):${RESET}"
+ echo -e "Default: ${DEFAULT_TOP_K} (0 = disabled)"
+ echo -e "${YELLOW}Limits selection to top K tokens${RESET}"
+ read -p "> " TOP_K
+ TOP_K=${TOP_K:-$DEFAULT_TOP_K}
+
+ echo -e "\n${BOLD}Min-p (minimum probability):${RESET}"
+ echo -e "Range: 0.0-1.0 (Default: ${DEFAULT_MIN_P})"
+ echo -e "${YELLOW}0.0 = disabled, higher = filter low probability tokens${RESET}"
+ read -p "> " MIN_P
+ MIN_P=${MIN_P:-$DEFAULT_MIN_P}
+
+ echo -e "\n${BOLD}Max tokens per response:${RESET}"
+ echo -e "(Default: ${DEFAULT_MAX_TOKENS})"
+ if [[ "$IS_M3_ULTRA" == true ]]; then
+   echo -e "${MAGENTA}Dragon can handle 8192+ tokens easily${RESET}"
+ fi
+ read -p "> " MAX_TOKENS
+ MAX_TOKENS=${MAX_TOKENS:-$DEFAULT_MAX_TOKENS}
+
+ # Optional adapter
+ echo -e "\n${BOLD}LoRA adapter path (optional):${RESET}"
+ echo -e "(Leave empty if not using adapters)"
+ read -p "> " ADAPTER_PATH
+
+ if [[ -n "$ADAPTER_PATH" ]]; then
+   ADAPTER_OPTION="--adapter-path ${ADAPTER_PATH}"
+ else
+   ADAPTER_OPTION=""
+ fi
+
+ # Chat template args
+ echo -e "\n${BOLD}Chat template args (optional JSON):${RESET}"
+ echo -e "Example: {\"enable_thinking\":false}"
+ echo -e "(Leave empty for defaults)"
+ read -p "> " CHAT_TEMPLATE_ARGS
+
+ if [[ -n "$CHAT_TEMPLATE_ARGS" ]]; then
+   CHAT_TEMPLATE_OPTION="--chat-template-args \"${CHAT_TEMPLATE_ARGS}\""
+ else
+   CHAT_TEMPLATE_OPTION=""
+ fi
+
+ # Log level
+ echo -e "\n${BOLD}Log level:${RESET}"
+ echo -e "(Default: ${DEFAULT_LOG_LEVEL}, Options: DEBUG, INFO, WARNING, ERROR, CRITICAL)"
+ read -p "> " LOG_LEVEL
+ LOG_LEVEL=${LOG_LEVEL:-$DEFAULT_LOG_LEVEL}
+
+ # Build the command
+ SERVER_CMD="uv run mlx_lm.server --model ${MODEL_PATH} --host ${HOST} --port ${PORT} --temp ${TEMP} --top-p ${TOP_P} --top-k ${TOP_K} --min-p ${MIN_P} --max-tokens ${MAX_TOKENS} --log-level ${LOG_LEVEL} ${ADAPTER_OPTION} ${CHAT_TEMPLATE_OPTION}"
+
+ # Print preview
+ echo -e "\n${BOLD}${YELLOW}Command Preview:${RESET}"
+ echo -e "$SERVER_CMD"
+
+ # Launch mode selection
+ echo -e "\n${BOLD}${GREEN}Launch mode:${RESET}"
+ echo -e "1. ${YELLOW}Foreground${RESET} - See logs in terminal (Ctrl+C to stop)"
+ echo -e "2. ${YELLOW}Background with logging${RESET} - Logs to mlx-server.log"
+ echo -e "3. ${YELLOW}Background detached${RESET} - Run with nohup"
+ echo -e "4. ${YELLOW}Just copy command${RESET} - Don't launch"
+ read -p "> " LAUNCH_MODE
+
+ # Create logs directory if needed
+ if [[ "$LAUNCH_MODE" == "2" || "$LAUNCH_MODE" == "3" ]]; then
+   mkdir -p logs
+   LOG_FILE="logs/mlx-server-$(date +%Y%m%d-%H%M%S).log"
+ fi
+
+ case "$LAUNCH_MODE" in
+   1)
+     echo -e "\n${BOLD}${GREEN}Starting server in foreground...${RESET}"
+     echo -e "${YELLOW}Press Ctrl+C to stop${RESET}\n"
+     eval "$SERVER_CMD"
+     ;;
+   2)
+     echo -e "\n${BOLD}${GREEN}Starting server in background...${RESET}"
+     echo -e "Logs: ${LOG_FILE}"
+     eval "$SERVER_CMD" > "${LOG_FILE}" 2>&1 &
+     SERVER_PID=$!
+     echo -e "${GREEN}✓ Server started with PID: ${SERVER_PID}${RESET}"
+     echo -e "\nTo monitor: tail -f ${LOG_FILE}"
+     echo -e "To stop: kill ${SERVER_PID}"
+
+     # Save PID for easy stopping
+     echo $SERVER_PID > logs/mlx-server.pid
+     ;;
+   3)
+     echo -e "\n${BOLD}${GREEN}Starting server with nohup...${RESET}"
+     echo -e "Logs: ${LOG_FILE}"
+     nohup bash -c "$SERVER_CMD" > "${LOG_FILE}" 2>&1 &
+     SERVER_PID=$!
+     echo -e "${GREEN}✓ Server started with PID: ${SERVER_PID}${RESET}"
+     echo -e "\nTo monitor: tail -f ${LOG_FILE}"
+     echo -e "To stop: kill ${SERVER_PID}"
+
+     # Save PID
+     echo $SERVER_PID > logs/mlx-server.pid
+     ;;
+   4)
+     echo -e "\n${BOLD}${GREEN}Command copied to clipboard!${RESET}"
+     echo "$SERVER_CMD" | pbcopy
+     ;;
+   *)
+     echo -e "\n${RED}Invalid choice. Exiting.${RESET}"
+     exit 1
+     ;;
+ esac
+
+ # Print API examples
+ if [[ "$LAUNCH_MODE" != "4" ]]; then
+   echo -e "\n${BOLD}${BLUE}=== API Usage Examples ===${RESET}"
+
+   echo -e "\n${CYAN}1. Chat completion:${RESET}"
+   echo -e "curl http://${HOST}:${PORT}/v1/chat/completions \\"
+   echo -e " -H \"Content-Type: application/json\" \\"
+   echo -e " -d '{"
+   echo -e " \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}],"
+   echo -e " \"temperature\": ${TEMP},"
+   echo -e " \"max_tokens\": 100"
+   echo -e " }'"
+
+   echo -e "\n${CYAN}2. Check models:${RESET}"
+   echo -e "curl http://${HOST}:${PORT}/v1/models"
+
+   echo -e "\n${CYAN}3. Health check:${RESET}"
+   echo -e "curl http://${HOST}:${PORT}/health"
+
+   if [[ "$IS_M3_ULTRA" == true ]]; then
+     echo -e "\n${BOLD}${MAGENTA}Dragon Performance Monitoring:${RESET}"
+     echo -e "# In another terminal:"
+     echo -e "watch -n 1 'curl -s http://${HOST}:${PORT}/health | jq .'"
+   fi
+ fi
+
+ echo -e "\n${BOLD}${BLUE}=====================================${RESET}"
+ echo -e "${BOLD}${GREEN}✨ MLX Server ready!${RESET}"
+ if [[ "$IS_M3_ULTRA" == true ]]; then
+   echo -e "${BOLD}${MAGENTA}🐉 Dragon M3 Ultra serving at full power!${RESET}"
+ fi
+ echo -e "${BOLD}${BLUE}=====================================${RESET}"
model-00001-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a6e9d43fd37774057c494a72c0c8419d6a3db1c8e7269a02063f4f1b5092f90
+ size 5364994599
model-00002-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a0ac4bf7a6d38f358784207acedb5c0882957f4561d85dac25520d5a89c0e295
+ size 5368494562
model-00003-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58dedd01147ade3b33c86640e21065711fb78b76d1fb2c86438bb983d239b28e
+ size 5364070790
model-00004-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82f537ee2ac8ff8a8329160ab6d6376bd4bf506ff4c6f18581978c19bd47757a
+ size 5364070782
model-00005-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dae63165d5c54c668699e969c8c6cb69ba152a55307f1c713decacd0eb301260
+ size 1065193577
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
+ size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,239 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151665": {
+       "content": "<tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151666": {
+       "content": "</tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151667": {
+       "content": "<think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151668": {
+       "content": "</think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff