Update README.md

II-Search-4B is designed for:
- Educational and research applications requiring factual accuracy

## Usage

To deploy and interact with the II-Search-4B model effectively, follow these options:

1. Serve the model using vLLM or SGLang

Use the following command to serve the model with vLLM (adjust parameters as needed for your hardware setup):

```bash
vllm serve Intelligent-Internet/II-Search-4B --served-model-name II-Search-4B --tensor-parallel-size 8 --enable-reasoning --reasoning-parser deepseek_r1 --rope-scaling '{"rope_type":"yarn","factor":1.5,"original_max_position_embeddings":98304}' --max-model-len 131072
```

This configuration enables distributed tensor parallelism across 8 GPUs, reasoning capabilities, custom RoPE scaling for extended context, and a maximum context length of 131,072 tokens.
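
Once the server is up, it exposes an OpenAI-compatible API. Below is a minimal query sketch, assuming vLLM's default endpoint at `http://localhost:8000/v1`; the example question and sampling values are illustrative:

```python
# Query the locally served II-Search-4B through vLLM's OpenAI-compatible API.
# Assumes the default vLLM port (8000); adjust base_url if you changed it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="II-Search-4B",  # matches --served-model-name above
    messages=[{"role": "user", "content": "Who founded the Internet Archive?"}],
    max_tokens=2048,
)
print(response.choices[0].message.content)
```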

2. Integrate web_search and web_visit tools

Equip the served model with web_search and web_visit tools to enable internet-aware functionality. Alternatively, use middleware such as MCP for tool integration; see this example repository: https://github.com/hoanganhpham1006/mcp-server-template. A sketch of possible tool definitions is shown below.
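
This README does not pin down the exact tool schemas, so the following is a hypothetical sketch in the OpenAI function-calling format; the parameter names (`query`, `url`) and descriptions are assumptions, not the official schemas:

```python
# Hypothetical definitions for the web_search and web_visit tools in the
# OpenAI function-calling format. Field names here are illustrative only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web and return a list of results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query."}
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "web_visit",
            "description": "Fetch and return the content of a web page.",
            "parameters": {
                "type": "object",
                "properties": {
                    "url": {"type": "string", "description": "Page URL to visit."}
                },
                "required": ["url"],
            },
        },
    },
]
```

Pass `tools=tools` to `client.chat.completions.create(...)`, execute any tool calls the model returns, and append the results as `tool` messages before requesting the final answer.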

## Host on macOS with MLX for local use

As an alternative, Apple Silicon users can host the quantized [II-Search-4B-MLX](https://huggingface.co/Intelligent-Internet/II-Search-4B-MLX) version locally on a Mac and interact with it through user-friendly interfaces such as LM Studio or Ollama Desktop.
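
For a scripted alternative to the GUI apps, the MLX build can also be driven from Python with the `mlx-lm` package; a minimal sketch, assuming `pip install mlx-lm` (the prompt is illustrative):

```python
# Minimal mlx-lm sketch for Apple Silicon; assumes `pip install mlx-lm`.
from mlx_lm import load, generate

model, tokenizer = load("Intelligent-Internet/II-Search-4B-MLX")

# Build a chat-formatted prompt with the model's own chat template.
messages = [{"role": "user", "content": "What is the capital of Australia?"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=1024))
```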

## Recommended Generation Parameters

```python
generate_cfg = {
    # ...
    'repetition_penalty': 1.1,
    'max_tokens': 2048
}
```

- For queries that need a short, accurate answer, add the following phrase: "\n\nPlease reason step-by-step and put the final answer within \\boxed{}." A usage sketch follows below.
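
As an illustration, the suffix can be appended programmatically before sending the query to the served model; the endpoint and question are assumptions carried over from the vLLM example above:

```python
# Append the boxed-answer instruction to a short-answer query and send it
# to the locally served model (assumes the vLLM server from the Usage section).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

SHORT_ANSWER_SUFFIX = (
    "\n\nPlease reason step-by-step and put the final answer within \\boxed{}."
)

question = "In what year was the Eiffel Tower completed?"
response = client.chat.completions.create(
    model="II-Search-4B",
    messages=[{"role": "user", "content": question + SHORT_ANSWER_SUFFIX}],
    max_tokens=2048,
    extra_body={"repetition_penalty": 1.1},  # vLLM-specific sampling parameter
)
print(response.choices[0].message.content)  # final answer appears inside \boxed{}
```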