Update README.md
--- a/README.md
+++ b/README.md
@@ -33,38 +33,41 @@ Jan-v1's strategic scaling has resulted in a notable performance uplift, particu
 
 The **91.2% accuracy on SimpleQA** underscores Jan-v1's advanced ability to precisely retrieve and synthesize information, showcasing the effectiveness of our model scaling approach for agentic intelligence.
 
-##
+## Quick Start
 
-Jan-v1 is designed for flexible deployment, compatible with various inference engines including vLLM, llama.cpp, and local applications like Jan and LMStudio. Its integration with search APIs and web browsing tools is facilitated through the MCP.
-
-### Deployment
 ### Integration with Jan App
 
 Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.
 
-
+### Local Deployment
+
+**Using vLLM:**
 ```bash
-vllm serve Menlo/Jan-v1 \
+vllm serve Menlo/Jan-v1 \
   --host 0.0.0.0 \
   --port 1234 \
   --enable-auto-tool-choice \
-  --tool-call-parser hermes
+  --tool-call-parser hermes
 ```
 
-
+**Using llama.cpp:**
 ```bash
-llama-server
+llama-server --model jan-v1.gguf \
+  --host 0.0.0.0 \
+  --port 1234
 ```
 
-### Recommended
+### Recommended Parameters
 
 ```yaml
-
-
-
-
+temperature: 0.7
+top_p: 0.9
+top_k: 20
+min_p: 0.0
+max_tokens: 2048
 ```
 
+
 ## 🤝 Community & Support
 
 - **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-v1/discussions) <!-- Update with your HF model ID -->
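For reference, both serving commands in this hunk expose an OpenAI-compatible HTTP API on port 1234, so a local deployment can be smoke-tested roughly as sketched below. The endpoint path follows the usual vLLM/llama-server convention; the `model` value and the prompt are illustrative assumptions, and the sampling fields simply mirror the recommended parameters (optional fields such as `top_k` and `min_p` are applied only if the server accepts them).

```bash
# Minimal smoke test against the OpenAI-compatible endpoint started by either
# `vllm serve` or `llama-server` above (port 1234 as in the README commands).
# The model name, prompt, and endpoint path are illustrative assumptions;
# sampling values mirror the "Recommended Parameters" block.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Menlo/Jan-v1",
    "messages": [
      {"role": "user", "content": "In one sentence, what is SimpleQA?"}
    ],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048
  }'
```

The same request shape works against either backend; replace `localhost` with the host's address if the server is bound to `0.0.0.0` on another machine.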