Update README.md
## Overview

Introducing **Jan-v1**, the first release in the **Jan Family** – specifically designed for advanced agentic reasoning and complex problem-solving within the [Jan App](https://jan.ai/). Building on the innovative agentic capabilities of our earlier **Lucy** model, Jan-v1 represents a significant leap forward through strategic model scaling.

By leveraging a larger **Qwen3-4B** base, Jan-v1 demonstrates profoundly enhanced 'thinking' and reasoning capabilities. This architectural evolution is designed to deliver superior performance on complex agentic tasks, setting a new benchmark for accessible, high-performance AI.
## What Jan-v1 Excels At
- **🧠 Enhanced Agentic Reasoning**: With its larger parameter count, Jan-v1 excels at deeper reasoning, complex problem-solving, and sophisticated multi-step agentic planning.
- **🎯 Superior Question Answering**: Achieves an impressive **91.2% accuracy on SimpleQA**, significantly advancing performance for factoid question answering.
- **🔍 Advanced Agentic Web Search**: Inherits and refines Lucy's strong capabilities for agentic web search and lightweight browsing via MCP-enabled tools.
- **📱 Optimized for Jan App**: Specifically engineered to provide unique and highly optimized support for the Jan App, ensuring seamless integration and superior user experience.
## Evaluation
Jan-v1's strategic scaling has resulted in a notable performance uplift, particularly evident in its "thinking" and reasoning prowess. Following the established MCP benchmark methodology, Jan-v1 sets a new standard for models in its class.
Jan-v1 is designed for flexible deployment, compatible with various inference engines including vLLM, llama.cpp, and local applications like Jan and LMStudio. Its integration with search APIs and web browsing tools is facilitated through the MCP.
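Once served through an engine such as vLLM or llama.cpp, the model is typically reachable over the OpenAI-compatible chat-completions API. A minimal sketch of building such a request (the endpoint URL and model ID below are assumptions for illustration, not taken from this README):

```python
import json
import urllib.request

# Assumed local endpoint exposed by an OpenAI-compatible server
# (e.g. `vllm serve` or llama.cpp's server); adjust host/port as needed.
ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL_ID = "janhq/Jan-v1-4B"  # hypothetical model ID

def build_request(question: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for Jan-v1."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.6,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Which MCP tools are available?")
# With a server running, urllib.request.urlopen(req) returns the completion.
```

The same request shape works unchanged across vLLM, llama.cpp, and LMStudio, since all three expose the OpenAI-compatible endpoint.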
### Deployment
Deploy using vLLM:

```bash
# Minimal sketch: serve the model with an OpenAI-compatible API.
# The model ID below is an assumption; substitute the actual repo ID.
vllm serve janhq/Jan-v1-4B
```

### Integration with Jan App

Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.