---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
---
# Jan-v1: The Inaugural Model of the Jan Family – Redefining Agentic Reasoning

## Overview
Jan-v1 is the first release in the Jan Family, designed for advanced agentic reasoning and complex problem-solving within the Jan App. Building on the agentic capabilities of our earlier Lucy model, Jan-v1 represents a significant step forward through model scaling.

By moving to the larger Qwen3-4B-Thinking base, Jan-v1 delivers markedly stronger "thinking" and reasoning capabilities. This scaling is designed to improve performance on complex agentic tasks while keeping the model accessible to run locally.
## Evaluation
Jan-v1's strategic scaling has resulted in a notable performance uplift, particularly evident in its "thinking" and reasoning prowess. Following the established MCP benchmark methodology, Jan-v1 sets a new standard for models in its class.
| Model | SimpleQA Accuracy |
|---|---|
| Jan-v1 (Qwen3-4B) | 91.2% |
| Lucy (Qwen3-1.7B) | [Lucy's Score] |
| DeepSeek-v3 (comparison from Lucy) | [DeepSeek's Score] |
The 91.2% accuracy on SimpleQA underscores Jan-v1's advanced ability to precisely retrieve and synthesize information, showcasing the effectiveness of our model scaling approach for agentic intelligence.
## Quick Start

### Integration with Jan App
Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.
### Local Deployment

Using vLLM:

```bash
vllm serve Menlo/Jan-v1 \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
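With `--enable-auto-tool-choice` and the Hermes tool-call parser enabled, the server can emit structured tool calls through its OpenAI-compatible `/v1/chat/completions` endpoint. The sketch below assumes the server above is reachable on `localhost:1234`; the `get_weather` tool name and its schema are purely illustrative.

```bash
# Illustrative tool-calling request against the vLLM OpenAI-compatible endpoint.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Menlo/Jan-v1",
    "messages": [
      {"role": "user", "content": "What is the weather in Hanoi right now?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city (hypothetical example tool)",
          "parameters": {
            "type": "object",
            "properties": {
              "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
          }
        }
      }
    ]
  }'
```

If the model decides a tool is needed, the response's `choices[0].message.tool_calls` field should contain the call for your application to execute.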
Using llama.cpp:

```bash
llama-server --model jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234
```
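llama-server also exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so a plain chat request works as a quick smoke test. This is a minimal sketch assuming the server above is listening on `localhost:1234`; when a single model is loaded, llama.cpp generally does not require a specific `model` value.

```bash
# Minimal chat-completion request against llama-server.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "jan-v1",
    "messages": [
      {"role": "user", "content": "Summarize the key idea behind agentic reasoning in two sentences."}
    ]
  }'
```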
### Recommended Parameters

```yaml
temperature: 0.7
top_p: 0.9
top_k: 20
min_p: 0.0
max_tokens: 2048
```
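As an illustration of how these values map onto a request, the sketch below passes them in the body of an OpenAI-compatible chat completion call. `temperature`, `top_p`, and `max_tokens` are standard fields; `top_k` and `min_p` are extensions that vLLM and llama-server accept, though support for such extra fields can vary by server version.

```bash
# Example request using the recommended sampling parameters.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Menlo/Jan-v1",
    "messages": [
      {"role": "user", "content": "Plan the steps needed to research a topic and cite sources."}
    ],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048
  }'
```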
## 🤝 Community & Support
- Discussions: HuggingFace Community
- Jan App: Learn more about the Jan App at jan.ai
## 📄 Citation
To be updated soon.