---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
---
# Jan-v1: The Inaugural Model of the Jan Family – Redefining Agentic Reasoning
[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/menloresearch/deep-research)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)
<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/PA6JCiYLPJX_WFO42ClTd.jpeg" width="300" alt="Jan-v1-Demo-Image">
</div>
**Authors:** [Alan Dao](https://scholar.google.com/citations?user=eGWws2UAAAAJ&hl=en), [Bach Vu Dinh](https://scholar.google.com/citations?user=7Lr6hdoAAAAJ&hl=vi), [Alex Nguyen](https://github.com/nguyenhoangthuan99), [Norapat Buppodom](https://scholar.google.com/citations?user=utfEThsAAAAJ&hl=th&authuser=1)
## Overview
Introducing **Jan-v1**, the foundational model in the **Jan Family** – a new lineage of highly capable language models developed to power the next generation of intelligent agents within the [Jan App](https://jan.ai/) ecosystem. Building on the innovative agentic capabilities of our earlier **Lucy** model, Jan-v1 represents a significant leap forward through strategic model scaling.
By building on the larger **Qwen3-4B-Thinking-2507** base, Jan-v1 delivers substantially stronger "thinking" and reasoning capabilities. This scaling is designed to yield superior performance on complex agentic tasks while keeping the model small enough for accessible, local deployment.
## What Jan-v1 Excels At
- **🧠 Enhanced Agentic Reasoning**: With its larger parameter count, Jan-v1 excels at deeper reasoning, complex problem-solving, and sophisticated multi-step agentic planning.
- **🎯 Superior Question Answering**: Achieves an impressive **91.2% accuracy on SimpleQA**, significantly advancing performance for factoid question answering.
- **🔍 Advanced Agentic Web Search**: Inherits and refines Lucy's strong capabilities for agentic web search and lightweight browsing via MCP-enabled tools.
- **📱 Optimized for Jan App**: Specifically engineered to provide unique and highly optimized support for the Jan App, ensuring seamless integration and superior user experience.
## Evaluation
Jan-v1's strategic scaling has resulted in a notable performance uplift, particularly evident in its "thinking" and reasoning prowess. Following the established MCP benchmark methodology, Jan-v1 sets a new standard for models in its class.
| Model | SimpleQA Accuracy |
| :---------------------------------- | :---------------- |
| **Jan-v1 (Qwen3-4B)** | **91.2%** |
| Lucy (Qwen3-1.7B) | [Lucy's Score] | <!-- Insert Lucy's actual SimpleQA score from its README here for direct comparison -->
| DeepSeek-v3 (Comparison from Lucy) | [DeepSeek's Score]| <!-- Insert DeepSeek's score from Lucy's README here -->
The **91.2% accuracy on SimpleQA** underscores Jan-v1's advanced ability to precisely retrieve and synthesize information, showcasing the effectiveness of our model scaling approach for agentic intelligence.
## 🖥️ How to Run Locally
Jan-v1 is designed for flexible deployment and is compatible with a range of inference engines, including vLLM and llama.cpp, as well as local applications such as Jan and LM Studio. Integration with search APIs and web-browsing tools is handled through the Model Context Protocol (MCP).
### Deployment
Deploy using vLLM (replace `Menlo/Jan-v1` with the actual HF model ID if it differs):
```bash
vllm serve Menlo/Jan-v1 \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
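Once the server is up, it exposes an OpenAI-compatible endpoint at `/v1/chat/completions`. A minimal sketch of building a request payload for it, assuming the port and model ID from the command above (the helper name `build_chat_request` is illustrative, not part of any API):

```python
import json

# Build an OpenAI-compatible chat-completions payload for the vLLM server
# started above. "Menlo/Jan-v1" is the model ID passed to `vllm serve`;
# adjust if yours differs.
def build_chat_request(prompt: str, model: str = "Menlo/Jan-v1") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = json.dumps(build_chat_request("Summarize the MCP tool-calling flow."))
# POST `body` to http://localhost:1234/v1/chat/completions with
# Content-Type: application/json (e.g. via curl, urllib, or the
# `openai` Python client pointed at that base URL).
print(body)
```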
Or use `llama-server` from `llama.cpp` (a sketch, assuming a GGUF conversion of the model is available locally; adjust the path to your file):
```bash
llama-server -m Jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234
```
### Recommended Sampling Parameters
```yaml
Temperature: 0.7
Top-p: 0.9
Top-k: 20
Min-p: 0.0
```
## 🤝 Community & Support
- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-v1/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)
## 📄 Citation
```bibtex
Updated Soon
```
**Paper**: *Jan-v1* (coming soon)
---