nexaml committed
Commit 1fb9fd4 · verified · 1 Parent(s): d337712

Create README.md

Files changed (1): README.md (+45, -0)

README.md ADDED
---
license: apache-2.0
base_model: Qwen/Qwen3-4B
pipeline_tag: text-generation
---

# NexaAI/Qwen3-4B

## Quickstart

Run the model directly with [nexa-sdk](https://github.com/NexaAI/nexa-sdk) installed.
In the nexa-sdk CLI:

```bash
nexa infer NexaAI/Qwen3-4B-GGUF
```

## Overview

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support, with the following key features:

- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios (a minimal example follows this list).
- **Significant enhancement of its reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
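
The thinking/non-thinking switch is exposed through the chat template of the upstream Qwen3 release. The following is a minimal sketch, assuming the original `Qwen/Qwen3-4B` checkpoint referenced at the bottom of this card and the `transformers` usage documented upstream (not the GGUF files served via nexa-sdk above); the `enable_thinking` flag toggles the two modes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the original Qwen/Qwen3-4B checkpoint referenced in this card,
# not the GGUF files in this repo.
model_id = "Qwen/Qwen3-4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]

# enable_thinking=True lets the model emit a reasoning block before the answer;
# set it to False for plain, non-thinking responses.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```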

#### Model Overview

**Qwen3-4B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 tokens natively, and 131,072 tokens with YaRN (see the sketch after this list).
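
The 131,072-token window relies on YaRN rope scaling rather than the native context. As a hedged sketch of one way to enable it in `transformers`, the `rope_scaling` values below follow the snippet in the upstream Qwen3 model card and may vary across library versions; runtimes such as nexa-sdk or llama.cpp expose their own options for the same setting:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Assumption: original Qwen/Qwen3-4B checkpoint; rope_scaling keys follow the
# YaRN snippet in the upstream Qwen3 model card.
model_id = "Qwen/Qwen3-4B"

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,  # 32,768 * 4 = 131,072 tokens
    "original_max_position_embeddings": 32768,
}
config.max_position_embeddings = 131072

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```

Per the upstream card, static YaRN scaling can slightly degrade quality on short inputs, so it is best enabled only when prompts actually exceed the native window.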

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to the Qwen3 [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [documentation](https://qwen.readthedocs.io/en/latest/).

## Benchmark Results

Benchmark evaluation for Qwen3-4B is reported in the [Qwen3 blog](https://qwenlm.github.io/blog/qwen3/) and in the original model card linked below.

## Reference

**Original model card**: [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)