prithivMLmods committed
Commit 3c71223 · verified · 1 Parent(s): 894783b

Update README.md

Files changed (1): README.md (+99 -1)
README.md CHANGED
---
datasets:
- prithivMLmods/Open-Omega-Explora-2.5M
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- moe
- code
- science
- biology
- chemistry
- thinking
---

# **Explora-0.6B**

> **Explora-0.6B** is a lightweight, efficient **general-purpose reasoning model** fine-tuned from **Qwen3-0.6B** on the first 100,000 entries of the **Open-Omega-Explora-2.5M** dataset. It is tailored for **science- and code-focused** reasoning tasks, combining symbolic clarity with fluent instruction following, and is well suited to exploratory workflows in STEM domains.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Explora-0.6B-GGUF](https://huggingface.co/prithivMLmods/Explora-0.6B-GGUF)

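If you plan to run the GGUF build in llama.cpp or a similar runtime, the quant filenames vary by release, so nothing beyond the repo id above should be hard-coded. A minimal sketch that discovers and fetches a quant with `huggingface_hub`:

```python
# Minimal sketch: discover and download a GGUF file from the companion repo.
# Only the repo id from the note above is assumed; filenames are listed at
# runtime rather than guessed, since quant names vary by release.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "prithivMLmods/Explora-0.6B-GGUF"

# Keep only the GGUF weight files from the repo listing.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)

# Download the first quant found (pick a specific one in practice).
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(local_path)
```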

## **Key Features**

1. **General-Purpose STEM Reasoning**
   Fine-tuned for **code and science problems**, the model handles symbolic reasoning, basic computations, and structured logic with clarity and fluency.

2. **Built on Qwen3-0.6B**
   Inherits the multilingual, instruction-tuned capabilities of **Qwen3-0.6B**, making it well suited to lightweight deployments that still need solid core reasoning.

3. **Open-Omega-Explora Dataset**
   Trained on the **first 100k entries** of the **Open-Omega-Explora-2.5M** dataset, a diverse mix of math, code, and science problems.

4. **Balanced Thinking Mode**
   Supports moderate reasoning depth while avoiding excessive hallucination, which makes it a good fit for **step-by-step problem solving**, **function generation**, and **explanatory output** (a sketch of toggling this follows the Quickstart below).

5. **Compact & Deployable**
   At just **0.6B parameters**, it is practical for **offline environments**, **low-resource inference setups**, and **educational tools** that need fast, reliable logic.

6. **Output Flexibility**
   Produces answers in **Markdown**, **Python**, **JSON**, or plain text depending on the task, suitable both for human readers and for integration into pipelines (see the sketch after this list).

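To illustrate the structured-output point in item 6, here is a minimal sketch of steering the model toward JSON and parsing the reply. The prompt wording and the regex extraction step are illustrative, not part of the model card, and `model` / `tokenizer` are assumed to be loaded as in the Quickstart section below:

```python
# Sketch (hypothetical prompt): request JSON-only output and parse it.
# Assumes `model` and `tokenizer` are loaded as shown in the Quickstart below.
import json
import re

prompt = (
    "Return ONLY a JSON object with keys 'force', 'mass', and 'acceleration' "
    "for an object of mass 2 kg accelerating at 3 m/s^2."
)
messages = [{"role": "user", "content": prompt}]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

# Illustrative recovery step: grab the first {...} block, since small models
# sometimes wrap JSON in extra prose.
match = re.search(r"\{.*\}", reply, re.DOTALL)
data = json.loads(match.group(0)) if match else None
print(data)
```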

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Explora-0.6B"

# Load the model weights and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain Newton's second law of motion with a Python code example."

messages = [
    {"role": "system", "content": "You are a helpful science and code reasoning assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into a single prompt string.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=256
)

# Strip the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
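
The "thinking" behavior mentioned under Key Features can, on the base Qwen3 family, be toggled through the chat template's `enable_thinking` switch. Whether this fine-tune retains that switch is an assumption, so treat the following as a sketch rather than a guaranteed API of this model:

```python
# Sketch, assuming the Qwen3 chat template (and its enable_thinking flag) is
# retained by this fine-tune. Reuses `model`, `tokenizer`, and `messages`
# from the Quickstart above.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Qwen3 template flag: skip the <think> block
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=256)
response = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[1]:],
    skip_special_tokens=True
)
print(response)
```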

## **Intended Use**

* Educational and lightweight research tools
* General science and programming help
* Low-resource STEM assistant for code labs or classrooms
* Fast-response agent for structured reasoning tasks

## **Limitations**

* Not optimized for deep multi-hop reasoning or creative tasks
* May require prompt engineering for highly specific technical queries
* Smaller context window and lower fluency compared to larger models
* Best used with **specific, scoped questions** for accurate outputs