Update README.md

---
library_name: peft
license: apache-2.0
---
## MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs

🔗 Paper link: [arXiv preprint](https://arxiv.org/abs/2507.02851)

🔗 Link to the trained models: [Hugging Face collection](https://huggingface.co/collections/purbeshmitra/motif-paper-models-686a2f36407bb88f750eef75)

- **Algorithm**: MOTIF
- **Data**: [GSM8K](https://huggingface.co/datasets/openai/gsm8k)
- **Base model**: [unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct-bnb-4bit)

The INFTYTHINK architecture, shown below, allows multi-round thinking for extended LLM reasoning beyond its context size.
<p align="center">
  <img src="assets/multiround.png" alt="INFTYTHINK multi-round reasoning architecture" width="750">
</p>
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "purbeshmitra/MOTIF")

SYSTEM_PROMPT = r"""You are a helpful assistant. When the user asks a question, you solve it in 3 rounds. In each round, you first think about the reasoning process of answering and then provide the user with a detailed progress about it. The reasoning process and the progress are enclosed within <reasoning> </reasoning> and <answer> </answer> tags, respectively. Therefore, you follow the strict format:
<reasoning> reasoning process here </reasoning> <answer> detailed progress here </answer>

The User provides this detailed progress as additional context in the next round. You then respond again with further thinking and further progress. When the User says that the current round is the final (third) round, you provide an answer inside the answer tags. You also enclose a final answer in third round in the box: \boxed{}. Only this boxed final answer is used for evaluation."""
```
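As a rough sketch of how the three-round protocol described in the system prompt can be driven, the loop below feeds each round's `<answer>` progress back as context for the next round. The names `run_motif_rounds` and `toy_generate` are illustrative assumptions, not part of the released code; `toy_generate` merely stands in for real model generation to show the control flow.

```python
import re

def run_motif_rounds(question, generate, num_rounds=3):
    """Drive the multi-round protocol: each round's <answer> progress
    is fed back to the model as context for the next round."""
    progress = ""
    for r in range(1, num_rounds + 1):
        final_note = " This is the final (third) round." if r == num_rounds else ""
        prompt = f"{question}\nProgress so far: {progress}\nRound {r}.{final_note}"
        reply = generate(prompt)
        m = re.search(r"<answer>(.*?)</answer>", reply, re.DOTALL)
        progress = m.group(1).strip() if m else reply.strip()
    return progress

# Toy stand-in for real model generation, used only to show the control flow.
def toy_generate(prompt):
    return "<reasoning> partial reasoning </reasoning> <answer> step done \\boxed{42} </answer>"

final = run_motif_rounds("What is 6 * 7?", toy_generate)
```

In a real run, `generate` would apply the chat template with `SYSTEM_PROMPT` and call `model.generate`; only the `\boxed{}` content from the final round is used for evaluation.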
41 |
|
42 |
## Citation