---
license: mit
license_link: https://huggingface.co/OpenPipe/Deductive-Reasoning-Qwen-32B/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-32B-Instruct
tags:
- chat
library_name: transformers
---

# Deductive-Reasoning-Qwen-32B

Deductive Reasoning Qwen 32B is a reinforcement fine-tune of [Qwen 2.5 32B Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) to solve challenging deduction problems from the [Temporal Clue](https://github.com/bradhilton/temporal-clue) dataset, trained by [OpenPipe](https://openpipe.ai)!
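
As a minimal sketch, the model can be loaded like any other Qwen 2.5 chat checkpoint via `transformers` (this assumes enough GPU memory for a 32B model in bf16, and the puzzle below is only illustrative; real Temporal Clue problems are considerably longer):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenPipe/Deductive-Reasoning-Qwen-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 weights for a 32B model still need roughly 64 GB of GPU memory
    device_map="auto",
)

# Illustrative deduction-style prompt (not taken from the Temporal Clue dataset).
messages = [
    {
        "role": "user",
        "content": (
            "Alice, Bob, and Carol each saw the suspect at a different hour: 1pm, 2pm, or 3pm. "
            "Alice saw the suspect earlier than Bob, and Carol saw the suspect earlier than Alice. "
            "Who saw the suspect at each hour?"
        ),
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```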

Here are some additional resources to check out:

- [Blog Post](https://openpipe.ai/blog/using-grpo-to-beat-o1-o3-mini-and-r1-on-temporal-clue)
- [Training Recipe](https://github.com/openpipe/deductive-reasoning)
- [RL Experiments](https://github.com/openpipe/rl-experiments)
- [Deductive Reasoning Qwen 14B](https://huggingface.co/OpenPipe/Deductive-Reasoning-Qwen-14B)

If you're interested in training your own models with reinforcement learning or just chatting, feel free to [reach out](https://openpipe.ai/contact) or email Kyle directly at [email protected]!