It works!

  • The fine-tuning dataset consists of ~1,100 diverse examples, ~300 of which may come directly from Gemini 2.5 Pro (captured before the original thinking process was hidden).

  • The rest of the data was "distilled" from experimental models and verified for correctness.

  • No benchmark results, as I don't have the funds to run evaluations. However, the model works and consistently thinks in Gemini's style.

  • Feel free to generate new training data and share it around (see the sketch after this list).

  • Please credit the model author (me!) if you deploy the model or generate datasets with it.

  • Hit me up if you would like to fund my rock smashing experiments.
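If you want to bootstrap new training data, here is a minimal sketch assuming the Hugging Face transformers library (recent enough to support min_p sampling); the seed prompts and the output filename are hypothetical placeholders, not from this card:

```python
# Minimal sketch for generating new training data with this model.
# Assumes a recent transformers version (min_p support); the seed
# prompts and output filename below are hypothetical placeholders.
import json

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ba2han/qwen3-32b-geminized-experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompts = ["Explain how a hash map handles collisions."]  # hypothetical seeds

with open("geminized_data.jsonl", "w") as f:
    for prompt in prompts:
        # Build the chat-formatted input for a single user turn.
        input_ids = tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            add_generation_prompt=True,
            return_tensors="pt",
        ).to(model.device)
        # Sampling settings taken from the example run below.
        output = model.generate(
            input_ids,
            max_new_tokens=4096,
            do_sample=True,
            temperature=0.4,
            top_k=40,
            min_p=0.05,
        )
        # Keep only the newly generated tokens (the model's response).
        response = tokenizer.decode(
            output[0][input_ids.shape[-1]:], skip_special_tokens=True
        )
        f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
```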

Example run:

Prompt: Write a simple tetris game with score tracking using pygame.

Sampling settings:
min_p: 0.05
top_k: 40
temp: 0.4

Output: https://peerpad.net/#/r/markdown/7g2Ab3xNUKHNSJhQsJe6bktjs45gUbGSf4t5R46LbqaQ/4XTTM1SmwN2m9xqtQB9rq6MTfYyK5AAQBdUyzhLVTG1S6txNp
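To reproduce a run like this against a local OpenAI-compatible server, here is a sketch assuming a vLLM endpoint (which accepts top_k and min_p through extra_body); the base URL and API key are placeholders:

```python
# Minimal sketch: reproduce the example run against a local
# OpenAI-compatible server (e.g. vLLM). base_url/api_key are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Ba2han/qwen3-32b-geminized-experimental",
    messages=[{
        "role": "user",
        "content": "Write a simple tetris game with score tracking using pygame.",
    }],
    temperature=0.4,
    # top_k and min_p are not standard OpenAI parameters;
    # vLLM accepts them via extra_body.
    extra_body={"top_k": 40, "min_p": 0.05},
)
print(response.choices[0].message.content)
```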

Model size: 32.8B params
Tensor type: BF16 (Safetensors)

Model tree for Ba2han/qwen3-32b-geminized-experimental

Base model: Qwen/Qwen3-32B (this model is one of its 72 finetunes)
Quantizations of this model: 2
