Rain-100M is a raw base model (not instruction-tuned or safety-aligned), aimed at small-scale research, debugging training pipelines, and CPU/edge experiments. If you run evaluations, finetunes, or visualizations with it, I would be very interested in your results!
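If you want to poke at it, here is a minimal CPU sampling sketch; the repo id below is a placeholder I made up, so swap in the real Hub path:

```python
# Minimal sketch: sampling from a small raw base model on CPU.
# "your-org/rain-100m" is a hypothetical repo id, not the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/rain-100m"  # placeholder -- replace with the actual repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # ~100M params fits easily in CPU RAM

# A raw base model has no chat template: prompt with a plain-text prefix to continue.
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```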
Just tried to create an educational assistant for younger people who may struggle to visualise 'what is this sorcery all about'. It's the first step in my spare-time projects: SFT on Qwen3-8B.
EduHelper is a child-friendly tutoring assistant fine-tuned from the Qwen3-8B base model using parameter-efficient fine-tuning (PEFT) with LoRA on the ajibawa-2023/Education-Young-Children dataset.
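For anyone curious what the PEFT side looks like, here is a minimal LoRA sketch with the peft library; the rank, alpha, and target modules are assumptions for illustration, not the actual training config:

```python
# Sketch of a LoRA setup on a Qwen3-8B base (Hub id below is assumed);
# hyperparameters are illustrative, not the values used for EduHelper.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B-Base")

lora = LoraConfig(
    r=16,                  # rank of the low-rank adapter matrices (assumed)
    lora_alpha=32,         # scaling applied to the adapter update (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the small adapter weights are trained
```

Because only the adapters receive gradients while the 8B base stays frozen, this is what keeps the fine-tune cheap enough for a spare-time project.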
EduHelp with more empathy, based on a model fine-tuned on psychotherapeutic preferences, just landed on the Hub.
Beck-8B as the base model, 13,000 steps on the educational dataset. Time to go further and build more 🥰 s3nh/EduHelp_Beck_8B Thanks to @basilic_ai for the compute <3
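If you want to try the checkpoint, a quick sketch; the prompt and generation settings are just illustrative, and I'm assuming the model inherits Qwen3's chat template:

```python
# Quick test of the released checkpoint via the text-generation pipeline.
from transformers import pipeline

chat = pipeline("text-generation", model="s3nh/EduHelp_Beck_8B")
messages = [{"role": "user", "content": "Why is the sky blue? Explain it like I'm seven."}]
out = chat(messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```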