R1_Qwen3_8B_0719

This model is a fine-tuned version of deepseek-ai/DeepSeek-R1-0528-Qwen3-8B on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4267
  • Map@3: 0.9177
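
Since this is a PEFT adapter on top of the base model, a minimal inference sketch is given below. This is not an official snippet from the card; it assumes the adapter targets the causal-LM weights of the base model and is published under the repo id prl90777/R1_Qwen3_8B_0719, and that `accelerate` is installed for `device_map="auto"`.

```python
# Hedged usage sketch: load the base model, then apply this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"
adapter_id = "prl90777/R1_Qwen3_8B_0719"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype="auto",
    device_map="auto",  # requires accelerate
)
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical prompt; the actual task/format of the training data is not documented.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```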

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 3
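
The optimizer naming above suggests the Hugging Face Trainer was used. A sketch of `TrainingArguments` mirroring the listed settings follows; `output_dir` is a placeholder, and everything else maps one-to-one onto the hyperparameters above (note the effective batch size: 8 per device × 8 accumulation steps = 64).

```python
# Hedged reconstruction of the training configuration; assumes the HF Trainer.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="R1_Qwen3_8B_0719",   # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,   # 8 x 8 = total train batch size 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```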

Training results

| Training Loss | Epoch  | Step | Validation Loss | Map@3  |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 26.6085       | 0.0523 | 20   | 1.3286          | 0.7507 |
| 9.7222        | 0.1046 | 40   | 1.0625          | 0.7933 |
| 7.9943        | 0.1569 | 60   | 0.8487          | 0.8183 |
| 7.4982        | 0.2092 | 80   | 0.8259          | 0.8315 |
| 6.7844        | 0.2615 | 100  | 0.7845          | 0.8407 |
| 6.1752        | 0.3138 | 120  | 0.7051          | 0.8571 |
| 5.3012        | 0.3661 | 140  | 0.6606          | 0.8683 |
| 4.7654        | 0.4184 | 160  | 0.5941          | 0.8830 |
| 5.3467        | 0.4707 | 180  | 0.6074          | 0.8771 |
| 4.4068        | 0.5230 | 200  | 0.5947          | 0.8880 |
| 4.9025        | 0.5754 | 220  | 0.5081          | 0.8986 |
| 4.3179        | 0.6277 | 240  | 0.5520          | 0.8941 |
| 4.4065        | 0.6800 | 260  | 0.4970          | 0.9040 |
| 3.7451        | 0.7323 | 280  | 0.4987          | 0.9045 |
| 4.4839        | 0.7846 | 300  | 0.4905          | 0.9085 |
| 3.5164        | 0.8369 | 320  | 0.4644          | 0.9067 |
| 3.9504        | 0.8892 | 340  | 0.4650          | 0.9066 |
| 3.6298        | 0.9415 | 360  | 0.4461          | 0.9106 |
| 3.6195        | 0.9938 | 380  | 0.4242          | 0.9173 |
| 3.0214        | 1.0445 | 400  | 0.5402          | 0.9058 |
| 2.7135        | 1.0968 | 420  | 0.4302          | 0.9203 |
| 2.6106        | 1.1491 | 440  | 0.4071          | 0.9252 |
| 2.8122        | 1.2014 | 460  | 0.4366          | 0.9188 |
| 3.0033        | 1.2537 | 480  | 0.4178          | 0.9230 |
| 2.59          | 1.3060 | 500  | 0.4116          | 0.9233 |
| 3.0395        | 1.3583 | 520  | 0.4267          | 0.9177 |
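
The card does not define the Map@3 metric. Assuming it denotes the usual mean average precision at 3 with a single correct label per example (as in many Kaggle-style evaluations), a minimal sketch of the computation:

```python
# Hedged MAP@3 sketch: assumes one gold label per example and ranked top-3
# predictions; an example scores 1, 1/2, or 1/3 by the rank of the first hit.
def map_at_3(predictions, labels):
    """predictions: list of ranked top-3 label lists; labels: gold labels."""
    total = 0.0
    for preds, gold in zip(predictions, labels):
        for rank, p in enumerate(preds[:3], start=1):
            if p == gold:
                total += 1.0 / rank
                break
    return total / len(labels)

# Gold ranked 1st, 2nd, and absent -> (1 + 0.5 + 0) / 3 = 0.5
print(map_at_3([["a", "b", "c"], ["x", "a", "y"], ["p", "q", "r"]],
               ["a", "a", "a"]))
```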

Framework versions

  • PEFT 0.17.0
  • Transformers 4.55.2
  • PyTorch 2.6.0+cu124
  • Datasets 4.0.0
  • Tokenizers 0.21.4