# Qwen3-32B-alpaca-th-52k-dolly-th-15k-wangchan-instruct-seed-2405
This model is a fine-tuned version of Qwen/Qwen3-32B on the alpaca-th-52k, dolly-th-15k, and wangchan-instruct datasets. It achieves the following results on the evaluation set:
- Loss: 0.6359
## Model description
More information needed
## Intended uses & limitations
More information needed
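This section is still a placeholder, but since the model is a PEFT adapter on top of Qwen/Qwen3-32B, inference should follow the standard `peft` + `transformers` pattern. Below is a minimal sketch, assuming the adapter repository id shown in the title, bfloat16 weights, and a standard chat-template tokenizer config; the Thai prompt is only an illustration, and a 32B base model needs substantial GPU memory.

```python
# Minimal inference sketch (assumptions: standard PEFT adapter layout,
# repo id taken from this card's title, bfloat16 precision).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-32B"
adapter_id = "airesearch/Qwen3-32B-alpaca-th-52k-dolly-th-15k-wangchan-instruct-seed-2405"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter

messages = [{"role": "user", "content": "สวัสดีครับ ช่วยแนะนำตัวหน่อย"}]  # example Thai prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(base.device)

with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```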
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 2405
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
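For reference, the values above map onto a `transformers` `TrainingArguments` roughly as follows. This is a hedged reconstruction from the list, not the original training script; note that the per-device batch size of 2 across 32 GPUs with 8 gradient-accumulation steps gives the effective train batch size of 2 × 32 × 8 = 512 (and 2 × 32 = 64 for eval). The `output_dir` and `bf16` settings are placeholders, since neither is stated on the card.

```python
# Hedged reconstruction of the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen3-32b-thai-instruct",  # placeholder; not stated on the card
    learning_rate=2e-4,
    per_device_train_batch_size=2,   # x 32 GPUs x 8 accumulation = 512 effective
    per_device_eval_batch_size=2,    # x 32 GPUs = 64 effective
    gradient_accumulation_steps=8,
    seed=2405,
    optim="adamw_torch",             # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    bf16=True,                       # assumption; precision is not stated on the card
)
```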
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9657 | 0.0575 | 10 | 1.0428 |
| 0.8056 | 0.1149 | 20 | 0.8197 |
| 0.7664 | 0.1724 | 30 | 0.7526 |
| 0.7675 | 0.2299 | 40 | 0.7330 |
| 0.7185 | 0.2874 | 50 | 0.7189 |
| 0.6613 | 0.3448 | 60 | 0.7070 |
| 0.7022 | 0.4023 | 70 | 0.6979 |
| 0.7143 | 0.4598 | 80 | 0.6901 |
| 0.6819 | 0.5172 | 90 | 0.6846 |
| 0.6613 | 0.5747 | 100 | 0.6784 |
| 0.6887 | 0.6322 | 110 | 0.6724 |
| 0.6812 | 0.6897 | 120 | 0.6693 |
| 0.677 | 0.7471 | 130 | 0.6659 |
| 0.6793 | 0.8046 | 140 | 0.6632 |
| 0.6701 | 0.8621 | 150 | 0.6610 |
| 0.6533 | 0.9195 | 160 | 0.6587 |
| 0.6666 | 0.9770 | 170 | 0.6571 |
| 0.6506 | 1.0345 | 180 | 0.6560 |
| 0.6385 | 1.0920 | 190 | 0.6541 |
| 0.6499 | 1.1494 | 200 | 0.6528 |
| 0.6515 | 1.2069 | 210 | 0.6511 |
| 0.6352 | 1.2644 | 220 | 0.6504 |
| 0.6337 | 1.3218 | 230 | 0.6487 |
| 0.6438 | 1.3793 | 240 | 0.6475 |
| 0.6529 | 1.4368 | 250 | 0.6465 |
| 0.6469 | 1.4943 | 260 | 0.6452 |
| 0.6087 | 1.5517 | 270 | 0.6444 |
| 0.6477 | 1.6092 | 280 | 0.6435 |
| 0.6302 | 1.6667 | 290 | 0.6425 |
| 0.6477 | 1.7241 | 300 | 0.6415 |
| 0.6245 | 1.7816 | 310 | 0.6405 |
| 0.6486 | 1.8391 | 320 | 0.6398 |
| 0.6376 | 1.8966 | 330 | 0.6389 |
| 0.6437 | 1.9540 | 340 | 0.6382 |
| 0.6161 | 2.0115 | 350 | 0.6380 |
| 0.6057 | 2.0690 | 360 | 0.6382 |
| 0.5957 | 2.1264 | 370 | 0.6382 |
| 0.6465 | 2.1839 | 380 | 0.6380 |
| 0.5811 | 2.2414 | 390 | 0.6378 |
| 0.6004 | 2.2989 | 400 | 0.6375 |
| 0.6484 | 2.3563 | 410 | 0.6370 |
| 0.5845 | 2.4138 | 420 | 0.6369 |
| 0.5928 | 2.4713 | 430 | 0.6365 |
| 0.5795 | 2.5287 | 440 | 0.6364 |
| 0.6306 | 2.5862 | 450 | 0.6360 |
| 0.6362 | 2.6437 | 460 | 0.6362 |
| 0.6316 | 2.7011 | 470 | 0.6361 |
| 0.6355 | 2.7586 | 480 | 0.6359 |
| 0.5977 | 2.8161 | 490 | 0.6359 |
| 0.6007 | 2.8736 | 500 | 0.6359 |
| 0.615 | 2.9310 | 510 | 0.6359 |
| 0.6153 | 2.9885 | 520 | 0.6359 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- PyTorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
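A quick way to compare a local environment against the versions listed above is the convenience sketch below; exact patch versions are probably not strictly required in practice.

```python
# Print installed versions next to the versions listed on this card.
import datasets, peft, tokenizers, torch, transformers

expected = {
    peft: "0.15.2",
    transformers: "4.52.3",
    torch: "2.7.0+cu126",
    datasets: "3.6.0",
    tokenizers: "0.21.1",
}
for module, want in expected.items():
    print(f"{module.__name__}: found {module.__version__}, card lists {want}")
```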