---
license: apache-2.0
---

Dataset Summary

This dataset is a high-quality subset of the Klear-Reasoner Code RL dataset, derived from the RL data used in the rllm project. Part of this data contributed to training Klear-Reasoner’s code reasoning models. The dataset is carefully cleaned and filtered to include only reliable samples suitable for reinforcement learning. Models trained with this dataset have shown substantial performance improvements across various code reasoning benchmarks. You can load the dataset via the Hugging Face datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("Suu/KlearReasoner-CodeSub-15K")
```
| Resource | Link |
| --- | --- |
| 📝 Preprints | Paper |
| 🤗 Daily Paper | Paper |
| 🤗 Model Hub | Klear-Reasoner-8B |
| 🤗 Dataset Hub | Math RL |
| 🤗 Dataset Hub | Code RL |
| 🐛 Issues & Discussions | GitHub Issues |
| 📧 Contact | [email protected] |

Data Fields

  • data_source (string) — The source identifier for the sample.
  • prompt (list of dicts) — The input prompt, stored as a list of message objects in chat format.
  • ability (string) — The skill or task category associated with the sample.
  • reward_model (dict) — Information about the ground truth or reward signal.
    • ground_truth (string) — The expected correct answer (may include LaTeX formatting).
    • style (string) — The method or type of evaluation, e.g., "rule".
  • index_level_0 (int) — An internal index or unique identifier for the sample.
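
As a quick illustration of this schema, the sketch below loads one sample and prints each field. It assumes the dataset exposes a single train split and that each message object in prompt uses the common role/content chat keys; both details are assumptions rather than guarantees from this card.

```python
from datasets import load_dataset

# Assumption: the dataset provides a single "train" split.
dataset = load_dataset("Suu/KlearReasoner-CodeSub-15K", split="train")
sample = dataset[0]

print(sample["data_source"])                    # source identifier for the sample
print(sample["ability"])                        # task category
for message in sample["prompt"]:                # chat-format messages; "role"/"content" keys assumed
    print(message["role"], message["content"][:200])
print(sample["reward_model"]["style"])          # e.g. "rule"
print(sample["reward_model"]["ground_truth"][:200])
```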

Demonstration of Data Quality

This dataset contains exclusively high-quality, filtered samples.
All samples have been selected to ensure accurate reward signals for reinforcement learning, following the gradient-preserving clipping policy optimization (GPPO) method introduced in our paper. Models trained on this dataset achieve strong generalization and reliable performance across a range of code reasoning tasks.
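
For intuition about what a "rule"-style reward signal can look like, here is a minimal, hypothetical sketch that scores a completion by exact match against ground_truth. It is illustrative only and is not the verifier used for Klear-Reasoner; rewards for code samples may instead be computed by executing test cases.

```python
import re

def rule_reward(completion: str, ground_truth: str) -> float:
    """Hypothetical rule-style reward: 1.0 on an exact match between the
    extracted final answer and the ground truth, 0.0 otherwise."""
    # Use the text after the last "Answer:" marker if present, else the whole completion.
    parts = re.split(r"(?i)answer\s*:", completion)
    answer = parts[-1].strip()
    return 1.0 if answer == ground_truth.strip() else 0.0

print(rule_reward("Some reasoning ... Answer: 42", "42"))  # -> 1.0
```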

Citation

Please consider citing our paper if you find this dataset useful:

```bibtex
@misc{su2025klearreasoneradvancingreasoningcapability,
      title={Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization},
      author={Zhenpeng Su and Leiyu Pan and Xue Bai and Dening Liu and Guanting Dong and Jiaming Huang and Wenping Hu and Fuzheng Zhang and Kun Gai and Guorui Zhou},
      year={2025},
      eprint={2508.07629},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2508.07629},
}
```