Improving Pretraining Data Using Perplexity Correlations
Paper: arXiv:2409.05816
This repository contains a fastText pretraining data filter targeting the LAMBADA task, as discussed in the paper Improving Pretraining Data Using Perplexity Correlations. This filter selects high-quality pretraining data based on correlations between LLM perplexity and downstream benchmark performance.
Code: https://github.com/TristanThrush/perplexity-correlations
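A typical use of such a filter is to score each candidate document and keep only the top-scoring fraction. The sketch below shows that selection step in a model-agnostic way; the `fasttext` usage in the comments is an assumption about this repository's model (the model filename and label names are hypothetical — check the actual repo files and labels before use).

```python
# Sketch: keep the highest-quality fraction of pretraining documents,
# given any scoring function. Not the repo's official API.

def filter_top_fraction(docs, score_fn, keep_fraction=0.5):
    """Return the `keep_fraction` of docs with the highest quality score."""
    scored = sorted(docs, key=score_fn, reverse=True)
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]

# With the repo's fastText model, score_fn would wrap the classifier
# (hypothetical model/label names — verify against the repo):
#
#   import fasttext
#   model = fasttext.load_model("model.bin")
#   def score_fn(doc):
#       labels, probs = model.predict(doc.replace("\n", " "))
#       return probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0]
```

Here `score_fn` is any callable returning a higher value for higher-quality text; the fastText classifier simply supplies that score.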