---
library_name: transformers
tags: []
---
# LB Reranker v1.0
The LB Reranker has been trained to score the relatedness of a given query to a piece of text, allowing it to be used as a ranker or reranker in various retrieval-based tasks.
This model is fine-tuned from the Qwen/Qwen2.5-0.5B-Instruct model checkpoint.
The training data for this model can be found at lightblue/reranker_continuous_filt_max7_train, and the code both for generating this data and for training the model is available in our GitHub repo.
Trained on data in over 95 languages, this model is applicable to a broad range of use cases.
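To illustrate how such a reranker is typically applied, the sketch below reorders candidate passages by a query-relatedness score. The scoring function here is a hypothetical stand-in (simple token overlap), not this model's actual interface; in practice the score for each query–text pair would come from running the fine-tuned checkpoint.

```python
from typing import Callable, List, Tuple

def rerank(query: str,
           texts: List[str],
           score_fn: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    """Score each (query, text) pair and return the texts sorted by
    descending relatedness score."""
    scored = [(text, score_fn(query, text)) for text in texts]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def overlap_score(query: str, text: str) -> float:
    """Hypothetical stand-in scorer based on token overlap.
    With the LB Reranker, score_fn would instead run the model
    on the query-text pair and return its relatedness score."""
    q_tokens = {w.strip(".,!?") for w in query.lower().split()}
    t_tokens = {w.strip(".,!?") for w in text.lower().split()}
    return len(q_tokens & t_tokens) / max(len(q_tokens), 1)

results = rerank(
    "capital of France",
    ["Paris is the capital of France.",
     "Berlin is the capital of Germany.",
     "The Eiffel Tower is in Paris."],
    overlap_score,
)
print(results[0][0])  # the passage scored most related to the query
```

Because the model only needs a query and a text as input, it can drop into any retrieval pipeline at this reranking step, regardless of which first-stage retriever produced the candidates.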
## Evaluation
We perform an evaluation on 9 datasets from the BEIR benchmark that none of the evaluated models have been trained on (to our knowledge). To save evaluation time, we evaluate on a subset of the queries (the first 250).
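This subset evaluation can be sketched as below, using NDCG@10 (a standard BEIR metric). The helper names and data shapes are illustrative, not our actual evaluation code.

```python
import math
from typing import Dict, List

def ndcg_at_k(ranked_ids: List[str],
              relevance: Dict[str, int],
              k: int = 10) -> float:
    """NDCG@k for one query: ranked_ids is the reranked document list,
    relevance maps doc id -> graded relevance judgment."""
    dcg = sum(relevance.get(doc_id, 0) / math.log2(rank + 2)
              for rank, doc_id in enumerate(ranked_ids[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def evaluate_subset(queries: List[str],
                    rankings: Dict[str, List[str]],
                    qrels: Dict[str, Dict[str, int]],
                    n_queries: int = 250) -> float:
    """Mean NDCG@10 over only the first n_queries queries,
    mirroring the first-250 subset described above."""
    subset = queries[:n_queries]
    scores = [ndcg_at_k(rankings[q], qrels[q]) for q in subset]
    return sum(scores) / len(scores)

# A perfect ordering of the judged documents yields NDCG@10 of 1.0.
perfect = ndcg_at_k(["d2", "d1", "d3"], {"d2": 2, "d1": 1})
```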
We find that our model performs similarly or better than many of the state-of-the-art reranker models in our evaluation, without compromising on inference speed.
We make our evaluation code and results available on our GitHub.