BM25 retrieval results
#3 by OfirResearch, opened
Hi @intfloat ,
First, thank you for your work and for publishing your code! I have a few questions about your BM25 implementation:
- Are you using Pyserini for BM25? If so, what parameters (e.g., b, k1) did you use?
- The scores in your BM25 results file don’t seem to match typical BM25 score ranges—could you clarify what they represent?
- Is there any published code or instructions for reproducing your BM25 retrieval results?
- I also noticed that the retrieved documents had a large positive impact when used as hard negatives in my training. However, it seems that not all queries from the MS MARCO train qrels file were used. Was there a specific reason for this?
- Lastly, for queries that didn’t retrieve enough documents with BM25 due to a lack of lexical matches, how did you select hard negatives to reach the target of 200 per query?
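For context on the score-range question: BM25 scores are unbounded and depend on the IDF of the query terms, document-length normalization, and the k1/b settings, so raw score ranges vary a lot between setups. Below is a minimal pure-Python sketch of the Okapi BM25 formula using k1=0.9, b=0.4 (commonly cited Pyserini defaults; whether the repo used these values is exactly the open question above, so this is illustrative only, not the authors' implementation):

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=0.9, b=0.4):
    """Okapi BM25 score of one tokenized document for a query.

    corpus: list of tokenized documents, used only for document
    frequencies and the average document length (avgdl).
    k1, b: illustrative defaults; the actual parameters used for
    the published results file are unknown here.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for t in query_terms:
        tf = doc_terms.count(t)
        if tf == 0:
            continue  # term absent from the document contributes nothing
        df = sum(1 for d in corpus if t in d)
        # Lucene-style smoothed IDF, always non-negative
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        norm = 1 - b + b * len(doc_terms) / avgdl  # length normalization
        score += idf * tf * (k1 + 1) / (tf + k1 * norm)
    return score

corpus = [
    ["bm25", "ranking", "function"],
    ["dense", "retrieval"],
    ["bm25", "scores"],
]
print(bm25_score(["bm25"], corpus[0], corpus))  # small positive value
```

Because the score is a sum of per-term contributions, longer queries and rarer terms push scores up, which is one reason dumped scores can look unlike "typical" ranges seen elsewhere.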
Thanks again for your contributions, and I’d greatly appreciate any insights!
OfirResearch changed discussion status to closed
OfirResearch changed discussion status to open