---
language:
- en
license: apache-2.0
tags:
- biencoder
- sentence-transformers
- text-classification
- sentence-pair-classification
- semantic-similarity
- semantic-search
- retrieval
- reranking
- generated_from_trainer
- dataset_size:76349300
- loss:ArcFaceInBatchLoss
base_model: Alibaba-NLP/gte-modernbert-base
widget:
- source_sentence: '"How much would I need to narrate a ""Let''s Play"" video in order to make money from it on YouTube?"'
  sentences:
  - How much money do people make from YouTube videos with 1 million views?
  - '"How much would I need to narrate a ""Let''s Play"" video in order to make money from it on YouTube?"'
  - '"Does the sentence, ""I expect to be disappointed,"" make sense?"'
- source_sentence: '"I appreciate that.'
  sentences:
  - '"How is the Mariner rewarded in ""The Rime of the Ancient Mariner"" by Samuel Taylor Coleridge?"'
  - '"I appreciate that.'
  - I can appreciate that.
- source_sentence: '"""It is very easy to defeat someone, but too hard to win some one"". What does the previous sentence mean?"'
  sentences:
  - '"How can you use the word ""visceral"" in a sentence?"'
  - '"""It is very easy to defeat someone, but too hard to win some one"". What does the previous sentence mean?"'
  - '"What does ""The loudest one in the room is the weakest one in the room."" Mean?"'
- source_sentence: '" We condemn this raid which is in our view illegal and morally and politically unjustifiable , " London-based NCRI official Ali Safavi told Reuters by telephone .'
  sentences:
  - 'London-based NCRI official Ali Safavi told Reuters : " We condemn this raid , which is in our view illegal and morally and politically unjustifiable . "'
  - The social awkwardness is complicated by the fact that Marianne is a white girl living with a black family .
  - art's cause, this in my opinion
- source_sentence: '"If you click ""like"" on an old post that someone made on your wall yet you''re no longer Facebook friends, will they still receive a notification?"'
  sentences:
  - '"Is there is any two wheeler having a gear box which has the feature ""automatic neutral"" when the engine is off while it is in gear?"'
  - '"If you click ""like"" on an old post that someone made on your wall yet you''re no longer Facebook friends, will they still receive a notification?"'
  - '"If your teenage son posted ""La commedia e finita"" on his Facebook wall, would you be concerned?"'
datasets:
- redis/langcache-sentencepairs-v2
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_precision@1
- cosine_recall@1
- cosine_ndcg@10
- cosine_mrr@1
- cosine_map@100
- cosine_auc_precision_cache_hit_ratio
- cosine_auc_similarity_distribution
model-index:
- name: Redis fine-tuned BiEncoder model for semantic caching on LangCache
  results:
  - task:
      type: custom-information-retrieval
      name: Custom Information Retrieval
    dataset:
      name: test
      type: test
    metrics:
    - type: cosine_accuracy@1
      value: 0.5955802603036876
      name: Cosine Accuracy@1
    - type: cosine_precision@1
      value: 0.5955802603036876
      name: Cosine Precision@1
    - type: cosine_recall@1
      value: 0.5780913232288468
      name: Cosine Recall@1
    - type: cosine_ndcg@10
      value: 0.777639866271746
      name: Cosine Ndcg@10
    - type: cosine_mrr@1
      value: 0.5955802603036876
      name: Cosine Mrr@1
    - type: cosine_map@100
      value: 0.7275779687157514
      name: Cosine Map@100
    - type: cosine_auc_precision_cache_hit_ratio
      value: 0.3639683124583609
      name: Cosine Auc Precision Cache Hit Ratio
    - type: cosine_auc_similarity_distribution
      value: 0.15401896350374616
      name: Cosine Auc Similarity Distribution
---

# Redis fine-tuned BiEncoder model for semantic caching on LangCache

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) on the [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v2) dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for sentence-pair similarity.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base)
- **Maximum Sequence Length:** 100 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v2)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 100, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("redis/langcache-embed-v3")

# Run inference
sentences = [
    '"If you click ""like"" on an old post that someone made on your wall yet you\'re no longer Facebook friends, will they still receive a notification?"',
    '"If you click ""like"" on an old post that someone made on your wall yet you\'re no longer Facebook friends, will they still receive a notification?"',
    '"If your teenage son posted ""La commedia e finita"" on his Facebook wall, would you be concerned?"',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 1.0000, 0.6758],
#         [1.0000, 1.0000, 0.6758],
#         [0.6758, 0.6758, 1.0078]], dtype=torch.bfloat16)
```
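Since this model targets semantic caching, a typical pattern is to compare a new query against previously cached queries and treat any pair whose cosine similarity clears a threshold as a cache hit. The snippet below is a minimal sketch of that idea; the in-memory `cache` dictionary and the `0.9` threshold are illustrative assumptions, not part of LangCache itself, which stores embeddings in Redis with a vector index.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("redis/langcache-embed-v3")

# Illustrative in-memory cache: cached query -> cached LLM response.
cache = {
    "How do I reset my password?": "Click 'Forgot password' on the login page.",
}
cached_queries = list(cache.keys())
cached_embeddings = model.encode(cached_queries)

def lookup(query: str, threshold: float = 0.9):
    """Return the cached response for a semantically similar query, else None."""
    query_embedding = model.encode([query])
    similarities = model.similarity(query_embedding, cached_embeddings)[0]
    best = int(similarities.argmax())
    if float(similarities[best]) >= threshold:
        return cache[cached_queries[best]]  # cache hit
    return None  # cache miss: call the LLM and store the new pair

print(lookup("How can I reset my password?"))
```

The threshold trades cache hit rate against the risk of serving an answer to the wrong question; in practice it is tuned on labeled sentence pairs such as the evaluation data below.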
## Evaluation

### Metrics

#### Custom Information Retrieval

* Dataset: `test`
* Evaluated with `ir_evaluator.CustomInformationRetrievalEvaluator`

| Metric                                | Value      |
|:--------------------------------------|:-----------|
| cosine_accuracy@1                     | 0.5956     |
| cosine_precision@1                    | 0.5956     |
| cosine_recall@1                       | 0.5781     |
| **cosine_ndcg@10**                    | **0.7776** |
| cosine_mrr@1                          | 0.5956     |
| cosine_map@100                        | 0.7276     |
| cosine_auc_precision_cache_hit_ratio  | 0.364      |
| cosine_auc_similarity_distribution    | 0.154      |

## Training Details

### Training Dataset

#### LangCache Sentence Pairs (all)

* Dataset: [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v2)
* Size: 132,354 training samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:

  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |

* Samples:

  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | What high potential jobs are there other than computer science? | What high potential jobs are there other than computer science? | Why IT or Computer Science jobs are being over rated than other Engineering jobs? |
  | Would India ever be able to develop a missile system like S300 or S400 missile? | Would India ever be able to develop a missile system like S300 or S400 missile? | Should India buy the Russian S400 air defence missile system? |
  | water from the faucet is being drunk by a yellow dog | A yellow dog is drinking water from the faucet | Childlessness is low in Eastern European countries. |

* Loss: `losses.ArcFaceInBatchLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```
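For intuition, the sketch below shows the general shape of an in-batch ArcFace-style loss: cosine similarities between anchors and positives in a batch form a logit matrix, an angular margin is added to each anchor's own positive on the diagonal, and cross-entropy pushes the diagonal up against the other in-batch pairs acting as negatives. This is a simplified illustration, not the actual `losses.ArcFaceInBatchLoss` implementation; the `margin` value is a placeholder assumption, and the real loss also handles the explicit `negative` column and the `gather_across_devices` option.

```python
import torch
import torch.nn.functional as F

def arcface_in_batch_loss(anchors: torch.Tensor,
                          positives: torch.Tensor,
                          scale: float = 20.0,
                          margin: float = 0.1) -> torch.Tensor:
    """Illustrative in-batch ArcFace-style loss (not the exact implementation)."""
    # Cosine similarity matrix: entry (i, j) compares anchor i with positive j.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    cos = a @ p.T  # shape: (batch, batch)

    # ArcFace replaces cos(theta) with cos(theta + m) for each anchor's own
    # positive, making the target strictly harder to score highly.
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = torch.eye(cos.size(0), dtype=torch.bool, device=cos.device)
    logits = torch.where(target, torch.cos(theta + margin), cos) * scale

    # Every other positive in the batch serves as a negative for anchor i.
    labels = torch.arange(cos.size(0), device=cos.device)
    return F.cross_entropy(logits, labels)
```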
### Evaluation Dataset

#### LangCache Sentence Pairs (all)

* Dataset: [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v2)
* Size: 132,354 evaluation samples
* Columns: `anchor`, `positive`, and `negative`
* Approximate statistics based on the first 1000 samples:

  |      | anchor | positive | negative |
  |:-----|:-------|:---------|:---------|
  | type | string | string   | string   |

* Samples:

  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | What high potential jobs are there other than computer science? | What high potential jobs are there other than computer science? | Why IT or Computer Science jobs are being over rated than other Engineering jobs? |
  | Would India ever be able to develop a missile system like S300 or S400 missile? | Would India ever be able to develop a missile system like S300 or S400 missile? | Should India buy the Russian S400 air defence missile system? |
  | water from the faucet is being drunk by a yellow dog | A yellow dog is drinking water from the faucet | Childlessness is low in Eastern European countries. |

* Loss: `losses.ArcFaceInBatchLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "gather_across_devices": false
  }
  ```

### Training Logs

| Epoch | Step | test_cosine_ndcg@10 |
|:-----:|:----:|:-------------------:|
| -1    | -1   | 0.7776              |

### Framework Versions

- Python: 3.12.3
- Sentence Transformers: 5.1.0
- Transformers: 4.56.0
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```