pawasthy committed on
Commit 5b47a77 · verified · 1 Parent(s): 0c6d683

Update README.md

Files changed (1)
  1. README.md +29 -12
README.md CHANGED
@@ -14,9 +14,9 @@ tags:
14
 
15
  <!-- Provide a quick summary of what the model is/does. -->
16
 
17
- **Model Summary:** Granite-embedding-english-r2 is a 149M parameter dense biencoder embedding model from the Granite Embeddings collection that can be used to generate high quality text embeddings. This model produces embedding vectors of size 768. Compared to most other open-source models, this model was only trained using open-source relevance-pair datasets with permissive, enterprise-friendly license, plus IBM collected and generated datasets.
18
 
19
- The r2 models feature an increased context length of 8192 and delivers superior performance across standard and IBM-built information retrieval benchmarks (BEIR, UnifiedSearch, RedHAT, ClapNQ), code retrieval (COIR), and long-document search benchmarks (MLDR).
20
 
21
  These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, granite-embedding-english-r2 is optimized to ensure strong alignment between query and passage embeddings.
22
 
@@ -66,7 +66,7 @@ input_passages = [
66
  "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
67
  ]
68
 
69
- # encode queries and passages
70
  query_embeddings = model.encode(input_queries)
71
  passage_embeddings = model.encode(input_passages)
72
 
@@ -117,14 +117,31 @@ query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)
117
  ```
118
 
119
  ## Evaluation Results
120
- The performance of the granite embedding r2 models on MTEB Retrieval (i.e., BEIR) and code retrieval (CoIR) benchmarks is reported below. The average time required to encode and retrieve per query is also reported.
121
-
122
- | Model | Parameters (M) | Embedding Size | MTEB-v1 Retrieval (15) | CoIR (10) | MLDR (En) | Retrieval Time (seconds/query) |
123
- |------------------------------------|:--------------:|:--------------:|:---------------------:|:---------:|:---------:|:------------------------------:|
124
- | granite-embedding-small-english-r2 | 47 | 384 | 50.8 | 53.8 | 39.8 | TBD |
125
- | granite-embedding-english-r2 | 149 | 768 | 53.0 | 55.3 | 40.7 | TBD |
126
- | granite-embedding-30m-english | 30 | 384 | 49.1 | 47.0 | 32.6 | TBD |
127
- | granite-embedding-125m-english | 125 | 768 | 52.3 | 50.3 | 35.0 | TBD |
128
 
129
  ### Model Architecture and Key Features
130
 
@@ -170,7 +187,7 @@ The underlying encoder models using GneissWeb, an IBM-curated dataset composed e
170
  For governance, all our data undergoes a data clearance process subject to technical, business, and governance review. This comprehensive process captures critical information about the data, including but not limited to their content description ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e, personal information).
171
 
172
  ## Infrastructure
173
- We trained the granite embedding english r2 models using IBM's computing cluster, Cognitive Compute Cluster, which is outfitted with NVIDIA A100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
174
 
175
  ## Ethical Considerations and Limitations
176
  Granite-embedding-english-r2 leverages both permissively licensed open-source and select proprietary data for enhanced performance. The training data for the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-embedding-english-r2 is trained only for English texts, and has a context length of 8192 tokens (longer texts will be truncated to this size).
 
14
 
15
  <!-- Provide a quick summary of what the model is/does. -->
16
 
17
+ **Model Summary:** Granite-embedding-english-r2 is a 149M parameter dense bi-encoder embedding model from the Granite Embeddings collection that can be used to generate high-quality text embeddings. The model produces embedding vectors of size 768 and supports a context length of up to 8192 tokens. Compared to most other open-source models, this model was trained using only open-source relevance-pair datasets with permissive, enterprise-friendly licenses, plus IBM-collected and IBM-generated datasets.
18
 
19
+ The r2 models feature an increased context length of **8192** tokens and deliver superior performance across standard and IBM-built information retrieval benchmarks (BEIR, ClapNQ), code retrieval (CoIR), long-document search benchmarks (MLDR), multi-turn conversational retrieval (MTRAG), TableIR (TBD), and many enterprise use cases.
20
 
21
  These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, granite-embedding-english-r2 is optimized to ensure strong alignment between query and passage embeddings.
22
 
 
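As a quick illustration of the bi-encoder workflow described above (queries and passages embedded independently, then compared via cosine similarity), here is a minimal sketch using the sentence-transformers library. The model id ibm-granite/granite-embedding-english-r2 and the example texts are assumptions for illustration, not taken from this diff.

```python
# Minimal sketch: embed a query and a passage independently, then compare with cosine similarity.
# Assumes the sentence-transformers package and the model id below.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")

query = "What is the capital of France?"                      # hypothetical query
passage = "Paris is the capital and largest city of France."  # hypothetical passage

# Bi-encoder: each text is embedded on its own into a 768-dimensional vector.
query_emb = model.encode(query, convert_to_tensor=True)
passage_emb = model.encode(passage, convert_to_tensor=True)

# Cosine similarity scores the query-passage pair; higher means more relevant.
score = util.cos_sim(query_emb, passage_emb)
print(float(score))
```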
66
  "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
67
  ]
68
 
69
+ # encode queries and passages. The model produces unnormalized vectors; if your task requires normalized embeddings, pass normalize_embeddings=True to encode().
70
  query_embeddings = model.encode(input_queries)
71
  passage_embeddings = model.encode(input_passages)
72
 
 
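To make the note above about unnormalized outputs concrete, the sketch below compares encode with and without normalize_embeddings=True and checks the resulting vector norms. The model id and the placeholder inputs are assumptions for illustration.

```python
# Sketch: unnormalized vs. normalized embeddings with sentence-transformers.
# The model id and inputs below are placeholders, not taken from this diff.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")
queries = ["first example query", "second example query"]

raw = model.encode(queries)                              # unnormalized vectors
unit = model.encode(queries, normalize_embeddings=True)  # unit-length vectors

print(np.linalg.norm(raw, axis=1))   # arbitrary magnitudes
print(np.linalg.norm(unit, axis=1))  # approximately 1.0 for every row
```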
117
  ```
118
 
119
  ## Evaluation Results
120
+ The performance of the Granite Embedding English models on MTEB Retrieval (i.e., BEIR), MTEB-v2, code retrieval (CoIR), long-document retrieval (MLDR), and multi-turn retrieval (MTRAG) benchmarks is reported below.
121
+ The average speed to encode documents on a single NVIDIA RTX 5090 GPU, using a sliding window with a 512-token context length, is also reported.
122
+
123
+ | Model | Parameters (M) | Embedding Size | BEIR Retrieval (15) | MTEB-v2 (56) | CoIR (10) | MLDR (En) | MTRAG (4) | Encoding Speed (documents/sec) |
124
+ |------------------------------------|:--------------:|:--------------:|:-------------------:|:-----------:|:---------:|:---------:|:---------:|:-------------------------------:|
125
+ | granite-embedding-30m-english | 30 | 384 | 49.1 | 59.45 | 47.0 | 32.6 | 48.61 | 140.8 |
126
+ | granite-embedding-125m-english | 125 | 768 | 52.3 | 61.37 | 50.3 | 35.0 | 49.37 | 80.7 |
127
+ | granite-embedding-small-english-r2 | 47 | 384 | 50.8 | 60.38 | 53.8 | 39.8 | 48.11 | 138.8 |
128
+ | granite-embedding-english-r2 | 149 | 768 | 53.0 | 62.18 | 55.3 | 40.7 | 56.73 | 80.9 |
129
+
130
+
131
+ | Model | Parameters (M) | Embedding Size | Average | MTEB-v2 Retrieval (10) | CoIR (10) | MLDR (En) | Table IR | MTRAG |
132
+ |------------------------------------|:--------------:|:--------------:|:-------:|:----------------------:|:---------:|:---------:|:--------:|:-----:|
133
+ | gte-modernbert-base | | | | | | | | |
134
+ | nomic-ai/modernbert-embed-base | | | | | | | | |
135
+ | snowflake-arctic-embed-m-v2.0 | | | | | | | | |
136
+ | gte-base-en-v1.5 | | | | | | | | |
137
+ | e5-base-v2 | | | | | | | | |
138
+ | e5-small-v2 | | | | | | | | |
139
+ | bge-base-en-v1.5 | | | | | | | | |
140
+ | bge-small-en-v1.5 | | | | | | | | |
141
+ | granite-embedding-125m-english | | | | | | | | |
142
+ | granite-embedding-30m-english | | | | | | | | |
143
+ | granite-embedding-english-r2 | | | | | | | | |
144
+ | granite-embedding-small-english-r2 | | | | | | | | |
145
 
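For readers who want a feel for the Encoding Speed column above, the following is a hypothetical timing sketch, not the actual benchmark harness: it simply truncates each document to 512 tokens (rather than applying a true sliding window) and measures documents per second. The model id, batch size, and placeholder corpus are assumptions.

```python
# Hypothetical throughput sketch; not the benchmark setup used for the table above.
import time
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")  # assumed model id
model.max_seq_length = 512  # cap the context at 512 tokens (truncation, not a sliding window)

docs = ["some placeholder document text ..."] * 1000  # placeholder corpus

start = time.perf_counter()
model.encode(docs, batch_size=64, show_progress_bar=False)
elapsed = time.perf_counter() - start

print(f"{len(docs) / elapsed:.1f} documents/sec")
```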
146
  ### Model Architecture and Key Features
147
 
 
187
  For governance, all our data undergoes a data clearance process subject to technical, business, and governance review. This comprehensive process captures critical information about the data, including but not limited to their content description ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e, personal information).
188
 
189
  ## Infrastructure
190
+ We trained the Granite Embedding English R2 models using IBM's BlueVela computing cluster, which is outfitted with NVIDIA H100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
191
 
192
  ## Ethical Considerations and Limitations
193
  Granite-embedding-english-r2 leverages both permissively licensed open-source and select proprietary data for enhanced performance. The training data for the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-embedding-english-r2 is trained only for English texts, and has a context length of 8192 tokens (longer texts will be truncated to this size).