Update README.md
more performance numbers, fixed typos.
README.md
CHANGED
@@ -10,13 +10,13 @@ tags:
 - transformers
 - mteb
 ---
-#
+# granite-embedding-english-r2

 <!-- Provide a quick summary of what the model is/does. -->

-
+granite-embedding-english-r2 is a 149M parameter dense bi-encoder embedding model from the Granite Embeddings collection that can be used to generate high-quality text embeddings. The model produces embedding vectors of size 768 from inputs of up to 8192 tokens. Unlike most other open-source models, it was trained using only open-source relevance-pair datasets with permissive, enterprise-friendly licenses, plus IBM-collected and IBM-generated datasets.

-The r2 models feature an increased context length of **8192** and deliver superior performance across standard and IBM-built information retrieval benchmarks (BEIR,
+The r2 models feature an increased context length of **8192** and deliver superior performance across standard and IBM-built information retrieval benchmarks (BEIR, ClapNQ), code retrieval (CoIR), long-document search benchmarks (MLDR), multi-turn conversational retrieval (MTRAG), table retrieval (TBD), and many enterprise use cases.

 The Granite Embedding collection delivers innovative sentence-transformer models purposefully built for retrieval-based applications. These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval-oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, the Granite Embedding lineup is optimized to ensure strong alignment between query and passage embeddings.

@@ -70,9 +70,9 @@ input_passages = [
     "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
 ]

-# encode queries and passages
-query_embeddings = model.encode(input_queries)
-passage_embeddings = model.encode(input_passages)
+# encode queries and passages. The model produces unnormalized vectors; if your task requires normalized embeddings, pass normalize_embeddings=True to encode(), as below.
+query_embeddings = model.encode(input_queries, normalize_embeddings=True)
+passage_embeddings = model.encode(input_passages, normalize_embeddings=True)

 # calculate cosine similarity
 print(util.cos_sim(query_embeddings, passage_embeddings))
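The hunk above tweaks the model card's Sentence Transformers snippet. For context, a minimal self-contained sketch of that usage, assuming the repository id `ibm-granite/granite-embedding-english-r2` and an illustrative query (the passage text is abridged from the card's own example):

```python
# Minimal sketch of the Sentence Transformers usage shown in the hunk above.
# Assumptions: the repository id and the example query; the passage is abridged
# from the card's own example.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")

input_queries = ["what is the top of a mountain called"]  # hypothetical query
input_passages = [
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain."
]

# normalize_embeddings=True returns unit-length vectors, so cosine similarity
# is equivalent to a dot product between query and passage embeddings
query_embeddings = model.encode(input_queries, normalize_embeddings=True)
passage_embeddings = model.encode(input_passages, normalize_embeddings=True)

print(util.cos_sim(query_embeddings, passage_embeddings))
```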
@@ -120,14 +120,32 @@ query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)
 ```

 ## Evaluation Results
 The performance of the Granite Embedding English models on MTEB Retrieval (i.e., BEIR) and code retrieval (CoIR) benchmarks is reported below.
+The average speed to encode documents on a single 5090 GPU, using a sliding window with a 512-token context length, is also reported.

-| Model | Parameters (M) | Embedding Size | MTEB-v1 Retrieval (15) | CoIR (10) | MLDR (En) | Retrieval Time (seconds/query) |
-|------------------------------------|:--------------:|:--------------:|:----------------------:|:---------:|:---------:|:------------------------------:|
-| granite-embedding-small-english-r2 | 47 | 384 | 50.8 | 53.8 | 39.8 | TBD |
-| granite-embedding-english-r2 | 149 | 768 | 53.0 | 55.3 | 40.7 | TBD |
-| granite-embedding-30m-english | 30 | 384 | 49.1 | 47.0 | 32.6 | TBD |
-| granite-embedding-125m-englis | 125 | 768 | 52.3 | 50.3 | 35.0 | TBD |
+| Model | Parameters (M) | Embedding Size | BEIR Retrieval (15) | MTEB-v2 (56) | CoIR (10) | MLDR (En) | MTRAG (4) | Encoding Speed (documents/sec) |
+|------------------------------------|:--------------:|:--------------:|:-------------------:|:------------:|:---------:|:---------:|:---------:|:------------------------------:|
+| granite-embedding-30m-english | 30 | 384 | 49.1 | 59.45 | 47.0 | 32.6 | 48.61 | 140.8 |
+| granite-embedding-125m-english | 125 | 768 | 52.3 | 61.37 | 50.3 | 35.0 | 49.37 | 80.7 |
+| granite-embedding-small-english-r2 | 47 | 384 | 50.8 | 60.38 | 53.8 | 39.8 | 48.11 | 138.8 |
+| granite-embedding-english-r2 | 149 | 768 | 53.0 | 62.18 | 55.3 | 40.7 | 56.73 | 80.9 |
+
+| Model | Parameters (M) | Embedding Size | Average | MTEB-v2 Retrieval (10) | CoIR (10) | MLDR (En) | Table IR | MTRAG |
+|------------------------------------|:--------------:|:--------------:|:-------:|:----------------------:|:---------:|:---------:|:--------:|:-----:|
+| gte-modernbert-base | | | | | | | | |
+| nomic-ai/modernbert-embed-base | | | | | | | | |
+| snowflake-arctic-embed-m-v2.0 | | | | | | | | |
+| gte-base-en-v1.5 | | | | | | | | |
+| e5-base-v2 | | | | | | | | |
+| e5-small-v2 | | | | | | | | |
+| bge-base-en-v1.5 | | | | | | | | |
+| bge-small-en-v1.5 | | | | | | | | |
+| granite-embedding-125m-english | | | | | | | | |
+| granite-embedding-30m-english | | | | | | | | |
+| granite-embedding-english-r2 | | | | | | | | |
+| granite-embedding-small-english-r2 | | | | | | | | |

 ### Model Architecture and Key Features

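The hunk above adds an encoding-speed figure measured with a 512-token sliding window. A rough sketch of one way such a throughput number can be reproduced; the repository id, corpus, window stride, and per-window handling are assumptions rather than the official benchmarking protocol:

```python
# Rough throughput sketch: documents/sec when encoding with 512-token windows.
# Repo id, placeholder corpus, stride, and per-window handling are assumptions.
import time
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")
docs = ["long document text ..."] * 100  # placeholder corpus

def windows(text, size=512, stride=512):
    # split a document into windows of at most `size` tokens using the model's tokenizer
    ids = model.tokenizer(text, add_special_tokens=False)["input_ids"]
    for start in range(0, max(len(ids), 1), stride):
        yield model.tokenizer.decode(ids[start:start + size])

start = time.perf_counter()
for doc in docs:
    window_embeddings = model.encode(list(windows(doc)), normalize_embeddings=True)
elapsed = time.perf_counter() - start
print(f"{len(docs) / elapsed:.1f} documents/sec")
```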
@@ -135,17 +153,17 @@ The latest Granite Embedding r2 release introduces two English embedding models,
 - _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
 - _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.

-The following table shows the structure of the two models:
+The following table shows the structure of the two R2 models:

 | Model | granite-embedding-small-english-r2 | granite-embedding-english-r2 |
 | :--------- | :-------:|:--------:|
 | Embedding size | 384 | 768 |
-| Number of layers | 12 | 22
+| Number of layers | 12 | 22 |
 | Number of attention heads | 12 | 12 |
 | Intermediate size | 1536 | 1152 |
-| Activation Function | GeGLU | GeGLU
+| Activation Function | GeGLU | GeGLU |
 | Vocabulary Size | 50368 | 50368 |
-| Max. Sequence Length | 8192 | 8192
+| Max. Sequence Length | 8192 | 8192 |
 | # Parameters | 47M | 149M |

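The architecture numbers in the table above can be cross-checked against the published configuration. A small sketch, assuming the repository ids under the `ibm-granite` organization and standard Hugging Face config field names:

```python
# Sketch: read the architecture values from each model's config.
# Repository ids and config attribute names (standard Hugging Face fields) are assumptions.
from transformers import AutoConfig

for repo in ("ibm-granite/granite-embedding-small-english-r2",
             "ibm-granite/granite-embedding-english-r2"):
    cfg = AutoConfig.from_pretrained(repo)
    print(repo)
    print("  embedding size     :", cfg.hidden_size)              # 384 / 768
    print("  layers             :", cfg.num_hidden_layers)        # 12 / 22
    print("  attention heads    :", cfg.num_attention_heads)      # 12 / 12
    print("  intermediate size  :", cfg.intermediate_size)        # 1536 / 1152
    print("  vocabulary size    :", cfg.vocab_size)               # 50368
    print("  max sequence length:", cfg.max_position_embeddings)  # 8192
```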
@@ -158,7 +176,6 @@ The r2 models incorporate key enhancements from the ModernBERT architecture, including:
 - Flash Attention 2.0 for improved efficiency
 - Streamlined parameters, eliminating unnecessary bias terms

-
 ## Data Collection
 Granite embedding models are trained using data from four key sources:
 1. Unsupervised title-body paired data scraped from the web
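The enhancements listed above include Flash Attention 2.0. A hedged sketch of opting into it when loading the encoder with `transformers`; the repository id is an assumption, and the `flash-attn` package plus a supported GPU are required:

```python
# Sketch: load the encoder with Flash Attention 2.0 enabled via the standard
# transformers switch. Repo id is an assumption; pooling of the hidden states
# into a sentence embedding is model-specific and not shown here.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "ibm-granite/granite-embedding-english-r2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # reduced precision commonly paired with FA2
    attn_implementation="flash_attention_2",  # generic transformers flag for Flash Attention 2.0
).to("cuda").eval()

batch = tokenizer(["a short passage to embed"], return_tensors="pt").to("cuda")
with torch.no_grad():
    hidden_states = model(**batch).last_hidden_state
print(hidden_states.shape)
```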
@@ -170,13 +187,14 @@ Notably, we _do not use_ the popular MS-MARCO retrieval dataset in our training

 The underlying encoder models are trained using GneissWeb, an IBM-curated dataset composed exclusively of open, commercial-friendly sources.

 For governance, all our data undergoes a data clearance process subject to technical, business, and governance review.
+This comprehensive process captures critical information about the data, including but not limited to content description, ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e., personal information).

 ## Infrastructure
-We train Granite Embedding Models using IBM's computing cluster,
+We train Granite Embedding models using IBM's computing cluster, BlueVela, which is outfitted with NVIDIA H100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.

 ## Ethical Considerations and Limitations
-The data used to train the base language model was filtered to remove text containing hate, abuse, and profanity.
+The data used to train the base language model was filtered to remove text containing hate, abuse, and profanity. granite-embedding-english-r2 is finetuned on English data and has a context length of 8192 tokens (longer texts will be truncated to this size).

 ## Resources
 - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
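The limitations hunk above notes the 8192-token context window, with longer texts truncated to that size. A minimal sketch of what that truncation looks like at the tokenizer level, assuming the repository id:

```python
# Sketch: inputs longer than the 8192-token limit are truncated by the tokenizer.
# The repository id is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-embedding-english-r2")
batch = tokenizer(
    ["a very long document " * 5000],  # placeholder text longer than the context window
    truncation=True,
    max_length=8192,
    return_tensors="pt",
)
print(batch["input_ids"].shape)  # sequence dimension is capped at 8192
```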