abedaniels committed
Commit 0c6d683 · verified · 1 Parent(s): bd57e8b

Update README.md

Files changed (1)
  1. README.md +32 -54
README.md CHANGED
@@ -10,40 +10,36 @@ tags:
 - transformers
 - mteb
 ---
- # granite-embedding-english-r2
 
 <!-- Provide a quick summary of what the model is/does. -->
 
- granite-embedding-english-r2 is a 147M parameter dense biencoder embedding model from the Granite Embeddings collection that can be used to generate high quality text embeddings. This model produces embedding vectors of size 768 based on context length of upto 8192 tokens. Compared to most other open-source models, this model was only trained using open-source relevance-pair datasets with permissive, enterprise-friendly license, plus IBM collected and generated datasets.
 
- The r2 models feature an increased context length of **8192** and deliver superior performance across standard and IBM-built information retrieval benchmarks (BEIR, ClapNQ), code retrieval (COIR), long-document search benchmarks (MLDR), conversational multi-turn (MTRAG), TableIR (TBD), and on many enterprise use cases.
 
- The Granite Embedding collection delivers innovative sentence-transformer models purposefully built for retrieval-based applications. These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, the Granite Embedding lineup is optimized to ensure strong alignment between query and passage embeddings.
 
- The latest Granite Embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
 - _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
 - _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
 
-
 ## Model Details
 
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
 - **Developed by:** Granite Embedding Team, IBM
 - **Repository:** [ibm-granite/granite-embedding-models](https://github.com/ibm-granite/granite-embedding-models)
- - **Paper [optional]:** [Technical Report](https://arxiv.org/abs/2502.20204)
 - **Language(s) (NLP):** English
 - **Release Date**: July 31, 2024
 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
 
- ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
- The model is designed to produce fixed length vector representations for a given text, which can be used for text similarity, retrieval, and search applications.
 
 **Usage with Sentence Transformers:**
 The model is compatible with the SentenceTransformers library and is very easy to use:
 
 First, install the sentence transformers library
@@ -70,16 +66,17 @@ input_passages = [
 "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
 ]
 
- # encode queries and passages. The model produces unnormalized vectors. If your task requires normalized embeddings pass normalize_embeddings=True to encode as below.
- query_embeddings = model.encode(input_queries, normalize_embeddings=True)
- passage_embeddings = model.encode(input_passages, normalize_embeddings=True)
 
 # calculate cosine similarity
 print(util.cos_sim(query_embeddings, passage_embeddings))
 ```
 
 **Usage with Huggingface Transformers:**
- This is a simple example of how to use the Granite-Embedding-278m-Multilingual model with the Transformers library and PyTorch.
 
 First, install the required libraries
 ```shell
@@ -120,64 +117,47 @@ query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)
 ```
 
 ## Evaluation Results
- The performance of the Granite Embedding English models on MTEB Retrieval (i.e., BEIR) and code retrieval (CoIR) benchmarks is reported below.
- The average speed to encode documents on a single 5090 GPU using a sliding window with 512 context length is also reported.
-
- | Model | Parameters (M) | Embedding Size | BEIR Retrieval (15) | MTEB-v2 (56) | CoIR (10) | MLDR (En) | MTRAG (4) | Encoding Speed (documents/sec) |
- |------------------------------------|:--------------:|:--------------:|:-------------------:|:------------:|:---------:|:---------:|:---------:|:-------------------------------:|
- | granite-embedding-30m-english | 30 | 384 | 49.1 | 59.45 | 47.0 | 32.6 | 48.61 | 140.8 |
- | granite-embedding-125m-english | 125 | 768 | 52.3 | 61.37 | 50.3 | 35.0 | 49.37 | 80.7 |
- | granite-embedding-small-english-r2 | 47 | 384 | 50.8 | 60.38 | 53.8 | 39.8 | 48.11 | 138.8 |
- | granite-embedding-english-r2 | 149 | 768 | 53.0 | 62.18 | 55.3 | 40.7 | 56.73 | 80.9 |
-
-
- | Model | Parameters (M) | Embedding Size | Average | MTEB-v2 Retrieval (10) | CoIR (10) | MLDR (En) | Table IR | MTRAG |
- |------------------------------------|:--------------:|:--------------:| ------- |:----------------------:|:---------:|:---------:|:--------:|:-----:|
- | gte-modernbert-base | | | | | | | | |
- | nomic-ai/modernbert-embed-base | | | | | | | | |
- | snowflake-arctic-embed-m-v2.0 | | | | | | | | |
- | gte-base-en-v1.5 | | | | | | | | |
- | e5-base-v2 | | | | | | | | |
- | e5-small-v2 | | | | | | | | |
- | bge-base-en-v1.5 | | | | | | | | |
- | bge-small-en-v1.5 | | | | | | | | |
- | granite-embedding-125m-english | | | | | | | | |
- | granite-embedding-30m-english | | | | | | | | |
- | granite-embedding-english-r2 | | | | | | | | |
- | granite-embedding-small-english-r2 | | | | | | | | |
 
 ### Model Architecture and Key Features
 
- The latest Granite Embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
 - _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
 - _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
 
- The following table shows the structure of the two R2 models:
 
 | Model | granite-embedding-small-english-r2 | granite-embedding-english-r2 |
 | :--------- | :-------: | :--------: |
 | Embedding size | 384 | 768 |
- | Number of layers | 12 | 22 |
 | Number of attention heads | 12 | 12 |
 | Intermediate size | 1536 | 1152 |
- | Activation Function | GeGLU | GeGLU |
 | Vocabulary Size | 50368 | 50368 |
- | Max. Sequence Length | 8192 | 8192 |
 | # Parameters | 47M | 149M |
 
 
 ### Training and Optimization
 
- The r2 models incorporate key enhancements from the ModernBERT architecture, including:
 - Alternating attention lengths to accelerate processing
 - Rotary position embeddings for extended sequence length
 - A newly trained tokenizer optimized with code and text data
 - Flash Attention 2.0 for improved efficiency
 - Streamlined parameters, eliminating unnecessary bias terms
 
 ## Data Collection
- Granite embedding models are trained using data from four key sources:
 1. Unsupervised title-body paired data scraped from the web
 2. Publicly available paired data with permissive, enterprise-friendly licenses
 3. IBM-internal paired data targeting specific technical domains
@@ -187,16 +167,14 @@ Notably, we _do not use_ the popular MS-MARCO retrieval dataset in our training
 
 The underlying encoder models are trained using GneissWeb, an IBM-curated dataset composed exclusively of open, commercial-friendly sources.
 
- For governance, all our data undergoes a data clearance process subject to technical, business, and governance review.
- This comprehensive process captures critical information about the data, including but not limited to their content description ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e, personal information).
 
 ## Infrastructure
- We train Granite Embedding Models using IBM's computing cluster, BlueVela Cluster, which is outfitted with NVIDIA H100 80gb GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
 
 ## Ethical Considerations and Limitations
- The data used to train the base language model was filtered to remove text containing hate, abuse, and profanity. granite-embedding-english-r2 is finetuned on English, and has a context length of 8192 tokens (longer texts will be truncated to this size).
 
- ## Resources
 - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
 - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
 - 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
 
 - transformers
 - mteb
 ---
+ # Granite-Embedding-English-R2
 
 <!-- Provide a quick summary of what the model is/does. -->
 
+ **Model Summary:** Granite-embedding-english-r2 is a 149M parameter dense bi-encoder embedding model from the Granite Embeddings collection that can be used to generate high-quality text embeddings. This model produces embedding vectors of size 768. Compared to most other open-source models, this model was trained using only open-source relevance-pair datasets with permissive, enterprise-friendly licenses, plus IBM-collected and IBM-generated datasets.
 
+ The r2 models feature an increased context length of 8192 tokens and deliver superior performance across standard and IBM-built information retrieval benchmarks (BEIR, UnifiedSearch, RedHAT, ClapNQ), code retrieval (COIR), and long-document search benchmarks (MLDR).
 
+ These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval-oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, granite-embedding-english-r2 is optimized to ensure strong alignment between query and passage embeddings.
 
+ The latest Granite Embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
 - _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
 - _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
 
 ## Model Details
 
 - **Developed by:** Granite Embedding Team, IBM
 - **Repository:** [ibm-granite/granite-embedding-models](https://github.com/ibm-granite/granite-embedding-models)
+ - **Paper:** Coming Soon
 - **Language(s) (NLP):** English
 - **Release Date**: July 31, 2024
 - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
 
+ **Intended Use:** The model is designed to produce fixed-length vector representations for a given text, which can be used for text similarity, retrieval, and search applications.
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
 **Usage with Sentence Transformers:**
+
 The model is compatible with the SentenceTransformers library and is very easy to use:
 
 First, install the sentence transformers library
 
 "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
 ]
 
+ # encode queries and passages
+ query_embeddings = model.encode(input_queries)
+ passage_embeddings = model.encode(input_passages)
 
 # calculate cosine similarity
 print(util.cos_sim(query_embeddings, passage_embeddings))
 ```
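The hunks above show only fragments of the snippet, so here is a minimal end-to-end sketch of the same Sentence Transformers flow. The model id `ibm-granite/granite-embedding-english-r2` and the example query are illustrative assumptions, not text from this commit:

```python
# Minimal sketch of the Sentence Transformers usage above (assumed model id).
# Install first: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")  # assumed Hub id

input_queries = ["what do you call the top of a mountain?"]  # illustrative query
input_passages = [
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : "
    "the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between "
    "the leaders of two or more governments."
]

# encode queries and passages; the earlier revision notes the raw vectors are unnormalized,
# so pass normalize_embeddings=True if your task needs unit-length embeddings
query_embeddings = model.encode(input_queries)
passage_embeddings = model.encode(input_passages)

# cosine similarity between each query and each passage
print(util.cos_sim(query_embeddings, passage_embeddings))
```

Because this is a bi-encoder, passage embeddings can be computed once and cached; only the query needs to be encoded at search time.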
 
 **Usage with Huggingface Transformers:**
+
+ This is a simple example of how to use the granite-embedding-english-r2 model with the Transformers library and PyTorch.
 
 First, install the required libraries
 ```shell
 
 ```
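The diff elides the card's full Transformers snippet (the unchanged lines between the two fences above), so the following is only a reconstruction sketch. The `pip install` command, the CLS-token pooling choice, and the model id are assumptions rather than text from this commit; the last visible line of the next hunk suggests the original snippet also normalizes the pooled embeddings.

```python
# Hypothetical reconstruction of the elided Transformers usage; verify against the full README.
# Install first: pip install torch transformers
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "ibm-granite/granite-embedding-english-r2"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path)
model.eval()

input_queries = ["what do you call the top of a mountain?"]  # illustrative query

# tokenize, truncating at the model's 8192-token context window
tokenized = tokenizer(input_queries, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**tokenized)
    # CLS-token pooling is an assumption; the elided snippet may pool differently
    query_embeddings = outputs.last_hidden_state[:, 0, :]

# normalize, matching the visible line `torch.nn.functional.normalize(query_embeddings, dim=1)`
query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)
print(query_embeddings.shape)  # expected: torch.Size([1, 768])
```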
 
 ## Evaluation Results
+ The performance of the Granite Embedding r2 models on MTEB Retrieval (i.e., BEIR) and code retrieval (CoIR) benchmarks is reported below. The average time required to encode and retrieve per query is also reported.
 
+ | Model | Parameters (M) | Embedding Size | MTEB-v1 Retrieval (15) | CoIR (10) | MLDR (En) | Retrieval Time (seconds/query) |
+ |------------------------------------|:--------------:|:--------------:|:----------------------:|:---------:|:---------:|:------------------------------:|
+ | granite-embedding-small-english-r2 | 47 | 384 | 50.8 | 53.8 | 39.8 | TBD |
+ | granite-embedding-english-r2 | 149 | 768 | 53.0 | 55.3 | 40.7 | TBD |
+ | granite-embedding-30m-english | 30 | 384 | 49.1 | 47.0 | 32.6 | TBD |
+ | granite-embedding-125m-english | 125 | 768 | 52.3 | 50.3 | 35.0 | TBD |
 
 ### Model Architecture and Key Features
 
+ The latest Granite Embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
 - _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
 - _granite-embedding-small-english-r2_ (**47M** parameters): A _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.
 
+ The following table shows the structure of the two models:
 
 | Model | granite-embedding-small-english-r2 | granite-embedding-english-r2 |
 | :--------- | :-------: | :--------: |
 | Embedding size | 384 | 768 |
+ | Number of layers | 12 | 22 |
 | Number of attention heads | 12 | 12 |
 | Intermediate size | 1536 | 1152 |
+ | Activation Function | GeGLU | GeGLU |
 | Vocabulary Size | 50368 | 50368 |
+ | Max. Sequence Length | 8192 | 8192 |
 | # Parameters | 47M | 149M |
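A quick way to cross-check the dimensions in this table against the released configurations, assuming both checkpoints are published on the Hugging Face Hub under the repo ids below (the ids and the exact config field names are assumptions):

```python
# Print the main architecture fields for both r2 checkpoints (assumed Hub ids).
from transformers import AutoConfig

for repo_id in [
    "ibm-granite/granite-embedding-small-english-r2",
    "ibm-granite/granite-embedding-english-r2",
]:
    cfg = AutoConfig.from_pretrained(repo_id)
    print(repo_id)
    # BERT-style field names; a ModernBERT config may name some of these differently
    print("  hidden size:      ", getattr(cfg, "hidden_size", "n/a"))
    print("  layers:           ", getattr(cfg, "num_hidden_layers", "n/a"))
    print("  attention heads:  ", getattr(cfg, "num_attention_heads", "n/a"))
    print("  intermediate size:", getattr(cfg, "intermediate_size", "n/a"))
    print("  vocab size:       ", getattr(cfg, "vocab_size", "n/a"))
    print("  max positions:    ", getattr(cfg, "max_position_embeddings", "n/a"))
```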
 
 
 ### Training and Optimization
 
+ The Granite Embedding r2 models incorporate key enhancements from the ModernBERT architecture, including:
 - Alternating attention lengths to accelerate processing
 - Rotary position embeddings for extended sequence length
 - A newly trained tokenizer optimized with code and text data
 - Flash Attention 2.0 for improved efficiency
 - Streamlined parameters, eliminating unnecessary bias terms
 
+
 ## Data Collection
+ Granite Embedding r2 models are trained using data from four key sources:
 1. Unsupervised title-body paired data scraped from the web
 2. Publicly available paired data with permissive, enterprise-friendly licenses
 3. IBM-internal paired data targeting specific technical domains
 
 
 The underlying encoder models are trained using GneissWeb, an IBM-curated dataset composed exclusively of open, commercial-friendly sources.
 
+ For governance, all our data undergoes a data clearance process subject to technical, business, and governance review. This comprehensive process captures critical information about the data, including but not limited to their content description, ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e., personal information).
 
 ## Infrastructure
+ We trained the Granite Embedding English r2 models using IBM's computing cluster, Cognitive Compute Cluster, which is outfitted with NVIDIA A100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.
 
 ## Ethical Considerations and Limitations
+ Granite-embedding-english-r2 leverages both permissively licensed open-source and select proprietary data for enhanced performance. The training data for the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-embedding-english-r2 is trained only on English text and has a context length of 8192 tokens (longer texts will be truncated to this size).
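The truncation behaviour described here can be checked directly with Sentence Transformers; a small sketch, using the same assumed model id as above:

```python
# Confirm the context window and that over-length inputs are truncated rather than rejected.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")  # assumed Hub id
print(model.max_seq_length)  # expected: 8192

long_document = "lorem ipsum " * 10000   # far more than 8192 tokens
embedding = model.encode(long_document)  # input beyond the context window is truncated
print(embedding.shape)                   # (768,)
```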
 
 
 - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
 - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
 - 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources