pawasthy committed · Commit 6743ff3 · verified · Parent(s): b04df4f

Update README.md

Files changed (1): README.md (+197 -3)
---
license: apache-2.0
language:
- en
pipeline_tag: sentence-similarity
library_name: sentence-transformers
tags:
- granite
- embeddings
- transformers
- mteb
---
# Granite-Embedding-English-R2

<!-- Provide a quick summary of what the model is/does. -->

Granite-Embedding-English-R2 is a 149M parameter dense biencoder embedding model from the Granite Embeddings collection that can be used to generate high-quality text embeddings. The model produces embedding vectors of size 768. Unlike most other open-source embedding models, it was trained only on open-source relevance-pair datasets with permissive, enterprise-friendly licenses, plus IBM-collected and IBM-generated datasets.

The r2 models feature an increased context length of **8192** tokens and deliver superior performance across standard and IBM-built information retrieval benchmarks (BEIR, UnifiedSearch, RedHAT, ClapNQ), code retrieval (CoIR), and long-document search benchmarks (MLDR).

The Granite Embedding collection delivers innovative sentence-transformer models purposefully built for retrieval-based applications. These models use a bi-encoder architecture to generate high-quality embeddings from text inputs such as queries, passages, and documents, enabling seamless comparison through cosine similarity. Built using retrieval-oriented pretraining, contrastive finetuning, knowledge distillation, and model merging, the Granite Embedding lineup is optimized to ensure strong alignment between query and passage embeddings.

The latest Granite Embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
- _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
- _granite-embedding-small-english-r2_ (**47M** parameters): a _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Granite Embedding Team, IBM
- **Repository:** [ibm-granite/granite-embedding-models](https://github.com/ibm-granite/granite-embedding-models)
- **Paper:** [Technical Report](https://arxiv.org/abs/2502.20204)
- **Language(s) (NLP):** English
- **Release Date:** July 31, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model is designed to produce fixed-length vector representations for a given text, which can be used for text similarity, retrieval, and search applications.

**Usage with Sentence Transformers:**
The model is compatible with the Sentence Transformers library and is easy to use:

First, install the Sentence Transformers library:
```shell
pip install sentence-transformers
```

The model can then be used to encode pairs of text and compute the similarity between their representations:

```python
from sentence_transformers import SentenceTransformer, util

model_path = "ibm-granite/granite-embedding-english-r2"
# Load the Sentence Transformer model
model = SentenceTransformer(model_path)

input_queries = [
    ' Who made the song My achy breaky heart? ',
    'summit define'
]

input_passages = [
    "Achy Breaky Heart is a country song written by Don Von Tress. Originally titled Don't Tell My Heart and performed by The Marcy Brothers in 1991. ",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

# encode queries and passages
query_embeddings = model.encode(input_queries)
passage_embeddings = model.encode(input_passages)

# calculate cosine similarity
print(util.cos_sim(query_embeddings, passage_embeddings))
```
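
For retrieval over more than a handful of passages, the same embeddings can be used with the library's built-in semantic search helper, which ranks passages for each query by cosine similarity. The snippet below is a minimal sketch of that flow; the query and passage texts are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")

queries = ["who wrote achy breaky heart"]
passages = [
    "Achy Breaky Heart is a country song written by Don Von Tress.",
    "A summit is the highest point of a mountain.",
    "A summit is also a meeting between the leaders of two or more governments."
]

# encode to tensors so they can be passed to the search helper directly
query_embeddings = model.encode(queries, convert_to_tensor=True)
passage_embeddings = model.encode(passages, convert_to_tensor=True)

# retrieve the top-2 passages per query, ranked by cosine similarity
hits = util.semantic_search(query_embeddings, passage_embeddings, top_k=2)
for query, query_hits in zip(queries, hits):
    print(query)
    for hit in query_hits:
        print(f"  {hit['score']:.3f}  {passages[hit['corpus_id']]}")
```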

**Usage with Hugging Face Transformers:**
This is a simple example of how to use the Granite-Embedding-English-R2 model with the Transformers library and PyTorch.

First, install the required libraries:
```shell
pip install transformers torch
```

The model can then be used to encode pairs of text:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "ibm-granite/granite-embedding-english-r2"

# Load the model and tokenizer
model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

input_queries = [
    ' Who made the song My achy breaky heart? ',
    'summit define'
]

# tokenize inputs
tokenized_queries = tokenizer(input_queries, padding=True, truncation=True, return_tensors='pt')

# encode queries
with torch.no_grad():
    # Queries
    model_output = model(**tokenized_queries)
    # Perform pooling. granite-embedding-english-r2 uses CLS pooling
    query_embeddings = model_output[0][:, 0]

# normalize the embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, dim=1)

```
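
The example above stops at query embeddings. To score passages against those queries with plain Transformers, the passages can be encoded with the same CLS pooling and normalization, after which a matrix product of the normalized embeddings yields cosine similarities. The snippet below is a self-contained sketch of that flow; the `embed` helper and the example texts are illustrative, not part of the model's API.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "ibm-granite/granite-embedding-english-r2"
model = AutoModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

input_queries = ['Who made the song My achy breaky heart?', 'summit define']
input_passages = [
    "Achy Breaky Heart is a country song written by Don Von Tress.",
    "A summit is the highest point of a mountain, or a meeting between the leaders of two or more governments."
]

def embed(texts):
    # CLS pooling followed by L2 normalization, as in the example above
    tokens = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        output = model(**tokens)
    embeddings = output[0][:, 0]
    return torch.nn.functional.normalize(embeddings, dim=1)

query_embeddings = embed(input_queries)
passage_embeddings = embed(input_passages)

# with L2-normalized embeddings, the matrix product equals cosine similarity
print(query_embeddings @ passage_embeddings.T)
```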

## Evaluation Results
The performance of the Granite Embedding English models on MTEB Retrieval (i.e., BEIR) and code retrieval (CoIR) benchmarks is reported below. The average time required to encode and retrieve per query is also reported.

| Model | Parameters (M) | Embedding Size | MTEB-v1 Retrieval (15) | CoIR (10) | MLDR (En) | Retrieval Time (seconds/query) |
|------------------------------------|:--------------:|:--------------:|:---------------------:|:---------:|:---------:|:------------------------------:|
| granite-embedding-small-english-r2 | 47 | 384 | 50.8 | 53.8 | 39.8 | TBD |
| granite-embedding-english-r2 | 149 | 768 | 53.0 | 55.3 | 40.7 | TBD |
| granite-embedding-30m-english | 30 | 384 | 49.1 | 47.0 | 32.6 | TBD |
| granite-embedding-125m-english | 125 | 768 | 52.3 | 50.3 | 35.0 | TBD |
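
The MTEB-v1 Retrieval column aggregates the 15 BEIR tasks. As a rough sketch of how a single task from that suite can be re-run with the open-source `mteb` package: the task choice and output folder below are illustrative, the exact API may vary across `mteb` releases, and scores may differ from the table depending on the evaluation setup.

```python
import mteb
from sentence_transformers import SentenceTransformer

# assumed workflow for recent mteb releases; API details may differ by version
model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")

tasks = mteb.get_tasks(tasks=["NFCorpus"])  # one of the 15 BEIR retrieval tasks
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/granite-embedding-english-r2")
```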

### Model Architecture and Key Features

The latest Granite Embedding r2 release introduces two English embedding models, both based on the ModernBERT architecture:
- _granite-embedding-english-r2_ (**149M** parameters): with an output embedding size of _768_, replacing _granite-embedding-125m-english_.
- _granite-embedding-small-english-r2_ (**47M** parameters): a _first-of-its-kind_ reduced-size model, with fewer layers and a smaller output embedding size (_384_), replacing _granite-embedding-30m-english_.

The following table shows the structure of the two models:

| Model | granite-embedding-small-english-r2 | granite-embedding-english-r2 |
| :--------- | :-------:| :-------:|
| Embedding size | 384 | 768 |
| Number of layers | 12 | 22 |
| Number of attention heads | 12 | 12 |
| Intermediate size | 1536 | 1152 |
| Activation Function | GeGLU | GeGLU |
| Vocabulary Size | 50368 | 50368 |
| Max. Sequence Length | 8192 | 8192 |
| # Parameters | 47M | 149M |
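
These architecture details can be cross-checked against the configuration published with the checkpoint. The sketch below assumes the standard ModernBERT configuration field names exposed through `transformers`; the field names could differ if the config deviates from that layout.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ibm-granite/granite-embedding-english-r2")

# field names follow the ModernBERT configuration in transformers
print(config.hidden_size)              # embedding size
print(config.num_hidden_layers)        # number of layers
print(config.num_attention_heads)      # number of attention heads
print(config.intermediate_size)        # feed-forward intermediate size
print(config.max_position_embeddings)  # maximum sequence length
print(config.vocab_size)               # vocabulary size
```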

### Training and Optimization

The r2 models incorporate key enhancements from the ModernBERT architecture, including:
- Alternating attention lengths to accelerate processing
- Rotary position embeddings for extended sequence length
- A newly trained tokenizer optimized with code and text data
- Flash Attention 2.0 for improved efficiency (a loading sketch follows below)
- Streamlined parameters, eliminating unnecessary bias terms
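
As noted in the list above, the model can be loaded with Flash Attention enabled through `transformers`. This is a sketch under the assumption that the `flash-attn` package is installed and a compatible GPU is available; otherwise the `attn_implementation` argument (and the move to CUDA) can simply be dropped to fall back to the default attention backend.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "ibm-granite/granite-embedding-english-r2"

# requires the flash-attn package and a supported GPU; drop attn_implementation
# and the .to("cuda") calls to use the default attention backend on CPU
model = AutoModel.from_pretrained(
    model_path,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

inputs = tokenizer(["summit define"], return_tensors="pt").to("cuda")
with torch.no_grad():
    embedding = model(**inputs)[0][:, 0]  # CLS pooling, as in the usage examples
print(embedding.shape)
```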

## Data Collection
Granite embedding models are trained using data from four key sources:
1. Unsupervised title-body paired data scraped from the web
2. Publicly available paired data with permissive, enterprise-friendly licenses
3. IBM-internal paired data targeting specific technical domains
4. IBM-generated synthetic data

Notably, we _do not use_ the popular MS-MARCO retrieval dataset in our training corpus due to its non-commercial license (many open-source models use this dataset due to its high quality).

The underlying encoder models are trained using GneissWeb, an IBM-curated dataset composed exclusively of open, commercial-friendly sources.

For governance, all our data undergoes a data clearance process subject to technical, business, and governance review. This comprehensive process captures critical information about the data, including but not limited to content description, ownership, intended use, data classification, licensing information, usage restrictions, how the data will be acquired, as well as an assessment of sensitive information (i.e., personal information).

## Infrastructure
We train Granite Embedding Models using IBM's computing cluster, Cognitive Compute Cluster, which is outfitted with NVIDIA A100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.

## Ethical Considerations and Limitations
The data used to train the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-Embedding-English-R2 is finetuned on English data and has a context length of 8192 tokens (longer texts will be truncated to this size).

## Resources
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
- 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
- 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources

## Citation
```bibtex
@misc{awasthy2025graniteembeddingmodels,
      title={Granite Embedding Models},
      author={Parul Awasthy and Aashka Trivedi and Yulong Li and Mihaela Bornea and David Cox and Abraham Daniels and Martin Franz and Gabe Goodhart and Bhavani Iyer and Vishwajeet Kumar and Luis Lastras and Scott McCarley and Rudra Murthy and Vignesh P and Sara Rosenthal and Salim Roukos and Jaydeep Sen and Sukriti Sharma and Avirup Sil and Kate Soule and Arafat Sultan and Radu Florian},
      year={2025},
      eprint={2502.20204},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2502.20204},
}
```