rasyosef committed on
Commit 4a7d9bc · verified · 1 Parent(s): 3f0b7e9

Add new SparseEncoder model

Files changed (4)
  1. README.md +124 -158
  2. config.json +1 -1
  3. config_sentence_transformers.json +1 -1
  4. model.safetensors +1 -1
README.md CHANGED
@@ -1,47 +1,37 @@
  ---
- language:
- - en
- license: mit
  tags:
  - sentence-transformers
  - sparse-encoder
  - sparse
  - splade
  - generated_from_trainer
- - dataset_size:250000
  - loss:SpladeLoss
  - loss:SparseMarginMSELoss
  - loss:FlopsLoss
- base_model: prajjwal1/bert-mini
  widget:
- - text: what did marlo thomas play on
- - text: >-
- Unused vacation does not roll over. to next calendar year and is not paid
- out at termination. Please Note: This table applies to employees in
- positions with 100% FTE. The number of hours/days of vacation are pro-rated
- for FTEs between 75%. and 99%. For Example: If a non-exempt employee who is
- within their first five years of service has an FTE of 75%, then the number
- of hours they would. accrue each month would be 6 (8 x .75 = 6) and not 8.
- - text: >-
- To convert from miles to feet by hand, multiply miles by 5280. miles * 5280
- = feet. To convert from feet to miles by hand, divide feet by 5280. feet /
- 5280 = miles. An automated version of this calculator can be found here:
- - text: >-
- All you have to do is click on the button directly below and follow the
- instructions. Click he relevant payment button below for £55 payment.
- --------------------------. Star Attuned Crystals for Activation. Another
- way to receive star energies and activations is to buy the attuned crystals
- that I provide.These carry the energies of specific stars or star beings and
- guides. By holding these you can feel the energies of the star or star being
- and this brings healing, activation, spiritual growth and sometimes
- communication.he highest form of star energy work / activation is to receive
- a Star Attunement. Star Attunements carry the power of stars and evolved
- star beings. They are off the scale and profoundly beautiful and spiritual.
- - text: >-
- Fermentation is a metabolic pathway that produce ATP molecules under
- anaerobic conditions (only undergoes glycolysis), NAD+ is used directly in
- glycolysis to form ATP molecules, which is not as efficient as cellular
- respiration because only 2ATP molecules are formed during the glycolysis.
  pipeline_tag: feature-extraction
  library_name: sentence-transformers
  metrics:
@@ -65,7 +55,7 @@ metrics:
  - corpus_active_dims
  - corpus_sparsity_ratio
  model-index:
- - name: SPLADE-BERT-Mini-Distil
  results:
  - task:
  type: sparse-information-retrieval
@@ -75,86 +65,94 @@ model-index:
  type: unknown
  metrics:
  - type: dot_accuracy@1
- value: 0.4828
  name: Dot Accuracy@1
  - type: dot_accuracy@3
- value: 0.8052
  name: Dot Accuracy@3
  - type: dot_accuracy@5
- value: 0.9046
  name: Dot Accuracy@5
  - type: dot_accuracy@10
- value: 0.9666
  name: Dot Accuracy@10
  - type: dot_precision@1
- value: 0.4828
  name: Dot Precision@1
  - type: dot_precision@3
- value: 0.27566666666666667
  name: Dot Precision@3
  - type: dot_precision@5
- value: 0.18787999999999996
  name: Dot Precision@5
  - type: dot_precision@10
- value: 0.10156
  name: Dot Precision@10
  - type: dot_recall@1
- value: 0.4673
  name: Dot Recall@1
  - type: dot_recall@3
- value: 0.792
  name: Dot Recall@3
  - type: dot_recall@5
- value: 0.8949
  name: Dot Recall@5
  - type: dot_recall@10
- value: 0.9624166666666668
  name: Dot Recall@10
  - type: dot_ndcg@10
- value: 0.7302009825334612
  name: Dot Ndcg@10
  - type: dot_mrr@10
- value: 0.6579904761904781
  name: Dot Mrr@10
  - type: dot_map@100
- value: 0.6534502206938125
  name: Dot Map@100
  - type: query_active_dims
- value: 19.52400016784668
  name: Query Active Dims
  - type: query_sparsity_ratio
- value: 0.9993603302480883
  name: Query Sparsity Ratio
  - type: corpus_active_dims
- value: 113.4705113862854
  name: Corpus Active Dims
  - type: corpus_sparsity_ratio
- value: 0.9962823369573983
  name: Corpus Sparsity Ratio
- datasets:
- - microsoft/ms_marco
  ---
 
- # SPLADE-BERT-Mini-Distil
 
- This is a SPLADE sparse retrieval model based on BERT-Mini (11M) that was trained by distilling a Cross-Encoder on the MSMARCO dataset. The cross-encoder used was [ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2).
 
- This mini SPLADE model is `6x` smaller than Naver's official `splade-v3-distilbert` while having `85%` of it's performance on the MSMARCO benchmark. This model is small enough to be used without a GPU on a dataset of a few thousand documents.
 
- - `Collection:` https://huggingface.co/collections/rasyosef/splade-tiny-msmarco-687c548c0691d95babf65b70
- - `Distillation Dataset:` https://huggingface.co/datasets/yosefw/msmarco-train-distil-v2
- - `Code:` https://github.com/rasyosef/splade-tiny-msmarco
 
- ## Performance
 
- The splade models were evaluated on 55 thousand queries and 8 million documents from the [MSMARCO](https://huggingface.co/datasets/microsoft/ms_marco) dataset.
 
- ||Size (# Params)|MRR@10 (MS MARCO dev)|
- |:---|:----|:-------------------|
- |`BM25`|-|18.6|
- |`rasyosef/splade-tiny`|4.4M|30.8|
- |`rasyosef/splade-mini`|11.2M|32.8|
- |`naver/splade-v3-distilbert`|67.0M|38.7|
 
  ## Usage
 
@@ -171,15 +169,15 @@ Then you can load this model and run inference.
  from sentence_transformers import SparseEncoder
 
  # Download from the 🤗 Hub
- model = SparseEncoder("rasyosef/splade-mini")
  # Run inference
  queries = [
- "definition of fermentation in the lab",
  ]
  documents = [
- 'Fermentation is a metabolic pathway that produce ATP molecules under anaerobic conditions (only undergoes glycolysis), NAD+ is used directly in glycolysis to form ATP molecules, which is not as efficient as cellular respiration because only 2ATP molecules are formed during the glycolysis.',
- 'Essay on Yeast Fermentation ... Yeast Fermentation Lab Report The purpose of this experiment was to observe the process in which cells must partake in a respiration process called anaerobic fermentation and as the name suggests, oxygen is not required.',
- '\ufeffYeast Fermentation Lab Report The purpose of this experiment was to observe the process in which cells must partake in a respiration process called anaerobic fermentation and as the name suggests, oxygen is not required.',
  ]
  query_embeddings = model.encode_query(queries)
  document_embeddings = model.encode_document(documents)
@@ -189,7 +187,7 @@ print(query_embeddings.shape, document_embeddings.shape)
  # Get the similarity scores for the embeddings
  similarities = model.similarity(query_embeddings, document_embeddings)
  print(similarities)
- # tensor([[20.0220, 17.1372, 15.9159]])
  ```
 
  <!--
@@ -216,36 +214,6 @@ You can finetune this model on your own dataset.
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
  -->
 
- ## Model Details
-
- ### Model Description
- - **Model Type:** SPLADE Sparse Encoder
- - **Base model:** [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini) <!-- at revision 5e123abc2480f0c4b4cac186d3b3f09299c258fc -->
- - **Maximum Sequence Length:** 512 tokens
- - **Output Dimensionality:** 30522 dimensions
- - **Similarity Function:** Dot Product
- <!-- - **Training Dataset:** Unknown -->
- - **Language:** en
- - **License:** mit
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
-
- ### Full Model Architecture
-
- ```
- SparseEncoder(
- (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
- (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
- )
- ```
-
- ## More
- <details><summary>Click to expand</summary>
  ## Evaluation
 
  ### Metrics
@@ -256,25 +224,25 @@ SparseEncoder(
 
  | Metric | Value |
  |:----------------------|:-----------|
- | dot_accuracy@1 | 0.4828 |
- | dot_accuracy@3 | 0.8052 |
- | dot_accuracy@5 | 0.9046 |
- | dot_accuracy@10 | 0.9666 |
- | dot_precision@1 | 0.4828 |
- | dot_precision@3 | 0.2757 |
- | dot_precision@5 | 0.1879 |
- | dot_precision@10 | 0.1016 |
- | dot_recall@1 | 0.4673 |
- | dot_recall@3 | 0.792 |
- | dot_recall@5 | 0.8949 |
- | dot_recall@10 | 0.9624 |
- | **dot_ndcg@10** | **0.7302** |
- | dot_mrr@10 | 0.658 |
- | dot_map@100 | 0.6535 |
- | query_active_dims | 19.524 |
  | query_sparsity_ratio | 0.9994 |
- | corpus_active_dims | 113.4705 |
- | corpus_sparsity_ratio | 0.9963 |
 
  <!--
  ## Bias, Risks and Limitations
@@ -294,25 +262,25 @@ SparseEncoder(
 
  #### Unnamed Dataset
 
- * Size: 250,000 training samples
- * Columns: <code>query</code>, <code>positive</code>, <code>negative_1</code>, <code>negative_2</code>, <code>negative_3</code>, <code>negative_4</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
- | | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | label |
- |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------|
- | type | string | string | string | string | string | string | list |
- | details | <ul><li>min: 4 tokens</li><li>mean: 8.87 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 81.23 tokens</li><li>max: 259 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 79.21 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 77.89 tokens</li><li>max: 207 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 76.38 tokens</li><li>max: 271 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 75.46 tokens</li><li>max: 214 tokens</li></ul> | <ul><li>size: 4 elements</li></ul> |
  * Samples:
- | query | positive | negative_1 | negative_2 | negative_3 | negative_4 | label |
- |:------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------|
- | <code>heart specialists in ridgeland ms</code> | <code>Dr. George Reynolds Jr, MD is a cardiology specialist in Ridgeland, MS and has been practicing for 35 years. He graduated from Vanderbilt University School Of Medicine in 1977 and specializes in cardiology and internal medicine.</code> | <code>Dr. James Kramer is a Internist in Ridgeland, MS. Find Dr. Kramer's phone number, address and more.</code> | <code>Dr. James Kramer is an internist in Ridgeland, Mississippi. He received his medical degree from Loma Linda University School of Medicine and has been in practice for more than 20 years. Dr. James Kramer's Details</code> | <code>Chronic Pulmonary Heart Diseases (incl. Pulmonary Hypertension) Coarctation of the Aorta; Congenital Aortic Valve Disorders; Congenital Heart Defects; Congenital Heart Disease; Congestive Heart Failure; Coronary Artery Disease (CAD) Endocarditis; Heart Attack (Acute Myocardial Infarction) Heart Disease; Heart Murmur; Heart Palpitations; Hyperlipidemia; Hypertension</code> | <code>A growing shortage of primary care doctors means you might have to look harder for ongoing care. How to Read an OTC Medication Label Purvi Parikh, M.D. | Feb. 12, 2018</code> | <code>[6.058592796325684, 6.587987422943115, 19.88274383544922, 20.211898803710938]</code> |
- | <code>does baytril otic require a prescription</code> | <code>Baytril Otic Ear Drops-Enrofloxacin/Silver Sulfadiazine-Prices & Information. A prescription is required for this item. A prescription is required for this item. Brand medication is not available at this time.</code> | <code>RX required for this item. Click here for our full Prescription Policy and Form. Baytril Otic (enrofloxacin/silver sulfadiazine) Emulsion from Bayer is the first fluoroquinolone approved by the Food and Drug Administration for the topical treatment of canine otitis externa.</code> | <code>Product Details. Baytril Otic is a highly effective treatment prescribed by many veterinarians when your pet has an ear infection caused by susceptible bacteria or fungus. Baytril Otic is: a liquid emulsion that is used topically directly in the ear or on the skin in order to treat susceptible bacterial and yeast infections.</code> | <code>Baytril for dogs is an antibiotic often prescribed for bacterial infections, particularly those involving the ears. Ear infections are rare in many animals, but quite common in dogs. This is particularly true for dogs with long droopy ears, where it will stay very warm and moist.</code> | <code>Administer 5-10 Baytril ear drops per treatment in dogs 35 lbs or less and 10-15 drops per treatment in dogs more than 35 lbs.</code> | <code>[1.0, 3.640146493911743, 6.450072288513184, 11.96937084197998]</code> |
- | <code>what is on a gyro</code> | <code>Report Abuse. Gyros or gyro (giros) (pronounced /ˈjɪəroʊ/ or /ˈdʒaɪroʊ/, Greek: γύρος turn) is a Greek dish consisting of meat (typically lamb and/or beef), tomato, onion, and tzatziki sauce, and is served with pita bread. Chicken and pork meat can be used too.</code> | <code>A gyroscope (from Ancient Greek γῦρος gûros, circle and σκοπέω skopéō, to look) is a spinning wheel or disc in which the axis of rotation is free to assume any orientation by itself. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum.</code> | <code>Diagram of a gyro wheel. Reaction arrows about the output axis (blue) correspond to forces applied about the input axis (green), and vice versa. A gyroscope is a wheel mounted in two or three gimbals, which are a pivoted supports that allow the rotation of the wheel about a single axis.</code> | <code>A fair number of our users are unsure of how to pronounce gyro. This isn't surprising, since there are two different gyros and they have two different pronunciations. The earlier gyro is the one that is a shortened form of gyrocompass or gyroscope, and it has a pronunciation that conforms to one's expectations: /JEYE-roh/.</code> | <code>Vibration Gyro Sensors. Vibration gyro sensors sense angular velocity from the Coriolis force applied to a vibrating element. For this reason, the accuracy with which angular velocity is measured differs significantly depending on element material and structural differences.</code> | <code>[2.1750364303588867, 2.634796142578125, 4.30520486831665, 6.382436752319336]</code> |
  * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
  "loss": "SparseMarginMSELoss",
- "document_regularizer_weight": 0.3,
- "query_regularizer_weight": 0.5
  }
  ```
 
@@ -320,15 +288,16 @@ SparseEncoder(
  #### Non-Default Hyperparameters
 
  - `eval_strategy`: epoch
- - `per_device_train_batch_size`: 48
- - `per_device_eval_batch_size`: 48
- - `learning_rate`: 8e-05
- - `num_train_epochs`: 6
  - `lr_scheduler_type`: cosine
- - `warmup_ratio`: 0.025
  - `fp16`: True
  - `load_best_model_at_end`: True
  - `optim`: adamw_torch_fused
 
  #### All Hyperparameters
  <details><summary>Click to expand</summary>
@@ -337,24 +306,24 @@ SparseEncoder(
  - `do_predict`: False
  - `eval_strategy`: epoch
  - `prediction_loss_only`: True
- - `per_device_train_batch_size`: 48
- - `per_device_eval_batch_size`: 48
  - `per_gpu_train_batch_size`: None
  - `per_gpu_eval_batch_size`: None
  - `gradient_accumulation_steps`: 1
  - `eval_accumulation_steps`: None
  - `torch_empty_cache_steps`: None
- - `learning_rate`: 8e-05
  - `weight_decay`: 0.0
  - `adam_beta1`: 0.9
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1.0
- - `num_train_epochs`: 6
  - `max_steps`: -1
  - `lr_scheduler_type`: cosine
  - `lr_scheduler_kwargs`: {}
- - `warmup_ratio`: 0.025
  - `warmup_steps`: 0
  - `log_level`: passive
  - `log_level_replica`: warning
@@ -411,7 +380,7 @@ SparseEncoder(
  - `dataloader_persistent_workers`: False
  - `skip_memory_metrics`: True
  - `use_legacy_prediction_loop`: False
- - `push_to_hub`: False
  - `resume_from_checkpoint`: None
  - `hub_model_id`: None
  - `hub_strategy`: every_save
@@ -454,21 +423,19 @@ SparseEncoder(
  </details>
 
  ### Training Logs
- | Epoch | Step | Training Loss | dot_ndcg@10 |
- |:-------:|:---------:|:-------------:|:-----------:|
- | 1.0 | 5209 | 30541.8683 | 0.6969 |
- | 2.0 | 10418 | 13.3966 | 0.7167 |
- | 3.0 | 15627 | 11.6531 | 0.7262 |
- | 4.0 | 20836 | 9.9781 | 0.7280 |
- | 5.0 | 26045 | 8.881 | 0.7289 |
- | **6.0** | **31254** | **8.3454** | **0.7302** |
 
  * The bold row denotes the saved checkpoint.
 
  ### Framework Versions
  - Python: 3.11.13
  - Sentence Transformers: 5.0.0
- - Transformers: 4.53.2
  - PyTorch: 2.6.0+cu124
  - Accelerate: 1.8.1
  - Datasets: 4.0.0
@@ -542,5 +509,4 @@ SparseEncoder(
  ## Model Card Contact
 
  *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
- </details>
 
  ---
  tags:
  - sentence-transformers
  - sparse-encoder
  - sparse
  - splade
  - generated_from_trainer
+ - dataset_size:1350000
  - loss:SpladeLoss
  - loss:SparseMarginMSELoss
  - loss:FlopsLoss
+ base_model: yosefw/SPLADE-BERT-Mini-BS256
  widget:
+ - text: Coinsurance is a health care cost sharing between you and your insurance company.
+ The cost sharing ranges from 80/20 to even 50/50. For example, if your coinsurance
+ is 80/20, that means that your insurer covers 80% of annual medical expenses and
+ you pay the remaining 20%. The cost sharing stops when medical expenses reach
+ your out-of-pocket maximum, which usually is between $1,000 and $5,000.
+ - text: The Definition of Success. 1 In 1806, the definition of Success in the Webster
+ dictionary was to be fortunate, happy, kind and prosperous. In 2013 the definition
+ of success is the attainment of wealth, fame and power. 2 The purpose of forming
+ a company is not to obtain substantial wealth.
+ - text: 'It wouldn''t be completely accurate to say 10 syllables, because in English
+ Sonnet writing, they are written in iambic pentameter, which is ten syllables,
+ but it''s not just any syllables, they have to be in rhythm.da-DA-da-DA-da-DA-da-DA-da-DA.
+ And the rhymes are ABAB/CDCD/EFEF/GG for each stanza.hat makes a sonnet a sonnet
+ is the rhyme scheme and the 10 syllable lines. Check out this site and it may
+ help you: http://www.elfwood.com/farp/thewriting/2... Sptfyr · 7 years ago. Thumbs
+ up.'
+ - text: Dragon horn. A dragon horn is a sorcerous horn that is used to control dragons.
+ - text: Social Sciences. Background research refers to accessing the collection of
+ previously published and unpublished information about a site, region, or particular
+ topic of interest and it is the first step of all good archaeological investigations,
+ as well as that of all writers of any kind of research paper.
 
  pipeline_tag: feature-extraction
  library_name: sentence-transformers
  metrics:
 
  - corpus_active_dims
  - corpus_sparsity_ratio
  model-index:
+ - name: SPLADE Sparse Encoder
  results:
  - task:
  type: sparse-information-retrieval
 
  type: unknown
  metrics:
  - type: dot_accuracy@1
+ value: 0.4976
  name: Dot Accuracy@1
  - type: dot_accuracy@3
+ value: 0.8154
  name: Dot Accuracy@3
  - type: dot_accuracy@5
+ value: 0.9122
  name: Dot Accuracy@5
  - type: dot_accuracy@10
+ value: 0.9684
  name: Dot Accuracy@10
  - type: dot_precision@1
+ value: 0.4976
  name: Dot Precision@1
  - type: dot_precision@3
+ value: 0.2791333333333333
  name: Dot Precision@3
  - type: dot_precision@5
+ value: 0.18991999999999998
  name: Dot Precision@5
  - type: dot_precision@10
+ value: 0.10178
  name: Dot Precision@10
  - type: dot_recall@1
+ value: 0.4821
  name: Dot Recall@1
  - type: dot_recall@3
+ value: 0.80205
  name: Dot Recall@3
  - type: dot_recall@5
+ value: 0.9034833333333334
  name: Dot Recall@5
  - type: dot_recall@10
+ value: 0.9639
  name: Dot Recall@10
  - type: dot_ndcg@10
+ value: 0.739184491374207
  name: Dot Ndcg@10
  - type: dot_mrr@10
+ value: 0.6690194444444474
  name: Dot Mrr@10
  - type: dot_map@100
+ value: 0.6646610700105045
  name: Dot Map@100
  - type: query_active_dims
+ value: 16.810400009155273
  name: Query Active Dims
  - type: query_sparsity_ratio
+ value: 0.9994492366159113
  name: Query Sparsity Ratio
  - type: corpus_active_dims
+ value: 100.62213478240855
  name: Corpus Active Dims
  - type: corpus_sparsity_ratio
+ value: 0.996703291567315
  name: Corpus Sparsity Ratio
 
  ---
 
+ # SPLADE Sparse Encoder
 
+ This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [yosefw/SPLADE-BERT-Mini-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Mini-BS256) using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
 
+ ## Model Details
 
+ ### Model Description
+ - **Model Type:** SPLADE Sparse Encoder
+ - **Base model:** [yosefw/SPLADE-BERT-Mini-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Mini-BS256) <!-- at revision 986bc55b61d9f0559f86423fb5807b9f4a3b7094 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 30522 dimensions
+ - **Similarity Function:** Dot Product
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
 
+ ### Model Sources
 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
 
+ ### Full Model Architecture
 
+ ```
+ SparseEncoder(
+ (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
+ (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+ )
+ ```
 
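The two modules can also be assembled explicitly. A minimal sketch, assuming the `MLMTransformer` and `SpladePooling` modules exposed by `sentence_transformers.sparse_encoder.models` in sentence-transformers 5.x:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling

# MLMTransformer runs the BertForMaskedLM head and yields per-token logits
# over the 30522-entry vocabulary; SpladePooling turns them into a single
# sparse vector by applying the SPLADE activation (log-saturated ReLU) and
# max-pooling over the sequence.
mlm = MLMTransformer("yosefw/SPLADE-BERT-Mini-BS256", max_seq_length=512)
pooling = SpladePooling(pooling_strategy="max")

model = SparseEncoder(modules=[mlm, pooling])
```
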
  ## Usage
 
  from sentence_transformers import SparseEncoder
 
  # Download from the 🤗 Hub
+ model = SparseEncoder("yosefw/SPLADE-BERT-Mini-BS256-distil-v2")
  # Run inference
  queries = [
+ "research background definition",
  ]
  documents = [
+ 'Social Sciences. Background research refers to accessing the collection of previously published and unpublished information about a site, region, or particular topic of interest and it is the first step of all good archaeological investigations, as well as that of all writers of any kind of research paper.',
+ 'This Research Paper Background and Problem Definition and other 62,000+ term papers, college essay examples and free essays are available now on ReviewEssays.com. Autor: dharath1 July 22, 2014 Research Paper 442 Words (2 Pages) 448 Views.',
+ 'About the Month of February. February is the 2nd month of the year and has 28 or 29 days. The 29th day is every 4 years during leap year. Season (Northern Hemisphere): Winter. Holidays. Chinese New Year. National Freedom Day. Groundhog Day.',
  ]
  query_embeddings = model.encode_query(queries)
  document_embeddings = model.encode_document(documents)
 
  # Get the similarity scores for the embeddings
  similarities = model.similarity(query_embeddings, document_embeddings)
  print(similarities)
+ # tensor([[22.7011, 11.1635, 0.0000]])
  ```
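
Since each dimension corresponds to a vocabulary token, the embeddings can be inspected directly. A minimal sketch, assuming the `decode` helper that `SparseEncoder` provides in sentence-transformers 5.x, continuing from the snippet above:

```python
# Map the non-zero dimensions of the query embedding back to vocabulary
# tokens, sorted by weight, to see which terms the model expanded to.
for token, weight in model.decode(query_embeddings[0], top_k=10):
    print(f"{token}\t{weight:.2f}")
```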
 
  <!--
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
  -->
 
  ## Evaluation
 
  ### Metrics
 
  | Metric | Value |
  |:----------------------|:-----------|
+ | dot_accuracy@1 | 0.4976 |
+ | dot_accuracy@3 | 0.8154 |
+ | dot_accuracy@5 | 0.9122 |
+ | dot_accuracy@10 | 0.9684 |
+ | dot_precision@1 | 0.4976 |
+ | dot_precision@3 | 0.2791 |
+ | dot_precision@5 | 0.1899 |
+ | dot_precision@10 | 0.1018 |
+ | dot_recall@1 | 0.4821 |
+ | dot_recall@3 | 0.8021 |
+ | dot_recall@5 | 0.9035 |
+ | dot_recall@10 | 0.9639 |
+ | **dot_ndcg@10** | **0.7392** |
+ | dot_mrr@10 | 0.669 |
+ | dot_map@100 | 0.6647 |
+ | query_active_dims | 16.8104 |
  | query_sparsity_ratio | 0.9994 |
+ | corpus_active_dims | 100.6221 |
+ | corpus_sparsity_ratio | 0.9967 |
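
The sparsity ratios follow directly from the active-dimension counts over the 30522-token vocabulary; a quick check of the table values:

```python
# sparsity_ratio = 1 - active_dims / vocab_size
vocab_size = 30522
print(1 - 16.8104 / vocab_size)   # ≈ 0.9994 (query_sparsity_ratio)
print(1 - 100.6221 / vocab_size)  # ≈ 0.9967 (corpus_sparsity_ratio)
```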
 
  <!--
  ## Bias, Risks and Limitations
 
  #### Unnamed Dataset
 
+ * Size: 1,350,000 training samples
+ * Columns: <code>query</code>, <code>positive</code>, <code>negative</code>, and <code>label</code>
  * Approximate statistics based on the first 1000 samples:
+ | | query | positive | negative | label |
+ |:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------|
+ | type | string | string | string | list |
+ | details | <ul><li>min: 4 tokens</li><li>mean: 8.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 79.36 tokens</li><li>max: 215 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 78.39 tokens</li><li>max: 233 tokens</li></ul> | <ul><li>size: 1 elements</li></ul> |
  * Samples:
+ | query | positive | negative | label |
+ |:--------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
+ | <code>what causes protruding stomach</code> | <code>Some of the less common causes of Protruding abdomen may include: 1 Constipation. 2 Chronic constipation. 3 Poor muscle tone. Poor muscle tone after 1 childbirth. Lactose intolerance. Food 1 allergies. Food intolerances. 2 Pregnancy. 3 Hernia. Malabsorption. Irritable bowel 1 syndrome. Colonic bacterial fermentation. 2 Gastroparesis. Diabetic gastroparesis.</code> | <code>Protruding abdomen: Introduction. Protruding abdomen: abdominal distension. See detailed information below for a list of 56 causes of Protruding abdomen, Symptom Checker, including diseases and drug side effect causes. » Review Causes of Protruding abdomen: Causes | Symptom Checker ». Home Diagnostic Testing and Protruding abdomen.</code> | <code>[3.2738194465637207]</code> |
+ | <code>what is bialys</code> | <code>The bialy is not a sub-type of bagel, it’s a thing all to itself. Round with a depressed middle filled with cooked onions and sometimes poppy seeds, it is simply baked (bagels are boiled then baked). Purists prefer them straight up, preferably no more than five hours after being pulled from the oven. Extinction.Like the Lowland gorilla, the cassette tape and Madagascar forest coconuts, the bialy is rapidly becoming extinct. Sure, if you live in New York (where the Jewish tenements on the Lower East Side once overflowed with Eastern European foodstuffs that are now hard to locate), you have a few decent options.he bialy is not a sub-type of bagel, it’s a thing all to itself. Round with a depressed middle filled with cooked onions and sometimes poppy seeds, it is simply baked (bagels are boiled then baked). Purists prefer them straight up, preferably no more than five hours after being pulled from the oven. Extinction.</code> | <code>This homemade bialy recipe is even easier to make than a bagel because it doesn’t require boiling prior to baking.his homemade bialy recipe is even easier to make than a bagel because it doesn’t require boiling prior to baking.</code> | <code>[5.632390975952148]</code> |
+ | <code>dhow definition</code> | <code>Definition of dhow. : an Arab lateen-rigged boat usually having a long overhang forward, a high poop, and a low waist.</code> | <code>Freebase(0.00 / 0 votes)Rate this definition: Dhow. Dhow is the generic name of a number of traditional sailing vessels with one or more masts with lateen sails used in the Red Sea and Indian Ocean region. Historians are divided as to whether the dhow was invented by Arabs or Indians.</code> | <code>[0.8292264938354492]</code> |
  * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
  ```json
  {
  "loss": "SparseMarginMSELoss",
+ "document_regularizer_weight": 0.2,
+ "query_regularizer_weight": 0.3
  }
  ```
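
In code, this configuration corresponds to wrapping the distillation loss in SPLADE's sparsity regularizer. A minimal sketch, assuming the loss classes under `sentence_transformers.sparse_encoder.losses`:

```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

model = SparseEncoder("yosefw/SPLADE-BERT-Mini-BS256")

# SpladeLoss adds FLOPS-style sparsity regularization on top of the wrapped
# Margin-MSE distillation loss; the two weights control how strongly document
# and query vectors are pushed toward sparsity.
loss = SpladeLoss(
    model=model,
    loss=SparseMarginMSELoss(model),
    document_regularizer_weight=0.2,
    query_regularizer_weight=0.3,
)
```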
 
  #### Non-Default Hyperparameters
 
  - `eval_strategy`: epoch
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `learning_rate`: 6e-05
+ - `num_train_epochs`: 4
  - `lr_scheduler_type`: cosine
+ - `warmup_ratio`: 0.05
  - `fp16`: True
  - `load_best_model_at_end`: True
  - `optim`: adamw_torch_fused
+ - `push_to_hub`: True
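
These settings map one-to-one onto the trainer's argument object. A minimal sketch, assuming `SparseEncoderTrainingArguments` from sentence-transformers 5.x; the `output_dir` is hypothetical, and `save_strategy` is added because `load_best_model_at_end` requires it to match `eval_strategy`:

```python
from sentence_transformers.sparse_encoder import SparseEncoderTrainingArguments

args = SparseEncoderTrainingArguments(
    output_dir="splade-bert-mini-distil",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=6e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    push_to_hub=True,
)
```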
 
  #### All Hyperparameters
  <details><summary>Click to expand</summary>
 
  - `do_predict`: False
  - `eval_strategy`: epoch
  - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
  - `per_gpu_train_batch_size`: None
  - `per_gpu_eval_batch_size`: None
  - `gradient_accumulation_steps`: 1
  - `eval_accumulation_steps`: None
  - `torch_empty_cache_steps`: None
+ - `learning_rate`: 6e-05
  - `weight_decay`: 0.0
  - `adam_beta1`: 0.9
  - `adam_beta2`: 0.999
  - `adam_epsilon`: 1e-08
  - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 4
  - `max_steps`: -1
  - `lr_scheduler_type`: cosine
  - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.05
  - `warmup_steps`: 0
  - `log_level`: passive
  - `log_level_replica`: warning
 
  - `dataloader_persistent_workers`: False
  - `skip_memory_metrics`: True
  - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: True
  - `resume_from_checkpoint`: None
  - `hub_model_id`: None
  - `hub_strategy`: every_save
 
  </details>
 
  ### Training Logs
+ | Epoch | Step | Training Loss | dot_ndcg@10 |
+ |:-------:|:----------:|:-------------:|:-----------:|
+ | 1.0 | 42188 | 8.6242 | 0.7262 |
+ | 2.0 | 84376 | 7.0404 | 0.7362 |
+ | 3.0 | 126564 | 5.3661 | 0.7388 |
+ | **4.0** | **168752** | **4.4807** | **0.7392** |
 
  * The bold row denotes the saved checkpoint.
 
  ### Framework Versions
  - Python: 3.11.13
  - Sentence Transformers: 5.0.0
+ - Transformers: 4.53.3
  - PyTorch: 2.6.0+cu124
  - Accelerate: 1.8.1
  - Datasets: 4.0.0
 
  ## Model Card Contact
 
  *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
 
config.json CHANGED
@@ -17,7 +17,7 @@
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
- "transformers_version": "4.53.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
 
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
+ "transformers_version": "4.54.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
config_sentence_transformers.json CHANGED
@@ -2,7 +2,7 @@
  "model_type": "SparseEncoder",
  "__version__": {
  "sentence_transformers": "5.0.0",
- "transformers": "4.53.2",
  "pytorch": "2.6.0+cu124"
  },
  "prompts": {
 
  "model_type": "SparseEncoder",
  "__version__": {
  "sentence_transformers": "5.0.0",
+ "transformers": "4.54.0",
  "pytorch": "2.6.0+cu124"
  },
  "prompts": {
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:eca5ef6ed2e950b988214239e64f87b79f40b939a74ad47b6994ffa4b5de2c25
  size 44814856
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:4050564a649d96030d5ec42b38ae323f47c9454d87af057a42721bf892ed32a7
  size 44814856