Add new SparseEncoder model

- README.md: +124 −158
- config.json: +1 −1
- config_sentence_transformers.json: +1 −1
- model.safetensors: +1 −1

README.md CHANGED
@@ -1,47 +1,37 @@
 ---
-language:
-- en
-license: mit
 tags:
 - sentence-transformers
 - sparse-encoder
 - sparse
 - splade
 - generated_from_trainer
-- dataset_size:
+- dataset_size:1350000
 - loss:SpladeLoss
 - loss:SparseMarginMSELoss
 - loss:FlopsLoss
-base_model:
+base_model: yosefw/SPLADE-BERT-Mini-BS256
 widget:
-- text:
-    a Star Attunement. Star Attunements carry the power of stars and evolved
-    star beings. They are off the scale and profoundly beautiful and spiritual.
-- text: >-
-    Fermentation is a metabolic pathway that produce ATP molecules under
-    anaerobic conditions (only undergoes glycolysis), NAD+ is used directly in
-    glycolysis to form ATP molecules, which is not as efficient as cellular
-    respiration because only 2ATP molecules are formed during the glycolysis.
+- text: Coinsurance is a health care cost sharing between you and your insurance company.
+    The cost sharing ranges from 80/20 to even 50/50. For example, if your coinsurance
+    is 80/20, that means that your insurer covers 80% of annual medical expenses and
+    you pay the remaining 20%. The cost sharing stops when medical expenses reach
+    your out-of-pocket maximum, which usually is between $1,000 and $5,000.
+- text: The Definition of Success. 1 In 1806, the definition of Success in the Webster
+    dictionary was to be fortunate, happy, kind and prosperous. In 2013 the definition
+    of success is the attainment of wealth, fame and power. 2 The purpose of forming
+    a company is not to obtain substantial wealth.
+- text: 'It wouldn''t be completely accurate to say 10 syllables, because in English
+    Sonnet writing, they are written in iambic pentameter, which is ten syllables,
+    but it''s not just any syllables, they have to be in rhythm.da-DA-da-DA-da-DA-da-DA-da-DA.
+    And the rhymes are ABAB/CDCD/EFEF/GG for each stanza.hat makes a sonnet a sonnet
+    is the rhyme scheme and the 10 syllable lines. Check out this site and it may
+    help you: http://www.elfwood.com/farp/thewriting/2... Sptfyr · 7 years ago. Thumbs
+    up.'
+- text: Dragon horn. A dragon horn is a sorcerous horn that is used to control dragons.
+- text: Social Sciences. Background research refers to accessing the collection of
+    previously published and unpublished information about a site, region, or particular
+    topic of interest and it is the first step of all good archaeological investigations,
+    as well as that of all writers of any kind of research paper.
 pipeline_tag: feature-extraction
 library_name: sentence-transformers
 metrics:
@@ -65,7 +55,7 @@ metrics:
 - corpus_active_dims
 - corpus_sparsity_ratio
 model-index:
-- name: SPLADE
+- name: SPLADE Sparse Encoder
   results:
   - task:
       type: sparse-information-retrieval
@@ -75,86 +65,94 @@ model-index:
       type: unknown
     metrics:
     - type: dot_accuracy@1
-      value: 0.
+      value: 0.4976
      name: Dot Accuracy@1
     - type: dot_accuracy@3
-      value: 0.
+      value: 0.8154
      name: Dot Accuracy@3
     - type: dot_accuracy@5
-      value: 0.
+      value: 0.9122
      name: Dot Accuracy@5
     - type: dot_accuracy@10
-      value: 0.
+      value: 0.9684
      name: Dot Accuracy@10
     - type: dot_precision@1
-      value: 0.
+      value: 0.4976
      name: Dot Precision@1
     - type: dot_precision@3
-      value: 0.
+      value: 0.2791333333333333
      name: Dot Precision@3
     - type: dot_precision@5
-      value: 0.
+      value: 0.18991999999999998
      name: Dot Precision@5
     - type: dot_precision@10
-      value: 0.
+      value: 0.10178
      name: Dot Precision@10
     - type: dot_recall@1
-      value: 0.
+      value: 0.4821
      name: Dot Recall@1
     - type: dot_recall@3
-      value: 0.
+      value: 0.80205
      name: Dot Recall@3
     - type: dot_recall@5
-      value: 0.
+      value: 0.9034833333333334
      name: Dot Recall@5
     - type: dot_recall@10
-      value: 0.
+      value: 0.9639
      name: Dot Recall@10
     - type: dot_ndcg@10
-      value: 0.
+      value: 0.739184491374207
      name: Dot Ndcg@10
     - type: dot_mrr@10
-      value: 0.
+      value: 0.6690194444444474
      name: Dot Mrr@10
     - type: dot_map@100
-      value: 0.
+      value: 0.6646610700105045
      name: Dot Map@100
     - type: query_active_dims
-      value:
+      value: 16.810400009155273
      name: Query Active Dims
     - type: query_sparsity_ratio
-      value: 0.
+      value: 0.9994492366159113
      name: Query Sparsity Ratio
     - type: corpus_active_dims
-      value:
+      value: 100.62213478240855
      name: Corpus Active Dims
     - type: corpus_sparsity_ratio
-      value: 0.
+      value: 0.996703291567315
      name: Corpus Sparsity Ratio
-datasets:
-- microsoft/ms_marco
 ---
 
-# SPLADE
+# SPLADE Sparse Encoder
 
-This is a SPLADE
+This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [yosefw/SPLADE-BERT-Mini-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Mini-BS256) using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
-
-- `Distillation Dataset:` https://huggingface.co/datasets/yosefw/msmarco-train-distil-v2
-- `Code:` https://github.com/rasyosef/splade-tiny-msmarco
-
+## Model Details
 
+### Model Description
+- **Model Type:** SPLADE Sparse Encoder
+- **Base model:** [yosefw/SPLADE-BERT-Mini-BS256](https://huggingface.co/yosefw/SPLADE-BERT-Mini-BS256) <!-- at revision 986bc55b61d9f0559f86423fb5807b9f4a3b7094 -->
+- **Maximum Sequence Length:** 512 tokens
+- **Output Dimensionality:** 30522 dimensions
+- **Similarity Function:** Dot Product
+<!-- - **Training Dataset:** Unknown -->
+<!-- - **Language:** Unknown -->
+<!-- - **License:** Unknown -->
 
+### Model Sources
 
+- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
+- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
 
+### Full Model Architecture
 
+```
+SparseEncoder(
+  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
+  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+)
+```
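The two-module stack added above (a masked-language-model transformer followed by ReLU and max pooling over the vocabulary logits) can also be assembled by hand. A minimal sketch, assuming the sentence-transformers v5 import paths for `MLMTransformer` and `SpladePooling`; in practice, loading the published checkpoint as shown in the Usage section below is simpler:

```python
# Sketch: building the SparseEncoder stack shown above module by module.
# The import paths are assumptions based on sentence-transformers v5; verify against the docs.
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling

mlm = MLMTransformer("yosefw/SPLADE-BERT-Mini-BS256", max_seq_length=512)  # BertForMaskedLM backbone
pooling = SpladePooling(pooling_strategy="max")  # ReLU + max over token logits -> 30522-dim sparse vector

model = SparseEncoder(modules=[mlm, pooling])
```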
 
 ## Usage
 
@@ -171,15 +169,15 @@ Then you can load this model and run inference.
 from sentence_transformers import SparseEncoder
 
 # Download from the 🤗 Hub
-model = SparseEncoder("
+model = SparseEncoder("yosefw/SPLADE-BERT-Mini-BS256-distil-v2")
 # Run inference
 queries = [
-    "
+    "research background definition",
 ]
 documents = [
-    '
-    '
-    '
+    'Social Sciences. Background research refers to accessing the collection of previously published and unpublished information about a site, region, or particular topic of interest and it is the first step of all good archaeological investigations, as well as that of all writers of any kind of research paper.',
+    'This Research Paper Background and Problem Definition and other 62,000+ term papers, college essay examples and free essays are available now on ReviewEssays.com. Autor: dharath1 • July 22, 2014 • Research Paper • 442 Words (2 Pages) • 448 Views.',
+    'About the Month of February. February is the 2nd month of the year and has 28 or 29 days. The 29th day is every 4 years during leap year. Season (Northern Hemisphere): Winter. Holidays. Chinese New Year. National Freedom Day. Groundhog Day.',
 ]
 query_embeddings = model.encode_query(queries)
 document_embeddings = model.encode_document(documents)
@@ -189,7 +187,7 @@ print(query_embeddings.shape, document_embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(query_embeddings, document_embeddings)
 print(similarities)
-# tensor([[
+# tensor([[22.7011, 11.1635, 0.0000]])
 ```
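Since the embeddings live in the 30522-dimensional vocabulary space, each active dimension corresponds to a token. A short sketch of inspecting which tokens a query activates, continuing the snippet above and assuming the `decode` helper that sentence-transformers v5 documents for sparse encoders (check the docs for the exact signature and return format):

```python
# Sketch: map the sparse query embedding back to its highest-weighted tokens.
# Assumes SparseEncoder.decode returns (token, weight) pairs for a single embedding.
top_tokens = model.decode(query_embeddings[0], top_k=10)
for token, weight in top_tokens:
    print(f"{token}\t{weight:.2f}")
```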
 
 <!--
@@ -216,36 +214,6 @@ You can finetune this model on your own dataset.
 *List how the model may foreseeably be misused and address what users ought not to do with the model.*
 -->
 
-## Model Details
-
-### Model Description
-- **Model Type:** SPLADE Sparse Encoder
-- **Base model:** [prajjwal1/bert-mini](https://huggingface.co/prajjwal1/bert-mini) <!-- at revision 5e123abc2480f0c4b4cac186d3b3f09299c258fc -->
-- **Maximum Sequence Length:** 512 tokens
-- **Output Dimensionality:** 30522 dimensions
-- **Similarity Function:** Dot Product
-<!-- - **Training Dataset:** Unknown -->
-- **Language:** en
-- **License:** mit
-
-### Model Sources
-
-- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
-- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
-- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
-- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
-
-### Full Model Architecture
-
-```
-SparseEncoder(
-  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
-  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
-)
-```
-
-## More
-<details><summary>Click to expand</summary>
 ## Evaluation
 
 ### Metrics
@@ -256,25 +224,25 @@
 
 | Metric                | Value      |
 |:----------------------|:-----------|
-| dot_accuracy@1        | 0.
-| dot_accuracy@3        | 0.
-| dot_accuracy@5        | 0.
-| dot_accuracy@10       | 0.
-| dot_precision@1       | 0.
-| dot_precision@3       | 0.
-| dot_precision@5       | 0.
-| dot_precision@10      | 0.
-| dot_recall@1          | 0.
-| dot_recall@3          | 0.
-| dot_recall@5          | 0.
-| dot_recall@10         | 0.
-| **dot_ndcg@10**       | **0.
-| dot_mrr@10            | 0.
-| dot_map@100           | 0.
-| query_active_dims     |
+| dot_accuracy@1        | 0.4976     |
+| dot_accuracy@3        | 0.8154     |
+| dot_accuracy@5        | 0.9122     |
+| dot_accuracy@10       | 0.9684     |
+| dot_precision@1       | 0.4976     |
+| dot_precision@3       | 0.2791     |
+| dot_precision@5       | 0.1899     |
+| dot_precision@10      | 0.1018     |
+| dot_recall@1          | 0.4821     |
+| dot_recall@3          | 0.8021     |
+| dot_recall@5          | 0.9035     |
+| dot_recall@10         | 0.9639     |
+| **dot_ndcg@10**       | **0.7392** |
+| dot_mrr@10            | 0.669      |
+| dot_map@100           | 0.6647     |
+| query_active_dims     | 16.8104    |
 | query_sparsity_ratio  | 0.9994     |
-| corpus_active_dims    |
-| corpus_sparsity_ratio | 0.
+| corpus_active_dims    | 100.6221   |
+| corpus_sparsity_ratio | 0.9967     |
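The two sparsity ratios follow directly from the active-dimension counts: over a 30522-token vocabulary, the ratio is one minus the fraction of dimensions that are non-zero. A quick check with the values from the table:

```python
# Sanity check: sparsity_ratio = 1 - active_dims / vocab_size.
vocab_size = 30522
print(1 - 16.8104 / vocab_size)   # ~0.99945 -> query_sparsity_ratio (0.9994)
print(1 - 100.6221 / vocab_size)  # ~0.99670 -> corpus_sparsity_ratio (0.9967)
```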
 
 <!--
 ## Bias, Risks and Limitations
@@ -294,25 +262,25 @@
 
 #### Unnamed Dataset
 
-* Size:
-* Columns: <code>query</code>, <code>positive</code>, <code>
+* Size: 1,350,000 training samples
+* Columns: <code>query</code>, <code>positive</code>, <code>negative</code>, and <code>label</code>
 * Approximate statistics based on the first 1000 samples:
-  |         | query  | positive |
-  | type    | string | string   | string |
-  | details | <ul><li>min: 4 tokens</li><li>mean: 8.
+  |         | query  | positive | negative | label |
+  |:--------|:-------|:---------|:---------|:------|
+  | type    | string | string | string | list |
+  | details | <ul><li>min: 4 tokens</li><li>mean: 8.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 79.36 tokens</li><li>max: 215 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 78.39 tokens</li><li>max: 233 tokens</li></ul> | <ul><li>size: 1 elements</li></ul> |
 * Samples:
-  | query
-  | <code>
-  | <code>
-  | <code>
+  | query | positive | negative | label |
+  |:------|:---------|:---------|:------|
+  | <code>what causes protruding stomach</code> | <code>Some of the less common causes of Protruding abdomen may include: 1 Constipation. 2 Chronic constipation. 3 Poor muscle tone. Poor muscle tone after 1 childbirth. Lactose intolerance. Food 1 allergies. Food intolerances. 2 Pregnancy. 3 Hernia. Malabsorption. Irritable bowel 1 syndrome. Colonic bacterial fermentation. 2 Gastroparesis. Diabetic gastroparesis.</code> | <code>Protruding abdomen: Introduction. Protruding abdomen: abdominal distension. See detailed information below for a list of 56 causes of Protruding abdomen, Symptom Checker, including diseases and drug side effect causes. » Review Causes of Protruding abdomen: Causes | Symptom Checker ». Home Diagnostic Testing and Protruding abdomen.</code> | <code>[3.2738194465637207]</code> |
+  | <code>what is bialys</code> | <code>The bialy is not a sub-type of bagel, it’s a thing all to itself. Round with a depressed middle filled with cooked onions and sometimes poppy seeds, it is simply baked (bagels are boiled then baked). Purists prefer them straight up, preferably no more than five hours after being pulled from the oven. Extinction.Like the Lowland gorilla, the cassette tape and Madagascar forest coconuts, the bialy is rapidly becoming extinct. Sure, if you live in New York (where the Jewish tenements on the Lower East Side once overflowed with Eastern European foodstuffs that are now hard to locate), you have a few decent options.he bialy is not a sub-type of bagel, it’s a thing all to itself. Round with a depressed middle filled with cooked onions and sometimes poppy seeds, it is simply baked (bagels are boiled then baked). Purists prefer them straight up, preferably no more than five hours after being pulled from the oven. Extinction.</code> | <code>This homemade bialy recipe is even easier to make than a bagel because it doesn’t require boiling prior to baking.his homemade bialy recipe is even easier to make than a bagel because it doesn’t require boiling prior to baking.</code> | <code>[5.632390975952148]</code> |
+  | <code>dhow definition</code> | <code>Definition of dhow. : an Arab lateen-rigged boat usually having a long overhang forward, a high poop, and a low waist.</code> | <code>Freebase(0.00 / 0 votes)Rate this definition: Dhow. Dhow is the generic name of a number of traditional sailing vessels with one or more masts with lateen sails used in the Red Sea and Indian Ocean region. Historians are divided as to whether the dhow was invented by Arabs or Indians.</code> | <code>[0.8292264938354492]</code> |
 * Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
   ```json
   {
       "loss": "SparseMarginMSELoss",
-      "document_regularizer_weight": 0.
-      "query_regularizer_weight": 0.
+      "document_regularizer_weight": 0.2,
+      "query_regularizer_weight": 0.3
   }
   ```
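This configuration wraps the margin-MSE ranking loss (distilled from the `(query, positive, negative, margin-label)` rows above) in SPLADE's FLOPS regularization of the query and document vectors. A sketch against the sentence-transformers v5 loss API linked above; treat the exact import path and signature as assumptions:

```python
# Sketch: the SpladeLoss configuration above in sentence-transformers v5.
# SpladeLoss wraps SparseMarginMSELoss and adds FLOPS regularization;
# the regularizer weights are taken from the card.
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMarginMSELoss

model = SparseEncoder("yosefw/SPLADE-BERT-Mini-BS256")
loss = SpladeLoss(
    model,
    loss=SparseMarginMSELoss(model),
    document_regularizer_weight=0.2,
    query_regularizer_weight=0.3,
)
```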
 
@@ -320,15 +288,16 @@
 #### Non-Default Hyperparameters
 
 - `eval_strategy`: epoch
-- `per_device_train_batch_size`:
-- `per_device_eval_batch_size`:
-- `learning_rate`:
-- `num_train_epochs`:
+- `per_device_train_batch_size`: 32
+- `per_device_eval_batch_size`: 32
+- `learning_rate`: 6e-05
+- `num_train_epochs`: 4
 - `lr_scheduler_type`: cosine
-- `warmup_ratio`: 0.
+- `warmup_ratio`: 0.05
 - `fp16`: True
 - `load_best_model_at_end`: True
 - `optim`: adamw_torch_fused
+- `push_to_hub`: True
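For reference, these settings map onto the v5 training-arguments class. A sketch, assuming `SparseEncoderTrainingArguments` mirrors the usual `transformers`-style argument names (the output path is hypothetical):

```python
# Sketch: the non-default hyperparameters above as training arguments.
# Assumes sentence-transformers v5's SparseEncoderTrainingArguments.
from sentence_transformers.sparse_encoder import SparseEncoderTrainingArguments

args = SparseEncoderTrainingArguments(
    output_dir="output",  # hypothetical output path
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=6e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    push_to_hub=True,
)
```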
 
 #### All Hyperparameters
 <details><summary>Click to expand</summary>
@@ -337,24 +306,24 @@
 - `do_predict`: False
 - `eval_strategy`: epoch
 - `prediction_loss_only`: True
-- `per_device_train_batch_size`:
-- `per_device_eval_batch_size`:
+- `per_device_train_batch_size`: 32
+- `per_device_eval_batch_size`: 32
 - `per_gpu_train_batch_size`: None
 - `per_gpu_eval_batch_size`: None
 - `gradient_accumulation_steps`: 1
 - `eval_accumulation_steps`: None
 - `torch_empty_cache_steps`: None
-- `learning_rate`:
+- `learning_rate`: 6e-05
 - `weight_decay`: 0.0
 - `adam_beta1`: 0.9
 - `adam_beta2`: 0.999
 - `adam_epsilon`: 1e-08
 - `max_grad_norm`: 1.0
-- `num_train_epochs`:
+- `num_train_epochs`: 4
 - `max_steps`: -1
 - `lr_scheduler_type`: cosine
 - `lr_scheduler_kwargs`: {}
-- `warmup_ratio`: 0.
+- `warmup_ratio`: 0.05
 - `warmup_steps`: 0
 - `log_level`: passive
 - `log_level_replica`: warning
@@ -411,7 +380,7 @@
 - `dataloader_persistent_workers`: False
 - `skip_memory_metrics`: True
 - `use_legacy_prediction_loop`: False
-- `push_to_hub`:
+- `push_to_hub`: True
 - `resume_from_checkpoint`: None
 - `hub_model_id`: None
 - `hub_strategy`: every_save
@@ -454,21 +423,19 @@
 </details>
 
 ### Training Logs
-| Epoch | Step
-| 1.0 |
-| 2.0 |
-| 3.0 |
-| 4.0
-| 5.0 | 26045 | 8.881 | 0.7289 |
-| **6.0** | **31254** | **8.3454** | **0.7302** |
+| Epoch   | Step       | Training Loss | dot_ndcg@10 |
+|:-------:|:----------:|:-------------:|:-----------:|
+| 1.0     | 42188      | 8.6242        | 0.7262      |
+| 2.0     | 84376      | 7.0404        | 0.7362      |
+| 3.0     | 126564     | 5.3661        | 0.7388      |
+| **4.0** | **168752** | **4.4807**    | **0.7392**  |
 
 * The bold row denotes the saved checkpoint.
 
 ### Framework Versions
 - Python: 3.11.13
 - Sentence Transformers: 5.0.0
-- Transformers: 4.53.
+- Transformers: 4.53.3
 - PyTorch: 2.6.0+cu124
 - Accelerate: 1.8.1
 - Datasets: 4.0.0
@@ -542,5 +509,4 @@
 ## Model Card Contact
 
 *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--->
-</details>
+-->
config.json CHANGED
@@ -17,7 +17,7 @@
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "torch_dtype": "float32",
-  "transformers_version": "4.
+  "transformers_version": "4.54.0",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 30522
config_sentence_transformers.json CHANGED
@@ -2,7 +2,7 @@
   "model_type": "SparseEncoder",
   "__version__": {
     "sentence_transformers": "5.0.0",
-    "transformers": "4.
+    "transformers": "4.54.0",
     "pytorch": "2.6.0+cu124"
   },
   "prompts": {
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4050564a649d96030d5ec42b38ae323f47c9454d87af057a42721bf892ed32a7
 size 44814856