# MiniLM-L12-H384-uncased intent reranker

This is a Cross Encoder model fine-tuned from microsoft/MiniLM-L12-H384-uncased using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details

### Model Description

- Model Type: Cross Encoder
- Base model: microsoft/MiniLM-L12-H384-uncased
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Language: en
- License: apache-2.0
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://www.sbert.net)
- Documentation: [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Cross Encoders on Hugging Face
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")

# Get scores for pairs of texts
pairs = [
    ['Add edge representing resource request', ' Model process-resource dependency relationship'],
    ['Split text into words list', ' Filter words matching given keyword.'],
    ['Calculate approximate cube root value', ' Find cube root using exponentiation'],
    ['Reverse sublist within linked list', ' Move nodes to new positions'],
    ['Defines neighbors for node A', ' Specifies direct connections from A'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'Add edge representing resource request',
    [
        ' Model process-resource dependency relationship',
        ' Filter words matching given keyword.',
        ' Find cube root using exponentiation',
        ' Move nodes to new positions',
        ' Specifies direct connections from A',
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
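Higher scores indicate a stronger match between the two texts. Because this model has a single output label, `predict` typically passes the raw logit through a sigmoid activation, so scores fall in the (0, 1) range.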
## Evaluation

### Metrics

#### Cross Encoder Reranking

- Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
- Evaluated with `CrossEncoderRerankingEvaluator` with these parameters: `{"at_k": 10, "always_rerank_positives": true}`
| Metric  | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100      |
|---------|------------------|-------------------|------------------|
| map     | 0.0735 (-0.4161) | 0.3017 (+0.0407)  | 0.0837 (-0.3359) |
| mrr@10  | 0.0476 (-0.4299) | 0.4457 (-0.0541)  | 0.0661 (-0.3606) |
| ndcg@10 | 0.0687 (-0.4718) | 0.2916 (-0.0335)  | 0.0748 (-0.4258) |
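To run this style of evaluation on your own data, the sketch below shows minimal `CrossEncoderRerankingEvaluator` usage with the parameters above. The `samples` list is hypothetical and only illustrates the documented query/positive/documents format; it is not the evaluation data used for this card.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")

# Hypothetical evaluation samples: each entry pairs a query with its known
# relevant texts ("positive") and the full candidate list to rerank ("documents").
samples = [
    {
        "query": "Add edge representing resource request",
        "positive": [" Model process-resource dependency relationship"],
        "documents": [
            " Model process-resource dependency relationship",
            " Filter words matching given keyword.",
            " Find cube root using exponentiation",
        ],
    },
]

evaluator = CrossEncoderRerankingEvaluator(
    samples=samples,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)  # dict of MAP, MRR@10 and NDCG@10 scores
```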
#### Cross Encoder Nano BEIR

- Dataset: `NanoBEIR_R100_mean`
- Evaluated with `CrossEncoderNanoBEIREvaluator` with these parameters: `{"dataset_names": ["msmarco", "nfcorpus", "nq"], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true}`
| Metric  | Value            |
|---------|------------------|
| map     | 0.1529 (-0.2371) |
| mrr@10  | 0.1864 (-0.2816) |
| ndcg@10 | 0.1450 (-0.3104) |
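The NanoBEIR numbers above should be reproducible with the evaluator and parameters listed; a minimal sketch, assuming network access to download the Nano datasets from the Hub:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")

# Reranks the top-100 retrieved candidates and reports metrics at k=10,
# matching the parameters reported above.
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results[evaluator.primary_metric])  # mean NDCG@10 across the three datasets
```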
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 85,938 training samples
- Columns: `question` and `answer`
- Approximate statistics based on the first 1000 samples:

| Statistic   | question         | answer           |
|-------------|------------------|------------------|
| type        | string           | string           |
| min length  | 18 characters    | 18 characters    |
| mean length | 33.49 characters | 35.88 characters |
| max length  | 49 characters    | 52 characters    |

- Samples:

| question | answer |
|----------|--------|
| Check if configuration loaded successfully | prevent further actions if configuration absent |
| Add new user to list | Store received user in memory |
| Selects profitable jobs and schedules | Displays scheduled jobs and profit |

- Loss: `CachedMultipleNegativesRankingLoss` with these parameters: `{"scale": 10.0, "num_negatives": 5, "activation_fn": "torch.nn.modules.activation.Sigmoid", "mini_batch_size": 16}`
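The loss can be instantiated with exactly these parameters; a minimal sketch, assuming the base model is loaded fresh (dataset wiring is shown under Training Hyperparameters below):

```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

# Fresh base model with a single relevance-score output, as in this card
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# In-batch-negatives loss with gradient caching; parameters match those listed above
loss = CachedMultipleNegativesRankingLoss(
    model=model,
    num_negatives=5,
    scale=10.0,
    activation_fn=torch.nn.Sigmoid(),
    mini_batch_size=16,
)
```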
### Evaluation Dataset

#### Unnamed Dataset

- Size: 1,000 evaluation samples
- Columns: `question` and `answer`
- Approximate statistics based on the first 1000 samples:

| Statistic   | question         | answer           |
|-------------|------------------|------------------|
| type        | string           | string           |
| min length  | 20 characters    | 18 characters    |
| mean length | 33.63 characters | 35.86 characters |
| max length  | 54 characters    | 55 characters    |

- Samples:

| question | answer |
|----------|--------|
| Add edge representing resource request | Model process-resource dependency relationship |
| Split text into words list | Filter words matching given keyword. |
| Calculate approximate cube root value | Find cube root using exponentiation |

- Loss: `CachedMultipleNegativesRankingLoss` with these parameters: `{"scale": 10.0, "num_negatives": 5, "activation_fn": "torch.nn.modules.activation.Sigmoid", "mini_batch_size": 16}`
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
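A minimal sketch of how these non-default values slot into a training run. The tiny in-memory datasets below are hypothetical stand-ins for the real 85,938-sample train and 1,000-sample eval splits, and the `output_dir` is only illustrative:

```python
import torch
from datasets import Dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments
from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

# Base model and loss, as in the loss sketch above
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)
loss = CachedMultipleNegativesRankingLoss(
    model=model, num_negatives=5, scale=10.0,
    activation_fn=torch.nn.Sigmoid(), mini_batch_size=16,
)

# Toy stand-ins for the real question/answer splits
train_dataset = Dataset.from_dict({
    "question": ["Add new user to list", "Split text into words list"],
    "answer": ["Store received user in memory", "Filter words matching given keyword."],
})
eval_dataset = Dataset.from_dict({
    "question": ["Calculate approximate cube root value"],
    "answer": ["Find cube root using exponentiation"],
})

# The non-default hyperparameters listed above
args = CrossEncoderTrainingArguments(
    output_dir="reranker-MiniLM-L12-H384-uncased-intent",
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,  # requires a GPU with bfloat16 support
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```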
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|--------|------|---------------|-----------------|--------------------------|---------------------------|---------------------|----------------------------|
| -1     | -1   | -             | -               | 0.0146 (-0.5258)         | 0.2622 (-0.0628)          | 0.0058 (-0.4949)    | 0.0942 (-0.3612)           |
| 0.0030 | 1    | 1.7927        | -               | -                        | -                         | -                   | -                          |
| 0.2976 | 100  | 1.2688        | -               | -                        | -                         | -                   | -                          |
| 0.5952 | 200  | 0.8847        | -               | -                        | -                         | -                   | -                          |
| 0.7440 | 250  | -             | 0.8479          | 0.0586 (-0.4818)         | 0.2978 (-0.0272)          | 0.0717 (-0.4290)    | 0.1427 (-0.3127)           |
| 0.8929 | 300  | 0.8519        | -               | -                        | -                         | -                   | -                          |
| -1     | -1   | -             | -               | 0.0687 (-0.4718)         | 0.2916 (-0.0335)          | 0.0748 (-0.4258)    | 0.1450 (-0.3104)           |
## Environmental Impact

Carbon emissions were measured using CodeCarbon.

- Energy Consumed: 0.082 kWh
- Carbon Emitted: 0.000 kg of CO2
- Hours Used: 0.306 hours
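For reference, a minimal sketch of how measurements like these are taken with CodeCarbon; `run_training` is a hypothetical placeholder for the actual fine-tuning call, and the project name is only illustrative:

```python
from codecarbon import EmissionsTracker

def run_training() -> None:
    """Hypothetical placeholder for the actual fine-tuning run."""

tracker = EmissionsTracker(project_name="reranker-MiniLM-L12-H384-uncased-intent")
tracker.start()
try:
    run_training()
finally:
    emissions_kg = tracker.stop()  # returns total emissions in kg CO2-eq
    print(f"Emitted {emissions_kg:.6f} kg CO2-eq")
```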
### Training Hardware

- On Cloud: No
- GPU Model: 4 x NVIDIA RTX 6000 Ada Generation
- CPU Model: AMD EPYC 7763 64-Core Processor
- RAM Size: 251.53 GB
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.48.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```