MiniLM-L12-H384 reranker finetuned on an intent-pair dataset

This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: microsoft/MiniLM-L12-H384-uncased
  • Maximum Sequence Length: 512 tokens
  • Number of Output Labels: 1 label
  • Language: en
  • License: apache-2.0

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")
# Get scores for pairs of texts
pairs = [
    ['Add edge representing resource request', ' Model process-resource dependency relationship'],
    ['Split text into words list', ' Filter words matching given keyword.'],
    ['Calculate approximate cube root value', ' Find cube root using exponentiation'],
    ['Reverse sublist within linked list', ' Move nodes to new positions'],
    ['Defines neighbors for node A', ' Specifies direct connections from A'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'Add edge representing resource request',
    [
        ' Model process-resource dependency relationship',
        ' Filter words matching given keyword.',
        ' Find cube root using exponentiation',
        ' Move nodes to new positions',
        ' Specifies direct connections from A',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
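
Because this is a reranker, a common pattern is to pair it with a fast bi-encoder for first-stage retrieval and rerank only the top candidates. Below is a minimal sketch of that pipeline; the bi-encoder checkpoint (sentence-transformers/all-MiniLM-L6-v2), the toy corpus, and the top_k value are illustrative assumptions, not part of this model's setup.

from sentence_transformers import CrossEncoder, SentenceTransformer, util

# First-stage retriever (illustrative choice) and this reranker
retriever = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
reranker = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")

query = "Add edge representing resource request"
corpus = [
    "Model process-resource dependency relationship",
    "Filter words matching given keyword.",
    "Find cube root using exponentiation",
    "Move nodes to new positions",
    "Specifies direct connections from A",
]

# 1) Retrieve candidates cheaply with the bi-encoder
query_emb = retriever.encode(query, convert_to_tensor=True)
corpus_emb = retriever.encode(corpus, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]

# 2) Rerank the retrieved candidates with this cross encoder
candidates = [corpus[hit["corpus_id"]] for hit in hits]
scores = reranker.predict([(query, candidate) for candidate in candidates])
for candidate, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}  {candidate}")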

Evaluation

Metrics

Cross Encoder Reranking

  • Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
  • Evaluated with CrossEncoderRerankingEvaluator with these parameters:
    {
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric  | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100
map     | 0.0735 (-0.4161) | 0.3017 (+0.0407)  | 0.0837 (-0.3359)
mrr@10  | 0.0476 (-0.4299) | 0.4457 (-0.0541)  | 0.0661 (-0.3606)
ndcg@10 | 0.0687 (-0.4718) | 0.2916 (-0.0335)  | 0.0748 (-0.4258)

Cross Encoder Nano BEIR

  • Dataset: NanoBEIR_R100_mean
  • Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
    {
        "dataset_names": [
            "msmarco",
            "nfcorpus",
            "nq"
        ],
        "rerank_k": 100,
        "at_k": 10,
        "always_rerank_positives": true
    }
    
Metric  | Value
map     | 0.1529 (-0.2371)
mrr@10  | 0.1864 (-0.2816)
ndcg@10 | 0.1450 (-0.3104)
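
For reference, these numbers can be reproduced with the same evaluator class from sentence-transformers. A minimal sketch using the parameters listed above (exact result keys may differ between library versions):

from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("zhensuuu/reranker-MiniLM-L12-H384-uncased-intent")

# Same parameters as reported above
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # dict of metrics, including the mean ndcg@10 over the three datasets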

Training Details

Training Dataset

Unnamed Dataset

  • Size: 85,938 training samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
            | question                            | answer
    type    | string                              | string
    details | min: 18, mean: 33.49, max: 49 chars | min: 18, mean: 35.88, max: 52 chars
  • Samples:
    question                                    | answer
    Check if configuration loaded successfully  | prevent further actions if configuration absent
    Add new user to list                        | Store received user in memory
    Selects profitable jobs and schedules       | Displays scheduled jobs and profit
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    {
        "scale": 10.0,
        "num_negatives": 5,
        "activation_fn": "torch.nn.modules.activation.Sigmoid",
        "mini_batch_size": 16
    }
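
    As a sketch, this loss configuration corresponds roughly to the following construction (assuming model is the CrossEncoder being finetuned from the base checkpoint):

    import torch
    from sentence_transformers import CrossEncoder
    from sentence_transformers.cross_encoder.losses import CachedMultipleNegativesRankingLoss

    model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

    # Parameters taken from the serialized loss config above
    loss = CachedMultipleNegativesRankingLoss(
        model=model,
        scale=10.0,
        num_negatives=5,
        activation_fn=torch.nn.Sigmoid(),
        mini_batch_size=16,
    )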
    

Evaluation Dataset

Unnamed Dataset

  • Size: 1,000 evaluation samples
  • Columns: question and answer
  • Approximate statistics based on the first 1000 samples:
            | question                            | answer
    type    | string                              | string
    details | min: 20, mean: 33.63, max: 54 chars | min: 18, mean: 35.86, max: 55 chars
  • Samples:
    question                                | answer
    Add edge representing resource request  | Model process-resource dependency relationship
    Split text into words list              | Filter words matching given keyword.
    Calculate approximate cube root value   | Find cube root using exponentiation
  • Loss: CachedMultipleNegativesRankingLoss with these parameters:
    {
        "scale": 10.0,
        "num_negatives": 5,
        "activation_fn": "torch.nn.modules.activation.Sigmoid",
        "mini_batch_size": 16
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
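
Roughly, these map onto CrossEncoderTrainingArguments as in the sketch below; output_dir and the toy train/eval datasets are placeholders (the real datasets hold 85,938 and 1,000 pairs), and model and loss are the objects sketched under Training Dataset above.

from datasets import Dataset
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments

# Toy datasets built from the sample pairs above; placeholders for the real data
train_dataset = Dataset.from_dict({
    "question": ["Add new user to list"],
    "answer": ["Store received user in memory"],
})
eval_dataset = Dataset.from_dict({
    "question": ["Split text into words list"],
    "answer": ["Filter words matching given keyword."],
})

args = CrossEncoderTrainingArguments(
    output_dir="reranker-MiniLM-L12-H384-uncased-intent",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
)

trainer = CrossEncoderTrainer(
    model=model,  # CrossEncoder from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,    # CachedMultipleNegativesRankingLoss from the loss sketch above
)
trainer.train()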

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch  | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10
-1     | -1   | -             | -               | 0.0146 (-0.5258)         | 0.2622 (-0.0628)          | 0.0058 (-0.4949)    | 0.0942 (-0.3612)
0.0030 | 1    | 1.7927        | -               | -                        | -                         | -                   | -
0.2976 | 100  | 1.2688        | -               | -                        | -                         | -                   | -
0.5952 | 200  | 0.8847        | -               | -                        | -                         | -                   | -
0.7440 | 250  | -             | 0.8479          | 0.0586 (-0.4818)         | 0.2978 (-0.0272)          | 0.0717 (-0.4290)    | 0.1427 (-0.3127)
0.8929 | 300  | 0.8519        | -               | -                        | -                         | -                   | -
-1     | -1   | -             | -               | 0.0687 (-0.4718)         | 0.2916 (-0.0335)          | 0.0748 (-0.4258)    | 0.1450 (-0.3104)

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.082 kWh
  • Carbon Emitted: 0.000 kg of CO2
  • Hours Used: 0.306 hours
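
For context, the codecarbon package can be used to obtain such figures for your own runs. A minimal sketch is shown below; the Sentence Transformers trainer can also record this automatically when codecarbon is installed, so this is not necessarily how the numbers above were produced.

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run training or inference here (placeholder) ...
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent emitted
print(f"{emissions_kg:.6f} kg CO2eq")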

Training Hardware

  • On Cloud: No
  • GPU Model: 4 x NVIDIA RTX 6000 Ada Generation
  • CPU Model: AMD EPYC 7763 64-Core Processor
  • RAM Size: 251.53 GB

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 5.1.0
  • Transformers: 4.48.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}