license: apache-2.0
language:
- en
Signal and Noise: A Framework for Reducing Uncertainty in Language Model Evaluation
Our work studies the ratio between signal, a benchmark's ability to separate models, and noise, a benchmark's sensitivity to random variability between training steps.
This dataset contains the evaluation results. For utilities to use this dataset and to reproduce the findings in our paper, please see our GitHub repository.
Main Eval Suite (375 models)
import pandas as pd
from snr.download.hf import pull_predictions_from_hf

# Download the main eval suite ('core' split) and load it as a DataFrame
local_path = pull_predictions_from_hf("allenai/signal-and-noise", split_name='core')
df = pd.read_parquet(local_path)
print(f'Loaded {len(df):,} model evaluations')
>>> Loaded 388,924 model evaluations
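Once loaded, it can help to sanity-check the frame before filtering. A minimal sketch, assuming the parquet has one row per model evaluation with columns such as model and task (the column names are assumptions, not a documented schema; check df.columns):

# Inspect the schema of the downloaded predictions
print(df.columns.tolist())
# 'model' and 'task' are assumed column names; adjust to what is printed above
print(df['model'].nunique(), 'models;', df['task'].nunique(), 'tasks')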
List of Included Tasks
agi_eval, arc_challenge, arc_challenge:mc, arc_easy, arc_easy:mc, autobencher, autobencher:mc, boolq, boolq:mc, codex_humaneval, codex_humanevalplus, copycolors:mc, csqa, csqa:mc, custom_loss_numia_math, custom_loss_sky_t1, custom_loss_tulu_if, drop, gsm8k, gsm_plus, gsm_symbolic_main, gsm_symbolic_p1, gsm_symbolic_p2, hellaswag, hellaswag:mc, jeopardy, mbpp, mbppplus, medmcqa, medmcqa:mc, minerva, minerva_math_500, mmlu, multitask_all, multitask_code, multitask_knowledge, multitask_math, openbookqa, openbookqa:mc, paloma_4chan_meta_sep, paloma_c4_100_domains, paloma_c4_en, paloma_dolma-v1_5, paloma_dolma_100_programing_languages, paloma_dolma_100_subreddits, paloma_falcon-refinedweb, paloma_gab, paloma_m2d2_s2orc_unsplit, paloma_m2d2_wikipedia_unsplit, paloma_manosphere_meta_sep, paloma_mc4, paloma_ptb, paloma_redpajama, paloma_twitterAAE_HELM_fixed, paloma_wikitext_103, piqa, piqa:mc, socialiqa, socialiqa:mc, squad, triviaqa, winogrande, winogrande:mc
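To study a single benchmark from this list, filter the frame by task identifier. A minimal sketch, again assuming task, model, and primary_score columns (all three names are assumptions):

# Restrict to one task from the list above and rank models by mean score
gsm8k = df[df['task'] == 'gsm8k']
print(gsm8k.groupby('model')['primary_score'].mean()
           .sort_values(ascending=False).head())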
List of Included Models
- Intermediate checkpoint models (2): allenai/OLMo-2-1124-13B, allenai/OLMo-2-1124-7B
- Ladder models (25, written as a name pattern; see the expansion sketch after this list):
allenai/OLMo-Ladder-{190M|370M|760M|1B|3B}-{0.5xC|1xC|2xC|5xC|10xC}
- DataDecide models (225, also written as a name pattern):
allenai/DataDecide-{c4|dclm-baseline|dclm-baseline-25p-dolma1.7-75p|dclm-baseline-50p-dolma1.7-50p|dclm-baseline-75p-dolma1.7-25p|dclm-baseline-qc-10p|dclm-baseline-qc-20p|dclm-baseline-qc-7p-fw2|dclm-baseline-qc-7p-fw3|dclm-baseline-qc-fw-10p|dclm-baseline-qc-fw-3p|dolma1_6plus|dolma1_7|dolma1_7-no-code|dolma1_7-no-flan|dolma1_7-no-math-code|dolma1_7-no-reddit|falcon|falcon-and-cc|falcon-and-cc-qc-10p|falcon-and-cc-qc-20p|falcon-and-cc-qc-orig-10p|falcon-and-cc-qc-tulu-10p|fineweb-edu|fineweb-pro}-{4M|20M|60M|90M|150M|300M|530M|750M|1B}
- External models (119):
01-ai/Yi-1.5-34B, 01-ai/Yi-1.5-6B, 01-ai/Yi-1.5-9B, 01-ai/Yi-1.5-9B-32K, 01-ai/Yi-34B, 01-ai/Yi-6B, 01-ai/Yi-6B-200K, 01-ai/Yi-9B, 01-ai/Yi-9B-200K, BEE-spoke-data/smol_llama-220M-GQA, BEE-spoke-data/smol_llama-220M-GQA-fineweb_edu, CortexLM/btlm-7b-base-v0.2, Deci/DeciLM-7B, EleutherAI/pythia-1.4b, EleutherAI/pythia-12b, EleutherAI/pythia-14m, EleutherAI/pythia-160m, EleutherAI/pythia-1b, EleutherAI/pythia-2.8b, EleutherAI/pythia-6.9b, EleutherAI/pythia-70m, HelpingAI/Priya-3B, HuggingFaceTB/SmolLM-1.7B, HuggingFaceTB/SmolLM-135M, HuggingFaceTB/SmolLM-360M, HuggingFaceTB/SmolLM2-1.7B, HuggingFaceTB/SmolLM2-135M, Qwen/CodeQwen1.5-7B, Qwen/Qwen1.5-1.8B, Qwen/Qwen1.5-110B, Qwen/Qwen1.5-14B, Qwen/Qwen1.5-32B, Qwen/Qwen1.5-4B, Qwen/Qwen1.5-72B, Qwen/Qwen1.5-7B, Qwen/Qwen2-0.5B, Qwen/Qwen2-1.5B, Qwen/Qwen2-72B, Qwen/Qwen2-7B, Qwen/Qwen2.5-0.5B, Qwen/Qwen2.5-1.5B, Qwen/Qwen2.5-14B, Qwen/Qwen2.5-32B, Qwen/Qwen2.5-3B, Qwen/Qwen2.5-72B, Qwen/Qwen2.5-7B, Qwen/Qwen2.5-Coder-14B, Qwen/Qwen2.5-Coder-7B, Qwen/Qwen2.5-Math-7B, TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T, TinyLlama/TinyLlama_v1.1, allenai/OLMo-1B-0724-hf, allenai/OLMo-1B-hf, allenai/OLMo-2-0325-32B, allenai/OLMo-2-0425-1B, allenai/OLMo-2-1124-13B, allenai/OLMo-2-1124-7B, allenai/OLMo-7B-0424-hf, allenai/OLMo-7B-0724-hf, allenai/OLMo-7B-Twin-2T-hf, allenai/OLMo-7B-hf, allenai/OLMoE-1B-7B-0924, amd/AMD-Llama-135m, beomi/gemma-mling-7b, bigcode/starcoder2-3b, bigcode/starcoder2-7b, databricks/dolly-v1-6b, deepseek-ai/deepseek-llm-67b-base, deepseek-ai/deepseek-llm-7b-base, deepseek-ai/deepseek-moe-16b-base, dicta-il/dictalm2.0, facebook/opt-1.3b, google/codegemma-1.1-2b, google/gemma-2-2b, google/gemma-2-9b, google/gemma-2b, google/gemma-7b, h2oai/h2o-danube3-4b-base, huggyllama/llama-13b, huggyllama/llama-30b, huggyllama/llama-65b, huggyllama/llama-7b, ibm/PowerLM-3b, jebish7/Nemotron-4-Mini-Hindi-4B-Base, m-a-p/neo_7b, meta-llama/Llama-2-13b-hf, meta-llama/Llama-2-7b-hf, meta-llama/Llama-3.1-70B, meta-llama/Llama-3.1-8B, meta-llama/Llama-3.2-1B, meta-llama/Llama-3.2-3B, meta-llama/Meta-Llama-3-70B, meta-llama/Meta-Llama-3-8B, meta-llama/Meta-Llama-3.1-70B, meta-llama/Meta-Llama-3.1-8B, microsoft/Orca-2-13b, microsoft/Orca-2-7b, microsoft/phi-1, microsoft/phi-1_5, microsoft/phi-2, microsoft/phi-4, mistralai/Mathstral-7B-v0.1, mistralai/Mixtral-8x22B-v0.1, mistralai/Mixtral-8x7B-v0.1, mosaicml/mpt-7b, princeton-nlp/Sheared-LLaMA-1.3B, princeton-nlp/Sheared-LLaMA-2.7B, qingy2024/Qwen2.5-4B, speakleash/Bielik-11B-v2, stabilityai/stablelm-2-1_6b, stabilityai/stablelm-3b-4e1t, tiiuae/Falcon3-10B-Base, tiiuae/Falcon3-3B-Base, tiiuae/Falcon3-Mamba-7B-Base, tiiuae/falcon-11B, tiiuae/falcon-7b, togethercomputer/RedPajama-INCITE-7B-Base, upstage/SOLAR-10.7B-v1.0, vonjack/MobileLLM-125M-HF
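The Ladder and DataDecide entries above are name patterns rather than literal repository IDs. A minimal sketch of expanding such a pattern into the full list of Hugging Face model names (the pattern strings are taken verbatim from the list above; the helper itself is illustrative, not part of the snr package):

from itertools import product
import re

def expand_pattern(pattern: str) -> list[str]:
    # re.split with a capturing group keeps the brace contents at odd indices
    parts = re.split(r'\{([^}]*)\}', pattern)
    choices = [part.split('|') if i % 2 else [part] for i, part in enumerate(parts)]
    return [''.join(combo) for combo in product(*choices)]

ladder = expand_pattern("allenai/OLMo-Ladder-{190M|370M|760M|1B|3B}-{0.5xC|1xC|2xC|5xC|10xC}")
print(len(ladder))  # 25: 5 sizes x 5 token budgets
print(ladder[0])    # allenai/OLMo-Ladder-190M-0.5xC

The same helper applied to the DataDecide pattern yields 25 data recipes x 9 sizes = 225 names.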
DataDecide Eval Suite (225 models with 4M to 1B params)
import pandas as pd
from snr.download.hf import pull_predictions_from_hf
local_path = pull_predictions_from_hf("allenai/signal-and-noise", split_name='datadecide_intermediate')
df = pd.read_parquet(local_path)
print(f'Loaded {len(df):,} model evaluations')
>>> Loaded 212,047 model evaluations
Random Seed Eval Suite (20 models with 1B params)
import pandas as pd
from snr.download.hf import pull_predictions_from_hf
local_path = pull_predictions_from_hf("allenai/signal-and-noise", split_name='random_seeds')
df = pd.read_parquet(local_path)
print(f'Loaded {len(df):,} model evaluations')
>>> Loaded 296,358 model evaluations
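The random-seed split is what the paper uses to estimate noise from training variability. A minimal sketch of one such estimate, the spread of a benchmark score across the 20 seed runs (the column names 'task', 'model', and 'primary_score' are assumptions; check the schema of the downloaded parquet):

# Aggregate to one score per (model, task), then take the standard
# deviation across the 20 random-seed models as a rough per-task
# noise estimate. Column names here are assumptions.
scores = df.groupby(['task', 'model'])['primary_score'].mean()
noise = scores.groupby('task').std()
print(noise.sort_values(ascending=False).head())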
AutoBencher QA Benchmark
For the AutoBencher evaluation used in our work, please refer to huggingface.co/datasets/allenai/autobencher-qa-33k.
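If helpful, the benchmark itself can be pulled directly with the Hugging Face datasets library (a sketch; the repo ID comes from the link above, while configuration and split names are left to that dataset card):

from datasets import load_dataset

# Load the AutoBencher QA benchmark referenced above
autobencher = load_dataset("allenai/autobencher-qa-33k")
print(autobencher)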
Dataset Description
- Developed by: Allen Institute for AI (Ai2)
- Language(s) (NLP): English
- License: The model evaluations are intended for research and educational use in accordance with Ai2's Responsible Use Guidelines.
- Contact: Technical inquiries: [email protected]; press: [email protected]
Citation
@article{heineman2025signal,
  title={Signal and Noise: A Framework for Reducing Uncertainty in Language Model Evaluation},
  author={Heineman, David and Hofmann, Valentin and Magnusson, Ian and Gu, Yuling and Smith, Noah A and Hajishirzi, Hannaneh and Lo, Kyle and Dodge, Jesse},
  journal={arXiv preprint arXiv:2508.13144},
  year={2025}
}