---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: code
    dtype: string
  - name: task_id
    dtype: int64
  - name: test_setup_code
    dtype: string
  - name: test_list
    sequence: string
  - name: challenge_test_list
    sequence: string
  splits:
  - name: train
    num_bytes: 181176.50513347023
    num_examples: 374
  - name: few_shot
    num_bytes: 4844.29158110883
    num_examples: 10
  - name: validation
    num_bytes: 43598.62422997947
    num_examples: 90
  - name: test
    num_bytes: 242214.57905544149
    num_examples: 500
  download_size: 230787
  dataset_size: 471834
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: few_shot
    path: data/few_shot-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
This is the MBPP dataset, built from the original `mbpp.jsonl` release and constructed as follows:
```python
import datasets

# Load the raw MBPP release (a single JSON Lines file).
ds = datasets.load_dataset("json", data_files="mbpp.jsonl", split="train")

# Split by task_id, following the ranges recommended in the MBPP paper:
# 11-510 for testing, 1-10 for few-shot prompting, 511-600 for
# validation, and 601-974 for training.
test = ds.filter(lambda item: 11 <= item["task_id"] <= 510)
few_shot = ds.filter(lambda item: 1 <= item["task_id"] <= 10)
validation = ds.filter(lambda item: 511 <= item["task_id"] <= 600)
train = ds.filter(lambda item: 601 <= item["task_id"] <= 974)

ds = datasets.DatasetDict({
    "train": train,
    "few_shot": few_shot,
    "validation": validation,
    "test": test,
})
ds.push_to_hub("arjunguha/mbpp")
```
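To consume the published dataset, load it from the Hub. Each record carries the natural-language task in `text`, a reference solution in `code`, and executable `assert` statements in `test_list` (plus optional `test_setup_code`). Below is a minimal sketch of running a record's bundled tests against a candidate program; `run_tests` is an illustrative helper written for this card, not part of the dataset or the `datasets` library:

```python
import datasets

ds = datasets.load_dataset("arjunguha/mbpp")

def run_tests(example, candidate_code):
    # Execute the candidate program, any setup code, and the asserts
    # in one shared namespace; a failing test raises AssertionError.
    # NOTE: exec() runs arbitrary code -- sandbox this in a real evaluation.
    env = {}
    exec(candidate_code, env)
    if example["test_setup_code"]:
        exec(example["test_setup_code"], env)
    for assertion in example["test_list"]:
        exec(assertion, env)

example = ds["test"][0]
print(example["text"])               # the task description
run_tests(example, example["code"])  # the reference solution should pass
```

The `few_shot` split (task IDs 1-10) is intended for building few-shot prompts, so held-out evaluation is typically done on the `test` split.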
Credit:

```bibtex
@misc{austin2021programsynthesislargelanguage,
  title={Program Synthesis with Large Language Models},
  author={Jacob Austin and Augustus Odena and Maxwell Nye and Maarten Bosma and Henryk Michalewski and David Dohan and Ellen Jiang and Carrie Cai and Michael Terry and Quoc Le and Charles Sutton},
  year={2021},
  eprint={2108.07732},
  archivePrefix={arXiv},
  primaryClass={cs.PL},
  url={https://arxiv.org/abs/2108.07732},
}
```