LCA Project Level Code Completion

How to load the dataset

from datasets import load_dataset

ds = load_dataset('JetBrains-Research/lca-codegen-huge', split='test')

Data Point Structure

  • repo – repository name in format {GitHub_user_name}__{repository_name}
  • commit_hash – commit hash
  • completion_file – dictionary with the completion file content in the following format:
      • filename – filepath of the completion file
      • content – content of the completion file
  • completion_lines – dictionary where keys are classes of lines and values are lists of integers (the numbers of the lines to complete). The classes are:
      • committed – line contains at least one function or class that was declared in the files committed in commit_hash
      • inproject – line contains at least one function or class that was declared in the project (excluding the previous class)
      • infile – line contains at least one function or class that was declared in the completion file (excluding the previous classes)
      • common – line contains at least one function or class that was classified as common, e.g., main, get, etc. (excluding the previous classes)
      • non_informative – line that was classified as non-informative, e.g., too short or containing comments
      • random – line sampled randomly from the rest of the lines
  • repo_snapshot – dictionary with a snapshot of the repository before the commit. It has the same structure as completion_file, but the filenames and contents are organized as lists (see the access sketch below).
  • completion_lines_raw – the same as completion_lines, but before sampling.
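
To make this layout concrete, here is a minimal access sketch. The field names follow the schema above; whether the line numbers are 0-based or 1-based is an assumption here and worth verifying against the data.

point = ds[0]  # reuses `ds` from the loading snippet above
print(point['repo'], point['commit_hash'])

# Completion file: path and full text
completion = point['completion_file']
file_lines = completion['content'].split('\n')

# Lines to complete, grouped by class ('infile', 'inproject', ...)
for line_class, line_numbers in point['completion_lines'].items():
    for n in line_numbers:
        context = '\n'.join(file_lines[:n])  # file content before the target line (assuming 0-based indexing)
        target = file_lines[n]               # ground-truth line of class `line_class`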

How we collected the data

To collect the data, we cloned repositories from GitHub whose main language is Python. The completion file for each data point is a .py file that was added to the repository in some commit; the state of the repository just before that commit is the repo snapshot.

The huge subset is defined by the number of characters in the .py files of the repository snapshot: for every data point in this dataset, that number is larger than 768K.
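
As a rough sanity check of this criterion, you can recompute the character count from repo_snapshot. This is a sketch that assumes the parallel filename/content lists described above and reuses `ds` from the loading snippet.

point = ds[0]
snapshot = point['repo_snapshot']
py_chars = sum(
    len(text)
    for name, text in zip(snapshot['filename'], snapshot['content'])
    if name.endswith('.py')
)
print(py_chars)  # for this "huge" subset, expected to exceed 768K characters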

Dataset Stats

  • Number of datapoints: 296
  • Number of repositories: 75
  • Number of commits: 252

Completion File

  • Number of lines, median: 313.5
  • Number of lines, min: 200
  • Number of lines, max: 1877

Repository Snapshot

  • .py files: median 261, from 47 to 5227
  • non .py files: median 262, from 24 to 7687
  • .py lines: median 49811
  • non .py lines: median 60163

Line Counts

  • infile: 2608
  • inproject: 2901
  • common: 692
  • committed: 1019
  • non_informative: 1164
  • random: 1426
  • total: 9810
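
These totals should be reproducible by aggregating completion_lines over the split, for example with a sketch like the following (reusing `ds` from the loading snippet; the per-class key spellings come from the dataset itself).

from collections import Counter

counts = Counter()
for point in ds:
    for line_class, line_numbers in point['completion_lines'].items():
        counts[line_class] += len(line_numbers)
print(dict(counts), sum(counts.values()))  # per-class counts and the overall total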
