Improve dataset card: Add paper, code, project links, task category, and sample usage
This PR enhances the `Code-Regression` dataset card by:
- Adding direct links to the associated paper, GitHub repository, and project page at the top of the README for quick access to source information.
- Updating the introductory paragraph to provide clearer context about the dataset's role in "code-to-metric regression."
- Incorporating `task_categories: ['text-generation']` into the metadata, making the dataset more discoverable under relevant AI task filters.
- Adding a dedicated "Sample Usage with `RegressLM`" section, featuring a Python code snippet from the official GitHub README that demonstrates how to perform inference with models trained using this dataset.
- Wrapping the existing BibTeX citation in a fenced `` ```bibtex `` code block for better formatting.
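
As a quick way to sanity-check the new section, here is a minimal sketch that pairs the added `RegressLM` snippet with the card's existing `load_dataset` example. It is illustrative only: the choice of split, the eight-example fine-tune, and the averaging of sampled predictions are assumptions made here, not part of the card; the `regress_lm` calls and the `input`/`target` columns are taken from the snippets shown in the diff below.

```python
from datasets import load_dataset
from regress_lm import core
from regress_lm import rlm

# Load the dataset as in the card's quick-start snippet.
ds = load_dataset("akhauriyash/Code-Regression")
split = next(iter(ds.values()))  # first available split; no split name is assumed

# Build an untrained RegressLM as in the newly added section
# (a real evaluation would use a trained checkpoint instead).
reg_lm = rlm.RegressLM.from_scratch(max_input_len=2048)

# Fine-tune on a handful of (code, metric) pairs from the dataset.
examples = [
    core.Example(x=row["input"], y=float(row["target"]))
    for row in split.select(range(8))
]
reg_lm.fine_tune(examples)

# Sample predictions for one more input and report their mean.
query = core.ExampleInput(x=split[8]["input"])
(samples,) = reg_lm.sample([query], num_samples=128)
print(sum(samples) / len(samples))
```

For a full evaluation, the card's existing "Testing Code-Regression with a basic Gemma RLM model" section remains the reference.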
The diff below shows the changes to `README.md`.

````diff
@@ -7,12 +7,15 @@ tags:
 - leetcode
 - kernel
 - text regression
+task_categories:
+- text-generation
 ---
+
 # Code-Regression

-
+[Paper](https://huggingface.co/papers/2509.26476) | [GitHub Repository](https://github.com/google-deepmind/regress-lm/tree/main) | [Project Page](https://research.google/blog/simulating-large-systems-with-regression-language-models/)

-
+A unified regression dataset collated from three sources (APPS, KBSS, CDSS) along with our own custom profiling for training and evaluating regression models that map code strings to a target metric. This dataset supports "code-to-metric regression," which involves predicting numeric outcomes of code executions using Regression Language Models (RLM), as described in the linked paper.

 **Link for Graph-Regression dataset**: https://huggingface.co/datasets/akhauriyash/GraphArch-Regression

@@ -49,6 +52,26 @@ from datasets import load_dataset
 ds = load_dataset("akhauriyash/Code-Regression")
 ```

+## Sample Usage with `RegressLM`
+
+The `regress_lm` library provides the `RegressLM` class for decoding floating-point predictions from a given input and fine-tuning against new data. Below is an example of how to instantiate `RegressLM` and use it for inference.
+
+```python
+from regress_lm import core
+from regress_lm import rlm
+
+# Create RegressLM from scratch. Optionally, use `from_t5gemma_encoder`.
+reg_lm = rlm.RegressLM.from_scratch(max_input_len=2048)
+
+# Example (x,y) pairs, which can be fine-tuned against.
+examples = [core.Example(x='hello', y=0.3), core.Example(x='world', y=-0.3)]
+reg_lm.fine_tune(examples)
+
+# Query inputs.
+query1, query2 = core.ExampleInput(x='hi'), core.ExampleInput(x='bye')
+samples1, samples2 = reg_lm.sample([query1, query2], num_samples=128)
+```
+
 ## Testing Code-Regression with a basic Gemma RLM model

 Use the code below as reference for evaluating a basic RegressLM model ( better, more models to come! :) )
@@ -81,9 +104,12 @@ for SPACE in spaces:
         if SPACE != "CDSS" or language is None or lang == language:
             targets.append(float(row["target"]))
             if SPACE == "CDSS":
-                inputs.append(f"# {SPACE}\n# Language: {lang}\n{row['input']}")
+                inputs.append(f"# {SPACE}
+# Language: {lang}
+{row['input']}")
             else:
-                inputs.append(f"{SPACE}\n{row['input']}")
+                inputs.append(f"{SPACE}
+{row['input']}")
     except: continue
     if len(inputs) >= MAX_ITEMS: break
 preds = []
@@ -129,7 +155,7 @@ Paliskara, S., & Saroufim, M. (2025). KernelBook. https://huggingface.co/dataset

 If you found this dataset useful for your research, please cite the original sources above as well as:

-```
+```bibtex
 @article{akhauri2025regressionlanguagemodelscode,
   title={Regression Language Models for Code},
   author={Yash Akhauri and Xingyou Song and Arissa Wongpanich and Bryan Lewandowski and Mohamed S. Abdelfattah},
````