Update README.md
  data_files:
  - split: train
    path: data/train-*
license: mit
---

# PyTorch Semantic Code Dataset

A semantically-enriched Python code dataset combining syntactic tokenization with deep semantic analysis from Language Server Protocol (LSP) tools.

## 🎯 Overview

This dataset enhances tokenized Python code with semantic embeddings derived from static analysis tools (Tree-sitter + Jedi), providing models with both syntactic and semantic understanding of code symbols. Each token in the code is aligned with rich semantic information including type hints, definitions, documentation, and cross-references.

**Key Features:**

- 🔤 **Tokenized Python Code**: Using the Qwen3-0.6B tokenizer
- 🧠 **Semantic Embeddings**: 1024D vectors from Qwen3-Embedding-0.6B
- 🔍 **Symbol Analysis**: Type information, definitions, and cross-references via Jedi
- 📍 **Precise Alignment**: Token-level mapping between syntax and semantics
- 🏗️ **Production Code**: Real PyTorch codebase for authentic patterns

## 📊 Dataset Statistics

| Metric | Value |
|--------|-------|
| **Total Sequences** | 1,540 |
| **Training Samples** | 1,232 |
| **Evaluation Samples** | 308 |
| **Average Sequence Length** | ~200 tokens |
| **Semantic Coverage** | ~35% of tokens have semantic information |
| **Embedding Dimension** | 1024 |
| **Source Code** | PyTorch codebase |
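
The length and coverage figures above can be roughly re-derived from the published splits; a minimal sketch:

```python
# Minimal sketch: recompute average length and semantic coverage from the train split.
import numpy as np
from datasets import load_dataset

dataset = load_dataset("ant-des/pytorch-semantic-dataset-fixed")

lengths = [len(ex["input_ids"]) for ex in dataset["train"]]
coverage = [sum(ex["semantic_positions"]) / len(ex["semantic_positions"])
            for ex in dataset["train"]]

print(f"Average sequence length: {np.mean(lengths):.0f} tokens")
print(f"Average semantic coverage: {100 * np.mean(coverage):.1f}%")
```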

## 🏗️ Dataset Structure

Each sample contains:

```python
{
    "input_ids": [2, 847, 3288, ...],               # Tokenized code (Qwen3-0.6B)
    "semantic_embeddings": [[0.1, -0.2, ...], ...], # 1024D embeddings per token
    "semantic_positions": [0, 0, 1, 1, 0, ...],     # Binary mask (1=has semantic info)
    "attention_mask": [1, 1, 1, 1, 1, ...],         # Standard attention mask
    "file_path": "torch/nn/modules/linear.py",      # Source file
    "chunk_info": "lines_45_120",                   # Code chunk information
    "num_symbols": 23                               # Number of semantic symbols
}
```

### Field Descriptions

- **`input_ids`**: Token IDs from the Qwen3-0.6B tokenizer
- **`semantic_embeddings`**: One 1024D vector per token, containing semantic information for symbol tokens or zeros for non-symbols
- **`semantic_positions`**: Binary mask indicating which tokens have meaningful semantic embeddings
- **`attention_mask`**: Standard attention mask for the sequence
- **`file_path`**: Path to the original Python file
- **`chunk_info`**: Information about which part of the file this sequence represents
- **`num_symbols`**: Count of tokens that received semantic enrichment
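
Together, `semantic_positions` and `semantic_embeddings` let you pull out only the enriched vectors; a minimal sketch, using `train_dataset` as loaded in the Quick Start section below:

```python
# Minimal sketch: select the semantic vectors of symbol tokens in one sample.
# Uses `train_dataset` as loaded in the Quick Start section below.
import torch

sample = train_dataset[0]
embeddings = torch.tensor(sample["semantic_embeddings"])       # (seq_len, 1024)
positions = torch.tensor(sample["semantic_positions"]).bool()  # (seq_len,)

symbol_vectors = embeddings[positions]  # (num_enriched_tokens, 1024)
print(symbol_vectors.shape, "vs num_symbols =", sample["num_symbols"])
```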

## 🔬 Semantic Information

The semantic embeddings encode rich information extracted via Jedi analysis:

### What's Embedded

- **Type Information**: `Type[ABC]`, `(self, event) -> None`
- **Definitions**: Function signatures, class definitions
- **Documentation**: Docstrings and comments
- **Cross-References**: Where symbols are defined
- **Import Resolution**: Module and package information
- **Scope Analysis**: Variable and function scope

### Example Semantic Descriptions

```python
# For token "_StreamBase"
"name: _StreamBase. kind: class_def. type: Type[_StreamBase]. "
"definition: class _StreamBase. description: Base stream class abstraction."
```
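
These description strings are what the embedding model turns into the per-token vectors. A minimal sketch of that step, assuming the Qwen3-Embedding-0.6B checkpoint is loaded through sentence-transformers (the exact model ID and settings used to build the dataset may differ):

```python
# Minimal sketch: embed a semantic description into a 1024D vector.
# The model ID and sentence-transformers usage here are assumptions; the exact
# pipeline used to build the dataset may differ.
from sentence_transformers import SentenceTransformer

embedding_model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

description = (
    "name: _StreamBase. kind: class_def. type: Type[_StreamBase]. "
    "definition: class _StreamBase. description: Base stream class abstraction."
)
vector = embedding_model.encode(description)
print(vector.shape)  # (1024,)
```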

## 🚀 Quick Start

### Loading the Dataset

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load dataset
dataset = load_dataset("ant-des/pytorch-semantic-dataset-fixed")

# Load corresponding tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Access splits
train_dataset = dataset["train"]
eval_dataset = dataset["test"]

print(f"Training samples: {len(train_dataset)}")
print(f"Evaluation samples: {len(eval_dataset)}")
```

### Inspecting Samples

```python
# Get a sample
sample = train_dataset[0]

# Reconstruct the code
code = tokenizer.decode(sample["input_ids"], skip_special_tokens=True)
print("Code:", code[:200] + "...")

# Check semantic coverage
semantic_tokens = sum(sample["semantic_positions"])
total_tokens = len(sample["semantic_positions"])
coverage = semantic_tokens / total_tokens * 100
print(f"Semantic coverage: {coverage:.1f}%")

# Find semantic tokens
for i, (token_id, has_semantic) in enumerate(zip(sample["input_ids"], sample["semantic_positions"])):
    if has_semantic:
        token_text = tokenizer.decode([token_id])
        print(f"Semantic token at position {i}: '{token_text}'")
```

## 🎯 Use Cases

### 1. **Semantic Code Completion**

Train language models that understand code semantics for better completions:

```python
# Model sees both syntax and semantics (illustrative pseudocode)
input_ids = [class_token, identifier_token]
semantic_info = [zero_embedding, class_definition_embedding]
# → Better understanding of class structure
```

### 2. **Code Understanding Tasks**

- **Variable Type Inference**: Using semantic type information
- **Function Signature Prediction**: Leveraging parameter and return type data
- **Import Resolution**: Understanding cross-module dependencies
- **Refactoring Assistance**: Knowing symbol definitions and usages

### 3. **Multi-Modal Code Models**

Combine syntactic and semantic representations:

```python
class SemanticCodeModel(nn.Module):
    def forward(self, input_ids, semantic_embeddings, semantic_positions):
        # Process both streams
        syntactic_repr = self.language_model(input_ids)
        semantic_repr = self.semantic_projection(semantic_embeddings)

        # Cross-attention fusion
        enhanced_repr = self.cross_attention(
            syntactic_repr, semantic_repr, semantic_positions
        )
        return enhanced_repr
```

## 🔧 Creation Methodology

### 1. **Source Selection**

- PyTorch codebase for production-quality Python code
- Filtered files: 1KB-200KB size range (see the sketch below)
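
A minimal sketch of that size filter, assuming a local checkout of the PyTorch repository (the path and helper name are illustrative, not the dataset's actual selection script):

```python
# Minimal sketch of the 1KB-200KB file filter; `select_files` and the
# "pytorch/" path are illustrative, not the dataset's actual tooling.
from pathlib import Path

MIN_SIZE, MAX_SIZE = 1_024, 200 * 1_024  # 1 KB and 200 KB in bytes

def select_files(repo_root: str) -> list[Path]:
    return [
        path
        for path in Path(repo_root).rglob("*.py")
        if MIN_SIZE <= path.stat().st_size <= MAX_SIZE
    ]

files = select_files("pytorch/")  # local clone of the PyTorch repo
print(f"Selected {len(files)} Python files")
```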

### 2. **Symbol Extraction**

```python
# Tree-sitter for precise symbol locations
tree = parser.parse(source_code)
symbols = extract_identifiers(tree)  # Functions, classes, variables

# Jedi for semantic analysis
script = jedi.Script(code=source_code, path=file_path)
definitions = script.goto(line, column, follow_imports=True)
type_info = script.complete(line, column)
```

### 3. **Semantic Description Generation**

```python
def create_semantic_description(symbol):
    description = f"name: {symbol.name}. kind: {symbol.type}. type: {symbol.type_hint}."

    if symbol.definition:
        description += f" definition: {symbol.definition}."

    if symbol.docstring:
        description += f" description: {symbol.docstring[:100]}."

    return description
```

### 4. **Embedding and Alignment**

```python
# Generate embeddings
semantic_embeddings = embedding_model.encode(descriptions)

# Align to tokens using Tree-sitter locations
token_embeddings = align_symbols_to_tokens(
    symbols, semantic_embeddings, tokenizer_output
)
```
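
The `align_symbols_to_tokens` helper is not reproduced in this card. As a purely hypothetical sketch of what character-offset alignment can look like (assuming each symbol exposes `start_char`/`end_char` from Tree-sitter and the code was tokenized with `return_offsets_mapping=True`):

```python
# Hypothetical alignment sketch, NOT the dataset's actual implementation.
# Assumes each symbol has .start_char/.end_char and tokenizer_output contains
# "offset_mapping" (from return_offsets_mapping=True).
import numpy as np

def align_symbols_to_tokens(symbols, semantic_embeddings, tokenizer_output, dim=1024):
    offsets = tokenizer_output["offset_mapping"]  # (char_start, char_end) per token
    token_embeddings = np.zeros((len(offsets), dim), dtype=np.float32)
    semantic_positions = np.zeros(len(offsets), dtype=np.int64)

    for symbol, embedding in zip(symbols, semantic_embeddings):
        for i, (tok_start, tok_end) in enumerate(offsets):
            # A token is enriched if its character span overlaps the symbol's span.
            if tok_start < symbol.end_char and tok_end > symbol.start_char:
                token_embeddings[i] = embedding
                semantic_positions[i] = 1

    return token_embeddings, semantic_positions
```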

## 📋 Model Training Example

```python
from transformers import AutoTokenizer, AutoModel, Trainer, TrainingArguments
import torch.nn as nn

class SemanticCodeModel(nn.Module):
    def __init__(self, base_model_name, semantic_dim=1024):
        super().__init__()
        self.base_model = AutoModel.from_pretrained(base_model_name)
        self.semantic_projection = nn.Linear(semantic_dim, self.base_model.config.hidden_size)
        self.cross_attention = nn.MultiheadAttention(
            self.base_model.config.hidden_size, num_heads=8, batch_first=True
        )

    def forward(self, input_ids, semantic_embeddings, semantic_positions, **kwargs):
        # Base language model
        outputs = self.base_model(input_ids, **kwargs)
        hidden_states = outputs.last_hidden_state

        # Project semantic embeddings
        semantic_proj = self.semantic_projection(semantic_embeddings)

        # Apply semantic mask
        masked_semantic = semantic_proj * semantic_positions.unsqueeze(-1)

        # Cross-attention fusion
        enhanced_states, _ = self.cross_attention(
            hidden_states, masked_semantic, masked_semantic
        )

        return enhanced_states

# Data collator
class SemanticDataCollator:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, batch):
        # Pad sequences and create batch tensors
        max_len = max(len(item["input_ids"]) for item in batch)

        # Implement padding logic for all fields
        # ... (see full implementation in repository)

# Training setup
model = SemanticCodeModel("Qwen/Qwen3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
data_collator = SemanticDataCollator(tokenizer)

training_args = TrainingArguments(
    output_dir="./semantic-code-model",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    warmup_steps=100,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)

trainer.train()
```

## 🛠️ Requirements

```bash
# Core dependencies
pip install datasets transformers torch
pip install tree-sitter tree-sitter-python
pip install jedi sentence-transformers
pip install numpy huggingface-hub
```

## 📖 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{pytorch_semantic_dataset_2025,
  title={PyTorch Semantic Code Dataset: Syntactic Tokenization with Semantic Enrichment},
  author={Antoine Descamps},
  year={2025},
  url={https://huggingface.co/datasets/ant-des/pytorch-semantic-dataset-fixed},
  note={A semantically-enriched Python code dataset combining Tree-sitter and Jedi analysis}
}
```

## 📄 License

This dataset is released under the MIT License.

**Note**: The source code is from the PyTorch project, which is licensed under BSD-3-Clause. This dataset contains processed representations of that code for research purposes.

## 📊 Dataset Card

### Dataset Summary

This dataset provides semantically-enriched Python code samples where each token is augmented with semantic information extracted through static analysis. It enables training of code models that can leverage both syntactic and semantic understanding.

### Supported Tasks

- Code completion with semantic awareness
- Type inference and prediction
- Symbol resolution and cross-referencing
- Code summarization and documentation
- Semantic code search and retrieval

### Languages

- Programming Language: Python
- Natural Language: English (for documentation and comments)

### Data Source

- PyTorch codebase (BSD-3-Clause licensed)
- Processed using Tree-sitter and Jedi static analysis tools

### Personal and Sensitive Information

This dataset contains only source code and does not include personal or sensitive information.

---