---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: semantic_embeddings
    sequence:
      sequence: float64
  - name: semantic_positions
    sequence: int64
  - name: attention_mask
    sequence: int8
  - name: file_path
    dtype: string
  - name: chunk_info
    dtype: string
  - name: num_symbols
    dtype: int64
  splits:
  - name: train
    num_bytes: 2953926590
    num_examples: 1540
  download_size: 62991471
  dataset_size: 2953926590
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
---
# PyTorch Semantic Code Dataset
A semantically enriched Python code dataset combining syntactic tokenization with deep semantic analysis from static analysis tooling (Tree-sitter and Jedi, the library behind many Language Server Protocol implementations).
## 🎯 Overview
This dataset enhances tokenized Python code with semantic embeddings derived from static analysis tools (Tree-sitter + Jedi), providing models with both syntactic and semantic understanding of code symbols. Each token in the code is aligned with rich semantic information including type hints, definitions, documentation, and cross-references.
**Key Features:**
- 🔤 **Tokenized Python Code**: Using Qwen3-0.6B tokenizer
- 🧠 **Semantic Embeddings**: 1024D vectors from Qwen3-Embedding-0.6B
- 🔍 **Symbol Analysis**: Type information, definitions, and cross-references via Jedi
- 📍 **Precise Alignment**: Token-level mapping between syntax and semantics
- 🏗️ **Production Code**: Real PyTorch codebase for authentic patterns
## 📊 Dataset Statistics
| Metric | Value |
|--------|-------|
| **Total Sequences** | 1,540 |
| **Training Samples** | 1,232 |
| **Evaluation Samples** | 308 |
| **Average Sequence Length** | ~200 tokens (256 max length) |
| **Semantic Coverage** | ~35% of tokens have semantic information |
| **Embedding Dimension** | 1024 |
| **Source Code** | PyTorch codebase |
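The length and coverage figures above can be re-derived from the released data. A minimal sketch (repository name as in the Quick Start section below; the heavy embedding column is dropped to speed up iteration):
```python
from datasets import load_dataset

# Recompute average sequence length and semantic coverage over the train split
ds = load_dataset("ant-des/pytorch-semantic-dataset-fixed", split="train")
ds = ds.remove_columns(["semantic_embeddings"])  # not needed for these statistics

total_tokens = 0
semantic_tokens = 0
for sample in ds:
    total_tokens += sum(sample["attention_mask"])
    semantic_tokens += sum(sample["semantic_positions"])

print(f"Average sequence length: {total_tokens / len(ds):.1f} tokens")
print(f"Semantic coverage: {semantic_tokens / total_tokens:.1%}")
```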
## 🏗️ Dataset Structure
Each sample contains:
```python
{
    "input_ids": [2, 847, 3288, ...],                # Tokenized code (Qwen3-0.6B)
    "semantic_embeddings": [[0.1, -0.2, ...], ...],  # 1024D embeddings per token
    "semantic_positions": [0, 0, 1, 1, 0, ...],      # Binary mask (1 = has semantic info)
    "attention_mask": [1, 1, 1, 1, 1, ...],          # Standard attention mask
    "file_path": "torch/nn/modules/linear.py",       # Source file
    "chunk_info": "lines_45_120",                    # Code chunk information
    "num_symbols": 23                                # Number of semantic symbols
}
```
### Field Descriptions
- **`input_ids`**: Token IDs from Qwen3-0.6B tokenizer
- **`semantic_embeddings`**: One 1024-dimensional vector per token: the embedded semantic description for symbol tokens, an all-zero vector for non-symbol tokens
- **`semantic_positions`**: Binary mask indicating which tokens have meaningful semantic embeddings
- **`attention_mask`**: Standard attention mask for the sequence
- **`file_path`**: Path to the original Python file
- **`chunk_info`**: Information about which part of the file this sequence represents
- **`num_symbols`**: Count of tokens that received semantic enrichment
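A quick shape and consistency check for these fields, written as a sketch that relies only on the field names documented above:
```python
import torch
from datasets import load_dataset

sample = load_dataset("ant-des/pytorch-semantic-dataset-fixed", split="train")[0]

input_ids = torch.tensor(sample["input_ids"])             # (seq_len,)
embeddings = torch.tensor(sample["semantic_embeddings"])  # (seq_len, 1024)
positions = torch.tensor(sample["semantic_positions"])    # (seq_len,)

assert input_ids.shape[0] == embeddings.shape[0] == positions.shape[0]
assert embeddings.shape[1] == 1024

# Non-symbol positions should carry all-zero embeddings
non_symbol_rows = embeddings[positions == 0]
print("Non-symbol rows are zero vectors:", bool((non_symbol_rows == 0).all()))
print("Enriched tokens:", int(positions.sum()), "| num_symbols field:", sample["num_symbols"])
```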
## 🔬 Semantic Information
The semantic embeddings encode rich information extracted via Jedi analysis:
### What's Embedded
- **Type Information**: `Type[ABC]`, `(self, event) -> None`
- **Definitions**: Function signatures, class definitions
- **Documentation**: Docstrings and comments
- **Cross-References**: Where symbols are defined
- **Import Resolution**: Module and package information
- **Scope Analysis**: Variable and function scope
### Example Semantic Descriptions
```python
# For token "_StreamBase"
"name: _StreamBase. kind: class_def. type: Type[_StreamBase].
definition: class _StreamBase. description: Base stream class abstraction."
```
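Descriptions of this form are what get turned into vectors. A minimal sketch of embedding one description with Qwen3-Embedding-0.6B via `sentence-transformers` (loading the model locally like this is an assumption of the sketch, not necessarily how the dataset was built):
```python
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

description = (
    "name: _StreamBase. kind: class_def. type: Type[_StreamBase]. "
    "definition: class _StreamBase. description: Base stream class abstraction."
)

vector = embedder.encode(description)
print(vector.shape)  # (1024,), matching the dataset's embedding dimension
```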
## 🚀 Quick Start
### Loading the Dataset
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Load the dataset (the Hub repository ships a single "train" split)
dataset = load_dataset("ant-des/pytorch-semantic-dataset-fixed")

# Load the corresponding tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Recreate the 80/20 train/eval split used in the statistics above (1,232 / 308 samples)
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)  # seed is arbitrary
train_dataset = splits["train"]
eval_dataset = splits["test"]
print(f"Training samples: {len(train_dataset)}")
print(f"Evaluation samples: {len(eval_dataset)}")
```
### Inspecting Samples
```python
# Get a sample
sample = train_dataset[0]
# Reconstruct the code
code = tokenizer.decode(sample["input_ids"], skip_special_tokens=True)
print("Code:", code[:200] + "...")
# Check semantic coverage
semantic_tokens = sum(sample["semantic_positions"])
total_tokens = len(sample["semantic_positions"])
coverage = semantic_tokens / total_tokens * 100
print(f"Semantic coverage: {coverage:.1f}%")
# Find semantic tokens
for i, (token_id, has_semantic) in enumerate(zip(sample["input_ids"], sample["semantic_positions"])):
    if has_semantic:
        token_text = tokenizer.decode([token_id])
        print(f"Semantic token at position {i}: '{token_text}'")
```
## 🎯 Use Cases
### 1. **Semantic Code Completion**
Train language models that understand code semantics for better completions:
```python
# Model sees both syntax and semantics
input_ids = [class_token, identifier_token]
semantic_info = [zero_embedding, class_definition_embedding]
# → Better understanding of class structure
```
### 2. **Code Understanding Tasks**
- **Variable Type Inference**: Using semantic type information
- **Function Signature Prediction**: Leveraging parameter and return type data
- **Import Resolution**: Understanding cross-module dependencies
- **Refactoring Assistance**: Knowing symbol definitions and usages
### 3. **Multi-Modal Code Models**
Combine syntactic and semantic representations:
```python
# Conceptual sketch; a fuller, runnable version appears under "Model Training Example" below
class SemanticCodeModel(nn.Module):
    def forward(self, input_ids, semantic_embeddings, semantic_positions):
        # Process both streams
        syntactic_repr = self.language_model(input_ids)
        semantic_repr = self.semantic_projection(semantic_embeddings)
        # Cross-attention fusion
        enhanced_repr = self.cross_attention(
            syntactic_repr, semantic_repr, semantic_positions
        )
        return enhanced_repr
## 🔧 Creation Methodology
### 1. **Source Selection**
- PyTorch codebase for production-quality Python code
- Filtered files: 1 KB to 200 KB in size (see the sketch below)
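A sketch of the kind of size filter described above; the path and helper name are illustrative, not the exact pipeline code:
```python
from pathlib import Path

MIN_SIZE = 1 * 1024    # 1 KB
MAX_SIZE = 200 * 1024  # 200 KB

def collect_python_files(repo_root: str) -> list[Path]:
    """Collect .py files within the size range used for this dataset."""
    return [
        path
        for path in Path(repo_root).rglob("*.py")
        if MIN_SIZE <= path.stat().st_size <= MAX_SIZE
    ]

files = collect_python_files("pytorch")  # local checkout of the PyTorch repository
print(f"{len(files)} files selected")
```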
### 2. **Symbol Extraction**
```python
# Tree-sitter for precise symbol locations (the parser expects bytes)
tree = parser.parse(source_code.encode("utf-8"))
symbols = extract_identifiers(tree)  # Functions, classes, variables
# Jedi for semantic analysis
script = jedi.Script(code=source_code, path=file_path)
definitions = script.goto(line, column, follow_imports=True)
type_info = script.complete(line, column)
```
### 3. **Semantic Description Generation**
```python
def create_semantic_description(symbol):
    description = f"name: {symbol.name}. kind: {symbol.type}. type: {symbol.type_hint}."
    if symbol.definition:
        description += f" definition: {symbol.definition}."
    if symbol.docstring:
        description += f" description: {symbol.docstring[:100]}."
    return description
```
### 4. **Embedding and Alignment**
```python
# Generate embeddings
semantic_embeddings = embedding_model.encode(descriptions)
# Align to tokens using Tree-sitter locations
token_embeddings = align_symbols_to_tokens(
    symbols, semantic_embeddings, tokenizer_output
)
```
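The `align_symbols_to_tokens` helper above is left abstract in this card. One plausible implementation matches Tree-sitter character spans against the tokenizer's `offset_mapping`; the `start_char`/`end_char` attributes are assumptions of this sketch, not the pipeline's actual interface:
```python
import numpy as np

def align_symbols_to_tokens(symbols, symbol_embeddings, tokenizer_output, embed_dim=1024):
    """Assign each symbol's embedding to every token overlapping its source span.

    symbols           : objects with .start_char / .end_char attributes (assumed)
    symbol_embeddings : array of shape (num_symbols, embed_dim)
    tokenizer_output  : result of tokenizer(code, return_offsets_mapping=True)
    """
    offsets = tokenizer_output["offset_mapping"]  # (char_start, char_end) per token
    token_embeddings = np.zeros((len(offsets), embed_dim), dtype=np.float32)

    for symbol, embedding in zip(symbols, symbol_embeddings):
        for i, (tok_start, tok_end) in enumerate(offsets):
            # Overlapping character ranges mean the token is part of the symbol
            if tok_start < symbol.end_char and tok_end > symbol.start_char:
                token_embeddings[i] = embedding

    return token_embeddings

# The binary semantic_positions mask then falls out of the alignment:
# semantic_positions = (np.abs(token_embeddings).sum(axis=1) > 0).astype(np.int64)
```
Overlap-based matching assigns a symbol's embedding to every token that covers any part of its span, which is one simple way to realize the token-level alignment described above.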
## 📋 Model Training Example
```python
from transformers import AutoTokenizer, AutoModel, Trainer, TrainingArguments
import torch.nn as nn
class SemanticCodeModel(nn.Module):
    def __init__(self, base_model_name, semantic_dim=1024):
        super().__init__()
        self.base_model = AutoModel.from_pretrained(base_model_name)
        self.semantic_projection = nn.Linear(semantic_dim, self.base_model.config.hidden_size)
        self.cross_attention = nn.MultiheadAttention(
            self.base_model.config.hidden_size, num_heads=8, batch_first=True
        )

    def forward(self, input_ids, semantic_embeddings, semantic_positions, **kwargs):
        # Base language model
        outputs = self.base_model(input_ids, **kwargs)
        hidden_states = outputs.last_hidden_state
        # Project semantic embeddings
        semantic_proj = self.semantic_projection(semantic_embeddings)
        # Apply semantic mask (zero out non-symbol positions)
        masked_semantic = semantic_proj * semantic_positions.unsqueeze(-1)
        # Cross-attention fusion
        enhanced_states, _ = self.cross_attention(
            hidden_states, masked_semantic, masked_semantic
        )
        # NOTE: add a task head and loss on top of enhanced_states before training with Trainer
        return enhanced_states

# Data collator
class SemanticDataCollator:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, batch):
        # Pad sequences and create batch tensors
        max_len = max(len(item["input_ids"]) for item in batch)
        # Implement padding logic for all fields
        # ... (a padding sketch follows this example; see the repository for the full implementation)

# Training setup
model = SemanticCodeModel("Qwen/Qwen3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
data_collator = SemanticDataCollator(tokenizer)

training_args = TrainingArguments(
    output_dir="./semantic-code-model",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    warmup_steps=100,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)
trainer.train()
```
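The padding logic elided above could look roughly like the following; this is a sketch based on the dataset's field layout, not the repository's exact collator:
```python
import torch

class SemanticDataCollator:
    """Pads the tensor-valued fields of a batch to a common length."""

    def __init__(self, tokenizer, embed_dim=1024):
        self.pad_id = tokenizer.pad_token_id or 0  # fall back to 0 if no pad token is set
        self.embed_dim = embed_dim

    def __call__(self, batch):
        max_len = max(len(item["input_ids"]) for item in batch)
        input_ids, attention_mask, embeddings, positions = [], [], [], []
        for item in batch:
            pad = max_len - len(item["input_ids"])
            input_ids.append(item["input_ids"] + [self.pad_id] * pad)
            attention_mask.append(item["attention_mask"] + [0] * pad)
            embeddings.append(item["semantic_embeddings"] + [[0.0] * self.embed_dim] * pad)
            positions.append(item["semantic_positions"] + [0] * pad)
        return {
            "input_ids": torch.tensor(input_ids, dtype=torch.long),
            "attention_mask": torch.tensor(attention_mask, dtype=torch.long),
            "semantic_embeddings": torch.tensor(embeddings, dtype=torch.float),
            "semantic_positions": torch.tensor(positions, dtype=torch.long),
        }
```
The string-valued fields (`file_path`, `chunk_info`) are dropped on purpose; only the tensor inputs are needed at training time.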
## 🛠️ Requirements
```bash
# Core dependencies
pip install datasets transformers torch
pip install tree-sitter tree-sitter-python
pip install jedi sentence-transformers
pip install numpy huggingface-hub
```
## 📖 Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{pytorch_semantic_dataset_2025,
  title={PyTorch Semantic Code Dataset: Syntactic Tokenization with Semantic Enrichment},
  author={Antoine Descamps},
  year={2025},
  url={https://huggingface.co/datasets/ant-des/pytorch-semantic-dataset-fixed},
  note={A semantically enriched Python code dataset combining Tree-sitter and Jedi analysis}
}
```
## 📄 License
This dataset is released under the MIT License.
**Note**: The source code is from the PyTorch project, which is licensed under BSD-3-Clause. This dataset contains processed representations of that code for research purposes.
## 📊 Dataset Card
### Dataset Summary
This dataset provides semantically enriched Python code samples where each token is augmented with semantic information extracted through static analysis. It enables training of code models that can leverage both syntactic and semantic understanding.
### Supported Tasks
- Code completion with semantic awareness
- Type inference and prediction
- Symbol resolution and cross-referencing
- Code summarization and documentation
- Semantic code search and retrieval
### Languages
- Programming Language: Python
- Natural Language: English (for documentation and comments)
### Data Source
- PyTorch codebase (BSD-3-Clause licensed)
- Processed using Tree-sitter and Jedi static analysis tools
### Personal and Sensitive Information
This dataset contains only source code and does not include personal or sensitive information.