---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: semantic_embeddings
    sequence:
      sequence: float64
  - name: semantic_positions
    sequence: int64
  - name: attention_mask
    sequence: int8
  - name: file_path
    dtype: string
  - name: chunk_info
    dtype: string
  - name: num_symbols
    dtype: int64
  splits:
  - name: train
    num_bytes: 2953926590
    num_examples: 1540
  download_size: 62991471
  dataset_size: 2953926590
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
---
# PyTorch Semantic Code Dataset

A semantically-enriched Python code dataset that combines syntactic tokenization with deep semantic analysis from Language Server Protocol (LSP)-style tooling (Tree-sitter and Jedi).

## 🎯 Overview

This dataset enhances tokenized Python code with semantic embeddings derived from static analysis tools (Tree-sitter + Jedi), giving models both a syntactic and a semantic view of code symbols. Symbol tokens are aligned with rich semantic information including type hints, definitions, documentation, and cross-references.

**Key Features:**

- 🔤 **Tokenized Python Code**: Using the Qwen3-0.6B tokenizer
- 🧠 **Semantic Embeddings**: 1024-dimensional vectors from Qwen3-Embedding-0.6B
- 🔍 **Symbol Analysis**: Type information, definitions, and cross-references via Jedi
- 📍 **Precise Alignment**: Token-level mapping between syntax and semantics
- 🏗️ **Production Code**: Real PyTorch codebase for authentic patterns

## 📊 Dataset Statistics

| Metric | Value |
|--------|-------|
| **Total Sequences** | 1,540 |
| **Training Samples** | 1,232 |
| **Evaluation Samples** | 308 |
| **Average Sequence Length** | ~200 tokens (256 max length) |
| **Semantic Coverage** | ~35% of tokens have semantic information |
| **Embedding Dimension** | 1024 |
| **Source Code** | PyTorch codebase |

## 🏗️ Dataset Structure

Each sample contains:

```python
{
    "input_ids": [2, 847, 3288, ...],                # Tokenized code (Qwen3-0.6B)
    "semantic_embeddings": [[0.1, -0.2, ...], ...],  # One 1024D embedding per token
    "semantic_positions": [0, 0, 1, 1, 0, ...],      # Binary mask (1 = has semantic info)
    "attention_mask": [1, 1, 1, 1, 1, ...],          # Standard attention mask
    "file_path": "torch/nn/modules/linear.py",       # Source file
    "chunk_info": "lines_45_120",                    # Code chunk information
    "num_symbols": 23                                # Number of semantic symbols
}
```

### Field Descriptions

- **`input_ids`**: Token IDs from the Qwen3-0.6B tokenizer
- **`semantic_embeddings`**: One 1024D vector per token, containing semantic information for symbol tokens and zeros for non-symbols
- **`semantic_positions`**: Binary mask indicating which tokens have meaningful semantic embeddings
- **`attention_mask`**: Standard attention mask for the sequence
- **`file_path`**: Path to the original Python file
- **`chunk_info`**: Information about which part of the file this sequence represents
- **`num_symbols`**: Count of tokens that received semantic enrichment

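These fields are mutually consistent: `semantic_positions` marks exactly the rows of `semantic_embeddings` that carry information, and `num_symbols` counts them. A quick sanity-check sketch, for any `sample` loaded as shown in the Quick Start below (the assertions simply restate the field descriptions above):

```python
import numpy as np

emb = np.asarray(sample["semantic_embeddings"])   # (sequence_length, 1024)
pos = np.asarray(sample["semantic_positions"])    # (sequence_length,)

# One embedding row and one mask entry per token
assert len(sample["input_ids"]) == emb.shape[0] == pos.shape[0]

# Rows where the mask is 0 should be all-zero placeholder vectors
assert not emb[pos == 0].any()

# num_symbols counts the tokens that received semantic enrichment
assert pos.sum() == sample["num_symbols"]
```
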
## 🔬 Semantic Information

The semantic embeddings encode rich information extracted via Jedi analysis:

### What's Embedded

- **Type Information**: `Type[ABC]`, `(self, event) -> None`
- **Definitions**: Function signatures, class definitions
- **Documentation**: Docstrings and comments
- **Cross-References**: Where symbols are defined
- **Import Resolution**: Module and package information
- **Scope Analysis**: Variable and function scope

### Example Semantic Descriptions

```python
# For token "_StreamBase"
("name: _StreamBase. kind: class_def. type: Type[_StreamBase]. "
 "definition: class _StreamBase. description: Base stream class abstraction.")
```

## 🚀 Quick Start

### Loading the Dataset

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load dataset (hosted with a single "train" split)
dataset = load_dataset("ant-des/pytorch-semantic-dataset-fixed")

# Load the corresponding tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Create train/eval splits locally; an 80/20 split matches the
# 1,232 / 308 statistics above (the seed is arbitrary)
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)
train_dataset = splits["train"]
eval_dataset = splits["test"]

print(f"Training samples: {len(train_dataset)}")
print(f"Evaluation samples: {len(eval_dataset)}")
```
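
Each field comes back as plain Python lists by default. For ad-hoc inspection you can ask 🤗 Datasets to hand out torch tensors instead; a minimal sketch (the training example further down keeps lists and converts them in its collator):

```python
# Temporarily view examples as torch tensors instead of Python lists
# (requires torch to be installed)
with train_dataset.formatted_as(
    type="torch",
    columns=["input_ids", "semantic_embeddings", "semantic_positions", "attention_mask"],
):
    sample = train_dataset[0]
    print(sample["semantic_embeddings"].shape)  # (sequence_length, 1024)
    print(sample["semantic_positions"].sum())   # number of semantic tokens
```
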
### Inspecting Samples

```python
# Get a sample
sample = train_dataset[0]

# Reconstruct the code
code = tokenizer.decode(sample["input_ids"], skip_special_tokens=True)
print("Code:", code[:200] + "...")

# Check semantic coverage
semantic_tokens = sum(sample["semantic_positions"])
total_tokens = len(sample["semantic_positions"])
coverage = semantic_tokens / total_tokens * 100
print(f"Semantic coverage: {coverage:.1f}%")

# Find semantic tokens
for i, (token_id, has_semantic) in enumerate(zip(sample["input_ids"], sample["semantic_positions"])):
    if has_semantic:
        token_text = tokenizer.decode([token_id])
        print(f"Semantic token at position {i}: '{token_text}'")
```

## 🎯 Use Cases

### 1. **Semantic Code Completion**

Train language models that understand code semantics for better completions:

```python
# Illustrative only: the model sees both syntax and semantics
input_ids = [class_token, identifier_token]
semantic_info = [zero_embedding, class_definition_embedding]
# → Better understanding of class structure
```

### 2. **Code Understanding Tasks**

- **Variable Type Inference**: Using semantic type information
- **Function Signature Prediction**: Leveraging parameter and return type data
- **Import Resolution**: Understanding cross-module dependencies
- **Refactoring Assistance**: Knowing symbol definitions and usages

### 3. **Multi-Modal Code Models**

Combine syntactic and semantic representations (schematic sketch; a fuller version appears in the training example below):

```python
class SemanticCodeModel(nn.Module):
    def forward(self, input_ids, semantic_embeddings, semantic_positions):
        # Process both streams
        syntactic_repr = self.language_model(input_ids)
        semantic_repr = self.semantic_projection(semantic_embeddings)

        # Cross-attention fusion, gated by the semantic position mask
        enhanced_repr = self.cross_attention(
            syntactic_repr, semantic_repr, semantic_positions
        )
        return enhanced_repr
```

## 🔧 Creation Methodology

### 1. **Source Selection**

- PyTorch codebase for production-quality Python code
- Filtered files: 1 KB to 200 KB size range (see the filtering sketch below)
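
A minimal sketch of the file-size filter (the checkout path here is illustrative, not part of the original pipeline):

```python
from pathlib import Path

MIN_SIZE, MAX_SIZE = 1_000, 200_000  # roughly 1 KB to 200 KB

def collect_python_files(repo_root):
    """Yield .py files whose size falls inside the configured range."""
    for path in Path(repo_root).rglob("*.py"):
        if MIN_SIZE <= path.stat().st_size <= MAX_SIZE:
            yield path

files = list(collect_python_files("pytorch/"))  # local PyTorch checkout
print(f"Kept {len(files)} files")
```
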
### 2. **Symbol Extraction**

```python
# Tree-sitter for precise symbol locations
tree = parser.parse(source_code.encode("utf-8"))  # Tree-sitter parses bytes
symbols = extract_identifiers(tree)               # Functions, classes, variables

# Jedi for semantic analysis
script = jedi.Script(code=source_code, path=file_path)
definitions = script.goto(line, column, follow_imports=True)
type_info = script.infer(line, column)            # inferred Names carry type information
```
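
`extract_identifiers` is not defined in the snippet above; a minimal version, assuming the standard py-tree-sitter node API (`type`, `text`, `start_point`, `children`), could look like this:

```python
def extract_identifiers(tree):
    """Collect (name, (row, column)) pairs for every identifier node in the parse tree."""
    identifiers = []

    def walk(node):
        if node.type == "identifier":
            # start_point is a 0-indexed (row, column) position in the source
            identifiers.append((node.text.decode("utf-8"), node.start_point))
        for child in node.children:
            walk(child)

    walk(tree.root_node)
    return identifiers
```
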
### 3. **Semantic Description Generation**

```python
def create_semantic_description(symbol):
    description = f"name: {symbol.name}. kind: {symbol.type}. type: {symbol.type_hint}."

    if symbol.definition:
        description += f" definition: {symbol.definition}."

    if symbol.docstring:
        description += f" description: {symbol.docstring[:100]}."

    return description
```

### 4. **Embedding and Alignment**

```python
# Generate embeddings
semantic_embeddings = embedding_model.encode(descriptions)

# Align to tokens using Tree-sitter locations
token_embeddings = align_symbols_to_tokens(
    symbols, semantic_embeddings, tokenizer_output
)
```
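
The `embedding_model` above refers to Qwen3-Embedding-0.6B (see Key Features). One way to load it, assuming sentence-transformers as listed in the requirements below:

```python
from sentence_transformers import SentenceTransformer

# Qwen3-Embedding-0.6B produces the 1024-dimensional vectors stored in the dataset
embedding_model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

descriptions = [
    "name: _StreamBase. kind: class_def. type: Type[_StreamBase]. "
    "definition: class _StreamBase. description: Base stream class abstraction."
]
semantic_embeddings = embedding_model.encode(descriptions)
print(semantic_embeddings.shape)  # (1, 1024)
```
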
## 📋 Model Training Example

```python
from transformers import AutoTokenizer, AutoModel, Trainer, TrainingArguments
import torch.nn as nn


class SemanticCodeModel(nn.Module):
    def __init__(self, base_model_name, semantic_dim=1024):
        super().__init__()
        self.base_model = AutoModel.from_pretrained(base_model_name)
        self.semantic_projection = nn.Linear(semantic_dim, self.base_model.config.hidden_size)
        self.cross_attention = nn.MultiheadAttention(
            self.base_model.config.hidden_size, num_heads=8, batch_first=True
        )

    def forward(self, input_ids, semantic_embeddings, semantic_positions, **kwargs):
        # Base language model
        outputs = self.base_model(input_ids, **kwargs)
        hidden_states = outputs.last_hidden_state

        # Project semantic embeddings into the model's hidden size
        semantic_proj = self.semantic_projection(semantic_embeddings)

        # Zero out positions without semantic information
        masked_semantic = semantic_proj * semantic_positions.unsqueeze(-1).float()

        # Cross-attention fusion: code tokens attend to semantic vectors
        enhanced_states, _ = self.cross_attention(
            hidden_states, masked_semantic, masked_semantic
        )

        # Note: for Trainer to optimize anything, add a task head that returns a loss
        return enhanced_states


# Data collator
class SemanticDataCollator:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, batch):
        # Pad sequences and create batch tensors
        max_len = max(len(item["input_ids"]) for item in batch)

        # Implement padding logic for all fields
        # ... (see full implementation in repository)


# Training setup
model = SemanticCodeModel("Qwen/Qwen3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
data_collator = SemanticDataCollator(tokenizer)

training_args = TrainingArguments(
    output_dir="./semantic-code-model",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    warmup_steps=100,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)

trainer.train()
```
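
The collator body is elided above ("see full implementation in repository"). A minimal sketch of the padding it needs to perform, assuming the fields are still plain Python lists; this is not the repository's implementation:

```python
import torch


class SimpleSemanticCollator:
    """Pad every field to the longest sequence in the batch and return tensors."""

    def __init__(self, pad_token_id=0, semantic_dim=1024):
        self.pad_token_id = pad_token_id
        self.semantic_dim = semantic_dim

    def __call__(self, batch):
        max_len = max(len(item["input_ids"]) for item in batch)
        input_ids, attention_mask, positions, embeddings = [], [], [], []
        for item in batch:
            pad = max_len - len(item["input_ids"])
            input_ids.append(list(item["input_ids"]) + [self.pad_token_id] * pad)
            attention_mask.append(list(item["attention_mask"]) + [0] * pad)
            positions.append(list(item["semantic_positions"]) + [0] * pad)
            embeddings.append(
                list(item["semantic_embeddings"]) + [[0.0] * self.semantic_dim] * pad
            )
        return {
            "input_ids": torch.tensor(input_ids, dtype=torch.long),
            "attention_mask": torch.tensor(attention_mask, dtype=torch.long),
            "semantic_positions": torch.tensor(positions, dtype=torch.float),
            "semantic_embeddings": torch.tensor(embeddings, dtype=torch.float),
        }
```
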
## 🛠️ Requirements

```bash
# Core dependencies
pip install datasets transformers torch
pip install tree-sitter tree-sitter-python
pip install jedi sentence-transformers
pip install numpy huggingface-hub
```

## 📖 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{pytorch_semantic_dataset_2025,
  title={PyTorch Semantic Code Dataset: Syntactic Tokenization with Semantic Enrichment},
  author={Antoine Descamps},
  year={2025},
  url={https://huggingface.co/datasets/ant-des/pytorch-semantic-dataset-fixed},
  note={A semantically-enriched Python code dataset combining Tree-sitter and Jedi analysis}
}
```

## 📄 License

This dataset is released under the MIT license.

**Note**: The source code is from the PyTorch project, which is licensed under BSD-3-Clause. This dataset contains processed representations of that code for research purposes.

## 📊 Dataset Card

### Dataset Summary

This dataset provides semantically-enriched Python code samples in which symbol tokens are augmented with semantic information extracted through static analysis. It enables training of code models that can leverage both syntactic and semantic understanding.

### Supported Tasks

- Code completion with semantic awareness
- Type inference and prediction
- Symbol resolution and cross-referencing
- Code summarization and documentation
- Semantic code search and retrieval

### Languages

- Programming language: Python
- Natural language: English (for documentation and comments)

### Data Source

- PyTorch codebase (BSD-3-Clause licensed)
- Processed using Tree-sitter and Jedi static analysis tools

### Personal and Sensitive Information

This dataset contains only source code and does not include personal or sensitive information.

---