---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 4483331
    num_examples: 342
  - name: validation
    num_bytes: 622617
    num_examples: 39
  download_size: 2534957
  dataset_size: 5105948
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: apache-2.0
---
# Q Code Pretraining Corpus
This dataset provides a corpus of Q programming language code and documentation, curated for pretraining large language models and code models.
## 📊 Dataset Overview
- **Total Data**: Over 1.6 million Q tokens, 5+ million characters
- **Documents**: 342 training chunks, 39 validation chunks
- **Source Types**:
- Open-source Q repositories (MIT/Apache 2.0 licenses)
- Official KDB+/Q documentation and tutorials
- Hand-curated code snippets and scripts
- **Format**: Cleaned, deduplicated, chunked for efficient pretraining
## 🎯 Key Features
- **Q-Only**: All data is pure Q code and documentation (no mixed-in Python or other non-code noise)
- **Permissive Licensing**: All source code is MIT or Apache 2.0, suitable for both research and commercial use
- **Coverage**: Includes code from analytics, time-series, database queries, and utilities
- **Filtered & Scored**: LLM-assisted quality scoring plus manual review for top-tier data fidelity
- **Chunked & Ready**: Delivered as 4k-token chunks for immediate use with Hugging Face, TRL, or custom pipelines
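The corpus ships pre-chunked, so no preprocessing is needed to use it. For reference, 4k-token chunking of raw files can be reproduced with a sketch like the one below; the `gpt2` tokenizer and the helper name are illustrative assumptions, not the pipeline actually used for this dataset.
```python
from transformers import AutoTokenizer

# Illustrative tokenizer choice; the tokenizer used to chunk this corpus is not specified.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def chunk_text(text: str, max_tokens: int = 4096) -> list[str]:
    """Split one document into consecutive chunks of at most max_tokens tokens."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return [
        tokenizer.decode(ids[i : i + max_tokens])
        for i in range(0, len(ids), max_tokens)
    ]
```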
## 🏗️ Dataset Structure
Each record is a single text chunk containing Q code or documentation.
Splits:
- `train`: Main corpus for pretraining (342 chunks)
- `validation`: Holdout set for evaluation (39 chunks)
Sample record:
```python
{
    "text": str  # raw Q code or documentation chunk
}
```
## 🧑‍💻 Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the full Q pretraining dataset
dataset = load_dataset("morganstanley/q_pretrained_dataset")
# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
```
### Example: Previewing Data
```python
sample = dataset["train"][0]
print(sample["text"])
```
### Training Usage
This dataset is designed for language model pretraining using next-token prediction or masked language modeling objectives.
It trains efficiently with Hugging Face Transformers, TRL, or custom frameworks; a minimal sketch using Transformers follows.
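The sketch below shows next-token-prediction pretraining on this dataset with the Hugging Face `Trainer`. The `gpt2` base model, the 1024-token truncation, and the hyperparameters are placeholder assumptions for illustration only.
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("morganstanley/q_pretrained_dataset")

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    # Truncate to gpt2's 1024-token context; longer-context models can keep full 4k chunks.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="q-pretrain", per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    # mlm=False selects the causal (next-token-prediction) objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```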
## 🔤 About Q Programming Language
Q is a vector and array programming language developed by Kx Systems for high-performance analytics, finance, and time-series applications.
It features:
- Concise, functional, array-oriented syntax
- Powerful built-in operators for large-scale data manipulation
- Industry adoption in trading, banking, and real-time analytics
## 📁 Source Repositories
Major open-source Q repositories included in the corpus:
- DataIntellectTech/TorQ
- psaris/qtips
- psaris/funq
- KxSystems/ml
- finos/kdb
- LeslieGoldsmith/qprof
- jonathonmcmurray/reQ
- ...and more
All with permissive licenses (MIT or Apache 2.0).
## 📈 Data Preparation & Filtering
- **Automated Scoring**: Qwen-2.5-32B was used to score each file (0–10) for quality and relevance; only files scoring ≥ 4 were included (see the sketch after this list).
- **Manual Review**: Additional cleaning to remove non-Q files or low-value content.
- **Deduplication**: Duplicate and boilerplate code removed.
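The exact scoring prompt and pipeline code are not published with the dataset; the sketch below only illustrates the general shape of the filter described above. The `score_fn` argument stands in for the Qwen-2.5-32B judge and is a hypothetical placeholder.
```python
import hashlib
from typing import Callable

def filter_corpus(
    files: list[str],
    score_fn: Callable[[str], int],  # hypothetical LLM judge returning 0-10
    min_score: int = 4,
) -> list[str]:
    """Keep unique files whose quality score meets the threshold."""
    kept, seen = [], set()
    for text in files:
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:  # drop exact duplicates
            continue
        seen.add(digest)
        if score_fn(text) >= min_score:
            kept.append(text)
    return kept
```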
## 📝 Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{q_pretraining_corpus_2024,
  title={Q Code Pretraining Corpus},
  author={Brendan Rappazzo Hogan},
  year={2024},
  url={https://huggingface.co/datasets/bhogan/q-pretraining-corpus},
  note={Dataset for domain-adaptive pretraining of language models on the Q programming language}
}
```
**Associated Paper:** [Link to paper will be added here]