---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 4483331
    num_examples: 342
  - name: validation
    num_bytes: 622617
    num_examples: 39
  download_size: 2534957
  dataset_size: 5105948
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: apache-2.0
---
# Q Code Pretraining Corpus
This dataset provides a corpus of Q programming language code and documentation, curated for pretraining large language models and code models.
## 📊 Dataset Overview
- **Total Data:** Over 1.6 million Q tokens (5+ million characters)
- **Documents:** 342 training chunks, 39 validation chunks (see the check below)
- **Source Types:**
  - Open-source Q repositories (MIT/Apache 2.0 licenses)
  - Official KDB+/Q documentation and tutorials
  - Hand-curated code snippets and scripts
- **Format:** Cleaned, deduplicated, and chunked for efficient pretraining
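
The counts above can be reproduced directly from the loaded splits. This is a minimal sketch; it uses the dataset ID from the Usage section below:

```python
from datasets import load_dataset

# Dataset ID taken from the Usage section of this card
dataset = load_dataset("morganstanley/q_pretrained_dataset")

for split in ("train", "validation"):
    chars = sum(len(text) for text in dataset[split]["text"])
    print(f"{split}: {dataset[split].num_rows} chunks, {chars:,} characters")
# Expected: 342 train chunks, 39 validation chunks, ~5M characters in total
```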
## 🎯 Key Features
- **Q-Only:** All data is pure Q language (no mixed-in Python or other non-code noise)
- **Permissive Licensing:** All source code is MIT- or Apache-2.0-licensed, suitable for both research and commercial use
- **Coverage:** Includes code for analytics, time series, database queries, and utilities
- **Filtered & Scored:** LLM-assisted quality scoring plus manual review for top-tier data fidelity
- **Chunked & Ready:** Delivered as 4k-token chunks for immediate use with Hugging Face, TRL, or custom pipelines (see the sketch after this list)
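
Chunk sizes can be inspected with any tokenizer. A quick sketch follows; the gpt2 tokenizer is an arbitrary stand-in, and exact token counts vary with the tokenizer used:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# gpt2 is an illustrative tokenizer choice, not the one used to size chunks
tokenizer = AutoTokenizer.from_pretrained("gpt2")
train = load_dataset("morganstanley/q_pretrained_dataset", split="train")

# Token lengths of the first 20 chunks
lengths = [len(tokenizer(text)["input_ids"]) for text in train["text"][:20]]
print(f"max={max(lengths)} mean={sum(lengths) / len(lengths):.0f}")
```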
## 🏗️ Dataset Structure
Each record is a single text chunk containing Q code or documentation.

**Splits:**
- `train`: Main corpus for pretraining (342 chunks)
- `validation`: Holdout set for evaluation (39 chunks)

Sample record:

```python
{
    "text": str  # Raw Q code or documentation chunk
}
```
## 🧑‍💻 Usage

### Loading the Dataset
```python
from datasets import load_dataset

# Load the full Q pretraining dataset
dataset = load_dataset("morganstanley/q_pretrained_dataset")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
```
### Example: Previewing Data

```python
sample = dataset["train"][0]
print(sample["text"])
```
### Training Usage

This dataset is designed for language model pretraining with next-token prediction or masked language modeling objectives, and it supports efficient training with Hugging Face Transformers, TRL, or custom frameworks.
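
A minimal causal-LM pretraining sketch with Transformers is shown below. The base model, sequence length, and hyperparameters are illustrative placeholders, not recommended settings:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Illustrative choices: a small stand-in model and a short max_length
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("morganstanley/q_pretrained_dataset")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="q-pretrain",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    # mlm=False gives the next-token-prediction objective described above
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```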
## 🔤 About the Q Programming Language
Q is a vector and array programming language developed by Kx Systems for high-performance analytics, finance, and time-series applications.
It features:
- Concise, functional, array-oriented syntax
- Powerful built-in operators for large-scale data manipulation
- Industry adoption in trading, banking, and real-time analytics
## 📁 Source Repositories
Major open-source Q repos included:
- DataIntellectTech/TorQ
- psaris/qtips
- psaris/funq
- KxSystems/ml
- finos/kdb
- LeslieGoldsmith/qprof
- jonathonmcmurray/reQ
- ...and more
All with permissive licenses (MIT or Apache 2.0).
## 📈 Data Preparation & Filtering
- **Automated Scoring:** Qwen-2.5-32B was used to score each file (0–10) for quality and relevance; only files scoring ≥ 4 were included.
- **Manual Review:** Additional cleaning to remove non-Q files and low-value content.
- **Deduplication:** Duplicate and boilerplate code removed (see the sketch below).
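
The pipeline itself is not released with this card. The snippet below only sketches the shape of the score-threshold and duplicate-removal steps: `llm_quality_score` is a hypothetical stand-in for the Qwen-2.5-32B scoring pass, and exact-hash deduplication is an assumption about how duplicates were detected:

```python
import hashlib

def llm_quality_score(text: str) -> float:
    """Placeholder for the Qwen-2.5-32B quality scorer (0-10).
    A trivial heuristic stands in so the sketch runs end to end."""
    return 10.0 if text.strip() else 0.0

def filter_and_dedup(files, min_score=4.0):
    """Drop exact duplicates, then keep files scoring >= min_score."""
    seen, kept = set(), []
    for text in files:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:  # exact-duplicate removal (assumed method)
            continue
        seen.add(digest)
        if llm_quality_score(text) >= min_score:
            kept.append(text)
    return kept

print(filter_and_dedup(["a:1+1", "a:1+1", "   "]))  # -> ['a:1+1']
```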
## 📝 Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{q_pretraining_corpus_2024,
  title={Q Code Pretraining Corpus},
  author={Brendan Rappazzo Hogan},
  year={2024},
  url={https://huggingface.co/datasets/bhogan/q-pretraining-corpus},
  note={Dataset for domain-adaptive pretraining of language models on the Q programming language}
}
```
Associated Paper: [Link to paper will be added here]