Traceix AI Security Telemetry

Each dataset is a JSONL file where each line describes a single file analyzed by Traceix. For every file you get the following sections (see the snippet after this list):

  • file_capabilities – high-level behaviors and capabilities (CAPA-style, mapped to ATT&CK and MBC tags such as Execution/T1129, Discovery/T1083, etc.).
  • file_exif_data – parsed EXIF metadata (file size, type, timestamps, company/product info, subsystem, linker/OS versions, etc.).
  • model_classification_info – Traceix model verdict (safe / malicious), classification timestamp, and inference latency in seconds.
  • decrypted_training_data – numeric feature vector actually used for training/inference (PE header fields, section statistics, imports/resources counts, entropy stats, etc.).
  • metadata – model version and accuracy, upload metadata (timestamp, SHA-256, license), and payment information (THRT amount, Solana transaction hash + explorer URL, price at time of payment).
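
For a quick look at the raw structure, here is a minimal sketch that reads one line of a downloaded monthly export with the standard library (the filename matches the example further down; the exact field names inside each section are assumptions based on the schema above):

import json

# Peek at the first raw record of a downloaded monthly export
with open("traceix-telemetry-corpus-2025-12.jsonl") as f:
    record = json.loads(f.readline())

# Top-level sections described in the list above
print(list(record.keys()))

# Model verdict; "identified_class" follows the training example below
print(record["model_classification_info"].get("identified_class"))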

All records concern malware analysis. Datasets are exported automatically by Traceix on a monthly schedule and published as-is under the CC BY 4.0 license.
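
To see which monthly exports exist without downloading anything, you can list the repository's files (a minimal sketch using huggingface_hub; the .jsonl naming pattern is inferred from the single example below):

from huggingface_hub import HfApi

# List the monthly JSONL exports in the dataset repository
api = HfApi()
files = api.list_repo_files(
    "PerkinsFund/traceix-ai-security-telemetry", repo_type="dataset"
)
print(sorted(f for f in files if f.endswith(".jsonl")))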

You can quickly load a monthly corpus, sanity-check it, and train a toy classifier on it:

from datasets import load_dataset
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


# Load the Traceix telemetry dataset
ds = load_dataset(
    "PerkinsFund/traceix-ai-security-telemetry",
    data_files="traceix-telemetry-corpus-2025-12.jsonl",  # Or whatever month you want
    split="train",
)

# Flatten the nested JSON sections into dotted column names
df = ds.to_pandas()
df_flat = pd.json_normalize(df.to_dict(orient="records"))

# Define the features and label based on the schema above
feature_cols = [
    "decrypted_training_data.SizeOfCode",
    "decrypted_training_data.SectionsMeanEntropy",
    "decrypted_training_data.ImportsNb",
]

label_col = "model_classification_info.identified_class"

# Drop rows with missing features or labels (optional, but keeps the example simple)
df_flat = df_flat.dropna(subset=feature_cols + [label_col])

X = df_flat[feature_cols].values
y = (df_flat[label_col] == "malicious").astype(int)

# Split into train and test sets (80/20)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a basic file classifier
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))