---
dataset_info:
  features:
    - name: analysis_sample_rate
      dtype: int32
    - name: artist_7digitalid
      dtype: int32
    - name: artist_familiarity
      dtype: float64
    - name: artist_hotttnesss
      dtype: float64
    - name: artist_id
      dtype: string
    - name: artist_latitude
      dtype: float64
    - name: artist_location
      dtype: string
    - name: artist_longitude
      dtype: float64
    - name: artist_mbid
      dtype: string
    - name: artist_mbtags
      sequence: binary
    - name: artist_mbtags_count
      sequence: int64
    - name: artist_name
      dtype: string
    - name: artist_playmeid
      dtype: int32
    - name: artist_terms
      sequence: binary
    - name: artist_terms_freq
      sequence: float64
    - name: artist_terms_weight
      sequence: float64
    - name: audio_md5
      dtype: string
    - name: bars_confidence
      sequence: float64
    - name: bars_start
      sequence: float64
    - name: beats_confidence
      sequence: float64
    - name: beats_start
      sequence: float64
    - name: danceability
      dtype: float64
    - name: duration
      dtype: float64
    - name: end_of_fade_in
      dtype: float64
    - name: energy
      dtype: float64
    - name: key
      dtype: int32
    - name: key_confidence
      dtype: float64
    - name: loudness
      dtype: float64
    - name: mode
      dtype: int32
    - name: mode_confidence
      dtype: float64
    - name: num_songs
      dtype: int64
    - name: release
      dtype: string
    - name: release_7digitalid
      dtype: int32
    - name: sections_confidence
      sequence: float64
    - name: sections_start
      sequence: float64
    - name: segments_confidence
      sequence: float64
    - name: segments_loudness_max
      sequence: float64
    - name: segments_loudness_max_time
      sequence: float64
    - name: segments_loudness_start
      sequence: float64
    - name: segments_pitches
      sequence:
        sequence: float64
    - name: segments_start
      sequence: float64
    - name: segments_timbre
      sequence:
        sequence: float64
    - name: similar_artists
      sequence: binary
    - name: song_hotttnesss
      dtype: float64
    - name: song_id
      dtype: string
    - name: start_of_fade_out
      dtype: float64
    - name: tatums_confidence
      sequence: float64
    - name: tatums_start
      sequence: float64
    - name: tempo
      dtype: float64
    - name: time_signature
      dtype: int32
    - name: time_signature_confidence
      dtype: float64
    - name: title
      dtype: string
    - name: track_7digitalid
      dtype: int32
    - name: track_id
      dtype: string
    - name: year
      dtype: int32
  splits:
    - name: train
      num_bytes: 2365768621
      num_examples: 10000
  download_size: 1041881893
  dataset_size: 2365768621
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Million Song Subset (Processed Version)

## Overview

This dataset is a structured extraction of the Million Song Subset: the original HDF5 files have been converted into a tabular format for easier access and analysis.

## Source

- Original dataset: Million Song Dataset (LabROSA, Columbia University & The Echo Nest)
- Subset used: Million Song Subset (10,000 songs)
- URL: http://millionsongdataset.com

## Processing Steps

1. **Extraction**: used `hdf5_getters.py` to retrieve all available fields from each `.h5` file (see the single-file sketch after this list).
2. **Parallel processing**: sped up extraction with `ProcessPoolExecutor`.
3. **Conversion**: structured the extracted records into a Pandas DataFrame.
4. **Storage**: saved the DataFrame as a Parquet file for efficient usage.
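As a minimal illustration of the first step, the sketch below reads a single track with `hdf5_getters` (the file path is hypothetical; `hdf5_getters.py` must be importable, and string fields come back as raw bytes):

```python
import hdf5_getters

# Hypothetical path to one track file inside the extracted subset
h5 = hdf5_getters.open_h5_file_read("MillionSongSubset/A/A/A/TRAAAAW128F429D538.h5")
try:
    print(hdf5_getters.get_title(h5))        # song title, as bytes
    print(hdf5_getters.get_artist_name(h5))  # artist name, as bytes
    print(hdf5_getters.get_tempo(h5))        # estimated tempo (BPM)
finally:
    h5.close()
```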

## Format

- **Columns**: all available attributes from the original dataset, including artist metadata, song-level features, and audio analysis arrays.
- **File format**: Parquet (optimized for efficient querying and storage).
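Because the data ships as Parquet shards (`data/train-*`), it can also be queried in place. As one possible approach, the sketch below uses DuckDB's `hf://` support (an assumption: requires duckdb >= 0.10.3; the glob mirrors the shard layout declared in the metadata above):

```python
import duckdb

# Scan the Parquet shards on the Hub without downloading the full dataset
rows = duckdb.sql("""
    SELECT artist_name, title, tempo, year
    FROM 'hf://datasets/trojblue/million-song-subset/data/train-*.parquet'
    WHERE year > 0
    ORDER BY tempo DESC
    LIMIT 5
""").df()
print(rows)
```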

## Usage

- Load the dataset with the `datasets` library:

  ```python
  from datasets import load_dataset

  ds = load_dataset("trojblue/million-song-subset")
  ```

- Explore and analyze the various musical attributes, as sketched below.
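For quick exploration, a few scalar columns can be pulled into pandas (a small sketch; the column names come from the schema above, and `year` is 0 when the release year is unknown in the source data):

```python
from datasets import load_dataset

ds = load_dataset("trojblue/million-song-subset", split="train")

# Materialize a handful of scalar columns as a DataFrame
df = ds.select_columns(["artist_name", "title", "tempo", "year"]).to_pandas()

print(df.head())
print(df.loc[df["year"] > 0, "tempo"].mean())  # mean tempo of tracks with a known year
```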

## License

For licensing details, see the [Million Song Dataset website](http://millionsongdataset.com).

## Appendix: Processing Code

The dataset was converted using the following snippet:

```python
import os
from concurrent.futures import ProcessPoolExecutor

import numpy as np
import pandas as pd
import unibox as ub
from tqdm import tqdm

# https://github.com/tbertinmahieux/MSongsDB/blob/0c276e289606d5bd6f3991f713e7e9b1d4384e44/PythonSrc/hdf5_getters.py
import hdf5_getters

# Root of the extracted Million Song Subset
dataset_path = "/lv0/yada/dataproc5/data/MillionSongSubset"

# Function to extract all available fields from an HDF5 file
def extract_song_data(file_path):
    """Extracts all available fields from an HDF5 song file using hdf5_getters."""
    song_data = {}

    try:
        with hdf5_getters.open_h5_file_read(file_path) as h5:
            # Get all getter functions from hdf5_getters
            getters = [func for func in dir(hdf5_getters) if func.startswith("get_")]

            for getter in getters:
                try:
                    # Dynamically call each getter function
                    value = getattr(hdf5_getters, getter)(h5)

                    # Optimize conversions
                    if isinstance(value, np.ndarray):
                        value = value.tolist()
                    elif isinstance(value, bytes):
                        value = value.decode()

                    # Store in dictionary with a cleaned-up key name
                    song_data[getter[4:]] = value

                except Exception:
                    # Some getters raise for fields missing in this file; skip them
                    continue

    except Exception as e:
        print(f"Error processing {file_path}: {e}")
    
    return song_data

# Function to process multiple files in parallel
def process_files_in_parallel(h5_files, num_workers=8):
    """Processes multiple .h5 files in parallel."""
    all_songs = []

    with ProcessPoolExecutor(max_workers=num_workers) as executor:
        for song_data in tqdm(executor.map(extract_song_data, h5_files), total=len(h5_files)):
            if song_data:
                all_songs.append(song_data)
    
    return all_songs

if __name__ == "__main__":
    # Find all .h5 files under the dataset root
    h5_files = [
        os.path.join(root, file)
        for root, _, files in os.walk(dataset_path)
        for file in files
        if file.endswith(".h5")
    ]

    # Process files in parallel
    all_songs = process_files_in_parallel(h5_files, num_workers=24)

    # Convert to a Pandas DataFrame and upload to the Hugging Face Hub
    df = pd.DataFrame(all_songs)
    ub.saves(df, "hf://trojblue/million-song-subset", private=False)
```
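A note on the design: parsing the HDF5 files is CPU-bound in Python, so a process pool (rather than a thread pool) sidesteps the GIL; `num_workers` should roughly match the machine's core count, and the `if __name__ == "__main__":` guard keeps worker processes from re-executing the script on platforms that spawn rather than fork.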