
Dataset Card for PUUM_passive_recordings

Dataset Details

This is a dataset containing unlabelled, unprocessed passive acoustic recordings of Hawaiian birds in the Pu'u Maka'ala Natural Area Reserve (PUUM) in Hawaii. It is intended for use in unsupervised audio analysis, classification with existing models, and other machine learning and ecology research. Additionally, this dataset contains CSV tables with weather data and bird detections.

Supported Tasks and Leaderboards

This dataset contains passive acoustic recordings collected as part of the Experiential Introduction to AI and Ecology course through the Imageomics Institute and ABC Global Center during January 2025.

This dataset is intended for use with unsupervised computer vision or acoustic machine learning models. No labels are provided, but recorder locations and recording timestamps are included, allowing for analysis of the relationship between ecological factors and variations in birdsong.

The dataset contains passive acoustic recordings from 9 recorders (6 along a phenology transect, 3 in koa restoration sites) located in the Pu'u Maka'ala Natural Area Reserve (PUUM).

Recorder Placement Map: Map showing the locations of acoustic recorders at PUUM. The phenology transect includes 6 recorders placed along an 800m transect in forested habitat. Three additional recorders were placed in koa restoration sites of varying maturity: Open Grassland, Park Land, and Closed Canopy.

Dataset Structure

csv/
    grouped_with_dist.csv
    koa_birds_single_species.csv
    koa_birds_ss_multiple_species_001.csv
    phenology_birds_single_species.csv
    phenology_birds_ss_multiple_species_001.csv
    phenology_sound_recorders.csv
    pukiawe_detections_w_visits.csv
    weather.csv
koa_data/ # 3 recorders
    <recorder_id>/
        <recorder_id>_Summary.txt
        Data/
            <recorder_id>_YYYYMMDD_HHMMSS.wav
            ...
    <recorder_id>/
        ...
    ...
    recorder_data_summary.txt
phenology_data/ # 6 recorders
    <recorder_id>/
        <recorder_id>_Summary.txt
        Data/
            <recorder_id>_YYYYMMDD_HHMMSS.wav
            ...
    <recorder_id>/
        ...
    ...
    phenology_metadata.csv

File Descriptions:

  • recorder_data_summary.txt: Summary statistics file for koa_data recordings, including file counts, total size in MB, count of files shorter/longer than 5 minutes, and total recording duration in hours for each recorder.

  • check.py: Python script that generates the recorder_data_summary.txt file by analyzing the WAV files in the koa_data folder. It requires only the Python standard library (wave, contextlib, os, datetime); a sketch of the kind of computation it performs follows this list.
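
Below is a minimal sketch of the kind of per-recorder summary check.py computes, using only the standard-library modules listed above. The exact logic and output format of the bundled script may differ, and the path in the usage comment is illustrative.

import contextlib
import os
import wave

# Summarize the WAV files in one recorder's Data/ folder: file count,
# total size in MB, files shorter/longer than 5 minutes, and total hours.
def summarize_recorder(data_dir):
    n_files = 0
    total_bytes = 0
    short_files = 0   # shorter than 5 minutes
    long_files = 0    # 5 minutes or longer
    total_seconds = 0.0
    for name in sorted(os.listdir(data_dir)):
        if not name.lower().endswith(".wav"):
            continue
        path = os.path.join(data_dir, name)
        n_files += 1
        total_bytes += os.path.getsize(path)
        with contextlib.closing(wave.open(path, "rb")) as wav:
            seconds = wav.getnframes() / wav.getframerate()
        total_seconds += seconds
        if seconds < 5 * 60:
            short_files += 1
        else:
            long_files += 1
    return {
        "files": n_files,
        "size_mb": total_bytes / 1e6,
        "short_files": short_files,
        "long_files": long_files,
        "hours": total_seconds / 3600,
    }

# Example (illustrative path):
# print(summarize_recorder("koa_data/<recorder_id>/Data"))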

Data Instances

All audio files are named <recorder_id>_YYYYMMDD_HHMMSS.wav and are stored inside a folder named after the recorder ID: within a Data/ subfolder of the recorder ID folder for koa_data, or directly under the recorder ID folder for phenology_data. Each recording starts at the time encoded in the filename. Most recordings are 1 hour long, but some may be shorter. Recordings were made with a SongMeter Micro 2.
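
Each filename therefore encodes the recorder ID and the recording start time. A minimal parsing sketch is shown below; it assumes the two trailing underscore-separated fields are always the date and time, with everything before them being the recorder ID.

from datetime import datetime
from pathlib import Path

# Split "<recorder_id>_YYYYMMDD_HHMMSS.wav" into the recorder ID and the
# recording start time. rsplit keeps any underscores inside the recorder ID.
def parse_recording_name(path):
    stem = Path(path).stem
    recorder_id, date_str, time_str = stem.rsplit("_", 2)
    start = datetime.strptime(date_str + time_str, "%Y%m%d%H%M%S")
    return recorder_id, start

# Example with a hypothetical recorder ID:
# parse_recording_name("SMM00001_20250123_060000.wav")
# -> ("SMM00001", datetime(2025, 1, 23, 6, 0))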

Data Fields

Files in csv/

grouped_with_dist.csv Spatial relationships between acoustic recorders and camera trap locations, with distances in meters.

  • recorder_id: Unique identifier for the acoustic recorder
  • camera_ids: Array of identifiers for camera traps associated with the recorder
  • camera_names: Array of plant names associated with each camera
  • distances_m: Array of distances in meters between the recorder and plants

koa_birds_single_species.csv Bird detections from koa habitat recordings with single species identifications per detection. A minimal loading example follows the field list.

  • label: Species code or abbreviation (e.g., omao)
  • date: Recording date
  • time: Recording time
  • common_name: Full common name of the bird species
  • recorder: Identifier for the acoustic recorder that captured the bird call
  • habitat: Type of habitat where the recording was made
  • time_label: Categorized time of day (e.g., Morning, Afternoon)
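
As a quick usage sketch, the detections can be aggregated by species and habitat with pandas (assumed installed); column names follow the field list above and the path is relative to the dataset root.

import pandas as pd

# Count detections per habitat and species in the koa single-species table.
koa = pd.read_csv("csv/koa_birds_single_species.csv")
counts = (
    koa.groupby(["habitat", "common_name"])
       .size()
       .reset_index(name="n_detections")
       .sort_values("n_detections", ascending=False)
)
print(counts.head(10))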

koa_birds_ss_multiple_species_001.csv Bird detections from koa habitat recordings processed with sound separation, allowing for multiple species detections per recording segment.

  • label: Species code or abbreviation (e.g., omao)
  • date: Recording date
  • time: Recording time
  • common_name: Full common name of the bird species
  • recorder: Identifier for the acoustic recorder that captured the bird call
  • probability: Classification probability assigned by the Perch model
  • time_label: Categorized time of day (e.g., Morning, Afternoon)

phenology_birds_single_species.csv Bird detections from phenology study recordings with single species identifications and associated plant data.

  • label: Species code or abbreviation (e.g., omao)
  • date: Recording date
  • time: Recording time
  • common_name: Full common name of the bird species
  • recorder: Identifier for the acoustic recorder that captured the bird call
  • plant: Closest focal plant to the recorder
  • time_label: Categorized time of day (e.g., Morning, Afternoon)

phenology_birds_ss_multiple_species_001.csv Bird detections from phenology study recordings processed with sound separation, allowing for multiple species detections per recording segment.

  • label: Species code or abbreviation (e.g., omao)
  • date: Recording date
  • time: Recording time
  • common_name: Full common name of the bird species
  • recorder: Identifier for the acoustic recorder that captured the bird call
  • probability: Classification probability assigned by the Perch model
  • time_label: Categorized time of day (e.g., Morning, Afternoon)

phenology_sound_recorders.csv Detailed field installation data for phenology recorders, including exact coordinates, installation timing, and habitat information.

  • microphone_id: Unique identifier for each acoustic recorder
  • sd_card: Identifier for SD card used in the recorder
  • n: Latitude coordinate (decimal degrees)
  • w: Longitude coordinate (decimal degrees)
  • elevation_ft: Elevation in feet where the recorder was placed
  • camera_trap_sd_card: Identifier for SD card used in nearby camera trap (if applicable)
  • camera_trap_id: Unique identifier for nearby camera trap (if applicable)
  • installation_time: Time when the recorder was installed (HH:MM format)
  • date: Date when the recorder was installed
  • note: Additional information about the location or installation
  • swapped_out_date: Date when the SD card was exchanged (if applicable)
  • habitat: Type of habitat where the recorder was placed
  • birds: Species of birds observed or targeted at the location

pukiawe_detections_w_visits.csv Camera trap detections of visitors to pukiawe plants, tracking visit patterns and species interactions. Includes image embeddings for classification.

  • filepath: Path to the cropped detection image
  • date: Date when the image was captured (YYYY-MM-DD format)
  • bbox: Bounding box coordinates for the detection
  • confidence: Detection confidence score
  • class: Detected class (e.g., Aves, Magnoliopsida)
  • timestamp: Full timestamp of the image capture
  • common_name: Common name of the plant species (e.g., pukiawe)
  • camera_id: Identifier for the camera trap
  • species: Identified bird species visiting the plant (if applicable)
  • visit_number: Sequential number for visits of the same object/animal
  • feature_0 to feature_511: 512-dimensional image embedding features

weather.csv Daily environmental measurements including temperature, rainfall, humidity, and vegetation indices for correlation with bird activity. Weather data were obtained from the Hawai'i Climate Data Portal for the PUUM site coordinates. A date-parsing sketch follows the field list.

  • date: Date of environmental measurements in Mon-DD format (e.g., Jan-22). All dates are in 2025.
  • rainfall_mm: Daily rainfall measurement in millimeters
  • humidity_percent: Relative humidity percentage
  • mean_temp_c: Mean temperature in degrees Celsius
  • ndvi: Normalized Difference Vegetation Index (measure of vegetation health/density)
  • latitude: Latitude coordinate (decimal degrees)
  • longitude: Longitude coordinate (decimal degrees)
  • min_temp: Minimum temperature (°C)
  • max_temp: Maximum temperature (°C)
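
Because the date column stores Mon-DD strings and all measurements are from 2025, the dates can be converted to full timestamps as sketched below (pandas assumed installed, path relative to the dataset root).

import pandas as pd

# Convert "Jan-22"-style strings to full 2025 dates for time-series analysis.
weather = pd.read_csv("csv/weather.csv")
weather["date"] = pd.to_datetime(weather["date"] + "-2025", format="%b-%d-%Y")
print(weather[["date", "rainfall_mm", "mean_temp_c"]].head())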

Files in phenology_data/

phenology_metadata.csv Metadata for phenology recorders including deployment details and geographic coordinates.

  • microphone_id: Unique identifier for each acoustic recorder
  • sd_card: Identifier for SD card used in the recorder
  • n: Latitude coordinate (decimal degrees)
  • w: Longitude coordinate (decimal degrees)
  • elevation_ft: Elevation in feet where the recorder was placed
  • camera_trap_sd_card: Identifier for SD card used in nearby camera trap (if applicable)
  • camera_trap_id: Unique identifier for nearby camera trap (if applicable)
  • installation_time: Time when the recorder was installed (HH:MM format)
  • date: Date when the recorder was installed
  • note: Additional information about the location or installation
  • swapped_out_date: Date when the SD card was exchanged (if applicable)

Data Coverage and Distribution

The following figures illustrate the temporal coverage of audio recordings and the distribution of detected bird species across different habitat types.

Recording Coverage:

Phenology Audio Presence: Temporal coverage of audio recordings across the six phenology transect recorders, showing recording availability by date and recorder.

Koa Audio Presence: Temporal coverage of audio recordings across the three koa restoration site recorders.

Species Detection Patterns:

Phenology Species Count: Number of unique bird species detected at phenology transect recorders, grouped by focal plant type. Comparison shows species counts before and after source separation processing.

Koa Species Count: Number of unique bird species detected across koa restoration sites with different maturity levels (Open Grassland, Park Land, Closed Canopy).

Community Dissimilarity:

KL Divergence: Kullback-Leibler divergence heatmap showing bird community dissimilarity between recorders. Lower values (darker colors) indicate more similar bird communities, while higher values (lighter colors) indicate greater dissimilarity.

Data Splits

There is only one data split: data. If the dataset is used for model training, validation, or testing, splits must be made manually; one possible approach is sketched below.
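
The sketch below holds out entire recorders from the phenology single-species detections so that no recorder contributes to more than one split; the choice of two held-out recorders is only illustrative, and pandas is assumed to be installed.

import pandas as pd

# Hold out whole recorders to avoid the same recorder appearing in both splits.
df = pd.read_csv("csv/phenology_birds_single_species.csv")
held_out = df["recorder"].drop_duplicates().sample(2, random_state=0)
test_df = df[df["recorder"].isin(held_out)]
train_df = df[~df["recorder"].isin(held_out)]
print(len(train_df), "train rows,", len(test_df), "test rows")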

Dataset Creation

This dataset was compiled as part of the field component of the Experiential Introduction to AI and Ecology Course run by the Imageomics Institute and the AI and Biodiversity Change (ABC) Global Center. This field work was done on the island of Hawai'i January 15-30, 2025.

Curation Rationale

This dataset was created to study the relationship between Hawaiian bird activity and plant phenology in natural and restored habitats.

Source Data

These data were collected at the Pu'u Maka'ala Natural Area Reserve (PUUM), a NEON field site located on the windward slope of Mauna Loa volcano at approximately 1700m elevation on Hawai'i Island. The site includes diverse habitats ranging from grasslands to tropical rainforest, with active koa-dominated forest restoration.

Data Collection and Processing

Nine SongMeter Micro 2 audio recorders (Wildlife Acoustics) were deployed between January 23 and April 3, 2025. Six recorders were arranged along an 800-meter phenology transect in a forested area, while three additional recorders were installed at separate koa tree (Acacia koa) restoration sites representing different maturity stages (Open Grassland, Park Land, and Closed Canopy).

Each recorder was programmed to collect acoustic data daily: 30 minutes during the dawn chorus (6:00-7:00) and 15 minutes every hour from 7:00 to 19:00. Raw audio files were processed using Bird-MixIT, an unsupervised sound source separation model based on Mixture Invariant Training, to isolate individual bird vocalizations from overlapping environmental sounds. Separated sources were then classified using the Perch bird sound recognition model (v2.0), retaining all Hawaiian species detections with probability >0.01.

Who are the source data producers?

These data were produced through a collaborative effort involving members of the AI and Biodiversity Change (ABC) Global Center, the Imageomics Institute, participants in the Experiential Introduction to AI and Ecology Course, and the National Ecological Observatory Network (NEON) team. NEON team members provided crucial support for recorder deployment and field logistics at the Pu'u Maka'ala Natural Area Reserve.

Considerations for Using the Data

Bias, Risks, and Limitations

Temporal Coverage: Recordings span January 23 - April 3, 2025, capturing late winter through early spring conditions. This temporal window may not represent year-round patterns in bird activity, particularly for migratory or seasonally variable species. Users should be cautious when extrapolating findings beyond this period.

Weather Effects: Heavy rainfall events can introduce acoustic interference and reduce detection rates. The negative correlation between rainfall and bird detections observed in the data may reflect both actual behavioral changes and technical limitations of the classification pipeline during precipitation events.

Spatial Variation: The six phenology transect recorders were placed near specific focal plant species, which may introduce vegetation-related bias in bird community observations. The three koa restoration sites represent different maturity stages but may not capture the full range of restoration conditions.

Classification Limitations: Bird species classifications were generated using the Perch model with a probability threshold of 0.01, which may result in false positives for rare species or misclassifications in complex acoustic environments. The model was trained on directional recordings, and performance may degrade in passive monitoring contexts with overlapping vocalizations despite source separation preprocessing.

Incomplete Coverage: Not all recorders operated continuously throughout the study period due to battery limitations, memory capacity, or equipment issues. See the audio presence figures in this dataset for detailed coverage by recorder and date.

Road and Human Noise: Some recorders may be affected by nearby footpaths, roads, or human activity, potentially introducing non-biological acoustic interference.

Recommendations

Data Validation: For critical analyses, consider manually validating a subset of automated detections, particularly for rare or endangered species where accurate counts are essential.

Weather Integration: When analyzing bird activity patterns, incorporate the provided weather data (rainfall, temperature, humidity) to distinguish behavioral responses from technical artifacts.

Habitat Context: Use the provided metadata (recorder locations, focal plant associations, habitat classifications) to control for spatial and habitat-related variation in analyses.

Probability Thresholding: The provided CSVs include probability scores for all detections. Users may wish to apply stricter probability thresholds (e.g., >0.1 or >0.5) depending on their tolerance for false positives versus false negatives.
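
For example, a stricter cutoff can be applied to the sound-separated detections as sketched below; the 0.5 threshold is only illustrative, and pandas is assumed to be installed.

import pandas as pd

# Keep only detections above a stricter probability threshold.
det = pd.read_csv("csv/koa_birds_ss_multiple_species_001.csv")
confident = det[det["probability"] > 0.5]
print(f"{len(confident)} of {len(det)} detections remain above the threshold")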

Cross-Validation with Camera Traps: Acoustic detections can be cross-referenced with camera trap observations in pukiawe_detections_w_visits.csv to validate species presence. Use grouped_with_dist.csv to identify which camera traps are nearest to each acoustic recorder. Additional camera trap data from koa restoration sites are available in the PUUM Koa Restoration Camera Trap Dataset.

Licensing Information

This dataset is available to share and adapt for any use under the CC BY 4.0 license, provided appropriate credit is given. We ask that you cite this dataset if you make use of these data in any work or product.

Citation

If you use this dataset in your research, please cite it as:

BibTeX:

@misc{acoustic_puum_2025,
  author = {Nepovinnykh, Ekaterina and Zolotarev, Fedor and Kholiavchenko, Maksim and Banerji, Namrata and Beattie, Jacob and Keebler, Hikaru and Chen, Yuyan and Jousse, Maximiliane and Gabeff, Valentin and Potlapally, Anirudh and Meyers, Luke and Campolongo, Elizabeth and Berger-Wolf, Tanya and Rubenstein, Daniel},
  title = {PUUM Passive Acoustic Recordings (Revision 520ecee)},
  year = {2025},
  url = {https://huggingface.co/datasets/imageomics/acoustic-PUUM},
  doi = {10.57967/hf/7325},
  publisher = {Hugging Face}
}

Acknowledgements

This work was supported by both the Imageomics Institute and the AI and Biodiversity Change (ABC) Global Center. The Imageomics Institute is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). The ABC Global Center is funded by the US National Science Foundation under Award No. 2330423 and Natural Sciences and Engineering Research Council of Canada under Award No. 585136. This dataset draws on research supported by the Social Sciences and Humanities Research Council. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or Natural Sciences and Engineering Research Council of Canada.

This material is based in part upon work supported by the National Ecological Observatory Network (NEON), a program sponsored by the U.S. National Science Foundation (NSF) and operated under cooperative agreement by Battelle.

The data were gathered at the Pu'u Maka'ala Natural Area Reserve (PUUM) NEON site in Hawai'i, in accordance with Research Permit No. 241118155000-NARS. We thank the PUUM team and the Hawai'i Department of Land and Natural Resources for providing access to field sites and supporting our data collection efforts.

Dataset Card Authors

Ekaterina Nepovinnykh, Fedor Zolotarev, Maksim Kholiavchenko

Dataset Card Contact

For questions relating to this dataset, please open a Discussion in the Community tab.
