---
license: mit
task_categories:
  - text-to-speech
language:
  - pt
dataset_info:
  - config_name: audioCorpus
    features:
      - name: audio_name
        dtype: string
      - name: file_path
        dtype:
          audio:
            sampling_rate: 16000
      - name: text
        dtype: string
      - name: start_time
        dtype: float64
      - name: end_time
        dtype: float64
      - name: duration
        dtype: float64
      - name: quality
        dtype: string
      - name: speech_genre
        dtype: string
      - name: speech_style
        dtype: string
      - name: variety
        dtype: string
      - name: accent
        dtype: string
      - name: sex
        dtype: string
      - name: age_range
        dtype: string
      - name: num_speakers
        dtype: string
      - name: speaker_id
        dtype: float64
    splits:
      - name: train
        num_bytes: 1321925619.759
        num_examples: 7941
      - name: validation
        num_bytes: 81865796
        num_examples: 500
    download_size: 1397176842
    dataset_size: 1403791415.759
  - config_name: automatic
    features:
      - name: path
        dtype:
          audio:
            sampling_rate: 16000
      - name: name
        dtype: string
      - name: speaker
        dtype: string
      - name: start_time
        dtype: float64
      - name: end_time
        dtype: float64
      - name: text
        dtype: string
      - name: duration
        dtype: int64
      - name: most_common_speaker
        dtype: string
      - name: __index_level_0__
        dtype: int64
    splits:
      - name: train
        num_bytes: 1355301836.348
        num_examples: 8021
      - name: validation
        num_bytes: 66393070
        num_examples: 382
    download_size: 1403414447
    dataset_size: 1421694906.348
  - config_name: prosodic
    features:
      - name: path
        dtype:
          audio:
            sampling_rate: 16000
      - name: name
        dtype: string
      - name: speaker
        dtype: string
      - name: start_time
        dtype: float64
      - name: end_time
        dtype: float64
      - name: normalized_text
        dtype: string
      - name: text
        dtype: string
      - name: duration
        dtype: float64
      - name: type
        dtype: string
      - name: year
        dtype: int64
      - name: gender
        dtype: string
      - name: age_range
        dtype: string
      - name: total_duration
        dtype: string
      - name: quality
        dtype: string
      - name: theme
        dtype: string
    splits:
      - name: train
        num_bytes: 1365221237.321
        num_examples: 7527
      - name: validation
        num_bytes: 82345917
        num_examples: 473
    download_size: 1436686693
    dataset_size: 1447567154.321
  - config_name: test
    features:
      - name: path
        dtype:
          audio:
            sampling_rate: 16000
      - name: name
        dtype: string
      - name: speaker
        dtype: string
      - name: start_time
        dtype: string
      - name: end_time
        dtype: string
      - name: text
        dtype: string
      - name: duration
        dtype: int64
    splits:
      - name: train
        num_bytes: 2179301
        num_examples: 29
    download_size: 2125875
    dataset_size: 2179301
configs:
  - config_name: audioCorpus
    data_files:
      - split: train
        path: audioCorpus/train-*
      - split: validation
        path: audioCorpus/validation-*
  - config_name: automatic
    data_files:
      - split: train
        path: automatic/train-*
      - split: validation
        path: automatic/validation-*
  - config_name: prosodic
    data_files:
      - split: train
        path: prosodic/train-*
      - split: validation
        path: prosodic/validation-*
  - config_name: test
    data_files:
      - split: train
        path: test/train-*
---

# NURC-SP_ENTOA_TTS

## How to Load the Dataset

There are four configurations: `prosodic`, `automatic`, `audioCorpus`, and `test`. To load the dataset with the Hugging Face `datasets` library, use the following code:

```python
from datasets import load_dataset

prosodic = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="prosodic")
automatic = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="automatic")
audioCorpus = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="audioCorpus")
test = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="test")
```
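
Each configuration can then be indexed like a regular `datasets` split. The snippet below is a minimal sketch of inspecting one example from the `prosodic` configuration (reusing the `prosodic` variable loaded above); in this configuration the decoded audio lives in the `path` column.

```python
# Inspect a single example from the prosodic train split.
sample = prosodic["train"][0]

print(sample["text"])                   # human-made transcription with prosodic markings
print(sample["path"]["sampling_rate"])  # 16000
print(sample["path"]["array"].shape)    # decoded waveform as a NumPy array
```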

## Parameters of each configuration

### Prosodic Parameters

- path: The path to the audio file.
- name: The name of the original audio.
- speaker: The speaker in the segment (each distinct speaker in the original source was given an integer id). This field was automatically written by WhisperX, so it might not be accurate.
- start_time: The time the audio segment starts in the original source, in seconds.
- end_time: The time the audio segment ends in the original source, in seconds.
- normalized_text: The human-made transcription, without prosodic markings, for the given audio.
- text: The human-made transcription, with prosodic markings, for the given audio.
- duration: The duration of the audio segment in seconds.
- type: The type of the audio according to the original NURC-SP classification.
- year: The year the audio was recorded.
- gender: The speaker's sex. Divided into 'F', 'M', 'F e F', 'F e M' and 'M e M' ('F' stands for female and 'M' stands for male). Note that some audio sources have more than one speaker; in that case the sex refers to the main speaker or speakers. See the filtering example after this list.
- age_range: The speaker's age range. Divided into 'I' (25 to 35), 'II' (36 to 55) and 'III' (over 55). Note that some audio sources have more than one speaker; in that case the age range refers to the main speaker or speakers.
- total_duration: The duration of the original audio in minutes.
- quality: The human-determined quality of the audio.
- theme: The theme of the speech.
- audio: The audio data of the segment.
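
As a usage sketch (not part of the official loading code), the metadata fields above can be used to select subsets of the prosodic configuration; the example below keeps only segments whose main speaker is female and in age range 'III'.

```python
from datasets import load_dataset

prosodic = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="prosodic")

# Keep segments whose main speaker is female ('F') and over 55 ('III').
subset = prosodic["train"].filter(
    lambda ex: ex["gender"] == "F" and ex["age_range"] == "III"
)
print(len(subset), "matching segments")
```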

### Automatic Parameters

- path: The path to the audio file.
- name: The name of the original audio.
- speaker: The speaker in the segment (each distinct speaker in the original source was given an integer id). This field was automatically written by WhisperX, so it might not be accurate.
- start_time: The time the audio segment starts in the original source, in seconds.
- end_time: The time the audio segment ends in the original source, in seconds; see the duration estimate after this list.
- text: The automatic transcription for the given audio.
- duration: The duration of the audio segment in seconds.
- audio: The audio data of the segment.
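
For example, a rough estimate of how much segmented speech the automatic configuration contains can be derived from start_time and end_time (both in seconds); this is only a sketch, and the column values are assumed to follow the descriptions above.

```python
from datasets import load_dataset

automatic = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="automatic")

# Sum segment lengths (end_time - start_time, in seconds) over the train split.
starts = automatic["train"]["start_time"]
ends = automatic["train"]["end_time"]
total_hours = sum(e - s for s, e in zip(starts, ends)) / 3600
print(f"~{total_hours:.1f} hours of segmented speech in the train split")
```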

### AudioCorpus Parameters

- audio_name: The name given to the audio in the database. All segments extracted from the same source share the same name.
- file_path: The path to the audio file.
- text: The human-verified transcription for the given audio.
- start_time: The time the audio segment starts in the original source, in seconds.
- end_time: The time the audio segment ends in the original source, in seconds.
- duration: The duration of the audio segment in seconds.
- quality: Whether the audio had parts that could not be transcribed properly. Segments without such parts are rated 'high' and segments with them are rated 'low' (see the filtering example after this list).
- speech_genre: The speech genre of the original source of the segment. Divided into 'dialogue', 'interview' or 'lecture and talks'.
- speech_style: The speech style of the original source of the segment. All segments are categorized as 'spontaneous speech'.
- variety: The language variety of the audio. All segments are categorized as 'pt-br'.
- accent: The speaker's accent. All segments are categorized as 'sp-city'. Note that some audio sources have more than one speaker; in that case the accent refers to the main speaker or speakers.
- sex: The speaker's sex. Divided into 'F', 'M', 'F e F', 'F e M' and 'M e M' ('F' stands for female and 'M' stands for male). Note that some audio sources have more than one speaker; in that case the sex refers to the main speaker or speakers.
- age_range: The speaker's age range. Divided into 'I' (25 to 35), 'II' (36 to 55) and 'III' (over 55). Note that some audio sources have more than one speaker; in that case the age range refers to the main speaker or speakers.
- num_speakers: The number of speakers in the original source of the segment. This field was automatically written by WhisperX, so it might not be accurate.
- speaker_id: The speaker in the segment (each distinct speaker in the original source was given an integer id). This field was automatically written by WhisperX, so it might not be accurate.
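
As an illustrative sketch, the quality field can be used to drop segments that contain untranscribable parts (rated 'low'):

```python
from datasets import load_dataset

audioCorpus = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="audioCorpus")

# Keep only segments whose audio was rated 'high' (fully transcribable).
high_quality = audioCorpus["train"].filter(lambda ex: ex["quality"] == "high")
print(len(high_quality), "of", len(audioCorpus["train"]), "train segments are rated 'high'")
```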

### Test Parameters

- path: The path to the audio file.
- name: The name of the original audio.
- speaker: The speaker in the segment (each distinct speaker in the original source was given an integer id). This field was automatically written by WhisperX, so it might not be accurate.
- start_time: The time the audio segment starts in the original source, in seconds.
- end_time: The time the audio segment ends in the original source, in seconds.
- text: The automatic transcription for the given audio.
- duration: The duration of the audio segment in samples, i.e. seconds multiplied by the 16,000 Hz sampling rate (see the conversion sketch after this list).
- audio: The audio data of the segment.
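
Because the duration column of the test configuration is stored in samples rather than seconds, here is a small conversion sketch (assuming the 16 kHz sampling rate declared above):

```python
from datasets import load_dataset

test = load_dataset("nilc-nlp/NURC-SP_ENTOA_TTS", name="test")

SAMPLING_RATE = 16_000  # declared sampling rate of the dataset

# Convert duration from samples to seconds.
durations_s = [d / SAMPLING_RATE for d in test["train"]["duration"]]
print(f"mean segment length: {sum(durations_s) / len(durations_s):.1f} s")
```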