        # Apply the computed gradients to the generator and discriminator
        gen_optimizer.apply_gradients(zip(grads_gen, generator.trainable_weights))
        disc_optimizer.apply_gradients(zip(grads_disc, discriminator.trainable_weights))

        # Track the losses so they can be reported per step and per epoch
        self.gen_loss_tracker.update_state(gen_fm_loss)
        self.disc_loss_tracker.update_state(disc_loss)

        return {
            "gen_loss": self.gen_loss_tracker.result(),
            "disc_loss": self.disc_loss_tracker.result(),
        }

Training

The paper suggests that training with dynamic shapes takes around 400,000 steps (~500 epochs). For this example, we will run it for only a single epoch (819 steps). Longer training (more than 300 epochs) will almost certainly provide better results.

gen_optimizer = keras.optimizers.Adam(
    LEARNING_RATE_GEN, beta_1=0.5, beta_2=0.9, clipnorm=1
)
disc_optimizer = keras.optimizers.Adam(
    LEARNING_RATE_DISC, beta_1=0.5, beta_2=0.9, clipnorm=1
)

# Start training
generator = create_generator((None, 1))
discriminator = create_discriminator((None, 1))
mel_gan = MelGAN(generator, discriminator)
mel_gan.compile(
    gen_optimizer,
    disc_optimizer,
    generator_loss,
    feature_matching_loss,
    discriminator_loss,
)
mel_gan.fit(
    train_dataset.shuffle(200).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE), epochs=1
)

819/819 [==============================] - 641s 696ms/step - gen_loss: 0.9761 - disc_loss: 0.9350

<keras.callbacks.History at 0x7f8f702fe050>

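As noted above, good results require training for hundreds of epochs. If you do run a longer training, it is convenient to save weights periodically. The sketch below is not part of the original example; the checkpoint path and epoch count are arbitrary choices.

# Save weights every epoch in TensorFlow checkpoint format (arbitrary path)
checkpoint_cb = keras.callbacks.ModelCheckpoint(
    "melgan_ckpts/epoch_{epoch:03d}", save_weights_only=True, save_freq="epoch"
)
mel_gan.fit(
    train_dataset.shuffle(200).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE),
    epochs=300,
    callbacks=[checkpoint_cb],
)
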
Testing the model

The trained model can now be used for real-time text-to-speech translation tasks. To test how fast MelGAN inference can be, let us take a sample audio mel-spectrogram and convert it. Note that the actual model pipeline will not include the MelSpec layer, and hence this layer will be disabled during inference. The inference input will be a mel-spectrogram processed similarly to the MelSpec layer configuration.

For testing this, we will create a uniformly distributed random tensor to simulate the behavior of the inference pipeline.

# Sampling a random tensor to mimic a batch of 128 spectrograms of shape [50, 80]
audio_sample = tf.random.uniform([128, 50, 80])

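In a real pipeline, the inference input would instead be a mel-spectrogram computed from a waveform. The sketch below shows one way to do this with tf.signal; the waveform_to_mel helper and the parameter values used here (22050 Hz sampling rate, frame length 1024, frame step 256, 80 mel bins) are illustrative assumptions and must be matched to the MelSpec layer configuration used during training.

def waveform_to_mel(audio, sampling_rate=22050, frame_length=1024, frame_step=256, num_mel=80):
    # Magnitude STFT followed by a projection onto the mel scale
    stft = tf.signal.stft(audio, frame_length=frame_length, frame_step=frame_step)
    magnitude = tf.abs(stft)
    mel_matrix = tf.signal.linear_to_mel_weight_matrix(
        num_mel_bins=num_mel,
        num_spectrogram_bins=magnitude.shape[-1],
        sample_rate=sampling_rate,
    )
    return tf.matmul(magnitude, mel_matrix)

# One second of dummy audio -> a mel-spectrogram batch of shape [1, frames, 80]
dummy_wave = tf.random.uniform([1, 22050])
mel_input = waveform_to_mel(dummy_wave)
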
Let us time the inference speed. Running this, you can see that the average inference time per spectrogram ranges from 8 to 10 milliseconds on a K80 GPU, which is pretty fast.

pred = generator.predict(audio_sample, batch_size=32, verbose=1)

4/4 [==============================] - 5s 280ms/step

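To reproduce the per-spectrogram latency figure yourself, a simple wall-clock measurement (not part of the original example) can be used:

import time

start = time.time()
_ = generator.predict(audio_sample, batch_size=32)
elapsed = time.time() - start

# Average wall-clock time per spectrogram, in milliseconds
print(f"{1000 * elapsed / audio_sample.shape[0]:.2f} ms per spectrogram")
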
Conclusion

MelGAN is a highly effective architecture for spectral inversion, achieving a Mean Opinion Score (MOS) of 3.61 that considerably outperforms the Griffin-Lim algorithm, which has a MOS of just 1.57. At the same time, MelGAN is comparable with the state-of-the-art WaveGlow and WaveNet architectures on text-to-speech and speech enhancement tasks on the LJSpeech and VCTK datasets [1].

This tutorial highlights:

The advantages of using dilated convolutions that grow with the filter size

Implementation of a custom layer for on-the-fly conversion of audio waves to mel-spectrograms

Effectiveness of using the feature matching loss function for training GAN generators (a generic sketch of this idea follows the list)

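On the last point, feature matching penalizes the distance between the discriminator's intermediate feature maps for real and generated audio. The generic sketch below illustrates the idea; it is not necessarily identical to the feature_matching_loss defined earlier in this example.

import tensorflow as tf

def feature_matching_sketch(real_features, fake_features):
    # `real_features` / `fake_features`: lists of intermediate discriminator
    # outputs (one tensor per layer) for real and generated audio
    mae = tf.keras.losses.MeanAbsoluteError()
    per_layer = [mae(r, f) for r, f in zip(real_features, fake_features)]
    return tf.reduce_mean(per_layer)
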
Further reading

MelGAN paper (Kundan Kumar et al.) to understand the reasoning behind the architecture and training process

For an in-depth understanding of the feature matching loss, you can refer to Improved Techniques for Training GANs (Tim Salimans et al.).

Speaker Recognition

Classify speakers using Fast Fourier Transform (FFT) and a 1D Convnet.

Introduction

This example demonstrates how to create a model to classify speakers from the frequency-domain representation of speech recordings, obtained via Fast Fourier Transform (FFT).

It shows the following:

How to use tf.data to load, preprocess and feed audio streams into a model

How to create a 1D convolutional network with residual connections for audio classification (a generic sketch of such a block follows this list).

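The sketch below gives a generic illustration of a 1D convolution block with a residual (skip) connection; the filter sizes and input length are assumptions for illustration only, not necessarily the architecture this example ends up using.

import tensorflow as tf
from tensorflow import keras

def residual_block_sketch(x, filters):
    # Project the input with a 1x1 convolution so shapes match for the skip connection
    shortcut = keras.layers.Conv1D(filters, 1, padding="same")(x)
    y = keras.layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    y = keras.layers.Conv1D(filters, 3, padding="same")(y)
    y = keras.layers.Add()([y, shortcut])
    return keras.layers.Activation("relu")(y)

# Example: FFT inputs of length 8000 with a single channel
inputs = keras.layers.Input(shape=(8000, 1))
outputs = residual_block_sketch(inputs, filters=16)
model = keras.Model(inputs, outputs)
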
Our process:

We prepare a dataset of speech samples from different speakers, with the speaker as label.

We add background noise to these samples to augment our data.

We take the FFT of these samples (a minimal sketch of this step follows the list).

We train a 1D convnet to predict the correct speaker given a noisy FFT speech sample.

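A minimal sketch of the FFT step, assuming a batch of fixed-length waveforms of shape (batch, num_samples); the helper name and shapes here are illustrative only.

import tensorflow as tf

def waveform_to_fft(audio):
    # Apply the FFT along the last axis and keep only the magnitude of the
    # positive-frequency half
    fft = tf.signal.fft(tf.complex(audio, tf.zeros_like(audio)))
    return tf.math.abs(fft[:, : audio.shape[1] // 2])

# Example: a batch of four 1-second clips sampled at 16000 Hz
dummy_batch = tf.random.uniform([4, 16000])
print(waveform_to_fft(dummy_batch).shape)  # (4, 8000)
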
Note:

This example should be run with TensorFlow 2.3 or higher, or tf-nightly.

The noise samples in the dataset need to be resampled to a sampling rate of 16000 Hz before using the code in this example. In order to do this, you will need to have ffmpeg installed (a resampling sketch follows).

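For the resampling step, one option (assuming ffmpeg is on your PATH, and using hypothetical file names) is to call ffmpeg from Python; -ar 16000 sets the output sampling rate to 16000 Hz.

import os

# Hypothetical input/output paths
os.system("ffmpeg -hide_banner -loglevel error -y -i noise_original.wav -ar 16000 noise_16k.wav")
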
Setup

import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow import keras
from pathlib import Path
from IPython.display import display, Audio

# Get the data from https://www.kaggle.com/kongaevans/speaker-recognition-dataset/download
# and save it to the 'Downloads' folder in your HOME directory
DATASET_ROOT = os.path.join(os.path.expanduser("~"), "Downloads/16000_pcm_speeches")

# The folders in which we will put the audio samples and the noise samples
AUDIO_SUBFOLDER = "audio"
NOISE_SUBFOLDER = "noise"