For batch 4, loss is 3.47.
The average loss for epoch 8 is 3.47 and mean absolute error is 1.47.

Epoch 00009: Learning rate is 0.0050.
For batch 0, loss is 3.39.
For batch 1, loss is 3.04.
For batch 2, loss is 3.10.
For batch 3, loss is 3.22.
For batch 4, loss is 3.14.
The average loss for epoch 9 is 3.14 and mean absolute error is 1.38.

Epoch 00010: Learning rate is 0.0050.
For batch 0, loss is 2.77.
For batch 1, loss is 2.89.
For batch 2, loss is 2.94.
For batch 3, loss is 2.85.
For batch 4, loss is 2.78.
The average loss for epoch 10 is 2.78 and mean absolute error is 1.30.

Epoch 00011: Learning rate is 0.0050.
For batch 0, loss is 3.69.
For batch 1, loss is 3.33.
For batch 2, loss is 3.22.
For batch 3, loss is 3.57.
For batch 4, loss is 3.79.
The average loss for epoch 11 is 3.79 and mean absolute error is 1.51.

Epoch 00012: Learning rate is 0.0010.
For batch 0, loss is 3.61.
For batch 1, loss is 3.21.
For batch 2, loss is 3.07.
For batch 3, loss is 3.34.
For batch 4, loss is 3.23.
The average loss for epoch 12 is 3.23 and mean absolute error is 1.42.

Epoch 00013: Learning rate is 0.0010.
For batch 0, loss is 2.03.
For batch 1, loss is 3.25.
For batch 2, loss is 3.23.
For batch 3, loss is 3.36.
For batch 4, loss is 3.44.
The average loss for epoch 13 is 3.44 and mean absolute error is 1.46.

Epoch 00014: Learning rate is 0.0010.
For batch 0, loss is 3.28.
For batch 1, loss is 3.14.
For batch 2, loss is 2.89.
For batch 3, loss is 2.94.
For batch 4, loss is 3.02.
The average loss for epoch 14 is 3.02 and mean absolute error is 1.38.

<tensorflow.python.keras.callbacks.History at 0x15d844410>
Built-in Keras callbacks

Be sure to check out the existing Keras callbacks by reading the API docs. Applications include logging to CSV, saving the model, visualizing metrics in TensorBoard, and a lot more!
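As a quick illustration (hypothetical model, data, and file paths, not part of the guide), a few of these built-in callbacks can be wired into model.fit like this:

import numpy as np
from tensorflow import keras

# Hypothetical tiny model and data, just to show how built-in callbacks are passed to fit().
model = keras.Sequential([keras.Input(shape=(8,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
x, y = np.random.rand(64, 8), np.random.rand(64, 1)

callbacks = [
    keras.callbacks.CSVLogger("training_log.csv"),  # append per-epoch metrics to a CSV file
    keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True),  # keep only the best checkpoint
    keras.callbacks.TensorBoard(log_dir="./logs"),  # write summaries for TensorBoard
]
model.fit(x, y, validation_split=0.2, epochs=3, callbacks=callbacks)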
Transfer learning & fine-tuning
Author: fchollet
Date created: 2020/04/15
Last modified: 2020/05/12
Description: Complete guide to transfer learning & fine-tuning in Keras.

View in Colab • GitHub source

Setup

import numpy as np
import tensorflow as tf
from tensorflow import keras
Introduction

Transfer learning consists of taking features learned on one problem, and leveraging them on a new, similar problem. For instance, features from a model that has learned to identify raccoons may be useful to kick-start a model meant to identify tanukis.

Transfer learning is usually done for tasks where your dataset has too little data to train a full-scale model from scratch.

The most common incarnation of transfer learning in the context of deep learning is the following workflow:

Take layers from a previously trained model.
Freeze them, so as to avoid destroying any of the information they contain during future training rounds.
Add some new, trainable layers on top of the frozen layers. They will learn to turn the old features into predictions on a new dataset.
Train the new layers on your dataset.

A last, optional step is fine-tuning, which consists of unfreezing the entire model you obtained above (or part of it), and re-training it on the new data with a very low learning rate. This can potentially achieve meaningful improvements, by incrementally adapting the pretrained features to the new data.
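As a rough sketch of this workflow (assuming an Xception base pretrained on ImageNet, 150x150 RGB inputs, and a binary classification head; new_dataset is a placeholder for your own data):

base_model = keras.applications.Xception(
    weights="imagenet",      # pretrained on ImageNet
    input_shape=(150, 150, 3),
    include_top=False,       # drop the original ImageNet classifier head
)
base_model.trainable = False  # steps 1-2: take the pretrained layers and freeze them

inputs = keras.Input(shape=(150, 150, 3))
x = base_model(inputs, training=False)  # keep layers like BatchNormalization in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)      # step 3: new, trainable classifier head
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(from_logits=True))
# model.fit(new_dataset, epochs=20)     # step 4: train only the new layers

# Optional fine-tuning: unfreeze and re-train end to end with a very low learning rate.
base_model.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss=keras.losses.BinaryCrossentropy(from_logits=True))
# model.fit(new_dataset, epochs=10)

Calling the base model with training=False keeps layers such as BatchNormalization in inference mode, which matters once you unfreeze the base for fine-tuning.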
First, we will go over the Keras trainable API in detail, which underlies most transfer learning & fine-tuning workflows.

Then, we'll demonstrate the typical workflow by taking a model pretrained on the ImageNet dataset, and retraining it on the Kaggle "cats vs dogs" classification dataset.

This is adapted from Deep Learning with Python and the 2016 blog post "building powerful image classification models using very little data".
Freezing layers: understanding the trainable attribute

Layers & models have three weight attributes:

weights is the list of all weight variables of the layer.
trainable_weights is the list of those that are meant to be updated (via gradient descent) to minimize the loss during training.
non_trainable_weights is the list of those that aren't meant to be trained. Typically, they are updated by the model during the forward pass.

Example: the Dense layer has 2 trainable weights (kernel & bias)

layer = keras.layers.Dense(3)
layer.build((None, 4))  # Create the weights

print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))

weights: 2
trainable_weights: 2
non_trainable_weights: 0

In general, all weights are trainable weights. The only built-in layer that has non-trainable weights is the BatchNormalization layer. It uses non-trainable weights to keep track of the mean and variance of its inputs during training. To learn how to use non-trainable weights in your own custom layers, see the guide to writing new layers from scratch.
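For illustration, a minimal sketch of a custom layer holding one non-trainable weight (a hypothetical ComputeSum layer, in the spirit of that guide, which accumulates the sum of its inputs):

class ComputeSum(keras.layers.Layer):
    def __init__(self, input_dim):
        super().__init__()
        # A non-trainable weight: updated manually in call(), never by the optimizer.
        self.total = self.add_weight(
            shape=(input_dim,), initializer="zeros", trainable=False
        )

    def call(self, inputs):
        self.total.assign_add(tf.reduce_sum(inputs, axis=0))
        return self.total


my_sum = ComputeSum(2)
print("non_trainable_weights:", len(my_sum.non_trainable_weights))  # 1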
Example: the BatchNormalization layer has 2 trainable weights and 2 non-trainable weights

layer = keras.layers.BatchNormalization()
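Following the pattern of the Dense example above, one way to inspect the weight counts (a sketch; the two trainable weights are gamma and beta, the two non-trainable ones are the moving mean and variance):

layer.build((None, 4))  # Create the weights

print("weights:", len(layer.weights))                               # 4 in total
print("trainable_weights:", len(layer.trainable_weights))           # 2 (gamma & beta)
print("non_trainable_weights:", len(layer.non_trainable_weights))   # 2 (moving mean & variance)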