Deep Learning

Practice Projects

P0: Image Classification

In this project, we'll classify images from the Flower Color Images dataset. The dataset is very simple: 210 images (128x128x3) of 10 species of flowering plants, plus a label file, flower_labels.csv. The photos are in .png format, and the labels are integers.

Label => Flower Name

  • 0 => phlox
  • 1 => rose
  • 2 => calendula
  • 3 => iris
  • 4 => max chrysanthemum
  • 5 => bellflower
  • 6 => viola
  • 7 => rudbeckia laciniata
  • 8 => peony
  • 9 => aquilegia

We'll preprocess the images, split the data into training, validation, and testing sets, and train a neural network on the samples. The images need to be normalized, and the labels need to be one-hot encoded.
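For instance, one-hot encoding replaces an integer label with a ten-dimensional indicator vector that has a single 1 at the label's index. A minimal sketch with Keras' to_categorical:

from keras.utils import to_categorical
# Label 3 (iris) becomes a vector with a 1 in position 3 and 0 elsewhere:
to_categorical([3], 10)
# [[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]]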

We are going to use Keras, the Python deep learning library.

At the end, we'll see the neural network's predictions on sample images.

Step 0. Style and Libraries

Let's set up the style of the Jupyter notebook and import the software libraries. Code cells marked with hide_code can be hidden with the button defined below.

In [1]:
%%html
<style>
@import url('https://fonts.googleapis.com/css?family=Orbitron|Roboto');
body {background-color: aliceblue;} 
a {color: #4876ff; font-family: 'Roboto';} 
h1 {color: #348ABD; font-family: 'Orbitron'; text-shadow: 4px 4px 4px #ccc;} 
h2, h3 {color: slategray; font-family: 'Roboto'; text-shadow: 4px 4px 4px #ccc;}
h4 {color: #348ABD; font-family: 'Orbitron';}
span {text-shadow: 4px 4px 4px #ccc;}
div.output_prompt, div.output_area pre {color: slategray;}
div.input_prompt, div.output_subarea {color: #4876ff;}      
div.output_stderr pre {background-color: aliceblue;}  
div.output_stderr {background-color: slategrey;}                        
</style>
<script>
code_show = true; 
function code_display() {
    if (code_show) {
        $('div.input').each(function(id) {
            if (id == 0 || $(this).html().indexOf('hide_code') > -1) {$(this).hide();}
        });
        $('div.output_prompt').css('opacity', 0);
    } else {
        $('div.input').each(function(id) {$(this).show();});
        $('div.output_prompt').css('opacity', 1);
    };
    code_show = !code_show;
} 
$(document).ready(code_display);
</script>
<form action="javascript: code_display()">
<input style="color: #348ABD; background: aliceblue; opacity: 0.8;"
type="submit" value="Click to display or hide code cells">
</form>                       
In [22]:
hide_code = ''
import numpy as np 
import pandas as pd

from PIL import ImageFile
from tqdm import tqdm
import h5py
import cv2

import matplotlib.pylab as plt
from matplotlib import cm
%matplotlib inline

from sklearn.model_selection import train_test_split

from keras.utils import to_categorical
from keras.preprocessing import image as keras_image

from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, GlobalAveragePooling1D, GlobalAveragePooling2D
from keras.layers import Activation, Flatten, Dropout, BatchNormalization
from keras.layers import Conv2D, MaxPooling2D, GlobalMaxPooling2D
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from keras.layers.advanced_activations import PReLU, LeakyReLU

Step 1. Load and Explore the Data

Run the following cell to load the dataset and convert the image files into tensors.

In [23]:
hide_code
# Function for processing an image
def image_to_tensor(img_path):
    img = keras_image.load_img("data/flower_images/" + img_path, target_size=(128, 128))
    x = keras_image.img_to_array(img)
    return np.expand_dims(x, axis=0)
# Function for creating the data tensor
def data_to_tensor(img_paths):
    list_of_tensors = [image_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)

ImageFile.LOAD_TRUNCATED_IMAGES = True 
# Load the data
data = pd.read_csv("data/flower_images/flower_labels.csv")
files = data['file']
targets = data['label'].values
tensors = data_to_tensor(files);
100%|██████████| 210/210 [00:06<00:00, 33.77it/s]

Run the following cell to display the data shapes.

In [24]:
hide_code
# Print the shape
print ('Tensor shape:', tensors.shape)
print ('Target shape', targets.shape)
Tensor shape: (210, 128, 128, 3)
Target shape (210,)

We can create a list of flower names and display image examples.

In [25]:
hide_code
# Create the name list
names = ['phlox', 'rose', 'calendula', 'iris', 'max chrysanthemum', 
         'bellflower', 'viola', 'rudbeckia laciniata', 'peony', 'aquilegia']
In [26]:
hide_code
# Read from files and display images using OpenCV
def display_images(img_path, ax):
    img = cv2.imread("data/flower_images/" + img_path)
    ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    
fig = plt.figure(figsize=(20, 10))
for i in range(8):
    ax = fig.add_subplot(2, 4, i + 1, xticks=[], yticks=[], title=names[targets[i]])
    display_images(files[i], ax)

Step 2. Save and Load the Data

The data tensors can be saved as an HDF5 (.h5) file.

In [29]:
hide_code
# Create the tensor file
with h5py.File('FlowerColorImages.h5', 'w') as f:
    f.create_dataset('images', data = tensors)
    f.create_dataset('labels', data = targets)

If we decide to come back to this notebook or have to restart it, we can start here.

In [30]:
hide_code
# Read the h5 file
f = h5py.File('FlowerColorImages.h5', 'r')

# List all groups
keys = list(f.keys())
keys
Out[30]:
['images', 'labels']
In [31]:
hide_code
# Create tensors and targets
tensors = np.array(f[keys[0]])
targets = np.array(f[keys[1]])
print ('Tensor shape:', tensors.shape)
print ('Target shape', targets.shape)
Tensor shape: (210, 128, 128, 3)
Target shape (210,)
In [32]:
hide_code
# Create a csv file
images_csv = tensors.reshape(210,128*128*3)
np.savetxt("flower_images.csv", images_csv, fmt='%i', delimiter=",")
In [33]:
hide_code
# Read the pandas dataframe from csv
data_images = pd.read_csv("flower_images.csv", header=None)
data_images.iloc[:10,:10]
Out[33]:
0 1 2 3 4 5 6 7 8 9
0 13 22 10 14 23 9 16 24 10 16
1 38 49 30 37 50 30 38 52 30 40
2 65 83 48 72 87 58 74 90 62 81
3 162 53 102 147 66 91 156 80 97 169
4 193 52 78 194 51 76 195 58 85 197
5 53 76 55 53 76 55 53 77 56 53
6 8 9 8 8 9 9 9 9 9 8
7 9 9 8 9 9 9 8 8 8 8
8 195 127 169 188 118 160 135 76 101 55
9 7 7 7 8 7 7 8 9 8 9
In [34]:
hide_code
# Read image tensors from the dataframe
tensors = data_images.values
tensors = tensors.reshape(-1,128,128,3)

Step 3. Implement Preprocess Functions

Normalize

In the cell below, normalize the image tensors by scaling the pixel values to the [0, 1] range.

In [35]:
hide_code
# TODO: normalize the tensors
tensors = tensors.astype('float32')/255

One-hot encode

Now we'll one-hot encode the targets using the to_categorical function.

In [36]:
hide_code
# TODO: one-hot encode the targets
targets = to_categorical(targets, 10)

Split

Apply the train_test_split function to split the data into training and testing sets, then split the testing part in half to create a validation set: 10% of the data for testing and 10% for validation. (An alternative two-call split is sketched after the cell below.)

In [37]:
hide_code
# TODO: split the data
x_train, x_test, y_train, y_test = train_test_split(tensors, targets, 
                                                    test_size = 0.2, 
                                                    random_state = 1)
n = int(len(x_test)/2)
x_valid, y_valid = x_test[:n], y_test[:n]
x_test, y_test = x_test[n:], y_test[n:]
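The same 80/10/10 proportions could also be produced with two calls to train_test_split; a minimal alternative sketch (not the cell used in this run):

from sklearn.model_selection import train_test_split
# Hold out 20% of the data, then split that holdout in half:
# 80% training, 10% validation, 10% testing.
x_train, x_eval, y_train, y_eval = train_test_split(tensors, targets,
                                                    test_size=0.2, random_state=1)
x_valid, x_test, y_valid, y_test = train_test_split(x_eval, y_eval,
                                                    test_size=0.5, random_state=1)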

Let's print the shapes of these data sets.

In [38]:
hide_code
# Print the shape
x_train.shape, x_test.shape, x_valid.shape, y_train.shape, y_test.shape, y_valid.shape
Out[38]:
((168, 128, 128, 3),
 (21, 128, 128, 3),
 (21, 128, 128, 3),
 (168, 10),
 (21, 10),
 (21, 10))

We can display an image example from the training set.

In [39]:
hide_code
# Read and display a tensor using Matplotlib
print('Label: ', names[np.argmax(y_train[3])])
plt.figure(figsize=(3,3))
plt.imshow(x_train[3]);
Label:  peony

Step 4. Create a Model

Define a model architecture and compile the model.

In [45]:
hide_code
def model():
    model = Sequential()
    # TODO: Define a model architecture

    model.add(Conv2D(32, (5, 5), padding='same', input_shape=x_train.shape[1:]))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    model.add(Conv2D(96, (5, 5)))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    model.add(GlobalMaxPooling2D())
    
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.02))
    model.add(Dropout(0.5))  

    model.add(Dense(10))
    model.add(Activation('softmax'))
    
    # TODO: Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
    
    return model

model = model()
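
Before training, it can be useful to inspect the compiled architecture; Keras' summary method prints each layer with its output shape and parameter count:

# Print a layer-by-layer overview of the model
model.summary()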

Step 5. Train the Model

In [46]:
hide_code
# Create callbacks
checkpointer = ModelCheckpoint(filepath='weights.best.model.hdf5', 
                               verbose=2, save_best_only=True)
lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                 patience=5, verbose=2, factor=0.2)
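
EarlyStopping is imported above but not used in this run. A minimal sketch of how it could be added as a third callback (the patience value here is an assumption, not a tuned setting):

# Hypothetical callback: stop training when val_loss has not improved
# for 10 consecutive epochs.
early_stopping = EarlyStopping(monitor='val_loss', patience=10, verbose=2)
# It would then be passed to fit() together with the others:
# callbacks=[checkpointer, lr_reduction, early_stopping]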
In [47]:
hide_code
# Train the model
history = model.fit(x_train, y_train, 
                    epochs=60, batch_size=64, verbose=2,
                    validation_data=(x_valid, y_valid),
                    callbacks=[checkpointer,lr_reduction])
Train on 168 samples, validate on 21 samples
Epoch 1/60
 - 24s - loss: 2.4176 - acc: 0.1071 - val_loss: 2.2944 - val_acc: 0.1429

Epoch 00001: val_loss improved from inf to 2.29438, saving model to weights.best.model.hdf5
Epoch 2/60
 - 17s - loss: 2.2637 - acc: 0.1607 - val_loss: 2.2729 - val_acc: 0.2381

Epoch 00002: val_loss improved from 2.29438 to 2.27291, saving model to weights.best.model.hdf5
Epoch 3/60
 - 15s - loss: 2.2231 - acc: 0.2024 - val_loss: 2.2149 - val_acc: 0.0952

Epoch 00003: val_loss improved from 2.27291 to 2.21487, saving model to weights.best.model.hdf5
Epoch 4/60
 - 15s - loss: 2.0522 - acc: 0.2738 - val_loss: 2.1304 - val_acc: 0.2857

Epoch 00004: val_loss improved from 2.21487 to 2.13042, saving model to weights.best.model.hdf5
Epoch 5/60
 - 14s - loss: 2.1062 - acc: 0.2381 - val_loss: 2.4247 - val_acc: 0.0476

Epoch 00005: val_loss did not improve from 2.13042
Epoch 6/60
 - 14s - loss: 2.3037 - acc: 0.1667 - val_loss: 1.9101 - val_acc: 0.5238

Epoch 00006: val_loss improved from 2.13042 to 1.91009, saving model to weights.best.model.hdf5
Epoch 7/60
 - 14s - loss: 1.6845 - acc: 0.4821 - val_loss: 1.7827 - val_acc: 0.3810

Epoch 00007: val_loss improved from 1.91009 to 1.78271, saving model to weights.best.model.hdf5
Epoch 8/60
 - 14s - loss: 1.7298 - acc: 0.3869 - val_loss: 1.6842 - val_acc: 0.4286

Epoch 00008: val_loss improved from 1.78271 to 1.68422, saving model to weights.best.model.hdf5
Epoch 9/60
 - 17s - loss: 1.6130 - acc: 0.3512 - val_loss: 1.6172 - val_acc: 0.3333

Epoch 00009: val_loss improved from 1.68422 to 1.61716, saving model to weights.best.model.hdf5
Epoch 10/60
 - 16s - loss: 1.7121 - acc: 0.3214 - val_loss: 1.7153 - val_acc: 0.4286

Epoch 00010: val_loss did not improve from 1.61716
Epoch 11/60
 - 15s - loss: 1.3887 - acc: 0.5119 - val_loss: 1.5362 - val_acc: 0.4286

Epoch 00011: val_loss improved from 1.61716 to 1.53624, saving model to weights.best.model.hdf5
Epoch 12/60
 - 15s - loss: 1.3484 - acc: 0.5119 - val_loss: 1.4272 - val_acc: 0.4286

Epoch 00012: val_loss improved from 1.53624 to 1.42720, saving model to weights.best.model.hdf5
Epoch 13/60
 - 15s - loss: 1.2723 - acc: 0.5238 - val_loss: 1.3821 - val_acc: 0.3810

Epoch 00013: val_loss improved from 1.42720 to 1.38207, saving model to weights.best.model.hdf5
Epoch 14/60
 - 14s - loss: 1.2425 - acc: 0.5595 - val_loss: 1.3050 - val_acc: 0.6190

Epoch 00014: val_loss improved from 1.38207 to 1.30504, saving model to weights.best.model.hdf5
Epoch 15/60
 - 14s - loss: 1.1588 - acc: 0.5417 - val_loss: 1.3751 - val_acc: 0.4762

Epoch 00015: val_loss did not improve from 1.30504
Epoch 16/60
 - 15s - loss: 1.1532 - acc: 0.5476 - val_loss: 1.1977 - val_acc: 0.6190

Epoch 00016: val_loss improved from 1.30504 to 1.19769, saving model to weights.best.model.hdf5
Epoch 17/60
 - 15s - loss: 1.0678 - acc: 0.6429 - val_loss: 1.1876 - val_acc: 0.5714

Epoch 00017: val_loss improved from 1.19769 to 1.18762, saving model to weights.best.model.hdf5
Epoch 18/60
 - 16s - loss: 1.0349 - acc: 0.6250 - val_loss: 1.3852 - val_acc: 0.5714

Epoch 00018: val_loss did not improve from 1.18762
Epoch 19/60
 - 17s - loss: 1.1022 - acc: 0.5952 - val_loss: 1.2204 - val_acc: 0.5714

Epoch 00019: val_loss did not improve from 1.18762
Epoch 20/60
 - 16s - loss: 1.0499 - acc: 0.6369 - val_loss: 1.1288 - val_acc: 0.5714

Epoch 00020: val_loss improved from 1.18762 to 1.12883, saving model to weights.best.model.hdf5
Epoch 21/60
 - 18s - loss: 0.9115 - acc: 0.6667 - val_loss: 1.0649 - val_acc: 0.5238

Epoch 00021: val_loss improved from 1.12883 to 1.06492, saving model to weights.best.model.hdf5
Epoch 22/60
 - 18s - loss: 0.8894 - acc: 0.6964 - val_loss: 1.0290 - val_acc: 0.7143

Epoch 00022: val_loss improved from 1.06492 to 1.02897, saving model to weights.best.model.hdf5
Epoch 23/60
 - 19s - loss: 0.9873 - acc: 0.6488 - val_loss: 1.1472 - val_acc: 0.5714

Epoch 00023: val_loss did not improve from 1.02897
Epoch 24/60
 - 17s - loss: 0.8569 - acc: 0.7024 - val_loss: 1.1170 - val_acc: 0.6667

Epoch 00024: val_loss did not improve from 1.02897
Epoch 25/60
 - 16s - loss: 0.7714 - acc: 0.7262 - val_loss: 1.0507 - val_acc: 0.6667

Epoch 00025: val_loss did not improve from 1.02897
Epoch 26/60
 - 16s - loss: 0.7573 - acc: 0.7381 - val_loss: 0.8306 - val_acc: 0.7619

Epoch 00026: val_loss improved from 1.02897 to 0.83057, saving model to weights.best.model.hdf5
Epoch 27/60
 - 14s - loss: 0.7888 - acc: 0.7560 - val_loss: 0.8813 - val_acc: 0.6667

Epoch 00027: val_loss did not improve from 0.83057
Epoch 28/60
 - 16s - loss: 0.7922 - acc: 0.6667 - val_loss: 1.0744 - val_acc: 0.5714

Epoch 00028: val_loss did not improve from 0.83057
Epoch 29/60
 - 17s - loss: 1.0727 - acc: 0.6071 - val_loss: 1.7181 - val_acc: 0.4762

Epoch 00029: val_loss did not improve from 0.83057
Epoch 30/60
 - 15s - loss: 1.2424 - acc: 0.5536 - val_loss: 0.9296 - val_acc: 0.5714

Epoch 00030: val_loss did not improve from 0.83057
Epoch 31/60
 - 16s - loss: 0.7216 - acc: 0.7619 - val_loss: 0.8453 - val_acc: 0.7143

Epoch 00031: val_loss did not improve from 0.83057

Epoch 00031: ReduceLROnPlateau reducing learning rate to 0.0004000000189989805.
Epoch 32/60
 - 16s - loss: 0.6475 - acc: 0.7381 - val_loss: 0.8274 - val_acc: 0.7143

Epoch 00032: val_loss improved from 0.83057 to 0.82738, saving model to weights.best.model.hdf5
Epoch 33/60
 - 16s - loss: 0.5847 - acc: 0.7857 - val_loss: 0.8345 - val_acc: 0.7143

Epoch 00033: val_loss did not improve from 0.82738
Epoch 34/60
 - 16s - loss: 0.5553 - acc: 0.8036 - val_loss: 0.8259 - val_acc: 0.8095

Epoch 00034: val_loss improved from 0.82738 to 0.82586, saving model to weights.best.model.hdf5
Epoch 35/60
 - 15s - loss: 0.5743 - acc: 0.8095 - val_loss: 0.8227 - val_acc: 0.8095

Epoch 00035: val_loss improved from 0.82586 to 0.82270, saving model to weights.best.model.hdf5
Epoch 36/60
 - 15s - loss: 0.5259 - acc: 0.8155 - val_loss: 0.8211 - val_acc: 0.7619

Epoch 00036: val_loss improved from 0.82270 to 0.82112, saving model to weights.best.model.hdf5
Epoch 37/60
 - 16s - loss: 0.5372 - acc: 0.8155 - val_loss: 0.7977 - val_acc: 0.8095

Epoch 00037: val_loss improved from 0.82112 to 0.79768, saving model to weights.best.model.hdf5
Epoch 38/60
 - 15s - loss: 0.5547 - acc: 0.7976 - val_loss: 0.7927 - val_acc: 0.8095

Epoch 00038: val_loss improved from 0.79768 to 0.79267, saving model to weights.best.model.hdf5
Epoch 39/60
 - 14s - loss: 0.4661 - acc: 0.8452 - val_loss: 0.7744 - val_acc: 0.8095

Epoch 00039: val_loss improved from 0.79267 to 0.77436, saving model to weights.best.model.hdf5
Epoch 40/60
 - 15s - loss: 0.4751 - acc: 0.8631 - val_loss: 0.7618 - val_acc: 0.7619

Epoch 00040: val_loss improved from 0.77436 to 0.76177, saving model to weights.best.model.hdf5
Epoch 41/60
 - 16s - loss: 0.5012 - acc: 0.8333 - val_loss: 0.7460 - val_acc: 0.7619

Epoch 00041: val_loss improved from 0.76177 to 0.74602, saving model to weights.best.model.hdf5
Epoch 42/60
 - 15s - loss: 0.4396 - acc: 0.8333 - val_loss: 0.7285 - val_acc: 0.8095

Epoch 00042: val_loss improved from 0.74602 to 0.72847, saving model to weights.best.model.hdf5
Epoch 43/60
 - 16s - loss: 0.4874 - acc: 0.8393 - val_loss: 0.7244 - val_acc: 0.8095

Epoch 00043: val_loss improved from 0.72847 to 0.72444, saving model to weights.best.model.hdf5
Epoch 44/60
 - 16s - loss: 0.4043 - acc: 0.8571 - val_loss: 0.7045 - val_acc: 0.7619

Epoch 00044: val_loss improved from 0.72444 to 0.70448, saving model to weights.best.model.hdf5
Epoch 45/60
 - 16s - loss: 0.4152 - acc: 0.8512 - val_loss: 0.7107 - val_acc: 0.8095

Epoch 00045: val_loss did not improve from 0.70448
Epoch 46/60
 - 14s - loss: 0.4279 - acc: 0.8631 - val_loss: 0.7115 - val_acc: 0.8095

Epoch 00046: val_loss did not improve from 0.70448
Epoch 47/60
 - 15s - loss: 0.4140 - acc: 0.8333 - val_loss: 0.7424 - val_acc: 0.7619

Epoch 00047: val_loss did not improve from 0.70448
Epoch 48/60
 - 15s - loss: 0.4255 - acc: 0.8512 - val_loss: 0.6989 - val_acc: 0.8095

Epoch 00048: val_loss improved from 0.70448 to 0.69890, saving model to weights.best.model.hdf5
Epoch 49/60
 - 14s - loss: 0.3951 - acc: 0.8631 - val_loss: 0.6935 - val_acc: 0.7619

Epoch 00049: val_loss improved from 0.69890 to 0.69351, saving model to weights.best.model.hdf5
Epoch 50/60
 - 15s - loss: 0.4279 - acc: 0.8631 - val_loss: 0.7127 - val_acc: 0.8095

Epoch 00050: val_loss did not improve from 0.69351
Epoch 51/60
 - 15s - loss: 0.4581 - acc: 0.7976 - val_loss: 0.6687 - val_acc: 0.8095

Epoch 00051: val_loss improved from 0.69351 to 0.66874, saving model to weights.best.model.hdf5
Epoch 52/60
 - 15s - loss: 0.3932 - acc: 0.8690 - val_loss: 0.6499 - val_acc: 0.8095

Epoch 00052: val_loss improved from 0.66874 to 0.64992, saving model to weights.best.model.hdf5
Epoch 53/60
 - 15s - loss: 0.4108 - acc: 0.8690 - val_loss: 0.6591 - val_acc: 0.8095

Epoch 00053: val_loss did not improve from 0.64992
Epoch 54/60
 - 14s - loss: 0.4111 - acc: 0.8452 - val_loss: 0.6556 - val_acc: 0.8095

Epoch 00054: val_loss did not improve from 0.64992
Epoch 55/60
 - 15s - loss: 0.4009 - acc: 0.8750 - val_loss: 0.6695 - val_acc: 0.8095

Epoch 00055: val_loss did not improve from 0.64992
Epoch 56/60
 - 14s - loss: 0.3638 - acc: 0.8869 - val_loss: 0.6262 - val_acc: 0.8095

Epoch 00056: val_loss improved from 0.64992 to 0.62618, saving model to weights.best.model.hdf5
Epoch 57/60
 - 16s - loss: 0.3415 - acc: 0.8810 - val_loss: 0.6437 - val_acc: 0.8095

Epoch 00057: val_loss did not improve from 0.62618
Epoch 58/60
 - 16s - loss: 0.3801 - acc: 0.8631 - val_loss: 0.6547 - val_acc: 0.8095

Epoch 00058: val_loss did not improve from 0.62618
Epoch 59/60
 - 17s - loss: 0.3325 - acc: 0.9048 - val_loss: 0.6747 - val_acc: 0.8095

Epoch 00059: val_loss did not improve from 0.62618
Epoch 60/60
 - 17s - loss: 0.3665 - acc: 0.8988 - val_loss: 0.6398 - val_acc: 0.8095

Epoch 00060: val_loss did not improve from 0.62618
In [53]:
hide_code
# Train the model with image generation
data_generator = keras_image.ImageDataGenerator(shear_range=0.3, 
                                                zoom_range=0.3,
                                                rotation_range=30,
                                                horizontal_flip=True)
dg_history = model.fit_generator(data_generator.flow(x_train, y_train, batch_size=64),
                                 steps_per_epoch=189, epochs=5, verbose=2, 
                                 validation_data=(x_valid, y_valid),
                                 callbacks=[checkpointer,lr_reduction])
Epoch 1/5
 - 962s - loss: 0.3813 - acc: 0.8655 - val_loss: 0.4870 - val_acc: 0.8095

Epoch 00001: val_loss improved from 0.62618 to 0.48698, saving model to weights.best.model.hdf5
Epoch 2/5
 - 946s - loss: 0.2381 - acc: 0.9186 - val_loss: 0.3921 - val_acc: 0.9048

Epoch 00002: val_loss improved from 0.48698 to 0.39206, saving model to weights.best.model.hdf5
Epoch 3/5
 - 1017s - loss: 0.1480 - acc: 0.9513 - val_loss: 0.3717 - val_acc: 0.9048

Epoch 00003: val_loss improved from 0.39206 to 0.37171, saving model to weights.best.model.hdf5
Epoch 4/5
 - 1023s - loss: 0.0970 - acc: 0.9711 - val_loss: 0.3273 - val_acc: 0.9524

Epoch 00004: val_loss improved from 0.37171 to 0.32734, saving model to weights.best.model.hdf5
Epoch 5/5
 - 1089s - loss: 0.0623 - acc: 0.9824 - val_loss: 0.3523 - val_acc: 0.9524

Epoch 00005: val_loss did not improve from 0.32734
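
The fit calls return history objects that record the loss and accuracy per epoch. A minimal sketch for visualizing the first run's curves (this Keras version stores accuracy under the 'acc'/'val_acc' keys):

# Plot the training and validation loss of the first run
plt.figure(figsize=(10, 4))
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='valid loss')
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend();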

Step 6. Evaluate and Save the Model

We should get an accuracy well above the 10% random-guessing baseline for ten classes. Let's try to reach the 60-70% level.

In [54]:
hide_code
# Load the weights with the best validation loss
model.load_weights('weights.best.model.hdf5')
# Calculate classification accuracy on the testing set
score = model.evaluate(x_test, y_test)
score
21/21 [==============================] - 14s 662ms/step
Out[54]:
[0.6508561372756958, 0.8095238208770752]
In [55]:
hide_code
# Save/reload models
model.save('model.h5')
model = load_model('model.h5')

The trained model has been saved to the current folder.

Step 7. Display Predictions

In [56]:
hide_code
# Model predictions for the testing dataset
y_test_predict = model.predict_classes(x_test)
In [59]:
hide_code
# Display true labels and predictions
fig = plt.figure(figsize=(18, 18))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=16, replace=False)):
    ax = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_test[idx]))
    pred_idx = y_test_predict[idx]
    true_idx = np.argmax(y_test[idx])
    ax.set_title("{} ({})".format(names[pred_idx], names[true_idx]),
                 color=("#4876ff" if pred_idx == true_idx else "darkred"))