Deep Learning

Practice Projects

P4: Style Recognition

Step 0. Style and Libraries

Let's choose a style of the Jupyter notebook and import the software libraries.

The command hide_code will display or hide the code cells.

In [1]:
%%html
<style>
@import url('https://fonts.googleapis.com/css?family=Orbitron|Roboto');
body {background-color: aliceblue;} 
a {color: #4876ff; font-family: 'Roboto';} 
h1 {color: #348ABD; font-family: 'Orbitron'; text-shadow: 4px 4px 4px #ccc;} 
h2, h3 {color: slategray; font-family: 'Roboto'; text-shadow: 4px 4px 4px #ccc;}
h4 {color: #348ABD; font-family: 'Orbitron';}
span {text-shadow: 4px 4px 4px #ccc;}
div.output_prompt, div.output_area pre {color: slategray;}
div.input_prompt, div.output_subarea {color: #4876ff;}      
div.output_stderr pre {background-color: aliceblue;}  
div.output_stderr {background-color: slategrey;}                        
</style>
<script>
code_show = true; 
function code_display() {
    if (code_show) {
        $('div.input').each(function(id) {
            if (id == 0 || $(this).html().indexOf('hide_code') > -1) {$(this).hide();}
        });
        $('div.output_prompt').css('opacity', 0);
    } else {
        $('div.input').each(function(id) {$(this).show();});
        $('div.output_prompt').css('opacity', 1);
    };
    code_show = !code_show;
} 
$(document).ready(code_display);
</script>
<form action="javascript: code_display()">
<input style="color: #348ABD; background: aliceblue; opacity: 0.8;" \ 
type="submit" value="Click to display or hide code cells">
</form> 
In [2]:
hide_code = ''
import numpy as np 
import pandas as pd
import tensorflow as tf

from PIL import ImageFile
from tqdm import tqdm
import h5py
import cv2

import matplotlib.pylab as plt
from matplotlib import cm
import seaborn as sns
%matplotlib inline

from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

from keras.utils import to_categorical
from keras.preprocessing import image as keras_image
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras.preprocessing.image import ImageDataGenerator

from keras.models import Sequential, load_model, Model
from keras.layers import Input, BatchNormalization
from keras.layers import Dense, LSTM, GlobalAveragePooling1D, GlobalAveragePooling2D
from keras.layers import Activation, Flatten, Dropout
from keras.layers import Conv2D, MaxPooling2D, GlobalMaxPooling2D
from keras.layers.advanced_activations import PReLU, LeakyReLU

from keras.applications.inception_v3 import InceptionV3, preprocess_input
import scipy
from scipy import misc
Using TensorFlow backend.
In [3]:
hide_code
plt.style.use('seaborn-whitegrid')

# Plot a fitting history for neural networks
def history_plot(fit_history, n):
    plt.figure(figsize=(18, 12))
    
    plt.subplot(211)
    plt.plot(fit_history.history['loss'][n:], color='slategray', label = 'train')
    plt.plot(fit_history.history['val_loss'][n:], color='#4876ff', label = 'valid')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.legend()
    plt.title('Loss Function');  
    
    plt.subplot(212)
    plt.plot(fit_history.history['acc'][n:], color='slategray', label = 'train')
    plt.plot(fit_history.history['val_acc'][n:], color='#4876ff', label = 'valid')
    plt.xlabel("Epochs")
    plt.ylabel("Accuracy")    
    plt.legend()
    plt.title('Accuracy');

Step 1. Load and Explore the Data

For this project, have made the database of photos sorted by products and brands.

The main dataset (style.zip) is 2184 color images (150x150x3) with 7 brands and 10 products, and the file with labels style.csv.

Photo files are in the .png format and the labels are integers and values.

Run the following cell to download the dataset.

In [4]:
hide_code
# Function for processing an image
def image_to_tensor(img_path):
    img = keras_image.load_img("data/" + img_path, target_size=(150, 150))
    x = keras_image.img_to_array(img)
    return np.expand_dims(x, axis=0)
# Function for creating the data tensor
def data_to_tensor(img_paths):
    list_of_tensors = [image_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)

ImageFile.LOAD_TRUNCATED_IMAGES = True 
In [5]:
hide_code
# Load and display the data
data = pd.read_csv("data/style.csv")
data.head()
Out[5]:
brand_name brand_label product_name product_label file
0 Christian Louboutin 0 shoes 0 0_0_001.png
1 Christian Louboutin 0 shoes 0 0_0_002.png
2 Christian Louboutin 0 shoes 0 0_0_003.png
3 Christian Louboutin 0 shoes 0 0_0_004.png
4 Christian Louboutin 0 shoes 0 0_0_005.png

Visualize distributions of variables to have an imagination of the database.

In [6]:
hide_code
# Plot the product distribution
plt.figure(figsize=(15,5))
sns.countplot(x="product_name", data=data,
              facecolor=(0, 0, 0, 0), linewidth=7,
              edgecolor=sns.color_palette("Spectral",10))
plt.title('Product Distribution', fontsize=20);
In [7]:
hide_code
# Plot the product distribution grouped by brand
plt.figure(figsize=(15,10))
sns.countplot(y="product_name", hue="brand_name", data=data, 
              palette=sns.color_palette("Set1",7))
plt.legend(loc=4)
plt.title('Product Distribution Grouped by Brands', 
          fontsize=20);

Print out the brand_name and product_name unique values.

In [8]:
hide_code
# Print unique values of brand names
set(data['brand_name'])
Out[8]:
{'Chanel',
 'Christian Dior',
 'Christian Louboutin',
 'Dolce & Gabbana',
 'Gucci',
 'Versace',
 'Yves Saint Laurent'}
In [9]:
hide_code
# Print unique values of product names
set(data['product_name'])
Out[9]:
{'boots',
 'bracelet',
 'earrings',
 'handbag',
 'lipstick',
 'nail polish',
 'necklace',
 'ring',
 'shoes',
 'watches'}

Let's create tensors of variables and display some examples of images.

In [10]:
hide_code
# Create tensors
brands = data['brand_label'].values
products = data['product_label'].values
images = data_to_tensor(data['file']);
100%|██████████| 2184/2184 [01:01<00:00, 35.56it/s]
In [11]:
hide_code
# Print the shape 
print ('Image shape:', images.shape)
print ('Brand shape', brands.shape)
print ('Product shape', products.shape)
Image shape: (2184, 150, 150, 3)
Brand shape (2184,)
Product shape (2184,)
In [12]:
hide_code
# Read from files and display images using OpenCV
def display_images(img_path, ax):
    img = cv2.imread("data/" + img_path)
    ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    
fig = plt.figure(figsize=(18, 6))
for i in range(10):
    ax = fig.add_subplot(2, 5, i + 1, xticks=[], yticks=[], 
                         title=data['brand_name'][i*218]+' || '+data['product_name'][i*218])
    display_images(data['file'][i*218], ax)

Step 2. Save and Load the Data

The data tensors can be saved in the appropriate format of files .h5.

In [19]:
hide_code
# Create the tensor file
with h5py.File('StyleColorImages.h5', 'w') as f:
    f.create_dataset('images', data = images)
    f.create_dataset('brands', data = brands)
    f.create_dataset('products', data = products)
    f.close()

The next time it is possible to start here with neural network's experiments.

In [13]:
hide_code
# Read the h5 file
f = h5py.File('StyleColorImages.h5', 'r')

# List all groups
keys = list(f.keys())
keys
Out[13]:
['brands', 'images', 'products']
In [14]:
hide_code
# Create tensors and targets
brands = np.array(f[keys[0]])
images = np.array(f[keys[1]])
products = np.array(f[keys[2]])

print ('Image shape:', images.shape)
print ('Brand shape', brands.shape)
print ('Product shape', products.shape)
Image shape: (2184, 150, 150, 3)
Brand shape (2184,)
Product shape (2184,)
In [16]:
hide_code
# Create the csv file
images_csv = images.reshape(-1, 150*150*3)
np.savetxt("style_images.csv", images_csv, fmt='%i', delimiter=",")
In [17]:
hide_code
# Read the pandas dataframe from csv
data_images = pd.read_csv("style_images.csv", header=None)
data_images.iloc[:10,:10]
Out[17]:
0 1 2 3 4 5 6 7 8 9
0 255 255 255 255 255 255 255 255 255 255
1 255 255 255 255 255 255 255 255 255 255
2 255 255 255 255 255 255 255 255 255 255
3 255 255 255 255 255 255 255 255 255 255
4 222 222 214 222 222 214 221 221 213 222
5 255 255 255 255 255 255 255 255 255 255
6 255 255 255 255 255 255 255 255 255 255
7 255 255 255 255 255 255 255 255 255 255
8 220 221 213 220 221 213 220 221 213 221
9 42 39 46 42 39 46 42 39 46 41
In [18]:
hide_code
# Read image tensors from the dataframe
images = data_images.values
images = images.reshape(-1,150,150,3)

Step 3. Implement Preprocess Functions

Normalize and Gray Scale

In the cell below, normalize the image tensors, and return them as a normalized numpy array.

In [13]:
hide_code
# Normalize tensors
images = images.astype('float32')/255
In [14]:
hide_code
# Read and display a tensor using Matplotlib
print('Product: ', data['product_name'][1000])
print('Brand: ', data['brand_name'][1000])
plt.figure(figsize=(3,3))
plt.imshow(images[1000]);
Product:  handbag
Brand:  Gucci

Create tensors of grayscaled images and display their shape.

In [15]:
hide_code
# Grayscaled tensors
gray_images = np.dot(images[...,:3], [0.299, 0.587, 0.114])
print ('Grayscaled Tensor shape:', gray_images.shape)
Grayscaled Tensor shape: (2184, 150, 150)
In [16]:
hide_code
# Read and display a tensor using Matplotlib
print('Product: ', data['product_name'][1000])
print('Brand: ', data['brand_name'][1000])
plt.figure(figsize=(3,3))
plt.imshow(gray_images[1000], cmap=cm.bone);
Product:  handbag
Brand:  Gucci

One-hot Encode

Now we'll implement the one-hot encoding function to_categorical.

In [17]:
hide_code
# Print the brand unique values
print(set(brands))
{0, 1, 2, 3, 4, 5, 6}
In [18]:
hide_code
# Print the product unique values
print(set(products))
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
In [19]:
hide_code
# One-hot encode the brands
cat_brands = to_categorical(brands, 7)
cat_brands.shape
Out[19]:
(2184, 7)
In [20]:
hide_code
# One-hot encode the products
cat_products = to_categorical(products, 10)
cat_products.shape
Out[20]:
(2184, 10)

Multi-Label Target

In [21]:
hide_code
# Create multi-label targets
targets = np.concatenate((cat_brands, cat_products), axis=1)
targets.shape
Out[21]:
(2184, 17)

Split

Apply the function train_test_split and split the data into training and testing sets. Set up the size of the testing set - 20%.

Color Images, Brand Target

In [22]:
hide_code
# Split the data
x_train, x_test, y_train, y_test = train_test_split(images, cat_brands, 
                                                    test_size = 0.2, 
                                                    random_state = 1)
n = int(len(x_test)/2)
x_valid, y_valid = x_test[:n], y_test[:n]
x_test, y_test = x_test[n:], y_test[n:]
In [23]:
hide_code
# Print the shape
print ("Training tensor's shape:", x_train.shape)
print ("Training target's shape", y_train.shape)
print ("Validating tensor's shape:", x_valid.shape)
print ("Validating target's shape", y_valid.shape)
print ("Testing tensor's shape:", x_test.shape)
print ("Testing target's shape", y_test.shape)
Training tensor's shape: (1747, 150, 150, 3)
Training target's shape (1747, 7)
Validating tensor's shape: (218, 150, 150, 3)
Validating target's shape (218, 7)
Testing tensor's shape: (219, 150, 150, 3)
Testing target's shape (219, 7)

Color Images, Product Target

In [24]:
hide_code
# Split the data
x_train2, x_test2, y_train2, y_test2 = train_test_split(images, cat_products, 
                                                        test_size = 0.2, 
                                                        random_state = 1)
n = int(len(x_test2)/2)
x_valid2, y_valid2 = x_test2[:n], y_test2[:n]
x_test2, y_test2 = x_test2[n:], y_test2[n:]
In [25]:
hide_code
# Print the shape
print ("Training tensor's shape:", x_train2.shape)
print ("Training target's shape", y_train2.shape)
print ("Validating tensor's shape:", x_valid2.shape)
print ("Validating target's shape", y_valid2.shape)
print ("Testing tensor's shape:", x_test2.shape)
print ("Testing target's shape", y_test2.shape)
Training tensor's shape: (1747, 150, 150, 3)
Training target's shape (1747, 10)
Validating tensor's shape: (218, 150, 150, 3)
Validating target's shape (218, 10)
Testing tensor's shape: (219, 150, 150, 3)
Testing target's shape (219, 10)

Color Images, Multi-Label Target

In [26]:
hide_code
# Split the data
x_train3, x_test3, y_train3, y_test3 = train_test_split(images, targets, 
                                                        test_size = 0.2, 
                                                        random_state = 1)
n = int(len(x_test3)/2)
x_valid3, y_valid3 = x_test3[:n], y_test3[:n]
x_test3, y_test3 = x_test3[n:], y_test3[n:]
In [27]:
hide_code
# Print the shape
print ("Training tensor's shape:", x_train3.shape)
print ("Training target's shape", y_train3.shape)
print ("Validating tensor's shape:", x_valid3.shape)
print ("Validating target's shape", y_valid3.shape)
print ("Testing tensor's shape:", x_test3.shape)
print ("Testing target's shape", y_test3.shape)
Training tensor's shape: (1747, 150, 150, 3)
Training target's shape (1747, 17)
Validating tensor's shape: (218, 150, 150, 3)
Validating target's shape (218, 17)
Testing tensor's shape: (219, 150, 150, 3)
Testing target's shape (219, 17)
In [28]:
hide_code
# Create a list of targets
y_train3_list = [y_train3[:, :7], y_train3[:, 7:]]
y_test3_list = [y_test3[:, :7], y_test3[:, 7:]]
y_valid3_list = [y_valid3[:, :7], y_valid3[:, 7:]]

Grayscaled Images, Brand Target

In [22]:
hide_code
# Split the data
x_train4, x_test4, y_train4, y_test4 = train_test_split(gray_images, cat_brands, 
                                                        test_size = 0.2, 
                                                        random_state = 1)
n = int(len(x_test4)/2)
x_valid4, y_valid4 = x_test4[:n], y_test4[:n]
x_test4, y_test4 = x_test4[n:], y_test4[n:]
In [23]:
hide_code
# Reshape the grayscaled data
x_train4, x_test4, x_valid4 = \
x_train4.reshape(-1, 150, 150, 1), x_test4.reshape(-1, 150, 150, 1), x_valid4.reshape(-1, 150, 150, 1)
In [24]:
hide_code
# Print the shape
print ("Training tensor's shape:", x_train4.shape)
print ("Training target's shape", y_train4.shape)
print ("Validating tensor's shape:", x_valid4.shape)
print ("Validating target's shape", y_valid4.shape)
print ("Testing tensor's shape:", x_test4.shape)
print ("Testing target's shape", y_test4.shape)
Training tensor's shape: (1747, 150, 150, 1)
Training target's shape (1747, 7)
Validating tensor's shape: (218, 150, 150, 1)
Validating target's shape (218, 7)
Testing tensor's shape: (219, 150, 150, 1)
Testing target's shape (219, 7)

Grayscaled Images, Product Target

In [32]:
hide_code
# Split the data
x_train5, x_test5, y_train5, y_test5 = train_test_split(gray_images, cat_products, 
                                                        test_size = 0.2, 
                                                        random_state = 1)
n = int(len(x_test5)/2)
x_valid5, y_valid5 = x_test5[:n], y_test5[:n]
x_test5, y_test5 = x_test5[n:], y_test5[n:]
In [33]:
hide_code
# Reshape the grayscaled data
x_train5, x_test5, x_valid5 = \
x_train5.reshape(-1, 150, 150, 1), x_test5.reshape(-1, 150, 150, 1), x_valid5.reshape(-1, 150, 150, 1)
In [34]:
hide_code
# Print the shape
print ("Training tensor's shape:", x_train5.shape)
print ("Training target's shape", y_train5.shape)
print ("Validating tensor's shape:", x_valid5.shape)
print ("Validating target's shape", y_valid5.shape)
print ("Testing tensor's shape:", x_test5.shape)
print ("Testing target's shape", y_test5.shape)
Training tensor's shape: (1747, 150, 150, 1)
Training target's shape (1747, 10)
Validating tensor's shape: (218, 150, 150, 1)
Validating target's shape (218, 10)
Testing tensor's shape: (219, 150, 150, 1)
Testing target's shape (219, 10)

Grayscaled Images, Multi-Label Target

In [35]:
hide_code
# Split the data
x_train6, x_test6, y_train6, y_test6 = train_test_split(gray_images, targets, 
                                                        test_size = 0.2, 
                                                        random_state = 1)
n = int(len(x_test6)/2)
x_valid6, y_valid6 = x_test6[:n], y_test6[:n]
x_test6, y_test6 = x_test6[n:], y_test6[n:]
In [36]:
hide_code
# Reshape the grayscaled data
x_train6, x_test6, x_valid6 = \
x_train6.reshape(-1, 150, 150, 1), x_test6.reshape(-1, 150, 150, 1), x_valid6.reshape(-1, 150, 150, 1)
In [37]:
hide_code
# Print the shape
print ("Training tensor's shape:", x_train6.shape)
print ("Training target's shape", y_train6.shape)
print ("Validating tensor's shape:", x_valid6.shape)
print ("Validating target's shape", y_valid6.shape)
print ("Testing tensor's shape:", x_test6.shape)
print ("Testing target's shape", y_test6.shape)
Training tensor's shape: (1747, 150, 150, 1)
Training target's shape (1747, 17)
Validating tensor's shape: (218, 150, 150, 1)
Validating target's shape (218, 17)
Testing tensor's shape: (219, 150, 150, 1)
Testing target's shape (219, 17)
In [38]:
hide_code
# Create a list of targets
y_train6_list = [y_train6[:, :7], y_train6[:, 7:]]
y_test6_list = [y_test6[:, :7], y_test6[:, 7:]]
y_valid6_list = [y_valid6[:, :7], y_valid6[:, 7:]]

Step 4. Create One-Label Classification Models

We should have an accuracy

  • greater than 14.3% for the first target (brand) and

  • greater than 10% for the second target (product).

Color Images, Brand Target

In [44]:
hide_code
def cb_model():
    model = Sequential()
    # TODO: Define a model architecture
    model.add(Conv2D(32, (5, 5), padding='same', input_shape=x_train.shape[1:]))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(Conv2D(196, (5, 5)))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(GlobalMaxPooling2D())
    
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.02))
    model.add(Dropout(0.5)) 
    
    model.add(Dense(7))
    model.add(Activation('softmax'))
    # TODO: Compile the model    
    model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
    
    return model

cb_model = cb_model()
In [45]:
hide_code
# Create callbacks
cb_checkpointer = ModelCheckpoint(filepath='cb_model.styles.hdf5', 
                                  verbose=2, save_best_only=True)
cb_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                    patience=5, verbose=2, factor=0.5)
In [46]:
hide_code
# Train the model
cb_history = cb_model.fit(x_train, y_train, 
                          epochs=50, batch_size=16, verbose=2,
                          validation_data=(x_valid, y_valid),
                          callbacks=[cb_checkpointer,cb_lr_reduction])
Train on 1747 samples, validate on 218 samples
Epoch 1/50
 - 325s - loss: 1.9348 - acc: 0.2015 - val_loss: 1.9040 - val_acc: 0.2110

Epoch 00001: val_loss improved from inf to 1.90397, saving model to cb_model.styles.hdf5
Epoch 2/50
 - 400s - loss: 1.9185 - acc: 0.2221 - val_loss: 1.8528 - val_acc: 0.2385

Epoch 00002: val_loss improved from 1.90397 to 1.85280, saving model to cb_model.styles.hdf5
Epoch 3/50
 - 415s - loss: 1.8259 - acc: 0.2559 - val_loss: 1.8474 - val_acc: 0.2202

Epoch 00003: val_loss improved from 1.85280 to 1.84739, saving model to cb_model.styles.hdf5
Epoch 4/50
 - 349s - loss: 1.8127 - acc: 0.2879 - val_loss: 1.8383 - val_acc: 0.2477

Epoch 00004: val_loss improved from 1.84739 to 1.83834, saving model to cb_model.styles.hdf5
Epoch 5/50
 - 370s - loss: 1.7498 - acc: 0.3034 - val_loss: 1.7773 - val_acc: 0.3257

Epoch 00005: val_loss improved from 1.83834 to 1.77731, saving model to cb_model.styles.hdf5
Epoch 6/50
 - 349s - loss: 1.6801 - acc: 0.3371 - val_loss: 1.8509 - val_acc: 0.2661

Epoch 00006: val_loss did not improve from 1.77731
Epoch 7/50
 - 386s - loss: 1.6585 - acc: 0.3703 - val_loss: 1.7214 - val_acc: 0.3303

Epoch 00007: val_loss improved from 1.77731 to 1.72136, saving model to cb_model.styles.hdf5
Epoch 8/50
 - 389s - loss: 1.6009 - acc: 0.3950 - val_loss: 2.0627 - val_acc: 0.1927

Epoch 00008: val_loss did not improve from 1.72136
Epoch 9/50
 - 318s - loss: 1.5569 - acc: 0.4076 - val_loss: 1.6609 - val_acc: 0.3761

Epoch 00009: val_loss improved from 1.72136 to 1.66087, saving model to cb_model.styles.hdf5
Epoch 10/50
 - 385s - loss: 1.4921 - acc: 0.4287 - val_loss: 1.5270 - val_acc: 0.3670

Epoch 00010: val_loss improved from 1.66087 to 1.52696, saving model to cb_model.styles.hdf5
Epoch 11/50
 - 371s - loss: 1.4332 - acc: 0.4728 - val_loss: 1.5430 - val_acc: 0.4037

Epoch 00011: val_loss did not improve from 1.52696
Epoch 12/50
 - 351s - loss: 1.3990 - acc: 0.4717 - val_loss: 1.5269 - val_acc: 0.3853

Epoch 00012: val_loss improved from 1.52696 to 1.52687, saving model to cb_model.styles.hdf5
Epoch 13/50
 - 380s - loss: 1.3549 - acc: 0.4797 - val_loss: 2.8353 - val_acc: 0.2752

Epoch 00013: val_loss did not improve from 1.52687
Epoch 14/50
 - 357s - loss: 1.3247 - acc: 0.5054 - val_loss: 1.6201 - val_acc: 0.3716

Epoch 00014: val_loss did not improve from 1.52687
Epoch 15/50
 - 390s - loss: 1.2743 - acc: 0.5232 - val_loss: 1.3966 - val_acc: 0.4220

Epoch 00015: val_loss improved from 1.52687 to 1.39662, saving model to cb_model.styles.hdf5
Epoch 16/50
 - 385s - loss: 1.2297 - acc: 0.5507 - val_loss: 1.4883 - val_acc: 0.4266

Epoch 00016: val_loss did not improve from 1.39662
Epoch 17/50
 - 356s - loss: 1.2061 - acc: 0.5438 - val_loss: 1.9593 - val_acc: 0.3165

Epoch 00017: val_loss did not improve from 1.39662
Epoch 18/50
 - 300s - loss: 1.1887 - acc: 0.5644 - val_loss: 1.4614 - val_acc: 0.4541

Epoch 00018: val_loss did not improve from 1.39662
Epoch 19/50
 - 359s - loss: 1.1488 - acc: 0.5713 - val_loss: 1.3760 - val_acc: 0.4358

Epoch 00019: val_loss improved from 1.39662 to 1.37597, saving model to cb_model.styles.hdf5
Epoch 20/50
 - 457s - loss: 1.1052 - acc: 0.5804 - val_loss: 1.5151 - val_acc: 0.4266

Epoch 00020: val_loss did not improve from 1.37597
Epoch 21/50
 - 345s - loss: 1.0869 - acc: 0.5953 - val_loss: 1.7687 - val_acc: 0.3624

Epoch 00021: val_loss did not improve from 1.37597
Epoch 22/50
 - 386s - loss: 1.0612 - acc: 0.6033 - val_loss: 1.5082 - val_acc: 0.4771

Epoch 00022: val_loss did not improve from 1.37597
Epoch 23/50
 - 336s - loss: 1.0122 - acc: 0.6319 - val_loss: 1.5285 - val_acc: 0.4312

Epoch 00023: val_loss did not improve from 1.37597
Epoch 24/50
 - 354s - loss: 0.9874 - acc: 0.6205 - val_loss: 1.3922 - val_acc: 0.4541

Epoch 00024: val_loss did not improve from 1.37597

Epoch 00024: ReduceLROnPlateau reducing learning rate to 0.0010000000474974513.
Epoch 25/50
 - 345s - loss: 0.8684 - acc: 0.6772 - val_loss: 1.3658 - val_acc: 0.5092

Epoch 00025: val_loss improved from 1.37597 to 1.36585, saving model to cb_model.styles.hdf5
Epoch 26/50
 - 395s - loss: 0.8170 - acc: 0.7029 - val_loss: 1.3647 - val_acc: 0.5046

Epoch 00026: val_loss improved from 1.36585 to 1.36472, saving model to cb_model.styles.hdf5
Epoch 27/50
 - 347s - loss: 0.7771 - acc: 0.7167 - val_loss: 1.3941 - val_acc: 0.5229

Epoch 00027: val_loss did not improve from 1.36472
Epoch 28/50
 - 341s - loss: 0.7681 - acc: 0.7241 - val_loss: 1.4000 - val_acc: 0.4908

Epoch 00028: val_loss did not improve from 1.36472
Epoch 29/50
 - 340s - loss: 0.7097 - acc: 0.7252 - val_loss: 1.5099 - val_acc: 0.4862

Epoch 00029: val_loss did not improve from 1.36472
Epoch 30/50
 - 311s - loss: 0.7259 - acc: 0.7367 - val_loss: 1.4355 - val_acc: 0.4862

Epoch 00030: val_loss did not improve from 1.36472
Epoch 31/50
 - 319s - loss: 0.7315 - acc: 0.7287 - val_loss: 1.4661 - val_acc: 0.5000

Epoch 00031: val_loss did not improve from 1.36472

Epoch 00031: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
Epoch 32/50
 - 345s - loss: 0.6403 - acc: 0.7624 - val_loss: 1.4328 - val_acc: 0.4954

Epoch 00032: val_loss did not improve from 1.36472
Epoch 33/50
 - 288s - loss: 0.6177 - acc: 0.7773 - val_loss: 1.4503 - val_acc: 0.5275

Epoch 00033: val_loss did not improve from 1.36472
Epoch 34/50
 - 305s - loss: 0.5828 - acc: 0.7825 - val_loss: 1.4468 - val_acc: 0.5367

Epoch 00034: val_loss did not improve from 1.36472
Epoch 35/50
 - 297s - loss: 0.5916 - acc: 0.7888 - val_loss: 1.4794 - val_acc: 0.5046

Epoch 00035: val_loss did not improve from 1.36472
Epoch 36/50
 - 291s - loss: 0.5694 - acc: 0.7888 - val_loss: 1.4858 - val_acc: 0.4908

Epoch 00036: val_loss did not improve from 1.36472

Epoch 00036: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
Epoch 37/50
 - 295s - loss: 0.5389 - acc: 0.8082 - val_loss: 1.4725 - val_acc: 0.5046

Epoch 00037: val_loss did not improve from 1.36472
Epoch 38/50
 - 323s - loss: 0.5260 - acc: 0.8163 - val_loss: 1.4852 - val_acc: 0.5138

Epoch 00038: val_loss did not improve from 1.36472
Epoch 39/50
 - 320s - loss: 0.5222 - acc: 0.8174 - val_loss: 1.4887 - val_acc: 0.5183

Epoch 00039: val_loss did not improve from 1.36472
Epoch 40/50
 - 347s - loss: 0.5073 - acc: 0.8197 - val_loss: 1.5001 - val_acc: 0.5138

Epoch 00040: val_loss did not improve from 1.36472
Epoch 41/50
 - 379s - loss: 0.5099 - acc: 0.8220 - val_loss: 1.5006 - val_acc: 0.5275

Epoch 00041: val_loss did not improve from 1.36472

Epoch 00041: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
Epoch 42/50
 - 384s - loss: 0.5050 - acc: 0.8260 - val_loss: 1.5041 - val_acc: 0.5229

Epoch 00042: val_loss did not improve from 1.36472
Epoch 43/50
 - 383s - loss: 0.4995 - acc: 0.8220 - val_loss: 1.4859 - val_acc: 0.5367

Epoch 00043: val_loss did not improve from 1.36472
Epoch 44/50
 - 371s - loss: 0.4983 - acc: 0.8260 - val_loss: 1.5149 - val_acc: 0.5550

Epoch 00044: val_loss did not improve from 1.36472
Epoch 45/50
 - 470s - loss: 0.4909 - acc: 0.8277 - val_loss: 1.4956 - val_acc: 0.5321

Epoch 00045: val_loss did not improve from 1.36472
Epoch 46/50
 - 427s - loss: 0.4768 - acc: 0.8260 - val_loss: 1.5194 - val_acc: 0.5367

Epoch 00046: val_loss did not improve from 1.36472

Epoch 00046: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
Epoch 47/50
 - 310s - loss: 0.4871 - acc: 0.8271 - val_loss: 1.5188 - val_acc: 0.5229

Epoch 00047: val_loss did not improve from 1.36472
Epoch 48/50
 - 374s - loss: 0.4744 - acc: 0.8266 - val_loss: 1.5263 - val_acc: 0.5367

Epoch 00048: val_loss did not improve from 1.36472
Epoch 49/50
 - 334s - loss: 0.4791 - acc: 0.8248 - val_loss: 1.5086 - val_acc: 0.5367

Epoch 00049: val_loss did not improve from 1.36472
Epoch 50/50
 - 333s - loss: 0.4532 - acc: 0.8449 - val_loss: 1.5222 - val_acc: 0.5413

Epoch 00050: val_loss did not improve from 1.36472
In [47]:
hide_code
# Plot the training history
history_plot(cb_history, 0)
In [48]:
hide_code
# Load the model with the best validation accuracy
cb_model.load_weights('cb_model.styles.hdf5')
# Calculate classification accuracy on the testing set
cb_score = cb_model.evaluate(x_test, y_test)
cb_score
219/219 [==============================] - 25s 114ms/step
Out[48]:
[1.3096597864203257, 0.5844748836674102]

Color Images, Product Target

In [49]:
hide_code
def cp_model():
    model = Sequential()
    # TODO: Define a model architecture

    model.add(Conv2D(32, (5, 5), padding='same', input_shape=x_train2.shape[1:]))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(Conv2D(196, (5, 5)))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(GlobalMaxPooling2D())
    
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.02))
    model.add(Dropout(0.5)) 
    
    model.add(Dense(10))
    model.add(Activation('softmax'))
    
    # TODO: Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
    
    return model

cp_model = cp_model()
In [50]:
hide_code
# Create callbacks
cp_checkpointer = ModelCheckpoint(filepath='cp_model.styles.hdf5', 
                                  verbose=2, save_best_only=True)
cp_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                    patience=5, verbose=2, factor=0.5)
In [51]:
hide_code
# Train the model
cp_history = cp_model.fit(x_train2, y_train2, 
                          epochs=50, batch_size=16, verbose=2,
                          validation_data=(x_valid2, y_valid2),
                          callbacks=[cp_checkpointer,cp_lr_reduction])
Train on 1747 samples, validate on 218 samples
Epoch 1/50
 - 634s - loss: 2.1023 - acc: 0.2410 - val_loss: 1.9092 - val_acc: 0.2890

Epoch 00001: val_loss improved from inf to 1.90917, saving model to cp_model.styles.hdf5
Epoch 2/50
 - 323s - loss: 1.8082 - acc: 0.3326 - val_loss: 2.5884 - val_acc: 0.0963

Epoch 00002: val_loss did not improve from 1.90917
Epoch 3/50
 - 315s - loss: 1.6604 - acc: 0.3892 - val_loss: 2.0378 - val_acc: 0.2523

Epoch 00003: val_loss did not improve from 1.90917
Epoch 4/50
 - 312s - loss: 1.5842 - acc: 0.4179 - val_loss: 1.8535 - val_acc: 0.3532

Epoch 00004: val_loss improved from 1.90917 to 1.85347, saving model to cp_model.styles.hdf5
Epoch 5/50
 - 322s - loss: 1.4881 - acc: 0.4677 - val_loss: 2.8569 - val_acc: 0.1147

Epoch 00005: val_loss did not improve from 1.85347
Epoch 6/50
 - 361s - loss: 1.4134 - acc: 0.4900 - val_loss: 1.6599 - val_acc: 0.3716

Epoch 00006: val_loss improved from 1.85347 to 1.65993, saving model to cp_model.styles.hdf5
Epoch 7/50
 - 316s - loss: 1.3449 - acc: 0.4980 - val_loss: 1.4741 - val_acc: 0.4725

Epoch 00007: val_loss improved from 1.65993 to 1.47413, saving model to cp_model.styles.hdf5
Epoch 8/50
 - 331s - loss: 1.2759 - acc: 0.5432 - val_loss: 1.5779 - val_acc: 0.4312

Epoch 00008: val_loss did not improve from 1.47413
Epoch 9/50
 - 342s - loss: 1.2157 - acc: 0.5478 - val_loss: 1.3938 - val_acc: 0.5275

Epoch 00009: val_loss improved from 1.47413 to 1.39380, saving model to cp_model.styles.hdf5
Epoch 10/50
 - 348s - loss: 1.1969 - acc: 0.5707 - val_loss: 2.3067 - val_acc: 0.2110

Epoch 00010: val_loss did not improve from 1.39380
Epoch 11/50
 - 342s - loss: 1.1473 - acc: 0.5873 - val_loss: 1.7849 - val_acc: 0.3670

Epoch 00011: val_loss did not improve from 1.39380
Epoch 12/50
 - 372s - loss: 1.0977 - acc: 0.6027 - val_loss: 1.3031 - val_acc: 0.5550

Epoch 00012: val_loss improved from 1.39380 to 1.30313, saving model to cp_model.styles.hdf5
Epoch 13/50
 - 327s - loss: 1.0248 - acc: 0.6199 - val_loss: 1.3103 - val_acc: 0.5505

Epoch 00013: val_loss did not improve from 1.30313
Epoch 14/50
 - 300s - loss: 0.9685 - acc: 0.6394 - val_loss: 1.1745 - val_acc: 0.6009

Epoch 00014: val_loss improved from 1.30313 to 1.17451, saving model to cp_model.styles.hdf5
Epoch 15/50
 - 292s - loss: 0.9635 - acc: 0.6577 - val_loss: 1.4143 - val_acc: 0.5321

Epoch 00015: val_loss did not improve from 1.17451
Epoch 16/50
 - 293s - loss: 0.9480 - acc: 0.6629 - val_loss: 1.2734 - val_acc: 0.5688

Epoch 00016: val_loss did not improve from 1.17451
Epoch 17/50
 - 285s - loss: 0.9375 - acc: 0.6566 - val_loss: 1.1950 - val_acc: 0.5963

Epoch 00017: val_loss did not improve from 1.17451
Epoch 18/50
 - 287s - loss: 0.8551 - acc: 0.6863 - val_loss: 1.1105 - val_acc: 0.6422

Epoch 00018: val_loss improved from 1.17451 to 1.11048, saving model to cp_model.styles.hdf5
Epoch 19/50
 - 284s - loss: 0.8709 - acc: 0.6938 - val_loss: 1.2197 - val_acc: 0.5917

Epoch 00019: val_loss did not improve from 1.11048
Epoch 20/50
 - 285s - loss: 0.7821 - acc: 0.7121 - val_loss: 1.3176 - val_acc: 0.5688

Epoch 00020: val_loss did not improve from 1.11048
Epoch 21/50
 - 285s - loss: 0.7627 - acc: 0.7281 - val_loss: 1.2577 - val_acc: 0.6055

Epoch 00021: val_loss did not improve from 1.11048
Epoch 22/50
 - 290s - loss: 0.7498 - acc: 0.7241 - val_loss: 1.5879 - val_acc: 0.4862

Epoch 00022: val_loss did not improve from 1.11048
Epoch 23/50
 - 289s - loss: 0.7252 - acc: 0.7390 - val_loss: 1.2177 - val_acc: 0.6101

Epoch 00023: val_loss did not improve from 1.11048

Epoch 00023: ReduceLROnPlateau reducing learning rate to 0.0010000000474974513.
Epoch 24/50
 - 284s - loss: 0.5645 - acc: 0.7997 - val_loss: 1.2492 - val_acc: 0.5963

Epoch 00024: val_loss did not improve from 1.11048
Epoch 25/50
 - 291s - loss: 0.5350 - acc: 0.7968 - val_loss: 1.1034 - val_acc: 0.6330

Epoch 00025: val_loss improved from 1.11048 to 1.10342, saving model to cp_model.styles.hdf5
Epoch 26/50
 - 288s - loss: 0.5011 - acc: 0.8048 - val_loss: 1.1144 - val_acc: 0.6468

Epoch 00026: val_loss did not improve from 1.10342
Epoch 27/50
 - 287s - loss: 0.4733 - acc: 0.8226 - val_loss: 1.2159 - val_acc: 0.6330

Epoch 00027: val_loss did not improve from 1.10342
Epoch 28/50
 - 292s - loss: 0.4776 - acc: 0.8208 - val_loss: 1.1090 - val_acc: 0.6376

Epoch 00028: val_loss did not improve from 1.10342
Epoch 29/50
 - 283s - loss: 0.4558 - acc: 0.8317 - val_loss: 1.2641 - val_acc: 0.6193

Epoch 00029: val_loss did not improve from 1.10342
Epoch 30/50
 - 281s - loss: 0.4573 - acc: 0.8317 - val_loss: 1.2758 - val_acc: 0.6147

Epoch 00030: val_loss did not improve from 1.10342

Epoch 00030: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
Epoch 31/50
 - 282s - loss: 0.3803 - acc: 0.8620 - val_loss: 1.2178 - val_acc: 0.6147

Epoch 00031: val_loss did not improve from 1.10342
Epoch 32/50
 - 279s - loss: 0.3583 - acc: 0.8638 - val_loss: 1.2125 - val_acc: 0.6376

Epoch 00032: val_loss did not improve from 1.10342
Epoch 33/50
 - 284s - loss: 0.3391 - acc: 0.8804 - val_loss: 1.3127 - val_acc: 0.6101

Epoch 00033: val_loss did not improve from 1.10342
Epoch 34/50
 - 281s - loss: 0.3402 - acc: 0.8792 - val_loss: 1.2245 - val_acc: 0.6468

Epoch 00034: val_loss did not improve from 1.10342
Epoch 35/50
 - 281s - loss: 0.3297 - acc: 0.8769 - val_loss: 1.2638 - val_acc: 0.6376

Epoch 00035: val_loss did not improve from 1.10342

Epoch 00035: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
Epoch 36/50
 - 284s - loss: 0.3051 - acc: 0.8884 - val_loss: 1.2634 - val_acc: 0.6330

Epoch 00036: val_loss did not improve from 1.10342
Epoch 37/50
 - 287s - loss: 0.2768 - acc: 0.9061 - val_loss: 1.2578 - val_acc: 0.6376

Epoch 00037: val_loss did not improve from 1.10342
Epoch 38/50
 - 282s - loss: 0.2878 - acc: 0.8970 - val_loss: 1.2850 - val_acc: 0.6514

Epoch 00038: val_loss did not improve from 1.10342
Epoch 39/50
 - 285s - loss: 0.2746 - acc: 0.9033 - val_loss: 1.3062 - val_acc: 0.6330

Epoch 00039: val_loss did not improve from 1.10342
Epoch 40/50
 - 276s - loss: 0.2846 - acc: 0.8952 - val_loss: 1.2184 - val_acc: 0.6514

Epoch 00040: val_loss did not improve from 1.10342

Epoch 00040: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
Epoch 41/50
 - 279s - loss: 0.2657 - acc: 0.9118 - val_loss: 1.2608 - val_acc: 0.6606

Epoch 00041: val_loss did not improve from 1.10342
Epoch 42/50
 - 283s - loss: 0.2613 - acc: 0.9044 - val_loss: 1.2900 - val_acc: 0.6560

Epoch 00042: val_loss did not improve from 1.10342
Epoch 43/50
 - 279s - loss: 0.2528 - acc: 0.9078 - val_loss: 1.3131 - val_acc: 0.6422

Epoch 00043: val_loss did not improve from 1.10342
Epoch 44/50
 - 274s - loss: 0.2568 - acc: 0.9078 - val_loss: 1.3154 - val_acc: 0.6376

Epoch 00044: val_loss did not improve from 1.10342
Epoch 45/50
 - 274s - loss: 0.2505 - acc: 0.9170 - val_loss: 1.3156 - val_acc: 0.6330

Epoch 00045: val_loss did not improve from 1.10342

Epoch 00045: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
Epoch 46/50
 - 275s - loss: 0.2339 - acc: 0.9181 - val_loss: 1.3176 - val_acc: 0.6376

Epoch 00046: val_loss did not improve from 1.10342
Epoch 47/50
 - 274s - loss: 0.2337 - acc: 0.9141 - val_loss: 1.2792 - val_acc: 0.6560

Epoch 00047: val_loss did not improve from 1.10342
Epoch 48/50
 - 274s - loss: 0.2360 - acc: 0.9159 - val_loss: 1.2993 - val_acc: 0.6422

Epoch 00048: val_loss did not improve from 1.10342
Epoch 49/50
 - 276s - loss: 0.2439 - acc: 0.9078 - val_loss: 1.3072 - val_acc: 0.6560

Epoch 00049: val_loss did not improve from 1.10342
Epoch 50/50
 - 282s - loss: 0.2395 - acc: 0.9204 - val_loss: 1.3190 - val_acc: 0.6514

Epoch 00050: val_loss did not improve from 1.10342

Epoch 00050: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05.
In [52]:
hide_code
# Plot the training history
history_plot(cp_history, 0)
In [53]:
hide_code
# Load the model with the best validation accuracy
cp_model.load_weights('cp_model.styles.hdf5')
# Calculate classification accuracy on the testing set
cp_score = cp_model.evaluate(x_test2, y_test2)
cp_score
219/219 [==============================] - 63s 286ms/step
Out[53]:
[1.1334759024180234, 0.652968035441011]

Grayscaled Images, Brand Target

In [25]:
hide_code
def gray_cb_model():
    model = Sequential()
    
    # TODO: Define a model architecture
    model.add(Conv2D(32, (5, 5), padding='same', input_shape=x_train4.shape[1:]))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(Conv2D(196, (5, 5)))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(GlobalMaxPooling2D())
    
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.02))
    model.add(Dropout(0.5)) 
    
    model.add(Dense(7))
    model.add(Activation('softmax'))
    
    # TODO: Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

gray_cb_model = gray_cb_model()
In [26]:
hide_code
# Create callbacks
gray_cb_checkpointer = ModelCheckpoint(filepath='gray_cb_model.styles.hdf5', 
                                       verbose=2, save_best_only=True)
gray_cb_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                         patience=5, verbose=2, factor=0.2)
In [27]:
hide_code
# Train the model
gray_cb_history = gray_cb_model.fit(x_train4, y_train4, 
                                    epochs=50, batch_size=16, verbose=2,
                                    validation_data=(x_valid4, y_valid4),
                                    callbacks=[gray_cb_checkpointer,gray_cb_lr_reduction])
Train on 1747 samples, validate on 218 samples
Epoch 1/50
 - 265s - loss: 1.9308 - acc: 0.2210 - val_loss: 1.9131 - val_acc: 0.2339

Epoch 00001: val_loss improved from inf to 1.91310, saving model to gray_cb_model.styles.hdf5
Epoch 2/50
 - 275s - loss: 1.8967 - acc: 0.2410 - val_loss: 1.8786 - val_acc: 0.2798

Epoch 00002: val_loss improved from 1.91310 to 1.87863, saving model to gray_cb_model.styles.hdf5
Epoch 3/50
 - 275s - loss: 1.8158 - acc: 0.2908 - val_loss: 1.8060 - val_acc: 0.2798

Epoch 00003: val_loss improved from 1.87863 to 1.80605, saving model to gray_cb_model.styles.hdf5
Epoch 4/50
 - 293s - loss: 1.7384 - acc: 0.3331 - val_loss: 1.7736 - val_acc: 0.3028

Epoch 00004: val_loss improved from 1.80605 to 1.77364, saving model to gray_cb_model.styles.hdf5
Epoch 5/50
 - 299s - loss: 1.6852 - acc: 0.3520 - val_loss: 1.7364 - val_acc: 0.2752

Epoch 00005: val_loss improved from 1.77364 to 1.73645, saving model to gray_cb_model.styles.hdf5
Epoch 6/50
 - 290s - loss: 1.6412 - acc: 0.3675 - val_loss: 1.6416 - val_acc: 0.3349

Epoch 00006: val_loss improved from 1.73645 to 1.64158, saving model to gray_cb_model.styles.hdf5
Epoch 7/50
 - 315s - loss: 1.5926 - acc: 0.3824 - val_loss: 1.6683 - val_acc: 0.3165

Epoch 00007: val_loss did not improve from 1.64158
Epoch 8/50
 - 325s - loss: 1.5698 - acc: 0.3984 - val_loss: 1.7073 - val_acc: 0.3440

Epoch 00008: val_loss did not improve from 1.64158
Epoch 9/50
 - 320s - loss: 1.5259 - acc: 0.4219 - val_loss: 1.7544 - val_acc: 0.3119

Epoch 00009: val_loss did not improve from 1.64158
Epoch 10/50
 - 339s - loss: 1.4991 - acc: 0.4425 - val_loss: 1.7344 - val_acc: 0.3394

Epoch 00010: val_loss did not improve from 1.64158
Epoch 11/50
 - 293s - loss: 1.4414 - acc: 0.4516 - val_loss: 1.5923 - val_acc: 0.3716

Epoch 00011: val_loss improved from 1.64158 to 1.59225, saving model to gray_cb_model.styles.hdf5
Epoch 12/50
 - 289s - loss: 1.4310 - acc: 0.4511 - val_loss: 1.7200 - val_acc: 0.3532

Epoch 00012: val_loss did not improve from 1.59225
Epoch 13/50
 - 320s - loss: 1.3670 - acc: 0.4814 - val_loss: 1.5497 - val_acc: 0.4128

Epoch 00013: val_loss improved from 1.59225 to 1.54968, saving model to gray_cb_model.styles.hdf5
Epoch 14/50
 - 331s - loss: 1.3267 - acc: 0.4940 - val_loss: 1.5562 - val_acc: 0.3853

Epoch 00014: val_loss did not improve from 1.54968
Epoch 15/50
 - 384s - loss: 1.2995 - acc: 0.5180 - val_loss: 1.6308 - val_acc: 0.3899

Epoch 00015: val_loss did not improve from 1.54968
Epoch 16/50
 - 328s - loss: 1.2604 - acc: 0.5306 - val_loss: 1.6238 - val_acc: 0.3899

Epoch 00016: val_loss did not improve from 1.54968
Epoch 17/50
 - 310s - loss: 1.2041 - acc: 0.5610 - val_loss: 1.9510 - val_acc: 0.2752

Epoch 00017: val_loss did not improve from 1.54968
Epoch 18/50
 - 306s - loss: 1.1654 - acc: 0.5684 - val_loss: 1.5988 - val_acc: 0.3807

Epoch 00018: val_loss did not improve from 1.54968

Epoch 00018: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
Epoch 19/50
 - 311s - loss: 1.0019 - acc: 0.6394 - val_loss: 1.4166 - val_acc: 0.4862

Epoch 00019: val_loss improved from 1.54968 to 1.41657, saving model to gray_cb_model.styles.hdf5
Epoch 20/50
 - 304s - loss: 0.9592 - acc: 0.6520 - val_loss: 1.3804 - val_acc: 0.5275

Epoch 00020: val_loss improved from 1.41657 to 1.38044, saving model to gray_cb_model.styles.hdf5
Epoch 21/50
 - 279s - loss: 0.9263 - acc: 0.6600 - val_loss: 1.4174 - val_acc: 0.4954

Epoch 00021: val_loss did not improve from 1.38044
Epoch 22/50
 - 308s - loss: 0.8920 - acc: 0.6806 - val_loss: 1.3974 - val_acc: 0.4679

Epoch 00022: val_loss did not improve from 1.38044
Epoch 23/50
 - 279s - loss: 0.8846 - acc: 0.6829 - val_loss: 1.3984 - val_acc: 0.4954

Epoch 00023: val_loss did not improve from 1.38044
Epoch 24/50
 - 298s - loss: 0.8626 - acc: 0.6932 - val_loss: 1.3724 - val_acc: 0.5138

Epoch 00024: val_loss improved from 1.38044 to 1.37241, saving model to gray_cb_model.styles.hdf5
Epoch 25/50
 - 301s - loss: 0.8620 - acc: 0.6966 - val_loss: 1.4064 - val_acc: 0.4862

Epoch 00025: val_loss did not improve from 1.37241
Epoch 26/50
 - 276s - loss: 0.8387 - acc: 0.7029 - val_loss: 1.4060 - val_acc: 0.4908

Epoch 00026: val_loss did not improve from 1.37241
Epoch 27/50
 - 282s - loss: 0.8190 - acc: 0.7127 - val_loss: 1.3922 - val_acc: 0.5092

Epoch 00027: val_loss did not improve from 1.37241
Epoch 28/50
 - 282s - loss: 0.7979 - acc: 0.7304 - val_loss: 1.4358 - val_acc: 0.4862

Epoch 00028: val_loss did not improve from 1.37241
Epoch 29/50
 - 266s - loss: 0.7967 - acc: 0.7252 - val_loss: 1.4332 - val_acc: 0.4725

Epoch 00029: val_loss did not improve from 1.37241

Epoch 00029: ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05.
Epoch 30/50
 - 263s - loss: 0.7532 - acc: 0.7418 - val_loss: 1.3914 - val_acc: 0.5000

Epoch 00030: val_loss did not improve from 1.37241
Epoch 31/50
 - 261s - loss: 0.7462 - acc: 0.7344 - val_loss: 1.3948 - val_acc: 0.5046

Epoch 00031: val_loss did not improve from 1.37241
Epoch 32/50
 - 259s - loss: 0.7291 - acc: 0.7464 - val_loss: 1.3819 - val_acc: 0.5092

Epoch 00032: val_loss did not improve from 1.37241
Epoch 33/50
 - 260s - loss: 0.7380 - acc: 0.7459 - val_loss: 1.4024 - val_acc: 0.5138

Epoch 00033: val_loss did not improve from 1.37241
Epoch 34/50
 - 263s - loss: 0.7298 - acc: 0.7499 - val_loss: 1.3933 - val_acc: 0.4908

Epoch 00034: val_loss did not improve from 1.37241

Epoch 00034: ReduceLROnPlateau reducing learning rate to 8.000000525498762e-06.
Epoch 35/50
 - 315s - loss: 0.7354 - acc: 0.7390 - val_loss: 1.3905 - val_acc: 0.4908

Epoch 00035: val_loss did not improve from 1.37241
Epoch 36/50
 - 265s - loss: 0.7214 - acc: 0.7499 - val_loss: 1.3938 - val_acc: 0.5000

Epoch 00036: val_loss did not improve from 1.37241
Epoch 37/50
 - 300s - loss: 0.7293 - acc: 0.7481 - val_loss: 1.3932 - val_acc: 0.5000

Epoch 00037: val_loss did not improve from 1.37241
Epoch 38/50
 - 311s - loss: 0.7084 - acc: 0.7562 - val_loss: 1.3912 - val_acc: 0.5046

Epoch 00038: val_loss did not improve from 1.37241
Epoch 39/50
 - 322s - loss: 0.7154 - acc: 0.7544 - val_loss: 1.3882 - val_acc: 0.4908

Epoch 00039: val_loss did not improve from 1.37241

Epoch 00039: ReduceLROnPlateau reducing learning rate to 1.6000001778593287e-06.
Epoch 40/50
 - 319s - loss: 0.7196 - acc: 0.7556 - val_loss: 1.3888 - val_acc: 0.4908

Epoch 00040: val_loss did not improve from 1.37241
Epoch 41/50
 - 317s - loss: 0.7237 - acc: 0.7401 - val_loss: 1.3889 - val_acc: 0.4908

Epoch 00041: val_loss did not improve from 1.37241
Epoch 42/50
 - 332s - loss: 0.7101 - acc: 0.7579 - val_loss: 1.3899 - val_acc: 0.4908

Epoch 00042: val_loss did not improve from 1.37241
Epoch 43/50
 - 317s - loss: 0.7198 - acc: 0.7476 - val_loss: 1.3885 - val_acc: 0.4908

Epoch 00043: val_loss did not improve from 1.37241
Epoch 44/50
 - 310s - loss: 0.7195 - acc: 0.7544 - val_loss: 1.3887 - val_acc: 0.4908

Epoch 00044: val_loss did not improve from 1.37241

Epoch 00044: ReduceLROnPlateau reducing learning rate to 3.200000264769187e-07.
Epoch 45/50
 - 293s - loss: 0.7311 - acc: 0.7493 - val_loss: 1.3886 - val_acc: 0.4908

Epoch 00045: val_loss did not improve from 1.37241
Epoch 46/50
 - 340s - loss: 0.7148 - acc: 0.7516 - val_loss: 1.3886 - val_acc: 0.4908

Epoch 00046: val_loss did not improve from 1.37241
Epoch 47/50
 - 354s - loss: 0.7222 - acc: 0.7459 - val_loss: 1.3884 - val_acc: 0.4908

Epoch 00047: val_loss did not improve from 1.37241
Epoch 48/50
 - 342s - loss: 0.7139 - acc: 0.7613 - val_loss: 1.3886 - val_acc: 0.4908

Epoch 00048: val_loss did not improve from 1.37241
Epoch 49/50
 - 379s - loss: 0.7206 - acc: 0.7481 - val_loss: 1.3886 - val_acc: 0.4908

Epoch 00049: val_loss did not improve from 1.37241

Epoch 00049: ReduceLROnPlateau reducing learning rate to 6.400000529538374e-08.
Epoch 50/50
 - 318s - loss: 0.7269 - acc: 0.7418 - val_loss: 1.3887 - val_acc: 0.4908

Epoch 00050: val_loss did not improve from 1.37241
In [28]:
hide_code
# Plot the training history
history_plot(gray_cb_history, 0)
In [29]:
hide_code
# Load the model with the best validation accuracy
gray_cb_model.load_weights('gray_cb_model.styles.hdf5')
# Calculate classification accuracy on the testing set
gray_cb_score = gray_cb_model.evaluate(x_test4, y_test4)
gray_cb_score
219/219 [==============================] - 17s 77ms/step
Out[29]:
[1.4009063330959513, 0.5068493183345011]

Grayscaled Images, Product Target

In [47]:
hide_code
def gray_cp_model():
    model = Sequential()
    
    # TODO: Define a model architecture
    model.add(Conv2D(32, (5, 5), padding='same', input_shape=x_train5.shape[1:]))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(Conv2D(196, (5, 5)))
    model.add(LeakyReLU(alpha=0.02))
    
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))

    model.add(GlobalMaxPooling2D())
    
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.02))
    model.add(Dropout(0.5)) 
    
    model.add(Dense(10))
    model.add(Activation('softmax'))
    
    # TODO: Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

gray_cp_model = gray_cp_model()
In [48]:
hide_code
# Create callbacks
gray_cp_checkpointer = ModelCheckpoint(filepath='gray_cp_model.styles.hdf5', 
                                       verbose=2, save_best_only=True)
gray_cp_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                         patience=5, verbose=2, factor=0.2)
In [49]:
hide_code
# Train the model
gray_cp_history = gray_cp_model.fit(x_train5, y_train5, 
                                    epochs=50, batch_size=16, verbose=2,
                                    validation_data=(x_valid5, y_valid5),
                                    callbacks=[gray_cp_checkpointer,gray_cp_lr_reduction])
Train on 1747 samples, validate on 218 samples
Epoch 1/50
 - 371s - loss: 2.1360 - acc: 0.2169 - val_loss: 2.0709 - val_acc: 0.2202

Epoch 00001: val_loss improved from inf to 2.07093, saving model to gray_cp_model.styles.hdf5
Epoch 2/50
 - 290s - loss: 1.8916 - acc: 0.3234 - val_loss: 1.8948 - val_acc: 0.3073

Epoch 00002: val_loss improved from 2.07093 to 1.89485, saving model to gray_cp_model.styles.hdf5
Epoch 3/50
 - 342s - loss: 1.7145 - acc: 0.3950 - val_loss: 1.8560 - val_acc: 0.3578

Epoch 00003: val_loss improved from 1.89485 to 1.85597, saving model to gray_cp_model.styles.hdf5
Epoch 4/50
 - 342s - loss: 1.6098 - acc: 0.4264 - val_loss: 1.5903 - val_acc: 0.4633

Epoch 00004: val_loss improved from 1.85597 to 1.59032, saving model to gray_cp_model.styles.hdf5
Epoch 5/50
 - 346s - loss: 1.5243 - acc: 0.4688 - val_loss: 1.8132 - val_acc: 0.3945

Epoch 00005: val_loss did not improve from 1.59032
Epoch 6/50
 - 400s - loss: 1.4407 - acc: 0.5083 - val_loss: 1.5208 - val_acc: 0.4862

Epoch 00006: val_loss improved from 1.59032 to 1.52080, saving model to gray_cp_model.styles.hdf5
Epoch 7/50
 - 293s - loss: 1.3583 - acc: 0.5255 - val_loss: 1.5967 - val_acc: 0.4541

Epoch 00007: val_loss did not improve from 1.52080
Epoch 8/50
 - 309s - loss: 1.3151 - acc: 0.5518 - val_loss: 1.5964 - val_acc: 0.4679

Epoch 00008: val_loss did not improve from 1.52080
Epoch 9/50
 - 295s - loss: 1.2669 - acc: 0.5690 - val_loss: 1.4624 - val_acc: 0.4679

Epoch 00009: val_loss improved from 1.52080 to 1.46242, saving model to gray_cp_model.styles.hdf5
Epoch 10/50
 - 308s - loss: 1.2031 - acc: 0.5816 - val_loss: 1.3757 - val_acc: 0.5229

Epoch 00010: val_loss improved from 1.46242 to 1.37565, saving model to gray_cp_model.styles.hdf5
Epoch 11/50
 - 327s - loss: 1.1205 - acc: 0.6119 - val_loss: 1.3693 - val_acc: 0.5275

Epoch 00011: val_loss improved from 1.37565 to 1.36926, saving model to gray_cp_model.styles.hdf5
Epoch 12/50
 - 341s - loss: 1.0869 - acc: 0.6274 - val_loss: 1.3466 - val_acc: 0.5505

Epoch 00012: val_loss improved from 1.36926 to 1.34658, saving model to gray_cp_model.styles.hdf5
Epoch 13/50
 - 336s - loss: 1.0576 - acc: 0.6377 - val_loss: 1.2851 - val_acc: 0.5917

Epoch 00013: val_loss improved from 1.34658 to 1.28508, saving model to gray_cp_model.styles.hdf5
Epoch 14/50
 - 315s - loss: 0.9997 - acc: 0.6525 - val_loss: 1.2693 - val_acc: 0.5596

Epoch 00014: val_loss improved from 1.28508 to 1.26931, saving model to gray_cp_model.styles.hdf5
Epoch 15/50
 - 306s - loss: 0.9710 - acc: 0.6629 - val_loss: 1.3003 - val_acc: 0.5596

Epoch 00015: val_loss did not improve from 1.26931
Epoch 16/50
 - 302s - loss: 0.9264 - acc: 0.6691 - val_loss: 1.2108 - val_acc: 0.6055

Epoch 00016: val_loss improved from 1.26931 to 1.21076, saving model to gray_cp_model.styles.hdf5
Epoch 17/50
 - 300s - loss: 0.8829 - acc: 0.6909 - val_loss: 1.4569 - val_acc: 0.5046

Epoch 00017: val_loss did not improve from 1.21076
Epoch 18/50
 - 293s - loss: 0.8284 - acc: 0.7098 - val_loss: 1.1751 - val_acc: 0.6376

Epoch 00018: val_loss improved from 1.21076 to 1.17512, saving model to gray_cp_model.styles.hdf5
Epoch 19/50
 - 272s - loss: 0.8346 - acc: 0.6949 - val_loss: 1.1383 - val_acc: 0.6147

Epoch 00019: val_loss improved from 1.17512 to 1.13827, saving model to gray_cp_model.styles.hdf5
Epoch 20/50
 - 313s - loss: 0.7810 - acc: 0.7092 - val_loss: 1.6391 - val_acc: 0.4862

Epoch 00020: val_loss did not improve from 1.13827
Epoch 21/50
 - 274s - loss: 0.7717 - acc: 0.7390 - val_loss: 1.3866 - val_acc: 0.5505

Epoch 00021: val_loss did not improve from 1.13827
Epoch 22/50
 - 295s - loss: 0.7453 - acc: 0.7418 - val_loss: 1.2654 - val_acc: 0.6101

Epoch 00022: val_loss did not improve from 1.13827
Epoch 23/50
 - 287s - loss: 0.7013 - acc: 0.7619 - val_loss: 1.2750 - val_acc: 0.6147

Epoch 00023: val_loss did not improve from 1.13827
Epoch 24/50
 - 300s - loss: 0.6866 - acc: 0.7642 - val_loss: 1.2467 - val_acc: 0.5917

Epoch 00024: val_loss did not improve from 1.13827

Epoch 00024: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
Epoch 25/50
 - 297s - loss: 0.4870 - acc: 0.8271 - val_loss: 1.0372 - val_acc: 0.6651

Epoch 00025: val_loss improved from 1.13827 to 1.03722, saving model to gray_cp_model.styles.hdf5
Epoch 26/50
 - 286s - loss: 0.4460 - acc: 0.8517 - val_loss: 1.0901 - val_acc: 0.6651

Epoch 00026: val_loss did not improve from 1.03722
Epoch 27/50
 - 315s - loss: 0.4039 - acc: 0.8672 - val_loss: 1.0993 - val_acc: 0.6514

Epoch 00027: val_loss did not improve from 1.03722
Epoch 28/50
 - 283s - loss: 0.3994 - acc: 0.8643 - val_loss: 1.0676 - val_acc: 0.6881

Epoch 00028: val_loss did not improve from 1.03722
Epoch 29/50
 - 292s - loss: 0.3837 - acc: 0.8729 - val_loss: 1.1005 - val_acc: 0.6789

Epoch 00029: val_loss did not improve from 1.03722
Epoch 30/50
 - 296s - loss: 0.3847 - acc: 0.8661 - val_loss: 1.0949 - val_acc: 0.6697

Epoch 00030: val_loss did not improve from 1.03722

Epoch 00030: ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05.
Epoch 31/50
 - 301s - loss: 0.3508 - acc: 0.8827 - val_loss: 1.0690 - val_acc: 0.6927

Epoch 00031: val_loss did not improve from 1.03722
Epoch 32/50
 - 299s - loss: 0.3471 - acc: 0.8855 - val_loss: 1.0754 - val_acc: 0.6927

Epoch 00032: val_loss did not improve from 1.03722
Epoch 33/50
 - 278s - loss: 0.3387 - acc: 0.8849 - val_loss: 1.0779 - val_acc: 0.6972

Epoch 00033: val_loss did not improve from 1.03722
Epoch 34/50
 - 270s - loss: 0.3317 - acc: 0.8867 - val_loss: 1.0614 - val_acc: 0.6927

Epoch 00034: val_loss did not improve from 1.03722
Epoch 35/50
 - 273s - loss: 0.3111 - acc: 0.8952 - val_loss: 1.0769 - val_acc: 0.6927

Epoch 00035: val_loss did not improve from 1.03722

Epoch 00035: ReduceLROnPlateau reducing learning rate to 8.000000525498762e-06.
Epoch 36/50
 - 267s - loss: 0.3200 - acc: 0.8884 - val_loss: 1.0705 - val_acc: 0.6927

Epoch 00036: val_loss did not improve from 1.03722
Epoch 37/50
 - 268s - loss: 0.3166 - acc: 0.8930 - val_loss: 1.0664 - val_acc: 0.6881

Epoch 00037: val_loss did not improve from 1.03722
Epoch 38/50
 - 264s - loss: 0.3202 - acc: 0.8890 - val_loss: 1.0676 - val_acc: 0.6881

Epoch 00038: val_loss did not improve from 1.03722
Epoch 39/50
 - 262s - loss: 0.3146 - acc: 0.8941 - val_loss: 1.0672 - val_acc: 0.6881

Epoch 00039: val_loss did not improve from 1.03722
Epoch 40/50
 - 262s - loss: 0.3185 - acc: 0.8907 - val_loss: 1.0654 - val_acc: 0.6972

Epoch 00040: val_loss did not improve from 1.03722

Epoch 00040: ReduceLROnPlateau reducing learning rate to 1.6000001778593287e-06.
Epoch 41/50
 - 260s - loss: 0.3137 - acc: 0.8964 - val_loss: 1.0669 - val_acc: 0.6972

Epoch 00041: val_loss did not improve from 1.03722
Epoch 42/50
 - 260s - loss: 0.3245 - acc: 0.8924 - val_loss: 1.0670 - val_acc: 0.6972

Epoch 00042: val_loss did not improve from 1.03722
Epoch 43/50
 - 261s - loss: 0.3182 - acc: 0.8981 - val_loss: 1.0673 - val_acc: 0.6972

Epoch 00043: val_loss did not improve from 1.03722
Epoch 44/50
 - 260s - loss: 0.3209 - acc: 0.8975 - val_loss: 1.0668 - val_acc: 0.7018

Epoch 00044: val_loss did not improve from 1.03722
Epoch 45/50
 - 261s - loss: 0.3106 - acc: 0.8952 - val_loss: 1.0670 - val_acc: 0.7018

Epoch 00045: val_loss did not improve from 1.03722

Epoch 00045: ReduceLROnPlateau reducing learning rate to 3.200000264769187e-07.
Epoch 46/50
 - 258s - loss: 0.3172 - acc: 0.8935 - val_loss: 1.0670 - val_acc: 0.7018

Epoch 00046: val_loss did not improve from 1.03722
Epoch 47/50
 - 260s - loss: 0.3029 - acc: 0.8998 - val_loss: 1.0670 - val_acc: 0.7018

Epoch 00047: val_loss did not improve from 1.03722
Epoch 48/50
 - 259s - loss: 0.3069 - acc: 0.9050 - val_loss: 1.0669 - val_acc: 0.7018

Epoch 00048: val_loss did not improve from 1.03722
Epoch 49/50
 - 260s - loss: 0.3102 - acc: 0.8981 - val_loss: 1.0670 - val_acc: 0.7018

Epoch 00049: val_loss did not improve from 1.03722
Epoch 50/50
 - 258s - loss: 0.3165 - acc: 0.8958 - val_loss: 1.0670 - val_acc: 0.6972

Epoch 00050: val_loss did not improve from 1.03722

Epoch 00050: ReduceLROnPlateau reducing learning rate to 6.400000529538374e-08.
In [50]:
hide_code
# Plot the training history
history_plot(gray_cp_history, 0)
In [51]:
hide_code
# Load the model with the best validation accuracy
gray_cp_model.load_weights('gray_cp_model.styles.hdf5')
# Calculate classification accuracy on the testing set
gray_cp_score = gray_cp_model.evaluate(x_test5, y_test5)
gray_cp_score
219/219 [==============================] - 52s 239ms/step
Out[51]:
[1.0037635834249732, 0.6575342482083464]

Step 5. Create Multi-Label Classification Models

Color Images, Multi-Label Target

In [52]:
hide_code
def multi_model():    
    model_input = Input(shape=(150, 150, 3))
    x = BatchNormalization()(model_input)
    # TODO: Define a model architecture
    x = Conv2D(32, (5, 5),padding='same')(model_input)
    x = LeakyReLU(alpha=0.02)(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)    
    x = Dropout(0.25)(x)
    
    x = Conv2D(196, (5, 5), padding='same')(x)
    x = LeakyReLU(alpha=0.02)(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)    
    x = Dropout(0.25)(x)
              
    x = GlobalMaxPooling2D()(x)
    
    x = Dense(512)(x)
    x = LeakyReLU(alpha=0.02)(x)
    x = Dropout(0.5)(x)
    
    y1 = Dense(7, activation='softmax')(x)
    y2 = Dense(10, activation='softmax')(x)
    
    model = Model(inputs=model_input, outputs=[y1, y2])
    
    # TODO: Compile the model
    model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
    return model

multi_model = multi_model()
In [53]:
hide_code
# Create callbacks
multi_checkpointer = ModelCheckpoint(filepath='multi_model.styles.hdf5', 
                                     verbose=2, save_best_only=True)
multi_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                       patience=5, verbose=2, factor=0.2)
In [54]:
hide_code
# Train the model
multi_history = multi_model.fit(x_train3, y_train3_list, 
                                validation_data=(x_valid3, y_valid3_list), 
                                epochs=30, batch_size=16, verbose=0, 
                                callbacks=[multi_checkpointer,multi_lr_reduction])
Epoch 00001: val_loss improved from inf to 3.95420, saving model to multi_model.styles.hdf5

Epoch 00002: val_loss did not improve from 3.95420

Epoch 00003: val_loss improved from 3.95420 to 3.75990, saving model to multi_model.styles.hdf5

Epoch 00004: val_loss improved from 3.75990 to 3.65722, saving model to multi_model.styles.hdf5

Epoch 00005: val_loss improved from 3.65722 to 3.36591, saving model to multi_model.styles.hdf5

Epoch 00006: val_loss improved from 3.36591 to 3.32579, saving model to multi_model.styles.hdf5

Epoch 00007: val_loss improved from 3.32579 to 3.31468, saving model to multi_model.styles.hdf5

Epoch 00008: val_loss improved from 3.31468 to 3.25431, saving model to multi_model.styles.hdf5

Epoch 00009: val_loss improved from 3.25431 to 3.24478, saving model to multi_model.styles.hdf5

Epoch 00010: val_loss improved from 3.24478 to 3.09012, saving model to multi_model.styles.hdf5

Epoch 00011: val_loss did not improve from 3.09012

Epoch 00012: val_loss did not improve from 3.09012

Epoch 00013: val_loss did not improve from 3.09012

Epoch 00014: val_loss improved from 3.09012 to 2.84418, saving model to multi_model.styles.hdf5

Epoch 00015: val_loss did not improve from 2.84418

Epoch 00016: val_loss improved from 2.84418 to 2.82884, saving model to multi_model.styles.hdf5

Epoch 00017: val_loss did not improve from 2.82884

Epoch 00018: val_loss did not improve from 2.82884

Epoch 00019: val_loss improved from 2.82884 to 2.57165, saving model to multi_model.styles.hdf5

Epoch 00020: val_loss did not improve from 2.57165

Epoch 00021: val_loss did not improve from 2.57165

Epoch 00022: val_loss did not improve from 2.57165

Epoch 00023: val_loss did not improve from 2.57165

Epoch 00024: val_loss improved from 2.57165 to 2.53429, saving model to multi_model.styles.hdf5

Epoch 00025: val_loss did not improve from 2.53429

Epoch 00026: val_loss improved from 2.53429 to 2.50153, saving model to multi_model.styles.hdf5

Epoch 00027: val_loss did not improve from 2.50153

Epoch 00028: val_loss did not improve from 2.50153

Epoch 00029: val_loss did not improve from 2.50153

Epoch 00030: val_loss did not improve from 2.50153
In [55]:
hide_code
# Load the model with the best validation accuracy
multi_model.load_weights('multi_model.styles.hdf5')
# Calculate classification accuracy on the testing set
multi_scores = multi_model.evaluate(x_test3, y_test3_list, verbose=0)

print("Scores: \n" , (multi_scores))
print("First label. Accuracy: %.2f%%" % (multi_scores[3]*100))
print("Second label. Accuracy: %.2f%%" % (multi_scores[4]*100))
Scores: 
 [2.6864930941089646, 1.4126377318003407, 1.2738553448899153, 0.5251141557954762, 0.6255707740783691]
First label. Accuracy: 52.51%
Second label. Accuracy: 62.56%

Grayscaled Images, Multi-Label Target

In [56]:
hide_code
def gray_multi_model():    
    model_input = Input(shape=(150, 150, 1))
    x = BatchNormalization()(model_input)
    # TODO: Define a model architecture
    x = Conv2D(32, (5, 5), padding='same')(model_input)
    x = LeakyReLU(alpha=0.02)(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)    
    x = Dropout(0.25)(x)
    
    x = Conv2D(196, (5, 5))(x) 
    x = LeakyReLU(alpha=0.02)(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)    
    x = Dropout(0.25)(x)
              
    x = GlobalMaxPooling2D()(x)
    
    x = Dense(512)(x)
    x = LeakyReLU(alpha=0.02)(x)
    x = Dropout(0.5)(x)
    
    y1 = Dense(7, activation='softmax')(x)
    y2 = Dense(10, activation='softmax')(x)
       
    model = Model(inputs=model_input, outputs=[y1, y2])
    # TODO: Compile the model

    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])   
    return model

gray_multi_model = gray_multi_model()
In [57]:
hide_code
# Create callbacks
gray_multi_checkpointer = ModelCheckpoint(filepath='gray_multi_model.styles.hdf5', 
                                          verbose=2, save_best_only=True)
gray_multi_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                            patience=5, verbose=2, factor=0.2)
In [59]:
hide_code
# Train the model
gray_multi_history = gray_multi_model.fit(x_train6, y_train6_list, 
                                          validation_data=(x_valid6, y_valid6_list), 
                                          epochs=30, batch_size=16, verbose=0, 
                                          callbacks=[gray_multi_checkpointer,
                                                     gray_multi_lr_reduction])
Epoch 00001: val_loss improved from inf to 3.71692, saving model to gray_multi_model.styles.hdf5

Epoch 00002: val_loss improved from 3.71692 to 3.56347, saving model to gray_multi_model.styles.hdf5

Epoch 00003: val_loss improved from 3.56347 to 3.53995, saving model to gray_multi_model.styles.hdf5

Epoch 00004: val_loss improved from 3.53995 to 3.29303, saving model to gray_multi_model.styles.hdf5

Epoch 00005: val_loss did not improve from 3.29303

Epoch 00006: val_loss did not improve from 3.29303

Epoch 00007: val_loss improved from 3.29303 to 3.15776, saving model to gray_multi_model.styles.hdf5

Epoch 00008: val_loss did not improve from 3.15776

Epoch 00009: val_loss did not improve from 3.15776

Epoch 00010: val_loss improved from 3.15776 to 3.12744, saving model to gray_multi_model.styles.hdf5

Epoch 00011: val_loss improved from 3.12744 to 3.09433, saving model to gray_multi_model.styles.hdf5

Epoch 00012: val_loss improved from 3.09433 to 2.94499, saving model to gray_multi_model.styles.hdf5

Epoch 00013: val_loss improved from 2.94499 to 2.85061, saving model to gray_multi_model.styles.hdf5

Epoch 00014: val_loss did not improve from 2.85061

Epoch 00015: val_loss did not improve from 2.85061

Epoch 00016: val_loss improved from 2.85061 to 2.77148, saving model to gray_multi_model.styles.hdf5

Epoch 00017: val_loss did not improve from 2.77148

Epoch 00018: val_loss did not improve from 2.77148

Epoch 00019: val_loss did not improve from 2.77148

Epoch 00020: val_loss did not improve from 2.77148

Epoch 00021: val_loss improved from 2.77148 to 2.64721, saving model to gray_multi_model.styles.hdf5

Epoch 00022: val_loss did not improve from 2.64721

Epoch 00023: val_loss did not improve from 2.64721

Epoch 00024: val_loss did not improve from 2.64721

Epoch 00025: val_loss did not improve from 2.64721

Epoch 00026: val_loss did not improve from 2.64721

Epoch 00026: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.

Epoch 00027: val_loss improved from 2.64721 to 2.36235, saving model to gray_multi_model.styles.hdf5

Epoch 00028: val_loss did not improve from 2.36235

Epoch 00029: val_loss improved from 2.36235 to 2.35639, saving model to gray_multi_model.styles.hdf5

Epoch 00030: val_loss did not improve from 2.35639
In [60]:
hide_code
# Load the model with the best validation accuracy
gray_multi_model.load_weights('gray_multi_model.styles.hdf5')
# Calculate classification accuracy on the testing set
gray_multi_scores = gray_multi_model.evaluate(x_test6, y_test6_list, verbose=0)

print("Scores: \n" , (gray_multi_scores))
print("First label. Accuracy: %.2f%%" % (gray_multi_scores[3]*100))
print("Second label. Accuracy: %.2f%%" % (gray_multi_scores[4]*100))
Scores: 
 [2.3777328179851516, 1.2813794035889787, 1.0963534318148818, 0.5433789959781246, 0.6392694033988534]
First label. Accuracy: 54.34%
Second label. Accuracy: 63.93%