Deep Learning

Practice Projects

P1: Neural Networks for Regression

In this project, we'll evaluate the performance and predictive power of neural networks on regression tasks. The models will be trained and tested on data collected from homes in suburbs of Boston, Massachusetts.

Origin: This dataset was taken from the StatLib library, which is maintained at Carnegie Mellon University.

Creators: Harrison, D. and Rubinfeld, D.L.

Data Set Information: Concerns housing values in suburbs of Boston.

Attribute Information:

  • CRIM: per capita crime rate by town
  • ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
  • INDUS: proportion of non-retail business acres per town
  • CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
  • NOX: nitric oxides concentration (parts per 10 million)
  • RM: average number of rooms per dwelling
  • AGE: proportion of owner-occupied units built prior to 1940
  • DIS: weighted distances to five Boston employment centres
  • RAD: index of accessibility to radial highways
  • TAX: full-value property-tax rate per 10,000 USD
  • PTRATIO: pupil-teacher ratio by town
  • B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
  • LSTAT: % lower status of the population
  • MEDV: Median value of owner-occupied homes in 1000 USD

The Boston housing data were collected in 1978; each of the 506 entries represents aggregated data on the 14 features above for homes in various suburbs.

Step 0. Style and Libraries

Let's choose a style for the Jupyter notebook and import the software libraries. The hide_code marker at the top of a cell, together with the script below, hides that code cell.

In [ ]:
%%html
<style>
@import url('https://fonts.googleapis.com/css?family=Orbitron|Roboto');
body {background-color: aliceblue;} 
a {color: #4876ff; font-family: 'Roboto';} 
h1 {color: #348ABD; font-family: 'Orbitron'; text-shadow: 4px 4px 4px #ccc;} 
h2, h3 {color: slategray; font-family: 'Roboto'; text-shadow: 4px 4px 4px #ccc;}
h4 {color: #348ABD; font-family: 'Orbitron';}
span {text-shadow: 4px 4px 4px #ccc;}
div.output_prompt, div.output_area pre {color: slategray;}
div.input_prompt, div.output_subarea {color: #4876ff;}      
div.output_stderr pre {background-color: aliceblue;}  
div.output_stderr {background-color: slategrey;}                        
</style>
<script>
code_show = true; 
function code_display() {
    if (code_show) {
        $('div.input').each(function(id) {
            if (id == 0 || $(this).html().indexOf('hide_code') > -1) {$(this).hide();}
        });
        $('div.output_prompt').css('opacity', 0);
    } else {
        $('div.input').each(function(id) {$(this).show();});
        $('div.output_prompt').css('opacity', 1);
    };
    code_show = !code_show;
} 
$(document).ready(code_display);
</script>
<form action="javascript: code_display()">
<input style="color: #348ABD; background: aliceblue; opacity: 0.8;"
type="submit" value="Click to display or hide code cells">
</form> 
In [ ]:
hide_code = ''
import numpy as np 
import pandas as pd

import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline

import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="matplotlib")

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

from keras.datasets import boston_housing
from keras.utils import to_categorical
from keras.preprocessing import image as keras_image
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau

from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, GlobalAveragePooling1D
from keras.layers import Activation, Flatten, Dropout, BatchNormalization
from keras.layers import Conv1D, MaxPooling1D, GlobalMaxPooling1D
from keras.layers.advanced_activations import PReLU, LeakyReLU
In [ ]:
hide_code
# Plot the Neural network fitting history
def history_plot(fit_history, n):
    plt.figure(figsize=(18, 12))
    
    plt.subplot(211)
    plt.plot(fit_history.history['loss'][n:], color='slategray', label = 'train')
    plt.plot(fit_history.history['val_loss'][n:], color='#4876ff', label = 'valid')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.legend()
    plt.title('Loss Function');  
    
    plt.subplot(212)
    plt.plot(fit_history.history['mean_absolute_error'][n:], color='slategray', label = 'train')
    plt.plot(fit_history.history['val_mean_absolute_error'][n:], color='#4876ff', label = 'valid')
    plt.xlabel("Epochs")
    plt.ylabel("MAE")    
    plt.legend()
    plt.title('Mean Absolute Error');

Step 1. Load and Explore the Data

This dataset is very popular for studying regression and can be downloaded in several ways. Let's demonstrate the easiest of them.

In [ ]:
hide_code
# Load the sklearn version
boston_data = datasets.load_boston()
boston_df = pd.DataFrame(boston_data.data, columns=boston_data.feature_names)
boston_df['MEDV'] = boston_data.target

# Load the keras version
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()
# Divide the test set into two subsets.
x_valid, y_valid = x_test[:51], y_test[:51]
x_test, y_test = x_test[51:], y_test[51:]
In [ ]:
hide_code
# Display example rows
boston_df.head()
In [ ]:
hide_code
# Display the correlation table
pearson = boston_df.corr(method='pearson')
corr_with_prices = pearson.iloc[-1][:-1]
# TODO: Arrange the variables in descending order of correlation (by absolute values) with the target
#       and display the results
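
One possible way to complete the TODO above (a minimal sketch; the variables pearson and corr_with_prices come from the cell above):

In [ ]:
hide_code
# A possible solution: order the features by the absolute value
# of their correlation with the target, keeping the original signs
corr_order = corr_with_prices.abs().sort_values(ascending=False).index
corr_with_prices[corr_order]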
In [ ]:
hide_code
# Print the shapes of the datasets
print ("Training features' shape:", x_train.shape)
print ("Training target's shape:", y_train.shape)
print ("Validation features' shape:", x_valid.shape)
print ("Validation target's shape:", y_valid.shape)
print ("Testing features' shape:", x_test.shape)
print ("Testing target's shape:", y_test.shape)
In [ ]:
hide_code
# Plot the target distributions
plt.style.use('seaborn-whitegrid')
plt.figure(1, figsize=(18, 6))
plt.subplot(121)
sns.distplot(y_train, color='#4876ff', bins=30)
plt.ylabel("Distribution")
plt.xlabel("Prices")
plt.subplot(122)
sns.distplot(np.log(y_train), color='#4876ff', bins=30)
plt.ylabel("Distribution")
plt.xlabel("Logarithmic Prices")
plt.suptitle('Boston Housing Data', fontsize=15);

Step 2. Build the Neural Networks

We will build and compare three types of neural networks: a multilayer perceptron (MLP), a convolutional neural network (CNN), and a recurrent neural network (RNN).

Multilayer Perceptron (MLP)

Define a model architecture and compile the model.

In [ ]:
hide_code
def mlp_model():
    model = Sequential()
    # TODO: Create the sequential MLP model  

    # TODO: Compile the model    
    # model.compile(loss=, optimizer=, metrics=)
    return model

mlp_model = mlp_model()
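
As a reference, here is one possible MLP architecture (a sketch, not the only correct solution): a small stack of dense layers with dropout, compiled with the MSE loss and the mean absolute error metric so that history_plot can read the mean_absolute_error keys.

In [ ]:
hide_code
# One possible MLP for the 13 input features (a sketch, not the required answer)
def example_mlp_model():
    model = Sequential()
    model.add(Dense(52, activation='relu', input_dim=13))
    model.add(Dense(52, activation='relu'))
    model.add(Dropout(0.1))
    model.add(Dense(1))
    # MSE suits regression; the metric name matches the keys in history_plot
    model.compile(loss='mse', optimizer='adam', metrics=['mean_absolute_error'])
    return model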

Run the cells below to fit the model and save the best results. Choose the fitting parameters.

In [ ]:
hide_code
# Create the checkpointer for saving the best results
mlp_checkpointer = ModelCheckpoint(filepath='weights.best.mlp.hdf5', 
                                   verbose=0, save_best_only=True)
# Create the reducer for learning rates
mlp_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                     patience=10, verbose=2, factor=0.75)
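
For the fitting cell below, one reasonable starting point (an assumption, not a tuned setting) is several hundred epochs with a small batch size; the checkpointer keeps the best weights in any case.

In [ ]:
hide_code
# Example starting values (assumptions, not tuned results)
epochs = 300
batch_size = 16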
In [ ]:
hide_code
# TODO: Define parameters
# epochs = 
# batch_size = 

# Fit the model
mlp_history = mlp_model.fit(x_train, y_train, 
                            validation_data=(x_valid, y_valid),
                            epochs=epochs, batch_size=batch_size, verbose=0, 
                            callbacks=[mlp_checkpointer,mlp_lr_reduction])

Display the fitting history and evaluate the model.

In [ ]:
hide_code
# Define the starting history point
n = 2
# Display training history
history_plot(mlp_history, n)
In [ ]:
hide_code
# Load the best model results 
mlp_model.load_weights('weights.best.mlp.hdf5')
# Create predictions
y_train_mlp = mlp_model.predict(x_train)
y_valid_mlp = mlp_model.predict(x_valid)
y_test_mlp = mlp_model.predict(x_test)
# Display R2 score
score_train_mlp = r2_score(y_train, y_train_mlp)
score_valid_mlp = r2_score(y_valid, y_valid_mlp)
score_test_mlp = r2_score(y_test, y_test_mlp)
print ('Train R2 score:', score_train_mlp)
print ('Valid R2 score:', score_valid_mlp)
print ('Test R2 score:', score_test_mlp)

Convolutional Neural Network (CNN)

In [ ]:
hide_code
def cnn_model():
    model = Sequential()
    # TODO: Create the sequential CNN model        


    # TODO: Compile the model    
    # model.compile(loss=, optimizer=, metrics=)
    return model

cnn_model = cnn_model()
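
As a reference, here is one possible Conv1D architecture (a sketch that assumes the 13 features are treated as a length-13 sequence with one channel, matching the reshape(-1, 13, 1) in the fitting cell below).

In [ ]:
hide_code
# One possible CNN for inputs of shape (13, 1) (a sketch, not the required answer)
def example_cnn_model():
    model = Sequential()
    model.add(Conv1D(32, 3, padding='valid', input_shape=(13, 1)))
    model.add(LeakyReLU(alpha=0.02))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam', metrics=['mean_absolute_error'])
    return model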

Run the cells below to fit the model and save the best results. Choose the fitting parameters.

In [ ]:
hide_code
# Create the checkpointer for saving the best results
cnn_checkpointer = ModelCheckpoint(filepath='weights.best.cnn.hdf5', 
                                   verbose=0, save_best_only=True)

# Create the reducer for learning rates
cnn_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                     patience=10, verbose=2, factor=0.7)
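
As before, the values below are only a suggested starting point for the fitting cell that follows.

In [ ]:
hide_code
# Suggested starting values (assumptions, not tuned results)
epochs = 200
batch_size = 16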
In [ ]:
hide_code
# TODO: Define parameters
# epochs = 
# batch_size = 

# Fit the model
cnn_history = cnn_model.fit(x_train.reshape(-1, 13, 1), y_train, 
                            validation_data=(x_valid.reshape(-1, 13, 1), y_valid),
                            epochs=epochs, batch_size=batch_size, verbose=0, 
                            callbacks=[cnn_checkpointer,cnn_lr_reduction])

Display the fitting history and evaluate the model.

In [ ]:
hide_code
# Define the starting history point
n = 2
# Display training history
history_plot(cnn_history, n)
In [ ]:
hide_code
# Load the best model results 
cnn_model.load_weights('weights.best.cnn.hdf5')
# Create predictions
y_train_cnn = cnn_model.predict(x_train.reshape(-1, 13, 1))
y_valid_cnn = cnn_model.predict(x_valid.reshape(-1, 13, 1))
y_test_cnn = cnn_model.predict(x_test.reshape(-1, 13, 1))
# Display R2 score
score_train_cnn = r2_score(y_train, y_train_cnn)
score_valid_cnn = r2_score(y_valid, y_valid_cnn)
score_test_cnn = r2_score(y_test, y_test_cnn)
print ('Train R2 score:', score_train_cnn)
print ('Valid R2 score:', score_valid_cnn)
print ('Test R2 score:', score_test_cnn)

Recurrent Neural Network (RNN)

Define a model architecture and compile the model.

In [ ]:
hide_code
def rnn_model():
    model = Sequential()
    # TODO: Create the sequential RNN model

    # TODO: Compile the model
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])    
    return model 

rnn_model = rnn_model()
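
As a reference, here is one possible recurrent architecture (a sketch that assumes each example is a single timestep with 13 features, matching the reshape(-1, 1, 13) in the fitting cell below).

In [ ]:
hide_code
# One possible RNN for inputs of shape (1, 13) (a sketch, not the required answer)
def example_rnn_model():
    model = Sequential()
    model.add(LSTM(52, return_sequences=True, input_shape=(1, 13)))
    model.add(LSTM(52))
    model.add(Dense(1))
    model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
    return model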

Run the cells below to fit the model and save the best results. Choose the fitting parameters.

In [ ]:
hide_code
# Create the checkpointer for saving the best results
rnn_checkpointer = ModelCheckpoint(filepath='weights.best.rnn.hdf5', 
                                   verbose=0, save_best_only=True)

# Create the reducer for learning rates
rnn_lr_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                     patience=10, verbose=2, factor=0.7)
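
Again, only a suggested starting point for the fitting cell that follows; smaller batches are a common choice for recurrent models on small datasets.

In [ ]:
hide_code
# Suggested starting values (assumptions, not tuned results)
epochs = 100
batch_size = 8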
In [ ]:
hide_code
# TODO: Define parameters
# epochs = 
# batch_size = 

# Fit the model
rnn_history = rnn_model.fit(x_train.reshape(-1, 1, 13), y_train, 
                            validation_data=(x_valid.reshape(-1, 1, 13), y_valid),
                            epochs=epochs, batch_size=batch_size, verbose=0, 
                            callbacks=[rnn_checkpointer,rnn_lr_reduction])

Display the fitting history and evaluate the model.

In [ ]:
hide_code
# Define the starting history point
n = 2
# Display training history
history_plot(rnn_history, n)
In [ ]:
hide_code
# Load the best model results 
rnn_model.load_weights('weights.best.rnn.hdf5')
# Create predictions
y_train_rnn = rnn_model.predict(x_train.reshape(-1, 1, 13))
y_valid_rnn = rnn_model.predict(x_valid.reshape(-1, 1, 13))
y_test_rnn = rnn_model.predict(x_test.reshape(-1, 1, 13))
# Display R2 score
score_train_rnn = r2_score(y_train, y_train_rnn)
score_valid_rnn = r2_score(y_valid, y_valid_rnn)
score_test_rnn = r2_score(y_test, y_test_rnn)
print ('Train R2 score:', score_train_rnn)
print ('Valid R2 score:', score_valid_rnn)
print ('Test R2 score:', score_test_rnn)

Step 3. Compare Predictions

Run the cells below to visualize the quality of predictions.

In [ ]:
hide_code
# Plot predicted values and real data points
plt.figure(figsize = (18, 6))
plt.plot(y_train[:50], color = 'black', label='Real Data')

plt.plot(y_train_mlp[:50], label='MLP')
plt.plot(y_train_cnn[:50], label='CNN')
plt.plot(y_train_rnn[:50], label='RNN')

plt.xlabel("Data Points")
plt.ylabel("Predicted and Real Target Values")
plt.legend()
plt.title("Training Set; Neural Network Predictions vs Real Data");
In [ ]:
hide_code
# Plot predicted values and real data points
plt.figure(figsize = (18, 6))
plt.plot(y_valid, color = 'black', label='Real Data')

plt.plot(y_valid_mlp, label='MLP')
plt.plot(y_valid_cnn, label='CNN')
plt.plot(y_valid_rnn, label='RNN')

plt.xlabel("Data Points")
plt.ylabel("Predicted and Real Target Values")
plt.legend()
plt.title("Validating Set; Neural Network Predictions vs Real Data");
In [ ]:
hide_code
# Plot predicted values and real data points
plt.figure(figsize = (18, 6))
plt.plot(y_test, color = 'black', label='Real Data')

plt.plot(y_test_mlp, label='MLP')
plt.plot(y_test_cnn, label='CNN')
plt.plot(y_test_rnn, label='RNN')

plt.xlabel("Data Points")
plt.ylabel("Predicted and Real Target Values")
plt.legend()
plt.title("Testing Set; Neural Network Predictions vs Real Data");