%%html
<style>
@import url('https://fonts.googleapis.com/css?family=Orbitron|Roboto');
body {background-color: #add8e6;}
a {color: darkblue; font-family: 'Roboto';}
h1 {color: steelblue; font-family: 'Orbitron'; text-shadow: 4px 4px 4px #aaa;}
h2, h3 {color: #483d8b; font-family: 'Orbitron'; text-shadow: 4px 4px 4px #aaa;}
h4 {color: slategray; font-family: 'Roboto';}
span {text-shadow: 4px 4px 4px #ccc;}
div.output_prompt, div.output_area pre {color: #483d8b;}
div.input_prompt, div.output_subarea {color: darkblue;}
div.output_stderr pre {background-color: #add8e6;}
div.output_stderr {background-color: #483d8b;}
</style>
# Imports
import numpy as np
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
This dataset comes preloaded with Keras, so one simple command will get us training and testing data. The num_words parameter controls how many of the most frequent words we keep. We've set it to 1000, but feel free to experiment.
# Loading the data (it's preloaded in Keras)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)
print(x_train.shape)
print(x_test.shape)
Notice that the data has already been pre-processed: each word is mapped to an integer index, and each review comes in as a sequence of those indices. For example, if the word 'the' has index 1 in our dictionary, then every occurrence of 'the' in a review shows up as a 1 in that sequence.
The labels come as 1's and 0's, where 1 means the review's sentiment is positive and 0 means it is negative.
print(x_train[0])
print(y_train[0])
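To make the index mapping concrete, here is an optional sketch (an illustration, not part of the original exercise) that decodes the first review back into words with imdb.get_word_index(); Keras reserves indices 0-2, so the stored indices are offset by 3.
# Optional sketch: decode the first review back into words.
# imdb.get_word_index() maps words to indices; Keras reserves indices 0-2
# (padding, start-of-sequence, unknown), so stored indices are offset by 3.
word_index = imdb.get_word_index()
index_to_word = {index + 3: word for word, index in word_index.items()}
print(' '.join(index_to_word.get(i, '?') for i in x_train[0]))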
Here, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.
# One-hot encoding the input into vectors, each of length 1000
tokenizer = Tokenizer(num_words=1000)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print(x_train[0])
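If you want to see exactly what sequences_to_matrix does, here is a tiny toy example (with made-up sequences, not taken from the dataset): in 'binary' mode, entry j of each row is 1 if index j appears anywhere in that sequence.
# Toy example: 'binary' mode sets entry j to 1 whenever index j occurs
# in the sequence, regardless of how many times it appears.
toy_tokenizer = Tokenizer(num_words=6)
print(toy_tokenizer.sequences_to_matrix([[1, 3, 3], [2, 5]], mode='binary'))
# [[0. 1. 0. 1. 0. 0.]
#  [0. 0. 1. 0. 0. 1.]]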
And we'll also one-hot encode the output.
# One-hot encoding the output
num_classes = 2
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
print(y_test.shape)
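As a quick sanity check (a toy example), to_categorical simply maps each integer label to a row with a 1 in that position:
# Toy example: label 0 -> [1, 0], label 1 -> [0, 1]
print(keras.utils.to_categorical([0, 1, 1], 2))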
Build a model here using Sequential. Feel free to experiment with different layers and sizes! Also, experiment with adding dropout to reduce overfitting.
# TODO: Build the model architecture
# TODO: Compile the model using a loss function and an optimizer.
def mlp_mc_model():
    # Fully connected network: 1000-dim one-hot input -> 2-way softmax output,
    # with light dropout after each hidden layer to reduce overfitting
    model = Sequential()
    model.add(Dense(1024, activation='relu', input_shape=(1000,)))
    model.add(Dropout(0.1))
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.1))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.1))
    model.add(Dense(2, activation='softmax'))
    model.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

mlp_mc_model = mlp_mc_model()
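Before training, it can help to print the layer output shapes and parameter counts:
# Show layer output shapes and parameter counts
mlp_mc_model.summary()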
Run the model here. Experiment with different batch sizes and numbers of epochs!
# TODO: Run the model. Feel free to experiment with different batch sizes and number of epochs.
fit_mlp = mlp_mc_model.fit(x_train, y_train,
                           validation_data=(x_test, y_test),
                           epochs=2, batch_size=128, verbose=2)
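If you try many more epochs, one option (a sketch, not required for the exercise) is an EarlyStopping callback so training stops once the validation loss stops improving; pass it to fit via callbacks=[early_stop].
# Optional sketch: stop training when validation loss stops improving.
# Pass callbacks=[early_stop] to model.fit when experimenting with more epochs.
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2)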
plt.figure(figsize=(18, 4))
plt.plot(fit_mlp.history['loss'], '-o', c='steelblue', lw=1, label='train')
plt.plot(fit_mlp.history['val_loss'], '-o', c='darkblue', lw=1, label='test')
plt.legend()
plt.title('MLP Loss');
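You can plot the accuracy curves the same way. One caveat: the history key is 'acc' in older Keras versions and 'accuracy' in newer ones, so the sketch below checks which one is present.
# Accuracy curves; the history key name depends on the Keras version.
acc_key = 'acc' if 'acc' in fit_mlp.history else 'accuracy'
plt.figure(figsize=(18, 4))
plt.plot(fit_mlp.history[acc_key], '-o', c='steelblue', lw=1, label='train')
plt.plot(fit_mlp.history['val_' + acc_key], '-o', c='darkblue', lw=1, label='test')
plt.legend()
plt.title('MLP Accuracy');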
This will give you the accuracy of the model, as evaluated on the testing set. Can you get something over 85%?
score = mlp_mc_model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: ", score[1])