
FASHION MNIST with Python (DAY 9) - MLP using reused variables

DeepStat · 2018. 9. 1. 07:29

FASHION MNIST with Python (DAY 9)

DATA SOURCE : https://www.kaggle.com/zalando-research/fashionmnist (Kaggle, Fashion MNIST)

FASHION MNIST with Python (DAY 1) : http://deepstat.tistory.com/35

FASHION MNIST with Python (DAY 2) : http://deepstat.tistory.com/36

FASHION MNIST with Python (DAY 3) : http://deepstat.tistory.com/37

FASHION MNIST with Python (DAY 4) : http://deepstat.tistory.com/38

FASHION MNIST with Python (DAY 5) : http://deepstat.tistory.com/39

FASHION MNIST with Python (DAY 6) : http://deepstat.tistory.com/40

FASHION MNIST with Python (DAY 7) : http://deepstat.tistory.com/41

FASHION MNIST with Python (DAY 8) : http://deepstat.tistory.com/42

Datasets

Importing numpy, pandas, pyplot

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Loading datasets

In [2]:
data_train = pd.read_csv("../datasets/fashion-mnist_train.csv")
data_test = pd.read_csv("../datasets/fashion-mnist_test.csv")
In [3]:
data_train_y = data_train.label
y_test = data_test.label
In [4]:
data_train_x = data_train.drop("label",axis=1)/256   # scale pixel values (0-255) into [0,1)
x_test = data_test.drop("label",axis=1)/256
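Since pyplot is imported above, a quick sanity check on the parsed data is cheap (a sketch; it assumes the Kaggle CSV layout of one label column plus 784 pixel columns in row-major order):

img = x_test.iloc[0].values.reshape(28, 28)   # one 28x28 image
plt.imshow(img, cmap = "gray")
plt.title("label = " + str(y_test.iloc[0]))
plt.show()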

Splitting validation and training sets

In [5]:
np.random.seed(0)
# Hold out two disjoint validation sets of 10000 rows each; the remaining 40000 rows are used for training.
valid2_idx = np.random.choice(60000,10000,replace = False)
valid1_idx = np.random.choice(list(set(range(60000)) - set(valid2_idx)),10000,replace=False)
train_idx = list(set(range(60000))-set(valid1_idx)-set(valid2_idx))

x_train = data_train_x.iloc[train_idx,:]
y_train = data_train_y.iloc[train_idx]

x_valid1 = data_train_x.iloc[valid1_idx,:]
y_valid1 = data_train_y.iloc[valid1_idx]

x_valid2 = data_train_x.iloc[valid2_idx,:]
y_valid2 = data_train_y.iloc[valid2_idx]
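A quick check that the split is what it should be (a sketch): the three index sets are pairwise disjoint and together cover all 60000 training rows.

assert not set(train_idx) & set(valid1_idx)
assert not set(train_idx) & set(valid2_idx)
assert not set(valid1_idx) & set(valid2_idx)
print(len(train_idx), len(valid1_idx), len(valid2_idx))   # 40000 10000 10000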

MLP with reused variables

Importing TensorFlow

In [6]:
import tensorflow as tf
from sklearn.metrics import confusion_matrix

Defining weight_variables and bias_variables

In [7]:
def weight_variables(shape):
    # Weight matrices are initialized from a truncated normal (mean 0, stddev 1).
    initial = tf.truncated_normal(shape)
    return tf.Variable(initial)

def bias_variables(shape):
    # Biases use the same truncated-normal initialization.
    initial = tf.truncated_normal(shape)
    return tf.Variable(initial)
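A unit-stddev truncated normal is an aggressive choice for layers as wide as the ones below (fan-in 784 and 1920). A common alternative, sketched here but not used in this post, scales the noise by fan-in (He-style); scaled_weight_variables is a hypothetical name.

def scaled_weight_variables(shape):
    # Hypothetical He-style variant: stddev shrinks with fan-in,
    # which tends to keep early-layer activations in a reasonable range.
    initial = tf.truncated_normal(shape, stddev = (2.0 / shape[0]) ** 0.5)
    return tf.Variable(initial)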

Constructing the MLP with reused variables

Linear, ReLU, leaky ReLU, ELU, SELU, Sigmoid, arctan, tanh, softsign, softplus, softmax, Maxout, Dropout, Batch Normalization, cross entropy, Adam

  • Model : input -> [inner product -> dropout] -> [batch normalization -> inner product -> {Linear, ReLU, leaky ReLU, ELU, SELU, Sigmoid, arctan, tanh, softsign, softplus, log-softmax, Maxout} (160 units each, concatenated to 1920) -> dropout]*10 -> [batch normalization -> inner product -> softmax] -> output. All ten middle blocks share one set of weights; see the sketch after this list.

  • Loss : cross entropy

  • Optimizer : Adam
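The key mechanism in this post is tf.AUTO_REUSE. A minimal standalone sketch (the scope name "demo" is hypothetical, not part of the model): with reuse=tf.AUTO_REUSE, the second tf.get_variable call returns the variable created by the first, so every layer built through the same scope shares one set of weights.

with tf.variable_scope("demo", reuse=tf.AUTO_REUSE):
    v1 = tf.get_variable("w", [2, 2])
with tf.variable_scope("demo", reuse=tf.AUTO_REUSE):
    v2 = tf.get_variable("w", [2, 2])
print(v1 is v2)   # True: created once, returned again on reuse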

In [8]:
def weight_reuse_layer(inputs, training, drop_prob):
    # Every call to this function shares the variables below: with
    # reuse=tf.AUTO_REUSE, tf.get_variable creates each variable on the
    # first call and returns the existing one on every later call.
    with tf.variable_scope("deepstat", reuse=tf.AUTO_REUSE):
        w_linear = tf.get_variable("w_linear", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_linear = tf.get_variable("b_linear", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_relu = tf.get_variable("w_relu", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_relu = tf.get_variable("b_relu", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_leaky_relu = tf.get_variable("w_leaky_relu", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_leaky_relu = tf.get_variable("b_leaky_relu", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_elu = tf.get_variable("w_elu", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_elu = tf.get_variable("b_elu", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_selu = tf.get_variable("w_selu", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_selu = tf.get_variable("b_selu", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_sigmoid = tf.get_variable("w_sigmoid", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_sigmoid = tf.get_variable("b_sigmoid", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_atan = tf.get_variable("w_atan", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_atan = tf.get_variable("b_atan", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_tanh = tf.get_variable("w_tanh", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_tanh = tf.get_variable("b_tanh", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_softsign = tf.get_variable("w_softsign", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_softsign = tf.get_variable("b_softsign", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_softplus = tf.get_variable("w_softplus", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_softplus = tf.get_variable("b_softplus", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_log_softmax = tf.get_variable("w_log_softmax", [1920,160], initializer = tf.initializers.random_uniform(-1,1))
        b_log_softmax = tf.get_variable("b_log_softmax", [160], initializer = tf.initializers.random_uniform(-1,1))
        w_maxout = tf.get_variable("w_maxout", [1920,320], initializer = tf.initializers.random_uniform(-1,1))
        b_maxout = tf.get_variable("b_maxout", [320], initializer = tf.initializers.random_uniform(-1,1))
    
    l_batch_normalization = tf.layers.batch_normalization(inputs, training = training)
    l_linear = tf.matmul(l_batch_normalization, w_linear) + b_linear
    l_relu = tf.nn.relu(tf.matmul(l_batch_normalization, w_relu) + b_relu)
    l_leaky_relu = tf.nn.leaky_relu(tf.matmul(l_batch_normalization, w_leaky_relu) + b_leaky_relu)
    l_elu = tf.nn.elu(tf.matmul(l_batch_normalization, w_elu) + b_elu)
    l_selu = tf.nn.selu(tf.matmul(l_batch_normalization, w_selu) + b_selu)
    l_sigmoid = tf.nn.sigmoid(tf.matmul(l_batch_normalization, w_sigmoid) + b_sigmoid)
    l_atan = tf.atan(tf.matmul(l_batch_normalization, w_atan) + b_atan)
    l_tanh = tf.nn.tanh(tf.matmul(l_batch_normalization, w_tanh) + b_tanh)
    l_softsign = tf.nn.softsign(tf.matmul(l_batch_normalization, w_softsign) + b_softsign)
    l_softplus = tf.nn.softplus(tf.matmul(l_batch_normalization, w_softplus) + b_softplus)
    l_log_softmax = tf.nn.log_softmax(tf.matmul(l_batch_normalization, w_log_softmax) + b_log_softmax)
    l_maxout = tf.reshape(
        tf.contrib.layers.maxout(
            tf.reshape(
                tf.matmul(
                    l_batch_normalization, w_maxout) + b_maxout,
                [-1,160,2]),
            num_units=1),
        [-1,160])
    
    l_concat = tf.concat([
        l_linear,l_relu,l_leaky_relu,l_elu,l_selu,l_sigmoid,
        l_atan,l_tanh,l_softsign,l_softplus,l_log_softmax,l_maxout
        ], 1)
    l_dropout = tf.layers.dropout(l_concat, rate = drop_prob, training = training)
    return l_dropout

Inputs

In [9]:
x = tf.placeholder("float", [None,784])
y = tf.placeholder("int64", [None,])
y_dummies = tf.one_hot(y,depth = 10)

drop_prob = tf.placeholder("float")
training = tf.placeholder("bool")
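tf.one_hot turns each integer label into an indicator row of length 10, matching the shape of the log-softmax output defined below. A standalone check in a throwaway session (a sketch):

with tf.Session() as tmp_sess:
    print(tmp_sess.run(tf.one_hot([3, 1], depth = 10)))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]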

Layer1

[inner product -> dropout]

In [10]:
l1_w = weight_variables([784,1920])
l1_b = bias_variables([1920])
l1_inner_product = tf.matmul(x, l1_w) + l1_b
l1_dropout = tf.layers.dropout(l1_inner_product,rate = drop_prob, training = training)

Layer2-11

[batch normalization -> inner product -> {Linear, ReLU, leaky ReLU, ELU, SELU, Sigmoid, arctan, tanh, softsign, softplus, log-softmax, Maxout} (160 units each, concatenated) -> dropout], applied ten times with one shared set of weights

In [11]:
l2 = weight_reuse_layer(l1_dropout, training, drop_prob)
l3 = weight_reuse_layer(l2, training, drop_prob)
l4 = weight_reuse_layer(l3, training, drop_prob)
l5 = weight_reuse_layer(l4, training, drop_prob)
l6 = weight_reuse_layer(l5, training, drop_prob)
l7 = weight_reuse_layer(l6, training, drop_prob)
l8 = weight_reuse_layer(l7, training, drop_prob)
l9 = weight_reuse_layer(l8, training, drop_prob)
l10 = weight_reuse_layer(l9, training, drop_prob)
l11 = weight_reuse_layer(l10, training, drop_prob)
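With all ten blocks built, the sharing can be verified: the "deepstat" scope should hold exactly one copy of each variable, however many times weight_reuse_layer was called (a sketch).

reuse_vars = [v for v in tf.trainable_variables() if v.name.startswith("deepstat")]
print(len(reuse_vars))   # 24: one weight and one bias for each of the 12 branches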

Layer12

[batch normalization -> inner product -> softmax]

In [12]:
l12_w = weight_variables([1920,10])
l12_b = bias_variables([10])
l12_batch_normalization = tf.layers.batch_normalization(l11, training = training)
l12_inner_product = tf.matmul(l12_batch_normalization, l12_w) + l12_b
l12_log_softmax = tf.nn.log_softmax(l12_inner_product)

Cross-entropy

In [13]:
xent_loss = -tf.reduce_sum( tf.multiply(y_dummies,l12_log_softmax) )
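An equivalent formulation (a sketch, not what this post runs) feeds the raw logits to TensorFlow's built-in cross-entropy, which fuses the log-softmax and the sum over classes and is the more numerically robust idiom in TF 1.x:

xent_alt = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits_v2(
        labels = y_dummies, logits = l12_inner_product))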

Accuracy

In [14]:
pred_labels = tf.argmax(l12_log_softmax,axis=1)
acc = tf.reduce_mean(tf.cast(tf.equal(y, pred_labels),"float"))

Training the Model

In [15]:
lr = tf.placeholder("float")
train_step = tf.train.AdamOptimizer(lr).minimize(xent_loss)
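A caveat worth flagging (not addressed in the original post): tf.layers.batch_normalization registers its moving-mean/variance update ops in tf.GraphKeys.UPDATE_OPS, and they only run if the training op depends on them. train_step above has no such dependency, so the moving statistics keep their initial values. That is consistent with the logs below, where accuracy with training = False stays near chance (about 0.10) while batch-statistics accuracy (training = True) reaches about 0.91. The standard TF 1.x pattern is a sketch like:

# Make the optimizer step depend on the batch-norm moving-statistics
# updates, so that inference with training = False works later.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = tf.train.AdamOptimizer(lr).minimize(xent_loss)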
In [16]:
saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
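The checkpoints written by saver.save in the loops below can later be loaded back into a session; a sketch (restore_model is a hypothetical helper; the path matches the one used for saving):

def restore_model(path = "./MLP_reuse/model.ckpt"):
    # Hypothetical helper: open a fresh session and load the saved weights.
    s = tf.Session()
    saver.restore(s, path)
    return s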
In [17]:
batch_size = 512
# Stage 1 of a manual learning-rate schedule: lr = 0.1 here, then 0.01, 0.001, and 0.0001 in the cells that follow.
for i in range(20001):
    batch_obs = np.random.choice(x_train.shape[0],batch_size,replace=False)
    batch_train_x = x_train.iloc[batch_obs]
    batch_train_y = y_train.iloc[batch_obs]
    feed_dict = {x : batch_train_x, y : batch_train_y, drop_prob : .125, training : True, lr : 0.1}
    _, tmp = sess.run([train_step,xent_loss], feed_dict = feed_dict)
    
    if i % 2000 == 0:
        print("step " + str(i) + " training cross-entropy : " + str(tmp))
    
    if i % 4000 == 0:
        feed_dict = {x : x_train, y : y_train, drop_prob : .125, training : False}
        train_acc = sess.run(acc, feed_dict = feed_dict)
        feed_dict = {x : x_valid1, y : y_valid1, drop_prob : .125, training : False}
        valid1_acc = sess.run(acc, feed_dict = feed_dict)
        print("step " + str(i) + " training_acc = " + str(train_acc) + " valid_acc = " + str(valid1_acc))
        save_path = saver.save(sess, "./MLP_reuse/model.ckpt")
        print("Model saved in path: " + save_path)
step 0 training cross-entropy : 31872.088
step 0 training_acc = 0.1004 valid_acc = 0.0995
Model saved in path: ./MLP_reuse/model.ckpt
step 2000 training cross-entropy : 583.2489
step 4000 training cross-entropy : 1131.0829
step 4000 training_acc = 0.0983 valid_acc = 0.106
Model saved in path: ./MLP_reuse/model.ckpt
step 6000 training cross-entropy : 1343.8828
step 8000 training cross-entropy : 970.83813
step 8000 training_acc = 0.0992 valid_acc = 0.1006
Model saved in path: ./MLP_reuse/model.ckpt
step 10000 training cross-entropy : 1214.2537
step 12000 training cross-entropy : 825.8395
step 12000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 14000 training cross-entropy : 800.1408
step 16000 training cross-entropy : 833.8031
step 16000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 18000 training cross-entropy : 932.93713
step 20000 training cross-entropy : 690.29736
step 20000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
In [18]:
batch_size = 512
for i in range(80001):
    batch_obs = np.random.choice(x_train.shape[0],batch_size,replace=False)
    batch_train_x = x_train.iloc[batch_obs]
    batch_train_y = y_train.iloc[batch_obs]
    feed_dict = {x : batch_train_x, y : batch_train_y, drop_prob : .125, training : True, lr : 0.01}
    _, tmp = sess.run([train_step,xent_loss], feed_dict = feed_dict)
    
    if i % 8000 == 0:
        print("step " + str(i) + " training cross-entropy : " + str(tmp))
    
    if i % 16000 == 0:
        feed_dict = {x : x_train, y : y_train, drop_prob : .125, training : False}
        train_acc = sess.run(acc, feed_dict = feed_dict)
        feed_dict = {x : x_valid1, y : y_valid1, drop_prob : .125, training : False}
        valid1_acc = sess.run(acc, feed_dict = feed_dict)
        print("step " + str(i) + " training_acc = " + str(train_acc) + " valid_acc = " + str(valid1_acc))
        save_path = saver.save(sess, "./MLP_reuse/model.ckpt")
        print("Model saved in path: " + save_path)
step 0 training cross-entropy : 664.647
step 0 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 8000 training cross-entropy : 392.25644
step 16000 training cross-entropy : 345.9796
step 16000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 24000 training cross-entropy : 343.65155
step 32000 training cross-entropy : 281.71674
step 32000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 40000 training cross-entropy : 210.97086
step 48000 training cross-entropy : 263.9254
step 48000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 56000 training cross-entropy : 255.4663
step 64000 training cross-entropy : 202.80786
step 64000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 72000 training cross-entropy : 187.29158
step 80000 training cross-entropy : 170.4274
step 80000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
In [19]:
batch_size = 512
for i in range(320001):
    batch_obs = np.random.choice(x_train.shape[0],batch_size,replace=False)
    batch_train_x = x_train.iloc[batch_obs]
    batch_train_y = y_train.iloc[batch_obs]
    feed_dict = {x : batch_train_x, y : batch_train_y, drop_prob : .125, training : True, lr : 0.001}
    _, tmp = sess.run([train_step,xent_loss], feed_dict = feed_dict)
    
    if i % 32000 == 0:
        print("step " + str(i) + " training cross-entropy : " + str(tmp))
    
    if i % 64000 == 0:
        feed_dict = {x : x_train, y : y_train, drop_prob : .125, training : False}
        train_acc = sess.run(acc, feed_dict = feed_dict)
        feed_dict = {x : x_valid1, y : y_valid1, drop_prob : .125, training : False}
        valid1_acc = sess.run(acc, feed_dict = feed_dict)
        print("step " + str(i) + " training_acc = " + str(train_acc) + " valid_acc = " + str(valid1_acc))
        save_path = saver.save(sess, "./MLP_reuse/model.ckpt")
        print("Model saved in path: " + save_path)
step 0 training cross-entropy : 211.09633
step 0 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 32000 training cross-entropy : 147.32101
step 64000 training cross-entropy : 169.66533
step 64000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 96000 training cross-entropy : 218.44417
step 128000 training cross-entropy : 139.58224
step 128000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 160000 training cross-entropy : 135.28773
step 192000 training cross-entropy : 125.1377
step 192000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 224000 training cross-entropy : 121.90304
step 256000 training cross-entropy : 116.033875
step 256000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 288000 training cross-entropy : 137.7208
step 320000 training cross-entropy : 125.229385
step 320000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
In [20]:
batch_size = 512
for i in range(1280001):
    batch_obs = np.random.choice(x_train.shape[0],batch_size,replace=False)
    batch_train_x = x_train.iloc[batch_obs]
    batch_train_y = y_train.iloc[batch_obs]
    feed_dict = {x : batch_train_x, y : batch_train_y, drop_prob : .125, training : True, lr : 0.0001}
    _, tmp = sess.run([train_step,xent_loss], feed_dict = feed_dict)
    
    if i % 128000 == 0:
        print("step " + str(i) + " training cross-entropy : " + str(tmp))
    
    if i % 256000 == 0:
        feed_dict = {x : x_train, y : y_train, drop_prob : .125, training : False}
        train_acc = sess.run(acc, feed_dict = feed_dict)
        feed_dict = {x : x_valid1, y : y_valid1, drop_prob : .125, training : False}
        valid1_acc = sess.run(acc, feed_dict = feed_dict)
        print("step " + str(i) + " training_acc = " + str(train_acc) + " valid_acc = " + str(valid1_acc))
        save_path = saver.save(sess, "./MLP_reuse/model.ckpt")
        print("Model saved in path: " + save_path)
step 0 training cross-entropy : 106.5896
step 0 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 128000 training cross-entropy : 115.030716
step 256000 training cross-entropy : 152.09715
step 256000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 384000 training cross-entropy : 155.68195
step 512000 training cross-entropy : 118.50292
step 512000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
step 640000 training cross-entropy : 123.75328
step 768000 training cross-entropy : 107.267
step 768000 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-20-f9c7dcdd5355> in <module>()
      5     batch_train_y = y_train.iloc[batch_obs]
      6     feed_dict = {x : batch_train_x, y : batch_train_y, drop_prob : .125, training : True, lr : 0.0001}
----> 7     _, tmp = sess.run([train_step,xent_loss], feed_dict = feed_dict)
      8 
      9     if i % 128000 == 0:

    ... (TensorFlow session.py frames omitted; the run was interrupted by hand) ...
KeyboardInterrupt: 
In [22]:
batch_size = 512
batch_obs = np.random.choice(x_train.shape[0],batch_size,replace=False)
batch_train_x = x_train.iloc[batch_obs]
batch_train_y = y_train.iloc[batch_obs]
feed_dict = {x : batch_train_x, y : batch_train_y, drop_prob : .125, training : True, lr : 0.0001}
_, tmp,tmp_acc = sess.run([train_step,xent_loss,acc], feed_dict = feed_dict)

print("step " + str(i) + " training cross-entropy : " + str(tmp) + " accuracy of training step : " + str(tmp_acc))
feed_dict = {x : x_train, y : y_train, drop_prob : .125, training : False}
train_acc = sess.run(acc, feed_dict = feed_dict)
feed_dict = {x : x_valid1, y : y_valid1, drop_prob : .125, training : False}
valid1_acc = sess.run(acc, feed_dict = feed_dict)
print("step " + str(i) + " training_acc = " + str(train_acc) + " valid_acc = " + str(valid1_acc))
save_path = saver.save(sess, "./MLP_reuse/model.ckpt")
print("Model saved in path: " + save_path)
step 818348 training cross-entropy : 115.68921 accuracy of training step : 0.91796875
step 818348 training_acc = 0.09985 valid_acc = 0.1015
Model saved in path: ./MLP_reuse/model.ckpt

Training Accuracy

Note: the accuracies below are computed with training = True (batch statistics, dropout active). With training = False, the never-updated batch-normalization moving statistics (see the caveat after the optimizer) keep accuracy near chance, as the training logs above show.

In [23]:
feed_dict = {x : x_train, y : y_train, drop_prob : .125, training : True}
MLP_predict_train, MLP_train_acc = sess.run([pred_labels,acc], feed_dict = feed_dict)
In [24]:
print(confusion_matrix(MLP_predict_train,y_train))
print("TRAINING ACCURACY =",MLP_train_acc)
[[3559    2   48   32    2    0  600    0    1    0]
 [   5 3911    3    9    2    0    3    0    2    0]
 [  15    0 3138    2  107    0  129    0    1    0]
 [ 135   64   57 3781  195    1  114    0   15    0]
 [   7    2  496   58 3539    0  232    0    5    0]
 [   0    0    0    1    0 3901    0   58    7   21]
 [ 221    5  283   30  109    0 2853    0    6    0]
 [   0    0    0    0    0   21    0 3998    5  141]
 [  52    6   31   16   62    6   74    4 3900    9]
 [   0    0    0    0    0    3    0   43    4 3858]]
TRAINING ACCURACY = 0.91095

Validation Accuracy

In [25]:
feed_dict = {x : x_valid1, y : y_valid1, drop_prob : .125, training : True}
MLP_predict_valid1, MLP_valid1_acc = sess.run([pred_labels,acc], feed_dict = feed_dict)
In [26]:
print(confusion_matrix(MLP_predict_valid1,y_valid1))
print("VALIDATION ACCURACY =",MLP_valid1_acc)
[[ 871    5   17   22    2    0  143    0    2    0]
 [   4  999    0    9    3    0    1    0    0    0]
 [   3    2  669    2   42    0   61    0    2    0]
 [  49   18   21  939   75    0   39    0   11    0]
 [   0    1  137   25  797    0   83    0    2    0]
 [   0    0    0    1    0 1022    0   27    6   16]
 [  80    1   89   10   59    0  637    0    6    0]
 [   0    0    0    0    0   21    0  896    1   37]
 [   8    0   12    4   17    8   23    5 1000   10]
 [   0    0    0    0    0    9    0   20    4  915]]
VALIDATION ACCURACY = 0.8745
In [27]:
{"TRAIN_ACC" : MLP_train_acc , "VALID_ACC" : MLP_valid1_acc}
Out[27]:
{'TRAIN_ACC': 0.91095, 'VALID_ACC': 0.8745}