Deep Feed Forward Networks

In [5]:
%pylab inline
from ipypublish import nb_setup
Populating the interactive namespace from numpy and matplotlib

Non-Linear Filters

The Linear Models that we discussed in Chapter LinearLearningModels work well if the input dataset is approximately linearly separable, but they have limited accuracy for complex datasets. Some of the issues with Linear Models are the following:

  • If the input data is not linearly separable, then the designer has to expend a lot of effort in finding an appropriate feature map that makes it so. It would be nice to have a model that solves this problem automatically, by learning the best feature map from the data itself.

  • We showed that the model weight parameters could be regarded as a filter, so that for $K$ classes the operation of the system is equivalent to trying to match the input with $K$ different filters. The limitations of this approach can be seen in the filter for the "horse" class in Figure LC2. The filter looks like a horse with two heads, since it is trying its best to match a horse image irrespective of the direction in which the horse is facing. This type of filtering will clearly not work if the horse is standing in some other orientation, or if it is located in a corner of the image. The fact that the best accuracy that can be achieved with linear classifiers on the CIFAR-10 Dataset is only about 40% is a reflection of this shortcoming. The linear system tries to do classification by taking each and every pixel into account, which is a difficult task. What if it were possible to create representations for higher level features in the image, say the head of the horse or its legs, and then use these for classification instead? This would enable the system to identify a horse irrespective of its orientation and its location in the image. This is precisely what Deep Learning systems do.

  • In general, a way to make any model more powerful is to increase the number of parameters. However, in a Linear Model the number of parameters is constrained to $KN + K$ by the size of the input data and the number of output classes, which limits its modeling power (a worked count for the CIFAR-10 Dataset is given after this list).
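For concreteness, with the CIFAR-10 Dataset the input size is $N = 32 \times 32 \times 3 = 3072$ and there are $K = 10$ classes, so a Linear Model has at most

$$ KN + K = 10 \times 3072 + 10 = 30{,}730 $$

parameters, and this number cannot be increased without changing the input representation.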

In [2]:
#LC2
nb_setup.images_hconcat(["DL_images/LC2.png"], width=600)
Out[2]:

Dense Feed Forward Networks

Dense Feed Forward Networks were designed with the objective of overcoming these shortcomings. As Figure DFN1 shows, we are looking for a functional block between the input vector $(x_1,...,x_N)$ and the output logits $(a_1,...,a_K)$ that can create a new representation vector $(z_1,...,z_P)$ which satisfies the approximate linear separability property. One way to do this is shown in Figure DFN2, which is a Deep Feed Forward Network with a single Hidden Layer. Note the following:

  • The Input and Output Layers are as before, but we have added a third layer, the so-called Hidden Layer, in between. The Input Layer is fully connected to the Hidden Layer, i.e., each node in the Input Layer is connected to every node in the Hidden Layer, and the same holds true for connections between the Hidden Layer and the Output Layer. DLNs with these characteristics are called Dense Feed Forward Neural Networks. Later in this monograph we will come across examples of DLNs where these properties don't apply: either the fully connected property does not hold (as in Convolutional Neural Networks), or the DLN incorporates feedback loops (as in Recurrent Neural Networks).

  • The $j$-th node in the Hidden Layer performs the following computation on the input variables $x_i$ to generate an output $z_j^{(1)}, 1 \leq j \leq P$ given by $$ a_j^{(1)} = \sum_{i=1}^N w_{ji}^{(1)} x_i + b_j^{(1)} $$ $$ z_j^{(1)} = f(a_j^{(1)}) $$ The vector $(a^{(1)}_1,...,a_P^{(1)})$, which we call the Pre-Activation, is computed as a simple linear combination of the Input Vector. The output of the Hidden Layer $(z^{(1)}_1,...,z_P^{(1)})$ which we call the Activation, is computed as an elementwise non-linear function of the Pre-Activations.

  • The Output Layer operates on the Activations $z_j^{(1)}$ from the Hidden Layer, and computes the logits for the $K$ classes $(a_1^{(2)},...,a_K^{(2)})$: $$ a_k^{(2)} = \sum_{i=1}^P w_{ki}^{(2)} z_i^{(1)} + b_k^{(2)}, \ \ 1\le k\le K $$ The classification probabilities $y_k, 1\le k\le K$ are obtained by applying the Softmax function to the logits: $$ y_k = \frac{\exp(a_k^{(2)})}{\sum_{j=1}^K \exp(a_j^{(2)})}, \ \ 1\le k\le K $$ Note that the logit and classification probability computations are identical to those in Linear Systems, with the inputs $X$ now replaced by the Activations $Z$.

  • The weight parameters $w_{ij}^{(1)}, 1\le i\le P, 1\le j\le N; w_{ij}^{(2)}, 1\le i\le K, 1\le j\le P$ and the bias parameters $b_i^{(1)}, 1\le i\le P; b_i^{(2)}, 1\le i\le K$ have to be learnt from the training data, as in Linear Models. The total number of parameters needed to describe this network is $NP + P + PK + K$, which now depends on the number of nodes in the Hidden Layer $P$. Hence we can build a Dense Feed Forward model with more powerful classification ability by increasing the number of nodes in the Hidden Layer, an option that does not exist in Linear Systems (a NumPy sketch of the full forward pass follows this list).
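To make these equations concrete, the following is a minimal NumPy sketch of the forward pass through the single-Hidden-Layer network of Figure DFN2; the hidden width $P = 100$, the random initialization, and the use of ReLU for $f$ (introduced later in this section) are illustrative assumptions rather than choices made in the text.

import numpy as np

N, P, K = 3072, 100, 10                      # input size, Hidden Layer nodes, number of classes
rng = np.random.default_rng(0)

x = rng.standard_normal(N)                   # a single input vector (x_1,...,x_N)
W1 = 0.01 * rng.standard_normal((P, N))      # Hidden Layer weights w_ji^(1)
b1 = np.zeros(P)                             # Hidden Layer biases b_j^(1)
W2 = 0.01 * rng.standard_normal((K, P))      # Output Layer weights w_ki^(2)
b2 = np.zeros(K)                             # Output Layer biases b_k^(2)

a1 = W1 @ x + b1                             # Pre-Activations a_j^(1)
z1 = np.maximum(a1, 0.0)                     # Activations z_j^(1) = f(a_j^(1)), here f = ReLU
a2 = W2 @ z1 + b2                            # logits a_k^(2)
y = np.exp(a2) / np.exp(a2).sum()            # Softmax probabilities y_k

print(y.sum())                               # 1.0: the y_k form a probability distribution
print(N * P + P + P * K + K)                 # 308310 parameters for P = 100: NP + P + PK + K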

In [3]:
#DFN1
nb_setup.images_hconcat(["DL_images/DFN1.png"], width=600)
Out[3]:
In [4]:
#DFN2
nb_setup.images_hconcat(["DL_images/DFN2.png"], width=600)
Out[4]:

The activations $(z^{(1)}_1,...,z_P^{(1)})$ correspond to the new data representation that we are looking for. They filter the input and create higher level representations, which are then used by the logit layer for classification. Note that the filtering done by the Hidden Layer is non-linear due to the presence of the non-linear function $f$. This function is called the Activation Function, and it plays an important role in system performance. The most popular Activation Function in use is the Rectified Linear Unit, or ReLU, shown in Figure DFN3. It simply passes on the pre-activations that are greater than zero, and sets the rest to zero.
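In code, ReLU is just an elementwise maximum with zero; the following one-liner is an illustrative NumPy sketch, not the implementation used by any particular framework.

import numpy as np

def relu(a):
    # Passes on pre-activations that are greater than zero, sets the rest to zero
    return np.maximum(a, 0.0)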

The presence of the Activation Function is critical to the functioning of the DLN: it can easily be shown that if it were omitted, the Hidden and Output Layers could be collapsed together, so that the resulting model would be equivalent to a Linear Model (see the derivation below). Indeed, the presence of Activation Functions gives the system its modeling power, and in general we will see later in the book that DLN systems can be made more powerful by increasing the amount of non-linear processing. The appropriate choice of Activation Function has a big influence on the performance of the DLN, and the discovery of more effective Activation Functions such as the ReLU has helped make DLNs easier to train.
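To see why, suppose the Activation Function were omitted, so that $z_i^{(1)} = a_i^{(1)}$. Substituting the Hidden Layer computation into the logit computation gives

$$ a_k^{(2)} = \sum_{i=1}^P w_{ki}^{(2)} \left( \sum_{j=1}^N w_{ij}^{(1)} x_j + b_i^{(1)} \right) + b_k^{(2)} = \sum_{j=1}^N \left( \sum_{i=1}^P w_{ki}^{(2)} w_{ij}^{(1)} \right) x_j + \left( \sum_{i=1}^P w_{ki}^{(2)} b_i^{(1)} + b_k^{(2)} \right) $$

which is just a Linear Model logit with effective weights $\sum_{i} w_{ki}^{(2)} w_{ij}^{(1)}$ and effective biases $\sum_{i} w_{ki}^{(2)} b_i^{(1)} + b_k^{(2)}$, so the Hidden Layer adds no modeling power.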

In [5]:
nb_setup.images_hconcat(["DL_images/DFN3.png"], width=600)
Out[5]:

The system shown in Figure DFN2 incorporates only a single Hidden Layer. Why not continue the process and enable the model to create higher level representations by adding additional hidden layers? This is certainly possible, and the resulting network is shown in Figure DFN4. It shows a Dense Feed Forward Network with $R$ hidden layers, such that layer $r$ consists of $P^r$ nodes. The equations describing this network can be written as follows:

  • The activations for the first Hidden Layer: $$ a_j^{(1)} = \sum_{i=1}^N w_{ji}^{(1)} x_i + b_j^{(1)},\ \ 1\le j\le P^1 $$ $$ z_j^{(1)} = f(a_j^{(1)}),\ \ 1\le j\le P^1 $$
  • The Activations for Hidden Layer 2 to Hidden Layer R: $$ a_j^{(r+1)} = \sum_{i=1}^{P^r} w_{ji}^{(r+1)} z_i^{(r)} + b_j^{(r+1)},\ \ 1\le r\le R-1,\ 1\le j\le P^{r+1} $$ $$ z_j^{(r+1)} = f(a_j^{(r+1)}),\ \ 1\le r\le R-1,\ 1\le j\le P^{r+1} $$
  • The logits and the classification probabilities: $$ a_k^{(R+1)} = \sum_{i=1}^{P^R} w_{ki}^{(R+1)} z_i^{(R)} + b_k^{(R+1)},\ \ 1\le k\le K $$ $$ y_k = \frac{\exp(a_k^{(R+1)})}{\sum_{j=1}^K \exp(a_j^{(R+1)})}, \ \ 1\le k\le K $$

With each successive Hidden Layer, this network creates representations at higher levels of abstraction.

Using matrix notation, these equations can be written compactly as (with $Z^{(0)} = X$):

$$ A^{(r)} = W^{(r)}Z^{(r-1)} + B^{(r)},\ \ Z^{(r)} = f(A^{(r)}),\ \ 1\le r\le R $$

$$ A^{(R+1)} = W^{(R+1)}Z^{(R)} + B^{(R+1)},\ \ Y = h(A^{(R+1)}) $$

In these equations $f$ and $h$ represent the Activation and Softmax functions respectively; $f$ is applied on an elementwise basis across all the matrix entries, while the Softmax $h$ is applied across the $K$ logits of each sample.
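The matrix recursion translates directly into code. The following minimal NumPy sketch runs the forward pass for a network with two Hidden Layers; the layer sizes, the random initialization, and the use of ReLU for $f$ are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
sizes = [3072, 100, 50, 10]                          # [N, P^1, P^2, K]
Ws = [0.01 * rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
Bs = [np.zeros(m) for m in sizes[1:]]

Z = rng.standard_normal(sizes[0])                    # Z^(0) = X, a single input vector
for W, B in zip(Ws[:-1], Bs[:-1]):                   # Hidden Layers r = 1,...,R
    A = W @ Z + B                                    # A^(r) = W^(r) Z^(r-1) + B^(r)
    Z = np.maximum(A, 0.0)                           # Z^(r) = f(A^(r)), here f = ReLU
A = Ws[-1] @ Z + Bs[-1]                              # logits A^(R+1)
Y = np.exp(A - A.max()) / np.exp(A - A.max()).sum()  # Y = h(A^(R+1)), the Softmax

print(Y.sum())                                       # 1.0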

In [6]:
nb_setup.images_hconcat(["DL_images/DFN4.png"], width=600)
Out[6]:

Nodes vs Layers

We have introduced two degrees of freedom in DLN design in this chapter: (1) The number of Hidden Layers, and (2) The number of nodes per Hidden Layer. This leads to the following questions:

  • To get a better performing model, is it preferable to increase the number of layers, or is it better to increase the number of nodes per layer (while keeping the number of layers fixed)?
  • Does the system performance keep improving as we add more and more layers, or are there limits that the model runs into?

Unfortunately there are not many theoretical results in this area that give definite answers to these questions. However, there is one interesting theorem regarding Deep Feed Forward Networks with a single Hidden Layer, whose proof was given by Cybenko in 1989:

Given an arbitrary continuous function $g$ of $n$ variables,

$$ y = g(x_1,...,x_n) $$

it is always possible to find a Deep Feed Forward Network with a single Hidden Layer, such that the output of the network approximates $g$, and the approximation can be made as close as we want by adding nodes to the Hidden Layer.

This property is of course dependent on the form of the Activation Function used, but it has been proven to be true for the most commonly used functions. Hence it should be possible to solve any classification problem with a Dense Feed Forward Network containing a single Hidden Layer. However, the theorem does not specify the number of hidden nodes needed for a particular problem.
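As a small illustration of this theorem (not a proof), the following Keras sketch fits a single-Hidden-Layer network to samples of the one-dimensional function $g(x) = \sin(x)$; the target function, the Hidden Layer width of 50, and the training settings are arbitrary choices, and widening the Hidden Layer generally improves the quality of the approximation.

import numpy as np
from keras import models, layers

# Samples of the target function g(x) = sin(x) on [-3, 3]
x = np.linspace(-3, 3, 500).reshape(-1, 1)
y = np.sin(x)

# Single Hidden Layer network; the width controls how close the approximation can get
net = models.Sequential()
net.add(layers.Dense(50, activation='relu', input_shape=(1,)))
net.add(layers.Dense(1))                     # linear output, since this is a regression problem

net.compile(optimizer='adam', loss='mse')
net.fit(x, y, epochs=500, verbose=0)

print(net.evaluate(x, y, verbose=0))         # mean squared error of the approximation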

In practice, it has been observed that to increase the modeling power of a DLN, it is advantageous to add Hidden Layers, for the following reasons:

  • More layers allow the model to develop a hierarchical representation of the input data, which simplifies the task of the linear classifier in the final layer.

  • Having additional layers increases the amount of non-linearity and thus the modeling capacity.

This still begs the question of how wide the network should be. There has been some progress on this question more recently [Li, Xu, et al.](https://arxiv.org/pdf/1712.09913.pdf), and their key finding is shown in Figure convnet46.

In [4]:
#convnet46
nb_setup.images_hconcat(["DL_images/convnet46.png"], width=700)
Out[4]:

As illustrated in the figure, the width of the network has a critical effect on the smoothness of its Loss Function. The figure shows four contour plots for the Loss Function of an increasingly wider network, and as can be seen the Loss Function landscape becomes progressively smoother as we move from left to right. This makes the optimization task much easier. This effect is more pronounced for the very deep networks with hundreds of layers that we will study later in the course, and less of an issue in a network with only a few layers.

If the Loss Function is highly chaotic, as in the leftmost plot, then the optimization becomes highly dependent on the initialization values, since a bad initialization can cause the trajectory to get caught in the ups and downs of the uneven loss landscape. Increasing the width of the network promotes flat minimizers and prevents the transition to chaotic behavior, which also improves the generalization ability of the network.

Performance as a function of layers

The other question that we raised is whether the DLN performance keeps improving as we add more and more Hidden Layers. This is actually not the case; model performance is constrained by the following factors:

  • The Vanishing Gradient Problem: In order to train a multilayer Deep Feed Forward Network, the gradients $\frac{\partial L}{\partial w^{(r)}_{ij}}$ and $\frac{\partial L}{\partial b^{(r)}_i}$ have to be computed. It turns out that if the number of layers is large, the gradients of the weights in either the first few layers or the last few layers converge towards zero as the training progresses. Once this happens, the corresponding weights stop adapting to new training data, and the training process grinds to a halt. This phenomenon is known as the Vanishing Gradient problem, and its causes are explained in detail in Chapter GradientDescentTechniques. In addition, adding more layers makes the Loss Landscape more chaotic, as shown in Figure convnet46, which makes optimization very difficult. This problem constrains the number of layers that can be added to the network to about 20 or so without degrading the training process. In order to get around this problem, we can increase the width of the network as explained above, or use a more recent advance in DLN architecture called Residual Connections, which allows much deeper networks containing hundreds of layers.

  • The Overfitting Problem: Larger models with more layers have a larger number of parameters, and this in turn requires larger training datasets. As explained in Chapter ImprovingModelGeneralization, modeling is an exercise in matching the Capacity of the Model with the Complexity of the Dataset. If the Capacity of the Model is greater than the Complexity of the Dataset (which can happen if we add more layers than necessary), then it leads to overfitting. This problem constrains the model's generalization ability.

As this discussion shows, there is no formula or theoretical result which tells us the number of layers or the number of nodes per layer to use in a model. These numbers, which are also called hyper-parameters, are a function of the dataset that we are trying to model, and the only way to find the best values is by trial and error. Hence when building the model, the designer has to do several trial runs with different values for these hyper-parameters before settling on the best ones (a sketch of such a search is shown below).
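The following is a minimal sketch of such a trial-and-error search; the candidate grid, the use of validation accuracy as the selection criterion, and the assumption that `train_images` and `train_labels` have already been prepared as in the CIFAR-10 Keras example later in this chapter are all illustrative choices.

from keras import models, layers

def build_network(num_layers, nodes_per_layer, input_dim=32 * 32 * 3, num_classes=10):
    # Dense Feed Forward Network with num_layers Hidden Layers of nodes_per_layer nodes each
    network = models.Sequential()
    network.add(layers.Dense(nodes_per_layer, activation='relu', input_shape=(input_dim,)))
    for _ in range(num_layers - 1):
        network.add(layers.Dense(nodes_per_layer, activation='relu'))
    network.add(layers.Dense(num_classes, activation='softmax'))
    network.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
    return network

# Assumes train_images and train_labels are prepared as in the CIFAR-10 example below
best = None
for num_layers in [1, 2, 3]:
    for nodes_per_layer in [20, 50, 100]:
        net = build_network(num_layers, nodes_per_layer)
        history = net.fit(train_images, train_labels, epochs=20,
                          batch_size=128, validation_split=0.2, verbose=0)
        val_acc = max(history.history['val_accuracy'])
        if best is None or val_acc > best[0]:
            best = (val_acc, num_layers, nodes_per_layer)

print(best)   # (best validation accuracy, number of Hidden Layers, nodes per layer)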

In Chapter ImprovingModelGeneralization we provide some guidelines that can be used to make this process more efficient.

Example of a Dense Feed Forward Network in Keras

Models Using the Keras Layers Module

There are two ways to define a Dense Feed Forward Network in Keras:

  • Using the Keras Layers Module
  • Using the Keras Functional API

The code shown below uses the Layers Module to define a Dense Feed Forward Network with two hidden layers, with 20 and 15 nodes respectively. The first hidden layer is constrained to accept input tensors of shape (32 * 32 * 3, ). Note that the batch dimension of this tensor is left unspecified, which allows the system to feed this layer with batches of any size. The input tensor is transformed into a tensor of shape (20, ) by the first hidden layer, and this tensor is then processed by the second hidden layer with 15 nodes. There is no need to specify an input shape argument for the second layer, since Keras infers it automatically from the output of the first layer.
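For comparison, the same network can also be written with the Keras Functional API; the sketch below is an equivalent of the Sequential model defined in the code cell further below, with the names `inputs`, `x` and `outputs` purely illustrative.

from keras import layers, models

# Functional API version of the two-hidden-layer Dense Feed Forward Network
inputs = layers.Input(shape=(32 * 32 * 3,))            # flattened CIFAR-10 image
x = layers.Dense(20, activation='relu')(inputs)        # first hidden layer: 20 nodes
x = layers.Dense(15, activation='relu')(x)             # second hidden layer: 15 nodes
outputs = layers.Dense(10, activation='softmax')(x)    # 10-class softmax output

network = models.Model(inputs=inputs, outputs=outputs)
network.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])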

Comparing the results of the Linear Model from the previous chapter with those of the Dense Feed Forward Model, the accuracy increased from about 40% to 45%. This is a significant jump, but still not good enough. One of the main factors holding the Dense Feed Forward model back is that it can only process images after they have been flattened into vectors. As a result, a lot of the information present in the original 3D image shape is lost, especially about pixels that are in close proximity to each other in the original image. In order to process images in their native 3D shape, we will need a more sophisticated model called the Convolutional Neural Network, which is discussed in a later chapter.

In [5]:
import keras
keras.__version__
from keras import models
from keras import layers

from keras.datasets import cifar10

(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()

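# Flatten each 32x32x3 image into a vector of length 3072 and scale pixel values to [0, 1]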
train_images = train_images.reshape((50000, 32 * 32 * 3))
train_images = train_images.astype('float32') / 255

test_images = test_images.reshape((10000, 32 * 32 * 3))
test_images = test_images.astype('float32') / 255

from tensorflow.keras.utils import to_categorical

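# Convert the integer class labels to one-hot vectors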
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

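# Dense Feed Forward Network: two hidden layers (20 and 15 nodes) and a 10-class softmax output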
network = models.Sequential()
network.add(layers.Dense(20, activation='relu', input_shape=(32 * 32 * 3,)))
network.add(layers.Dense(15, activation='relu'))
network.add(layers.Dense(10, activation='softmax'))

network.compile(optimizer='sgd',
                loss='categorical_crossentropy',
                metrics=['accuracy'])

history = network.fit(train_images, train_labels, epochs=100, batch_size=128, validation_split=0.2)
Epoch 1/100
313/313 [==============================] - 2s 4ms/step - loss: 2.2224 - accuracy: 0.1804 - val_loss: 2.1399 - val_accuracy: 0.2087
Epoch 2/100
313/313 [==============================] - 1s 3ms/step - loss: 2.0539 - accuracy: 0.2482 - val_loss: 2.0086 - val_accuracy: 0.2662
Epoch 3/100
313/313 [==============================] - 1s 3ms/step - loss: 1.9258 - accuracy: 0.2991 - val_loss: 1.9078 - val_accuracy: 0.3113
Epoch 4/100
313/313 [==============================] - 1s 4ms/step - loss: 1.8651 - accuracy: 0.3219 - val_loss: 1.9120 - val_accuracy: 0.3135
Epoch 5/100
313/313 [==============================] - 1s 3ms/step - loss: 1.8258 - accuracy: 0.3395 - val_loss: 1.8388 - val_accuracy: 0.3335
Epoch 6/100
313/313 [==============================] - 1s 3ms/step - loss: 1.7963 - accuracy: 0.3514 - val_loss: 1.8283 - val_accuracy: 0.3377
Epoch 7/100
313/313 [==============================] - 1s 3ms/step - loss: 1.7743 - accuracy: 0.3606 - val_loss: 1.8082 - val_accuracy: 0.3454
Epoch 8/100
313/313 [==============================] - 1s 3ms/step - loss: 1.7570 - accuracy: 0.3694 - val_loss: 1.8109 - val_accuracy: 0.3515
Epoch 9/100
313/313 [==============================] - 1s 3ms/step - loss: 1.7396 - accuracy: 0.3746 - val_loss: 1.7487 - val_accuracy: 0.3742
Epoch 10/100
313/313 [==============================] - 1s 3ms/step - loss: 1.7257 - accuracy: 0.3796 - val_loss: 1.7517 - val_accuracy: 0.3771
Epoch 11/100
313/313 [==============================] - 1s 3ms/step - loss: 1.7108 - accuracy: 0.3873 - val_loss: 1.7341 - val_accuracy: 0.3764
Epoch 12/100
313/313 [==============================] - 1s 3ms/step - loss: 1.6988 - accuracy: 0.3925 - val_loss: 1.7676 - val_accuracy: 0.3726
Epoch 13/100
313/313 [==============================] - 1s 3ms/step - loss: 1.6872 - accuracy: 0.3965 - val_loss: 1.7758 - val_accuracy: 0.3589
Epoch 14/100
313/313 [==============================] - 1s 3ms/step - loss: 1.6760 - accuracy: 0.4033 - val_loss: 1.7202 - val_accuracy: 0.3817
Epoch 15/100
313/313 [==============================] - 1s 4ms/step - loss: 1.6647 - accuracy: 0.4072 - val_loss: 1.7044 - val_accuracy: 0.3949
Epoch 16/100
313/313 [==============================] - 1s 4ms/step - loss: 1.6586 - accuracy: 0.4096 - val_loss: 1.7036 - val_accuracy: 0.3925
Epoch 17/100
313/313 [==============================] - 1s 3ms/step - loss: 1.6495 - accuracy: 0.4121 - val_loss: 1.7121 - val_accuracy: 0.3844
Epoch 18/100
313/313 [==============================] - 1s 4ms/step - loss: 1.6409 - accuracy: 0.4155 - val_loss: 1.6819 - val_accuracy: 0.3980
Epoch 19/100
313/313 [==============================] - 1s 3ms/step - loss: 1.6302 - accuracy: 0.4207 - val_loss: 1.7588 - val_accuracy: 0.3775
Epoch 20/100
313/313 [==============================] - 1s 3ms/step - loss: 1.6251 - accuracy: 0.4232 - val_loss: 1.6870 - val_accuracy: 0.3967
Epoch 21/100
313/313 [==============================] - 1s 4ms/step - loss: 1.6198 - accuracy: 0.4241 - val_loss: 1.6598 - val_accuracy: 0.4138
Epoch 22/100
313/313 [==============================] - 1s 3ms/step - loss: 1.6110 - accuracy: 0.4288 - val_loss: 1.6686 - val_accuracy: 0.4023
Epoch 23/100
313/313 [==============================] - 1s 4ms/step - loss: 1.6078 - accuracy: 0.4282 - val_loss: 1.6444 - val_accuracy: 0.4140
Epoch 24/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5993 - accuracy: 0.4342 - val_loss: 1.6792 - val_accuracy: 0.4041
Epoch 25/100
313/313 [==============================] - 1s 4ms/step - loss: 1.5948 - accuracy: 0.4354 - val_loss: 1.6517 - val_accuracy: 0.4141
Epoch 26/100
313/313 [==============================] - 1s 4ms/step - loss: 1.5892 - accuracy: 0.4360 - val_loss: 1.6861 - val_accuracy: 0.4057
Epoch 27/100
313/313 [==============================] - 1s 4ms/step - loss: 1.5856 - accuracy: 0.4378 - val_loss: 1.6292 - val_accuracy: 0.4195
Epoch 28/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5785 - accuracy: 0.4387 - val_loss: 1.6437 - val_accuracy: 0.4194
Epoch 29/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5749 - accuracy: 0.4388 - val_loss: 1.6746 - val_accuracy: 0.4025
Epoch 30/100
313/313 [==============================] - 1s 4ms/step - loss: 1.5716 - accuracy: 0.4433 - val_loss: 1.6429 - val_accuracy: 0.4153
Epoch 31/100
313/313 [==============================] - 1s 4ms/step - loss: 1.5676 - accuracy: 0.4444 - val_loss: 1.6169 - val_accuracy: 0.4300
Epoch 32/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5602 - accuracy: 0.4462 - val_loss: 1.6660 - val_accuracy: 0.4040
Epoch 33/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5568 - accuracy: 0.4484 - val_loss: 1.6150 - val_accuracy: 0.4316
Epoch 34/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5539 - accuracy: 0.4473 - val_loss: 1.6274 - val_accuracy: 0.4196
Epoch 35/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5496 - accuracy: 0.4490 - val_loss: 1.6318 - val_accuracy: 0.4210
Epoch 36/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5487 - accuracy: 0.4513 - val_loss: 1.6406 - val_accuracy: 0.4150
Epoch 37/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5427 - accuracy: 0.4514 - val_loss: 1.6197 - val_accuracy: 0.4251
Epoch 38/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5387 - accuracy: 0.4526 - val_loss: 1.6142 - val_accuracy: 0.4296
Epoch 39/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5369 - accuracy: 0.4541 - val_loss: 1.6024 - val_accuracy: 0.4350
Epoch 40/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5327 - accuracy: 0.4557 - val_loss: 1.6138 - val_accuracy: 0.4321
Epoch 41/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5293 - accuracy: 0.4556 - val_loss: 1.6303 - val_accuracy: 0.4185
Epoch 42/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5265 - accuracy: 0.4566 - val_loss: 1.6417 - val_accuracy: 0.4231
Epoch 43/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5208 - accuracy: 0.4602 - val_loss: 1.5818 - val_accuracy: 0.4385
Epoch 44/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5198 - accuracy: 0.4587 - val_loss: 1.6336 - val_accuracy: 0.4313
Epoch 45/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5188 - accuracy: 0.4622 - val_loss: 1.5858 - val_accuracy: 0.4422
Epoch 46/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5124 - accuracy: 0.4620 - val_loss: 1.6163 - val_accuracy: 0.4313
Epoch 47/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5152 - accuracy: 0.4608 - val_loss: 1.6237 - val_accuracy: 0.4336
Epoch 48/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5112 - accuracy: 0.4639 - val_loss: 1.6017 - val_accuracy: 0.4327
Epoch 49/100
313/313 [==============================] - 1s 3ms/step - loss: 1.5043 - accuracy: 0.4640 - val_loss: 1.5864 - val_accuracy: 0.4383
Epoch 50/100
313/313 [==============================] - 1s 4ms/step - loss: 1.5021 - accuracy: 0.4669 - val_loss: 1.5868 - val_accuracy: 0.4364
Epoch 51/100
313/313 [==============================] - 1s 4ms/step - loss: 1.5021 - accuracy: 0.4686 - val_loss: 1.6660 - val_accuracy: 0.4187
Epoch 52/100
313/313 [==============================] - 1s 4ms/step - loss: 1.4997 - accuracy: 0.4681 - val_loss: 1.6058 - val_accuracy: 0.4336
Epoch 53/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4990 - accuracy: 0.4670 - val_loss: 1.6451 - val_accuracy: 0.4165
Epoch 54/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4935 - accuracy: 0.4687 - val_loss: 1.6530 - val_accuracy: 0.4162
Epoch 55/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4888 - accuracy: 0.4712 - val_loss: 1.6777 - val_accuracy: 0.4080
Epoch 56/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4911 - accuracy: 0.4719 - val_loss: 1.5867 - val_accuracy: 0.4401
Epoch 57/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4866 - accuracy: 0.4739 - val_loss: 1.5865 - val_accuracy: 0.4377
Epoch 58/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4858 - accuracy: 0.4743 - val_loss: 1.5604 - val_accuracy: 0.4449
Epoch 59/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4837 - accuracy: 0.4745 - val_loss: 1.5733 - val_accuracy: 0.4451
Epoch 60/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4825 - accuracy: 0.4732 - val_loss: 1.5519 - val_accuracy: 0.4519
Epoch 61/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4802 - accuracy: 0.4738 - val_loss: 1.6047 - val_accuracy: 0.4249
Epoch 62/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4790 - accuracy: 0.4734 - val_loss: 1.6213 - val_accuracy: 0.4341
Epoch 63/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4749 - accuracy: 0.4787 - val_loss: 1.5718 - val_accuracy: 0.4447
Epoch 64/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4743 - accuracy: 0.4744 - val_loss: 1.6211 - val_accuracy: 0.4181
Epoch 65/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4705 - accuracy: 0.4776 - val_loss: 1.5992 - val_accuracy: 0.4454
Epoch 66/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4704 - accuracy: 0.4753 - val_loss: 1.6006 - val_accuracy: 0.4418
Epoch 67/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4679 - accuracy: 0.4808 - val_loss: 1.5763 - val_accuracy: 0.4438
Epoch 68/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4702 - accuracy: 0.4786 - val_loss: 1.6170 - val_accuracy: 0.4435
Epoch 69/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4661 - accuracy: 0.4763 - val_loss: 1.5853 - val_accuracy: 0.4381
Epoch 70/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4639 - accuracy: 0.4790 - val_loss: 1.5461 - val_accuracy: 0.4583
Epoch 71/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4614 - accuracy: 0.4821 - val_loss: 1.5972 - val_accuracy: 0.4349
Epoch 72/100
313/313 [==============================] - 1s 3ms/step - loss: 1.4617 - accuracy: 0.4803 - val_loss: 1.5614 - val_accuracy: 0.4451
Epoch 73/100
313/31