The code is publicly available on my GitHub.
The architecture of the network we will work on is as follows:
Input
Convolution (5 x 5)
MaxPooling
Convolution (5 x 5)
MaxPooling
FullyConnected
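Reconstructed as a minimal TensorFlow 1.x sketch, the stack looks roughly like this in code. The filter counts (32 and 64), "same" padding, ReLU activations and the layer names are my assumptions; the post only fixes the kernel sizes and the layer types:

import tensorflow as tf

# Input: MNIST digits as 28 x 28 greyscale images.
x = tf.placeholder(tf.float32, [None, 28, 28, 1], name="input")

# Convolution (5 x 5) -> MaxPooling, twice over. Filter counts of 32 and
# 64 are assumed; the post does not state them.
conv1 = tf.layers.conv2d(x, filters=32, kernel_size=5, padding="same",
                         activation=tf.nn.relu, name="conv1")
pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2, name="pool1")
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=5, padding="same",
                         activation=tf.nn.relu, name="conv2")
pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2, name="pool2")

# FullyConnected: flatten the 7 x 7 x 64 maps and emit one logit per digit.
flat = tf.layers.flatten(pool2)
logits = tf.layers.dense(flat, units=10, name="fc")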
The model is trained on the popular MNIST dataset with the following parameters:
batch_size = 50
learning_rate = 0.001
epochs = 400
Optimiser = Adam
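Wired up in code, a minimal training sketch under those parameters could look like the following; `logits` and `x` come from the architecture sketch above, and the names `y`, `loss` and `train_op` are my own:

from tensorflow.examples.tutorials.mnist import input_data

# One-hot labels and the standard cross-entropy loss with the Adam optimiser.
y = tf.placeholder(tf.float32, [None, 10], name="labels")
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

batch_size = 50
epochs = 400

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(epochs * mnist.train.num_examples // batch_size):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        sess.run(train_op, feed_dict={x: batch_x.reshape(-1, 28, 28, 1),
                                      y: batch_y})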
After training, we load the layer we want to visualise and pass a sample input through the input layer. The function then runs a session for that layer given the input and returns the activation maps produced by every filter in that layer.
This is done with TensorFlow's session.run(), which evaluates the layer tensor for the fed-in input and returns the activations of all its filters as a single array.
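As a minimal sketch, assuming `x`, `conv1` and `mnist` from the sketches above and a session `sess` in which the trained weights are still live (or have been restored with tf.train.Saver):

# One MNIST test digit, shaped (1, 28, 28, 1) for the input placeholder.
sample = mnist.test.images[0].reshape(1, 28, 28, 1)

# session.run() evaluates the layer tensor for the fed input and returns
# its stacked activation maps: shape (1, 28, 28, 32) for conv1 here.
units = sess.run(conv1, feed_dict={x: sample})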
The results for the sample input are the following visualisations, plotted using matplotlib along the lines of the sketch below.
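The plotting helper here is hypothetical (the post does not show its code); it lays out one greyscale activation map per filter on a grid, with the column count and colour map as my own choices:

import matplotlib.pyplot as plt

def plot_layer(units, cols=6):
    # `units` has shape (1, height, width, n_filters): one subplot per filter.
    n_filters = units.shape[3]
    rows = (n_filters + cols - 1) // cols
    plt.figure(figsize=(cols * 2, rows * 2))
    for i in range(n_filters):
        plt.subplot(rows, cols, i + 1)
        plt.title("Filter " + str(i))
        plt.axis("off")
        plt.imshow(units[0, :, :, i], interpolation="nearest", cmap="gray")
    plt.show()

plot_layer(units)  # e.g. the conv1 activations from the previous sketch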
Hidden Layer 1:
Hidden Layer 2:
The number of plots corresponds to the growing number of filters as we go deeper into the network.
Depth also changes what the filters seek: the deeper the layer, the finer and more abstract the details its filters respond to. This can be seen by comparing what Hidden Layer 1 sees against Hidden Layer 2, since each plot shows how strongly the input stimulates a particular filter.
The height of your accomplishments equals the depth of your convictions.
Stats:
TensorFlow 1.8
Jupyter notebook
Ubuntu 17.10