Multilayer Perceptron in Gluon¶
Now that we have learned how multilayer perceptrons (MLPs) work in theory, let’s implement them. We begin, as always, by importing the required modules.
In [1]:
import gluonbook as gb
from mxnet import gluon, init
from mxnet.gluon import loss as gloss, nn
The Model¶
The only difference from softmax regression is the addition of a fully connected layer as a hidden layer. It has 256 hidden units and uses ReLU as its activation function.
In [2]:
net = nn.Sequential()
net.add(nn.Dense(256, activation='relu'))
net.add(nn.Dense(10))
net.initialize(init.Normal(sigma=0.01))
One minor detail is of note when invoking net.add(): it adds one or more layers to the network. That is, an equivalent to the lines above would be net.add(nn.Dense(256, activation='relu'), nn.Dense(10)). Also note that Gluon automagically infers the missing parameters, such as the fact that the second layer needs a weight matrix of size \(256 \times 10\). This happens the first time the network is invoked.
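To see this deferred shape inference at work, here is a small sketch (not part of the original notebook) that prints the second layer’s weight parameter before and after the first forward pass. We assume a Fashion-MNIST-like input flattened to \(28 \times 28 = 784\) features; note that MXNet stores a Dense weight as (output units, input units), so the inferred shape is reported as (10, 256).

from mxnet import nd

# Before the first forward pass the input dimension is still unknown,
# so the weight shape is reported as (10, 0).
print(net[1].weight)
# Run a dummy mini-batch through the network; nn.Dense flattens its
# input automatically, so any batch of 784-dimensional examples works.
net(nd.random.uniform(shape=(2, 784)))
# Now the missing dimension has been inferred: shape (10, 256).
print(net[1].weight)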
Reading the data and training the model follow almost exactly the same steps as for softmax regression. The two None arguments in the call to train_ch3 below stand in for manually maintained parameters and a learning rate, which are not needed here since we pass a Gluon Trainer instead.
In [3]:
batch_size = 256
train_iter, test_iter = gb.load_data_fashion_mnist(batch_size)
loss = gloss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.5})
num_epochs = 10
gb.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,
             None, None, trainer)
epoch 1, loss 0.7992, train acc 0.702, test acc 0.822
epoch 2, loss 0.4866, train acc 0.818, test acc 0.846
epoch 3, loss 0.4356, train acc 0.838, test acc 0.860
epoch 4, loss 0.3941, train acc 0.855, test acc 0.866
epoch 5, loss 0.3723, train acc 0.863, test acc 0.870
epoch 6, loss 0.3575, train acc 0.868, test acc 0.872
epoch 7, loss 0.3418, train acc 0.873, test acc 0.877
epoch 8, loss 0.3280, train acc 0.879, test acc 0.878
epoch 9, loss 0.3183, train acc 0.883, test acc 0.879
epoch 10, loss 0.3081, train acc 0.886, test acc 0.880
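As a quick sanity check (a sketch that is not part of the original notebook), we can compare the trained network’s predictions on a single test mini-batch with the true labels; the resulting accuracy should be close to the test accuracy reported above.

for X, y in test_iter:
    break  # take one mini-batch from the test set
preds = net(X).argmax(axis=1)  # predicted class for each example
# Compare predictions to the labels; cast so the dtypes match.
print('batch accuracy:', (preds == y.astype('float32')).mean().asscalar())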
Problems¶
- Try adding a few more hidden layers to see how the result changes.
- Try out different activation functions. Which ones work best?
- Try out different initializations of the weights.