Neural Networks Representation
1.3 Vectorizing Logistic Regression
All right, I already used the vectorized approach, without any loops, in the last exercise.
1.4 One-vs-all Classification
```matlab
initial_theta = zeros(n + 1, 1);
```
The key points here are: a `for` loop is needed to iterate from `1` to `num_labels`, and each result should be assigned with `all_theta(i,:)`, otherwise `all_theta` will end up as a one-dimensional vector.
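Putting those points together, here is a sketch of my own in NumPy (the names `train_logistic` and `one_vs_all` are mine, and a plain gradient-descent trainer stands in for the exercise's optimizer):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, iters=500, alpha=0.1):
    # Minimal gradient-descent stand-in for the exercise's optimizer.
    theta = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ theta) - y) / m
        theta -= alpha * grad
    return theta

def one_vs_all(X, y, num_labels):
    # y is assumed to hold labels 1..num_labels, as in the exercise.
    m, n = X.shape
    X = np.hstack([np.ones((m, 1)), X])   # add the bias column
    all_theta = np.zeros((num_labels, n + 1))
    for i in range(1, num_labels + 1):    # loop from 1 to num_labels
        # Assign row by row, like all_theta(i,:) in Octave, so that
        # all_theta stays a num_labels x (n+1) matrix.
        all_theta[i - 1, :] = train_logistic(X, (y == i).astype(float))
    return all_theta
```

Each pass of the loop relabels the data as "class `i` vs everything else" and trains one binary classifier, which is the whole idea of one-vs-all.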
1.4.1 One-vs-all Prediction
```matlab
A = sigmoid(X * all_theta');
```
The `max` function can return two outputs, as in `[m, p] = max(A, [], 2)`: `m` is the maximum value in each row, and `p` is the index of that maximum within the row, which is exactly the predicted label.
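The same row-wise max/argmax step in NumPy (a sketch of my own; `np.argmax` plays the role of Octave's second return value `p`):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_one_vs_all(all_theta, X):
    m = X.shape[0]
    X = np.hstack([np.ones((m, 1)), X])   # add the bias column
    A = sigmoid(X @ all_theta.T)          # m x num_labels probabilities
    m_vals = A.max(axis=1)                # like Octave's m: row-wise max (unused here)
    p = A.argmax(axis=1) + 1              # like Octave's p: 1-based class index
    return p
```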
2 Neural Networks
Finally, after so many classes, I get to touch neural networks. 😳
Before this, I was wondering what the prediction process looks like with a trained model. Is it the same as the training process?
```matlab
a0x = ones(m, 1);
```
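Fleshing that line out, the full feedforward prediction might look like this in NumPy (a sketch of my own, assuming `Theta1` and `Theta2` are the pre-trained weight matrices, with a bias unit and a `sigmoid` at each layer):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(Theta1, Theta2, X):
    # Feedforward through a 3-layer network.
    m = X.shape[0]
    a1 = np.hstack([np.ones((m, 1)), X])    # input layer + bias unit
    a2 = sigmoid(a1 @ Theta1.T)             # hidden layer activations
    a2 = np.hstack([np.ones((m, 1)), a2])   # hidden layer + bias unit
    a3 = sigmoid(a2 @ Theta2.T)             # output layer activations
    return a3.argmax(axis=1) + 1            # 1-based predicted class
```

Prediction is just one forward pass, so it is far cheaper than training, which repeats this (plus a backward pass) many times.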
Amazing ha! 🤩
At first I forgot to apply `sigmoid` to the hidden layer and the output layer. The accuracy showed
Training Set Accuracy: 69.62
Interestingly, it gets the same accuracy even if only the output layer has no `sigmoid`. The same thing happened in the previous one-vs-all prediction.
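One sanity check of my own on that observation: `sigmoid` is strictly increasing, so applying it to the output layer can never change which unit is largest; the row-wise argmax, and therefore the predicted labels, stays the same:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random "output layer" inputs z3 for a few examples.
rng = np.random.default_rng(0)
z3 = rng.normal(size=(5, 10))

# Because sigmoid is strictly increasing, argmax(sigmoid(z)) == argmax(z).
assert (sigmoid(z3).argmax(axis=1) == z3.argmax(axis=1)).all()
```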
Therefore I still have the question. 🧐
It’s time to move on to the training part. 💪