Neural Networks Representation
1.3 Vectorizing Logistic Regression
All right, I had already used the vectorized approach, without any loops, in the previous exercise.
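As a reference, the vectorized cost and gradient can be sketched in NumPy (a sketch of the same computation, not the course's Octave code; the function name `cost_and_grad` and the `lam` parameter are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_grad(theta, X, y, lam):
    """Vectorized regularized logistic-regression cost and gradient."""
    m = y.size
    h = sigmoid(X @ theta)                    # predictions for all m examples at once
    reg = (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m + reg
    grad = X.T @ (h - y) / m
    grad[1:] += (lam / m) * theta[1:]         # do not regularize the bias term
    return J, grad
```

The whole gradient is one matrix product, `X.T @ (h - y) / m`, which is exactly what the `for`-free version computes.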
1.4 One-vs-all Classification
```matlab
initial_theta = zeros(n + 1, 1);
```
The key points here are:

- The `for` loop is needed here to iterate from `1` to `num_labels`.
- The result should be assigned to the row `all_theta(i,:)`; otherwise `all_theta` will end up as a one-dimensional vector.
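The loop and the row assignment can be sketched in NumPy like this (a sketch, not the course code: plain gradient descent stands in for `fmincg`, and labels are assumed 0-based here instead of Octave's 1-based):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_vs_all(X, y, num_labels, lam=0.1, alpha=0.5, iters=2000):
    """Train one regularized logistic-regression classifier per class."""
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])      # add the bias column
    all_theta = np.zeros((num_labels, n + 1))
    for c in range(num_labels):               # the for loop over labels
        yc = (y == c).astype(float)           # one-vs-all labels for class c
        theta = np.zeros(n + 1)
        for _ in range(iters):
            h = sigmoid(Xb @ theta)
            grad = Xb.T @ (h - yc) / m
            grad[1:] += (lam / m) * theta[1:]
            theta -= alpha * grad
        all_theta[c, :] = theta               # assign the whole row, as in all_theta(i,:)
    return all_theta
```

Assigning `all_theta[c, :]` (Octave's `all_theta(i,:)`) is what keeps the result a `num_labels x (n+1)` matrix rather than collapsing it to a vector.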
1.4.1 One-vs-all Prediction
```matlab
A = sigmoid(X * all_theta');
```
The `max` function returns two outputs here: `m` is the maximum value, and `p` is the index of the maximum value in each row.
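In NumPy terms, the Octave call `[m, p] = max(A, [], 2)` corresponds to a row-wise max and argmax (a sketch with a toy score matrix standing in for `sigmoid(X * all_theta')`):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 3x2 score matrix standing in for sigmoid(X * all_theta')
A = sigmoid(np.array([[2.0, -1.0],
                      [-3.0, 0.5],
                      [1.0, 4.0]]))
m_vals = A.max(axis=1)     # Octave: the first output m of max(A, [], 2)
p = A.argmax(axis=1)       # Octave: the second output p (0-based here, 1-based in Octave)
```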
2 Neural Networks
Finally, I touched on Neural Networks with multiple classes.
Prior to this, I was wondering about the prediction process with a trained model. Is it the same as the training process?
```matlab
a0x = ones(m, 1);
```
Amazing, right?
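Prediction with a trained network is indeed just the forward half of training: one forward pass, then a row-wise argmax. A NumPy sketch, assuming already-trained weight matrices `Theta1` and `Theta2` (names and shapes follow the course convention, but this is my own sketch):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(Theta1, Theta2, X):
    """Forward propagation: input -> hidden -> output, then row-wise argmax."""
    m = X.shape[0]
    a1 = np.hstack([np.ones((m, 1)), X])      # add the bias unit to the input
    a2 = sigmoid(a1 @ Theta1.T)               # hidden-layer activations
    a2 = np.hstack([np.ones((m, 1)), a2])     # bias unit for the hidden layer
    a3 = sigmoid(a2 @ Theta2.T)               # output-layer activations
    return np.argmax(a3, axis=1)              # predicted class per example (0-based)
```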
First, I forgot to apply the `sigmoid` function to the hidden layer and output layer. The accuracy shows:

```
Training Set Accuracy: 69.62
```
And it gets the same accuracy even if the output layer has no `sigmoid`. The same thing happened in the previous One-vs-all Prediction.
Therefore, I still have a question.
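One plausible answer, for what it's worth: `sigmoid` is strictly increasing, so applying it to the output layer never changes which unit is largest; the row-wise argmax, and therefore the accuracy, stays identical. A quick numerical check of that claim:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 10))   # random raw output-layer scores
# argmax is invariant under any strictly increasing function such as sigmoid
same = (np.argmax(Z, axis=1) == np.argmax(sigmoid(Z), axis=1)).all()
```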
It's time to continue with the training.
Translated by gpt-3.5-turbo