---
title: Logistic Regression
date: 2018-01-03 21:59:14
tags: [AI, Machine Learning]
mathjax: true
---


1.2.1 Warmup exercise: sigmoid function

The sigmoid function is defined as:

$$g(z) = \frac{1}{1 + e^{-z}}$$

which in Octave is a one-liner (element-wise, so it works on scalars, vectors, and matrices alike):

```matlab
g = 1./(1+exp(-z))
```
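As a quick sanity check (not part of the graded exercise), the same element-wise formula can be evaluated on a few hand-picked values: $g(0)$ should be exactly $0.5$, and large positive and negative inputs should saturate toward $1$ and $0$.

```matlab
% Sanity check for the element-wise sigmoid (illustration only)
z = [-1000 0 1000];
g = 1./(1 + exp(-z));
disp(g)   % expected: 0  0.5  1
```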

1.2.2 Cost function and gradient

The cost function is given by:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\big(h_\theta(x^{(i)})\big) - \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\right]$$

where $h_\theta(x) = g(\theta^T x)$, and the gradient $\frac{\partial J(\theta)}{\partial \theta_j}$ is defined as:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)}$$

Both are vectorized in Octave:

```matlab
J = (-y'*log(sigmoid(X*theta)) - (1-y)'*log(1-sigmoid(X*theta)))/m
grad = (X'*(sigmoid(X*theta) - y))/m
```
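These two lines can be tried on a tiny made-up dataset (the numbers below are arbitrary, chosen only for illustration, and `sigmoid` is the warmup function above): with `theta` initialized to zeros every prediction is 0.5, so `J` should come out as $\log 2 \approx 0.693$ and `grad` should have the same size as `theta`.

```matlab
% Tiny made-up example (illustration only)
X = [ones(3,1), [1; 2; 3]];   % m = 3 examples: intercept column plus one feature
y = [0; 0; 1];
theta = zeros(2,1);
m = length(y);

J = (-y'*log(sigmoid(X*theta)) - (1-y)'*log(1-sigmoid(X*theta)))/m   % ~0.6931
grad = (X'*(sigmoid(X*theta) - y))/m                                 % 2x1 vector
```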

2.3 Cost function and gradient

The regularized cost function is given by:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\big(h_\theta(x^{(i)})\big) - \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

where the gradient $\frac{\partial J(\theta)}{\partial \theta_j}$ is defined as:

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_0^{(i)} \qquad \text{for } j = 0$$

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad \text{for } j \ge 1$$

For $j = 0$ (the intercept parameter $\theta_0$, stored as `theta(1)` in Octave), the regularization term is not applied. A mask vector `r2` with its first entry zeroed out takes care of that:

```matlab
% regularization term: theta(1), i.e. theta_0, is excluded
r1 = sum(theta(2:end).^2)*lambda/2/m
J = (-y'*log(sigmoid(X*theta)) - (1-y)'*log(1-sigmoid(X*theta)))/m + r1

% mask that removes the regularization contribution for theta_0
r2 = ones(size(theta))
r2(1) = 0
grad = (X'*(sigmoid(X*theta) - y))/m + (theta.*r2)*lambda/m
```
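One way to convince yourself that the `r2` mask really leaves $\theta_0$ alone (again with made-up numbers, for illustration only) is to compute the gradient with and without the regularization term: only the entries after the first one should change.

```matlab
% Made-up check: theta(1), i.e. theta_0, is unaffected by lambda
X = [ones(3,1), [1; 2; 3]];  y = [0; 0; 1];  theta = [1; -2];  m = length(y);
r2 = ones(size(theta));  r2(1) = 0;
g_unreg = (X'*(sigmoid(X*theta) - y))/m         % gradient with lambda = 0
lambda = 10;
g_reg = g_unreg + (theta.*r2)*lambda/m          % only the 2nd entry differs
```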

(Figure: costFunctionReg)

Yes, yes, I know I passed. 😄

But God knows what happened. 🤔

