Machine Learning (Andrew Ng) Notes, Part 3
chapter 27 Multiple features
This chapter introduces a new, more powerful version of linear regression that works with multiple variables (multiple features).
Notation:
$n$ = number of features
$x^{(i)}$ = input (features) of the $i$-th training example
$x^{(i)}_j$ = value of feature $j$ in the $i$-th training example
Hypothesis:
$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_n x_n$
For convenience of notation, define $x_0 = 1$.
So $h_\theta(x) = \theta^T x$, with $\theta = [\theta_0, \theta_1, \ldots, \theta_n]^T$ and $x = [x_0, x_1, \ldots, x_n]^T$.
Multivariate linear regression.
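As a concrete illustration, here is a minimal NumPy sketch (not from the original notes) of evaluating $h_\theta(x) = \theta^T x$; the parameter values and the example features are made up:

```python
import numpy as np

theta = np.array([80.0, 0.1, 0.01, 3.0, -2.0])  # hypothetical parameters for n = 4 features
x = np.array([2104.0, 5.0, 1.0, 45.0])          # one training example with 4 features

x = np.insert(x, 0, 1.0)  # prepend x_0 = 1 for the intercept term
h = theta @ x             # h_theta(x) = theta^T x
print(h)
```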
chapter 28 Gradient descent for multiple variables
This chapter shows how to fit the parameters of that hypothesis: how to use gradient descent for linear regression with multiple features.
Hypothesis: $h_\theta(x) = \theta^T x = \theta_0 x_0 + \theta_1 x_1 + \cdots + \theta_n x_n$, with $x_0 = 1$.
Parameters: $\theta_0, \theta_1, \ldots, \theta_n$; here $\theta$ is an $(n+1)$-dimensional vector.
Cost function: $J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \big(h_\theta(x^{(i)}) - y^{(i)}\big)^2$
Gradient descent:
Repeat {
$\quad \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$
} (simultaneously update for every $j = 0, \ldots, n$)
New algorithm ($n \ge 1$):
Repeat {
$\quad \theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \big(h_\theta(x^{(i)}) - y^{(i)}\big)\, x_j^{(i)}$
} (simultaneously update $\theta_j$ for every $j = 0, \ldots, n$)
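A minimal NumPy sketch of this update rule (not from the original notes; the design matrix is assumed to already include the $x_0 = 1$ column):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Batch gradient descent for multivariate linear regression.

    X is an (m, n+1) design matrix whose first column is all ones (x_0 = 1),
    y is an (m,) target vector. All theta_j are updated simultaneously by a
    single vectorized step, matching the update rule above.
    """
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        error = X @ theta - y               # h_theta(x^(i)) - y^(i), for all i at once
        theta -= alpha * (X.T @ error) / m  # theta_j -= alpha * (1/m) * sum(error * x_j)
    return theta
```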
chapter 29 Gradient descent in practice 1: feature scaling
This chapter covers practical tricks for making gradient descent work well.
Feature scaling:
Idea: make sure features are on a similar scale, so that gradient descent converges more quickly.
Get every feature into approximately a $-1 \le x_i \le 1$ range.
Mean normalization:
Replace $x_i$ with $\frac{x_i - \mu_i}{s_i}$ to make features have approximately zero mean (do not apply to $x_0 = 1$).
$\mu_i$ is the average value of $x_i$ in the training set.
$s_i$ is the range of values of that feature (max minus min), or the standard deviation.
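A minimal sketch of mean normalization (assuming NumPy; here $s_i$ is taken to be the standard deviation, but the range would also work):

```python
import numpy as np

def mean_normalize(X):
    """Mean normalization: replace each feature x_i with (x_i - mu_i) / s_i.

    Apply this to the raw feature columns only, before prepending x_0 = 1.
    """
    mu = X.mean(axis=0)          # mu_i: average value of feature i in the training set
    s = X.std(axis=0)            # s_i: here, the standard deviation of feature i
    return (X - mu) / s, mu, s   # keep mu and s to scale future inputs the same way
```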
chapter 30 Gradient descent in practice 2: learning rate
This chapter centers on the learning rate $\alpha$:
- "Debugging": how to make sure gradient descent is working correctly.
- How to choose the learning rate $\alpha$.
One automatic convergence test: declare convergence if $J(\theta)$ decreases by less than $10^{-3}$ in one iteration.
But choosing this threshold is quite difficult, so to check that gradient descent has converged, it is usually better to look at plots of $J(\theta)$ against the number of iterations (see the sketch after this list).
- For sufficiently small $\alpha$, $J(\theta)$ should decrease on every iteration.
- But if $\alpha$ is too small, gradient descent can be slow to converge.
- If $\alpha$ is too large, $J(\theta)$ may not decrease on every iteration and may not converge.
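A minimal sketch (assuming NumPy; the `tol=1e-3` threshold is just the example value above) of running gradient descent while recording $J(\theta)$ so it can be plotted against the iteration number:

```python
import numpy as np

def gradient_descent_with_history(X, y, alpha=0.01, num_iters=400, tol=1e-3):
    """Gradient descent that records J(theta) each iteration for plotting,
    and stops early if J decreases by less than tol in one iteration."""
    m = X.shape[0]
    theta = np.zeros(X.shape[1])
    history = []
    for _ in range(num_iters):
        error = X @ theta - y
        J = (error @ error) / (2 * m)       # J(theta) before this iteration's update
        history.append(J)
        if len(history) > 1 and history[-2] - history[-1] < tol:
            break                           # J decreased by less than tol in one iteration
        theta -= alpha * (X.T @ error) / m
    return theta, history
```

Plotting `history` (for example with matplotlib's `plt.plot(history)`) shows at a glance whether $J(\theta)$ is decreasing on every iteration.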
chapter 31 Features and polynomial regression
This chapter is about the choice of features, and how by defining new features you can get different learning algorithms, such as polynomial regression. For example, to fit a cubic model $h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3$ to house prices, define the features $x_1 = (\text{size})$, $x_2 = (\text{size})^2$, $x_3 = (\text{size})^3$ and run linear regression on them. If you're using gradient descent, it is important to apply feature scaling to get these features into comparable ranges of values, since the powers of a feature have very different scales. You have broad choices in the features you use.
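A minimal sketch of this feature construction (assuming NumPy; the size values are made up):

```python
import numpy as np

size = np.array([100.0, 150.0, 200.0, 250.0])       # hypothetical house sizes
X_poly = np.column_stack([size, size**2, size**3])  # x1 = size, x2 = size^2, x3 = size^3

# Feature scaling is essential here: size^3 is vastly larger than size.
X_poly = (X_poly - X_poly.mean(axis=0)) / X_poly.std(axis=0)

X = np.column_stack([np.ones(len(size)), X_poly])   # prepend x_0 = 1
```

After this, `X` can be fed to ordinary multivariate linear regression (e.g., the gradient descent sketch from chapter 28).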
chapter 32 Normal equation
For some linear regression problems, the normal equation gives a much better way to solve for the optimal value of the parameters $\theta$.
Normal equation: a method to solve for $\theta$ analytically, in one step:
$\theta = (X^T X)^{-1} X^T y$
where $X$ is the $m \times (n+1)$ design matrix and $y$ is the vector of training targets.
Derivation of the normal equation: https://zhuanlan.zhihu.com/p/22474562
Matrix differentiation: https://blog.****.net/nomadlx53/article/details/50849941
Gradient descent vs. the normal equation:
- Gradient descent: needs to choose $\alpha$; needs many iterations; works well even when $n$ is large.
- Normal equation: no need to choose $\alpha$; no iterations; needs to compute $(X^T X)^{-1}$, which costs roughly $O(n^3)$, so it is slow when $n$ is very large.
The normal equation method also does not carry over to more sophisticated learning algorithms (such as logistic regression), where gradient descent still works.
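A minimal NumPy sketch of the normal equation (not from the original notes):

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form solution theta = (X^T X)^{-1} X^T y.

    X is the (m, n+1) design matrix with a leading column of ones.
    np.linalg.solve is used instead of forming the inverse explicitly,
    which is the standard, numerically safer way to solve the system.
    """
    return np.linalg.solve(X.T @ X, X.T @ y)
```

No learning rate, no iterations, and no feature scaling are needed here, matching the comparison above.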
chapter 33 Normal equation and non-invertibility (optional)
What if $X^T X$ is non-invertible?
- Redundant features (linearly dependent).
  E.g., $x_1$ = size in feet$^2$ and $x_2$ = size in m$^2$: then $x_1 = (3.28)^2 \cdot x_2$, so the columns are linearly dependent (see the sketch below).
- Too many features (e.g., $m \le n$).
  Delete some features, or use regularization.
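A minimal sketch (assuming NumPy; the data is made up) showing that redundant features make $X^T X$ singular, and that the pseudo-inverse still returns a usable $\theta$:

```python
import numpy as np

size_ft2 = np.array([1000.0, 1500.0, 2000.0])
size_m2 = size_ft2 / 3.28**2                   # exactly dependent on size_ft2
X = np.column_stack([np.ones(3), size_ft2, size_m2])
y = np.array([200.0, 300.0, 400.0])

# X^T X is singular here, so a plain inverse would fail, but the
# pseudo-inverse still produces a theta that fits the data.
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)
```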