Machine Learning Notes ---- Evaluations and Diagnostics on Algorithms

Improvements and Diagnostics on Algorithms

1. How to Evaluate A Hypothesis

Split the data set into two parts: a training set and a test set.
If J_test(θ) is high while J_train(θ) is low, the model is overfitting.



Linear Regression Test Error:
Same form as J(θ) (average squared error), computed on the test set:
J_test(θ) = (1 / (2 * m_test)) * Σ_{i=1}^{m_test} (h_θ(x_test^(i)) - y_test^(i))^2


Logistic Regression Test Error:

err(h_Θ(x), y) = 1, if h_Θ(x) >= 0.5 and y = 0, or h_Θ(x) < 0.5 and y = 1
err(h_Θ(x), y) = 0, otherwise

then

Test Error = (1 / m_test) * Σ_{i=1}^{m_test} err(h_Θ(x_test^(i)), y_test^(i))
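Both test-error definitions above can be sketched in NumPy; the function and array names here are illustrative, not from the notes:

```python
import numpy as np

def linear_test_error(preds, y):
    """Squared-error cost on the test set, same form as J(theta)."""
    return np.mean((preds - y) ** 2) / 2

def logistic_test_error(probs, y):
    """0/1 misclassification error: threshold h(x) at 0.5."""
    preds = (probs >= 0.5).astype(int)
    return np.mean(preds != y)
```

For the logistic case, `probs` holds the hypothesis outputs h_Θ(x) on the test set and `y` the true 0/1 labels; the mean of the mismatch indicator is exactly the Test Error formula above.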

2. Model Selection

Split the data set into three parts: training set + cross validation set (CV) + test set
1) Optimize the parameters in Θ using the training set for each polynomial degree.
2) Find the polynomial degree d with the least error using the cross validation set.
3) Estimate the generalization error using the test set with J_test(Θ(d)), where d is the degree chosen in step 2 and Θ(d) are its learned parameters.
In practice, the CV set and test set should be randomly sampled!
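The three steps can be sketched on toy 1-D data; the data, the degree grid, and `np.polyfit` standing in for training are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D data: y is roughly cubic in x, plus noise.
x = rng.uniform(-2, 2, 150)
y = x**3 - x + rng.normal(0, 0.3, 150)

# Random 60/20/20 split into train / CV / test.
idx = rng.permutation(len(x))
tr, cv, te = idx[:90], idx[90:120], idx[120:]

def cost(coeffs, xs, ys):
    """Average squared error of a fitted polynomial."""
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2) / 2

# 1) Optimize the parameters on the training set for each degree d.
fits = {d: np.polyfit(x[tr], y[tr], d) for d in range(1, 9)}

# 2) Pick the degree with the least cross-validation error.
best_d = min(fits, key=lambda d: cost(fits[d], x[cv], y[cv]))

# 3) Estimate generalization error once, on the held-out test set.
test_error = cost(fits[best_d], x[te], y[te])
```

Note that the test set is touched exactly once, after d has been fixed; reusing it during selection would bias the generalization estimate.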

3. Diagnosing Bias & Variance

Training error decreases as the polynomial degree d increases.
Cross-validation error first decreases, then increases as d becomes bigger.
High Bias:
J_CV(θ) ≈ J_train(θ), and both are high
High Variance:
J_CV(θ) is high, J_train(θ) is low
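The two criteria above can be written as a rule-of-thumb check; the tolerance threshold here is an arbitrary assumption, not part of the notes:

```python
def diagnose(j_train, j_cv, tol=0.1):
    """Crude bias/variance diagnostic from training and CV error.

    High variance: CV error much larger than training error.
    High bias: both errors high and close together.
    """
    if j_cv - j_train > tol:
        return "high variance"
    if j_train > tol:
        return "high bias"
    return "ok"
```

In practice the threshold depends on the problem's noise level, so `tol` would be chosen per task rather than fixed.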

4. Choosing λ When Doing Regularization

Try a range of λ values (e.g., start small and double each time: λ := 2λ); pick the one with the least J_CV(θ) and report its test error.
High Bias:
J_CV(θ) ≈ J_train(θ), both high; λ is too big
High Variance:
J_CV(θ) high, J_train(θ) low; λ is too small
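A sketch of the λ search, using closed-form ridge regression on synthetic data; the data, the doubling grid, and the unpenalized intercept convention are illustrative assumptions:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Regularized linear regression, normal-equation form.

    The intercept column (first) is left unpenalized.
    """
    reg = lam * np.eye(X.shape[1])
    reg[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + reg, X.T @ y)

def cost(theta, X, y):
    """Unregularized average squared error (used for J_CV)."""
    return np.mean((X @ theta - y) ** 2) / 2

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 5))])
true_theta = np.array([1.0, 2.0, 0.0, 0.0, -1.0, 0.5])
y = X @ true_theta + rng.normal(0, 0.5, 100)

Xtr, ytr, Xcv, ycv = X[:60], y[:60], X[60:], y[60:]

# Doubling grid of lambda values, as in the notes (lambda := 2*lambda).
lams = [0.0] + [0.01 * 2**k for k in range(10)]
best_lam = min(lams, key=lambda l: cost(ridge_fit(Xtr, ytr, l), Xcv, ycv))
```

Note that J_CV is evaluated without the regularization term: λ shapes training, but the selection criterion is plain prediction error.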

5. Learning Curves

x-axis: training-set size m; y-axis: error (J_train and J_CV)

High Bias:
If bias is high, J_train and J_CV converge to a similarly high plateau; adding more training data won't help.


High Variance:
If variance is high, there is a large gap between J_train and J_CV; adding more training data is likely to help close it.
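A learning curve can be sketched by fitting on growing training subsets and tracking both errors; the data and the deliberately over-flexible degree-8 model are hypothetical choices to make the high-variance gap visible:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy linear data; a degree-8 polynomial is far more flexible
# than needed, so it should exhibit high variance on small m.
x = rng.uniform(-1, 1, 200)
y = 2 * x + 1 + rng.normal(0, 0.2, 200)
x_cv, y_cv = x[150:], y[150:]   # held-out CV set

def cost(coeffs, xs, ys):
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2) / 2

# Learning curve: train on the first m examples, for growing m,
# and record training error vs. CV error at each size.
train_errs, cv_errs = [], []
for m in (12, 25, 50, 100, 150):
    c = np.polyfit(x[:m], y[:m], 8)
    train_errs.append(cost(c, x[:m], y[:m]))
    cv_errs.append(cost(c, x_cv, y_cv))
```

Plotting `train_errs` and `cv_errs` against m gives the curves described above: a persistent gap that narrows with more data signals high variance.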

6. Solutions for Bias & Variance

High Bias:
-add more features;
-add polynomial features;
-decrease λ

High Variance:
-get more training examples;
-use fewer features;
-increase λ

7. Bias & Variance for Neural Networks

Small network: prone to high bias (underfitting).
Big network: prone to high variance (overfitting); use λ regularization to control it.

8. Error Metrics: Precision & Recall

Let y=1 denote the rare class.
- Precision: Of all y=1 predictions, what fraction is actually y=1?
- Recall: Of all actual rare cases (y=1), what fraction is correctly detected?

How to trade off precision and recall? Use the F score:
F score = 2PR / (P + R)
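The three metrics can be sketched in plain Python over 0/1 prediction and label lists (the function name and the zero-division fallbacks are my own conventions):

```python
def precision_recall_f1(preds, labels):
    """Precision, recall, and F score for the rare class y = 1."""
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, labels))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, labels))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

The harmonic-mean form of the F score means it is high only when precision and recall are both high; always predicting y=1 drives recall to 1 but precision (and thus F) stays low.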