Machine Learning (3): Bayesian Learning (1)
Conditional Probability
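The probability of A given that B has occurred:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) > 0$$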
Bayes Theorem
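Applying the definition in both directions (since $P(A \cap B) = P(B \mid A)\,P(A)$) gives

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$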
Conditional probability under independence
In particular, if A and B are independent, then P(A | B) = P(A), and hence P(A ∩ B) = P(A) P(B).
Conditional independence
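A is conditionally independent of B given C when

$$P(A \mid B, C) = P(A \mid C)$$

that is, once C is known, B tells us nothing more about A. This is the assumption the naive Bayes classifier below is built on.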
Bayesian Learning
Estimating probabilities is essentially a matter of counting the
occurrences of particular combinations of values in the
training data set.
That is the basic way of estimating probabilities.
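In symbols, the counting estimate is

$$\hat{P}(x \mid c) = \frac{\operatorname{count}(x, c)}{\operatorname{count}(c)}$$

the fraction of class-c examples in which the value combination x occurs.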
But if the counts are low, the estimate is unreliable, and a combination never seen in training gets probability exactly 0. So we need a way to fix this.
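The notes do not name a fix, but the usual one is Laplace smoothing (the m-estimate): add a pseudocount k to every count,

$$\hat{P}(x \mid c) = \frac{\operatorname{count}(x, c) + k}{\operatorname{count}(c) + k\,|V|}$$

where |V| is the number of possible values of the attribute, so no estimate is ever exactly 0.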
Complete Bayes Classifiers
But the complete Bayes classifier needs a very large number of examples to support its estimates.
Working through even simple examples shows that the number of value combinations whose probabilities must be estimated grows very quickly.
So this method is not useful in most situations.
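To put a number on it: with n binary attributes, the complete classifier must estimate $P(a_1, \ldots, a_n \mid c)$ for all $2^n$ value combinations in each class, so with n = 20 that is already $2^{20} = 1{,}048{,}576$ probabilities per class, each needing its own examples to count.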
Naive Bayes Classifiers
The complete Bayes classifier is impractical because so much
data is required to estimate the conditional probabilities.
Can we get round this problem by finding a much more
economical way to estimate them?
We can: by assuming the attributes are conditionally independent given the class (the naive Bayes assumption), each P(attribute | class) is estimated separately, which needs far fewer examples to make a prediction.
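A minimal counting-based sketch of this idea (my own illustration, not code from the notes), using the Laplace smoothing from the earlier section:

```python
from collections import Counter, defaultdict

def train(examples, labels):
    """Count class frequencies and per-class attribute-value frequencies."""
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)  # (attr index, class) -> value counts
    domains = defaultdict(set)           # attr index -> observed values
    for x, c in zip(examples, labels):
        for i, v in enumerate(x):
            value_counts[(i, c)][v] += 1
            domains[i].add(v)
    return class_counts, value_counts, domains

def predict(x, class_counts, value_counts, domains, k=1):
    """Pick the class maximising P(c) * prod_i P(x_i | c), Laplace-smoothed."""
    total = sum(class_counts.values())
    best_class, best_score = None, -1.0
    for c, n_c in class_counts.items():
        score = n_c / total  # prior P(c), estimated by counting
        for i, v in enumerate(x):
            count = value_counts[(i, c)][v]
            score *= (count + k) / (n_c + k * len(domains[i]))
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Tiny usage example with two nominal attributes
X = [("sunny", "hot"), ("sunny", "cool"), ("rainy", "cool"), ("rainy", "hot")]
y = ["no", "yes", "yes", "no"]
cc, vc, dom = train(X, y)
print(predict(("sunny", "cool"), cc, vc, dom))  # -> "yes"
```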
NUMERIC ATTRIBUTES
Two types of solution:
<1> Discretization: split each numeric attribute into discrete ranges and treat the ranges as nominal values.
<2> Assume a distribution: model each numeric attribute with a parametric density whose parameters are estimated from the data (see the formula after this list).
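For option <2>, the usual assumption is a Gaussian per attribute and class, with mean $\mu_c$ and standard deviation $\sigma_c$ estimated from the training examples of class c:

$$f(x \mid c) = \frac{1}{\sqrt{2\pi}\,\sigma_c} \exp\!\left(-\frac{(x - \mu_c)^2}{2\sigma_c^2}\right)$$

This density then stands in for P(x | c) in the naive Bayes product.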
BAYESIAN BELIEF NETWORKS
Build a model that does two things (a small sketch follows the list):
<1> Specifies which conditional independence assumptions
are valid.
<2> Provides sets of conditional probabilities to specify the
joint probability distributions wherever dependencies
exist.
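A minimal sketch of such a network (my own illustration using the classic rain/sprinkler/wet-grass example, which is not in the notes): each node carries a conditional probability table given its parents, and the joint probability is the product of those tables.

```python
# Nodes: Rain -> Sprinkler, and (Rain, Sprinkler) -> GrassWet.
# Each table is one of the "sets of conditional probabilities" above.

p_rain = {True: 0.2, False: 0.8}                   # P(Rain)
p_sprinkler = {True: {True: 0.01, False: 0.99},    # P(Sprinkler | Rain=True)
               False: {True: 0.40, False: 0.60}}   # P(Sprinkler | Rain=False)
p_wet = {(True, True): 0.99, (True, False): 0.90,  # P(GrassWet=True | Sprinkler, Rain)
         (False, True): 0.80, (False, False): 0.00}

def joint(rain, sprinkler, wet):
    """Chain rule using only the dependencies the network declares:
    P(R, S, W) = P(R) * P(S | R) * P(W | S, R)."""
    p = p_rain[rain] * p_sprinkler[rain][sprinkler]
    p_wet_true = p_wet[(sprinkler, rain)]
    return p * (p_wet_true if wet else 1.0 - p_wet_true)

# P(Rain=T, Sprinkler=F, GrassWet=T) = 0.2 * 0.99 * 0.80
print(joint(True, False, True))  # -> 0.1584
```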