Naive Bayes

scikits.learn.naive_bayes

Naive Bayes algorithms are a set of supervised learning methods based on applying Bayes' theorem with the "naive" assumption of independence between every pair of features. Given a class variable c and a dependent set of feature variables f1 through fn, Bayes' theorem states the following relationship:


$$p(c \mid f_1,\dots,f_n) \propto p(c)\, p(f_1,\dots,f_n \mid c)$$

Using the naive independence assumption, this relationship simplifies to

$$p(c \mid f_1,\dots,f_n) \propto p(c) \prod_{i=1}^{n} p(f_i \mid c),$$

and the class is predicted with the decision rule

$$\hat{c} = \arg\max_c p(c) \prod_{i=1}^{n} p(f_i \mid c),$$

where we used the Maximum a Posteriori estimator.
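A minimal sketch of this decision rule, with made-up class priors and per-feature likelihoods (not estimated from any data) and evaluated in log space to avoid numerical underflow:

```python
import numpy as np

# Hypothetical numbers for two classes and three features:
# class priors p(c) and per-feature likelihoods p(f_i | c).
priors = np.array([0.6, 0.4])                      # p(c) for c = 0, 1
likelihoods = np.array([[0.20, 0.70, 0.10],        # p(f_i | c=0), i = 1..3
                        [0.50, 0.10, 0.40]])       # p(f_i | c=1), i = 1..3

# MAP rule: argmax_c [ log p(c) + sum_i log p(f_i | c) ]
log_posterior = np.log(priors) + np.log(likelihoods).sum(axis=1)
c_hat = np.argmax(log_posterior)
print(c_hat)
```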

The different naive Bayes classifiers differ mainly by the assumptions they make about the distribution of p(fi ∣ c); the variants implemented here are described below.

In spite of their naive design and apparently over-simplified assumptions, naive Bayes classifiers have worked quite well in many real-world situations, famously in document classification and spam filtering. They require only a small amount of training data to estimate the necessary parameters.

The decoupling of the class conditional feature distributions means that each distribution can be independently estimated as a one dimensional distribution. This in turn helps to alleviate problems stemming from the curse of dimensionality.
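A small sketch of this decoupling, assuming a hypothetical data matrix X and label vector y and the Gaussian conditionals of the next section: each feature's class-conditional parameters are computed from its own column alone.

```python
import numpy as np

# Hypothetical data matrix X (4 samples, 2 features) and labels y.
X = np.array([[1.0, 20.0],
              [1.2, 22.0],
              [3.5, 40.0],
              [3.7, 42.0]])
y = np.array([0, 0, 1, 1])

# Each p(fi | c) is fitted on its own column, independently of the others;
# here a per-class, per-feature mean and variance (the Gaussian case below).
params = {c: (X[y == c].mean(axis=0), X[y == c].var(axis=0)) for c in np.unique(y)}
print(params)
```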

Gaussian Naive Bayes

GaussianNB implements the Gaussian Naive Bayes algorithm for classification. The likelihood of the features is assumed to be Gaussian:

$$p(f_i \mid c) = \frac{1}{\sqrt{2\pi\sigma^2_c}} \exp\left(-\frac{(f_i - \mu_c)^2}{2\sigma^2_c}\right)$$

The parameters of the distribution, σc and μc, are estimated using maximum likelihood.
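A minimal usage sketch with toy data (the import path follows the scikits.learn namespace used in this document; later releases ship the same class as sklearn.naive_bayes.GaussianNB):

```python
import numpy as np
from scikits.learn.naive_bayes import GaussianNB  # sklearn.naive_bayes in later releases

X = np.array([[-2.0, -1.0], [-1.5, -1.5], [1.0, 1.5], [2.0, 1.0]])  # toy features
y = np.array([0, 0, 1, 1])                                          # toy labels

clf = GaussianNB()
clf.fit(X, y)                         # estimates mu_c and sigma_c per class and feature
print(clf.predict([[-1.0, -1.0]]))    # predicted class for a new sample
```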

Examples:

  • example_naive_bayes.py

Multinomial Naive Bayes

MultinomialNB implements the Multinomial Naive Bayes algorithm for classification. Multinomial Naive Bayes models the distribution of words in a document as a multinomial. The distribution is parametrized by the vector $\overline{\theta_c} = (\theta_{c1},\ldots,\theta_{cn})$, where c is the class of the document, n is the size of the vocabulary, and θci is the probability of word i appearing in a document of class c. The likelihood of document d is,

$$p(d \mid \overline{\theta_c}) = \frac{ (\sum_i f_i)! }{\prod_i f_i !} \prod_i(\theta_{ci})^{f_i}$$

where fi is the frequency count of word i in the document. It can be shown that the maximum a posteriori (MAP) class is,

$$\hat{c} = \arg\max_c \left[ \log p(c) + \sum_i f_i \log \theta_{ci} \right]$$

The vector of parameters $\overline{\theta_c}$ is estimated by a smoothed version of maximum likelihood,

$$\hat{\theta}_{ci} = \frac{ N_{ci} + \alpha_i }{N_c + \alpha }$$

where Nci is the number of times word i appears in the training documents of class c and Nc is the total count of words in the training documents of class c. The smoothing priors αi and their sum α account for words not seen in the learning samples and prevent zero probabilities.
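A short sketch of this estimate and the MAP scoring rule above, using a hypothetical word-count matrix and αi = 1 for every word (Laplace smoothing):

```python
import numpy as np

# Hypothetical word-count matrix (documents x vocabulary) and class labels.
X = np.array([[3, 0, 1],
              [2, 0, 0],
              [0, 4, 1],
              [0, 3, 2]])
y = np.array([0, 0, 1, 1])
alpha = 1.0  # alpha_i = 1 for every word (Laplace smoothing)

classes = np.unique(y)
N_ci = np.array([X[y == c].sum(axis=0) for c in classes])   # word counts per class
# Smoothed maximum likelihood estimate: (N_ci + alpha_i) / (N_c + alpha)
theta = (N_ci + alpha) / (N_ci.sum(axis=1, keepdims=True) + alpha * X.shape[1])

# MAP scoring of a new document with word frequency vector f.
priors = np.array([(y == c).mean() for c in classes])
f = np.array([1, 0, 2])
scores = np.log(priors) + f @ np.log(theta).T
print(classes[np.argmax(scores)])
```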