Computing P(error) for Naive Bayes Classifier

I have implemented a naive Bayes classifier for a multivariate Gaussian distribution. I have three classes: C1, C2 and C3.

I know P(error) is defined as the number of items that were classified incorrectly divided by the total number of items, for instance the number of times items that belonged to class C1 were assigned the label C2 or C3.

A simple way to compute P(error) numerically would be to build the confusion matrix first and then sum the number of items assigned to the wrong class.
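For example, something along these lines in Python (the y_true / y_pred arrays below are made-up placeholders, not my actual data):

    import numpy as np

    # Made-up labels and predictions; 0, 1, 2 stand in for C1, C2, C3.
    y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 0])
    y_pred = np.array([0, 1, 0, 1, 1, 2, 2, 2, 0, 0])

    n_classes = 3
    # Confusion matrix: rows = true class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1

    # Everything off the diagonal was assigned to the wrong class.
    n_errors = cm.sum() - np.trace(cm)
    print(cm)
    print("P(error) =", n_errors / cm.sum())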

The total probability of error would then be: P(error) = P(error)_C1 + P(error)_C2 + P(error)_C3

I am confused about which number to divide by: should I divide by the total number of items in the dataset, or divide each class's error count by the total number of items in that particular class (see the sketch after the two options below)?

In other words, should it be:

P(error)_C1 = (number of C1 items classified as C2 or C3) / (total items in the dataset)

or

P(error)_C1 = (number of C1 items classified as C2 or C3) / (total items in C1)
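To make the two options concrete, here is a small sketch with a made-up confusion matrix (not my real numbers). Dividing by the class count gives the conditional error rate P(error | C1), while dividing by the dataset size gives the joint quantity P(misclassified and C1); the second version sums directly to the overall P(error), and the first has to be weighted by the class priors to give the same total:

    import numpy as np

    # Made-up confusion matrix: rows = true class (C1, C2, C3), columns = predicted class.
    cm = np.array([[40,  5,  5],
                   [ 4, 30,  6],
                   [ 2,  3, 25]])

    n_total = cm.sum()                             # items in the whole dataset
    n_per_class = cm.sum(axis=1)                   # items in each true class
    errors_per_class = n_per_class - np.diag(cm)   # items of each class that were misclassified

    # Option 1: divide by the size of the whole dataset (sums to the overall P(error)).
    err_over_total = errors_per_class / n_total

    # Option 2: divide by the number of items in that class (conditional error P(error | Ck)).
    err_given_class = errors_per_class / n_per_class

    priors = n_per_class / n_total
    print("Option 1:", err_over_total, "sum =", err_over_total.sum())
    print("Option 2:", err_given_class)
    print("Option 2 weighted by priors:", (err_given_class * priors).sum())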
