Is accuracy = 1 - test error rate?

Apologies if this is a very obvious question, but I have been reading various posts and can't seem to find a good confirmation. In the case of classification, is a classifier's accuracy = 1 - test error rate? I get that accuracy is $\frac{TP+TN}{P+N}$, but my question is how exactly accuracy and the test error rate are related.

Answer

In principle, accuracy is the fraction of correctly predicted cases.

This is the same as 1 minus the fraction of misclassified cases, i.e. $\text{accuracy} = 1 - \text{error rate}$.
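
For concreteness, here is a minimal sketch of that relation using made-up confusion-matrix counts (the numbers are purely illustrative):

```python
# Hypothetical confusion-matrix counts (illustrative only)
TP, TN, FP, FN = 40, 45, 5, 10

P, N = TP + FN, TN + FP                  # actual positives and negatives

accuracy = (TP + TN) / (P + N)           # fraction of correctly predicted cases
error_rate = (FP + FN) / (P + N)         # fraction of misclassified cases

print(accuracy, error_rate)              # 0.85 0.15
assert abs(accuracy - (1 - error_rate)) < 1e-12   # accuracy = 1 - error rate
```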

Both terms may sometimes be used in a vaguer way, however, and can cover different things such as class-balanced error/accuracy, or even F-score or AUROC; it is always best to look for (or include) a proper clarification in the paper or report.

Also, note that the *test* error rate implies error measured on a test set, so it is most likely $1 -$ test-set accuracy; bear in mind that other accuracies (e.g. on the training set or from cross-validation) may be floating around in the same report.
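
If it helps, here is a sketch of computing test-set accuracy and the corresponding test error rate with scikit-learn; the synthetic dataset and the logistic-regression classifier are just illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data, split into train and held-out test sets
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

test_accuracy = accuracy_score(y_test, clf.predict(X_test))
test_error_rate = 1 - test_accuracy      # error rate on the held-out test set

print(f"test accuracy:   {test_accuracy:.3f}")
print(f"test error rate: {test_error_rate:.3f}")
```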

Attribution
Source: Link, Question Author: micro_gnomics, Answer Author: Community
