Publisher's Synopsis
Excerpt from Variance Approximations for Assessments of Classification Accuracy
The kappa statistic, which is computed from a square contingency table, is a scalar measure of agreement between two classifiers. If one classifier is considered a reference that is without error, then the kappa statistic is a measure of classification accuracy. Kappa equals 1 for perfect agreement, and zero for agreement expected by chance alone. Figure 1 provides interpretations of the magnitude of the kappa statistic that have appeared in the literature. In addition to kappa, Fleiss (1981) suggests that conditional probabilities are useful when assessing the agreement between two different classifiers, and Bishop et al. (1975) suggest statistics that quantify the disagreement between classifiers.

About the Publisher

Forgotten Books publishes hundreds of thousands of rare and classic books. Find more at www.forgottenbooks.com

This book is a reproduction of an important historical work. Forgotten Books uses state-of-the-art technology to digitally reconstruct the work, preserving the original format whilst repairing imperfections present in the aged copy. In rare cases, an imperfection in the original, such as a blemish or missing page, may be replicated in our edition. We do, however, repair the vast majority of imperfections successfully; any imperfections that remain are intentionally left to preserve the state of such historical works.
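The excerpt describes kappa as the excess of observed agreement over the agreement expected by chance. As a minimal illustrative sketch (not taken from the book), the statistic can be computed from a square contingency table whose rows index one classifier's labels and whose columns index the other's; the function name cohens_kappa and the example table below are assumptions for illustration only.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table.

    Rows index classifier A's labels, columns index classifier B's labels;
    cell (i, j) counts units labelled i by A and j by B.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()                                   # total number of classified units
    p_observed = np.trace(table) / n                  # observed agreement (diagonal proportion)
    row_marginals = table.sum(axis=1) / n
    col_marginals = table.sum(axis=0) / n
    p_chance = np.dot(row_marginals, col_marginals)   # agreement expected by chance
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical example: two classifiers labelling 100 units into three classes.
table = [[30,  2,  3],
         [ 4, 25,  1],
         [ 1,  4, 30]]
print(round(cohens_kappa(table), 3))  # ≈ 0.775
```

In this hypothetical table the observed agreement is 0.85 and the chance-expected agreement is about 0.33, giving kappa near 0.775, a value most published interpretation scales would describe as substantial agreement.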