Journal of Official Statistics, Vol. 28, No. 3, 2012, pp. 395–412


Mutual Information as a Measure of Intercoder Agreement

In a situation where two raters classify a series of observations, it is useful to have an index of agreement that takes into account both the simple rate of agreement and the complexity of the rating task. Information theory provides a measure of the quantity of information in a list of classifications, which can be used to produce an appropriate index of agreement. A normalized weighted mutual information index improves on the traditional intercoder agreement index in several ways: no model of error generation needs to be developed before use; comparison across experiments is easier; and ratings are based on the distribution of agreement across categories, not just an overall agreement level.

Keywords: Intercoder agreement, Cohen's kappa
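To illustrate the underlying idea, the following is a minimal sketch of a plain normalized mutual information index between two coders' label sequences, using only the standard library. It normalizes by the mean of the two marginal entropies; this is a generic construction for illustration, not the paper's specific weighted index, and the function name is hypothetical.

```python
from collections import Counter
from math import log2

def normalized_mutual_information(ratings_a, ratings_b):
    """Normalized mutual information between two coders' label lists.

    MI is computed from the joint distribution of (a, b) label pairs
    and normalized by the mean of the two marginal entropies, so the
    index lies in [0, 1]: 1 for a deterministic correspondence between
    the coders' labels, 0 for statistically independent ratings.
    """
    n = len(ratings_a)
    joint = Counter(zip(ratings_a, ratings_b))   # joint label counts
    pa = Counter(ratings_a)                      # coder A marginal counts
    pb = Counter(ratings_b)                      # coder B marginal counts

    # Mutual information in bits: sum p(a,b) log2( p(a,b) / (p(a)p(b)) )
    mi = sum(
        (c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in joint.items()
    )
    # Marginal entropies of each coder's label distribution
    ha = -sum((c / n) * log2(c / n) for c in pa.values())
    hb = -sum((c / n) * log2(c / n) for c in pb.values())
    denom = (ha + hb) / 2
    return mi / denom if denom > 0 else 0.0
```

With identical ratings the index is 1.0; with independent ratings it is 0.0. Unlike a raw percent-agreement figure, the value reflects how informative the agreement is given the distribution of categories.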

Copyright Statistics Sweden. Open Access.
ISSN 0282-423X