
Corrections

On page 29, in the Fisher's z transformation near the bottom of the page, the letter "l" (el) in the numerator and the denominator should be a "1" (one). On page 44, the last sentence in the second full paragraph should end as "is increased from 1.9845 to only 1.9908, a trivial increase."

Clarifications

See the webinar on nonindependence.

Elaborations

According to Landis and Koch (Landis, J. R., & Koch, G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174), the following standards are used to interpret kappa: below .00, poor; .00 to .20, slight; .21 to .40, fair; .41 to .60, moderate; .61 to .80, substantial; and .81 to 1.00, almost perfect.
SPSS syntax files to compute various measures of nonindependence (inter1.sps: r, ICC, and pairwise r; inter2.sps and inter3.sps: kappa) and example data for kappa from a paper by Alferes and Kenny (2009) can be downloaded.

Methods for correcting for bias in the ICC are discussed in Donoghue, J. R., & Collins, L. M. (1990). A note on the unbiased estimation of the intraclass correlation. Psychometrika, 55, 159-164. A better method for computing the confidence interval for the ICC is given in Cappelleri, J. C., & Ting, N. (2003). A modified large-sample approach to approximate interval estimation for a particular intraclass correlation coefficient. Statistics in Medicine, 22, 1861-1877.

Shrout and Fleiss (Shrout, P., & Fleiss, J. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86, 420-428) refer to six different types of intraclass correlations. The one typically used in dyadic research is ICC(1,1). The first "1" means each dyad has different members. The second "1" means that the interest is in a single score. If we wanted to average the two scores to compute a couple score, its reliability would be ICC/[ICC + (1 − ICC)/2] and would be denoted as ICC(1,2).

SPSS data files (sav) for the data in Table 2.1 and Table 2.3 can be downloaded. An Excel file that can be used to test r, r_I, and r_P for statistical significance as well as compute the 95% confidence interval: ci_tests.xls. The SPSS syntax and data file in Table 2.4 to compute kappa and its standard error: data and syntax.

There are several websites that can be used to compute kappa and its standard error. One such site is Lowry at Vassar, and it also provides the 95% confidence interval. Note that it gives the standard error as .0539 whereas we obtain .0537. We are not sure why there is a difference. Perhaps Lowry uses N − 1 rather than N in the formula?
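The ICC(1,2) expression for the reliability of an averaged couple score, ICC/[ICC + (1 − ICC)/2], is algebraically the two-score Spearman-Brown prophecy formula, 2·ICC/(1 + ICC). A minimal sketch verifying the equivalence (function names are ours):

```python
def icc_1_2(icc):
    """Reliability of the mean of the two members' scores, ICC(1,2),
    computed from the single-score ICC in the form given in the text."""
    return icc / (icc + (1 - icc) / 2)

def spearman_brown(icc, k=2):
    """Spearman-Brown prophecy formula for the mean of k scores."""
    return k * icc / (1 + (k - 1) * icc)
```

For example, a single-score ICC of .50 yields a couple-score reliability of about .67 by either formula.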
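To see how much substituting N − 1 for N could matter, here is a sketch that computes kappa from a square agreement table along with one common large-sample approximation to its standard error, sqrt(p_o(1 − p_o)) / [(1 − p_e)·sqrt(N)]. This is an illustration only: the book's SPSS syntax and the Vassar calculator may each use a different (e.g., exact or null-hypothesis) standard-error formula, and the table values below are made up:

```python
import math

def kappa_and_se(table, use_n_minus_1=False):
    """Cohen's kappa for a square agreement table, with an approximate
    large-sample standard error. Setting use_n_minus_1=True substitutes
    N - 1 for N, one possible source of a small discrepancy between
    calculators."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n        # observed agreement
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row[i] * col[i] for i in range(k))        # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    denom_n = n - 1 if use_n_minus_1 else n
    se = math.sqrt(p_o * (1 - p_o) / denom_n) / (1 - p_e)
    return kappa, se

# Hypothetical 2 x 2 table of rater agreement (not the Table 2.4 data):
kappa, se_n = kappa_and_se([[20, 5], [10, 15]])
_, se_n1 = kappa_and_se([[20, 5], [10, 15]], use_n_minus_1=True)
```

Using N − 1 always gives a slightly larger standard error, the direction of the .0537 versus .0539 discrepancy noted above.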