Tuesday, February 18, 2014

Reliability!

"An additional researcher coded a randomly chosen sub-set of the discussion posts to determine the level of inter-rater reliability for the coding process. After coding 50 discussion posts the percentage of agreement was 80%. This can be considered an excellent level of agreement and indicates a high degree of consistency for the coding process" (Fleiss, Levin, & Paik, 2003).
This pilot study used inter-rater reliability to examine the extent to which the two researchers agreed in scoring the data. Both researchers scored the answers from the pilot study, and the kappa statistic for inter-rater reliability was computed using the Statistical Package for the Social Sciences (SPSS) 15.0 for Windows. The inter-rater reliability for the scoring rubric was Kappa = 0.75 (p < 0.01), which is a substantial level of agreement according to Landis and Koch's (1977) interpretation of Kappa.
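For reference, Cohen's kappa (the two-rater kappa SPSS reports) corrects raw agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the chance agreement implied by each rater's marginal distribution of codes. A minimal Python sketch of that calculation, with hypothetical scores rather than the pilot study's data:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal proportions per category.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 0/1 rubric scores from two raters (not the study's data).
scores_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
scores_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(f"kappa = {cohen_kappa(scores_a, scores_b):.2f}")
```

Under the Landis and Koch (1977) benchmarks, values of 0.61 to 0.80 are conventionally read as substantial agreement, which is why a Kappa of 0.75 is described that way above.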

