Affiliations 

  • 1 Universiti Sains Malaysia
Education in Medicine Journal, 2012;4(2):100-104.

Abstract

Introduction: Ratings are known to suffer from generosity error, limited discrimination, distorted interpretation, and a frequent failure to document serious deficits. A potential source of these problems is rater judgement, which compromises raters' ability to maintain rating standards. The authors propose a simple grading system to improve this situation, including providing feedback to raters.

Method: The authors developed a grading system, named the Discrepancy-Agreement Grade (DAG), to provide feedback on rater judgements. A dependent t-test and an intraclass correlation test were applied to determine the discrepancy and agreement levels of rater pairs, and rater judgements were then classified into grade A, B, C, or D. The grading system was tested in an examination and a student selection interview to assess the rating judgements of examiners and interviewers, with the aim of evaluating its practicability for providing feedback on those judgements.

Results: In the examination, five short essays were rated by five pairs of senior lecturers. Of the 5 pairs, 2 (40%) obtained grade A and 3 (60%) obtained grade B. In the student selection interview, 48 pairs of interviewers interviewed ten applicants. Of the 48 pairs, 20 (41.7%) obtained grade A, 1 (2.1%) grade B, 23 (47.9%) grade C, and 4 (8.3%) grade D.

Conclusion: The grading system revealed variability in rater judgements of medical students' performance in the examination and of applicants' performance in the interview session, and it provided feedback on the examiners' and interviewers' judgements of candidate performance. The exercise demonstrated the practicability of the grading system for providing feedback on rater judgements.
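The abstract names the two statistics (a dependent t-test for discrepancy and an intraclass correlation for agreement) but does not specify the cut-offs or how their outcomes map onto grades A through D. A minimal sketch of one plausible DAG-style classification for a pair of raters is shown below; the significance level (0.05), the ICC threshold (0.7), the use of a one-way random-effects ICC, and the grade mapping (A = agreement without discrepancy, D = discrepancy without agreement) are all illustrative assumptions, not the authors' published procedure.

```python
import numpy as np
from scipy.stats import ttest_rel

def icc_1_1(a, b):
    """One-way random-effects ICC(1,1) for two raters scoring the same candidates."""
    scores = np.column_stack([np.asarray(a, float), np.asarray(b, float)])
    n, k = scores.shape                       # n candidates, k = 2 raters
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    # Between-candidates and within-candidates mean squares (one-way ANOVA)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def dag_grade(a, b, alpha=0.05, icc_cut=0.7):
    """Hypothetical DAG-style grade for one rater pair.

    Discrepancy: dependent (paired) t-test on the two raters' scores.
    Agreement:   ICC(1,1) against an assumed threshold.
    """
    _, p = ttest_rel(a, b)
    no_discrepancy = p >= alpha       # means not significantly different
    agree = icc_1_1(a, b) >= icc_cut  # scores correlate strongly across candidates
    if agree and no_discrepancy:
        return "A"   # best: consistent ranking, similar severity
    if agree:
        return "B"   # consistent ranking, but systematic severity gap
    if no_discrepancy:
        return "C"   # similar severity on average, inconsistent ranking
    return "D"       # worst: inconsistent ranking and severity gap

# Near-identical scores -> grade A; a systematic ~2-point gap with weak
# agreement -> grade D (under the assumed thresholds).
print(dag_grade([7, 8, 6, 9, 7], [7, 8, 6, 9, 8]))      # -> A
print(dag_grade([5, 6, 7, 8, 9], [7, 8, 9, 10, 12]))    # -> D
```

The two-test design separates the two ways a rater pair can fail: the paired t-test catches a systematic severity difference even when rankings agree, while the ICC catches inconsistent rankings even when average severity matches.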