Context: Question vetting is important for ensuring the validity, reliability, and other quality indicators of assessment tools, including the multiple-choice question (MCQ). Faculty members invest substantial time and effort in the MCQ vetting process. However, there is a shortage of scientific evidence demonstrating its effectiveness and the level at which it should be conducted. This study aimed to provide scientific evidence on the effects of the question vetting process on students’ examination performance by examining their scores and pass-fail outcomes.
Method: A parallel randomized controlled trial was conducted on third-year medical students in a medical school. They were randomly assigned to two equal groups (i.e. control and experimental). Two mock examinations were conducted (i.e. time I and time II). At time I, non-vetted MCQs were administered to both groups as a baseline measurement. At time II, vetted MCQs were administered to the experimental group, while the same non-vetted MCQs were administered to the control group.
Results: Of 203 students, 129 (63.5%) participated in both mock examinations: 65 in the control group and 64 in the experimental group. Statistical analysis showed no significant differences (p > 0.05) in mean examination scores or pass-fail outcomes, either between or within the control and experimental groups.
Conclusion: This study indicated that the MCQ vetting process did not influence examination performance. Despite these findings, the MCQ vetting process should still be considered an important activity for ensuring that test items are developed to the highest quality and standards. However, it is suggested that this activity could be conducted at the departmental level rather than at the central level.