METHODS: We analysed 350 items used in 7 professional examinations and determined their distractor efficiency and the number of functional distractors per item. The items were sorted into five groups (excellent, good, fair, remediable, and discarded) based on their discrimination index. We then studied how distractor efficiency and the number of functional distractors per item correlated with these five groups.
RESULTS: The correlation of distractor efficiency with the psychometric indices was significant but far from perfect. The excellent group had the highest distractor efficiency in three tests, the good group in one test, the remediable group equalled the excellent group in one test, and the discarded group had the highest in two tests.
CONCLUSIONS: Distractor efficiency did not correlate with the discrimination index in a consistent pattern. A distractor efficiency of 50% or higher, rather than 100%, was found to be optimal.
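The item-analysis quantities above can be sketched in code. This is a minimal illustration, not the study's own procedure: the four option labels, the 5% threshold for a "functional" distractor, and the discrimination-index cut-offs for the five categories are common conventions assumed here.

```python
# Minimal item-analysis sketch (not the study's code). Assumptions:
# four options A-D, a distractor counts as "functional" if chosen by
# >= 5% of examinees, and discrimination-index (DI) cut-offs follow
# one common convention; the study's own bands may differ.

def distractor_efficiency(responses, key, options="ABCD", threshold=0.05):
    """DE (%) = functional distractors / total distractors * 100."""
    n = len(responses)
    distractors = [o for o in options if o != key]
    functional = [d for d in distractors if responses.count(d) / n >= threshold]
    return 100 * len(functional) / len(distractors), len(functional)

def discrimination_index(upper_correct, lower_correct, group_size):
    """DI = (correct in upper group - correct in lower group) / group size."""
    return (upper_correct - lower_correct) / group_size

def classify(di):
    """Map DI to the five item categories used above (cut-offs assumed)."""
    if di >= 0.40:
        return "excellent"
    if di >= 0.30:
        return "good"
    if di >= 0.20:
        return "fair"
    if di >= 0.00:
        return "remediable"
    return "discarded"

# 100 examinees, key "A": all three distractors attract >= 5% of responses.
responses = list("A" * 60 + "B" * 20 + "C" * 15 + "D" * 5)
de, nfd = distractor_efficiency(responses, key="A")
print(de, nfd)                                     # 100.0 3
print(classify(discrimination_index(25, 10, 30)))  # excellent (DI = 0.5)
```

A distractor chosen by almost no one contributes nothing to the item, which is why DE is reported as the share of distractors that actually attract responses.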
EDUCATIONAL ACTIVITY AND SETTING: A quasi-experimental pre- and posttest control group design was employed. The experimental group experienced the flipped classroom for selected topics, while the control group learned in a traditional classroom. Analysis of covariance was used to compare performance on the final exam, with the grade point of a prerequisite course as the covariate. Students' perceptions of their experience in the flipped classroom were gauged through a web-based survey.
FINDINGS: Student performance on the final exam was significantly higher in the flipped classroom group. The lowest-scoring students benefitted the most in terms of academic performance. More than two-thirds of students responded positively to the use of the flipped classroom and felt more confident while participating in classes and tests.
SUMMARY: The flipped classroom is academically beneficial in a challenging course with a historically low pass rate, and it was also effective in stimulating learning interest. The current study identified the role of educators, the feasibility of the approach, and the acceptance of students as important factors for the flipped classroom to succeed.
METHODS: We compared two methods of OSCE feedback delivered to fourth-year medical students in Malaysia: (i) face-to-face (FTF) immediate feedback (semester one); and (ii) individualised enhanced written (EW) feedback containing detailed scores in each domain, examiners' free-text comments, and the marking rubric (semester two). Both methods were evaluated by students and staff examiners, and students' responses were compared against their OSCE performance.
RESULTS: Of the 116 students who sat both formative OSCEs, 82.8% (n=96) and 86.2% (n=100) responded to the first and second surveys, respectively. Most students were comfortable receiving feedback (91.3% in FTF, 96% in EW), with EW feedback associated with higher comfort levels (p=0.022). Distress affected a small number, with no difference between the two methods (13.5% in FTF, 10% in EW, p=0.316). Most students perceived that both types of feedback improved their performance (89.6% in FTF, 95% in EW); this perception was significantly stronger for EW feedback (p=0.008). Students who preferred EW feedback had lower OSCE scores than those preferring FTF feedback (mean scores ± SD: 43.8 ± 5.3 in EW, 47.2 ± 6.5 in FTF; p=0.049). Students ranked the marking rubric as the most valuable aspect of the EW feedback. Tutors felt both methods of feedback were equally beneficial. Few examiners felt they needed training (21.4% in FTF, 15% in EW), but students perceived the need for tutor training differently (53.1% in FTF, 46% in EW).
CONCLUSION: Whilst both methods of OSCE feedback were highly valued, students preferred to receive EW feedback and felt it was more beneficial. The learning cultures of Malaysian students may have influenced this view. Information provided in EW feedback should be tailored accordingly to provide meaningful feedback in OSCE exams.
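Proportion comparisons of the kind reported above (e.g. 91.3% vs 96% comfort, p=0.022) are conventionally tested with a two-proportion z-test. A minimal stdlib sketch with invented counts; the study's own test statistic may well differ:

```python
# Hedged sketch of a pooled two-proportion z-test, a standard way to
# compare rates such as "91.3% in FTF vs 96% in EW". Counts are
# invented for illustration; they are not the study's data.
from math import sqrt, erf

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided pooled z-test for p1 = x1/n1 vs p2 = x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                  # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))    # standard normal CDF
    return z, 2 * (1 - phi)                    # two-sided p-value

z, p = two_prop_ztest(90, 100, 70, 100)        # invented counts
print(round(z, 2), round(p, 4))
```

With small cell counts, an exact test (e.g. Fisher's) is usually preferred; the z-test shown here relies on a normal approximation.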
METHODS: A cross-sectional study was conducted using validated modified communication tools: the Patient Communication Assessment Instruments (PCAI), Student Communication Assessment Instruments (SCAI), and Clinical Communication Assessment Instruments (CCAI), which included four communication domains. One hundred and seventy-six undergraduate clinical-year students were recruited; each was assessed by a clinical instructor and a randomly selected patient in two settings: the Dental Health Education (DHE) and Comprehensive Care (CC) clinics.
RESULTS: Comparing the three perspectives, PCAI yielded the highest scores across all domains, followed by SCAI and CCAI (p
Methods: Using curriculum mapping, we assessed the links between assessments and the expected learning outcomes of the dental physiology curriculum for three batches of students (2012-14) at Melaka-Manipal Medical College (MMMC), Manipal. The questions asked under each assessment method were mapped to the respective expected learning outcomes, and students' scores in the different physiology assessments were gathered. Students' (n=220) and teachers' (n=15) perspectives were collected through focus group discussion sessions and questionnaire surveys.
Results: More than 75% of students were successful (≥50% scores) in the majority of the assessments. There were moderate (r=0.4-0.6) to strong (r=0.7-0.9) positive correlations between the majority of the assessments. However, students' viva voce scores had only a weak positive correlation with their practical examination scores (r=0.230). Scores in the problem-based learning assessments had either weak (r=0.1-0.3) or no correlation with other assessment scores.
Conclusions: Through curriculum mapping, we were able to establish links between assessments and expected learning outcomes. We observed that, in the assessment system followed at MMMC, not all expected learning outcomes were given equal weightage in the examinations. Moreover, there was no direct assessment of self-directed learning skills. Our study also showed that assessment supported students in achieving the expected learning outcomes, as evidenced by the qualitative and quantitative data.
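The correlation strengths cited above follow the usual Pearson-r bands (weak r=0.1-0.3, moderate 0.4-0.6, strong 0.7-0.9). A minimal sketch; the viva and practical score lists are invented for illustration, not the study's data:

```python
# Sample Pearson correlation with the strength bands used above.
# Score lists are made up; the labels follow common convention.
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

def strength(r):
    """Label |r| with the bands quoted in the abstract."""
    r = abs(r)
    if r >= 0.7:
        return "strong"
    if r >= 0.4:
        return "moderate"
    if r >= 0.1:
        return "weak"
    return "negligible"

viva = [55, 60, 48, 70, 65, 52]          # invented scores
practical = [62, 58, 50, 66, 71, 49]
r = pearson_r(viva, practical)
print(round(r, 2), strength(r))
```

Note that r measures only linear association; a weak r between viva and practical scores, as reported above, does not rule out other relationships.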
MATERIALS AND METHODS: In this nonrandomized interrupted time series study, a flipped class (FC) was conducted with a group of 112 bachelor of pharmacy semester V students. The topic selected was popular herbal remedies from the complementary medicine module. The flipped class combined audio and video presentations with a quiz of ten one-best-answer multiple-choice questions covering the learning objectives. Audience responses were captured through web-based interaction with Poll Everywhere. Feedback was obtained from participants at the end of the FC activity, and a debriefing was conducted.
RESULTS: A total of 112 randomly selected complete responses were included in the final analysis, from 47 (42%) male and 65 (58%) female respondents. The overall Cronbach's alpha of the feedback questionnaire was 0.912. The central tendencies and dispersions of the questionnaire items indicated the effectiveness of the FC. Low or middle achievers in the quiz session (pretest) during the FC activity were at three times the risk (95% confidence interval [CI] = 1.1-8.9) of providing neutral or negative feedback compared with high achievers (P = 0.040). Those who gave neutral or negative feedback on the FC activity were at 3.9 times the risk (95% CI = 1.3-11.8) of becoming low or middle achievers in the end-of-semester examination (P = 0.013). The multivariate analysis of "Agree" versus "Disagree" and "Agree" versus "Strongly Agree" responses was statistically significant.
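The internal consistency reported above (Cronbach's alpha = 0.912) follows the standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of totals). A minimal sketch; the 3-item, 5-respondent Likert matrix below is invented to illustrate the computation, not the study's data:

```python
# Cronbach's alpha from a small invented Likert response matrix.
# Population variance is used consistently for items and totals.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

likert = [[4, 5, 3, 5, 2],   # item 1, five respondents (invented)
          [4, 4, 3, 5, 2],   # item 2
          [5, 5, 3, 4, 2]]   # item 3
print(round(cronbach_alpha(likert), 3))   # 0.945
```

Values above roughly 0.9, like the 0.912 reported here, are usually read as excellent internal consistency for a feedback questionnaire.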
CONCLUSION: This study provides insight into how pharmacy students learn and develop their cognitive functions. The results revealed that the FC activity with Poll Everywhere is an effective teaching-learning method.
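Risk estimates of the form "3.9 times (95% CI = 1.3-11.8)" reported in the study above are conventionally odds ratios with a Woolf (log-scale) confidence interval. A hedged sketch; the 2x2 counts are invented for illustration and are not the study's data:

```python
# Odds ratio with a Woolf (log) 95% CI from an invented 2x2 table.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a/b = outcome +/- in group 1, c/d = outcome +/- in group 2."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

or_, lo, hi = odds_ratio_ci(20, 15, 10, 30)    # invented counts
print(round(or_, 1), round(lo, 1), round(hi, 1))
```

A CI that excludes 1.0, as in both intervals quoted above, corresponds to a statistically significant association at the 5% level.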
METHOD: A quasi-experimental pre- and posttest design with a control group was used to study the effectiveness of an educational intervention on the clinical judgment skills of 80 RNs from two district hospitals. The change in clinical judgment skills during a 6-week period was evaluated using a complex case-based scenario after the completion of the educational intervention.
RESULTS: The mean clinical judgment skills scores of the experimental group improved significantly, from 24.15 ± 6.92 to 47.38 ± 7.20 (p < .001). However, only a slight change was seen in the mean scores of the control group (23.80 ± 5.77 to 26.50 ± 6.53).
CONCLUSION: The educational intervention was effective in improving clinical judgment skills. Continuing nursing education using traditional and case-based methods is recommended to improve clinical judgment skills in clinical settings. J Contin Educ Nurs. 2017;48(8):347-352.
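The group means and SDs above allow a standardized effect size (Cohen's d with a pooled SD) to be sketched. The equal split of the 80 RNs into two groups of 40 is an assumption for illustration; the abstract does not state the group sizes:

```python
# Cohen's d (pooled-SD standardized mean difference) for the posttest
# scores quoted above. ASSUMPTION: n = 40 per group (equal split of
# the 80 RNs), which the abstract does not state.
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with pooled standard deviation."""
    sp = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Posttest means +/- SD from the abstract (47.38 +/- 7.20 experimental,
# 26.50 +/- 6.53 control), assumed n = 40 per group.
d = cohens_d(47.38, 7.20, 40, 26.50, 6.53, 40)
print(round(d, 2))
```

Under this assumption the posttest difference corresponds to a very large effect (d around 3), consistent with the abstract's description of a significant improvement.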