METHODS: A cross-sectional study was conducted using validated, modified communication assessment tools: the Patient Communication Assessment Instruments (PCAI), Student Communication Assessment Instruments (SCAI), and Clinical Communication Assessment Instruments (CCAI), each covering four communication domains. One hundred and seventy-six undergraduate clinical-year students were recruited; each was assessed by a clinical instructor and a randomly selected patient in two settings: the Dental Health Education (DHE) and Comprehensive Care (CC) clinics.
RESULTS: Comparing the three perspectives, PCAI yielded the highest scores across all domains, followed by SCAI and CCAI (p
METHODS: The norm-referenced method of standard setting was applied to the actual scores of 40 final-year dental students on a multiple-choice question (MCQ) paper, a short answer question (SAQ) paper, and an objective structured clinical examination (OSCE). A panel of 10 judges set the standard for the same paper in one sitting using the modified-Angoff method. One judge set the passing scores of 10 OSCE questions again after 2 weeks. The grades and pass/fail rates derived from the absolute standard, norm-referenced, and modified-Angoff methods were compared, and the intra-rater and inter-rater reliabilities of the modified-Angoff method were assessed.
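A minimal sketch of how the three cut scores can be computed, assuming hypothetical scores, hypothetical judge ratings, and a mean-minus-one-SD rule for the norm-referenced method (the abstract does not specify the exact formulas used):

```python
# Illustrative comparison of the three standard-setting approaches described
# above; all data and the norm-referenced rule are assumptions, not the
# study's actual values.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(65, 10, size=40)  # hypothetical % scores of 40 students

# Absolute standard: a fixed pass mark set in advance (assumed 50% here).
absolute_cut = 50.0

# Norm-referenced: the cut score comes from the cohort's own distribution;
# one common rule is mean minus one standard deviation (an assumption here).
norm_cut = scores.mean() - scores.std(ddof=1)

# Modified Angoff: each judge estimates, per item, the probability that a
# borderline candidate answers correctly; the cut score is the mean of the
# judges' averaged item estimates, expressed as a percentage.
n_judges, n_items = 10, 20
angoff_ratings = rng.uniform(0.4, 0.8, size=(n_judges, n_items))  # hypothetical
angoff_cut = angoff_ratings.mean(axis=1).mean() * 100

for name, cut in [("absolute", absolute_cut),
                  ("norm-referenced", norm_cut),
                  ("modified Angoff", angoff_cut)]:
    print(f"{name}: cut = {cut:.1f}%, pass rate = {(scores >= cut).mean():.0%}")
```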
RESULTS: The passing rate for the absolute standard was 100% (40/40), for the norm-referenced method it was 62.5% (25/40), and for the modified-Angoff method it was 80% (32/40). The modified-Angoff method had good inter-rater reliability of 0.876 and excellent test-retest reliability of 0.941.
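The abstract does not name the reliability coefficients used; a common pairing is an average-measures intraclass correlation (Shrout-Fleiss ICC(2,k)) for inter-rater agreement and a Pearson correlation for test-retest stability, sketched here on hypothetical Angoff ratings:

```python
# A sketch of how such reliability figures are commonly obtained; the choice
# of ICC(2,k) and Pearson r, and all rating data, are assumptions.
import numpy as np

def icc_2k(x):
    """Average-measures ICC(2,k) for an (n_subjects, k_raters) matrix."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = (((x - grand) ** 2).sum()
              - (n - 1) * ms_rows
              - (k - 1) * ms_cols)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

rng = np.random.default_rng(1)
# Hypothetical Angoff ratings: 20 items, each rated by 10 judges.
true_difficulty = rng.uniform(0.4, 0.8, size=(20, 1))
ratings = np.clip(true_difficulty + rng.normal(0, 0.05, size=(20, 10)), 0, 1)
print(f"inter-rater ICC(2,k): {icc_2k(ratings):.3f}")

# Test-retest: one judge's item ratings on two occasions, two weeks apart.
t1 = ratings[:, 0]
t2 = np.clip(t1 + rng.normal(0, 0.03, size=20), 0, 1)
print(f"test-retest r: {np.corrcoef(t1, t2)[0, 1]:.3f}")
```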
CONCLUSION: There were significant differences in the outcomes of these three standard-setting methods, as shown by the difference in the proportion of candidates who passed and failed the assessment. The modified-Angoff method was found to have good reliability for use with a professional qualifying dental examination.
METHODS: COPT was a teaching strategy in which students were taught physiology using cases and critical-thinking questions. Three batches of undergraduate medical students (n = 434) served as the experimental group, for whom COPT was incorporated into the third block (teaching unit) of the physiology curriculum, and one batch (n = 149) served as the control group, for whom it was not. The experimental students were trained to answer clinically oriented questions, whereas the control students were not. At the end of each block, both groups took a block exam consisting of clinically oriented questions and recall questions.
RESULTS: Within the experimental group, post-COPT essay exam scores were significantly higher than pre-COPT scores. Comparing post-COPT essay exam scores between groups, the experimental group performed better than the control group. Student feedback indicated a preference for COPT over didactic lectures.
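The abstract does not specify the statistical tests; a typical analysis would pair a within-group (pre vs. post) comparison with a between-group (experimental vs. control) comparison, sketched here with t-tests on hypothetical score data:

```python
# Illustrative versions of the two comparisons reported above; the scores
# and the use of t-tests are assumptions, not the study's actual analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre = rng.normal(55, 8, size=434)         # experimental group, pre-COPT
post = pre + rng.normal(6, 5, size=434)   # experimental group, post-COPT
control = rng.normal(56, 8, size=149)     # control group (no COPT)

# Within-group: paired t-test on pre- vs post-COPT essay scores.
t_paired, p_paired = stats.ttest_rel(post, pre)
# Between-group: independent-samples t-test, experimental vs control.
t_ind, p_ind = stats.ttest_ind(post, control, equal_var=False)

print(f"pre vs post:    t = {t_paired:.2f}, p = {p_paired:.3g}")
print(f"exp vs control: t = {t_ind:.2f}, p = {p_ind:.3g}")
```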
CONCLUSION: The study supports the fact that assessment and teaching patterns should fall in line with each other as proved by the better performance of the experimental group of students compared to the control group. COPT was also found to be a useful adjunct to didactic lectures in teaching physiology.
METHODS: A cross-sectional electronic survey was conducted at universities in Indonesia, Malaysia, and Pakistan. A 59-item survey was administered between October 2017 and December 2017.
FINDINGS: The survey was completed by 211 students (response rate 77.8%). The mean knowledge scores for antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship were 5.6 ± 1.5 and 4.7 ± 1.8 (maximum score 10.0 each), and 3.1 ± 1.4 (maximum score 5.0), respectively. Significant variation was noted among the schools. Awareness of the consequences of antibiotic resistance, and of cases in which no antibiotic is needed, was poor. Knowledge of antibiotic resistance was higher among male respondents (6.1 vs. 5.4) and among those who had attended courses on antibiotic resistance (5.7 vs. 5.2) and antibiotic therapy (5.8 vs. 4.9) (p
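A minimal sketch of how such knowledge scores are typically derived and summarized, assuming one point per correct item and hypothetical response data:

```python
# Illustrative scoring of a 10-item knowledge domain; the item responses and
# the subgroup split are assumptions, not the survey's actual data.
import numpy as np

rng = np.random.default_rng(3)
n = 211
answers_correct = rng.random((n, 10)) < 0.56        # hypothetical item results
resistance_score = answers_correct.sum(axis=1).astype(float)  # score out of 10
male = rng.random(n) < 0.4                          # hypothetical gender flag

print(f"overall: {resistance_score.mean():.1f} ± {resistance_score.std(ddof=1):.1f}")
print(f"male:    {resistance_score[male].mean():.1f}")
print(f"female:  {resistance_score[~male].mean():.1f}")
```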
METHODS: A cross-sectional survey was conducted via a validated 49-item questionnaire administered immediately after all students had completed the examination. The questionnaire comprised questions evaluating the content and structure of the examination, perceptions of OSCE validity and reliability, and ratings of the OSCE relative to other assessment methods. Open-ended follow-up questions were included to generate qualitative data.
RESULTS: Over 80% of the students found the OSCE helpful in highlighting areas of weakness in their clinical competencies. Seventy-eight percent agreed that it was comprehensive, and 66% believed it was fair. About 46% felt that the 15 minutes allocated per station was inadequate. Most importantly, about half of the students raised concerns that personality, ethnicity, and/or gender, as well as inter-patient and inter-assessor variability, were potential sources of bias that could affect their scores. Nevertheless, an overwhelming proportion of the students (90%) agreed that the OSCE provided a useful and practical learning experience.
CONCLUSIONS: Students' perceptions and acceptance of the new assessment method were positive. The survey further highlighted strengths and weaknesses in the development and implementation of the OSCE within the International Islamic University Malaysia's pharmacy curriculum, providing direction for future refinement.
MATERIALS AND METHODS: This review examines selected issues in standard setting, drawing on articles published by educational assessment researchers.
RESULTS: A standard, or cut-off score, should determine whether an examinee has met the requirements to be certified as competent. There is no perfect method for determining the cut score on a test, and no single method is agreed upon as the best; standard setting is not an exact science. The legitimacy of a standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments.
CONCLUSION: A representative percentage of the must-know learning objectives in the curriculum may serve as the basis for test items and pass/fail marks, and practice analysis may help identify the must-know areas of the curriculum. A cut score set by this procedure may lend credibility, validity, defensibility, and comparability to the standard. Having test items constructed by subject experts and vetted by multidisciplinary faculty members may ensure the reliability of the test as well as of the standard.
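One way to make the suggested procedure concrete is to anchor the pass mark to the weight of must-know objectives in the test blueprint. The categories and weights below are illustrative assumptions, not values from the review:

```python
# A hypothetical sketch of a blueprint-driven pass mark: if a competent
# candidate must command essentially all must-know material, the must-know
# share of the test becomes a defensible floor for the cut score.
blueprint = {            # objective category -> number of test items (assumed)
    "must-know": 60,
    "should-know": 30,
    "nice-to-know": 10,
}
total_items = sum(blueprint.values())

pass_mark = blueprint["must-know"] / total_items * 100
print(f"pass mark anchored to must-know content: {pass_mark:.0f}%")
```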
METHODS: This critique of the OSCE is based on the published findings of researchers from its inception in 1975 to 2004.
RESULTS: The reliability, validity, objectivity, and practicability (feasibility) of this examination depend on the number of stations, the construction of the stations, the method of scoring (checklists and/or global scoring), and the number of students assessed. For a comprehensive assessment of clinical competence, other methods should be used in conjunction with the OSCE.
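The dependence of reliability on the number of stations can be made concrete with the Spearman-Brown prophecy formula, which projects reliability as a test is lengthened; the baseline reliability below is illustrative, not a figure from the critique:

```python
# Spearman-Brown prophecy formula: projected reliability when test length
# is multiplied by `factor`. The 10-station baseline of 0.55 is assumed.
def spearman_brown(r: float, factor: float) -> float:
    """Projected reliability when test length is multiplied by `factor`."""
    return factor * r / (1 + (factor - 1) * r)

r_per_10 = 0.55  # assumed reliability of a 10-station OSCE
for stations in (10, 15, 20, 30):
    print(f"{stations} stations: r = {spearman_brown(r_per_10, stations / 10):.2f}")
```

This is one reason the OSCE is resource-intensive: meaningful gains in reliability typically require adding stations, each with its own examiner, patient, and materials.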
CONCLUSION: The OSCE can be a reasonably reliable, valid and objective method of assessment, but its main drawback is that it is resource-intensive.