Displaying publications 1 - 20 of 48 in total

  1. Puthiaparampil T, Rahman MM
    BMC Med Educ, 2020 May 06;20(1):141.
    PMID: 32375739 DOI: 10.1186/s12909-020-02057-w
    BACKGROUND: Multiple choice questions, used in medical school assessments for decades, have many drawbacks: they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and give no information about candidates' strengths and weaknesses. Directly asked, directly answered questions such as Very Short Answer Questions (VSAQ) are considered a better alternative with several advantages.

    OBJECTIVES: This study aims to compare student performance in MCQ and VSAQ tests and to obtain feedback from the stakeholders.

    METHODS: Multiple true-false, one-best-answer and VSAQ tests were conducted in two batches of medical students; their scores and the psychometric indices of the tests were compared, and the opinions of students and academics on these assessment methods were sought.

    RESULTS: Multiple true-false and one-best-answer test scores were skewed and showed poor psychometric performance, whereas VSAQ tests showed better psychometrics and more balanced student performance. The stakeholders' opinions were significantly in favour of VSAQ.

    CONCLUSION AND RECOMMENDATION: This study concludes that VSAQ is a viable alternative to multiple-choice question tests, and it is widely accepted by medical students and academics in the medical faculty.

    Matched MeSH terms: Educational Measurement/methods*
  2. Abraham R, Ramnarayan K, Kamath A
    BMC Med Educ, 2008 Jul 24;8:40.
    PMID: 18652649 DOI: 10.1186/1472-6920-8-40
    BACKGROUND: It has been shown that basic science knowledge learned in the context of a clinical case is better comprehended and more easily applied by medical students than basic science knowledge learned in isolation. The present study intended to validate the effectiveness of Clinically Oriented Physiology Teaching (COPT) in the undergraduate medical curriculum at Melaka Manipal Medical College (Manipal Campus), Manipal, India.

    METHODS: COPT was a teaching strategy wherein students were taught physiology using cases and critical-thinking questions. Three batches of undergraduate medical students (n = 434) served as the experimental group, for whom COPT was incorporated into the third block (teaching unit) of the physiology curriculum, and one batch (n = 149) served as the control group, for whom it was not. The experimental group was trained to answer clinically oriented questions whereas the control group was not. Both groups took a block exam, consisting of clinically oriented questions and recall questions, at the end of each block.

    RESULTS: Comparison of the experimental group's pre-COPT and post-COPT essay exam scores revealed that the post-COPT scores were significantly higher. Comparison of post-COPT essay exam scores between the experimental and control groups revealed that the experimental group performed better. Feedback from the students indicated that they preferred COPT to didactic lectures.

    CONCLUSION: The study supports the view that assessment and teaching patterns should fall in line with each other, as shown by the better performance of the experimental group compared with the control group. COPT was also found to be a useful adjunct to didactic lectures in teaching physiology.

    Matched MeSH terms: Educational Measurement/methods
  3. Ismail MA, Ahmad A, Mohammad JA, Fakri NMRM, Nor MZM, Pa MNM
    BMC Med Educ, 2019 Jun 25;19(1):230.
    PMID: 31238926 DOI: 10.1186/s12909-019-1658-z
    BACKGROUND: Gamification is an increasingly common phenomenon in education. It is a technique to facilitate formative assessment and to promote student learning. It has been shown to be more effective than traditional methods. This phenomenological study was conducted to explore the advantages of gamification through the use of the Kahoot! platform for formative assessment in medical education.

    METHODS: This study employed a phenomenological design. Five focus groups were conducted with medical students who had participated in several Kahoot! sessions.

    RESULTS: Thirty-six categories and nine sub-themes emerged from the focus group discussions. They were grouped into three themes: attractive learning tool, learning guidance and source of motivation.

    CONCLUSIONS: The results suggest that Kahoot! sessions motivate students to study, to determine the subject matter that needs to be studied and to be aware of what they have learned. Thus, the platform is a promising tool for formative assessment in medical education.

    Matched MeSH terms: Educational Measurement/methods*
  4. Perera J, Mohamadou G, Kaur S
    Adv Health Sci Educ Theory Pract, 2010 May;15(2):185-93.
    PMID: 19757129 DOI: 10.1007/s10459-009-9191-1
    Feedback is essential to guide students towards expected performance goals. The usefulness of teacher feedback in improving communication skills (CS) has been well documented. It has been proposed that self-assessment and peer feedback have an equally important role to play in enhancing learning; this is the focus of this study. Objectively structured self-assessment and peer feedback (OSSP) was incorporated into small-group CS teaching sessions for a group of semester-one medical students who were learning CS for the first time, to minimise the influence of previous educational interventions. A control group matched for academic performance, gender and age was used to enable parallel evaluation of the innovation. A reflective log containing closed and open-ended questions was used for OSSP. Facilitators and simulated patients provided feedback to students in both groups during CS learning as per routine practice. Student perceptions of OSSP and its acceptability as a learning method were explored using a questionnaire. CS were assessed in both groups using an objective structured clinical examination (OSCE) as per routine practice, and assessors were blinded as to which group each student belonged. The mean total score and scores for specific areas of interview skills were significantly higher in the experimental group. Analysis of the questionnaire data showed that students gained fresh insights into specific areas such as empathy, addressing patients' concerns and interview style during OSSP, which clearly corroborated the specific differences in scores. The free-text comments were highly encouraging as to the acceptability of OSSP, even though 67% had never been exposed to formal self- and peer-assessment during pre-university studies. OSSP promotes effective CS learning and learner acceptability is high.
    Matched MeSH terms: Educational Measurement/methods
  5. Tan CP, Rokiah P
    Med J Malaysia, 2005 Aug;60 Suppl D:48-53.
    PMID: 16315624
    Formative and summative student assessment has always been of concern to medical teachers, and this is especially important at the level of graduating doctors. The effectiveness and comprehensiveness of the clinical training provided are tested with the use of clinical cases, either with real patients who have genuine medical conditions, or with standardised patients who are trained to simulate actual patients accurately. The Objective Structured Clinical Examination (OSCE) is one method of assessing the adequacy of medical students' clinical skills and their level of competence. It can be used to test a variety of skills such as history taking (communication and interpersonal skills), performing aspects of physical examination, undertaking emergency procedures, and interpreting investigational data. It can also be used to ensure an adequate depth and breadth of coverage of the clinical skills expected of a graduating doctor.
    Matched MeSH terms: Educational Measurement/methods*
  6. Rao M
    Adv Physiol Educ, 2006 Jun;30(2):95.
    PMID: 16709743
    Matched MeSH terms: Educational Measurement/methods*
  7. Chan MY
    Med Educ Online, 2015;20:28565.
    PMID: 26194482 DOI: 10.3402/meo.v20.28565
    The oral case presentation is an important communicative activity in the teaching and assessment of students. Despite its importance, not much attention has been paid to supporting teachers in teaching this difficult task to medical students, who are novices to this form of communication. As a formalized piece of talk that takes a regularized form and is used for a specific communicative goal, the case presentation is regarded as a rhetorical activity, and awareness of its rhetorical and linguistic characteristics should be given due consideration in teaching. This paper reviews the practitioner literature and the limited research literature relating to medical educators' expectations of what makes a good case presentation, and explains the rhetorical aspect of the activity. It finds that there is currently no comprehensive model of the case presentation that captures the rhetorical and linguistic skills needed to produce and deliver a good presentation. Attempts to describe the structure of the case presentation have used predominantly opinion-based methodologies. In this paper, I argue for a performance-based model that would not only allow a description of the rhetorical structure of the oral case presentation, but also enable a systematic examination of the tacit genre knowledge that differentiates the expert from the novice. Such a model will be a useful resource for medical educators in providing more structured feedback and teaching support to medical students learning this important genre.
    Matched MeSH terms: Educational Measurement/methods*
  8. Bosher S, Bowles M
    Nurs Educ Perspect, 2008 May-Jun;29(3):165-72.
    PMID: 18575241
    Recent research has indicated that language may be a source of construct-irrelevant variance for non-native speakers of English, or English as a second language (ESL) students, when they take exams. As a result, exams may not accurately measure knowledge of nursing content. One accommodation often used to level the playing field for ESL students is linguistic modification, a process by which the reading load of test items is reduced while the content and integrity of the item are maintained. Research on the effects of linguistic modification has been conducted on examinees in the K-12 population, but is just beginning in other areas. This study describes the collaborative process by which items from a pathophysiology exam were linguistically modified and subsequently evaluated for comprehensibility by ESL students. Findings indicate that in a majority of cases, modification improved examinees' comprehension of test items. Implications for test item writing and future research are discussed.
    Matched MeSH terms: Educational Measurement/methods*
  9. Tan K, Chong MC, Subramaniam P, Wong LP
    Nurse Educ Today, 2018 May;64:180-189.
    PMID: 29500999 DOI: 10.1016/j.nedt.2017.12.030
    BACKGROUND: Outcome Based Education (OBE) is a student-centered approach to curriculum design and teaching that emphasizes what learners should know, understand and be able to demonstrate, and how they should adapt to life beyond formal education. However, no systematic review has yet explored the effectiveness of OBE in improving the competencies of nursing students.

    OBJECTIVE: To appraise and synthesize the best available evidence that examines the effectiveness of OBE approaches towards the competencies of nursing students.

    DESIGN: A systematic review of interventional experimental studies.

    DATA SOURCES: Eight online databases namely CINAHL, EBSCO, Science Direct, ProQuest, Web of Science, PubMed, EMBASE and SCOPUS were searched.

    REVIEW METHODS: Relevant studies were identified using combined approaches of electronic database search without geographical or language filters but were limited to articles published from 2006 to 2016, handsearching journals and visually scanning references from retrieved studies. Two reviewers independently conducted the quality appraisal of selected studies and data were extracted.

    RESULTS: Six interventional studies met the inclusion criteria. Two of the studies were rated as high methodological quality and four were rated as moderate. Studies were published between 2009 and 2016 and were mostly from Asian and Middle Eastern countries. Results showed that OBE approaches improve competency in knowledge acquisition, in terms of higher final course grades and cognitive skills; improve clinical skills and nursing core competencies; and yield higher behavioural skills scores during the performance of clinical skills. Learner satisfaction was also encouraging, as reported in one of the studies. Only one study reported a negative effect.

    CONCLUSIONS: Although OBE approaches show encouraging effects on the competencies of nursing students, more robust experimental designs with larger sample sizes are needed, evaluating other outcome measures such as additional areas of competency, student satisfaction and patient outcomes.

    Matched MeSH terms: Educational Measurement/methods*
  10. Schwartz PL, Crooks TJ, Sein KT
    Med Educ, 1986 Sep;20(5):399-406.
    PMID: 3762442
    It has been suggested that the 'ideal' measure of the reliability of an examination is obtained by test and retest, using the same examination on the same group of students. However, because of practical and theoretical arguments, most reported reliabilities for multiple choice examinations in medicine are actually measures of internal consistency. While attempting to minimize the effects of potential interfering factors, we have undertaken a study of true test-retest reliability of multiple true-false type multiple choice questions in preclinical medical subjects. From three end-of-term examinations, 363 items (106 of 449 from term 1, 150 of 499 from term 2, and 107 of 492 from term 3) were repeated in the final examination (out of 999 total items). Between test and retest, there was little overall decrease in the percentage of items answered correctly and a decrease of only 3.4 in the percentage score after correction for guessing. However, there was an inverse relation between the test-retest interval and the decrease in performance. Between test and retest, performance decreased significantly on 33 items and increased significantly on 11 items. Test-retest correlation coefficients were 0.70 to 0.78 for items from the separate terms and 0.885 for all items that were retested. Thus, overall, these items had a very high degree of reliability, approaching the 0.9 that has been specified as the requirement for being able to distinguish between individuals.
    Matched MeSH terms: Educational Measurement/methods*
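
The two computations this abstract relies on, formula scoring ("correction for guessing") and a test-retest correlation over repeated items, are compact enough to sketch. Below is a minimal Python illustration with invented data, not the authors' code; the k = 2 penalty reflects true/false items.

```python
# Minimal sketch (invented data, not the paper's): formula scoring and a
# test-retest correlation computed over the proportion correct per item.
import numpy as np
from scipy.stats import pearsonr

def corrected_percentage(right: int, wrong: int, n_items: int, k: int = 2) -> float:
    """Formula score as a percentage: (R - W/(k-1)) / N * 100.
    For true/false items k = 2, so each wrong answer cancels one right one."""
    return 100.0 * (right - wrong / (k - 1)) / n_items

print(corrected_percentage(right=240, wrong=60, n_items=363))  # ~49.6

# Per-item proportion correct at the first sitting vs. at retest
rng = np.random.default_rng(0)
p_test = rng.uniform(0.3, 0.95, size=363)
p_retest = np.clip(p_test + rng.normal(0.0, 0.08, size=363), 0.0, 1.0)
r, _ = pearsonr(p_test, p_retest)
print(f"test-retest correlation over {p_test.size} items: r = {r:.3f}")
```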
  11. Prashanti E, Ramnarayan K
    Adv Physiol Educ, 2019 Jun 01;43(2):99-102.
    PMID: 30835147 DOI: 10.1152/advan.00173.2018
    In an era seemingly saturated with standardized tests of all hues and stripes, it is easy to forget that assessments not only measure the performance of students but also consolidate and enhance their learning. Assessment for learning is best understood as a process by which assessment information is used by teachers to modify their teaching strategies while students adjust and alter their learning approaches. Effectively implemented, formative assessments can convert classroom culture to one that resonates with the triumph of learning. In this paper, we present 10 maxims that show how formative assessments can be better understood, appreciated, and implemented.
    Matched MeSH terms: Educational Measurement/methods*
  12. Sim JH, Abdul Aziz YF, Mansor A, Vijayananthan A, Foong CC, Vadivelu J
    Med Educ Online, 2015;20:26185.
    PMID: 25697602 DOI: 10.3402/meo.v20.26185
    INTRODUCTION: The purpose of this study was to compare students' performance in the different clinical skills (CSs) assessed in the objective structured clinical examination.

    METHODS: Data for this study were obtained from final year medical students' exit examination (n=185). Retrospective analysis of data was conducted using SPSS. Means for the six CSs assessed across the 16 stations were computed and compared.

    RESULTS: Means for history taking, physical examination, communication skills, clinical reasoning skills (CRSs), procedural skills (PSs), and professionalism were 6.25±1.29, 6.39±1.36, 6.34±0.98, 5.86±0.99, 6.59±1.08, and 6.28±1.02, respectively. Repeated measures ANOVA showed there was a significant difference in the means of the six CSs assessed [F(2.980, 548.332)=20.253, p<0.001]. Pairwise multiple comparisons revealed significant differences between the means of the eight pairs of CSs assessed, at p<0.05.

    CONCLUSIONS: CRSs appeared to be the weakest while PSs were the strongest, among the six CSs assessed. Students' unsatisfactory performance in CRS needs to be addressed as CRS is one of the core competencies in medical education and a critical skill to be acquired by medical students before entering the workplace. Despite its challenges, students must learn the skills of clinical reasoning, while clinical teachers should facilitate the clinical reasoning process and guide students' clinical reasoning development.

    Matched MeSH terms: Educational Measurement/methods*
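
A minimal sketch of the repeated-measures comparison reported above, assuming long-format data (one row per student per skill); the scores are simulated around the reported means, and the column names are illustrative rather than the study's.

```python
# One-way repeated-measures ANOVA across the six clinical skills,
# on simulated data (185 students, scores out of 10).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

skills = ["history", "physical", "communication",
          "reasoning", "procedural", "professionalism"]
means = [6.25, 6.39, 6.34, 5.86, 6.59, 6.28]  # means reported in the abstract

rng = np.random.default_rng(1)
rows = [{"student": s, "skill": sk, "score": rng.normal(m, 1.0)}
        for s in range(185) for sk, m in zip(skills, means)]
df = pd.DataFrame(rows)

# Does the mean score differ across the six skills?
print(AnovaRM(df, depvar="score", subject="student", within=["skill"]).fit())
```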
  13. Venkataramani P, Sadanandan T, Savanna RS, Sugathan S
    Med Educ, 2019 May;53(5):499-500.
    PMID: 30891812 DOI: 10.1111/medu.13860
    Matched MeSH terms: Educational Measurement/methods*
  14. Barman A
    Ann Acad Med Singap, 2008 Nov;37(11):957-63.
    PMID: 19082204
    INTRODUCTION: Setting, maintaining and periodically re-evaluating assessment standards are important issues in medical education. Cut-off scores are often "pulled from the air" or set at an arbitrary percentage. A large number of methods and procedures for setting standards or cut scores are described in the literature, and there is a high degree of uncertainty in the performance standards they produce: standards set using the existing methods reflect the subjective judgement of the standard setters. This review does not describe the existing standard-setting methods but examines the validity, reliability, feasibility and legal issues relating to standard setting.

    MATERIALS AND METHODS: This review addresses some of the issues in standard setting, based on published articles by educational assessment researchers.

    RESULTS: A standard or cut-off score should determine whether an examinee has attained the requirements to be certified as competent. There is no perfect method for determining a cut score on a test, and none is agreed upon as the best; setting standards is not an exact science. The legitimacy of a standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments.

    CONCLUSION: A representative percentage of must-know learning objectives in the curriculum may form the basis of test items and pass/fail marks. Practice analysis may help identify the must-know areas of the curriculum. A cut score set by this procedure may lend credibility, validity, defensibility and comparability to the standard. Having test items constructed by subject experts and vetted by multidisciplinary faculty members may ensure the reliability of the test as well as of the standard.

    Matched MeSH terms: Educational Measurement/methods*
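
Since the review discusses standard setting only in general terms, the sketch below illustrates just one widely used judgemental procedure, the Angoff method, rather than anything the paper prescribes: each judge estimates, per item, the probability that a minimally competent examinee answers correctly, and the cut score is the mean of the judges' summed estimates. The ratings are invented.

```python
# Angoff-style cut score from invented judge ratings.
import numpy as np

# rows = judges; columns = items; each entry is the judge's estimated
# probability that a minimally competent examinee answers the item correctly
ratings = np.array([
    [0.6, 0.8, 0.5, 0.9, 0.7],
    [0.5, 0.7, 0.6, 0.8, 0.6],
    [0.7, 0.9, 0.4, 0.9, 0.8],
])

# Each judge's expected score for a borderline examinee, averaged over judges
cut_score = ratings.sum(axis=1).mean()
print(f"Angoff cut score: {cut_score:.2f} out of {ratings.shape[1]} marks")
```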
  15. Lai NM, Teng CL
    BMC Med Educ, 2011;11:25.
    PMID: 21619672 DOI: 10.1186/1472-6920-11-25
    BACKGROUND: Previous studies report various degrees of agreement between self-perceived competence and objectively measured competence in medical students. There is still a paucity of evidence on how the two correlate in the field of Evidence Based Medicine (EBM). We undertook a cross-sectional study to evaluate the self-perceived competence in EBM of senior medical students in Malaysia, and assessed its correlation to their objectively measured competence in EBM.
    METHODS: We recruited a group of medical students in their final six months of training between March and August 2006. The students were receiving a clinically-integrated EBM training program within their curriculum. We evaluated the students' self-perceived competence in two EBM domains ("searching for evidence" and "appraising the evidence") by piloting a questionnaire containing 16 relevant items, and objectively assessed their competence in EBM using an adapted version of the Fresno test, a validated tool. We correlated the matching components between our questionnaire and the Fresno test using Pearson's product-moment correlation.
    RESULTS: Forty-five out of 72 students in the cohort (62.5%) participated by completing the questionnaire and the adapted Fresno test concurrently. In general, our students perceived themselves as moderately competent in most items of the questionnaire. They rated themselves on average 6.34 out of 10 (63.4%) in "searching" and 44.41 out of 57 (77.9%) in "appraising". They scored on average 26.15 out of 60 (43.6%) in the "searching" domain and 57.02 out of 116 (49.2%) in the "appraising" domain in the Fresno test. The correlations between the students' self-rating and their performance in the Fresno test were poor in both the "searching" domain (r = 0.13, p = 0.4) and the "appraising" domain (r = 0.24, p = 0.1).
    CONCLUSIONS: This study provides supporting evidence that at the undergraduate level, self-perceived competence in EBM, as measured using our questionnaire, does not correlate well with objectively assessed EBM competence measured using the adapted Fresno test.
    STUDY REGISTRATION: International Medical University, Malaysia, research ID: IMU 110/06.
    Matched MeSH terms: Educational Measurement/methods*
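
A minimal sketch of the correlation analysis, not the study's code: Pearson's product-moment correlation between questionnaire self-ratings and adapted Fresno test scores, with invented values standing in for the "searching" domain.

```python
# Pearson correlation between self-rated and objectively measured
# EBM competence (invented scores for 45 students).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 45
self_rating = rng.uniform(2.0, 9.0, size=n)     # self-rating out of 10
fresno_score = rng.uniform(10.0, 45.0, size=n)  # "searching" score, max 60

r, p = pearsonr(self_rating, fresno_score)
print(f"r = {r:.2f}, p = {p:.2f}")  # the paper reports r = 0.13, p = 0.4
```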
  16. Roslani AM, Sein KT, Nordin R
    Med J Malaysia, 1989 Mar;44(1):75-82.
    PMID: 2626116
    The Phase I and Phase II undergraduate teaching programmes of the School of Medical Sciences were reviewed at the end of the 1985/86 academic year. It was found that deviations from the School's philosophy had crept into the implementation process. Modifications were therefore made to the Phase I and Phase II programmes with a view to: (i) reducing content; (ii) promoting integration; (iii) improving students' clinical examination skills; and (iv) providing students with more opportunities for self-learning, reinforcement and application of knowledge. The number of assessment items in Phase I and the frequency of assessment in Phase II were also found to be inappropriate, so modifications to assessment were made to rectify this situation.
    Matched MeSH terms: Educational Measurement/methods*
  17. Sim SM, Rasiah RI
    Ann Acad Med Singap, 2006 Feb;35(2):67-71.
    PMID: 16565756
    INTRODUCTION: This paper reports the relationship between the difficulty level and the discrimination power of true/false-type multiple-choice questions (MCQs) in a multidisciplinary paper for the para-clinical year of an undergraduate medical programme.

    MATERIALS AND METHODS: MCQ items in papers taken from Year II Parts A, B and C examinations for Sessions 2001/02, and Part B examinations for 2002/03 and 2003/04, were analysed to obtain their difficulty indices and discrimination indices. Each paper consisted of 250 true/false items (50 questions of 5 items each) on topics drawn from different disciplines. The questions were first constructed and vetted by the individual departments before being submitted to a central committee, where the final selection of the MCQs was made, based purely on the academic judgement of the committee.

    RESULTS: There was a wide distribution of item difficulty indices in all the MCQ papers analysed. Furthermore, the relationship between the difficulty index (P) and discrimination index (D) of the MCQ items in a paper was not linear, but more dome-shaped. Maximal discrimination (D = 51% to 71%) occurred with moderately easy/difficult items (P = 40% to 74%). On average, about 38% of the MCQ items in each paper were "very easy" (P ≥ 75%), while about 9% were "very difficult" (P < 25%). About two-thirds of these very easy/difficult items had "very poor" or even negative discrimination (D ≤ 20%).

    CONCLUSIONS: MCQ items that demonstrate good discriminating potential tend to be moderately difficult items, and the moderately-to-very difficult items are more likely to show negative discrimination. There is a need to evaluate the effectiveness of our MCQ items.

    Matched MeSH terms: Educational Measurement/methods*
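
The two indices the abstract analyses can be computed as below: the difficulty index P is the percentage of examinees answering an item correctly, and the discrimination index D is the difference in percentage correct between high- and low-scoring criterion groups. The sketch assumes a 0/1 response matrix and the common 27% upper/lower groups, which may differ from the paper's exact procedure; the responses are fabricated.

```python
# Classical item analysis: difficulty (P) and discrimination (D), both in %.
import numpy as np

def item_indices(responses: np.ndarray, tail: float = 0.27):
    """responses: examinees x items matrix of 0/1 answers."""
    totals = responses.sum(axis=1)            # each examinee's total score
    order = np.argsort(totals)
    n_tail = max(1, int(round(tail * responses.shape[0])))
    low = responses[order[:n_tail]]           # bottom 27% of examinees
    high = responses[order[-n_tail:]]         # top 27% of examinees
    p = 100.0 * responses.mean(axis=0)        # % answering each item correctly
    d = 100.0 * (high.mean(axis=0) - low.mean(axis=0))
    return p, d

rng = np.random.default_rng(3)
resp = (rng.random((200, 250)) < 0.65).astype(int)  # 200 examinees, 250 items
P, D = item_indices(resp)
print(f"item 0: P = {P[0]:.0f}%, D = {D[0]:.0f}%")
```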
  18. Nagandla K, Gupta ED, Motilal T, Teng CL, Gangadaran S
    Natl Med J India, 2019 Jul 4;31(5):293-295.
    PMID: 31267998 DOI: 10.4103/0970-258X.261197
    Background: Assessment drives students' learning and measures the level of their understanding. We aimed to determine whether performance in continuous assessment can predict failure in the final professional examination.

    Methods: We retrieved the in-course continuous assessment (ICA) and final professional examination results of 3 cohorts of medical students (n = 245) from the examination unit of the International Medical University, Seremban, Malaysia. The ICA comprised 3 sets of composite marks derived from coursework, which included a summative theory paper with short-answer and one-best-answer questions; the clinical component included an end-of-posting practical examination. These examinations are conducted every 6 months in semesters 6, 7 and 8 and are graded pass/fail for each student. The final professional examination, comprising modified essay questions (MEQs), an 8-question objective structured practical examination (OSPE) and a 16-station objective structured clinical examination (OSCE), was graded pass/fail. Whether failure in continuous assessment predicts failure in each component of the final professional examination was tested using the chi-square test and presented as odds ratios (OR) with 95% confidence intervals (CI).

    Results: Failure in the ICA in semesters 6-8 strongly predicted failure in the MEQs, OSPE and OSCE of the final professional examination, with ORs of 3.8-14.3 (all analyses p<0.001) and 2.4-6.9 (p<0.05). However, the association was stronger with the MEQs and OSPE than with the OSCE.

    Conclusion: The ICA, with its theory and clinical examinations, had a direct relationship with students' performance in the final examination and is a useful assessment tool.

    Matched MeSH terms: Educational Measurement/methods*
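
A minimal sketch of the predictive analysis with invented counts, not the study's data: a 2x2 table of ICA result against final-examination result, a chi-square test, and an odds ratio with a Woolf 95% confidence interval.

```python
# Chi-square test and odds ratio (with 95% CI) for a 2x2 table of
# ICA failure vs. final professional examination failure.
import numpy as np
from scipy.stats import chi2_contingency

#                  failed final  passed final
table = np.array([[30,           20],     # failed ICA
                  [25,          170]])    # passed ICA

chi2, p, dof, _ = chi2_contingency(table)
a, b, c, d = table.ravel()
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, "
      f"OR = {odds_ratio:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```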
  19. Awaisu A, Mohamed MH, Al-Efan QA
    Am J Pharm Educ, 2007 Dec 15;71(6):118.
    PMID: 19503702
    OBJECTIVES: To assess bachelor of pharmacy students' overall perception and acceptance of an objective structured clinical examination (OSCE), a new method of clinical competence assessment in the pharmacy undergraduate curriculum at our faculty, and to explore its strengths and weaknesses through feedback.

    METHODS: A cross-sectional survey was conducted via a validated 49-item questionnaire administered immediately after all students had completed the examination. The questionnaire comprised questions evaluating the content and structure of the examination, perceptions of OSCE validity and reliability, and ratings of the OSCE in relation to other assessment methods. Open-ended follow-up questions were included to generate qualitative data.

    RESULTS: Over 80% of the students found the OSCE to be helpful in highlighting areas of weaknesses in their clinical competencies. Seventy-eight percent agreed that it was comprehensive and 66% believed it was fair. About 46% felt that the 15 minutes allocated per station was inadequate. Most importantly, about half of the students raised concerns that personality, ethnicity, and/or gender, as well as interpatient and inter-assessor variability were potential sources of bias that could affect their scores. However, an overwhelming proportion of the students (90%) agreed that the OSCE provided a useful and practical learning experience.

    CONCLUSIONS: Students' perceptions and acceptance of the new method of assessment were positive. The survey further highlighted for future refinement the strengths and weaknesses associated with the development and implementation of an OSCE in the International Islamic University Malaysia's pharmacy curriculum.

    Matched MeSH terms: Educational Measurement/methods*