Displaying publications 1 - 20 of 48 in total

  1. Yusoff MS, Hadie SN, Abdul Rahim AF
    Med Educ, 2014 Feb;48(2):108-10.
    PMID: 24528391 DOI: 10.1111/medu.12403
    Matched MeSH terms: Educational Measurement/methods*
  2. Awaisu A, Mohamed MH, Al-Efan QA
    Am J Pharm Educ, 2007 Dec 15;71(6):118.
    PMID: 19503702
    OBJECTIVES: To assess bachelor of pharmacy students' overall perception and acceptance of an objective structured clinical examination (OSCE), a new method of clinical competence assessment in the pharmacy undergraduate curriculum at our faculty, and to explore its strengths and weaknesses through feedback.

    METHODS: A cross-sectional survey was conducted via a validated 49-item questionnaire, administered immediately after all students completed the examination. The questionnaire comprised questions evaluating the content and structure of the examination, perceptions of OSCE validity and reliability, and ratings of the OSCE in relation to other assessment methods. Open-ended follow-up questions were included to generate qualitative data.

    RESULTS: Over 80% of the students found the OSCE to be helpful in highlighting areas of weaknesses in their clinical competencies. Seventy-eight percent agreed that it was comprehensive and 66% believed it was fair. About 46% felt that the 15 minutes allocated per station was inadequate. Most importantly, about half of the students raised concerns that personality, ethnicity, and/or gender, as well as interpatient and inter-assessor variability were potential sources of bias that could affect their scores. However, an overwhelming proportion of the students (90%) agreed that the OSCE provided a useful and practical learning experience.

    CONCLUSIONS: Students' perceptions and acceptance of the new method of assessment were positive. The survey further highlighted for future refinement the strengths and weaknesses associated with the development and implementation of an OSCE in the International Islamic University Malaysia's pharmacy curriculum.

    Matched MeSH terms: Educational Measurement/methods*
  3. Abubakar U, Muhammad HT, Sulaiman SAS, Ramatillah DL, Amir O
    Curr Pharm Teach Learn, 2020 03;12(3):265-273.
    PMID: 32273061 DOI: 10.1016/j.cptl.2019.12.002
    BACKGROUND AND PURPOSE: Training pharmacy students in infectious diseases (ID) is important to enable them to participate in antibiotic stewardship programs. This study evaluated knowledge and self-confidence regarding antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship among final year pharmacy undergraduate students.

    METHODS: A cross-sectional electronic survey was conducted at universities in Indonesia, Malaysia, and Pakistan. A 59-item survey was administered between October 2017 and December 2017.

    FINDINGS: The survey was completed by 211 students (response rate 77.8%). The mean knowledge score for antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship was 5.6 ± 1.5, 4.7 ± 1.8 (maximum scores 10.0) and 3.1 ± 1.4 (maximum score 5.0), respectively. Significant variations were noted among the schools. There was poor awareness about the consequences of antibiotic resistance and cases with no need for an antibiotic. The knowledge of antibiotic resistance was higher among male respondents (6.1 vs. 5.4) and those who had attended antibiotic resistance (5.7 vs. 5.2) and antibiotic therapy (5.8 vs. 4.9) courses (p 

    Matched MeSH terms: Educational Measurement/methods
  4. Barman A
    Ann Acad Med Singap, 2008 Nov;37(11):957-63.
    PMID: 19082204
    INTRODUCTION: Setting, maintaining and periodically re-evaluating assessment standards are important issues in medical education. Cut-off scores are often "pulled from the air" or set to an arbitrary percentage. A large number of methods/procedures used to set a standard or cut score are described in the literature, and there is a high degree of uncertainty in performance standards set using these methods. Standards set using the existing methods reflect the subjective judgment of the standard setters. This review does not describe the existing standard-setting methods/procedures but narrates the validity, reliability, feasibility and legal issues relating to standard setting.

    MATERIALS AND METHODS: This review is on some of the issues in standard setting based on the published articles of educational assessment researchers.

    RESULTS: The standard or cut-off score should determine whether the examinee has attained the requirements to be certified as competent. There is no perfect method to determine the cut score on a test, and none is agreed upon as the best method. Setting a standard is not an exact science. The legitimacy of the standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments.

    CONCLUSION: Representative percentage of must-know learning objectives in the curriculum may be the basis of test items and pass/fail marks. Practice analysis may help in identifying the must-know areas of curriculum. Cut score set by this procedure may give the credibility, validity, defensibility and comparability of the standard. Constructing the test items by subject experts and vetted by multi-disciplinary faculty members may ensure the reliability of the test as well as the standard.

    Matched MeSH terms: Educational Measurement/methods*
  5. Liew SC, Dutta S, Sidhu JK, De-Alwis R, Chen N, Sow CF, et al.
    Med Teach, 2014 Jul;36(7):626-31.
    PMID: 24787534 DOI: 10.3109/0142159X.2014.899689
    The complexity of modern medicine creates more challenges for the teaching and assessment of communication skills in undergraduate medical programmes. This research was conducted to study the level of communication skills among undergraduate medical students and to determine the difference between simulated patients' and clinical instructors' assessments of communication skills.
    Matched MeSH terms: Educational Measurement/methods*
  6. Bosher S, Bowles M
    Nurs Educ Perspect, 2008 May-Jun;29(3):165-72.
    PMID: 18575241
    Recent research has indicated that language may be a source of construct-irrelevant variance for non-native speakers of English, or English as a second language (ESL) students, when they take exams. As a result, exams may not accurately measure knowledge of nursing content. One accommodation often used to level the playing field for ESL students is linguistic modification, a process by which the reading load of test items is reduced while the content and integrity of the item are maintained. Research on the effects of linguistic modification has been conducted on examinees in the K-12 population, but is just beginning in other areas. This study describes the collaborative process by which items from a pathophysiology exam were linguistically modified and subsequently evaluated for comprehensibility by ESL students. Findings indicate that in a majority of cases, modification improved examinees' comprehension of test items. Implications for test item writing and future research are discussed.
    Matched MeSH terms: Educational Measurement/methods*
  7. Chan MY
    Med Educ Online, 2015;20:28565.
    PMID: 26194482 DOI: 10.3402/meo.v20.28565
    The oral case presentation is an important communicative activity in the teaching and assessment of students. Despite its importance, not much attention has been paid to supporting teachers in teaching this difficult task to medical students, who are novices to this form of communication. As a formalized piece of talk that takes a regularized form and is used for a specific communicative goal, the case presentation is regarded as a rhetorical activity, and awareness of its rhetorical and linguistic characteristics should be given due consideration in teaching. This paper reviews the practitioner literature and the limited research literature relating to medical educators' expectations of what makes a good case presentation, and explains the rhetorical aspect of the activity. It is found that there is currently no comprehensive model of the case presentation that captures the rhetorical and linguistic skills needed to produce and deliver a good presentation. Attempts to describe the structure of the case presentation have used predominantly opinion-based methodologies. In this paper, I argue for a performance-based model that would not only allow a description of the rhetorical structure of the oral case presentation, but also enable a systematic examination of the tacit genre knowledge that differentiates the expert from the novice. Such a model would be a useful resource for medical educators in providing more structured feedback and teaching support to medical students learning this important genre.
    Matched MeSH terms: Educational Measurement/methods*
  8. Lee Chin K, Ling Yap Y, Leng Lee W, Chang Soh Y
    Am J Pharm Educ, 2014 Oct 15;78(8):153.
    PMID: 25386018 DOI: 10.5688/ajpe788153
    To determine whether human patient simulation (HPS) is superior to case-based learning (CBL) in teaching diabetic ketoacidosis (DKA) and thyroid storm (TS) to pharmacy students.
    Matched MeSH terms: Educational Measurement/methods
  9. Ramoo V, Abdullah KL, Tan PS, Wong LP, Chua PY
    Nurs Crit Care, 2016 Sep;21(5):287-94.
    PMID: 25271143 DOI: 10.1111/nicc.12105
    BACKGROUND: Sedation management is an integral component of critical care practice. It requires the greatest attention of critical care practitioners because it carries significant risks to patients. Therefore, it is imperative that nurses are aware of potential adverse consequences of sedation therapy and current sedation practice recommendations.

    AIMS AND OBJECTIVES: To evaluate the impact of an educational intervention on nurses' knowledge of sedation assessment and management.

    DESIGNS AND METHODS: A quasi-experimental design with a pre- and post-test method was used. The educational intervention included theoretical sessions on assessing and managing sedation and hands-on sedation assessment practice using the Richmond Agitation Sedation Scale. Its effect was measured using self-administered questionnaire, completed at the baseline level and 3 months following the intervention.

    RESULTS: Participants were 68 registered nurses from an intensive care unit of a teaching hospital in Malaysia. Significant increases in overall mean knowledge scores were observed from the pre- to post-intervention phase (mean of 79·00 versus 102·00, p < 0·001). Nurses with fewer than 5 years of work experience, those younger than 26 years, and those with only a basic nursing education had a significantly greater level of knowledge improvement at the post-intervention phase compared with their colleagues, with mean differences of 24·64 (p = 0·001), 23·81 (p = 0·027) and 27·25 (p = 0·0001), respectively. A repeated-measures analysis of variance revealed a statistically significant effect of the educational intervention on knowledge score after controlling for age, years of work and level of nursing education (p = 0·0001, ηp² = 0·431).

    CONCLUSION: An educational intervention consisting of theoretical sessions and hands-on sedation assessment practice was found effective in improving nurses' knowledge and understanding of sedation management.

    RELEVANCE TO CLINICAL PRACTICE: This study highlighted the importance of continuing education to increase nurses' understanding of intensive care practices, which is vital for improving the quality of patient care.

    Matched MeSH terms: Educational Measurement/methods
  10. Guilding C, Pye RE, Butler S, Atkinson M, Field E
    Pharmacol Res Perspect, 2021 Aug;9(4):e00833.
    PMID: 34309243 DOI: 10.1002/prp2.833
    Multiple choice questions (MCQs) are a common form of assessment in medical schools and students seek opportunities to engage with formative assessment that reflects their summative exams. Formative assessment with feedback and active learning strategies improve student learning outcomes, but a challenge for educators, particularly those with large class sizes, is how to provide students with such opportunities without overburdening faculty. To address this, we enrolled medical students in the online learning platform PeerWise, which enables students to author and answer MCQs, rate the quality of other students' contributions as well as discuss content. A quasi-experimental mixed methods research design was used to explore PeerWise use and its impact on the learning experience and exam results of fourth year medical students who were studying courses in clinical sciences and pharmacology. Most students chose to engage with PeerWise following its introduction as a noncompulsory learning opportunity. While students perceived benefits in authoring and peer discussion, students engaged most highly with answering questions, noting that this helped them identify gaps in knowledge, test their learning and improve exam technique. Detailed analysis of the 2015 cohort (n = 444) with hierarchical regression models revealed a significant positive predictive relationship between answering PeerWise questions and exam results, even after controlling for previous academic performance, which was further confirmed with a follow-up multi-year analysis (2015-2018, n = 1693). These 4 years of quantitative data corroborated students' belief in the benefit of answering peer-authored questions for learning.
    Matched MeSH terms: Educational Measurement/methods*
  11. Nagandla K, Gupta ED, Motilal T, Teng CL, Gangadaran S
    Natl Med J India, 2019 7 4;31(5):293-295.
    PMID: 31267998 DOI: 10.4103/0970-258X.261197
    Background: Assessment drives students' learning. It measures the level of students' understanding. We aimed to determine whether performance in continuous assessment can predict failure in the final professional examination results.

    Methods: We retrieved the in-course continuous assessment (ICA) and final professional examination results of 3 cohorts of medical students (n = 245) from the examination unit of the International Medical University, Seremban, Malaysia. The ICA comprised 3 sets of composite marks derived from coursework, which included a summative theory paper with short answer questions and one-best-answer questions. The clinical examination included an end-of-posting practical examination. These examinations are conducted every 6 months in semesters 6, 7 and 8 and are graded as pass/fail for each student. The final professional examination, comprising modified essay questions (MEQs), an 18-question objective structured practical examination (OSPE) and a 16-station objective structured clinical examination (OSCE), was graded as pass/fail. Whether failure in continuous assessment can predict failure in each component of the final professional examination was tested using the chi-square test and presented as odds ratios (OR) with 95% confidence intervals (CI).

    Results: Failure in ICA in semesters 6-8 strongly predicted failure in the MEQs, OSPE and OSCE of the final professional examination, with ORs of 3.8-14.3 (all analyses p < 0.001) and ORs of 2.4-6.9 (p < 0.05). However, the correlation was stronger with the MEQs and OSPE than with the OSCE.

    Conclusion: ICA with theory and clinical examination had a direct relationship with students' performance in the final examination and is a useful assessment tool.

    Matched MeSH terms: Educational Measurement/methods*
  12. Sim JH, Tong WT, Hong WH, Vadivelu J, Hassan H
    Med Educ Online, 2015;20:28612.
    PMID: 26511792 DOI: 10.3402/meo.v20.28612
    INTRODUCTION: Assessment environment, synonymous with climate or atmosphere, is multifaceted. Although there are valid and reliable instruments for measuring the educational environment, there is no validated instrument for measuring the assessment environment in medical programs. This study aimed to develop an instrument for measuring students' perceptions of the assessment environment in an undergraduate medical program and to examine the psychometric properties of the new instrument.
    METHOD: The Assessment Environment Questionnaire (AEQ), a 40-item, four-point (1=Strongly Disagree to 4=Strongly Agree) Likert scale instrument designed by the authors, was administered to medical undergraduates from the authors' institution. The response rate was 626/794 (78.84%). To establish construct validity, exploratory factor analysis (EFA) with principal component analysis and varimax rotation was conducted. To examine the internal consistency reliability of the instrument, Cronbach's α was computed. Mean scores for the entire AEQ and for each factor/subscale were calculated. Mean AEQ scores of students from different academic years and sex were examined.
    RESULTS: Six hundred and eleven completed questionnaires were analysed. EFA extracted four factors: feedback mechanism (seven items), learning and performance (five items), information on assessment (five items), and assessment system/procedure (three items), which together explained 56.72% of the variance. Based on the four extracted factors/subscales, the AEQ was reduced to 20 items. Cronbach's α for the 20-item AEQ was 0.89, whereas Cronbach's α for the four factors/subscales ranged from 0.71 to 0.87. Mean score for the AEQ was 2.68/4.00. The factor/subscale of 'feedback mechanism' recorded the lowest mean (2.39/4.00), whereas the factor/subscale of 'assessment system/procedure' scored the highest mean (2.92/4.00). Significant differences were found among the AEQ scores of students from different academic years.
    CONCLUSIONS: The AEQ is a valid and reliable instrument. Initial validation supports its use to measure students' perceptions of the assessment environment in an undergraduate medical program.
    KEYWORDS: assessment environment; development; instrument; psychometric properties; validation
    Matched MeSH terms: Educational Measurement/methods*
  13. Abraham R, Ramnarayan K, Kamath A
    BMC Med Educ, 2008 Jul 24;8:40.
    PMID: 18652649 DOI: 10.1186/1472-6920-8-40
    BACKGROUND: It has been shown that basic science knowledge learned in the context of a clinical case is better comprehended and more easily applied by medical students than basic science knowledge learned in isolation. The present study intended to validate the effectiveness of Clinically Oriented Physiology Teaching (COPT) in the undergraduate medical curriculum at Melaka Manipal Medical College (Manipal Campus), Manipal, India.

    METHODS: COPT was a teaching strategy in which students were taught physiology using cases and critical thinking questions. Three batches of undergraduate medical students (n = 434) served as the experimental groups, for whom COPT was incorporated into the third block (teaching unit) of the physiology curriculum, and one batch (n = 149) served as the control group, for whom COPT was not incorporated. The experimental groups of students were trained to answer clinically oriented questions, whereas the control group was not. Both groups of students undertook a block examination, consisting of clinically oriented questions and recall questions, at the end of each block.

    RESULTS: Comparison of pre-COPT and post-COPT essay exam scores of experimental group of students revealed that the post-COPT scores were significantly higher compared to the pre-COPT scores. Comparison of post-COPT essay exam scores of the experimental group and control group of students revealed that the experimental group of students performed better compared to the control group. Feedback from the students indicated that they preferred COPT to didactic lectures.

    CONCLUSION: The better performance of the experimental group compared with the control group supports the view that assessment and teaching patterns should fall in line with each other. COPT was also found to be a useful adjunct to didactic lectures in teaching physiology.

    Matched MeSH terms: Educational Measurement/methods
  14. Perera J, Mohamadou G, Kaur S
    Adv Health Sci Educ Theory Pract, 2010 May;15(2):185-93.
    PMID: 19757129 DOI: 10.1007/s10459-009-9191-1
    Feedback is essential to guide students towards expected performance goals. The usefulness of teacher feedback in improving communication skills (CS) has been well documented. It has been proposed that self-assessment and peer feedback have an equally important role to play in enhancing learning; this is the focus of this study. Objectively structured self-assessment and peer feedback (OSSP) was incorporated into small-group CS teaching sessions for a group of semester-one medical students who were learning CS for the first time, to minimise the influence of previous educational interventions. A control group matched for academic performance, gender and age was used to enable parallel evaluation of the innovation. A reflective log containing closed and open-ended questions was used for OSSP. Facilitators and simulated patients provided feedback to students in both groups during CS learning as per routine practice. Student perceptions of OSSP and its acceptability as a learning method were explored using a questionnaire. CS were assessed in both groups using an objective structured clinical examination (OSCE) as per routine practice, and assessors were blinded as to which group each student belonged. The mean total score and scores for specific areas of interview skills were significantly higher in the experimental group. Analysis of the questionnaire data showed that students gained fresh insights during OSSP into specific areas such as empathy, addressing patients' concerns and interview style, which clearly corroborated the specific differences in scores. The free-text comments were highly encouraging as to the acceptability of OSSP, despite 67% never having been exposed to formal self- and peer-assessment during pre-university studies. OSSP promotes effective CS learning, and learner acceptability is high.
    Matched MeSH terms: Educational Measurement/methods
  15. Khairani AZ, Ahmad NS, Khairani MZ
    J Appl Meas, 2017;18(4):449-458.
    PMID: 29252212
    Adolescence is an important transitional phase in human development, during which adolescents experience physiological as well as psychological changes. Nevertheless, these changes are often misunderstood by teachers, parents, and even the adolescents themselves. Thus, conflicts exist, and adolescents are affected by them physically and emotionally. An important emotional state resulting from such conflict is anger. This article describes the development and validation of the 34-item Adolescent Anger Inventory (AAI) to measure types of anger among Malaysian adolescents. A sample of 2,834 adolescents in secondary school provided responses that were analyzed using the Rasch measurement framework. The four response categories worked satisfactorily for the scale developed. A total of 11 items did not fit the model's expectations and were thus dropped from the final scale. The scale also demonstrated satisfactory reliability and separation evidence. In addition, items in the AAI showed no evidence of DIF between 14- and 16-year-old adolescents. Nevertheless, the AAI did not have sufficient items to target adolescents with a high level of physically aggressive anger.
    Matched MeSH terms: Educational Measurement/methods*
  16. Karanth KV, Kumar MV
    Ann Acad Med Singap, 2008 Dec;37(12):1008-11.
    PMID: 19159033
    The existing clinical teaching in small-group sessions is focused on the patient's disease. This has a dual limitation: not only does clinical skill testing become secondary, but student involvement also slackens, as only 1 student is evaluated during the entire session. A new methodology of small-group teaching being experimented with shifted the focus to testing students' clinical skills, with emphasis on team participation through daily evaluation of the entire team. The group underwent training sessions in which the clinical skills were taught, demonstrated and practiced on simulated patients (hear-see-do module). Later, the entire small group, as a team, examined the patient, and each student was evaluated on 1 of 5 specific tasks--history taking, general examination, systemic examination, discussion and case write-up. Out of 170 students, 69 students (study) and 101 students (control) were randomly chosen and trained according to the new and existing methods, respectively. Senior faculty (who were blinded as to which method of teaching each student underwent) evaluated all the students. The marks obtained at the 2 examinations were tabulated and compared for significance using the t-test. The difference in marks showed a statistically significant improvement in the study group, indicating that the new module was an effective teaching methodology. Teaching effectiveness was evaluated by student feedback regarding improvement in knowledge, clinical and communication skills, and positive attitudes on a 5-point Likert scale. Psychometric analysis was very positively indicative of the success of the module.
    Matched MeSH terms: Educational Measurement/methods
  17. Loh KY, Kwa SK
    Med Educ, 2009 Nov;43(11):1101-2.
    PMID: 19874515 DOI: 10.1111/j.1365-2923.2009.03501.x
    Matched MeSH terms: Educational Measurement/methods*
  18. Schwartz PL, Kyaw Tun Sein
    Med Educ, 1987 May;21(3):265-8.
    PMID: 3600444
    Matched MeSH terms: Educational Measurement/methods*
  19. Lai NM
    Med Educ, 2009 May;43(5):479-80.
    PMID: 19344346 DOI: 10.1111/j.1365-2923.2009.03320.x
    Matched MeSH terms: Educational Measurement/methods*