  1. Sim SM, Rasiah RI
    Ann Acad Med Singap, 2006 Feb;35(2):67-71.
    PMID: 16565756
    INTRODUCTION: This paper reports the relationship between the difficulty level and the discrimination power of true/false-type multiple-choice questions (MCQs) in a multidisciplinary paper for the para-clinical year of an undergraduate medical programme.

    MATERIALS AND METHODS: MCQ items in papers taken from Year II Parts A, B and C examinations for Sessions 2001/02, and Part B examinations for 2002/03 and 2003/04, were analysed to obtain their difficulty indices and discrimination indices. Each paper consisted of 250 true/false items (50 questions of 5 items each) on topics drawn from different disciplines. The questions were first constructed and vetted by the individual departments before being submitted to a central committee, where the final selection of the MCQs was made, based purely on the academic judgement of the committee.

    RESULTS: There was a wide distribution of item difficulty indices in all the MCQ papers analysed. Furthermore, the relationship between the difficulty index (P) and discrimination index (D) of the MCQ items in a paper was not linear, but more dome-shaped. Maximal discrimination (D = 51% to 71%) occurred with moderately easy/difficult items (P = 40% to 74%). On average, about 38% of the MCQ items in each paper were "very easy" (P ≥ 75%), while about 9% were "very difficult" (P < 25%). About two-thirds of these very easy/difficult items had "very poor" or even negative discrimination (D ≤ 20%).

    CONCLUSIONS: MCQ items that demonstrate good discriminating potential tend to be moderately difficult items, and the moderately-to-very difficult items are more likely to show negative discrimination. There is a need to evaluate the effectiveness of our MCQ items.

    Matched MeSH terms: Educational Measurement/methods*
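    The difficulty index (P) and discrimination index (D) discussed in the abstract above are standard classical item-analysis statistics, but the paper's exact formulas are not reproduced here. The Python sketch below assumes the conventional definitions: P is the percentage of all examinees answering the item correctly, and D is the percentage correct in the top 27% of examinees (ranked by total score) minus that in the bottom 27%. All names and the example data are illustrative only.

# Minimal item-analysis sketch (not the authors' code); assumes the
# conventional definitions of the difficulty and discrimination indices.
import numpy as np

def item_indices(responses, total_scores, tail=0.27):
    """responses: 0/1 scores for one item, one entry per examinee.
    total_scores: corresponding total test scores, used to form the tails."""
    responses = np.asarray(responses, dtype=float)
    total_scores = np.asarray(total_scores, dtype=float)

    p_index = 100.0 * responses.mean()            # difficulty index, percent correct

    n_tail = max(1, int(round(tail * len(responses))))
    order = np.argsort(total_scores)              # examinees sorted by total score
    lower = responses[order[:n_tail]]             # bottom 27%
    upper = responses[order[-n_tail:]]            # top 27%
    d_index = 100.0 * (upper.mean() - lower.mean())   # discrimination index

    return p_index, d_index

# Hypothetical data: 10 examinees, item answered correctly by 6 of them.
item = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
totals = [180, 90, 200, 160, 110, 220, 95, 170, 210, 100]
print(item_indices(item, totals))   # -> (60.0, 100.0)
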
  2. Torke S, Upadhya S, Abraham RR, Ramnarayan K
    Adv Physiol Educ, 2006 Mar;30(1):48-9.
    PMID: 16481610
    Matched MeSH terms: Educational Measurement/methods*
  3. Rao M
    Adv Physiol Educ, 2006 Jun;30(2):95.
    PMID: 16709743
    Matched MeSH terms: Educational Measurement/methods*
  4. Sim SM, Azila NM, Lian LH, Tan CP, Tan NH
    Ann Acad Med Singap, 2006 Sep;35(9):634-41.
    PMID: 17051280
    INTRODUCTION: A process-oriented instrument was developed for the summative assessment of student performance during problem-based learning (PBL) tutorials. This study evaluated (1) the acceptability of the instrument by tutors and (2) the consistency of assessment scores by different raters.

    MATERIALS AND METHODS: A survey of the tutors who had used the instrument was conducted to determine whether the assessment instrument or form was user-friendly. The 4 competencies assessed, using a 5-point rating scale, were (1) participation and communication skills, (2) cooperation or team-building skills, (3) comprehension or reasoning skills and (4) knowledge or information-gathering skills. Tutors were given a set of criteria guidelines for scoring the students' performance in these 4 competencies. Tutors were not attached to a particular PBL group, but took turns to facilitate different groups on different case or problem discussions. Assessment scores for one cohort of undergraduate medical students in their respective PBL groups in Year I (2003/2004) and Year II (2004/2005) were analysed. The consistency of scores was analysed using intraclass correlation.

    RESULTS: The majority of the tutors surveyed expressed no difficulty in using the instrument and agreed that it helped them assess the students fairly. Analysis of the scores obtained for the above cohort indicated that the different raters were relatively consistent in their assessment of student performance, despite a small number consistently showing either "strict" or "indiscriminate" rating practice.

    CONCLUSION: The instrument designed for the assessment of student performance in the PBL tutorial classroom setting is user-friendly and is reliable when used judiciously with the criteria guidelines provided.

    Matched MeSH terms: Educational Measurement/methods*
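    The PBL assessment study above reports analysing rater consistency "using intraclass correlation" but does not specify which ICC form was used or how the ratings were arranged, so the following is only a sketch of one common variant, the one-way random-effects ICC(1,1), with invented ratings.

# One-way random-effects intraclass correlation, ICC(1,1), from ANOVA mean
# squares. Illustrative only; the study's actual ICC model is not stated.
import numpy as np

def icc_oneway(x):
    """x: (n_targets, k_raters) array of ratings."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    row_means = x.mean(axis=1)
    grand = x.mean()
    bms = k * ((row_means - grand) ** 2).sum() / (n - 1)         # between-target mean square
    wms = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-target mean square
    return (bms - wms) / (bms + (k - 1) * wms)

# Hypothetical example: 6 students each rated by 3 tutors on the 5-point scale.
ratings = np.array([[4, 4, 5],
                    [3, 3, 4],
                    [5, 5, 5],
                    [2, 3, 2],
                    [4, 3, 4],
                    [3, 4, 3]])
print(round(icc_oneway(ratings), 3))
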
  5. Mukari SZ, Ling LN, Ghani HA
    Int J Pediatr Otorhinolaryngol, 2007 Feb;71(2):231-40.
    PMID: 17109974
    The present study documents the school performance of 20 pediatric cochlear implant recipients who attended mainstream classes and compares their educational performance with their normally hearing peers.
    Matched MeSH terms: Educational Measurement*
  6. Torke S, Abraham RR, Ramnarayan K, Upadhya S
    Adv Physiol Educ, 2007 Mar;31(1):118.
    PMID: 17327594
    Matched MeSH terms: Educational Measurement
  7. Tan CP, Azila NM
    Med Educ, 2007 May;41(5):517.
    PMID: 17470099
    Matched MeSH terms: Educational Measurement/standards*
  8. Yun LS, Hassan Y, Aziz NA, Awaisu A, Ghazali R
    Patient Educ Couns, 2007 Dec;69(1-3):47-54.
    PMID: 17720351 DOI: 10.1016/j.pec.2007.06.017
    Objective: The primary objective of this study was to assess and compare the knowledge of diabetes mellitus possessed by patients with diabetes and healthy adult volunteers in Penang, Malaysia.
    Method: A cross-sectional study was conducted from 20 February 2006 to 31 March 2006. We randomly selected 120 patients with diabetes mellitus from a diabetic clinic at the General Hospital Penang, Malaysia and 120 healthy adults at a shopping complex in Penang. Each participant was interviewed face-to-face by a pharmacist using a validated questionnaire, and they were required to answer a total of 30 questions concerning knowledge about diabetes mellitus using Yes, No or Unsure as the only response.
    Results: The results showed that patients with diabetes mellitus were significantly more knowledgeable than the healthy volunteers about risk factors, symptoms, chronic complications, treatment and self-management, and monitoring parameters. Educational level was the best predictor of knowledge and awareness of diabetes mellitus.
    Conclusion: Knowledge about diabetes mellitus should be improved among the general population.
    Practice implications: This study has major implications for the design of an educational programme for diabetics and a health promotion programme as a primary prevention measure for the healthy population in general, and especially for those at high risk. The results could be useful in the design of future studies for evaluating patients' and the general public's knowledge about diabetes mellitus.
    Matched MeSH terms: Educational Measurement
  9. Awaisu A, Mohamed MH, Al-Efan QA
    Am J Pharm Educ, 2007 Dec 15;71(6):118.
    PMID: 19503702
    OBJECTIVES: To assess bachelor of pharmacy students' overall perception and acceptance of an objective structured clinical examination (OSCE), a new method of clinical competence assessment in the undergraduate pharmacy curriculum at our Faculty, and to explore its strengths and weaknesses through feedback.

    METHODS: A cross-sectional survey was conducted via a validated 49-item questionnaire, administered immediately after all students had completed the examination. The questionnaire comprised questions evaluating the content and structure of the examination, perceptions of the OSCE's validity and reliability, and ratings of the OSCE in relation to other assessment methods. Open-ended follow-up questions were included to generate qualitative data.

    RESULTS: Over 80% of the students found the OSCE to be helpful in highlighting areas of weaknesses in their clinical competencies. Seventy-eight percent agreed that it was comprehensive and 66% believed it was fair. About 46% felt that the 15 minutes allocated per station was inadequate. Most importantly, about half of the students raised concerns that personality, ethnicity, and/or gender, as well as interpatient and inter-assessor variability were potential sources of bias that could affect their scores. However, an overwhelming proportion of the students (90%) agreed that the OSCE provided a useful and practical learning experience.

    CONCLUSIONS: Students' perceptions and acceptance of the new method of assessment were positive. The survey further highlighted for future refinement the strengths and weaknesses associated with the development and implementation of an OSCE in the International Islamic University Malaysia's pharmacy curriculum.

    Matched MeSH terms: Educational Measurement/methods*; Educational Measurement/standards
  10. Kwa SK, Sheikh Mohd Amin MM, Ng AC
    Malays Fam Physician, 2007;2(1):18-21.
    MyJurnal
    Questions on Key Features Problems (KFP) are an important component of the theory paper for Part 1 of the membership examination of the Academy of Family Physicians of Malaysia (MAFP) and the Fellowship of the Royal Australian College of General Practitioners (FRACGP). This paper attempts to provide information on the format and marking scheme of the KFP. Expected answers for some KFP cases will be discussed and common errors made by candidates highlighted, with suggestions on how to avoid them.
    Matched MeSH terms: Educational Measurement
  11. Tiong TS
    Singapore Med J, 2008 Apr;49(4):328-32.
    PMID: 18418526
    INTRODUCTION: In medical practice, some patients consult doctors for reassurance of normality, e.g. patients with throat discomfort. Therefore, medical graduates should be competent in diagnosing clinical normality. One way to assess clinical competence is by the objective structured clinical examination (OSCE).
    METHODS: In 2002-2006, five batches of medical students who completed their otorhinolaryngology posting in Universiti Malaysia Sarawak were examined with the same OSCE question on clinically normal vocal cords. There were five subquestions concerning structures, clinical features, diagnosis and management. All students had prior slide show sessions regarding normal and abnormal laryngeal conditions.
    RESULTS: The total number of students in 2002, 2003, 2004, 2005 and 2006 was 25, 41, 20, 30 and 16, respectively, and 100 percent responded. The average percentages of students with correct answers to subquestions 1 to 5 were 19.4, 2.4, 2.2, 21.2 and 2.4, respectively, leaving relatively large percentages with incorrect answers naming various clinical abnormalities. A likely reason for these findings is "examination fever": the students assumed that all the stations presented clinical abnormalities, and so tried to differentiate one abnormality from another rather than from normality. Without OSCE questions on clinical normality, the assessment of undergraduates' clinical competence for real life would seem incomplete.
    CONCLUSION: This study showed that a significantly large percentage of students answered the clinical normality OSCE question incorrectly. This may mean that more clinical normality OSCE questions should be included in undergraduate medical examinations, to help undergraduates learn to look for, and become competent in recognising, clinical normality in real life.
    Matched MeSH terms: Educational Measurement*
  12. Abraham R, Ramnarayan K, Kamath A
    BMC Med Educ, 2008 Jul 24;8:40.
    PMID: 18652649 DOI: 10.1186/1472-6920-8-40
    BACKGROUND: It has been shown that basic science knowledge learned in the context of a clinical case is better comprehended and more easily applied by medical students than basic science knowledge learned in isolation. The present study was intended to validate the effectiveness of Clinically Oriented Physiology Teaching (COPT) in the undergraduate medical curriculum at Melaka Manipal Medical College (Manipal Campus), Manipal, India.

    METHODS: COPT was a teaching strategy in which students were taught physiology using cases and critical-thinking questions. Three batches of undergraduate medical students (n = 434) served as the experimental groups, for whom COPT was incorporated into the third block (teaching unit) of the physiology curriculum, and one batch (n = 149) served as the control group, for whom it was not. The experimental groups were trained to answer clinically oriented questions, whereas the control group was not. Both groups took a block examination, consisting of clinically oriented and recall questions, at the end of each block.

    RESULTS: Comparison of the experimental groups' pre-COPT and post-COPT essay examination scores revealed that the post-COPT scores were significantly higher. Comparison of post-COPT essay examination scores between the experimental and control groups showed that the experimental groups performed better than the control group. Feedback from the students indicated that they preferred COPT to didactic lectures.

    CONCLUSION: The study supports the view that assessment and teaching patterns should be aligned with each other, as shown by the better performance of the experimental groups compared with the control group. COPT was also found to be a useful adjunct to didactic lectures in teaching physiology.

    Matched MeSH terms: Educational Measurement/methods
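    The COPT abstract above reports two score comparisons (pre- versus post-COPT within the experimental groups, and experimental versus control on the post-COPT examination) without naming the statistical tests used. The sketch below shows one plausible analysis with paired and independent-samples t-tests; the score arrays are made up purely for illustration.

# Hypothetical re-creation of the two comparisons described in the COPT study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_copt  = rng.normal(55, 10, size=434)            # invented essay scores, experimental groups
post_copt = pre_copt + rng.normal(8, 6, size=434)   # invented post-COPT improvement
control   = rng.normal(58, 10, size=149)            # invented control-group scores

# Within-group (paired) comparison: pre- vs post-COPT for the same students.
t_paired, p_paired = stats.ttest_rel(pre_copt, post_copt)

# Between-group (independent) comparison: experimental post-COPT vs control.
t_ind, p_ind = stats.ttest_ind(post_copt, control, equal_var=False)

print(f"paired: t = {t_paired:.2f}, p = {p_paired:.3g}")
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3g}")
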
  13. Barman A
    Ann Acad Med Singap, 2008 Nov;37(11):957-63.
    PMID: 19082204
    INTRODUCTION: Setting, maintaining and periodically re-evaluating assessment standards are important issues in medical education. Cut-off scores are often "pulled from the air" or set at an arbitrary percentage. A large number of methods and procedures for setting a standard or cut score are described in the literature, and there is a high degree of uncertainty in the performance standards they produce: standards set using the existing methods reflect the subjective judgement of the standard setters. This review does not describe the existing standard-setting methods and procedures, but discusses the validity, reliability, feasibility and legal issues relating to standard setting.

    MATERIALS AND METHODS: This review of selected issues in standard setting is based on published articles by educational assessment researchers.

    RESULTS: A standard or cut-off score should determine whether the examinee has attained the requirements to be certified as competent. There is no perfect method for determining a cut score on a test, and none is agreed upon as the best; standard setting is not an exact science. The legitimacy of a standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments.

    CONCLUSION: A representative percentage of the must-know learning objectives in the curriculum may form the basis of test items and pass/fail marks. Practice analysis may help in identifying the must-know areas of the curriculum. A cut score set by this procedure may lend credibility, validity, defensibility and comparability to the standard. Having test items constructed by subject experts and vetted by multidisciplinary faculty members may ensure the reliability of the test as well as of the standard.

    Matched MeSH terms: Educational Measurement/methods*
  14. Loh KY, Nalliah S
    Med Educ, 2008 Nov;42(11):1127-8.
    PMID: 18991988 DOI: 10.1111/j.1365-2923.2008.03217.x
    Matched MeSH terms: Educational Measurement/methods*; Educational Measurement/standards
  15. Karanth KV, Kumar MV
    Ann Acad Med Singap, 2008 Dec;37(12):1008-11.
    PMID: 19159033
    The existing clinical teaching in small group sessions is focused on the patient's disease. This has a dual limitation: not only does clinical skill testing become secondary, but student involvement also slackens, as only 1 student is evaluated during the entire session. A new methodology of small group teaching being trialled shifted the focus to testing students' clinical skills, with emphasis on team participation through daily evaluation of the entire team. The group underwent training sessions in which the clinical skills were taught, demonstrated and practised on simulated patients (hear-see-do module). Later the entire small group, as a team, examined the patient, and each student was evaluated on 1 of 5 specific tasks: history taking, general examination, systemic examination, discussion and case write-up. Out of 170 students, 69 (study) and 101 (control) were randomly chosen and trained according to the new and existing methods, respectively. Senior faculty, who were blinded as to which method of teaching each student underwent, evaluated all the students. The marks obtained at 2 examinations were tabulated and compared for significance using the t-test. The marks showed a statistically significant improvement in the study group, indicating that the new module was an effective teaching methodology. Teaching effectiveness was also evaluated by student feedback on a 5-point Likert scale regarding improvement in knowledge, clinical and communication skills, and positive attitudes. The psychometric analysis was strongly indicative of the success of the module.
    Matched MeSH terms: Educational Measurement/methods
  16. Bosher S, Bowles M
    Nurs Educ Perspect, 2008 May-Jun;29(3):165-72.
    PMID: 18575241
    Recent research has indicated that language may be a source of construct-irrelevant variance for non-native speakers of English, or English as a second language (ESL) students, when they take exams. As a result, exams may not accurately measure knowledge of nursing content. One accommodation often used to level the playing field for ESL students is linguistic modification, a process by which the reading load of test items is reduced while the content and integrity of the item are maintained. Research on the effects of linguistic modification has been conducted on examinees in the K-12 population, but is just beginning in other areas. This study describes the collaborative process by which items from a pathophysiology exam were linguistically modified and subsequently evaluated for comprehensibility by ESL students. Findings indicate that in a majority of cases, modification improved examinees' comprehension of test items. Implications for test item writing and future research are discussed.
    Matched MeSH terms: Educational Measurement/methods*
  17. Mohd Said, N., Rogayah, A., Hafizah, A.
    Medicine & Health, 2008;3(2):274-279.
    MyJurnal
    The learning environment in universities plays an important role in producing highly competent graduates, especially in the nursing profession; the most important aspects are the teaching activities and the student-teacher interaction in the daily university environment. This study aimed to investigate International Islamic University Malaysia (IIUM) nursing students' experience of their teachers and to identify the relationship between teaching and the students' perception of learning in their learning environment. A quantitative method was used, utilising two of the five subscales of the Dundee Ready Educational Environment Measurement (DREEM): students' perception of learning (SPoL, 12 items) and students' perception of teachers (SPoT, 11 items). The questionnaire results revealed that the IIUM nursing students scored 28.54/48.00 on SPoL and 28.13/44.00 on SPoT; both findings show that the students' experience of their teachers and of the learning environment is moving in a positive direction. Regression analysis found that 51% of the total variation in the SPoT score was explained by the SPoL score. Although the overall SPoL score falls in the category of a more positive perception, 2 of the 12 items were poorly scored by the students, and listening to what students express is an important consideration for an educational institution. The overall mean SPoT score showed that the students perceived their teachers as moving in the right direction, although one item had a mean score of less than 2.00. These two subscales would therefore be expected to be reflected in similar outcomes, such as the students' academic performance and their experience of student life on campus. The issues arising from this DREEM study at IIUM point to the need to create a supportive environment and to design and implement interventions to remedy unsatisfactory elements of the learning environment, so that more effective and successful teaching and learning can be realised.
    Matched MeSH terms: Educational Measurement
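    The "51% of the total variation ... explained" figure in the DREEM abstract above corresponds to the R-squared of a simple regression of SPoT scores on SPoL scores. The sketch below shows how such a figure is obtained; the scores are invented and do not reproduce the study's data.

# Simple linear regression of SPoT on SPoL; R^2 is the proportion of
# variation explained. Illustrative data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
spol = rng.normal(28.5, 4.0, size=120)               # invented SPoL scores (out of 48)
spot = 0.7 * spol + rng.normal(8.0, 2.8, size=120)   # invented SPoT scores (out of 44)

fit = stats.linregress(spol, spot)
print(f"R^2 = {fit.rvalue ** 2:.2f}")                # proportion of SPoT variance explained by SPoL
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}")
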
  18. Lai NM
    Med Educ, 2009 May;43(5):479-80.
    PMID: 19344346 DOI: 10.1111/j.1365-2923.2009.03320.x
    Matched MeSH terms: Educational Measurement/methods*
  19. Lai NM, Teng CL
    Hong Kong Med J, 2009 Oct;15(5):332-8.
    PMID: 19801689
    OBJECTIVE: To assess the impact of a structured, clinically integrated evidence-based undergraduate medicine training programme using a validated tool.
    DESIGN: Before-and-after study with no control group.
    SETTING: A medical school in Malaysia with an affiliated district clinical training hospital.
    PARTICIPANTS: Seventy-two medical students in their final 6 months of training (senior clerkship) encountered between March and August 2006.
    INTERVENTION: Our educational intervention included two plenary lectures at the beginning of the clerkship, small-group bedside question-generating sessions, and a journal club in the paediatric posting.
    MAIN OUTCOME MEASURES: Our primary outcome was evidence-based medicine knowledge, measured using the adapted Fresno test (score range, 0-212) administered before and after the intervention. We evaluated the performance of the whole cohort, as well as the scores of different subgroups that received separate small-group interventions in their paediatric posting. We also measured the correlation between the students' evidence-based medicine test scores and overall academic performances in the senior clerkship.
    RESULTS: Fifty-five paired scripts were analysed. Evidence-based medicine knowledge improved significantly post-intervention (means: pre-test, 84 [standard deviation, 24]; post-test, 122 [22]; P<0.001). Post-test scores were significantly correlated with overall senior clerkship performance (r=0.329, P=0.014). Lower post-test scores were observed in subgroups that received their small-group training earlier as opposed to later in the clerkship.
    CONCLUSIONS: Clinically integrated undergraduate evidence-based medicine training produced an educationally important improvement in evidence-based medicine knowledge. Student performance in the adapted Fresno test to some extent reflected their overall academic performance in the senior clerkship. Loss of evidence-based medicine knowledge, which might have occurred soon after small-group training, is a concern that warrants future assessment.
    Matched MeSH terms: Educational Measurement
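    The evidence-based medicine study above reports a pre/post comparison of adapted Fresno test scores (range 0-212) on 55 paired scripts and a correlation between post-test scores and overall clerkship performance. The sketch below shows the corresponding paired t-test and Pearson correlation; the data are invented and the reported figures (means 84 and 122, r = 0.329) are not reproduced.

# Paired pre/post comparison and correlation with clerkship performance,
# using invented scores for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pre  = rng.normal(84, 24, size=55)                          # invented pre-test Fresno scores
post = np.clip(pre + rng.normal(38, 20, size=55), 0, 212)   # invented post-test Fresno scores
clerkship = 50 + 0.1 * post + rng.normal(0, 5, size=55)     # invented overall clerkship marks

t_stat, p_val = stats.ttest_rel(post, pre)                  # paired comparison
r, r_p = stats.pearsonr(post, clerkship)                    # correlation with performance

print(f"paired t = {t_stat:.2f}, p = {p_val:.3g}")
print(f"Pearson r = {r:.3f}, p = {r_p:.3g}")
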