Displaying publications 1 - 20 of 48 in total

  1. Armstrong HE, Tan ES
    Med Educ, 1979 Mar;13(2):99-102.
    PMID: 431423 DOI: 10.1111/j.1365-2923.1979.tb00930.x
    Behavioural self-analysis projects were introduced into the second year medical curriculum in behavioural sciences at the University of Malaya. Students' performance and their evaluation of the experience were compared with those of American medical students. It was concluded that the receptivity of medical students to principles of behaviour therapy is relatively similar in the two societies.
    Matched MeSH terms: Educational Measurement/methods*
  2. Shahabudin SH
    Med Educ, 1983 Sep;17(5):316-8.
    PMID: 6621433
    The belief that it is unwise to alter the initial response to a multiple choice question is questioned. Among 39 380 MCQ responses, there were 1818 changes (4.62%), of which 21.9% were correct-to-incorrect responses, 46.3% incorrect-to-correct and 31.8% incorrect-to-incorrect. The benefit of changing answers was much more marked among the better students, with incorrect-to-correct changes accounting for 61% of the changes in the upper group, 42% in the middle group and 34% in the lower group.
    Matched MeSH terms: Educational Measurement/methods*
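    The figures reported in this entry are simple proportions of the total responses and of the changes; a quick check in Python, using the counts given in the abstract:

        total_responses = 39_380     # MCQ responses examined
        changes = 1_818              # answers that were altered

        # Overall proportion of responses that were changed (~4.62%)
        print(round(100 * changes / total_responses, 2))

        # Breakdown of the 1818 changes by type, as reported in the abstract
        for label, share in [("correct -> incorrect", 0.219),
                             ("incorrect -> correct", 0.463),
                             ("incorrect -> incorrect", 0.318)]:
            print(label, round(share * changes))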
  3. Schwartz PL, Crooks TJ, Sein KT
    Med Educ, 1986 Sep;20(5):399-406.
    PMID: 3762442
    It has been suggested that the 'ideal' measure of reliability of an examination is obtained by test and retest using the one examination on the same group of students. However, because of practical and theoretical arguments, most reported reliabilities for multiple choice examinations in medicine are actually measures of internal consistency. While attempting to minimize the effects of potential interfering factors, we have undertaken a study of true test-retest reliability of multiple true-false type multiple choice questions in preclinical medical subjects. From three end-of-term examinations, 363 items (106 of 449 from term 1, 150 of 499 from term 2, and 107 of 492 from term 3) were repeated in the final examination (out of 999 total items). Between test and retest, there was little overall decrease in the percentage of items answered correctly and a decrease of only 3.4 in the percentage score after correction for guessing. However, there was an inverse relation between test-retest interval and decrease in performance. Between test and retest, performance decreased significantly on 33 items and increased significantly on 11 items. Test-retest correlation coefficients were 0.70 to 0.78 for items from the separate terms and 0.885 for all items that were retested. Thus, overall, these items had a very high degree of reliability, approximately the 0.9 which has been specified as the requirement for being able to distinguish between individuals.
    Matched MeSH terms: Educational Measurement/methods*
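    The reliability figures in this entry rest on two standard item-analysis computations: a correction for guessing and a test-retest (Pearson) correlation. A minimal Python sketch with hypothetical scores (for two-option true/false items the guessing correction reduces to right minus wrong):

        import numpy as np

        def corrected_score(n_right, n_wrong, n_options=2):
            # Classical correction for guessing: S = R - W / (k - 1).
            # For true/false items (k = 2) this is simply right minus wrong.
            return n_right - n_wrong / (n_options - 1)

        # Hypothetical percentage scores for six students on the same items
        # at the end-of-term test and again at the retest (final examination)
        test   = np.array([62.0, 71.5, 55.0, 80.2, 48.7, 66.3])
        retest = np.array([60.1, 69.0, 53.8, 78.5, 47.0, 63.9])

        # Test-retest reliability as the Pearson correlation between the two administrations
        r = np.corrcoef(test, retest)[0, 1]
        print(round(r, 3))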
  4. Schwartz PL, Kyaw Tun Sein
    Med Educ, 1987 May;21(3):265-8.
    PMID: 3600444
    Matched MeSH terms: Educational Measurement/methods*
  5. Roslani AM, Sein KT, Nordin R
    Med J Malaysia, 1989 Mar;44(1):75-82.
    PMID: 2626116
    The Phase I and Phase II undergraduate teaching programmes of the School of Medical Sciences were reviewed at the end of the 1985/86 academic year. It was found that deviations from the School's philosophy had crept into the implementation process. Modifications were therefore made in the Phase I and Phase II programmes with a view to: (i) reducing content, (ii) promoting integration, (iii) improving the clinical examination skills of students, and (iv) providing more opportunities to students for self-learning, reinforcement and application of knowledge. The number of assessment items in Phase I and the frequency of assessment in Phase II were also found to be inappropriate, and so modifications in assessment were made to rectify this situation.
    Matched MeSH terms: Educational Measurement/methods*
  6. Tan CP, Rokiah P
    Med J Malaysia, 2005 Aug;60 Suppl D:48-53.
    PMID: 16315624
    Formative and summative student assessment has always been of concern to medical teachers, and this is especially important at the level of graduating doctors. The effectiveness and comprehensiveness of the clinical training provided is tested with the use of clinical cases, either with real patients who have genuine medical conditions, or with the use of standardised patients who are trained to simulate accurately actual patients. The Objective Structured Clinical Examination (OSCE) is one method of assessing the adequacy of clinical skills of medical students, and their level of competence. It can be used to test a variety of skills such as history taking (communication and interpersonal skills) and performing aspects of physical examination, undertaking emergency procedures, and interpreting investigational data. It can also be used to ensure an adequate depth and breadth of coverage of clinical skills expected of a graduating doctor.
    Matched MeSH terms: Educational Measurement/methods*
  7. Sim SM, Rasiah RI
    Ann Acad Med Singap, 2006 Feb;35(2):67-71.
    PMID: 16565756
    INTRODUCTION: This paper reports the relationship between the difficulty level and the discrimination power of true/false-type multiple-choice questions (MCQs) in a multidisciplinary paper for the para-clinical year of an undergraduate medical programme.

    MATERIALS AND METHODS: MCQ items in papers taken from Year II Parts A, B and C examinations for Sessions 2001/02, and Part B examinations for 2002/03 and 2003/04, were analysed to obtain their difficulty indices and discrimination indices. Each paper consisted of 250 true/false items (50 questions of 5 items each) on topics drawn from different disciplines. The questions were first constructed and vetted by the individual departments before being submitted to a central committee, where the final selection of the MCQs was made, based purely on the academic judgement of the committee.

    RESULTS: There was a wide distribution of item difficulty indices in all the MCQ papers analysed. Furthermore, the relationship between the difficulty index (P) and discrimination index (D) of the MCQ items in a paper was not linear, but more dome-shaped. Maximal discrimination (D = 51% to 71%) occurred with moderately easy/difficult items (P = 40% to 74%). On average, about 38% of the MCQ items in each paper were "very easy" (P ≥ 75%), while about 9% were "very difficult" (P < 25%). About two-thirds of these very easy/difficult items had "very poor" or even negative discrimination (D ≤ 20%).

    CONCLUSIONS: MCQ items that demonstrate good discriminating potential tend to be moderately difficult items, and the moderately-to-very difficult items are more likely to show negative discrimination. There is a need to evaluate the effectiveness of our MCQ items.

    Matched MeSH terms: Educational Measurement/methods*
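    The difficulty index (P) and discrimination index (D) used in this entry are conventional item-analysis statistics: P is the percentage of examinees answering an item correctly, and D is the difference in proportion correct between high- and low-scoring groups. A minimal Python sketch with hypothetical data (the 27% upper/lower split is an assumption, not stated in the abstract):

        import numpy as np

        def item_indices(item_scores, total_scores, frac=0.27):
            # item_scores  : 0/1 per examinee for one item (1 = correct)
            # total_scores : total test score per examinee, used for ranking
            # frac         : fraction of examinees in the upper and lower groups
            item_scores = np.asarray(item_scores, dtype=float)
            order = np.argsort(total_scores)                 # ascending by total score
            n = max(1, int(round(frac * len(total_scores))))
            lower, upper = order[:n], order[-n:]
            P = item_scores.mean() * 100                     # difficulty index (%)
            D = (item_scores[upper].mean() - item_scores[lower].mean()) * 100
            return P, D

        # Hypothetical responses of 10 examinees to one true/false item
        item  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
        total = [68, 72, 41, 80, 35, 60, 75, 44, 66, 78]
        print(item_indices(item, total))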
  8. Torke S, Upadhya S, Abraham RR, Ramnarayan K
    Adv Physiol Educ, 2006 Mar;30(1):48-9.
    PMID: 16481610
    Matched MeSH terms: Educational Measurement/methods*
  9. Rao M
    Adv Physiol Educ, 2006 Jun;30(2):95.
    PMID: 16709743
    Matched MeSH terms: Educational Measurement/methods*
  10. Sim SM, Azila NM, Lian LH, Tan CP, Tan NH
    Ann Acad Med Singap, 2006 Sep;35(9):634-41.
    PMID: 17051280
    INTRODUCTION: A process-oriented instrument was developed for the summative assessment of student performance during problem-based learning (PBL) tutorials. This study evaluated (1) the acceptability of the instrument by tutors and (2) the consistency of assessment scores by different raters.

    MATERIALS AND METHODS: A survey of the tutors who had used the instrument was conducted to determine whether the assessment instrument or form was user-friendly. The 4 competencies assessed, using a 5-point rating scale, were (1) participation and communication skills, (2) cooperation or team-building skills, (3) comprehension or reasoning skills and (4) knowledge or information-gathering skills. Tutors were given a set of criteria guidelines for scoring the students' performance in these 4 competencies. Tutors were not attached to a particular PBL group, but took turns to facilitate different groups on different case or problem discussions. Assessment scores for one cohort of undergraduate medical students in their respective PBL groups in Year I (2003/2004) and Year II (2004/2005) were analysed. The consistency of scores was analysed using intraclass correlation.

    RESULTS: The majority of the tutors surveyed expressed no difficulty in using the instrument and agreed that it helped them assess the students fairly. Analysis of the scores obtained for the above cohort indicated that the different raters were relatively consistent in their assessment of student performance, despite a small number consistently showing either "strict" or "indiscriminate" rating practice.

    CONCLUSION: The instrument designed for the assessment of student performance in the PBL tutorial classroom setting is user-friendly and is reliable when used judiciously with the criteria guidelines provided.

    Matched MeSH terms: Educational Measurement/methods*
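    The rater-consistency analysis in this entry relies on the intraclass correlation. As a rough illustration, a one-way random-effects ICC(1,1) can be computed directly from a students-by-raters score matrix; the data and the specific ICC form below are illustrative assumptions, not the paper's actual model:

        import numpy as np

        def icc_one_way(ratings):
            # ratings: 2-D array, rows = students, columns = raters/occasions
            ratings = np.asarray(ratings, dtype=float)
            n, k = ratings.shape
            grand = ratings.mean()
            row_means = ratings.mean(axis=1)
            # Between-subjects and within-subjects mean squares
            msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
            msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
            # ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
            return (msb - msw) / (msb + (k - 1) * msw)

        # Hypothetical 5-point PBL competency scores: 6 students rated by 3 tutors
        scores = [[4, 4, 3],
                  [3, 3, 3],
                  [5, 4, 5],
                  [2, 3, 2],
                  [4, 5, 4],
                  [3, 2, 3]]
        print(round(icc_one_way(scores), 2))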
  11. Awaisu A, Mohamed MH, Al-Efan QA
    Am J Pharm Educ, 2007 Dec 15;71(6):118.
    PMID: 19503702
    OBJECTIVES: To assess bachelor of pharmacy students' overall perception and acceptance of an objective structured clinical examination (OSCE), a new method of clinical competence assessment in pharmacy undergraduate curriculum at our Faculty, and to explore its strengths and weaknesses through feedback.

    METHODS: A cross-sectional survey was conducted via a validated 49-item questionnaire, administered immediately after all students had completed the examination. The questionnaire comprised questions to evaluate the content and structure of the examination, perceptions of OSCE validity and reliability, and a rating of the OSCE relative to other assessment methods. Open-ended follow-up questions were included to generate qualitative data.

    RESULTS: Over 80% of the students found the OSCE to be helpful in highlighting areas of weaknesses in their clinical competencies. Seventy-eight percent agreed that it was comprehensive and 66% believed it was fair. About 46% felt that the 15 minutes allocated per station was inadequate. Most importantly, about half of the students raised concerns that personality, ethnicity, and/or gender, as well as interpatient and inter-assessor variability were potential sources of bias that could affect their scores. However, an overwhelming proportion of the students (90%) agreed that the OSCE provided a useful and practical learning experience.

    CONCLUSIONS: Students' perceptions and acceptance of the new method of assessment were positive. The survey further highlighted for future refinement the strengths and weaknesses associated with the development and implementation of an OSCE in the International Islamic University Malaysia's pharmacy curriculum.

    Matched MeSH terms: Educational Measurement/methods*
  12. Abraham R, Ramnarayan K, Kamath A
    BMC Med Educ, 2008 Jul 24;8:40.
    PMID: 18652649 DOI: 10.1186/1472-6920-8-40
    BACKGROUND: It has been proved that basic science knowledge learned in the context of a clinical case is actually better comprehended and more easily applied by medical students than basic science knowledge learned in isolation. The present study intended to validate the effectiveness of Clinically Oriented Physiology Teaching (COPT) in undergraduate medical curriculum at Melaka Manipal Medical College (Manipal Campus), Manipal, India.

    METHODS: COPT was a teaching strategy wherein students were taught physiology using cases and critical thinking questions. Three batches of undergraduate medical students (n = 434) served as the experimental groups, for whom COPT was incorporated into the third block (teaching unit) of the physiology curriculum, and one batch (n = 149) served as the control group, for whom COPT was not incorporated. The experimental group of students were trained to answer clinically oriented questions whereas the control group of students were not. Both groups of students undertook a block exam, consisting of clinically oriented questions and recall questions, at the end of each block.

    RESULTS: Comparison of pre-COPT and post-COPT essay exam scores of experimental group of students revealed that the post-COPT scores were significantly higher compared to the pre-COPT scores. Comparison of post-COPT essay exam scores of the experimental group and control group of students revealed that the experimental group of students performed better compared to the control group. Feedback from the students indicated that they preferred COPT to didactic lectures.

    CONCLUSION: The study supports the view that assessment and teaching patterns should be aligned with each other, as shown by the better performance of the experimental group of students compared with the control group. COPT was also found to be a useful adjunct to didactic lectures in teaching physiology.

    Matched MeSH terms: Educational Measurement/methods
  13. Barman A
    Ann Acad Med Singap, 2008 Nov;37(11):957-63.
    PMID: 19082204
    INTRODUCTION: Setting, maintaining and periodically re-evaluating assessment standards are important issues in medical education. Cut-off scores are often "pulled from the air" or set to an arbitrary percentage. A large number of methods/procedures used to set a standard or cut score are described in the literature. There is a high degree of uncertainty in the performance standards set by using these methods, and standards set using the existing methods reflect the subjective judgement of the standard setters. This review does not describe the existing standard-setting methods/procedures but discusses the validity, reliability, feasibility and legal issues relating to standard setting.

    MATERIALS AND METHODS: This review is on some of the issues in standard setting based on the published articles of educational assessment researchers.

    RESULTS: The standard or cut-off score should determine whether the examinee has attained the requirements to be certified as competent. There is no perfect method for determining the cut score on a test, and none is agreed upon as the best method. Setting a standard is not an exact science. The legitimacy of the standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments.

    CONCLUSION: A representative percentage of must-know learning objectives in the curriculum may be the basis of test items and pass/fail marks. Practice analysis may help in identifying the must-know areas of the curriculum. A cut score set by this procedure may give the standard credibility, validity, defensibility and comparability. Constructing the test items by subject experts and having them vetted by multidisciplinary faculty members may ensure the reliability of the test as well as of the standard.

    Matched MeSH terms: Educational Measurement/methods*
  14. Loh KY, Nalliah S
    Med Educ, 2008 Nov;42(11):1127-8.
    PMID: 18991988 DOI: 10.1111/j.1365-2923.2008.03217.x
    Matched MeSH terms: Educational Measurement/methods*
  15. Karanth KV, Kumar MV
    Ann Acad Med Singap, 2008 Dec;37(12):1008-11.
    PMID: 19159033
    The existing clinical teaching in small group sessions is focused on the patient's disease. This has a dual limitation: not only does clinical skill testing become secondary, but student involvement also slackens, as only 1 student is evaluated during the entire session. A new methodology of small group teaching being trialled shifted the focus to testing students' clinical skills, with emphasis on team participation through daily evaluation of the entire team. The group first underwent training sessions in which the clinical skills were taught, demonstrated and practised on simulated patients (hear-see-do module). Later, the entire small group, as a team, examined the patient, and each student was evaluated on 1 of 5 specific tasks: history taking, general examination, systemic examination, discussion and case write-up. Out of 170 students, 69 students (study) and 101 students (control) were randomly chosen and trained according to the new and existing methods respectively. Senior faculty (who were blinded as to which method of teaching each student had undergone) evaluated all the students. The marks obtained at 2 examinations were tabulated and compared for significance using the t-test. The difference in the marks obtained showed a statistically significant improvement in the study group, indicating that the new module was an effective methodology of teaching. Teaching effectiveness was also evaluated by student feedback on improvement in knowledge, clinical and communication skills, and positive attitudes on a 5-point Likert scale; this analysis was strongly indicative of the success of the module.
    Matched MeSH terms: Educational Measurement/methods
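    The between-group comparison described in this entry is an independent-samples t-test on examination marks; a minimal sketch with hypothetical marks (the actual data are not reported in the abstract):

        import numpy as np
        from scipy import stats

        # Hypothetical examination marks (out of 100) under the two teaching methods
        study_group   = np.array([72, 68, 75, 80, 66, 71, 77, 69])  # new hear-see-do module
        control_group = np.array([65, 60, 70, 62, 58, 67, 63, 61])  # existing method

        # Independent-samples t-test, as described in the abstract
        t, p = stats.ttest_ind(study_group, control_group)
        print(round(t, 2), round(p, 4))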
  16. Bosher S, Bowles M
    Nurs Educ Perspect, 2008 May-Jun;29(3):165-72.
    PMID: 18575241
    Recent research has indicated that language may be a source of construct-irrelevant variance for non-native speakers of English, or English as a second language (ESL) students, when they take exams. As a result, exams may not accurately measure knowledge of nursing content. One accommodation often used to level the playing field for ESL students is linguistic modification, a process by which the reading load of test items is reduced while the content and integrity of the item are maintained. Research on the effects of linguistic modification has been conducted on examinees in the K-12 population, but is just beginning in other areas. This study describes the collaborative process by which items from a pathophysiology exam were linguistically modified and subsequently evaluated for comprehensibility by ESL students. Findings indicate that in a majority of cases, modification improved examinees' comprehension of test items. Implications for test item writing and future research are discussed.
    Matched MeSH terms: Educational Measurement/methods*
  17. Lai NM
    Med Educ, 2009 May;43(5):479-80.
    PMID: 19344346 DOI: 10.1111/j.1365-2923.2009.03320.x
    Matched MeSH terms: Educational Measurement/methods*
  18. Loh KY, Kwa SK
    Med Educ, 2009 Nov;43(11):1101-2.
    PMID: 19874515 DOI: 10.1111/j.1365-2923.2009.03501.x
    Matched MeSH terms: Educational Measurement/methods*
  19. Goh BL, Ganeshadeva Yudisthra M, Lim TO
    Semin Dial, 2009 Mar-Apr;22(2):199-203.
    PMID: 19426429 DOI: 10.1111/j.1525-139X.2008.00536.x
    Peritoneal dialysis (PD) catheter insertion success rate is known to vary among different operators, and peritoneoscope PD catheter insertion demands mastery of a steep learning curve. Defining a learning curve using a continuous monitoring tool such as a Cumulative Summation (CUSUM) chart is useful for planning training programs. We aimed to analyze the learning curve of a trainee nephrologist in performing peritoneoscope PD catheter implantation with a CUSUM chart. This was a descriptive single-center study using collected data from all PD patients who underwent peritoneoscope PD catheter insertion in our hospital. A CUSUM model was used to evaluate the learning curve for peritoneoscope PD catheter insertion. An unacceptable primary failure rate (i.e., catheter malfunction within 1 month of insertion) was defined as >40% and acceptable performance was defined as <25%. The CUSUM chart showed the learning curve of the trainee in acquiring a new skill: as the trainee became more skillful with training, the CUSUM curve flattened. Technical proficiency of the trainee nephrologist in performing peritoneoscope Tenckhoff catheter insertion (<25% primary catheter malfunction) was attained after 23 procedures. We also noted earlier in our program that Tenckhoff catheters directed to the right iliac fossa had poorer survival than catheters directed to the left iliac fossa; survival of catheters directed to the left iliac fossa was 94.6%, while survival of catheters directed to the right iliac fossa was 48.6% (p < 0.01). We advocate that quality control of Tenckhoff catheter insertion be performed using CUSUM charting as described, to monitor primary catheter dysfunction (i.e., failure of catheter function within 1 month of insertion), primary leak (i.e., within 1 month of catheter insertion), and primary peritonitis (i.e., within 2 weeks of catheter insertion).
    Matched MeSH terms: Educational Measurement/methods*
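    The CUSUM learning curve described in this entry accumulates, procedure by procedure, the difference between each observed outcome and an acceptable failure rate: the curve climbs while failures exceed that rate and flattens once the operator reaches proficiency. A minimal sketch (the outcome sequence is hypothetical; the 25% acceptable rate is taken from the abstract, and the chart's formal decision limits are omitted):

        import numpy as np

        def cusum_failures(outcomes, acceptable_rate=0.25):
            # outcomes: 1 = primary catheter failure, 0 = success, in procedure order
            # The running sum rises with each failure and drifts down with each
            # success, so a flattening curve signals that proficiency is reached.
            x = np.asarray(outcomes, dtype=float)
            return np.cumsum(x - acceptable_rate)

        # Hypothetical sequence of 30 insertions: frequent early failures, then mostly successes
        outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0] + [0, 0, 1, 0, 0] * 4
        curve = cusum_failures(outcomes, acceptable_rate=0.25)
        print(np.round(curve, 2))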