Displaying publications 1 - 20 of 48 in total

  1. Solarsh G, Lindley J, Whyte G, Fahey M, Walker A
    Acad Med, 2012 Jun;87(6):807-14.
    PMID: 22643380 DOI: 10.1097/ACM.0b013e318253226a
    The learning objectives, curriculum content, and assessment standards for distributed medical education programs must be aligned across the health care systems and community contexts in which their students train. In this article, the authors describe their experiences at Monash University implementing a distributed medical education program at metropolitan, regional, and rural Australian sites and an offshore Malaysian site, using four different implementation models. Standardizing learning objectives, curriculum content, and assessment standards across all sites while allowing for site-specific implementation models created challenges for educational alignment. At the same time, this diversity created opportunities to customize the curriculum to fit a variety of settings and for innovations that have enriched the educational system as a whole.

    Developing these distributed medical education programs required a detailed review of Monash's learning objectives and curriculum content and their relevance to the four different sites. It also required a review of assessment methods to ensure an identical and equitable system of assessment for students at all sites. It additionally demanded changes to the systems of governance and the management of the educational program away from a centrally constructed and mandated curriculum to more collaborative approaches to curriculum design and implementation involving discipline leaders at multiple sites.

    Distributed medical education programs, like that at Monash, in which cohorts of students undertake the same curriculum in different contexts, provide potentially powerful research platforms to compare different pedagogical approaches to medical education and the impact of context on learning outcomes.
    Matched MeSH terms: Educational Measurement/methods
  2. Perera J, Mohamadou G, Kaur S
    Adv Health Sci Educ Theory Pract, 2010 May;15(2):185-93.
    PMID: 19757129 DOI: 10.1007/s10459-009-9191-1
    Feedback is essential to guide students towards expected performance goals. The usefulness of teacher feedback on improving communication skills (CS) has been well documented. It has been proposed that self-assessment and peer feedback have an equally important role to play in enhancing learning. This is the focus of this study. Objectively structured self-assessment and peer feedback (OSSP) was incorporated into small group CS teaching sessions of a group of semester one medical students who were learning CS for the first time, to minimise the influence of previous educational interventions. A control group matched for academic performance, gender and age was used to enable parallel evaluation of the innovation. A reflective log containing closed and open ended questions was used for OSSP. Facilitators and simulated patients provided feedback to students in both groups during CS learning as per routine practice. Student perceptions of OSSP and its acceptability as a learning method were explored using a questionnaire. CS were assessed in both groups using an objective structured clinical examination (OSCE) as per routine practice, and assessors were blinded as to which group each student belonged. The mean total score and scores for specific areas of interview skills were significantly higher in the experimental group. Analysis of the questionnaire data showed that students gained fresh insights into specific areas such as empathy, addressing patients' concerns and interview style during OSSP, which clearly corroborated the specific differences in scores. The free text comments were highly encouraging as to the acceptability of OSSP, in spite of 67% having never been exposed to formal self- and peer-assessment during pre-university studies. OSSP promotes effective CS learning and learner acceptability is high.
    Matched MeSH terms: Educational Measurement/methods
  3. Torke S, Upadhya S, Abraham RR, Ramnarayan K
    Adv Physiol Educ, 2006 Mar;30(1):48-9.
    PMID: 16481610
    Matched MeSH terms: Educational Measurement/methods*
  4. Rao M
    Adv Physiol Educ, 2006 Jun;30(2):95.
    PMID: 16709743
    Matched MeSH terms: Educational Measurement/methods*
  5. Prashanti E, Ramnarayan K
    Adv Physiol Educ, 2019 Jun 01;43(2):99-102.
    PMID: 30835147 DOI: 10.1152/advan.00173.2018
    In an era that is seemingly saturated with standardized tests of all hues and stripes, it is easy to forget that assessments not only measure the performance of students, but also consolidate and enhance their learning. Assessment for learning is best elucidated as a process by which the assessment information can be used by teachers to modify their teaching strategies while students adjust and alter their learning approaches. Effectively implemented, formative assessments can convert classroom culture to one that resonates with the triumph of learning. In this paper, we present 10 maxims that show ways that formative assessments can be better understood, appreciated, and implemented.
    Matched MeSH terms: Educational Measurement/methods*
  6. Awaisu A, Mohamed MH, Al-Efan QA
    Am J Pharm Educ, 2007 Dec 15;71(6):118.
    PMID: 19503702
    OBJECTIVES: To assess bachelor of pharmacy students' overall perception and acceptance of an objective structured clinical examination (OSCE), a new method of clinical competence assessment in pharmacy undergraduate curriculum at our Faculty, and to explore its strengths and weaknesses through feedback.

    METHODS: A cross-sectional survey was conducted via a validated 49-item questionnaire, administered immediately after all students completed the examination. The questionnaire comprised questions to evaluate the content and structure of the examination, perceptions of OSCE validity and reliability, and ratings of the OSCE in relation to other assessment methods. Open-ended follow-up questions were included to generate qualitative data.

    RESULTS: Over 80% of the students found the OSCE to be helpful in highlighting areas of weaknesses in their clinical competencies. Seventy-eight percent agreed that it was comprehensive and 66% believed it was fair. About 46% felt that the 15 minutes allocated per station was inadequate. Most importantly, about half of the students raised concerns that personality, ethnicity, and/or gender, as well as interpatient and inter-assessor variability were potential sources of bias that could affect their scores. However, an overwhelming proportion of the students (90%) agreed that the OSCE provided a useful and practical learning experience.

    CONCLUSIONS: Students' perceptions and acceptance of the new method of assessment were positive. The survey further highlighted for future refinement the strengths and weaknesses associated with the development and implementation of an OSCE in the International Islamic University Malaysia's pharmacy curriculum.

    Matched MeSH terms: Educational Measurement/methods*
  7. Lee Chin K, Ling Yap Y, Leng Lee W, Chang Soh Y
    Am J Pharm Educ, 2014 Oct 15;78(8):153.
    PMID: 25386018 DOI: 10.5688/ajpe788153
    To determine whether human patient simulation (HPS) is superior to case-based learning (CBL) in teaching diabetic ketoacidosis (DKA) and thyroid storm (TS) to pharmacy students.
    Matched MeSH terms: Educational Measurement/methods
  8. Hadie SNH, Hassan A, Ismail ZIM, Asari MA, Khan AA, Kasim F, et al.
    Anat Sci Educ, 2017 Sep;10(5):423-432.
    PMID: 28135037 DOI: 10.1002/ase.1683
    Students' perceptions of the education environment influence their learning. Ever since the major medical curriculum reform, anatomy education has undergone several changes in terms of its curriculum, teaching modalities, learning resources, and assessment methods. By measuring students' perceptions of the anatomy education environment, valuable information can be obtained to facilitate improvements in teaching and learning. Hence, it is important to use a valid inventory that specifically measures attributes of the anatomy education environment. In this study, a new 11-factor, 132-item Anatomy Education Environment Measurement Inventory (AEEMI) was developed using the Delphi technique and was validated in a Malaysian public medical school. The inventory was found to have satisfactory content evidence (scale-level content validity index [total] = 0.646); good response process evidence (scale-level face validity index [total] = 0.867); and acceptable to high internal consistency, with Raykov composite reliability estimates for the six factors in the range of 0.604-0.876. The best fit model of the AEEMI was achieved with six domains and 25 items (χ² = 415.67, P
    Matched MeSH terms: Educational Measurement/methods*
  9. Barman A
    Ann Acad Med Singap, 2008 Nov;37(11):957-63.
    PMID: 19082204
    INTRODUCTION: Setting, maintaining, and periodically re-evaluating assessment standards are important issues in medical education. Cut-off scores are often "pulled from the air" or set at an arbitrary percentage. A large number of methods/procedures for setting a standard or cut score are described in the literature, and there is a high degree of uncertainty in the performance standards they produce. Standards set using the existing methods reflect the subjective judgment of the standard setters. This review does not describe the existing standard-setting methods/procedures but discusses the validity, reliability, feasibility, and legal issues relating to standard setting.

    MATERIALS AND METHODS: This review is on some of the issues in standard setting based on the published articles of educational assessment researchers.

    RESULTS: A standard or cut-off score should determine whether the examinee has attained the requirements to be certified as competent. There is no perfect method for determining a cut score on a test, and none is agreed upon as the best. Setting standards is not an exact science. The legitimacy of a standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments.

    CONCLUSION: Representative percentage of must-know learning objectives in the curriculum may be the basis of test items and pass/fail marks. Practice analysis may help in identifying the must-know areas of curriculum. Cut score set by this procedure may give the credibility, validity, defensibility and comparability of the standard. Constructing the test items by subject experts and vetted by multi-disciplinary faculty members may ensure the reliability of the test as well as the standard.

    Matched MeSH terms: Educational Measurement/methods*
  10. Karanth KV, Kumar MV
    Ann Acad Med Singap, 2008 Dec;37(12):1008-11.
    PMID: 19159033
    The existing clinical teaching in small group sessions is focused on the patient's disease. This has a dual limitation: not only does clinical skill testing become secondary, but student involvement also slackens, as only 1 student is evaluated during the entire session. A new methodology of small group teaching shifted the focus to testing students' clinical skills, with emphasis on team participation through daily evaluation of the entire team. The group underwent training sessions in which the clinical skills were taught, demonstrated, and practiced on simulated patients (hear-see-do module). Later the entire small group, as a team, examined the patient, and each student was evaluated on 1 of 5 specific tasks--history taking, general examination, systemic examination, discussion and case write-up. Out of 170 students, 69 students (study) and 101 students (control) were randomly chosen and trained according to the new and existing methods respectively. Senior faculty (who were blinded as to which method of teaching each student underwent) evaluated all the students. The marks obtained at 2 examinations were tabulated and compared for significance using the t-test. The difference in marks showed a statistically significant improvement in the study group, indicating that the new module was an effective methodology of teaching. Teaching effectiveness was evaluated by student feedback regarding improvement in knowledge, clinical and communication skills, and positive attitudes on a 5-point Likert scale. Psychometric analysis was strongly indicative of the success of the module.
    Matched MeSH terms: Educational Measurement/methods
  11. Sim SM, Azila NM, Lian LH, Tan CP, Tan NH
    Ann Acad Med Singap, 2006 Sep;35(9):634-41.
    PMID: 17051280
    INTRODUCTION: A process-oriented instrument was developed for the summative assessment of student performance during problem-based learning (PBL) tutorials. This study evaluated (1) the acceptability of the instrument by tutors and (2) the consistency of assessment scores by different raters.

    MATERIALS AND METHODS: A survey of the tutors who had used the instrument was conducted to determine whether the assessment instrument or form was user-friendly. The 4 competencies assessed, using a 5-point rating scale, were (1) participation and communication skills, (2) cooperation or team-building skills, (3) comprehension or reasoning skills and (4) knowledge or information-gathering skills. Tutors were given a set of criteria guidelines for scoring the students' performance in these 4 competencies. Tutors were not attached to a particular PBL group, but took turns to facilitate different groups on different case or problem discussions. Assessment scores for one cohort of undergraduate medical students in their respective PBL groups in Year I (2003/2004) and Year II (2004/2005) were analysed. The consistency of scores was analysed using intraclass correlation.

    RESULTS: The majority of the tutors surveyed expressed no difficulty in using the instrument and agreed that it helped them assess the students fairly. Analysis of the scores obtained for the above cohort indicated that the different raters were relatively consistent in their assessment of student performance, despite a small number consistently showing either "strict" or "indiscriminate" rating practice.

    CONCLUSION: The instrument designed for the assessment of student performance in the PBL tutorial classroom setting is user-friendly and is reliable when used judiciously with the criteria guidelines provided.

    Matched MeSH terms: Educational Measurement/methods*
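    The rater-consistency analysis in the abstract above used intraclass correlation. As an illustrative sketch only (this function is not taken from the paper), a one-way random-effects ICC can be computed from a students × raters score matrix:

    ```python
    def icc_oneway(ratings):
        """One-way random-effects ICC(1) for a list of rows (one per student),
        each row holding that student's scores from k raters."""
        n, k = len(ratings), len(ratings[0])
        grand = sum(sum(row) for row in ratings) / (n * k)
        row_means = [sum(row) / k for row in ratings]
        # Between-targets mean square
        msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
        # Within-targets mean square
        msw = sum((x - row_means[i]) ** 2
                  for i, row in enumerate(ratings) for x in row) / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)
    ```

    Perfect agreement across raters yields an ICC of 1.0, while systematic disagreement drives the value toward or below zero.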
  12. Sim SM, Rasiah RI
    Ann Acad Med Singap, 2006 Feb;35(2):67-71.
    PMID: 16565756
    INTRODUCTION: This paper reports the relationship between the difficulty level and the discrimination power of true/false-type multiple-choice questions (MCQs) in a multidisciplinary paper for the para-clinical year of an undergraduate medical programme.

    MATERIALS AND METHODS: MCQ items in papers taken from Year II Parts A, B and C examinations for Sessions 2001/02, and Part B examinations for 2002/03 and 2003/04, were analysed to obtain their difficulty indices and discrimination indices. Each paper consisted of 250 true/false items (50 questions of 5 items each) on topics drawn from different disciplines. The questions were first constructed and vetted by the individual departments before being submitted to a central committee, where the final selection of the MCQs was made, based purely on the academic judgement of the committee.

    RESULTS: There was a wide distribution of item difficulty indices in all the MCQ papers analysed. Furthermore, the relationship between the difficulty index (P) and discrimination index (D) of the MCQ items in a paper was not linear, but more dome-shaped. Maximal discrimination (D = 51% to 71%) occurred with moderately easy/difficult items (P = 40% to 74%). On average, about 38% of the MCQ items in each paper were "very easy" (P ≥ 75%), while about 9% were "very difficult" (P < 25%). About two-thirds of these very easy/difficult items had "very poor" or even negative discrimination (D ≤ 20%).

    CONCLUSIONS: MCQ items that demonstrate good discriminating potential tend to be moderately difficult items, and the moderately-to-very difficult items are more likely to show negative discrimination. There is a need to evaluate the effectiveness of our MCQ items.

    Matched MeSH terms: Educational Measurement/methods*
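    The difficulty index (P) and discrimination index (D) analysed in the entry above are simple to compute from item-level response data. A minimal sketch, assuming the common upper/lower 27% group method (the function and its parameters are illustrative, not taken from the paper):

    ```python
    def item_indices(total_scores, item_correct, frac=0.27):
        """total_scores: each examinee's total test score.
        item_correct: 1/0 for whether that examinee got this item right."""
        n = len(total_scores)
        k = max(1, int(n * frac))
        # Rank examinees by total score, highest first
        order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
        upper = [item_correct[i] for i in order[:k]]   # top scorers
        lower = [item_correct[i] for i in order[-k:]]  # bottom scorers
        p = 100.0 * sum(item_correct) / n              # difficulty index (%)
        d = 100.0 * (sum(upper) - sum(lower)) / k      # discrimination index (%)
        return p, d
    ```

    An item answered correctly mostly by high scorers gets a large positive D; one that high scorers miss more often than low scorers gets a negative D, the pattern the abstract flags among very easy/difficult items.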
  13. Lai NM, Teng CL
    BMC Med Educ, 2011;11:25.
    PMID: 21619672 DOI: 10.1186/1472-6920-11-25
    BACKGROUND: Previous studies report various degrees of agreement between self-perceived competence and objectively measured competence in medical students. There is still a paucity of evidence on how the two correlate in the field of Evidence Based Medicine (EBM). We undertook a cross-sectional study to evaluate the self-perceived competence in EBM of senior medical students in Malaysia, and assessed its correlation to their objectively measured competence in EBM.
    METHODS: We recruited a group of medical students in their final six months of training between March and August 2006. The students were receiving a clinically-integrated EBM training program within their curriculum. We evaluated the students' self-perceived competence in two EBM domains ("searching for evidence" and "appraising the evidence") by piloting a questionnaire containing 16 relevant items, and objectively assessed their competence in EBM using an adapted version of the Fresno test, a validated tool. We correlated the matching components between our questionnaire and the Fresno test using Pearson's product-moment correlation.
    RESULTS: Forty-five out of 72 students in the cohort (62.5%) participated by completing the questionnaire and the adapted Fresno test concurrently. In general, our students perceived themselves as moderately competent in most items of the questionnaire. They rated themselves on average 6.34 out of 10 (63.4%) in "searching" and 44.41 out of 57 (77.9%) in "appraising". They scored on average 26.15 out of 60 (43.6%) in the "searching" domain and 57.02 out of 116 (49.2%) in the "appraising" domain in the Fresno test. The correlations between the students' self-rating and their performance in the Fresno test were poor in both the "searching" domain (r = 0.13, p = 0.4) and the "appraising" domain (r = 0.24, p = 0.1).
    CONCLUSIONS: This study provides supporting evidence that at the undergraduate level, self-perceived competence in EBM, as measured using our questionnaire, does not correlate well with objectively assessed EBM competence measured using the adapted Fresno test.
    STUDY REGISTRATION: International Medical University, Malaysia, research ID: IMU 110/06.
    Matched MeSH terms: Educational Measurement/methods*
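    The correlations reported in the entry above are Pearson product-moment correlations between matched self-rating and Fresno-test components. A minimal, self-contained sketch of the computation (illustrative only):

    ```python
    import math

    def pearson_r(x, y):
        """Pearson product-moment correlation between paired samples x and y."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)
    ```

    Values near zero, like the r = 0.13 and r = 0.24 reported above, indicate that self-ratings track objectively measured competence poorly.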
  14. Ismail MA, Ahmad A, Mohammad JA, Fakri NMRM, Nor MZM, Pa MNM
    BMC Med Educ, 2019 Jun 25;19(1):230.
    PMID: 31238926 DOI: 10.1186/s12909-019-1658-z
    BACKGROUND: Gamification is an increasingly common phenomenon in education. It is a technique to facilitate formative assessment and to promote student learning. It has been shown to be more effective than traditional methods. This phenomenological study was conducted to explore the advantages of gamification through the use of the Kahoot! platform for formative assessment in medical education.

    METHODS: This study employed a phenomenological design. Five focus groups were conducted with medical students who had participated in several Kahoot! sessions.

    RESULTS: Thirty-six categories and nine sub-themes emerged from the focus group discussions. They were grouped into three themes: attractive learning tool, learning guidance and source of motivation.

    CONCLUSIONS: The results suggest that Kahoot! sessions motivate students to study, to determine the subject matter that needs to be studied and to be aware of what they have learned. Thus, the platform is a promising tool for formative assessment in medical education.

    Matched MeSH terms: Educational Measurement/methods*
  15. Puthiaparampil T, Rahman MM
    BMC Med Educ, 2020 May 06;20(1):141.
    PMID: 32375739 DOI: 10.1186/s12909-020-02057-w
    BACKGROUND: Multiple choice questions, used in medical school assessments for decades, have many drawbacks: they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and give no information about the strengths and weaknesses of candidates. Directly asked, directly answered questions such as Very Short Answer Questions (VSAQs) are considered a better alternative with several advantages.

    OBJECTIVES: This study aims to compare student performance in MCQ and VSAQ tests and to obtain feedback from the stakeholders.

    METHODS: We conducted multiple true-false, one-best-answer, and VSAQ tests in two batches of medical students, compared their scores and the psychometric indices of the tests, and sought opinions from students and academics regarding these assessment methods.

    RESULTS: Multiple true-false and one-best-answer test scores were skewed and showed low psychometric performance, whereas VSAQ tests showed better psychometrics and a more balanced distribution of student performance. The stakeholders' opinions were significantly in favour of VSAQ.

    CONCLUSION AND RECOMMENDATION: This study concludes that VSAQ is a viable alternative to multiple-choice question tests, and it is widely accepted by medical students and academics in the medical faculty.

    Matched MeSH terms: Educational Measurement/methods*
  16. Abraham R, Ramnarayan K, Kamath A
    BMC Med Educ, 2008 Jul 24;8:40.
    PMID: 18652649 DOI: 10.1186/1472-6920-8-40
    BACKGROUND: It has been shown that basic science knowledge learned in the context of a clinical case is better comprehended and more easily applied by medical students than basic science knowledge learned in isolation. The present study aimed to validate the effectiveness of Clinically Oriented Physiology Teaching (COPT) in the undergraduate medical curriculum at Melaka Manipal Medical College (Manipal Campus), Manipal, India.

    METHODS: COPT was a teaching strategy wherein students were taught physiology using cases and critical thinking questions. Three batches of undergraduate medical students (n = 434) served as the experimental groups, for whom COPT was incorporated into the third block (teaching unit) of the physiology curriculum, and one batch (n = 149) served as the control group, for whom COPT was not incorporated. The experimental group of students was trained to answer clinically oriented questions whereas the control group was not. Both groups of students took a block exam consisting of clinically oriented questions and recall questions at the end of each block.

    RESULTS: Comparison of pre-COPT and post-COPT essay exam scores of experimental group of students revealed that the post-COPT scores were significantly higher compared to the pre-COPT scores. Comparison of post-COPT essay exam scores of the experimental group and control group of students revealed that the experimental group of students performed better compared to the control group. Feedback from the students indicated that they preferred COPT to didactic lectures.

    CONCLUSION: The study supports the fact that assessment and teaching patterns should fall in line with each other as proved by the better performance of the experimental group of students compared to the control group. COPT was also found to be a useful adjunct to didactic lectures in teaching physiology.

    Matched MeSH terms: Educational Measurement/methods
  17. Abubakar U, Muhammad HT, Sulaiman SAS, Ramatillah DL, Amir O
    Curr Pharm Teach Learn, 2020 03;12(3):265-273.
    PMID: 32273061 DOI: 10.1016/j.cptl.2019.12.002
    BACKGROUND AND PURPOSE: Training pharmacy students in infectious diseases (ID) is important to enable them to participate in antibiotic stewardship programs. This study evaluated knowledge and self-confidence regarding antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship among final year pharmacy undergraduate students.

    METHODS: A cross-sectional electronic survey was conducted at universities in Indonesia, Malaysia, and Pakistan. A 59-item survey was administered between October 2017 and December 2017.

    FINDINGS: The survey was completed by 211 students (response rate 77.8%). The mean knowledge scores for antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship were 5.6 ± 1.5, 4.7 ± 1.8 (maximum score 10.0) and 3.1 ± 1.4 (maximum score 5.0), respectively. Significant variations were noted among the schools. There was poor awareness of the consequences of antibiotic resistance and of cases in which no antibiotic is needed. Knowledge of antibiotic resistance was higher among male respondents (6.1 vs. 5.4) and those who had attended antibiotic resistance (5.7 vs. 5.2) and antibiotic therapy (5.8 vs. 4.9) courses (p

    Matched MeSH terms: Educational Measurement/methods
  18. Goh CF, Ong ET
    Curr Pharm Teach Learn, 2019 06;11(6):621-629.
    PMID: 31213319 DOI: 10.1016/j.cptl.2019.02.025
    BACKGROUND AND PURPOSE: The flipped classroom has not been fully exploited to improve tertiary education in Malaysia. A transformation in pharmacy education using flipped classrooms will be pivotal to resolve poor academic performance in certain courses. This study aimed to investigate the effectiveness of the flipped classroom in improving student learning and academic performance in a course with a historically low pass rate.

    EDUCATIONAL ACTIVITY AND SETTING: A quasi-experimental pre- and posttest control group design was employed. The experimental group experienced the flipped classroom for selected topics while the control group learned in a traditional classroom. Analysis of covariance was utilized to compare the performance on the final exam using the grade point of a pre-requisite course as the covariate. Students' perceptions of their experience in the flipped classroom were gauged through a web-based survey.

    FINDINGS: Student performance on the final exam was significantly higher in the flipped classroom group. The lowest-scoring students benefitted the most in terms of academic performance. More than two-thirds of students responded positively to the use of the flipped classroom and felt more confident while participating in classes and tests.

    SUMMARY: The flipped classroom is academically beneficial in a challenging course with a historically low pass rate; it was also effective in stimulating learning interest. The current study identified that for the flipped classroom to be successful, the role of educators, the feasibility of the approach, and the acceptance of students were important.

    Matched MeSH terms: Educational Measurement/methods
  19. Lai NM, Teng CL, Nalliah S
    Educ Health (Abingdon), 2012 Jul;25(1):33-9.
    PMID: 23787382
    CONTEXT: The Fresno test and the Berlin Questionnaire are two validated instruments for objectively assessing competence in evidence-based medicine (EBM). Although both instruments purport to assess a comprehensive range of EBM knowledge, they differ in their formats. We undertook a preliminary study using the adapted version of the two instruments to assess their correlations when administered to medical students. The adaptations were made mainly to simplify the presentation for our undergraduate students while preserving the contents that were assessed.
    METHODS: We recruited final-year students from a Malaysian medical school from September 2006 to August 2007. The students received a structured EBM training program within their curriculum. They took the two instruments concurrently, midway through their final six months of training. We determined the correlations using either the Pearson's or Spearman's correlation depending on the data distribution.
    RESULTS: Of the 120 students invited, 72 (60.0%) participated in the study. The adapted Fresno test and the Berlin Questionnaire had a Cronbach's alpha of 0.66 and 0.70, respectively. Inter-rater correlation (r) of the adapted Fresno test was 0.9. The students scored 45.4% on average [standard deviation (SD) 10.1] on the Fresno test and 44.7% (SD 14.9) on the Berlin Questionnaire (P = 0.7). The overall correlation between the two instruments was poor (r = 0.2, 95% confidence interval: -0.07 to 0.42, P = 0.08), and correlations remained poor between items assessing the same EBM domains (r = 0.01-0.2, P = 0.07-0.9).
    DISCUSSION: The adapted versions of the Fresno test and the Berlin Questionnaire correlated poorly when administered to medical students. The two instruments may not be used interchangeably to assess undergraduate competence in EBM.
    Matched MeSH terms: Educational Measurement/methods*