Displaying publications 1 - 20 of 48 in total

  1. Nagandla K, Gupta ED, Motilal T, Teng CL, Gangadaran S
    Natl Med J India, 2019;31(5):293-295.
    PMID: 31267998 DOI: 10.4103/0970-258X.261197
    Background: Assessment drives students' learning and measures their level of understanding. We aimed to determine whether performance in continuous assessment can predict failure in the final professional examination.

    Methods: We retrieved the in-course continuous assessment (ICA) and final professional examination results of 3 cohorts of medical students (n = 245) from the examination unit of the International Medical University, Seremban, Malaysia. The ICA comprised 3 sets of composite marks derived from coursework, which included a summative theory paper with short answer questions and one-best-answer questions. The clinical examination included an end-of-posting practical examination. These examinations are conducted every 6 months in semesters 6, 7 and 8 and are graded as pass/fail for each student. The final professional examination, comprising modified essay questions (MEQs), an 8-question objective structured practical examination (OSPE) and a 16-station objective structured clinical examination (OSCE), was also graded as pass/fail. Whether failure in continuous assessment predicted failure in each component of the final professional examination was tested using the chi-square test and presented as odds ratios (OR) with 95% confidence intervals (CI).

    Results: Failure in ICA in semesters 6-8 strongly predicted failure in the MEQs, OSPE and OSCE of the final professional examination, with ORs of 3.8-14.3 (p<0.001) and 2.4-6.9 (p<0.05). The association was stronger with the MEQs and OSPE than with the OSCE.

    Conclusion: ICA comprising theory and clinical examinations had a direct relationship with students' performance in the final examination and is a useful assessment tool.

    Matched MeSH terms: Educational Measurement/methods*
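
    Entry 1 reports its findings as odds ratios with 95% confidence intervals derived from chi-square tests. As a minimal sketch of that style of analysis (not the study's data or code), the following Python snippet computes an OR, a Wald-type 95% CI and a chi-square p-value from a hypothetical 2x2 ICA-versus-final-exam table; all counts are invented for illustration.

        import numpy as np
        from scipy.stats import chi2_contingency

        #                  failed final  passed final
        table = np.array([[18,           12],     # failed ICA (hypothetical counts)
                          [22,           193]])   # passed ICA

        a, b = table[0]
        c, d = table[1]

        odds_ratio = (a * d) / (b * c)
        se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
        ci_low, ci_high = np.exp(np.log(odds_ratio)
                                 + np.array([-1.96, 1.96]) * se_log_or)

        chi2, p, dof, expected = chi2_contingency(table)
        print(f"OR = {odds_ratio:.1f}, 95% CI {ci_low:.1f}-{ci_high:.1f}, p = {p:.4g}")
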
  2. Tan CP, Rokiah P
    Med J Malaysia, 2005 Aug;60 Suppl D:48-53.
    PMID: 16315624
    Formative and summative student assessment has always been of concern to medical teachers, and this is especially important at the level of graduating doctors. The effectiveness and comprehensiveness of the clinical training provided is tested with the use of clinical cases, either with real patients who have genuine medical conditions, or with standardised patients who are trained to accurately simulate actual patients. The Objective Structured Clinical Examination (OSCE) is one method of assessing the adequacy of medical students' clinical skills and their level of competence. It can be used to test a variety of skills such as history taking (communication and interpersonal skills), performing aspects of physical examination, undertaking emergency procedures, and interpreting investigational data. It can also be used to ensure an adequate depth and breadth of coverage of the clinical skills expected of a graduating doctor.
    Matched MeSH terms: Educational Measurement/methods*
  3. Roslani AM, Sein KT, Nordin R
    Med J Malaysia, 1989 Mar;44(1):75-82.
    PMID: 2626116
    The Phase I and Phase II undergraduate teaching programmes of the School of Medical Sciences were reviewed at the end of the 1985/86 academic year. It was found that deviations from the School's philosophy had crept into the implementation process. Modifications were therefore made in the Phase I and Phase II programmes with a view to: (i) reducing content; (ii) promoting integration; (iii) improving students' clinical examination skills; and (iv) providing students with more opportunities for self-learning, reinforcement and application of knowledge. The number of assessment items in Phase I and the frequency of assessment in Phase II were also found to be inappropriate, so the assessments were modified to rectify this situation.
    Matched MeSH terms: Educational Measurement/methods*
  4. Goh BL, Ganeshadeva Yudisthra M, Lim TO
    Semin Dial, 2009 Mar-Apr;22(2):199-203.
    PMID: 19426429 DOI: 10.1111/j.1525-139X.2008.00536.x
    Peritoneal dialysis (PD) catheter insertion success rates are known to vary among operators, and peritoneoscope PD catheter insertion demands mastery of a steep learning curve. Defining a learning curve using a continuous monitoring tool such as a Cumulative Summation (CUSUM) chart is useful for planning training programs. We aimed to analyze the learning curve of a trainee nephrologist performing peritoneoscope PD catheter implantation with a CUSUM chart. This was a descriptive single-center study using data collected from all PD patients who underwent peritoneoscope PD catheter insertion in our hospital. A CUSUM model was used to evaluate the learning curve for peritoneoscope PD catheter insertion. An unacceptable primary failure rate (i.e., catheter malfunction within 1 month of insertion) was defined as >40% and acceptable performance as <25%. The CUSUM chart showed the learning curve of a trainee acquiring a new skill: as the trainee became more skillful with training, the CUSUM curve flattened. Technical proficiency of the trainee nephrologist in performing peritoneoscope Tenckhoff catheter insertion (<25% primary catheter malfunction) was attained after 23 procedures. We also noted earlier in our program that Tenckhoff catheters directed to the right iliac fossa had poorer survival than catheters directed to the left iliac fossa: survival was 94.6% for catheters directed to the left iliac fossa versus 48.6% for those directed to the right (p < 0.01). We advocate that quality control of Tenckhoff catheter insertion be performed using CUSUM charting as described, to monitor primary catheter dysfunction (i.e., failure of catheter function within 1 month of insertion), primary leak (i.e., within 1 month of catheter insertion), and primary peritonitis (i.e., within 2 weeks of catheter insertion).
    Matched MeSH terms: Educational Measurement/methods*
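
    Entry 4 tracks trainee proficiency with a CUSUM chart that flattens once the failure rate drops below the acceptable threshold. The sketch below uses one common failure-minus-target formulation (the cumulative sum of observed failures minus the acceptable rate, taking p0 = 0.25 from the abstract); this is an illustrative assumption, not the authors' exact model, and the outcomes are simulated.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        # Simulated learning: per-procedure failure probability falls from 0.5 to 0.1.
        p_fail = np.linspace(0.5, 0.1, 40)
        outcomes = rng.random(40) < p_fail      # True = primary catheter failure

        p0 = 0.25                               # acceptable failure rate (from abstract)
        cusum = np.cumsum(outcomes - p0)        # climbs while failing > p0, flattens after

        plt.plot(cusum)
        plt.xlabel("procedure number")
        plt.ylabel("CUSUM (observed - expected failures)")
        plt.show()
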
  5. Refat N, Kassim H, Rahman MA, Razali RB
    PLoS One, 2020;15(8):e0236862.
    PMID: 32857762 DOI: 10.1371/journal.pone.0236862
    Language learning is an emerging research area in which researchers have made significant contributions by incorporating technological assistance (i.e., computer- and mobile-assisted learning). However, recent empirical studies reveal that little attention has been paid to grammar learning with properly designed instructional materials and a motivational framework for an efficient mobile-assisted grammar learning tool. This paper, hence, reports a preliminary study that investigated learner motivation when a mobile-assisted tool for tense learning was used. The study applied the Attention-Relevance-Confidence-Satisfaction (ARCS) model. It was hypothesized that with the use of the designed mobile-assisted tense learning tool, students would be motivated to learn grammar (English tenses), and that with the increase in motivation, performance in a paper-based test would also improve. To investigate the impact of the tool, a sequential mixed-methods research design was employed with three research instruments: the Instructional Materials Motivation Survey (IMMS), a paper-based test and a semi-structured interview protocol. Participants were 115 undergraduate students enrolled in a remedial English course. The findings showed that with effectively designed instructional materials, students were motivated to learn grammar and were positive about improving their attitude towards learning (male 86%, female 80%). The IMMS findings revealed that students' motivation increased after using the tool. Moreover, students improved their performance level, as revealed by the paper-based instrument. The study therefore contributes an effective multimedia-based instructional design for a mobile-assisted tool that increased learners' motivation and thereby improved learning performance.
    Matched MeSH terms: Educational Measurement/methods*
  6. Chan SW, Ismail Z, Sumintono B
    PLoS One, 2016;11(11):e0163846.
    PMID: 27812091 DOI: 10.1371/journal.pone.0163846
    Based on a synthesis of literature, earlier studies, analyses and observations of high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning; the latter comprised describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework was complete and coherent. A statistical reasoning assessment tool was then constructed from the initial framework and administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in a second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, confirming the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.
    Matched MeSH terms: Educational Measurement/methods*
  7. Guilding C, Pye RE, Butler S, Atkinson M, Field E
    Pharmacol Res Perspect, 2021 Aug;9(4):e00833.
    PMID: 34309243 DOI: 10.1002/prp2.833
    Multiple choice questions (MCQs) are a common form of assessment in medical schools, and students seek opportunities to engage with formative assessment that reflects their summative exams. Formative assessment with feedback and active learning strategies improves student learning outcomes, but a challenge for educators, particularly those with large class sizes, is how to provide students with such opportunities without overburdening faculty. To address this, we enrolled medical students in the online learning platform PeerWise, which enables students to author and answer MCQs, rate the quality of other students' contributions, and discuss content. A quasi-experimental mixed-methods research design was used to explore PeerWise use and its impact on the learning experience and exam results of fourth-year medical students studying courses in clinical sciences and pharmacology. Most students chose to engage with PeerWise following its introduction as a non-compulsory learning opportunity. While students perceived benefits in authoring and peer discussion, they engaged most with answering questions, noting that this helped them identify gaps in knowledge, test their learning and improve exam technique. Detailed analysis of the 2015 cohort (n = 444) with hierarchical regression models revealed a significant positive predictive relationship between answering PeerWise questions and exam results, even after controlling for previous academic performance; this was further confirmed in a follow-up multi-year analysis (2015-2018, n = 1693). These 4 years of quantitative data corroborated students' belief in the benefit of answering peer-authored questions for learning.
    Matched MeSH terms: Educational Measurement/methods*
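
    Entry 7's key analysis is a hierarchical (sequential) regression: PeerWise answering activity is added on top of previous academic performance, and the incremental variance explained is assessed. A minimal sketch with statsmodels follows; the dataframe, its column names and all values are hypothetical stand-ins, not the study's variables.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical data: exam score, prior GPA, number of PeerWise answers.
        df = pd.DataFrame({
            "exam_score":  [62, 71, 58, 80, 67, 74, 69, 77],
            "prior_gpa":   [3.0, 3.4, 2.8, 3.8, 3.1, 3.5, 3.2, 3.6],
            "pw_answered": [40, 120, 15, 300, 80, 150, 90, 210],
        })

        step1 = smf.ols("exam_score ~ prior_gpa", data=df).fit()
        step2 = smf.ols("exam_score ~ prior_gpa + pw_answered", data=df).fit()

        # Increment in R-squared attributable to answering activity:
        print(step2.rsquared - step1.rsquared)
        print(step2.params["pw_answered"], step2.pvalues["pw_answered"])
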
  8. Ramoo V, Abdullah KL, Tan PS, Wong LP, Chua PY
    Nurs Crit Care, 2016 Sep;21(5):287-94.
    PMID: 25271143 DOI: 10.1111/nicc.12105
    BACKGROUND: Sedation management is an integral component of critical care practice. It requires the greatest attention of critical care practitioners because it carries significant risks to patients. Therefore, it is imperative that nurses are aware of potential adverse consequences of sedation therapy and current sedation practice recommendations.

    AIMS AND OBJECTIVES: To evaluate the impact of an educational intervention on nurses' knowledge of sedation assessment and management.

    DESIGN AND METHODS: A quasi-experimental design with a pre- and post-test method was used. The educational intervention included theoretical sessions on assessing and managing sedation and hands-on sedation assessment practice using the Richmond Agitation Sedation Scale. Its effect was measured using a self-administered questionnaire, completed at baseline and 3 months after the intervention.

    RESULTS: Participants were 68 registered nurses from an intensive care unit of a teaching hospital in Malaysia. A significant increase in overall mean knowledge score was observed from the pre- to the post-intervention phase (mean 79.00 versus 102.00, p < 0.001). Nurses with fewer than 5 years of work experience, aged less than 26 years, and with only a basic nursing education had significantly greater knowledge improvement at the post-intervention phase than their colleagues, with mean differences of 24.64 (p = 0.001), 23.81 (p = 0.027) and 27.25 (p = 0.0001), respectively. A repeated-measures analysis of variance revealed a statistically significant effect of the educational intervention on knowledge score after controlling for age, years of work and level of nursing education (p = 0.0001, ηp² = 0.431).

    CONCLUSION: An educational intervention consisting of theoretical sessions and hands-on sedation assessment practice was found effective in improving nurses' knowledge and understanding of sedation management.

    RELEVANCE TO CLINICAL PRACTICE: This study highlighted the importance of continuing education to increase nurses' understanding of intensive care practices, which is vital for improving the quality of patient care.

    Matched MeSH terms: Educational Measurement/methods
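
    Entry 8's core comparison is a pre/post gain in knowledge scores for the same nurses. A minimal sketch of such a paired comparison with SciPy is shown below; the scores are invented for illustration, and the study's full model (repeated-measures ANOVA with covariates) is not reproduced here.

        import numpy as np
        from scipy.stats import ttest_rel

        # Hypothetical pre/post knowledge scores for the same six nurses.
        pre  = np.array([75, 82, 78, 80, 77, 83])
        post = np.array([98, 105, 100, 104, 99, 107])

        t, p = ttest_rel(post, pre)             # paired (within-subject) comparison
        print(f"mean gain = {np.mean(post - pre):.1f}, t = {t:.2f}, p = {p:.4f}")
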
  9. Bosher S, Bowles M
    Nurs Educ Perspect, 2008 May-Jun;29(3):165-72.
    PMID: 18575241
    Recent research has indicated that language may be a source of construct-irrelevant variance for non-native speakers of English, or English as a second language (ESL) students, when they take exams. As a result, exams may not accurately measure knowledge of nursing content. One accommodation often used to level the playing field for ESL students is linguistic modification, a process by which the reading load of test items is reduced while the content and integrity of the item are maintained. Research on the effects of linguistic modification has been conducted on examinees in the K-12 population, but is just beginning in other areas. This study describes the collaborative process by which items from a pathophysiology exam were linguistically modified and subsequently evaluated for comprehensibility by ESL students. Findings indicate that in a majority of cases, modification improved examinees' comprehension of test items. Implications for test item writing and future research are discussed.
    Matched MeSH terms: Educational Measurement/methods*
  10. Tan K, Chong MC, Subramaniam P, Wong LP
    Nurse Educ Today, 2018 May;64:180-189.
    PMID: 29500999 DOI: 10.1016/j.nedt.2017.12.030
    BACKGROUND: Outcome Based Education (OBE) is a student-centered approach to curriculum design and teaching that emphasizes what learners should know, understand and demonstrate, and how they adapt to life beyond formal education. However, no systematic review has explored the effectiveness of OBE in improving the competencies of nursing students.

    OBJECTIVE: To appraise and synthesize the best available evidence that examines the effectiveness of OBE approaches towards the competencies of nursing students.

    DESIGN: A systematic review of interventional experimental studies.

    DATA SOURCES: Eight online databases, namely CINAHL, EBSCO, Science Direct, ProQuest, Web of Science, PubMed, EMBASE and SCOPUS, were searched.

    REVIEW METHODS: Relevant studies were identified by combining electronic database searches (without geographical or language filters, but limited to articles published from 2006 to 2016), handsearching of journals, and visual scanning of references from retrieved studies. Two reviewers independently appraised the quality of the selected studies and extracted the data.

    RESULTS: Six interventional studies met the inclusion criteria. Two of the studies were rated as high methodological quality and four as moderate. Studies were published between 2009 and 2016 and were mostly from Asian and Middle Eastern countries. Results showed that OBE approaches improve competency in knowledge acquisition, in terms of higher final course grades and cognitive skills; improve clinical skills and nursing core competencies; and produce higher behavioural skill scores during the performance of clinical skills. Learners' satisfaction was also encouraging, as reported in one of the studies. Only one study reported a negative effect.

    CONCLUSIONS: Although OBE approaches show encouraging effects on the competencies of nursing students, more robust experimental study designs with larger sample sizes, evaluating other outcome measures such as additional areas of competency, student satisfaction, and patient outcomes, are needed.

    Matched MeSH terms: Educational Measurement/methods*
  11. Liew SC, Dutta S, Sidhu JK, De-Alwis R, Chen N, Sow CF, et al.
    Med Teach, 2014 Jul;36(7):626-31.
    PMID: 24787534 DOI: 10.3109/0142159X.2014.899689
    The complexity of modern medicine creates challenges for the teaching and assessment of communication skills in undergraduate medical programmes. This research was conducted to study the level of communication skills among undergraduate medical students and to determine the difference between simulated patients' and clinical instructors' assessments of communication skills.
    Matched MeSH terms: Educational Measurement/methods*
  12. Sim JH, Tong WT, Hong WH, Vadivelu J, Hassan H
    Med Educ Online, 2015;20:28612.
    PMID: 26511792 DOI: 10.3402/meo.v20.28612
    INTRODUCTION: The assessment environment, synonymous with climate or atmosphere, is multifaceted. Although there are valid and reliable instruments for measuring the educational environment, there is no validated instrument for measuring the assessment environment in medical programs. This study aimed to develop an instrument for measuring students' perceptions of the assessment environment in an undergraduate medical program and to examine the psychometric properties of the new instrument.
    METHOD: The Assessment Environment Questionnaire (AEQ), a 40-item, four-point (1=Strongly Disagree to 4=Strongly Agree) Likert scale instrument designed by the authors, was administered to medical undergraduates from the authors' institution. The response rate was 626/794 (78.84%). To establish construct validity, exploratory factor analysis (EFA) with principal component analysis and varimax rotation was conducted. To examine the internal consistency reliability of the instrument, Cronbach's α was computed. Mean scores for the entire AEQ and for each factor/subscale were calculated. Mean AEQ scores of students from different academic years and sex were examined.
    RESULTS: Six hundred and eleven completed questionnaires were analysed. EFA extracted four factors: feedback mechanism (seven items), learning and performance (five items), information on assessment (five items), and assessment system/procedure (three items), which together explained 56.72% of the variance. Based on the four extracted factors/subscales, the AEQ was reduced to 20 items. Cronbach's α for the 20-item AEQ was 0.89, whereas Cronbach's α for the four factors/subscales ranged from 0.71 to 0.87. Mean score for the AEQ was 2.68/4.00. The factor/subscale of 'feedback mechanism' recorded the lowest mean (2.39/4.00), whereas the factor/subscale of 'assessment system/procedure' scored the highest mean (2.92/4.00). Significant differences were found among the AEQ scores of students from different academic years.
    CONCLUSIONS: The AEQ is a valid and reliable instrument. Initial validation supports its use to measure students' perceptions of the assessment environment in an undergraduate medical program.
    KEYWORDS: assessment environment; development; instrument; psychometric properties; validation
    Matched MeSH terms: Educational Measurement/methods*
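
    Entry 12 reports internal consistency as Cronbach's α for the 20-item AEQ and its subscales. The statistic is α = k/(k-1) × (1 - Σ item variances / variance of total score); a self-contained sketch on simulated four-point responses is below (the data are simulated, not the AEQ's).

        import numpy as np

        def cronbach_alpha(X):
            """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
            X = np.asarray(X, dtype=float)
            k = X.shape[1]
            item_vars = X.var(axis=0, ddof=1)
            total_var = X.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(1)
        latent = rng.normal(size=(200, 1))                     # shared trait per respondent
        noise = rng.normal(scale=0.8, size=(200, 20))
        items = np.clip(np.round(2.5 + latent + noise), 1, 4)  # 1-4 Likert responses
        print(cronbach_alpha(items))                           # high alpha for this simulation
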
  13. Chan MY
    Med Educ Online, 2015;20:28565.
    PMID: 26194482 DOI: 10.3402/meo.v20.28565
    The oral case presentation is an important communicative activity in the teaching and assessment of students. Despite its importance, not much attention has been paid to supporting teachers in teaching this difficult task to medical students, who are novices to this form of communication. As a formalized piece of talk that takes a regularized form and is used for a specific communicative goal, the case presentation is regarded as a rhetorical activity, and awareness of its rhetorical and linguistic characteristics should be given due consideration in teaching. This paper reviews the practitioner literature and the limited research literature relating to medical educators' expectations of what makes a good case presentation, and explains the rhetorical aspect of the activity. It was found that there is currently no comprehensive model of the case presentation that projects the rhetorical and linguistic skills needed to produce and deliver a good presentation. Attempts to describe the structure of the case presentation have used predominantly opinion-based methodologies. In this paper, I argue for a performance-based model that would not only allow a description of the rhetorical structure of the oral case presentation, but also enable a systematic examination of the tacit genre knowledge that differentiates the expert from the novice. Such a model would be a useful resource for medical educators in providing more structured feedback and teaching support to medical students learning this important genre.
    Matched MeSH terms: Educational Measurement/methods*
  14. Sim JH, Abdul Aziz YF, Mansor A, Vijayananthan A, Foong CC, Vadivelu J
    Med Educ Online, 2015;20:26185.
    PMID: 25697602 DOI: 10.3402/meo.v20.26185
    INTRODUCTION: The purpose of this study was to compare students' performance in the different clinical skills (CSs) assessed in the objective structured clinical examination.

    METHODS: Data for this study were obtained from final year medical students' exit examination (n=185). Retrospective analysis of data was conducted using SPSS. Means for the six CSs assessed across the 16 stations were computed and compared.

    RESULTS: Means for history taking, physical examination, communication skills, clinical reasoning skills (CRSs), procedural skills (PSs), and professionalism were 6.25±1.29, 6.39±1.36, 6.34±0.98, 5.86±0.99, 6.59±1.08, and 6.28±1.02, respectively. Repeated-measures ANOVA showed a significant difference in the means of the six CSs assessed [F(2.980, 548.332)=20.253, p<0.001]. Pairwise multiple comparisons revealed significant differences between the means of eight pairs of the CSs assessed, at p<0.05.

    CONCLUSIONS: CRSs appeared to be the weakest and PSs the strongest among the six CSs assessed. Students' unsatisfactory performance in CRSs needs to be addressed, as clinical reasoning is one of the core competencies in medical education and a critical skill medical students must acquire before entering the workplace. Despite its challenges, students must learn the skills of clinical reasoning, while clinical teachers should facilitate the clinical reasoning process and guide students' clinical reasoning development.

    Matched MeSH terms: Educational Measurement/methods*
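
    Entry 14 compares the six skill means with a repeated-measures ANOVA. A minimal sketch using statsmodels' AnovaRM on simulated long-format data (one row per student-skill pair; all values invented for illustration) is shown below.

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(2)
        skills = ["history", "physical", "communication",
                  "reasoning", "procedural", "professionalism"]
        # One score per (student, skill) cell, as AnovaRM requires.
        rows = [{"student": s, "skill": skill, "score": rng.normal(6.3, 1.0)}
                for s in range(30) for skill in skills]
        df = pd.DataFrame(rows)

        res = AnovaRM(df, depvar="score", subject="student", within=["skill"]).fit()
        print(res.anova_table)
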
  15. Yusoff MS, Hadie SN, Abdul Rahim AF
    Med Educ, 2014 Feb;48(2):108-10.
    PMID: 24528391 DOI: 10.1111/medu.12403
    Matched MeSH terms: Educational Measurement/methods*
  16. Yusoff MS
    Med Educ, 2012 Nov;46(11):1122.
    PMID: 23078712 DOI: 10.1111/medu.12057
    Matched MeSH terms: Educational Measurement/methods*
  17. Loh KY, Kwa SK
    Med Educ, 2009 Nov;43(11):1101-2.
    PMID: 19874515 DOI: 10.1111/j.1365-2923.2009.03501.x
    Matched MeSH terms: Educational Measurement/methods*
  18. Lai NM
    Med Educ, 2009 May;43(5):479-80.
    PMID: 19344346 DOI: 10.1111/j.1365-2923.2009.03320.x
    Matched MeSH terms: Educational Measurement/methods*
  19. Loh KY, Nalliah S
    Med Educ, 2008 Nov;42(11):1127-8.
    PMID: 18991988 DOI: 10.1111/j.1365-2923.2008.03217.x
    Matched MeSH terms: Educational Measurement/methods*
  20. Schwartz PL, Kyaw Tun Sein
    Med Educ, 1987 May;21(3):265-8.
    PMID: 3600444
    Matched MeSH terms: Educational Measurement/methods*