
  1. Finn GM, Tai J, Nadarajah VD
    Med Educ, 2025 Jan;59(1):88-96.
    PMID: 39255998 DOI: 10.1111/medu.15535
    CONTEXT: In this state of science paper, we were set the challenge of providing cross-cultural viewpoints on inclusive assessment. In this discursive article, we focus on inclusive assessment within undergraduate health professions education (HPE) whilst looking to the wider higher education literature, since institutional policies and procedures frequently drive assessment decisions and influence the environment in which they occur. We explore our experiences of working in inclusive assessment, with the aim of bridging and enhancing practices of inclusive assessment for HPE. Unlike other articles that juxtapose views, we all write from the perspective of supporting inclusive assessment. We begin with a discussion of what inclusive assessment is and then describe our contexts as a basis for understanding differences and broadening conversations. We work in the United Kingdom, Australia and Malaysia, having undertaken research and facilitated workshops and seminars on inclusive assessment nationally and internationally. We recognise that our perspectives will differ as a consequence of our global contexts, institutional cultures, individual characteristics and educational experiences. (Note that individual characteristics are also known as protected characteristics in some countries.) We then outline challenges and opportunities associated with inclusive assessment, drawing on evidence within our contexts and acknowledging that our understanding of inclusive assessment research is limited to publications in English and currently tilted towards publications from the Global North. In the final section, we offer recommendations for championing inclusion, focussing first on assessment design and then on broader considerations to organise collective action. Our article is unapologetically practical; the deliberate divergence from a theoretical piece is intended so that anyone who reads this paper might enact even one small change towards more inclusive assessment practices within their context.

    DISCUSSION: Creating an inclusive assessment culture is important for equitable education, even if priorities for inclusion might differ between contexts. We recognise challenges in the enactment of inclusive assessment, namely the notion of lowering standards, harm to the reliability and robustness of assessment design, and inclusion as a poorly defined, catch-all term. Importantly, the lack of awareness that inclusion means recognising intersectionality is a barrier to well-designed inclusive assessments. We therefore offer considerations for HPE practitioners that can guide a unified direction of travel for inclusive assessment, highlighting the importance of contextual prioritisation and of initiatives considered from the global level through the national, institutional and programme levels to the individual level. Utilising experience and literature from undergraduate higher education contexts, we offer considerations with applicability across the assessment continuum.

    Matched MeSH terms: Educational Measurement/methods
  2. Schwartz PL, Kyaw Tun Sein
    Med Educ, 1987 May;21(3):265-8.
    PMID: 3600444
    Matched MeSH terms: Educational Measurement/methods*
  3. Schwartz PL, Crooks TJ, Sein KT
    Med Educ, 1986 Sep;20(5):399-406.
    PMID: 3762442
    It has been suggested that the 'ideal' measure of reliability of an examination is obtained by test and retest using the one examination on the same group of students. However, because of practical and theoretical arguments, most reported reliabilities for multiple choice examinations in medicine are actually measures of internal consistency. While attempting to minimize the effects of potential interfering factors, we have undertaken a study of true test-retest reliability of multiple true-false type multiple choice questions in preclinical medical subjects. From three end-of-term examinations, 363 items (106 of 449 from term 1, 150 of 499 from term 2, and 107 of 492 from term 3) were repeated in the final examination (out of 999 total items). Between test and retest, there was little overall decrease in the percentage of items answered correctly and a decrease of only 3.4 percentage points in the score after correction for guessing. However, there was an inverse relation between the test-retest interval and the decrease in performance. Between test and retest, performance decreased significantly on 33 items and increased significantly on 11 items. Test-retest correlation coefficients were 0.70 to 0.78 for items from the separate terms and 0.885 for all items that were retested. Thus, overall, these items had a very high degree of reliability, approximately the 0.9 that has been specified as the requirement for being able to distinguish between individuals.
    Matched MeSH terms: Educational Measurement/methods*
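    A minimal sketch of the two computations this abstract relies on: the guessing-corrected percentage score for true/false items (R - W for two-option items) and the test-retest correlation of per-item difficulties. The data below are hypothetical, not the study's; only the formulas follow the abstract.

      # Sketch of the guessing correction and test-retest correlation
      # described above; all data are hypothetical.
      import numpy as np

      def corrected_percent(right, wrong, n_items):
          # Classic correction for guessing with k = 2 options (true/false):
          # corrected = R - W / (k - 1) = R - W.
          return 100.0 * (right - wrong) / n_items

      def test_retest_r(p_test, p_retest):
          # Pearson correlation between per-item proportions correct on
          # the original test and on the retest.
          return np.corrcoef(p_test, p_retest)[0, 1]

      print(corrected_percent(70, 20, 100))          # 50.0
      p1 = np.array([0.62, 0.75, 0.48, 0.90, 0.55])  # hypothetical item difficulties
      p2 = np.array([0.60, 0.71, 0.50, 0.88, 0.51])
      print(round(test_retest_r(p1, p2), 3))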
  4. Shahabudin SH
    Med Educ, 1983 Sep;17(5):316-8.
    PMID: 6621433
    The belief that it is unwise to alter the initial response to a multiple choice question is questioned. Among 39 380 MCQ responses, there were 1818 changes (4.62%), of which 21.9% were correct-to-incorrect, 46.3% incorrect-to-correct and 31.8% incorrect-to-incorrect. This effect was much more marked among the better students, with incorrect-to-correct changes accounting for 61% of the changed responses in the upper group, 42% in the middle group and 34% in the lower group.
    Matched MeSH terms: Educational Measurement/methods*
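    A quick arithmetic check of the percentages reported above; the per-category counts are back-calculated from the study's totals (1818 changes among 39 380 responses) and rounded, so they are approximate.

      # Back-calculated (approximate) counts for the reported answer changes.
      from collections import Counter

      changes = Counter({"incorrect->correct": 842,
                         "incorrect->incorrect": 578,
                         "correct->incorrect": 398})
      total_responses = 39380
      total_changes = sum(changes.values())                            # 1818
      print(f"changed: {100 * total_changes / total_responses:.2f}%")  # ~4.62%
      for kind, n in changes.items():
          print(f"{kind}: {100 * n / total_changes:.1f}%")  # ~46.3 / 31.8 / 21.9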
  5. Roslani AM, Sein KT, Nordin R
    Med J Malaysia, 1989 Mar;44(1):75-82.
    PMID: 2626116
    The Phase I and Phase II undergraduate teaching programmes of the School of Medical Sciences were reviewed at the end of the 1985/86 academic year. It was found that deviations from the School's philosophy had crept into the implementation process. Modifications were therefore made to the Phase I and Phase II programmes with a view to: (i) reducing content, (ii) promoting integration, (iii) improving students' clinical examination skills, and (iv) providing students with more opportunities for self-learning, reinforcement and application of knowledge. The number of assessment items in Phase I and the frequency of assessment in Phase II were also found to be inappropriate, so modifications in assessment were made to rectify this situation.
    Matched MeSH terms: Educational Measurement/methods*
  6. Taha MH, Mohammed HEEG, Abdalla ME, Yusoff MSB, Mohd Napiah MK, Wadi MM
    Med Educ Online, 2024 Dec 31;29(1):2412392.
    PMID: 39445670 DOI: 10.1080/10872981.2024.2412392
    Extended matching questions (EMQs), or R-type questions, are a selected-response format. Validity evidence for this format is crucial, but there have been reports of misunderstandings about validity, and it is unclear what kinds of evidence should be presented, and how, to support the format's educational impact. This review explores the pattern and quality of reporting of the sources of validity evidence for EMQs in health professions education, encompassing content, response process, internal structure, relationships to other variables, and consequences. A systematic search of electronic databases including MEDLINE via PubMed, Scopus, Web of Science, CINAHL, and ERIC was conducted to extract studies that utilize EMQs. A framework for a unitary concept of validity was applied to extract data. A total of 218 titles were initially selected; the final number was 19. The most frequently reported evidence was the reliability coefficient, followed by relationships to other variables. Additionally, the definition of validity adopted was mostly the old tripartite concept. This study found that the reporting and presentation of validity evidence appeared deficient: the available evidence can hardly provide a strong validity argument that supports the educational impact of EMQs. This review calls for more work on developing a tool to measure the reporting and presentation of validity evidence.
    Matched MeSH terms: Educational Measurement/methods
  7. Ramanathan A, Zaini ZM, Ghani WMN, Wong GR, Zainuddin NI, Yang YH, et al.
    Oral Dis, 2024 Nov;30(8):5483-5489.
    PMID: 38488212 DOI: 10.1111/odi.14927
    OBJECTIVE: This study evaluated the effectiveness of the OralDETECT training programme, delivered face-to-face (F2F) and online, in enhancing early detection skills for oral cancer.

    METHODS: A total of 328 final-year dental students were trained across six cohorts. Three cohorts (175 students) received F2F training from the academic years 2016/2017 to 2018/2019, and the remaining three (153 students) underwent online training during the Covid-19 pandemic from 2019/2020 to 2021/2022. Participant scores were analysed using the Wilcoxon signed rank test, the Mann-Whitney test, Cohen's d effect size, and multiple linear regression.

    RESULTS: Both F2F and online training showed increases in mean scores from pre-test to post-test 3: from 67.66 ± 11.81 to 92.06 ± 5.27 and 75.89 ± 11.03 to 90.95 ± 5.22, respectively. Comparison between F2F and online methods revealed significant differences in mean scores with large effect sizes at the pre-test stage (p 

    Matched MeSH terms: Educational Measurement/methods
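    A minimal sketch of the named analyses (Wilcoxon signed-rank, Mann-Whitney and Cohen's d) on hypothetical pre/post score arrays shaped like the cohorts above; this is not the authors' code, and the regression step is omitted.

      # Hypothetical data loosely matching the reported means and SDs.
      import numpy as np
      from scipy import stats

      def cohens_d(a, b):
          # Standardised mean difference with a pooled standard deviation.
          na, nb = len(a), len(b)
          pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                            (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
          return (np.mean(a) - np.mean(b)) / pooled

      rng = np.random.default_rng(0)
      f2f_pre = rng.normal(67.7, 11.8, 175)
      f2f_post = rng.normal(92.1, 5.3, 175)
      online_pre = rng.normal(75.9, 11.0, 153)

      print(stats.wilcoxon(f2f_pre, f2f_post))        # paired pre vs post, within F2F
      print(stats.mannwhitneyu(f2f_pre, online_pre))  # F2F vs online at pre-test
      print(round(cohens_d(online_pre, f2f_pre), 2))  # pre-test effect size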
  8. Torke S, Upadhya S, Abraham RR, Ramnarayan K
    Adv Physiol Educ, 2006 Mar;30(1):48-9.
    PMID: 16481610
    Matched MeSH terms: Educational Measurement/methods*
  9. Rao M
    Adv Physiol Educ, 2006 Jun;30(2):95.
    PMID: 16709743
    Matched MeSH terms: Educational Measurement/methods*
  10. Puthiaparampil T, Rahman MM
    BMC Med Educ, 2020 May 06;20(1):141.
    PMID: 32375739 DOI: 10.1186/s12909-020-02057-w
    BACKGROUND: Multiple choice questions, used in medical school assessments for decades, have many drawbacks: they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and give no information about the strengths and weaknesses of candidates. Directly asked, directly answered questions such as Very Short Answer Questions (VSAQ) are considered a better alternative with several advantages.

    OBJECTIVES: This study aims to compare student performance in MCQ and VSAQ tests and to obtain feedback from the stakeholders.

    METHODS: We conducted multiple true-false, one-best-answer, and VSAQ tests in two batches of medical students, compared their scores and the psychometric indices of the tests, and sought the opinions of students and academics regarding these assessment methods.

    RESULTS: Multiple true-false and one-best-answer test scores were skewed and showed low psychometric performance, whereas VSAQ tests showed better psychometrics and more balanced student performance. Stakeholder opinion was significantly in favour of VSAQ.

    CONCLUSION AND RECOMMENDATION: This study concludes that VSAQ is a viable alternative to multiple-choice question tests, and it is widely accepted by medical students and academics in the medical faculty.

    Matched MeSH terms: Educational Measurement/methods*
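    One example of the psychometric indices such a comparison typically rests on is internal-consistency reliability; the sketch below computes Cronbach's alpha (equivalent to KR-20 for dichotomous items) on a hypothetical student-by-item score matrix. The abstract does not specify which indices the study used.

      # Cronbach's alpha on a students x items matrix of 0/1 marks.
      import numpy as np

      def cronbach_alpha(scores):
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
          total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
          return (k / (k - 1)) * (1 - item_var / total_var)

      rng = np.random.default_rng(1)
      ability = rng.normal(0, 1, (50, 1))              # latent ability per student
      demo = (rng.random((50, 20)) < 1 / (1 + np.exp(-ability))).astype(int)
      print(round(cronbach_alpha(demo), 2))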
  11. Chan SW, Ismail Z, Sumintono B
    PLoS One, 2016;11(11):e0163846.
    PMID: 27812091 DOI: 10.1371/journal.pone.0163846
    Based on a synthesis of the literature, earlier studies, and analyses and observations of high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The levels comprised idiosyncratic, verbal, transitional, procedural and integrated process reasoning; the constructs were describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework offered a complete and coherent account of statistical reasoning. A statistical reasoning assessment tool was then constructed from the initial framework. The tool was administered to ten tenth-grade students in a task-based interview, after which the framework was refined and the tool revised. The ten students then participated in a second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, confirming the framework's cohesion. Developed to contribute to statistics education, this framework provides a guide for planning learning goals and designing instruction and assessments.
    Matched MeSH terms: Educational Measurement/methods*
  12. Tan K, Chong MC, Subramaniam P, Wong LP
    Nurse Educ Today, 2018 May;64:180-189.
    PMID: 29500999 DOI: 10.1016/j.nedt.2017.12.030
    BACKGROUND: Outcome Based Education (OBE) is a student-centered approach to curriculum design and teaching that emphasizes what learners should know, understand and be able to demonstrate, and how they should adapt to life beyond formal education. However, no systematic review has explored the effectiveness of OBE in improving the competencies of nursing students.

    OBJECTIVE: To appraise and synthesize the best available evidence that examines the effectiveness of OBE approaches towards the competencies of nursing students.

    DESIGN: A systematic review of interventional experimental studies.

    DATA SOURCES: Eight online databases namely CINAHL, EBSCO, Science Direct, ProQuest, Web of Science, PubMed, EMBASE and SCOPUS were searched.

    REVIEW METHODS: Relevant studies were identified using combined approaches of electronic database search without geographical or language filters but were limited to articles published from 2006 to 2016, handsearching journals and visually scanning references from retrieved studies. Two reviewers independently conducted the quality appraisal of selected studies and data were extracted.

    RESULTS: Six interventional studies met the inclusion criteria. Two of the studies were rated as high methodological quality and four as moderate. Studies were published between 2009 and 2016 and were mostly from Asian and Middle Eastern countries. Results showed that OBE approaches improved competency in knowledge acquisition, in terms of higher final course grades and cognitive skills; improved clinical skills and nursing core competencies; and yielded higher behavioural skills scores during the performance of clinical skills. Learners' satisfaction was also encouraging, as reported in one of the studies. Only one study reported a negative effect.

    CONCLUSIONS: Although OBE approaches show encouraging effects on the competencies of nursing students, more robust experimental designs with larger sample sizes, evaluating other outcome measures such as other areas of competency, student satisfaction and patient outcomes, are needed.

    Matched MeSH terms: Educational Measurement/methods*
  13. Sim SM, Rasiah RI
    Ann Acad Med Singap, 2006 Feb;35(2):67-71.
    PMID: 16565756
    INTRODUCTION: This paper reports the relationship between the difficulty level and the discrimination power of true/false-type multiple-choice questions (MCQs) in a multidisciplinary paper for the para-clinical year of an undergraduate medical programme.

    MATERIALS AND METHODS: MCQ items in papers taken from Year II Parts A, B and C examinations for Sessions 2001/02, and Part B examinations for 2002/03 and 2003/04, were analysed to obtain their difficulty indices and discrimination indices. Each paper consisted of 250 true/false items (50 questions of 5 items each) on topics drawn from different disciplines. The questions were first constructed and vetted by the individual departments before being submitted to a central committee, where the final selection of the MCQs was made, based purely on the academic judgement of the committee.

    RESULTS: There was a wide distribution of item difficulty indices in all the MCQ papers analysed. Furthermore, the relationship between the difficulty index (P) and discrimination index (D) of the MCQ items in a paper was not linear, but more dome-shaped. Maximal discrimination (D = 51% to 71%) occurred with moderately easy/difficult items (P = 40% to 74%). On average, about 38% of the MCQ items in each paper were "very easy" (P ≥ 75%), while about 9% were "very difficult" (P < 25%). About two-thirds of these very easy/difficult items had "very poor" or even negative discrimination (D ≤ 20%).

    CONCLUSIONS: MCQ items that demonstrate good discriminating potential tend to be moderately difficult items, and the moderately-to-very difficult items are more likely to show negative discrimination. There is a need to evaluate the effectiveness of our MCQ items.

    Matched MeSH terms: Educational Measurement/methods*
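    A minimal sketch of the two item statistics analysed above: the difficulty index P (percentage answering correctly) and an upper-lower discrimination index D (difference in P between top and bottom scoring groups). The grouping fraction and the data are assumptions, not taken from the paper.

      # Difficulty (P) and discrimination (D) indices per item.
      import numpy as np

      def item_indices(scores, frac=0.27):
          # scores: students x items matrix of 0/1 marks.
          scores = np.asarray(scores, dtype=float)
          order = np.argsort(scores.sum(axis=1))        # rank students by total
          n = max(1, int(frac * len(order)))
          lower, upper = scores[order[:n]], scores[order[-n:]]
          P = 100 * scores.mean(axis=0)                        # % correct per item
          D = 100 * (upper.mean(axis=0) - lower.mean(axis=0))  # upper minus lower
          return P, D

      rng = np.random.default_rng(2)
      demo = (rng.random((120, 10)) < 0.6).astype(int)  # hypothetical marks
      P, D = item_indices(demo)
      print(np.round(P, 1), np.round(D, 1))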
  14. Barman A
    Ann Acad Med Singap, 2008 Nov;37(11):957-63.
    PMID: 19082204
    INTRODUCTION: Setting, maintaining and periodically re-evaluating assessment standards are important issues in medical education. Cut-off scores are often "pulled from the air" or set at an arbitrary percentage. A large number of methods/procedures for setting a standard or cut score are described in the literature, and there is a high degree of uncertainty in the performance standards they produce: standards set using the existing methods reflect the subjective judgement of the standard setters. This review does not describe the existing standard-setting methods/procedures but instead discusses the validity, reliability, feasibility and legal issues relating to standard setting.

    MATERIALS AND METHODS: This review is on some of the issues in standard setting based on the published articles of educational assessment researchers.

    RESULTS: The standard or cut-off score should determine whether the examinee has attained the requirements to be certified as competent. There is no perfect method for determining the cut score on a test, and none is agreed upon as the best; standard setting is not an exact science. The legitimacy of a standard is supported when the performance standard is linked to the requirements of practice. Test-curriculum alignment and content validity are important for most educational test validity arguments.

    CONCLUSION: A representative percentage of the must-know learning objectives in the curriculum may form the basis of test items and pass/fail marks. Practice analysis may help in identifying the must-know areas of the curriculum. A cut score set by this procedure may lend credibility, validity, defensibility and comparability to the standard. Having test items constructed by subject experts and vetted by multidisciplinary faculty members may ensure the reliability of the test as well as of the standard.

    Matched MeSH terms: Educational Measurement/methods*
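    For concreteness, below is a sketch of one common judgmental standard-setting procedure, the Angoff method; the review above discusses standard setting in general and does not prescribe this (or any) method. Each judge estimates, per item, the probability that a borderline candidate answers correctly; the cut score is the mean across judges of the summed estimates.

      # Angoff-style cut score from hypothetical judge ratings.
      import numpy as np

      # 3 judges x 5 items: estimated probability that a borderline
      # examinee answers each item correctly (all values hypothetical).
      ratings = np.array([[0.6, 0.7, 0.4, 0.8, 0.5],
                          [0.5, 0.6, 0.5, 0.9, 0.4],
                          [0.7, 0.7, 0.3, 0.8, 0.6]])
      cut = ratings.sum(axis=1).mean()   # expected items correct, averaged over judges
      print(f"cut score: {cut:.2f} / 5 items ({100 * cut / 5:.0f}%)")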
  15. Yusoff MS
    Med Educ, 2012 Nov;46(11):1122.
    PMID: 23078712 DOI: 10.1111/medu.12057
    Matched MeSH terms: Educational Measurement/methods*
  16. Lai NM
    Med Educ, 2009 May;43(5):479-80.
    PMID: 19344346 DOI: 10.1111/j.1365-2923.2009.03320.x
    Matched MeSH terms: Educational Measurement/methods*
  17. Tan CP, Rokiah P
    Med J Malaysia, 2005 Aug;60 Suppl D:48-53.
    PMID: 16315624
    Formative and summative student assessment has always been of concern to medical teachers, and this is especially important at the level of graduating doctors. The effectiveness and comprehensiveness of the clinical training provided are tested with the use of clinical cases, either with real patients who have genuine medical conditions, or with standardised patients who are trained to accurately simulate real patients. The Objective Structured Clinical Examination (OSCE) is one method of assessing the adequacy of medical students' clinical skills and their level of competence. It can be used to test a variety of skills, such as history taking (communication and interpersonal skills), performing aspects of the physical examination, undertaking emergency procedures, and interpreting investigational data. It can also be used to ensure adequate depth and breadth of coverage of the clinical skills expected of a graduating doctor.
    Matched MeSH terms: Educational Measurement/methods*
  18. Chan MY
    Med Educ Online, 2015;20:28565.
    PMID: 26194482 DOI: 10.3402/meo.v20.28565
    The oral case presentation is an important communicative activity in the teaching and assessment of students. Despite its importance, little attention has been paid to supporting teachers in teaching this difficult task to medical students, who are novices to this form of communication. As a formalized piece of talk that takes a regularized form and is used for a specific communicative goal, the case presentation is a rhetorical activity, and awareness of its rhetorical and linguistic characteristics should be given due consideration in teaching. This paper reviews the practitioner literature and the limited research literature on medical educators' expectations of what makes a good case presentation, and explains the rhetorical aspects of the activity. It finds that there is currently no comprehensive model of the case presentation that captures the rhetorical and linguistic skills needed to produce and deliver a good presentation; attempts to describe the structure of the case presentation have used predominantly opinion-based methodologies. In this paper, I argue for a performance-based model that would not only allow a description of the rhetorical structure of the oral case presentation, but also enable a systematic examination of the tacit genre knowledge that differentiates the expert from the novice. Such a model would be a useful resource for medical educators in providing more structured feedback and teaching support to medical students learning this important genre.
    Matched MeSH terms: Educational Measurement/methods*
  19. Ismail MA, Ahmad A, Mohammad JA, Fakri NMRM, Nor MZM, Pa MNM
    BMC Med Educ, 2019 Jun 25;19(1):230.
    PMID: 31238926 DOI: 10.1186/s12909-019-1658-z
    BACKGROUND: Gamification is an increasingly common phenomenon in education. It is a technique to facilitate formative assessment and to promote student learning. It has been shown to be more effective than traditional methods. This phenomenological study was conducted to explore the advantages of gamification through the use of the Kahoot! platform for formative assessment in medical education.

    METHODS: This study employed a phenomenological design. Five focus groups were conducted with medical students who had participated in several Kahoot! sessions.

    RESULTS: Thirty-six categories and nine sub-themes emerged from the focus group discussions. They were grouped into three themes: attractive learning tool, learning guidance and source of motivation.

    CONCLUSIONS: The results suggest that Kahoot! sessions motivate students to study, to determine the subject matter that needs to be studied and to be aware of what they have learned. Thus, the platform is a promising tool for formative assessment in medical education.

    Matched MeSH terms: Educational Measurement/methods*