Displaying publications 1 - 20 of 48 in total

  1. Guilding C, Pye RE, Butler S, Atkinson M, Field E
    Pharmacol Res Perspect, 2021 Aug;9(4):e00833.
    PMID: 34309243 DOI: 10.1002/prp2.833
    Multiple choice questions (MCQs) are a common form of assessment in medical schools and students seek opportunities to engage with formative assessment that reflects their summative exams. Formative assessment with feedback and active learning strategies improve student learning outcomes, but a challenge for educators, particularly those with large class sizes, is how to provide students with such opportunities without overburdening faculty. To address this, we enrolled medical students in the online learning platform PeerWise, which enables students to author and answer MCQs, rate the quality of other students' contributions as well as discuss content. A quasi-experimental mixed methods research design was used to explore PeerWise use and its impact on the learning experience and exam results of fourth year medical students who were studying courses in clinical sciences and pharmacology. Most students chose to engage with PeerWise following its introduction as a noncompulsory learning opportunity. While students perceived benefits in authoring and peer discussion, students engaged most highly with answering questions, noting that this helped them identify gaps in knowledge, test their learning and improve exam technique. Detailed analysis of the 2015 cohort (n = 444) with hierarchical regression models revealed a significant positive predictive relationship between answering PeerWise questions and exam results, even after controlling for previous academic performance, which was further confirmed with a follow-up multi-year analysis (2015-2018, n = 1693). These 4 years of quantitative data corroborated students' belief in the benefit of answering peer-authored questions for learning.
    Matched MeSH terms: Educational Measurement/methods*
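
    A hierarchical regression of the kind described in entry 1 (exam result regressed on PeerWise answering after controlling for prior academic performance) can be sketched as follows. This is an illustrative outline only; the file name and column names (exam_score, prior_gpa, questions_answered) are assumptions, not the authors' data or code.

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        df = pd.read_csv("peerwise_cohort.csv")  # hypothetical data file

        # Step 1: covariate-only model (previous academic performance).
        m1 = smf.ols("exam_score ~ prior_gpa", data=df).fit()

        # Step 2: add the predictor of interest (PeerWise questions answered).
        m2 = smf.ols("exam_score ~ prior_gpa + questions_answered", data=df).fit()

        # Incremental contribution of answering activity: R-squared change and
        # nested-model F-test.
        print(f"R-squared change: {m2.rsquared - m1.rsquared:.3f}")
        print(anova_lm(m1, m2))
        print(m2.params["questions_answered"], m2.pvalues["questions_answered"])
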
  2. Elnaem MH, Akkawi ME, Nazar NIM, Ab Rahman NS, Mohamed MHN
    J Educ Eval Health Prof, 2021;18:6.
    PMID: 33910270 DOI: 10.3352/jeehp.2021.18.6
    PURPOSE: This study investigated pharmacy students' perceptions of various aspects of virtual objective structured clinical examinations (vOSCEs) conducted during the coronavirus disease 2019 pandemic in Malaysia.

    METHODS: This cross-sectional study involved third- and fourth-year pharmacy students at the International Islamic University Malaysia. A validated self-administered questionnaire was distributed to students who had taken a vOSCE a week before.

    RESULTS: Out of the 253 students who were approached, 231 (91.3%) completed the questionnaire. More than 75% of the participants agreed that the instructions and preparations were clear and helpful in familiarizing them with the vOSCE flow. It was found that 53.2% of the respondents were satisfied with the flow and conduct of the vOSCE. However, only approximately one-third of the respondents believed that the tasks provided in the vOSCE were more convenient, less stressful, and easier to perform than those in the conventional OSCE. Furthermore, 49.7% of the students favored not having a vOSCE in the future when conducting a conventional OSCE becomes feasible again. Internet connection was reported as a problem hindering the performance of the vOSCE by 51.9% of the participants. Students who were interested in clinical pharmacy courses were more satisfied than other students with the preparation and operation of the vOSCE, the faculty support, and the allocated time.

    CONCLUSION: Students were satisfied with the organization and operation of the vOSCE. However, they still preferred the conventional OSCE over the vOSCE. These findings might indicate a further need to expose students to telehealthcare models.

    Matched MeSH terms: Educational Measurement/methods*
  3. Puthiaparampil T, Rahman MM
    BMC Med Educ, 2020 May 06;20(1):141.
    PMID: 32375739 DOI: 10.1186/s12909-020-02057-w
    BACKGROUND: Multiple choice questions, used in medical school assessments for decades, have many drawbacks: they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and give no information about the strengths and weaknesses of candidates. Directly asked, directly answered questions such as Very Short Answer Questions (VSAQ) are considered a better alternative with several advantages.

    OBJECTIVES: This study aims to compare student performance in MCQ and VSAQ tests and to obtain feedback from the stakeholders.

    METHODS: Multiple true-false, one-best-answer, and VSAQ tests were conducted in two batches of medical students; their scores and the psychometric indices of the tests were compared, and opinions on these assessment methods were sought from students and academics.

    RESULTS: Multiple true-false and one-best-answer test scores were skewed and showed poor psychometric performance, whereas the VSAQ tests showed better psychometrics and a more balanced distribution of student performance. The stakeholders' opinions were significantly in favour of VSAQ.

    CONCLUSION AND RECOMMENDATION: This study concludes that VSAQ is a viable alternative to multiple-choice question tests, and it is widely accepted by medical students and academics in the medical faculty.

    Matched MeSH terms: Educational Measurement/methods*
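
    The psychometric indices compared across the test formats in entry 3 typically include classical item difficulty and discrimination. A minimal sketch of how such indices can be computed from a scored 0/1 response matrix is shown below; the data are randomly generated purely to illustrate usage.

        import numpy as np

        def item_difficulty(responses):
            # Proportion of students answering each item correctly (the item p-value).
            return responses.mean(axis=0)

        def discrimination_index(responses, frac=0.27):
            # Difference in item p-values between the top and bottom scoring groups
            # (upper/lower 27% by total score).
            totals = responses.sum(axis=1)
            order = np.argsort(totals)
            n = max(1, int(frac * responses.shape[0]))
            lower, upper = responses[order[:n]], responses[order[-n:]]
            return upper.mean(axis=0) - lower.mean(axis=0)

        rng = np.random.default_rng(1)
        scored = rng.integers(0, 2, size=(120, 40))  # fake matrix: 120 students x 40 items
        print(item_difficulty(scored)[:5])
        print(discrimination_index(scored)[:5])
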
  4. Abubakar U, Muhammad HT, Sulaiman SAS, Ramatillah DL, Amir O
    Curr Pharm Teach Learn, 2020 Mar;12(3):265-273.
    PMID: 32273061 DOI: 10.1016/j.cptl.2019.12.002
    BACKGROUND AND PURPOSE: Training pharmacy students in infectious diseases (ID) is important to enable them to participate in antibiotic stewardship programs. This study evaluated knowledge and self-confidence regarding antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship among final year pharmacy undergraduate students.

    METHODS: A cross-sectional electronic survey was conducted at universities in Indonesia, Malaysia, and Pakistan. A 59-item survey was administered between October 2017 and December 2017.

    FINDINGS: The survey was completed by 211 students (response rate 77.8%). The mean knowledge scores for antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship were 5.6 ± 1.5, 4.7 ± 1.8 (maximum score 10.0 each) and 3.1 ± 1.4 (maximum score 5.0), respectively. Significant variations were noted among the schools. There was poor awareness of the consequences of antibiotic resistance and of cases in which an antibiotic is not needed. Knowledge of antibiotic resistance was higher among male respondents (6.1 vs. 5.4) and among those who had attended antibiotic resistance (5.7 vs. 5.2) and antibiotic therapy (5.8 vs. 4.9) courses (p 

    Matched MeSH terms: Educational Measurement/methods
  5. Refat N, Kassim H, Rahman MA, Razali RB
    PLoS One, 2020;15(8):e0236862.
    PMID: 32857762 DOI: 10.1371/journal.pone.0236862
    Language learning is an emerging research area in which researchers have made significant contributions by incorporating technological assistance (i.e., computer- and mobile-assisted learning). However, recent empirical studies reveal that little attention has been given to grammar learning with properly designed instructional materials and a motivational framework for designing an efficient mobile-assisted grammar learning tool. This paper therefore reports a preliminary study that investigated learner motivation when a mobile-assisted tool for tense learning was used. The study applied the Attention-Relevance-Confidence-Satisfaction (ARCS) model. It was hypothesized that with the use of the designed mobile-assisted tense learning tool students would be motivated to learn grammar (English tenses), and that, with the increase in motivation, performance on a paper-based test would also improve. To investigate the impact of the tool, a sequential mixed-methods research design was employed using three research instruments: the Instructional Materials Motivation Survey (IMMS), a paper-based test and a semi-structured interview protocol. Participants were 115 undergraduate students enrolled in a remedial English course. The findings showed that with the effective design of instructional materials, students were motivated to learn grammar and were positive about improving their attitude towards learning (male 86%, female 80%). The IMMS findings revealed that students' motivation increased after using the tool. Moreover, students improved their performance, as shown by the results of the paper-based instrument. The study thus contributes an effective multimedia-based instructional design for a mobile-assisted tool that increased learners' motivation and, in turn, their learning performance.
    Matched MeSH terms: Educational Measurement/methods*
  6. Nagandla K, Gupta ED, Motilal T, Teng CL, Gangadaran S
    Natl Med J India, 2019 Jul 4;31(5):293-295.
    PMID: 31267998 DOI: 10.4103/0970-258X.261197
    Background: Assessment drives students' learning. It measures the level of students' understanding. We aimed to determine whether performance in continuous assessment can predict failure in the final professional examination results.

    Methods: We retrieved the in-course continuous assessment (ICA) and final professional examination results of 3 cohorts of medical students (n = 245) from the examination unit of the International Medical University, Seremban, Malaysia. The ICA comprised 3 sets of composite marks derived from coursework, which included a summative theory paper with short answer questions and one-best-answer questions. The clinical examination included an end-of-posting practical examination. These examinations are conducted every 6 months in semesters 6, 7 and 8 and are graded as pass/fail for each student. The final professional examination, comprising modified essay questions (MEQs), one 8-question objective structured practical examination (OSPE) and a 16-station objective structured clinical examination (OSCE), was graded as pass/fail. Whether failure in the continuous assessment predicts failure in each component of the final professional examination was tested using the chi-square test and is presented as odds ratios (OR) with 95% confidence intervals (CI).

    Results: Failure in ICA in semesters 6-8 strongly predicts failure in MEQs, OSPE and OSCE of the final professional examination with OR of 3.8-14.3 (all analyses p < 0.001) and OR of 2.4-6.9 (p < 0.05). However, the correlation was stronger with MEQs and OSPE compared to OSCE.

    Conclusion: ICA with theory and clinical examination had a direct relationship with students' performance in the final examination and is a useful assessment tool.

    Matched MeSH terms: Educational Measurement/methods*
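
    Entry 6 tests whether failure in continuous assessment predicts failure in a final-exam component using a chi-square test and reports odds ratios with 95% confidence intervals. A minimal sketch of that calculation from a 2x2 table is given below; the counts are invented for illustration and do not come from the study.

        import numpy as np
        from scipy.stats import chi2_contingency, norm

        #                  final fail  final pass
        table = np.array([[18,         22],     # failed ICA (illustrative counts)
                          [12,        193]])    # passed ICA

        chi2, p, dof, expected = chi2_contingency(table)

        a, b, c, d = table.ravel().astype(float)
        odds_ratio = (a * d) / (b * c)
        se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        z = norm.ppf(0.975)
        ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log_or)

        print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
        print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
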
  7. Ismail MA, Ahmad A, Mohammad JA, Fakri NMRM, Nor MZM, Pa MNM
    BMC Med Educ, 2019 Jun 25;19(1):230.
    PMID: 31238926 DOI: 10.1186/s12909-019-1658-z
    BACKGROUND: Gamification is an increasingly common phenomenon in education. It is a technique to facilitate formative assessment and to promote student learning. It has been shown to be more effective than traditional methods. This phenomenological study was conducted to explore the advantages of gamification through the use of the Kahoot! platform for formative assessment in medical education.

    METHODS: This study employed a phenomenological design. Five focus groups were conducted with medical students who had participated in several Kahoot! sessions.

    RESULTS: Thirty-six categories and nine sub-themes emerged from the focus group discussions. They were grouped into three themes: attractive learning tool, learning guidance and source of motivation.

    CONCLUSIONS: The results suggest that Kahoot! sessions motivate students to study, to determine the subject matter that needs to be studied and to be aware of what they have learned. Thus, the platform is a promising tool for formative assessment in medical education.

    Matched MeSH terms: Educational Measurement/methods*
  8. Prashanti E, Ramnarayan K
    Adv Physiol Educ, 2019 Jun 01;43(2):99-102.
    PMID: 30835147 DOI: 10.1152/advan.00173.2018
    In an era that is seemingly saturated with standardized tests of all hues and stripes, it is easy to forget that assessments not only measure the performance of students, but also consolidate and enhance their learning. Assessment for learning is best elucidated as a process by which the assessment information can be used by teachers to modify their teaching strategies while students adjust and alter their learning approaches. Effectively implemented, formative assessments can convert classroom culture to one that resonates with the triumph of learning. In this paper, we present 10 maxims that show ways that formative assessments can be better understood, appreciated, and implemented.
    Matched MeSH terms: Educational Measurement/methods*
  9. Goh CF, Ong ET
    Curr Pharm Teach Learn, 2019 Jun;11(6):621-629.
    PMID: 31213319 DOI: 10.1016/j.cptl.2019.02.025
    BACKGROUND AND PURPOSE: The flipped classroom has not been fully exploited to improve tertiary education in Malaysia. A transformation in pharmacy education using flipped classrooms will be pivotal to resolve poor academic performance in certain courses. This study aimed to investigate the effectiveness of the flipped classroom in improving student learning and academic performance in a course with a historically low pass rate.

    EDUCATIONAL ACTIVITY AND SETTING: A quasi-experimental pre- and posttest control group design was employed. The experimental group experienced the flipped classroom for selected topics while the control group learned in a traditional classroom. Analysis of covariance was utilized to compare the performance on the final exam using the grade point of a pre-requisite course as the covariate. Students' perceptions of their experience in the flipped classroom were gauged through a web-based survey.

    FINDINGS: Student performance on the final exam was significantly higher in the flipped classroom group. The lowest-scoring students benefitted the most in terms of academic performance. More than two-thirds of students responded positively to the use of the flipped classroom and felt more confident while participating in classes and tests.

    SUMMARY: The flipped classroom is academically beneficial in a challenging course with a historically low pass rate; it was also effective in stimulating learning interest. The current study identified that for the flipped classroom to be successful, the role of educators, the feasibility of the approach, and the acceptance of students were important.

    Matched MeSH terms: Educational Measurement/methods
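
    The analysis of covariance described in entry 9 (final-exam performance compared between the flipped and traditional groups, with the grade point of a pre-requisite course as the covariate) can be sketched as below. The file and column names are assumptions for illustration, not the authors' code.

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        # Hypothetical columns: group ("flipped"/"traditional"), prereq_gp, final_exam.
        df = pd.read_csv("flipped_classroom.csv")

        # One-way ANCOVA: group effect on the final exam, adjusted for the covariate.
        model = smf.ols("final_exam ~ C(group) + prereq_gp", data=df).fit()
        print(anova_lm(model, typ=2))  # Type II table: adjusted group effect
        print(model.params)            # adjusted group difference and covariate slope
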
  10. Venkataramani P, Sadanandan T, Savanna RS, Sugathan S
    Med Educ, 2019 May;53(5):499-500.
    PMID: 30891812 DOI: 10.1111/medu.13860
    Matched MeSH terms: Educational Measurement/methods*
  11. Tan K, Chong MC, Subramaniam P, Wong LP
    Nurse Educ Today, 2018 May;64:180-189.
    PMID: 29500999 DOI: 10.1016/j.nedt.2017.12.030
    BACKGROUND: Outcome Based Education (OBE) is a student-centered approach to curriculum design and teaching that emphasizes what learners should know, understand and demonstrate, and how they should adapt to life beyond formal education. However, no systematic review has explored the effectiveness of OBE in improving the competencies of nursing students.

    OBJECTIVE: To appraise and synthesize the best available evidence that examines the effectiveness of OBE approaches towards the competencies of nursing students.

    DESIGN: A systematic review of interventional experimental studies.

    DATA SOURCES: Eight online databases namely CINAHL, EBSCO, Science Direct, ProQuest, Web of Science, PubMed, EMBASE and SCOPUS were searched.

    REVIEW METHODS: Relevant studies were identified using combined approaches of electronic database search without geographical or language filters but were limited to articles published from 2006 to 2016, handsearching journals and visually scanning references from retrieved studies. Two reviewers independently conducted the quality appraisal of selected studies and data were extracted.

    RESULTS: Six interventional studies met the inclusion criteria. Two of the studies were rated as high methodological quality and four were rated as moderate. The studies were published between 2009 and 2016 and were mostly from Asian and Middle Eastern countries. Results showed that OBE approaches improve competency in knowledge acquisition, in terms of higher final course grades and cognitive skills, improve clinical skills and nursing core competencies, and yield higher behavioural skills scores during the performance of clinical skills. Learners' satisfaction was also encouraging, as reported in one of the studies. Only one study reported a negative effect.

    CONCLUSIONS: Although OBE approaches do show encouraging effects on the competencies of nursing students, more robust experimental designs with larger sample sizes, evaluating other outcome measures such as other areas of competency, student satisfaction and patient outcomes, are needed.

    Matched MeSH terms: Educational Measurement/methods*
  12. Hadie SNH, Hassan A, Ismail ZIM, Asari MA, Khan AA, Kasim F, et al.
    Anat Sci Educ, 2017 Sep;10(5):423-432.
    PMID: 28135037 DOI: 10.1002/ase.1683
    Students' perceptions of the education environment influence their learning. Ever since the major medical curriculum reform, anatomy education has undergone several changes in terms of its curriculum, teaching modalities, learning resources, and assessment methods. By measuring students' perceptions of the anatomy education environment, valuable information can be obtained to facilitate improvements in teaching and learning. Hence, it is important to use a valid inventory that specifically measures attributes of the anatomy education environment. In this study, a new 11-factor, 132-item Anatomy Education Environment Measurement Inventory (AEEMI) was developed using the Delphi technique and was validated in a Malaysian public medical school. The inventory was found to have satisfactory content evidence (scale-level content validity index [total] = 0.646); good response process evidence (scale-level face validity index [total] = 0.867); and acceptable to high internal consistency, with Raykov composite reliability estimates for the six factors ranging from 0.604 to 0.876. The best-fit model of the AEEMI was achieved with six domains and 25 items (χ² = 415.67, P 
    Matched MeSH terms: Educational Measurement/methods*
  13. Khairani AZ, Ahmad NS, Khairani MZ
    J Appl Meas, 2017;18(4):449-458.
    PMID: 29252212
    Adolescence is an important transitional phase in human development during which adolescents experience physiological as well as psychological changes. Nevertheless, these changes are often not well understood by teachers, parents, and even the adolescents themselves. Thus, conflicts arise, and adolescents are affected by these conflicts both physically and emotionally. An important emotional state resulting from such conflict is anger. This article describes the development and validation of the 34-item Adolescent Anger Inventory (AAI) to measure types of anger among Malaysian adolescents. A sample of 2,834 adolescents in secondary school provided responses that were analyzed using the Rasch measurement model framework. The four response categories worked satisfactorily for the scale developed. A total of 11 items did not fit the model's expectations and were thus dropped from the final scale. The scale also demonstrated satisfactory reliability and separation evidence. In addition, items in the AAI showed no evidence of differential item functioning (DIF) between 14- and 16-year-old adolescents. Nevertheless, the AAI did not have sufficient items to target adolescents with a high level of physically aggressive anger.
    Matched MeSH terms: Educational Measurement/methods*
  14. Ramoo V, Abdullah KL, Tan PS, Wong LP, Chua PY
    Nurs Crit Care, 2016 Sep;21(5):287-94.
    PMID: 25271143 DOI: 10.1111/nicc.12105
    BACKGROUND: Sedation management is an integral component of critical care practice. It requires the greatest attention of critical care practitioners because it carries significant risks to patients. Therefore, it is imperative that nurses are aware of potential adverse consequences of sedation therapy and current sedation practice recommendations.

    AIMS AND OBJECTIVES: To evaluate the impact of an educational intervention on nurses' knowledge of sedation assessment and management.

    DESIGNS AND METHODS: A quasi-experimental design with a pre- and post-test method was used. The educational intervention included theoretical sessions on assessing and managing sedation and hands-on sedation assessment practice using the Richmond Agitation-Sedation Scale. Its effect was measured using a self-administered questionnaire, completed at baseline and 3 months after the intervention.

    RESULTS: Participants were 68 registered nurses from an intensive care unit of a teaching hospital in Malaysia. A significant increase in the overall mean knowledge score was observed from the pre- to the post-intervention phase (mean of 79.00 versus 102.00, p < 0.001). Nurses with fewer than 5 years of work experience, those younger than 26 years, and those with only a basic nursing education had significantly greater knowledge improvement at the post-intervention phase compared with their colleagues, with mean differences of 24.64 (p = 0.001), 23.81 (p = 0.027) and 27.25 (p = 0.0001), respectively. A repeated-measures analysis of variance revealed a statistically significant effect of the educational intervention on knowledge score after controlling for age, years of work and level of nursing education (p = 0.0001, ηp² = 0.431).

    CONCLUSION: An educational intervention consisting of theoretical sessions and hands-on sedation assessment practice was found effective in improving nurses' knowledge and understanding of sedation management.

    RELEVANCE TO CLINICAL PRACTICE: This study highlighted the importance of continuing education to increase nurses' understanding of intensive care practices, which is vital for improving the quality of patient care.

    Matched MeSH terms: Educational Measurement/methods
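
    Entry 14 analyses pre/post knowledge scores with a repeated-measures ANOVA after controlling for age, years of work and education level. One reasonable stand-in for that analysis, not the authors' own code, is a mixed model with a random intercept per nurse, sketched below with illustrative file and variable names.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Long format: one row per nurse per phase ("pre" or "post"); hypothetical file.
        df = pd.read_csv("sedation_knowledge_long.csv")

        model = smf.mixedlm(
            "knowledge ~ C(phase) + age + years_experience + C(education_level)",
            data=df,
            groups=df["nurse_id"],
        ).fit()
        print(model.summary())  # the C(phase) term estimates the intervention effect
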
  15. Chan SW, Ismail Z, Sumintono B
    PLoS One, 2016;11(11):e0163846.
    PMID: 27812091 DOI: 10.1371/journal.pone.0163846
    Based on a synthesis of literature, earlier studies, analyses and observations of high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter included describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in a second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.
    Matched MeSH terms: Educational Measurement/methods*
  16. Sim JH, Tong WT, Hong WH, Vadivelu J, Hassan H
    Med Educ Online, 2015;20:28612.
    PMID: 26511792 DOI: 10.3402/meo.v20.28612
    INTRODUCTION: Assessment environment, synonymous with climate or atmosphere, is multifaceted. Although there are valid and reliable instruments for measuring the educational environment, there is no validated instrument for measuring the assessment environment in medical programs. This study aimed to develop an instrument for measuring students' perceptions of the assessment environment in an undergraduate medical program and to examine the psychometric properties of the new instrument.
    METHOD: The Assessment Environment Questionnaire (AEQ), a 40-item, four-point (1=Strongly Disagree to 4=Strongly Agree) Likert scale instrument designed by the authors, was administered to medical undergraduates from the authors' institution. The response rate was 626/794 (78.84%). To establish construct validity, exploratory factor analysis (EFA) with principal component analysis and varimax rotation was conducted. To examine the internal consistency reliability of the instrument, Cronbach's α was computed. Mean scores for the entire AEQ and for each factor/subscale were calculated. Mean AEQ scores of students from different academic years and sex were examined.
    RESULTS: Six hundred and eleven completed questionnaires were analysed. EFA extracted four factors: feedback mechanism (seven items), learning and performance (five items), information on assessment (five items), and assessment system/procedure (three items), which together explained 56.72% of the variance. Based on the four extracted factors/subscales, the AEQ was reduced to 20 items. Cronbach's α for the 20-item AEQ was 0.89, whereas Cronbach's α for the four factors/subscales ranged from 0.71 to 0.87. Mean score for the AEQ was 2.68/4.00. The factor/subscale of 'feedback mechanism' recorded the lowest mean (2.39/4.00), whereas the factor/subscale of 'assessment system/procedure' scored the highest mean (2.92/4.00). Significant differences were found among the AEQ scores of students from different academic years.
    CONCLUSIONS: The AEQ is a valid and reliable instrument. Initial validation supports its use to measure students' perceptions of the assessment environment in an undergraduate medical program.
    KEYWORDS: assessment environment; development; instrument; psychometric properties; validation
    Matched MeSH terms: Educational Measurement/methods*
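
    Entry 16 reports Cronbach's alpha for the AEQ and its subscales. A minimal sketch of the alpha calculation from a matrix of Likert item responses is shown below; the random data are included only to demonstrate usage.

        import numpy as np
        import pandas as pd

        def cronbach_alpha(items):
            # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(0)
        fake = pd.DataFrame(rng.integers(1, 5, size=(100, 7)),
                            columns=[f"item{i}" for i in range(1, 8)])  # 4-point Likert items
        print(f"alpha = {cronbach_alpha(fake):.2f}")
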
  17. Chan MY
    Med Educ Online, 2015;20:28565.
    PMID: 26194482 DOI: 10.3402/meo.v20.28565
    The oral case presentation is an important communicative activity in the teaching and assessment of students. Despite its importance, not much attention has been paid to supporting teachers in teaching this difficult task to medical students, who are novices to this form of communication. As a formalized piece of talk that takes a regularized form and is used for a specific communicative goal, the case presentation is regarded as a rhetorical activity, and awareness of its rhetorical and linguistic characteristics should be given due consideration in teaching. This paper reviews the practitioner literature and the limited research literature relating to medical educators' expectations of what makes a good case presentation, and explains the rhetorical aspect of the activity. It was found that there is currently no comprehensive model of the case presentation that captures the rhetorical and linguistic skills needed to produce and deliver a good presentation. Attempts to describe the structure of the case presentation have used predominantly opinion-based methodologies. In this paper, I argue for a performance-based model that would not only allow a description of the rhetorical structure of the oral case presentation but also enable a systematic examination of the tacit genre knowledge that differentiates the expert from the novice. Such a model will be a useful resource for medical educators in providing more structured feedback and teaching support to medical students learning this important genre.
    Matched MeSH terms: Educational Measurement/methods*
  18. Sim JH, Abdul Aziz YF, Mansor A, Vijayananthan A, Foong CC, Vadivelu J
    Med Educ Online, 2015;20:26185.
    PMID: 25697602 DOI: 10.3402/meo.v20.26185
    INTRODUCTION: The purpose of this study was to compare students' performance in the different clinical skills (CSs) assessed in the objective structured clinical examination.

    METHODS: Data for this study were obtained from final year medical students' exit examination (n=185). Retrospective analysis of data was conducted using SPSS. Means for the six CSs assessed across the 16 stations were computed and compared.

    RESULTS: Means for history taking, physical examination, communication skills, clinical reasoning skills (CRSs), procedural skills (PSs), and professionalism were 6.25±1.29, 6.39±1.36, 6.34±0.98, 5.86±0.99, 6.59±1.08, and 6.28±1.02, respectively. Repeated measures ANOVA showed there was a significant difference in the means of the six CSs assessed [F(2.980, 548.332)=20.253, p<0.001]. Pairwise multiple comparisons revealed significant differences between the means of the eight pairs of CSs assessed, at p<0.05.

    CONCLUSIONS: CRSs appeared to be the weakest while PSs were the strongest, among the six CSs assessed. Students' unsatisfactory performance in CRS needs to be addressed as CRS is one of the core competencies in medical education and a critical skill to be acquired by medical students before entering the workplace. Despite its challenges, students must learn the skills of clinical reasoning, while clinical teachers should facilitate the clinical reasoning process and guide students' clinical reasoning development.

    Matched MeSH terms: Educational Measurement/methods*
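
    The repeated-measures ANOVA in entry 18 compares students' mean scores across the six clinical skills. A minimal sketch using statsmodels' AnovaRM is given below (no sphericity correction shown); the file and column names are assumptions for illustration.

        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        # Long format: one row per student per skill; columns: student, skill, score.
        long_df = pd.read_csv("osce_skill_scores_long.csv")

        res = AnovaRM(data=long_df, depvar="score", subject="student",
                      within=["skill"]).fit()
        print(res)  # F-statistic for the within-subject factor "skill"
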
  19. Lee Chin K, Ling Yap Y, Leng Lee W, Chang Soh Y
    Am J Pharm Educ, 2014 Oct 15;78(8):153.
    PMID: 25386018 DOI: 10.5688/ajpe788153
    To determine whether human patient simulation (HPS) is superior to case-based learning (CBL) in teaching diabetic ketoacidosis (DKA) and thyroid storm (TS) to pharmacy students.
    Matched MeSH terms: Educational Measurement/methods
  20. Liew SC, Dutta S, Sidhu JK, De-Alwis R, Chen N, Sow CF, et al.
    Med Teach, 2014 Jul;36(7):626-31.
    PMID: 24787534 DOI: 10.3109/0142159X.2014.899689
    The complexity of modern medicine creates more challenges for the teaching and assessment of communication skills in undergraduate medical programmes. This research was conducted to study the level of communication skills among undergraduate medical students and to determine the difference between simulated patients' and clinical instructors' assessments of communication skills.
    Matched MeSH terms: Educational Measurement/methods*