Methods: We retrieved the in-course continuous assessment (ICA) and final professional examination results of 3 cohorts of medical students (n = 245) from the examination unit of the International Medical University, Seremban, Malaysia. The ICA comprised 3 sets of composite marks derived from coursework, which included a summative theory paper with short-answer and one-best-answer questions. The clinical examination consisted of an end-of-posting practical examination. These examinations are conducted every 6 months in semesters 6, 7 and 8 and are graded as pass/fail for each student. The final professional examination, comprising modified essay questions (MEQs), an 8-question objective structured practical examination (OSPE) and a 16-station objective structured clinical examination (OSCE), was also graded as pass/fail. Whether failure in the continuous assessment predicted failure in each component of the final professional examination was tested using the chi-square test and presented as odds ratios (OR) with 95% confidence intervals (CI).
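As a worked illustration of the analysis described above (not the study's actual code or data), the sketch below runs a chi-square test and computes an odds ratio with a 95% confidence interval from a hypothetical 2x2 table relating ICA failure to failure in one final-examination component.

```python
# Hypothetical sketch: chi-square test and odds ratio with 95% CI for a 2x2 table.
# The counts below are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: failed ICA (yes / no); columns: failed final-exam component (yes / no)
table = np.array([[18, 22],     # failed ICA
                  [14, 191]])   # passed ICA

chi2, p, dof, expected = chi2_contingency(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # SE of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```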
Results: Failure in the ICA in semesters 6-8 strongly predicted failure in the MEQs, OSPE and OSCE of the final professional examination, with ORs of 3.8-14.3 (all analyses p < 0.001) and 2.4-6.9 (p < 0.05). The association was stronger with the MEQs and OSPE than with the OSCE.
Conclusion: The ICA, comprising theory and clinical examinations, was directly related to students' performance in the final professional examination and is a useful assessment tool.
MATERIALS AND METHODS: A survey of the tutors who had used the instrument was conducted to determine whether the assessment instrument or form was user-friendly. The 4 competencies assessed, using a 5-point rating scale, were (1) participation and communication skills, (2) cooperation or team-building skills, (3) comprehension or reasoning skills and (4) knowledge or information-gathering skills. Tutors were given a set of criteria guidelines for scoring the students' performance in these 4 competencies. Tutors were not attached to a particular PBL group, but took turns to facilitate different groups on different case or problem discussions. Assessment scores for one cohort of undergraduate medical students in their respective PBL groups in Year I (2003/2004) and Year II (2004/2005) were analysed. The consistency of scores was analysed using intraclass correlation.
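The intraclass correlation analysis can be sketched as follows. This is a minimal illustration, not the study's code: it assumes a complete students-by-tutors score matrix (the ratings are invented) and uses the Shrout and Fleiss ICC(2,1) form, which may differ from the exact ICC variant used in the study.

```python
# Two-way random-effects intraclass correlation, ICC(2,1), for scores given by
# several tutors (raters) to the same students (targets). Data are made up.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """scores: (n_students, n_raters) matrix of ratings."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                  # between-students mean square
    msc = ss_cols / (k - 1)                  # between-raters mean square
    mse = ss_error / ((n - 1) * (k - 1))     # residual mean square

    # Shrout & Fleiss ICC(2,1): two-way random effects, single rater, absolute agreement
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Example: 6 students rated on a 5-point scale by 3 tutors (invented data)
ratings = np.array([[4, 4, 3],
                    [5, 5, 4],
                    [3, 3, 3],
                    [2, 3, 2],
                    [4, 5, 4],
                    [3, 4, 3]])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```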
RESULTS: The majority of the tutors surveyed expressed no difficulty in using the instrument and agreed that it helped them assess the students fairly. Analysis of the scores obtained for the above cohort indicated that the different raters were relatively consistent in their assessment of student performance, despite a small number of tutors consistently showing either "strict" or "indiscriminate" rating practices.
CONCLUSION: The instrument designed for the assessment of student performance in the PBL tutorial classroom setting is user-friendly and is reliable when used judiciously with the criteria guidelines provided.
METHODS: The development and validation of the academic writing self-assessment toolkit involved several key steps. First, a thorough review of the literature was conducted to identify the essential criteria for authentic assessment. Next, an analysis of medical students' reflection papers was undertaken to gain insights into their experiences using AI-powered tools for writing feedback. Based on these initial steps, a preliminary version of the self-assessment toolkit was devised. An expert focus group discussion was then convened to refine the questions and content of the toolkit. To assess content validity, the toolkit was evaluated by a panel of 22 medical student participants. They were asked to review each item and provide feedback on the relevance and comprehensiveness of the toolkit for evaluating academic writing skills. Face validity was also examined, with the students assessing the clarity, wording, and appropriateness of the toolkit items.
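One common way to summarise a panel's relevance ratings is a content validity index (CVI); the sketch below is a hedged illustration only, not the authors' procedure. It assumes a 4-point relevance scale (the abstract does not state the scale used) and invents ratings from the 22 student reviewers.

```python
# Illustrative computation of item-level and scale-level content validity
# indices (I-CVI and S-CVI/Ave) from panel relevance ratings. Hypothetical data.
import numpy as np

def item_cvi(ratings: np.ndarray) -> np.ndarray:
    """ratings: (n_raters, n_items) relevance ratings on a 1-4 scale."""
    relevant = ratings >= 3          # a rating of 3 or 4 counts as "relevant"
    return relevant.mean(axis=0)     # proportion of raters judging each item relevant

rng = np.random.default_rng(0)
panel_ratings = rng.integers(2, 5, size=(22, 10))   # 22 raters, 10 toolkit items

i_cvi = item_cvi(panel_ratings)
s_cvi_ave = i_cvi.mean()             # scale-level CVI, averaging method
print("I-CVI per item:", np.round(i_cvi, 2))
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```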
RESULTS: The content validity evaluation revealed that 95% of the toolkit items were rated as highly relevant, and 88% were deemed comprehensive in assessing key aspects of academic writing. Minor wording changes were suggested by the students to enhance clarity and interpretability. The face validity assessment found that 92% of the items were rated as unambiguous, with 90% considered appropriate and relevant for self-assessment. Feedback from the students led to the refinement of a few items to improve their clarity in the context of the Persian language. Reliability testing demonstrated that the toolkit measured students' writing skills consistently and stably over time.
CONCLUSION: The comprehensive evaluation process has established the academic writing self-assessment toolkit as a robust and credible instrument for supporting students' writing improvement. The toolkit's strong psychometric properties and user-centered design make it a valuable resource for enhancing academic writing skills in higher education.
METHODS: Four ML algorithms were applied: logistic regression (LR), decision tree (DT), random forest (RF), and support vector machine (SVM). Academic performance in the pre-clinical years was predicted from three input parameters: age at admission, pre-university Cumulative Grade Point Average (CGPA), and total matriculation semesters. The Pearson correlation coefficient (PCC) was used to quantify the correlation between pre-university CGPA and dental school grades.
RESULTS: The models' classification accuracy ranged from 29% to 57%, ranked from highest to lowest as follows: RF, SVM, DT, and LR. Pre-university CGPA was predictive of dental students' academic performance, but on its own it did not yield optimal outcomes. RF was the most precise algorithm for predicting grades A, B, and C, followed by LR, DT, and SVM. In forecasting failure, LR predicted three grades with the highest recall, SVM predicted two grades, and DT predicted one; RF's performance in forecasting failure was negligible.
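A minimal sketch of this modelling pipeline is shown below, assuming a hypothetical data file and column names. It fits the four classifiers on the three stated input parameters and computes the PCC between pre-university CGPA and dental-school grades; it is an illustration under those assumptions, not the study's implementation.

```python
# Hedged sketch: grade-band classification with LR, DT, RF and SVM, plus the
# Pearson correlation between pre-university CGPA and dental-school CGPA.
# The CSV file and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report

df = pd.read_csv("dental_cohort.csv")                       # hypothetical dataset
X = df[["admission_age", "preuni_cgpa", "matric_semesters"]]
y = df["grade_band"]                                         # e.g. A / B / C / fail

# Pearson correlation between pre-university CGPA and dental-school CGPA
r, p = pearsonr(df["preuni_cgpa"], df["dental_cgpa"])
print(f"PCC = {r:.2f} (p = {p:.3f})")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test), zero_division=0))
```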
CONCLUSION: The findings demonstrated the application of ML algorithms and PCC to predict dental students' academic performance. However, it was limited by several factors. Each algorithm has unique performance qualities, and trade-offs between different performance metrics may be necessary. No definitive model stood out as the best algorithm for predicting student academic success in this study.
AIMS AND OBJECTIVES: To evaluate the impact of an educational intervention on nurses' knowledge of sedation assessment and management.
DESIGN AND METHODS: A quasi-experimental design with a pre- and post-test method was used. The educational intervention included theoretical sessions on assessing and managing sedation and hands-on sedation assessment practice using the Richmond Agitation Sedation Scale. Its effect was measured using a self-administered questionnaire completed at baseline and 3 months after the intervention.
RESULTS: Participants were 68 registered nurses from an intensive care unit of a teaching hospital in Malaysia. A significant increase in the overall mean knowledge score was observed from the pre- to the post-intervention phase (mean 79.00 versus 102.00, p < 0.001). Nurses with fewer than 5 years of work experience, those younger than 26 years, and those with only a basic nursing education showed significantly greater knowledge improvement at the post-intervention phase than their colleagues, with mean differences of 24.64 (p = 0.001), 23.81 (p = 0.027) and 27.25 (p = 0.0001), respectively. A repeated-measures analysis of variance revealed a statistically significant effect of the educational intervention on knowledge scores after controlling for age, years of work and level of nursing education (p = 0.0001, partial η² = 0.431).
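The adjusted pre/post comparison can be approximated with a linear mixed model, as sketched below. This is an illustrative stand-in for the repeated-measures analysis reported above, not the authors' code; the long-format data file and its column names are assumptions.

```python
# Hedged sketch: pre/post knowledge comparison with a random intercept per nurse,
# adjusting for age, years of work and nursing education level.
import pandas as pd
import statsmodels.formula.api as smf

# Expected (hypothetical) columns: nurse_id, phase ("pre"/"post"),
# knowledge_score, age_years, work_years, education ("basic"/"post_basic")
data = pd.read_csv("sedation_knowledge_long.csv")    # hypothetical file

model = smf.mixedlm(
    "knowledge_score ~ C(phase, Treatment('pre')) + age_years + work_years + C(education)",
    data=data,
    groups=data["nurse_id"],
)
result = model.fit()
print(result.summary())   # the phase coefficient estimates the pre-to-post change
```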
CONCLUSION: An educational intervention consisting of theoretical sessions and hands-on sedation assessment practice was found effective in improving nurses' knowledge and understanding of sedation management.
RELEVANCE TO CLINICAL PRACTICE: This study highlighted the importance of continuing education to increase nurses' understanding of intensive care practices, which is vital for improving the quality of patient care.
EDUCATIONAL ACTIVITY AND SETTING: A quasi-experimental pre- and posttest control group design was employed. The experimental group experienced the flipped classroom for selected topics while the control group learned in a traditional classroom. Analysis of covariance was utilized to compare the performance on the final exam using the grade point of a pre-requisite course as the covariate. Students' perceptions of their experience in the flipped classroom were gauged through a web-based survey.
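The ANCOVA described above can be sketched as follows, assuming hypothetical column names: the final-exam score is modelled on the classroom group with the grade point of the pre-requisite course as the covariate.

```python
# Minimal ANCOVA sketch (not the study's code): flipped vs. traditional group,
# adjusting for the pre-requisite course grade point. Data file is hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Expected columns: group ("flipped"/"traditional"), final_exam, prereq_gpa
df = pd.read_csv("flipped_classroom.csv")          # hypothetical dataset

model = smf.ols("final_exam ~ C(group) + prereq_gpa", data=df).fit()
print(anova_lm(model, typ=2))                      # Type II ANCOVA table
print(model.params)                                # adjusted group effect
```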
FINDINGS: Student performance on the final exam was significantly higher in the flipped classroom group. The lowest-scoring students benefitted the most in terms of academic performance. More than two-thirds of students responded positively to the use of the flipped classroom and felt more confident while participating in classes and tests.
SUMMARY: The flipped classroom is academically beneficial in a challenging course with a historically low pass rate and was also effective in stimulating learning interest. The current study identified the role of educators, the feasibility of the approach, and student acceptance as important factors for a successful flipped classroom.
METHODS: A cross-sectional electronic survey was conducted at universities in Indonesia, Malaysia, and Pakistan. A 59-item survey was administered between October 2017 and December 2017.
FINDINGS: The survey was completed by 211 students (response rate 77.8%). The mean knowledge scores for antibiotic resistance, appropriate antibiotic therapy, and antibiotic stewardship were 5.6 ± 1.5 and 4.7 ± 1.8 (maximum score 10.0), and 3.1 ± 1.4 (maximum score 5.0), respectively. Significant variations were noted among the schools. There was poor awareness of the consequences of antibiotic resistance and of cases in which an antibiotic is not needed. Knowledge of antibiotic resistance was higher among male respondents (6.1 vs. 5.4) and among those who had attended antibiotic resistance (5.7 vs. 5.2) and antibiotic therapy (5.8 vs. 4.9) courses (p