METHOD: Two large datasets, comprising 1110 3D CT images in total, were each split into five segments of 20%. The first 20% segment of each dataset was held out as a test set, and a 3D-CNN was trained on the remaining 80% of each dataset. Two small external datasets were also used to evaluate the trained models independently.
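The splitting protocol above can be sketched as follows. This is a minimal illustration, not the authors' code: the shuffle, seed, and segment-boundary handling are assumptions, since the abstract does not specify randomization details.

```python
import random

def make_splits(n_images, seed=0):
    """Partition dataset indices into five 20% segments.

    The first segment serves as the holdout test set; the remaining
    four segments (80%) are used for 3D-CNN training. Shuffling with
    a fixed seed is an assumption for reproducibility.
    """
    indices = list(range(n_images))
    random.Random(seed).shuffle(indices)
    seg = n_images // 5
    segments = [indices[i * seg:(i + 1) * seg] for i in range(4)]
    segments.append(indices[4 * seg:])  # last segment absorbs any remainder
    holdout = segments[0]               # first 20% segment -> test set
    train = [i for s in segments[1:] for i in s]  # remaining 80% -> training
    return train, holdout

# Example with the combined image count reported in the abstract:
train, holdout = make_splits(1110)  # 888 training, 222 holdout indices
```

In practice the same split would be computed per dataset rather than on the pooled images, so each dataset contributes its own holdout test set.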
RESULTS: The model trained on the combined 80% portions of both datasets achieved 91% accuracy on the Iranmehr holdout test set and 83% on the Moscow holdout test set, indicating that 80% of the primary datasets is adequate for fully training a model. Additional fine-tuning on 40% of a secondary dataset helped the model generalize to a third, unseen dataset. The highest accuracies achieved through transfer learning, after retraining on 80% of the Iranmehr dataset, were 85% on the LDCT dataset and 83% on the Iranmehr holdout test set.
CONCLUSION: While the combination of both full datasets produced the best results, other combinations and transfer learning still produced generalizable models. Adopting the proposed methodology may help obtain satisfactory results when external datasets are limited.
METHODS: Eighteen students with prior experience in traditional PDPBL participated in the study. They were divided into three groups and performed PDPBL sessions with triggers drawn from pharmaceutical chemistry, pharmaceutics, and clinical pharmacy, using ChatGPT to assist with data searching and problem-solving. Questionnaires collected data on the impact of ChatGPT on students' satisfaction, engagement, participation, and learning experience during the PBL sessions.
RESULTS: The survey revealed that ChatGPT improved group collaboration and engagement during PDPBL, while increasing motivation and encouraging more questions. Nevertheless, some students encountered difficulties understanding ChatGPT's information and questioned its reliability and credibility. Despite these challenges, most students saw ChatGPT's potential to eventually replace traditional information-seeking methods.
CONCLUSIONS: The study suggests that ChatGPT has the potential to enhance PDPBL in pharmacy education. However, further research is needed to examine the validity and reliability of the information provided by ChatGPT and to evaluate its impact in studies with larger sample sizes.
METHODS: A bibliometric review was conducted on the literature spanning January 2000 to May 2023. Articles focusing on any type of OSCE research in pharmacy education, in both the undergraduate and postgraduate sectors, were included; articles were excluded if they were not original articles or not published in English. A summative content analysis was also conducted to identify key topics.
RESULTS: A total of 192 articles were included in the analysis. A total of 242 institutions contributed to the OSCE literature in pharmacy education, with Canada the leading country. Most OSCE research came from developed countries and consisted of descriptive studies based on single-institution data. The top themes emerging from the content analysis were grade assessment of OSCEs (n = 145), student perceptions of OSCE station styles (n = 98), staff perceptions (n = 19), standardized patients (n = 12), interprofessional education (n = 11), and rubric development and standard setting (n = 8).
IMPLICATIONS: There has been a growth in virtual OSCEs, interprofessional OSCEs, and artificial intelligence OSCEs. Communication rubrics and minimizing assessor variability are still trending research areas. There is scope to conduct more research on evaluating specific types of OSCEs, when best to hold an OSCE, and comparing OSCEs to other assessments.